Hackers Are Using OpenAI’s ChatGPT to Launch Cyberattacks

Since its November release, OpenAI’s ChatGPT has had a mind-blowing impact across industries. With the ability to tell jokes, write academic papers, pass higher-education exams, and create phishing emails with malicious payloads, questions arise about society’s ability to manage the chatbot and others like it. 

Some cybersecurity experts predict a coming spike in low-level cyberattacks as a result of the capabilities of ChatGPT and its alternatives. Evidence already confirms claims on hacker forums that ChatGPT can be used to develop malware.

Failed attempts to create phishing emails and malware using ChatGPT suggest OpenAI has tweaked its filters to restrict abusive requests. However, threat actors are working collectively to bypass these blocks. 

Are Low-Level Hackers Using ChatGPT to Bridge the Skill Gap?

Cybercrime isn’t just a line of work you can stumble into; cybercriminals are often highly knowledgeable and possess great skill. It takes time to develop the technical prowess to exploit other users or commonly used software. Now, people are using ChatGPT to write malware and phishing emails with malicious payloads. 

The service is being exploited specifically for low-level attacks by less-skilled hackers. It seems that OpenAI’s algorithm can help aspiring hackers bridge their skill gap, enabling them to launch attacks despite lacking the necessary expertise. 

A report by Check Point confirms three cases of hackers successfully using ChatGPT to create malware. Regarding the ChatGPT-created code, the report stated it can “easily be modified to encrypt someone’s machine completely without any user interaction.” It added the script can potentially be turned into ransomware. 

OpenAI’s chatbot is also capable of writing convincing phishing emails without typos or grammar mistakes. Badly written emails are a hallmark of phishing and signal to people they may be illegitimate. With this tell-tale sign removed, more people will be tricked into believing the emails are legitimate.

Of course, humans can do these things, too. But ChatGPT is far more efficient and removes obstacles for aspiring hackers. If more people knew how to hack, cybercrime would likely increase. And with ChatGPT making cybercrime accessible to people unwilling to put in the effort, we can expect a spike in low-level cyberattacks, so stay on high alert. 

Did OpenAI Fix Things?

I tested ChatGPT to see if it would write malware or phishing emails upon request. In response to direct requests, it gives a run-of-the-mill anti-abuse response. 

Screenshot of ChatGPT's response to a request for a phishing email example
It might refuse criminal requests, but it’s not impervious to hackers’ tricks

However, with slight rewording, it produced highly convincing emails prompting the reader to click on a link. Any hacker could use such outputs as package delivery phishing emails. The outputs are well-written and sound professional, which might disrupt people’s ability to recognize them as malicious. 

Screenshot of ChatGPT's response to a request to write an email template
Sometimes, all it takes to convince someone to do something is to say you’re definitely not up to no good.

While OpenAI hasn’t publicly spoken on the issue, it may be working behind the scenes to restrict outputs and create stumbling blocks for hackers. The question is, how effective will the guardrails be?

Hackers Selling Bypasses to ChatGPT’s Malware-Writing Blocks

According to another Check Point report, cybercriminals are working collectively in underground forums to find bypasses to ChatGPT’s blocks. 

Discussions on one such forum disclose how Telegram bots paired with OpenAI’s GPT-3 API can be used to write malicious code, since the API has fewer anti-abuse restrictions. 

One forum user is now selling the basic bypass script needed to write malware-loaded phishing emails. Not only is it highly accessible, but it’s also extremely cheap. The first 20 queries are free, and from then on, buyers pay a mere $5.50 per 100 queries. 

Check Point's research into hacker forums
The paradox: being helpful in order to cause harm

In an interview with TechCrunch, Check Point’s Sergey Shykevich described ChatGPT as “another step forward in the dangerous evolution of increasingly sophisticated and effective cyber capabilities.”

Pro tip: The best thing you can do to protect yourself from phishing attacks is to know what they look like. While ChatGPT poses a stumbling block for recognizing phishing, you should still be extremely cautious about messages or emails from unknown sources.

Are We Ready for AI Malware?

OpenAI’s ChatGPT broke records by gaining the largest number of users in the shortest timespan. Its public launch marked a significant moment in AI development, and its presence is already changing many industries. 

It’s also disrupting the education sector in a way it couldn’t prepare for. Its ability to write academic papers that seem sound at face value could weaken educational and scientific integrity across the board.

The effect of services like ChatGPT on the cybersecurity landscape is also worrying, as it grants unskilled hackers the ability to launch attacks they otherwise couldn’t. Investigations of underground hacker forums reveal how the wheels are in motion for low-level phishing attacks to rise drastically in number.

While using a VPN can’t protect you against phishing, it can protect you from a wide range of other cyberattacks. So, adding one to your digital security toolkit is a modern necessity. 

CyberGhost VPN uses military-grade encryption and reroutes your internet traffic through secure servers. This makes it difficult for hackers to intercept your traffic and steal your personal data. Get CyberGhost VPN for your cybersecurity toolkit and stay protected from attacks. 
