WormGPT is a Phishing Chatbot — an Evil AI Model?



We have been warned before: experts said that large language models like ChatGPT can be used for nefarious activities. Malicious actors are now sending phishing emails at an incredible scale.

In April this year, a security researcher broke GPT-4's safeguards in just two hours. As soon as the company released its latest version, the hacker started entering prompts designed to bypass OpenAI's safety systems, and soon got the model to produce homophobic statements and phishing emails.

Create Sophisticated Malware

Now, security researchers are warning users that there is a ChatGPT-like AI bot that anyone can use to create sophisticated malware, according to this post.

“WormGPT was allegedly trained on a diverse array of data sources, particularly concentrating on malware-related data. However, the specific datasets utilised during the training process remain confidential, as decided by the tool’s author.” – SlashNext

Phishing chatbots typically use natural language processing (NLP) and machine learning algorithms to interact with users in a conversational manner, making their interactions appear more human-like and convincing. They may mimic the tone, language, and responses of legitimate customer service agents to create a sense of familiarity and credibility.

Cybercriminals are also promoting jailbreaks for ChatGPT: engineered prompts and inputs that manipulate the model into disclosing sensitive information, producing inappropriate content, or generating harmful code.

How evil is the model? It has been trained on malware data and lacks safety guardrails, so anyone using it can simply prompt the system to create Python-based malicious software.

How Legitimate Do the Emails Look?

This evil AI model can write emails with flawless grammar, so you will not suspect that they come from unlawful senders. As a result, the emails are unlikely to be flagged as suspicious.
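To see why, consider how a crude, legacy-style spam heuristic might score a message. The toy Python sketch below (the keyword list and scoring weights are invented purely for illustration; real filters are far more sophisticated) penalises misspellings and excessive punctuation, so a fluently written, AI-generated email slips right past it.

    # Toy illustration only: why grammar-based heuristics struggle
    # against fluent, machine-generated text.
    SUSPICIOUS_WORDS = {"acount", "verifcation", "urgnet", "pasword"}  # common misspellings

    def naive_spam_score(text: str) -> int:
        """Crude score: +2 per misspelled keyword, +1 per exclamation mark."""
        words = {w.strip(".,!?").lower() for w in text.split()}
        score = 2 * len(words & SUSPICIOUS_WORDS)
        score += text.count("!")
        return score

    sloppy = "Urgnet!! Verify your acount pasword now!"
    fluent = "Hello, we noticed unusual activity on your account. Please review it."

    print(naive_spam_score(sloppy))  # high score: flagged
    print(naive_spam_score(fluent))  # 0: sails past the heuristic

The second message scores zero even though it is the kind of text a tool like WormGPT produces effortlessly, which is exactly the gap the researchers are worried about.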

Even amateur hackers can use this technology, making it an accessible tool for every cybercriminal out there.

Google, OpenAI, and other companies behind chatbots like Bard are taking steps to stop the abuse of LLMs. Unfortunately, their efforts alone are not enough to combat these nefarious activities.
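One of those steps is screening prompts before they ever reach a model. As a minimal sketch of what such a guardrail can look like (assuming the official openai Python package with an API key configured; the model name "omni-moderation-latest" is an assumption and may differ), a service could run user input through a moderation endpoint first:

    # Minimal sketch of a provider-side guardrail: screen a prompt with
    # a moderation endpoint before forwarding it to a chat model.
    # Assumes the `openai` package and OPENAI_API_KEY are set up.
    from openai import OpenAI

    client = OpenAI()

    def is_allowed(prompt: str) -> bool:
        """Return False if the moderation endpoint flags the prompt."""
        result = client.moderations.create(
            model="omni-moderation-latest",  # model name is an assumption
            input=prompt,
        )
        return not result.results[0].flagged

    if is_allowed("Write a friendly reminder email to a colleague."):
        print("Prompt passed moderation; safe to forward to the model.")
    else:
        print("Prompt blocked by moderation.")

Tools like WormGPT are dangerous precisely because they skip this layer entirely.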

According to security researchers, WormGPT appears to be a surgically modified open-source AI model. The modified version was introduced across the dark web, where it is now being used to spread disinformation.

What Are Its Consequences?

We don’t yet know the full consequences of this kind of technology. However, AI’s capabilities are already concerning. We have seen how easily it can generate disinformation and misinformation, and false information can shift public opinion and even sway a political campaign. The risks for users, especially those who don’t suspect anything, are endless.

Fighting cybersecurity threats is already an exceptionally challenging and complex task. Cyber threats constantly evolve, and hackers keep finding new attack techniques to bypass existing defenses.

Cybercriminals are highly adaptive and innovative, which makes it challenging for cybersecurity professionals to stay ahead of the curve and anticipate the next attack.

Many cyberattacks are highly sophisticated, employing advanced techniques and technologies. Attackers use a combination of social engineering, malware, ransomware, and other tactics to exploit vulnerabilities in systems and networks.

