The advancement of AI technologies, such as OpenAI's ChatGPT, has created a new vector for business email compromise (BEC) attacks. ChatGPT is an advanced AI model that generates human-like text from the input it receives. Cybercriminals can use this technology to automate the creation of convincing fake emails personalized to the recipient, increasing the likelihood of a successful attack.
This technology has become a popular tool among malicious actors, giving them a faster and more efficient means of conducting cybercrime. Researchers at SlashNext have discovered a new generative AI tool called WormGPT, advertised on dark web forums as a way for adversaries to launch sophisticated phishing and BEC attacks.
What is WormGPT?
“This tool presents itself as a blackhat alternative to GPT models, designed specifically for malicious activities,” security researcher Daniel Kelley said. “Cybercriminals can use such technology to automate the creation of compelling fake emails, personalized to the recipient, thus increasing the chances of success for the attack.”
The tool is described as the “biggest enemy of the well-known ChatGPT” that “lets you do all sorts of illegal stuff.” The tool uses the open-source GPT-J language model and boasts various features, including unlimited character support, chat memory retention, and code formatting capabilities.
According to reports, WormGPT was trained on a variety of data sources, with a particular focus on malware-related material. The tool's creator, however, has kept the specific datasets used during training confidential.
Posts seen by ZDNet on a Telegram channel purportedly created to advertise WormGPT suggest its author is working on a subscription plan for access, with prices ranging from $60 to $700. According to one channel participant, WormGPT already has more than 1,500 users.
What can WormGPT do?
Tools like WormGPT can be a potent weapon for bad actors, especially since large language models (LLMs) are already being abused to craft convincing phishing emails and produce malicious code, abuse that the makers of ChatGPT and Google Bard are moving swiftly to curb.
To thoroughly evaluate the potential risks posed by WormGPT, the security researchers at SlashNext ran experiments focused on BEC attacks. In one experiment, they instructed WormGPT to create an email designed to pressure an unsuspecting account manager into paying a fraudulent invoice.
The results were unsettling. WormGPT produced an email that was remarkably persuasive and strategically cunning, demonstrating its potential for sophisticated phishing and BEC attacks. The tool is comparable to ChatGPT but operates without any moral or ethical restrictions. The research underscores the serious threat that generative AI tools like WormGPT pose, even in the hands of inexperienced cybercriminals.
Why is WormGPT a danger to businesses?
WormGPT's lack of ethical constraints underscores the threat posed by unrestrained generative AI: it allows even novice attackers to launch attacks quickly and at scale, without the technical resources that would otherwise be required.
Malicious generative AI systems like WormGPT help cybercriminals because they provide them the ability to:
- Craft emails with flawless grammar and tone, making them appear legitimate and reducing the likelihood they are flagged as suspicious.
- “Democratize” the execution of sophisticated BEC attacks, enabling even unskilled scammers to launch them and putting this capability in the hands of a much broader range of cybercriminals.
How to safeguard against AI-crafted BEC scams
For all its benefits, the development of generative AI also opens up fresh attack methods, and strong preventive measures must be put in place. Here are some precautions organizations can take to guard against AI-crafted BEC scams.
Security awareness training focused on BEC scams
Companies should create thorough and up-to-date security awareness training programs to combat BEC attacks, particularly those that utilize AI. These programs should train employees on the characteristics of BEC threats, how AI is used to amplify them, and the strategies employed by attackers. It is essential to integrate this training into employees’ ongoing professional development.
Enhanced email verification measures
Organizations should implement strict email verification procedures to protect against AI-powered BEC attacks. This includes deploying systems that automatically flag emails that originate outside the organization while impersonating internal executives or vendors, as well as systems that detect messages containing keywords associated with BEC attacks, such as “urgent,” “sensitive,” or “wire transfer.” These measures help ensure that potentially harmful emails are carefully reviewed before any action is taken. A minimal sketch of such a filter follows below.
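To make this concrete, here is a minimal Python sketch of the kind of check such a system might apply at the mail gateway. The domain, executive roster, and keyword list are hypothetical placeholders; a production deployment would pull these values from a directory service and combine them with many other signals.

```python
from email import message_from_bytes
from email.utils import parseaddr
import re

# Hypothetical values for illustration only: substitute your own
# domain, executive roster, and keyword list.
INTERNAL_DOMAIN = "example.com"
EXECUTIVE_NAMES = {"jane doe", "john smith"}
BEC_KEYWORDS = re.compile(r"\b(urgent|sensitive|wire transfer)\b", re.IGNORECASE)

def flag_suspicious(raw_message: bytes) -> list[str]:
    """Return the reasons, if any, a message should be held for review."""
    msg = message_from_bytes(raw_message)
    reasons = []

    # Split the From header into a display name and an address.
    display_name, address = parseaddr(msg.get("From", ""))
    domain = address.rsplit("@", 1)[-1].lower()

    # An external sender reusing an executive's display name is a
    # classic BEC impersonation pattern.
    if domain != INTERNAL_DOMAIN and display_name.strip().lower() in EXECUTIVE_NAMES:
        reasons.append(f"external sender impersonating executive '{display_name}'")

    # Pressure language commonly found in BEC lures.
    if BEC_KEYWORDS.search(msg.get("Subject", "")):
        reasons.append("subject contains high-risk BEC keywords")

    return reasons
```

Any message that returns a non-empty list would be quarantined or banner-tagged for human review rather than delivered silently.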
Link isolation and sandboxing
Finally, businesses should augment their default email security by investing in tools that offer advanced protection against phishing and BEC attacks, looking in particular for features such as attachment sandboxing and link isolation. Attachment sandboxing scans incoming attachments for suspicious behavior, while link isolation lets employees open suspicious links in a protected, isolated environment so their legitimacy can be verified before any harm is done.
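As a rough illustration of how link isolation works under the hood, the sketch below rewrites every URL in a message body to detour through an isolation proxy. The endpoint and the URL pattern are hypothetical stand-ins; real gateways use their vendor's rewrite service and far more robust parsing.

```python
import re
from urllib.parse import quote

# Hypothetical isolation endpoint: substitute your vendor's rewrite URL.
ISOLATION_PROXY = "https://isolation.example.com/browse?url="

# A deliberately simple URL matcher; production gateways use full parsers.
URL_PATTERN = re.compile(r"https?://[^\s\"'<>]+")

def rewrite_links(body: str) -> str:
    """Rewrite each link so clicks open inside an isolated browser session."""
    return URL_PATTERN.sub(
        lambda m: ISOLATION_PROXY + quote(m.group(0), safe=""), body
    )

# Example: a lure link becomes a detour through the isolation service.
print(rewrite_links("Please review https://invoice-portal.example.net/pay now."))
```

Because every click first lands in the isolated environment, a malicious page can be inspected and rendered remotely without ever touching the employee's endpoint.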
ITEGRITI has deep experience across critical infrastructure cybersecurity programs, compliance, risk, and audit. Contact us today to learn how we can leverage this experience to help you accomplish your cybersecurity goals.