Summary: Automation is one of the central topics of cybersecurity discussions in any professional circle today. And for a good reason – an ever-increasing number of processes have been taken over by tools such as Artificial Intelligence (AI) and Machine Learning (ML). But this shift should come with an appropriately designed cybersecurity plan that covers the new operating environment.

As AI and ML wind their way further into our daily dialogue, organizations must ensure their security strategies are braced for the challenges these technologies inevitably bring. It is a time of transition for public-facing entities, and those not calibrated to the change risk becoming the collateral damage of Industry 4.0.

Leveraging AI/ML in Cybercrime

AI/ML is not inherently bad, but like any tool, it can be used for unscrupulous purposes. Unfortunately, attackers have not hesitated to apply this new technology to their criminal endeavors. For example, AI/ML can now be found force-multiplying the malicious reach of ploys like ransomware, phishing, and Business Email Compromise (BEC).

AI in Ransomware

Mikko Hyppönen, chief research officer at cybersecurity firm WithSecure, noted in Protocol that “We have already seen [ransomware groups] hire pen testers to break into networks to figure out how to deploy ransomware. The next step will be that they will start hiring ML and AI experts to automate their malware campaigns.” He argues that to make this happen, threat actors will pay AI/ML experts two to three times their usual salary to “go to the dark side”.

As ransomware attacks are typically tailored to the individual company or market, they have been historically difficult to scale. However, as ransomware payout rates go up, attackers are looking for a more powerful way to cash in. According to Proofpoint, the number of companies that agreed to pay a ransom increased from 34% in 2020 to 58% in 2021. This has motivated the demand for better, faster ways of disseminating ransomware that works.

One of the ways in which AI facilitates that goal is through its ability to spin up polymorphic malware. This malicious software type can change its code as it moves through a system, using an encryption key to alter its shape and signature. Combining self-propagating code with a mutation engine, it can rapidly modify its appearance. One example is Win32/VirLock, one of the first ransomware strains to leverage polymorphism; it locks computer screens, encrypts data, and alters its structure with every infected execution.

AI in Phishing

AI/ML has also changed the game for phishers. AI crawlers make it possible for phishing campaigns to ingest a large amount of public-facing data on individuals, their companies, and the industry. This level of granularity feeds into custom-made phishing emails that are increasingly convincing. Automation and machine learning also make it possible for these messages to sidestep the initial “smell test” of bad grammar. “Historically, phishing emails have been somewhat easy to spot thanks to sloppy drafting,” said Brian Finch, co-leader of the cybersecurity, data protection & privacy practice at law firm Pillsbury Winthrop Shaw Pittman. “In particular, phishing emails created by a hacker unfamiliar with a certain language [have] tended to be easy to spot due to poor grammar, illogical vocabulary, and bad spelling.”

Generative AI can create word-perfect emails in a variety of languages and styles, making it even harder to catch fraud. And let’s not forget the sheer power of AI automation. Now, each stage of the phishing process – from reconnaissance to scripting to sending to response – can be automated and performed by AI at a scale unimaginable by human operators alone.

AI in Business Email Compromise (BEC)

BEC is one of the highest-yielding, if not the highest-yielding, exploits in cyberspace today. According to the FBI’s 2022 Internet Crime (IC3) Report, BEC accounted for over $2.7 billion in adjusted losses last year. By comparison, ransomware accounted for a relatively paltry $34.3 million, roughly 78 times less. And when money talks, attackers listen; a recent report by Fortra noted that 99% of all user-related threats are now impersonation attempts, with BEC specifically being called out.

As with phishing, scammers are leveraging Generative AI to construct accurate BEC emails in a myriad of different languages, immediately broadening their reach. There are even bespoke tools to facilitate this. WormGPT, an AI tool built for cybercrime, is reported to “[lack] the ethical safeguards” of its counterpart ChatGPT. In this unfiltered and unrestricted state, it freely crafts pernicious targeted messages designed to con account managers into paying fraudulent invoices.

As stated in the Verizon 2023 Data Breach Investigations Report, “Social engineering has come a long way from your basic Nigerian Prince scam to tactics that are much more difficult to detect.” AI and ML are largely to thank for that.

Defending Against AI/ML Use Cases

With rampant and irresponsible AI/ML usage congesting the threat landscape, companies need to reassess their current protection strategies. Fortunately, these tools cut both ways.

As published in the journal Big Data and Cognitive Computing (BDCC), “Machine learning can be used to continuously learn and adapt to new threats, making it an effective approach to keep up with the constantly evolving tactics of ransomware attackers.” Forbes notes that “Ironically, AI itself can be a potent tool in defending against AI threats,” citing as an example that “AI-driven threat detection systems can recognize patterns of behavior that human analysts might miss.” AI/ML technology can also eliminate false positives, cutting down drastically on security alerts and freeing SOCs to do more with their time.
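To make the pattern-recognition point concrete, even a simple statistical baseline can separate a genuine outlier from routine noise, which is the core idea behind cutting false positives; real AI-driven detection systems are far more sophisticated. The event data, metric, and threshold below are entirely hypothetical:

```python
from statistics import mean, stdev

def anomaly_scores(baseline, observed):
    """Score observed event counts against a learned baseline.

    Returns a z-score per observation: how many standard deviations
    each value sits from the baseline mean. High scores flag behavior
    a static threshold rule might miss or mis-prioritize.
    """
    mu, sigma = mean(baseline), stdev(baseline)
    return [(x - mu) / sigma for x in observed]

# Hypothetical baseline: typical failed-login counts per hour.
baseline = [3, 5, 4, 6, 2, 5, 4, 3, 5, 4]
observed = [4, 5, 61]  # the spike is what we want surfaced

# Only observations far outside normal variation generate an alert,
# so routine fluctuation never reaches the SOC queue.
flagged = [x for x, z in zip(observed, anomaly_scores(baseline, observed)) if z > 3]
print(flagged)  # → [61]
```

In practice the “baseline” would be a continuously retrained model over many behavioral features, but the triage logic is the same: score against learned normal behavior, escalate only the true outliers.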

Additionally, OT infrastructure needs to be revamped for cybersecurity resilience on its own terms. Whether or not an organization chooses to use AI as part of its security defenses, cybercriminals are using AI in their everyday exploits, and companies need to be prepared. A report by Bridewell Consulting reveals that 86% of organizations detected a cyber incident affecting their OT/ICS environments in the last 12 months. And it’s no wonder: lingering legacy architecture is especially susceptible to the new risks introduced by IT/OT connectivity. For context, 79% of organizations’ main OT systems are older than five years, and 84% of OT/ICS environments can be accessed from corporate networks.

To combat these risks, an OT-friendly cyber resilience security strategy must emphasize the following three key principles:

  1. OT Asset Discovery | Identify and catalog all vital OT components. That means the big (network devices, industrial machines) and the small (sensors and controllers). After you’ve gained visibility over your assets, perform a risk assessment, prioritize remediation, and then stay on top of your evolving network by properly cataloging and protecting new assets as they come in.
  2. Network Segmentation | Attackers will try every weak link, so set up barriers to protect your more vulnerable OT assets from attackers moving laterally through your infrastructure. Otherwise, necessary IT/OT connectivity will expose them to the vast expanse of the internet and all the threats that lurk there. This type of containment enforces least privilege access, reduces the attack surface, and helps you stay compliant with security standards like NIST and IEC 62443.
  3. Zero Trust Security | As the attack surface of your critical infrastructure expands, zero trust security assumes all transactions are untrustworthy until proven otherwise through authentication. While this is good practice anywhere, legacy OT vulnerabilities make it especially crucial for OT environments. This entails micro-segmenting networks with additional security policies, leveraging adaptive access control to further validate higher-risk users, encrypting data, and practicing continuous authentication so that one-time trust is never enough to grant the “keys to the kingdom”.
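The asset discovery and prioritization step above can be sketched as a risk-ranked inventory. The following is a minimal illustration only; the asset names, fields, and scoring weights are hypothetical, not a vetted risk model:

```python
from dataclasses import dataclass

@dataclass
class OTAsset:
    name: str
    asset_type: str          # e.g. "PLC", "HMI", "sensor"
    age_years: int
    reachable_from_it: bool  # accessible from the corporate network?

def risk_score(asset: OTAsset) -> int:
    """Toy scoring: older, more exposed, more critical assets rank higher."""
    score = 0
    if asset.age_years > 5:        # legacy systems carry unpatched exposure
        score += 2
    if asset.reachable_from_it:    # IT/OT connectivity widens the attack surface
        score += 3
    if asset.asset_type in ("PLC", "RTU"):  # direct control of physical processes
        score += 2
    return score

# Cataloged inventory: the big (industrial machines) and the small (sensors).
inventory = [
    OTAsset("turbine-plc-01", "PLC", 12, True),
    OTAsset("floor-sensor-07", "sensor", 2, False),
    OTAsset("hmi-station-03", "HMI", 8, True),
]

# Remediate the riskiest assets first.
for asset in sorted(inventory, key=risk_score, reverse=True):
    print(asset.name, risk_score(asset))
```

A real program would feed this inventory from automated discovery tooling and keep it current as new assets come online, but the principle holds: you can only prioritize remediation for the assets you have cataloged.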

Have you implemented or updated your policies, procedures, and processes to enhance protection against AI-driven threats? If not, now is the perfect time to start.

ITEGRITI has deep experience across critical infrastructure cybersecurity programs, compliance, risk, and audit. Contact us today to learn how we can leverage this experience to help you accomplish your cybersecurity goals.
