1 August 2024

3 min read

Words of warning: The potential for criminal exploitation of Large Language Models

Cyber security
Long bridge with structural roof in repeated patterns

Recent technological advancements have brought about remarkable innovations, transforming the way we live, work, and communicate. Among these advancements, generative AI stands out as a revolutionary tool with immense potential to benefit society.

However, as with any powerful technology, it suffers from the dual use problem in that it can be used for both good and malicious purposes. One alarming aspect is its potential for abuse by threat actors, particularly in enhancing the capabilities of less sophisticated attackers and crafting more convincing phishing lures.

Empowering the unsophisticated threat actor

Generative AI has the potential to democratise cybercrime. Previously, sophisticated cyberattacks (especially those requiring custom tooling or deep understanding of the underlying technologies) were primarily the domain of skilled attackers and well-funded organisations. However, AI models capable of creating code, automating tasks, and generating plausible text can lower the barrier to entry, enabling less experienced and less resourceful individuals to launch more complex and damaging cyberattacks.

AI models capable of creating code, automating tasks, and generating plausible text can lower the barrier to entry, enabling less experienced and less resourceful individuals to launch more complex and damaging cyberattacks.”

For example AI can be used to generate destructive families of malware, such as ransomware with relatively little manual intervention. With only limited knowledge of the problem domain, a would-be attacker can input a few basic requests into a prompt (typically the interface used to interact with these technologies), and the AI can produce malicious code. Although not always immediately fully functional, this code can serve as a starting point for an attacker. This ease of creation means that there could be a proliferation of new threats, each more refined than those that came before.

Crafting convincing phishing lures

Phishing still remains one of the most effective methods for cybercriminals to gain access to sensitive information. Traditionally, low sophistication phishing attempts could often be identified by poor grammar, misspellings, and a lack of personalisation. However, generative AI can craft highly convincing emails that are indistinguishable from legitimate communications.

AI-powered language models can analyse data to understand the nuances of tone, context, and language associated with specific individuals or organisations. This allows hackers to create personalised phishing emails that appear authentic and are highly likely to deceive even the most vigilant recipients. With enough sample data, LLMs can even generate emails that mimic the writing style of an individual, such as a  company’s CEO, making it incredibly challenging for recipients to identify the emails as fraudulent.

Manipulation of LLMs

Not only can threat actors abuse the power of LLMs to create tooling and pretexts, but they can also abuse poorly designed applications incorporating LLMs insecurely into their workflow through several techniques. Most of these techniques revolve around some form of “prompt injection”, or manipulating the input provided to the LLM to get it to do something not originally intended (such as leaking sensitive training data). This can be thought of in a similar way to socially engineering (or tricking) a machine, using logic to bypass any potential guardrails built into the system. The OWASP project - famous for maintaining lists of the “Top 10” issues for Web applications, APIs and mobile applications - now has a list specifically curated for LLMs and highlights the most common issues.

Threat actors abuse the power of LLMs to create tooling and pretexts, but they can also abuse poorly designed applications incorporating LLMs insecurely into their workflow through several techniques.”

Advice for individuals and business

The rising capabilities of generative AI in the hands of threat actors can pose significant risks to both individuals and organisations. As mass phishing schemes become more sophisticated and malware creation becomes more easily accessible, the potential for higher-sophistication attacks increases.

To mitigate these risks, it is crucial for organisations and individuals to understand the art of the possible and adopt a multi-faceted approach. This includes:

  • Enhanced employee training: Perform regular training on the latest phishing tactics and bypass techniques, including those that make use of generative AI (deepfakes of images, audio and video). Raising awareness of how to recognise unsolicited and suspicious emails or direct messages through common collaboration platforms such as Microsoft Teams can also help reduce the risk of falling victim to phishing attacks.
  • Awareness of the data your LLMs have access to: When exposing an LLM’s prompt to the Internet, consider the data that was provided to that LLM and its sensitivity. Avoid providing capabilities that allow it unrestricted outbound Internet access and the ability to query internal APIs, unless explicitly necessary, in which case efforts should be taken to secure the potential misuse.
  • Regulation and ethical use: Promoting the ethical use of AI and implementing regulatory measures and safeguards to prevent its misuse can help curb the potential for abuse, although the existence of so-called “jailbroken” LLMs are likely to increase, especially as additional guardrails are placed on commercial models.

Generative AI holds tremendous promise for positive advancements in many fields, but its potential for abuse can also lead to significant risks. As we continue to develop and integrate AI technologies, it is vital to remain aware and proactive in safeguarding against their potential misuse, ensuring that these powerful tools benefit humanity rather than harm it.

Please do not hesitate to contact S-RM to discuss any aspect of generative AI.

Share this post

Subscribe to our insights

Get industry news and expert insights straight to your inbox.