Large Language Model (LLM) AI is raging across industries, sending shockwaves of awe and horror in equal measure, and raising many questions.
Are services like OpenAI's ChatGPT a friend or a threat? Are they detrimental? Should they be regulated? Can we leverage the power of AI in our organisations to protect ourselves better and automate our digital and physical worlds?
In this blog, we will explore scams generated using LLM AI, and what we can do to remain vigilant against the security risks that arise as you join the AI Party!
Like any highly disruptive technology, the good comes with the bad. We saw this with the advent of the Cloud: it brought benefits galore, and pundits applauded the innovation.
However, few talked about the Cloud's darker side: soaring carbon emissions from data centres, data security risks when migrating workloads, multi-tenant platforms that expose not one but many customers to a single data breach, data sovereignty, and information ownership.
These problems are still being grappled with today. Regulations such as the GDPR and digital privacy laws are only just starting to catch up.
In March 2023, Elon Musk joined other scientists, researchers, and technology leaders in signing an open letter calling for a pause on AI development, due to the profound and detrimental impact it could have on society. In July 2023, Anthropic, Google, Microsoft, and OpenAI formed an industry working group, the Frontier Model Forum, to investigate making AI transparent and safe across the industry.
So, how can we make our organisations safe against threat actor-driven AI?
Awareness: Understanding what threat AI could pose for your organisation
Large Language Model AI such as ChatGPT makes research, coding, and complex language tasks far easier for bad actors, lowering the barrier to entry for cybercrime.
The primary threat to an organisation is business email compromise (BEC): email scams overshadow the cost of any other cyber incident by a significant margin. Email is the communication highway of your organisation. Most of these attacks are launched from overseas, where English may not be the first language, and in the early days (see the example below) invoice and supply scams were easy to spot.
Article 1: An example of an overseas-based, human-produced email invoice scam.
With ChatGPT, non-English-speaking threat actors can produce legitimate-looking, well-crafted, professional, and highly targeted emails with a few simple prompts. In the example below, all that was needed to produce the output was the prompt "create an email requesting payment".
Article 2: An example of a ChatGPT-produced email-based invoice scam.
In addition, the visual look and feel of emails, the tone of voice of specific companies, and artwork can all be created more easily with AI assistance.
Users need to be aware of the increasing legitimacy of phishing emails, and of the risk that your organisation's email security will probably not catch every threat. Users need to be more diligent with supply chain, fiscal, and executive directive emails. When in doubt, we need to educate our staff to pick up the phone and contact the relevant person, inside or outside the organisation, to double or triple check the legitimacy of the email.
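Alongside that human verification step, some checks on a suspicious email can be automated. The sketch below, using only Python's standard library, inspects a raw message's Authentication-Results header (standardised in RFC 8601) and compares the From and Return-Path domains, a common spoofing tell. The sample message and domain names are hypothetical, and real mail flows vary widely, so treat this as an illustration of the idea rather than a production filter.

```python
# Minimal sketch: flag reasons an email deserves manual verification.
# The sample message and all domains below are made up for illustration.
from email import message_from_string
from email.utils import parseaddr

RAW_EMAIL = """\
From: Accounts <accounts@supplier-example.com>
Return-Path: <bounce@mailer-example.net>
Authentication-Results: mx.example.org; spf=pass; dkim=fail
Subject: Invoice 1042 - payment due

Please pay the attached invoice.
"""

def flags_for_review(raw: str) -> list[str]:
    """Return a list of reasons this email deserves manual verification."""
    msg = message_from_string(raw)
    reasons = []

    # Check the receiving server's SPF/DKIM verdicts, if recorded.
    auth = (msg.get("Authentication-Results") or "").lower()
    for check in ("spf", "dkim"):
        if f"{check}=pass" not in auth:
            reasons.append(f"{check} did not pass")

    # A From/Return-Path domain mismatch is a common spoofing tell.
    from_domain = parseaddr(msg.get("From", ""))[1].rpartition("@")[2]
    bounce_domain = parseaddr(msg.get("Return-Path", ""))[1].rpartition("@")[2]
    if from_domain and bounce_domain and from_domain != bounce_domain:
        reasons.append("From/Return-Path domain mismatch")

    return reasons

print(flags_for_review(RAW_EMAIL))
# → ['dkim did not pass', 'From/Return-Path domain mismatch']
```

An empty list does not prove an email is safe; it only means these particular tells are absent, which is why the phone call to the sender remains the decisive check.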
Other forms of attack that use AI are continually emerging, showing that the threat levels presented by AI, and in particular LLMs like ChatGPT, continue to rise.
Research: The best defence is attack!
Keeping up to date with the latest developments and threats is key to safeguarding your business.
Use and understanding: How to make AI work for you
But "Isn't AI bad?" you ask. The good news is that 81% of the industry believes that Cyber defence powered by AI is a good thing (source: Blackberry).
Let’s examine your options.
If you're looking at threat protection and the capabilities of AI in Cyber Security, then speak to the Superloop SASE and Cyber Security team.
Superloop Cyber Security. Work anywhere. Secured everywhere.