ChatGPT? To AI or not to AI, that is the question!

Large Language Model (LLM) AI is raging across industries, sending shockwaves of awe and horror in equal measures, and raising so many questions.

Are services like OpenAI's ChatGPT a threat or a friend?  If they are detrimental, should they be regulated? Can we leverage the power of AI in our organisations to protect ourselves better and automate our digital and physical worlds?

In this blog, we will explore scams generated using LLM AI, and what we can do to remain vigilant against the security risks that arise as you join the AI Party!

Like any highly disruptive technology, the good comes with the bad.  We saw this with the advent of the Cloud: it brought benefits galore, and the pundits applauded the innovation.

However, few talked about the darker side of the Cloud: soaring carbon emissions from data centres, data security, the risks of migrating workloads, shared platforms that can expose not one but many customers in a single data breach, data sovereignty, and information ownership.

These problems are still being grappled with today, and regulations such as GDPR and digital privacy laws are only just starting to catch up.

In March 2023, Elon Musk sponsored a petition, alongside other scientists, researchers, and technology leaders, to pause AI development due to the profound and detrimental impact it could have on society. In July 2023, Microsoft, Google, AWS, and Meta (Facebook) formed an industry working group, the Frontier Model Forum, to investigate making AI transparent and safe across the industry.

So, how can we make our organisations safe against threat actor-driven AI?

  1. Awareness: Understanding what threat AI could pose for your organisation.
  2. Research: Keeping track of proof of concepts regularly published on this topic.
  3. Use and understanding:  How to make AI work for you?

Awareness: Understanding what threat AI could pose for your organisation

Large Language Model AI such as ChatGPT makes research, coding, and complex language tasks very easy for bad actors, lowering the barrier to entry to cybercrime for everyone.

The primary, and in fact the greatest, threat to an organisation is business email compromise (BEC). Email scams overshadow the cost of any other cyber incident by a significant margin, and email is the communication highway for your organisation. Most of these attacks are launched from overseas countries where English may not be the first language, and in the early days (see the example below) invoice and supply scams were easy to spot.

Article 1:  An example of an overseas-based, human-produced email invoice scam.


With ChatGPT, non-English-speaking threat actors can produce legitimate-looking, well-crafted, professional, and highly targeted emails with a few simple prompts. In the example below, all that was needed to produce the output was the prompt “create an email requesting payment”.

Article 2: An example of a ChatGPT-produced email-based invoice scam

In addition, the visual look and feel of emails, along with a specific company's tone of voice and artwork, can all be more easily mimicked with AI assistance.

Users need to be aware of the increasing legitimacy of phishing emails, as well as the risk that your organisation's email security will probably not catch every threat. Users need to be especially diligent with supply chain, financial, and executive directive emails. When in doubt, we need to educate our staff to pick up the phone and contact the relevant person, inside or outside the organisation, to double or triple check the legitimacy of the email.

Other forms of AI-assisted attack are continually emerging, showing that the threat posed by AI, and in particular LLMs like ChatGPT, continues to rise. For example:

  • In February 2023, Codeblue29, a group that managed to circumvent ChatGPT's controls, duped the LLM into creating polymorphic malware that could evade a vendor's Endpoint Detection and Response (EDR) solution.
  • More recently, a team of researchers from British universities trained a learning model that can steal data by listening to keyboard typing, with 95% accuracy. Using just the sound recorded over Zoom through an iPhone microphone, the model still achieved 93% accuracy.  Given the proliferation of microphone-equipped devices, an attacker no longer needs to be close to, or inside, your systems to achieve data or credential theft.

Research:  The best defence is attack!

Keeping up to date with the latest developments and threats is key. Here are some steps you can take to safeguard your business:

  • Set up news feeds with keywords such as "AI-driven threats" and "AI cyber-attacks", and distribute key articles to your business stakeholder groups.
  • Read each article and assess the threat to your business. If required, take steps with your security team or security partner to mitigate it.
  • Create a security awareness newsletter for company employees that keeps them up to date in very simple language and makes the content relevant to them as users. Avoid technology jargon.
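As a minimal sketch of the first step above, a short script can flag incoming headlines that match your tracked keywords before they are distributed to stakeholders. The keyword list and sample headlines here are illustrative placeholders, not a definitive feed configuration:

```python
# Minimal sketch: flag news headlines matching AI-threat keywords.
# Keywords and sample headlines below are illustrative assumptions.

KEYWORDS = ["ai-driven threat", "ai cyber-attack", "llm", "chatgpt"]

def flag_relevant(headlines):
    """Return the headlines containing any tracked keyword (case-insensitive)."""
    flagged = []
    for title in headlines:
        lowered = title.lower()
        if any(keyword in lowered for keyword in KEYWORDS):
            flagged.append(title)
    return flagged

sample = [
    "New AI-driven threat targets invoice workflows",
    "Quarterly earnings report released",
    "Researchers demo LLM-generated phishing campaign",
]

for hit in flag_relevant(sample):
    print(hit)
```

In practice you would feed this filter from your actual RSS or news API sources; the point is simply to automate the triage so only relevant articles reach your stakeholder groups.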

Use and understanding:  How to make AI work for you?

But "Isn't AI bad?" you ask.  The good news is that 81% of the industry believes that cyber defence powered by AI is a good thing (source: BlackBerry).

Let’s examine your options.

  • Email:  There are a growing number of companies and startups using AI to protect company emails.  For example, Abnormal Security is a leader in this space. Their platform is AI-powered and "learns" the company email behaviours and interactions.  Who better to catch the AI scammers than AI itself?
  • Cloud & Network Security:  Market leaders such as Palo Alto Networks employ broadscale machine learning (great for mathematical models) and AI across many of their cloud-based security solutions today, including XDR, network security, and endpoint protection. They even employ AI for operations in their SD-WAN networking platform, making troubleshooting and problem identification a breeze.
  • SIEM & SOC: Microsoft has announced plans to bring the power of AI to cyber defence by launching Security Copilot with Sentinel (powered by the same technology that underpins ChatGPT) to help defenders see through the noise of web traffic and identify malicious activity. It will help security teams catch what others miss by correlating and summarising data on attacks, prioritising incidents, and recommending the best course of action to swiftly remediate diverse threats.

If you're looking at threat protection and the capabilities of AI in Cyber Security, then speak to the Superloop SASE and Cyber Security team.

Superloop Cyber Security.  Work anywhere.  Secured everywhere.