Blog
Insights

How will generative AI really impact cyber security?

Chris Wallis
Author
Chris Wallis
Founder & CEO

Key Points

What is generative AI and why is it growing so fast?

Generative AI is a new paradigm when it comes to AI. The AI we grew up with largely involved pattern-recognition – think self-driving cars needing to understand what’s on the road – while generative AI is responsible for creation rather than recognition – competently mimicking what it has seen in huge training data sets to produce realistic answers to questions, or images based on prompts. Harnessing the power of AI to create is one of the biggest shakeups for humanity since the invention of the computer, or the internet itself, with the potential to supercharge human productivity.  

It’s unlikely to make us all redundant though, it’s not perfect and humans are still required to check the output, remaining the operator in the same way we operate computers, and before that the plough. As it impacts many walks of life, cyber security is no exception, and many expect it to make a big impact in the years ahead.

What incidents have we seen involving generative AI?

There have been a few interesting incidents already, such as the exploitation of the AI itself. Hackers have been able to instruct generative chatbots to disable their own guard-rails (the “safe mode” they are designed to operate within) and get them to divulge information about how they work internally, or provide information they shouldn’t be able to give, like how to make a pipe bomb. This raises fascinating concerns about the AI-robot-future we are all destined to live in, and how safely we can engineer these machines to look after us. 

The UK’s NCSC has released guidance on tools like ChatGPT, likening their use with the wartime phrase “loose lips sink ships”, highlighting correctly that anything sent to these tools could inadvertently be revealed either by someone working for the AI owner, or a hacker who has accessed their systems. As a result, ChatGPT-like tools have been banned by law firms fearing that sensitive client information could be leaked. 

What threats do enterprises face because of generative AI tools?

Information leakage is an immediate and valid concern. However, it won’t be long before cyber criminals are using AI to hone their attacks. Common techniques used by hackers today involve duping their victims by sending emails imitating the CEO, asking for money transfers to be made or bank login credentials to be shared.  

If ChatGPT can explain how to remove a peanut butter sandwich from a VCR in the style of the King James Bible, it can surely be used to mimic the tone and style of writing of your CEO. That hasn’t been known to happen yet, but it’s only a matter of time.  

As technology that can mimic how you look and sound converges, think ChatGPT mixed with a Deepfake, we will enter a very difficult period where you potentially can’t trust what your boss is saying even over a video call. 

How could generative AI be used in cyber-attacks?

Phishing, vishing, smishing… are all types of attacks which are intended to dupe the victim and have huge potential for AI to improve their efficacy, and even automate to the point where attackers sit back while their algorithms fleece unsuspecting finance departments the world over. 

To an extent, AI’s may even be able to automate some more elaborate hacking techniques, although as those who have tried to use it for programming may know, it is not always great in this area.

How can businesses protect themselves from these attacks?

Businesses will need to do two things. Firstly, processes will need to improve. Getting an email from your boss asking for a bank transfer won’t cut it. Software solutions that require two-factor authentication and approval processes will help remove any spurious external emails that look like they are from your CEO. 

Secondly, communication technology itself will need to improve. When it comes down to being able to mimic someone’s voice, you will need to know that the person you’re talking to really is your boss – whether they sound like them or not! That’s a scary scenario and the tools to combat this don’t exist yet, but they will have to in the years ahead.

There’s potential of course that AI could also save us from these attacks. Since Generative AIs are trained on huge data sets, perhaps the biggest data set we have is our own set of emails and calls, so could a defensive-AI stand a better chance of detecting an offensive-AI trained on less data on how to mimic your boss?  

Perhaps, but one thing is for sure that it will raise the bar. Giving cyber criminals access to easily craft more legitimate sounding scams will mean the defenders will have to raise their game somehow, and that’s worrying given how many of these lower quality scams are already succeeding today.

What does the future hold for generative AI as threats evolve?  

It’s important to remember that many organisations are still being duped on a daily basis by non-AI enabled attacks. But that the same defences will work against AI enabled attacks too. By making sure you have robust processes for double-checking where instructions are coming from, and strong authentication protecting communication systems, you’ll defeat the attacks of today and lay the groundwork for the attacks of tomorrow. Beyond that, keep an eye out for any tools which increase the trustworthiness of your communication, as identity and authentication will be critical in tomorrow’s AI-enabled battleground. Maybe Twitter’s much discussed Blue Ticks will need to be integrated to video calling software soon.

Get our free

Ultimate Guide to Vulnerability Scanning

Learn everything you need to get started with vulnerability scanning and how to get the most out of your chosen product with our free PDF guide.

Sign up for your free 14-day trial

7 days free trial