
How will generative AI really impact cyber security?

Chris Wallis, Founder & CEO


What is generative AI and why is it growing so fast?

Generative AI is a new paradigm in AI. The AI we grew up with was largely about pattern recognition – think of self-driving cars needing to understand what’s on the road – while generative AI is about creation rather than recognition: competently mimicking what it has seen in huge training data sets to produce realistic answers to questions, or images from prompts. Harnessing the power of AI to create is one of the biggest shake-ups for humanity since the invention of the computer, or the internet itself, with the potential to supercharge human productivity.

It’s unlikely to make us all redundant, though: it’s not perfect, and humans are still required to check the output, remaining the operator in the same way we operate computers, and before that the plough. Among the many walks of life it touches, cyber security is no exception, and many expect it to make a big impact in the years ahead.

What incidents have we seen involving generative AI?

There have been a few interesting incidents already, such as the exploitation of the AI itself. Hackers have been able to instruct generative chatbots to disable their own guard-rails (the “safe mode” they are designed to operate within) and get them to divulge information about how they work internally, or provide information they shouldn’t be able to give, like how to make a pipe bomb. This raises fascinating concerns about the AI-robot-future we are all destined to live in, and how safely we can engineer these machines to look after us. 

The UK’s NCSC has released guidance on tools like ChatGPT, likening their use to the wartime phrase “loose lips sink ships” and highlighting, correctly, that anything sent to these tools could inadvertently be revealed, either by someone working for the AI owner or by a hacker who has accessed their systems. As a result, some law firms have banned ChatGPT-like tools, fearing that sensitive client information could be leaked.

What threats do enterprises face because of generative AI tools?

Information leakage is an immediate and valid concern. However, it won’t be long before cyber criminals are using AI to hone their attacks. Common techniques used by hackers today involve duping their victims by sending emails imitating the CEO, asking for money transfers to be made or bank login credentials to be shared.  

If ChatGPT can explain how to remove a peanut butter sandwich from a VCR in the style of the King James Bible, it can surely be used to mimic the tone and style of writing of your CEO. That hasn’t been known to happen yet, but it’s only a matter of time.  

As technology that can mimic how you look and sound converges – think ChatGPT mixed with a deepfake – we will enter a very difficult period where you potentially can’t trust what your boss is saying, even over a video call.

How could generative AI be used in cyber-attacks?

Phishing, vishing, smishing… all are types of attack intended to dupe the victim, and AI has huge potential to improve their efficacy – even to automate them to the point where attackers sit back while their algorithms fleece unsuspecting finance departments the world over.

To an extent, AIs may even be able to automate some more elaborate hacking techniques, although, as anyone who has tried to use them for programming may know, they are not always great in this area.

How can businesses protect themselves from these attacks?

Businesses will need to do two things. Firstly, processes will need to improve. An email from your boss asking for a bank transfer won’t cut it. Software solutions that require two-factor authentication and approval processes will help weed out spurious external emails that look like they are from your CEO.
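As a sketch of the kind of dual-control process described above – all names here are hypothetical illustrations, not any real product’s API – a transfer might only be actioned when the request arrives over a verified, authenticated channel and has an approver other than the requester:

```python
# Hypothetical sketch of a dual-control payment approval check.
# Channel names and the rule itself are illustrative assumptions.
VERIFIED_CHANNELS = {"payments_portal"}  # email is deliberately absent


def transfer_allowed(channel, requester, approvers):
    """Approve a transfer only via a verified channel with a second pair of eyes."""
    if channel not in VERIFIED_CHANNELS:
        return False  # an email "from the CEO" never suffices on its own
    # The requester cannot approve their own request
    independent = set(approvers) - {requester}
    return len(independent) >= 1


print(transfer_allowed("email", "ceo", ["cfo"]))            # False
print(transfer_allowed("payments_portal", "ceo", ["cfo"]))  # True
```

The point of the sketch is that the control lives in the process, not in anyone’s judgement of how convincing the email looked.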

Secondly, communication technology itself will need to improve. Once someone’s voice can be mimicked, you will need to know that the person you’re talking to really is your boss – whether they sound like them or not! That’s a scary scenario, and the tools to combat it don’t exist yet, but they will have to in the years ahead.

There’s potential, of course, that AI could also save us from these attacks. Generative AIs are trained on huge data sets, and perhaps the biggest data set we have is our own archive of emails and calls – so could a defensive AI stand a better chance of detecting an offensive AI that was trained on far less data about how to mimic your boss?

Perhaps, but one thing is for sure: it will raise the bar. Giving cyber criminals the means to easily craft more legitimate-sounding scams will mean defenders have to raise their game somehow, and that’s worrying given how many of today’s lower-quality scams are already succeeding.

What does the future hold for generative AI as threats evolve?  

It’s important to remember that many organisations are still being duped on a daily basis by non-AI-enabled attacks – but the same defences will work against AI-enabled attacks too. By making sure you have robust processes for double-checking where instructions are coming from, and strong authentication protecting communication systems, you’ll defeat the attacks of today and lay the groundwork for the attacks of tomorrow. Beyond that, keep an eye out for any tools which increase the trustworthiness of your communication, as identity and authentication will be critical in tomorrow’s AI-enabled battleground. Maybe Twitter’s much-discussed Blue Ticks will need to be integrated into video-calling software soon.



| Release Date | Level of Ideal | Comments |
| --- | --- | --- |
| Before CVE details are published | 🥳 | Limited public information is available about the vulnerability. Red teamers, security researchers, detection engineers and threat actors have to actively research the type of vulnerability and its location in the vulnerable software, and build an associated exploit. Tenable release checks for 47.43% of the CVEs they cover in this window; Greenbone release 32.96%. |
| Day of CVE publish | 😊 | Vulnerability information is publicly accessible. Red teamers, security researchers, detection engineers and threat actors now have access to some of the information they were previously having to hunt down themselves, speeding up potential exploit creation. Tenable release checks for 17.12% of the CVEs they cover in this window; Greenbone release 17.69%. |
| First week since CVE publish | 😐 | Vulnerability information has been publicly available for up to 1 week. The likelihood of exploitation in the wild is steadily increasing. Tenable release checks for 10.9% of the CVEs they cover in this window; Greenbone release 20.69%. |
| Between 1 week and 1 month since CVE publish | 🥺 | Vulnerability information has been publicly available for up to 1 month, and some very clever people have had time to craft an exploit. We’re starting to lose some of the benefit of rapid, automated vulnerability detection. Tenable release checks for 9.58% of the CVEs they cover in this window; Greenbone release 12.43%. |
| After 1 month since CVE publish | 😨 | Information has been publicly available for more than 31 days. Any detection released a month after the details are publicly available is decreasing in value for me. Tenable release checks for 14.97% of the CVEs they cover in this window; Greenbone release 16.23%. |
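The five release windows above can be expressed as a simple bucketing function over the delay between a CVE being published and a vendor releasing its check. This is an illustrative sketch of the categorisation, not either vendor’s actual methodology:

```python
from datetime import date


def delay_bucket(cve_published, check_released):
    """Classify a check's release date into the five windows used above."""
    delay = (check_released - cve_published).days
    if delay < 0:
        return "before publication"
    if delay == 0:
        return "day of publication"
    if delay <= 7:
        return "first week"
    if delay <= 31:
        return "within a month"
    return "after a month"


print(delay_bucket(date(2021, 3, 1), date(2021, 2, 20)))  # before publication
print(delay_bucket(date(2021, 3, 1), date(2021, 3, 5)))   # first week
```

Note the boundaries: day 0 is its own bucket, and the “month” cut-off here is 31 days, matching the table.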

With this information in mind, I wanted to check the delay for both Tenable and Greenbone in releasing a detection for their scanners. The following section will focus on vulnerabilities which:

  • Have CVSSv2 rating of 10
  • Are exploitable over the network
  • Require no user interaction

These are the ones where an attacker can point their exploit code at your vulnerable system and gain unauthorised access.
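As a rough illustration of that filter, here is a sketch over hypothetical CVE records; the field names are my own assumptions, and real NVD data is shaped differently. Note also that CVSSv2 has no explicit user-interaction metric, so `Au:N` (no authentication required) stands in for that criterion here:

```python
def is_showstopper(cve):
    """CVSSv2 score of 10, network-exploitable, no authentication required."""
    return (
        cve["cvss_v2_score"] == 10.0
        and "AV:N" in cve["cvss_v2_vector"]  # exploitable over the network
        and "Au:N" in cve["cvss_v2_vector"]  # no authentication needed
    )


# Hypothetical records for illustration only
cves = [
    {"id": "CVE-A", "cvss_v2_score": 10.0, "cvss_v2_vector": "AV:N/AC:L/Au:N/C:C/I:C/A:C"},
    {"id": "CVE-B", "cvss_v2_score": 7.5,  "cvss_v2_vector": "AV:N/AC:L/Au:N/C:P/I:P/A:P"},
    {"id": "CVE-C", "cvss_v2_score": 10.0, "cvss_v2_vector": "AV:L/AC:L/Au:N/C:C/I:C/A:C"},
]

critical = [c["id"] for c in cves if is_showstopper(c)]
print(critical)  # ['CVE-A']
```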

We’ve seen previously that Tenable have remote checks for 643 critical vulnerabilities, and OpenVAS have remote checks for 450 critical vulnerabilities. Tenable release remote checks for critical vulnerabilities within 1 month of the details being made public 58.4% of the time, but Greenbone release their checks within 1 month 76.8% of the time. So, even though OpenVAS has fewer checks for those critical vulnerabilities, you are more likely to get them within 1 month of the details being made public. Let’s break that down further.

In Figure 10 we can see the absolute number of remote checks released on a given day after a CVE for a critical vulnerability has been published. What you can see immediately is that both Tenable and OpenVAS release the majority of their checks on or before the day the CVE details are made public: ahead of publication, Tenable have released checks for 247 CVEs and OpenVAS for 144. Since 2010, Tenable have released remote checks for 147 critical CVEs, and OpenVAS for 79, on the same day the vulnerability details were published. The numbers then drop off across the first week and fall further after 1 week, as we would hope for in an efficient time-to-release scenario.

Figure 10: Absolute numbers of critical CVEs with a remote check release date from the date a CVE is published

While raw numbers are good, Tenable have a larger number of checks available, so it could be unfair to go on raw numbers alone. It’s potentially more important to understand the likelihood that OpenVAS or Tenable will release a check for a vulnerability on any given day after a CVE for a critical vulnerability is released. In Figure 11 we can see that Tenable release 61% of their checks on or before the date that a CVE is published, and OpenVAS release a shade under 50% of theirs.

Figure 11: Percentage chance of delay for critical vulnerabilities

So, since 2010 Tenable has more frequently released their checks before or on the same day as the CVE details have been published for critical vulnerabilities. While Tenable is leading at this point, Greenbone’s community feed still gets a considerable percentage of their checks out on or before day 0.

I thought I’d go a step further and see if I could identify any trend in each organisation’s release delay: are they getting better year-on-year, or are their releases getting later? In Figure 12 I’ve taken the mean delay for critical vulnerabilities per year and plotted it. The mean as a metric is particularly influenced by outliers in a data set, so I expected some wackiness and limited the mean to checks released no more than 180 days before a CVE was published and no more than 31 days after. These seem like reasonable limits, as anything more than 6 months prior to the CVE details being released is potentially a quirk of the check details, and anything after a 1-month delay is less important for us.
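The clipped per-year mean described above can be sketched as follows; the record layout and function name are my own, for illustration only:

```python
from collections import defaultdict
from statistics import mean


def mean_delay_by_year(checks, lo=-180, hi=31):
    """Mean release delay per year, discarding outliers outside [lo, hi] days.

    `checks` is an iterable of (year, delay_in_days) pairs, where a negative
    delay means the check shipped before the CVE was published.
    """
    by_year = defaultdict(list)
    for year, delay_days in checks:
        if lo <= delay_days <= hi:  # drop outliers before averaging
            by_year[year].append(delay_days)
    return {y: mean(ds) for y, ds in sorted(by_year.items())}


# Hypothetical sample: the 400-day outlier is excluded by the clipping
sample = [(2019, -2), (2019, 5), (2019, 400), (2020, 0), (2020, 3)]
print(mean_delay_by_year(sample))  # {2019: 1.5, 2020: 1.5}
```

Clipping before averaging is what keeps one archival quirk (a check dated months before its CVE) from dragging a whole year’s mean around.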

What can we take away from Figure 12?

  • We can see that between 2011 and 2014, Greenbone’s release delay was better than Tenable’s, by between 5 and 10 days.
  • In 2015 things reverse, and for 3 years Tenable is considerably ahead of Greenbone, by a matter of weeks.
  • But then in 2019 things get much closer, and Greenbone seem to be releasing on average about a day earlier than Tenable.
  • For both, the trend line over the 11-year period is very close, with Tenable marginally beating Greenbone.
  • We don’t yet have any 2021 data for OpenVAS checks for critical show-stopper CVEs.

Figure 12: Release delay year-on-year (lower is better)

With the larger number of checks, and the greater percentage of remote checks for critical vulnerabilities released on or before day 0, Tenable could win this category. However, with the 2019 and 2020 delay times going to OpenVAS, and the trend lines being so close, I am going to declare this one a tie.

The takeaway from this is that both vendors are getting their checks out the majority of the time either before the CVE details are published or on the day the details are published. This is overwhelmingly positive for both scanning solutions. Over time both also appear to be releasing remote checks for critical vulnerabilities more quickly.
