Streamlining the chaos of vulnerability management

James Harrison, Senior Content Writer

When our Product Lead, Andy Hornegold, was asked to talk about vulnerability management at DTX Europe, he used the Optus breach in Australia to show how a vulnerability scanner might have prevented such a dramatic and damaging breach.  

“I’ve got 15,000 vulnerabilities...”

So said a CSO to me recently. My response? You can’t fix them all – and if you’re trying to fix them all, you’re chasing the wrong metrics. In the time it takes to fix those 15,000 vulnerabilities, another 15,000 will take their place. It’s a never-ending story.

But that CSO is not the only security professional to lose sleep over the numbers. 14,114 vulnerabilities have been publicly disclosed this year (as of 10th October), 2,544 of them rated critical and 5,686 rated high. These include some nasty headliners like ProxyNotShell and the lingering ghost of Log4Shell. And CISA’s Known Exploited Vulnerabilities (KEV) Catalogue currently lists 837 vulnerabilities.

You could try to fix them all. And in an ideal world perhaps you could. But we don’t live in an ideal world, which is why metrics like the Exploit Prediction Scoring System (EPSS) have come about. EPSS is an open, data-driven effort to estimate the probability that a software vulnerability will be exploited in the wild, designed to help security teams prioritise their vulnerability remediation efforts. But this still masks the main problem – that vulnerability management too often gets boiled down to a numbers game.
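
As an example, EPSS scores are available through FIRST’s public API. The snippet below is a minimal sketch, assuming that endpoint and its documented response fields behave as described in FIRST’s public docs – verify against the current API before relying on it for triage:

```python
# Minimal sketch: pull EPSS scores for a handful of CVEs from FIRST's public
# EPSS API (https://api.first.org/data/v1/epss) and order them by exploit
# probability. Endpoint and field names are assumptions based on FIRST's
# published documentation.
import requests

def epss_scores(cve_ids):
    resp = requests.get(
        "https://api.first.org/data/v1/epss",
        params={"cve": ",".join(cve_ids)},
        timeout=30,
    )
    resp.raise_for_status()
    return {row["cve"]: float(row["epss"]) for row in resp.json()["data"]}

if __name__ == "__main__":
    # Log4Shell and the two ProxyNotShell CVEs mentioned above
    scores = epss_scores(["CVE-2021-44228", "CVE-2022-41040", "CVE-2022-41082"])
    # Highest exploit probability first - a crude but useful ordering for triage
    for cve, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{cve}: {score:.3f}")
```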

It’s more than just a numbers game

Vulnerabilities are a data scientist’s dream, but vulnerability management is so much more than a numbers game. We need to move beyond the numbers and see the bigger picture, focusing on the vulnerabilities that have a direct impact on the organisation and are being exploited now. And not just on the tangible things we can see – we need to know as much about the environment and attack surface as possible.

Because it’s what you don’t know that poses the biggest risk to your business. In almost every Red Team engagement I’ve worked on, we managed to gain access to an environment, network, asset or system that the customer didn’t even know existed...

What you don’t know is your biggest risk

Vulnerability management can be boiled down to four key phases:

  • Detect: all the vulnerabilities
  • Prioritise: the vulnerabilities
  • Control: fix, mitigate or accept the vulnerabilities  
  • Report: vulnerabilities to stakeholders

Here we’ll focus on the detect phase. Detection is so tightly aligned with asset discovery that vulnerability management is fundamentally flawed if you don’t have visibility across your whole environment and don’t detect vulnerabilities quickly.

Without good asset management you won’t know which assets are under your control, you can’t scan them all for vulnerabilities, and you can’t know which vulnerabilities are a potential risk to your organisation.
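
As a simple illustration of feeding asset discovery into scanning, here’s a minimal sketch that assumes an AWS environment with boto3 credentials already configured; real discovery would also need to cover load balancers, other regions, other clouds and DNS records:

```python
# Minimal sketch, assuming AWS and configured boto3 credentials: enumerate the
# public IPs attached to EC2 instances so they can be fed into whatever
# vulnerability scanner you use.
import boto3

def public_ec2_addresses(region="eu-west-1"):
    ec2 = boto3.client("ec2", region_name=region)
    addresses = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                ip = instance.get("PublicIpAddress")
                if ip:
                    addresses.append((instance["InstanceId"], ip))
    return addresses

if __name__ == "__main__":
    for instance_id, ip in public_ec2_addresses():
        print(f"{instance_id} -> {ip}")  # targets for the next scan
```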

Cloud servers are now the #1 method of entry

So where does the danger lie? According to the Hiscox 2022 Cyber Readiness Report, cloud servers are now the number one method of entry, and small-to-medium-sized businesses are the fastest-growing target.

With cloud adoption continuing to increase, almost every company now relies on cloud services to some extent – and with responsibility for cloud assets devolved to developers and end users, it’s no wonder that cloud assets have become the #1 attack vector. When teams have the flexibility and permission to expose new services to the internet without visibility from security teams or those managing risk, there will increasingly be assets the organisation isn’t even aware of. Let’s take a real-life example.

Optus breached

Last month a post hit Breach Forums, a site used to sell data or access. The poster claimed to have breached Optus – an Australian telco giant – and accessed the customer information of 11.2 million people, including:

  • Full name
  • Date of Birth
  • Mobile number
  • Email address
  • Physical address
  • Identification documents (passports, driving licences)

It was later confirmed that the data also included Medicare numbers (Medicare being Australia’s equivalent of the NHS). 11.2 million users – that’s over 40% of the Australian population!

Of course, a breach of this size made the mainstream news – and with good reason. And when you hear about this kind of breach you think: HOW?! How did an attacker get access to the data of 11.2 million users from the second largest telecommunications provider in Australia? They must be pretty sophisticated attackers, throwing zero-day exploits around, right?

The Optus response

And that’s certainly the message put out by Optus. CEO Kelly Bayer Rosmarin said: “Optus has very strong cyber defences. Cyber security has a lot of focus and investment here. So, this should serve as a warning call to all organisations. There are sophisticated criminals out there, and we need all organisations to be on alert.”

But investigative journalist Jeremy Kirk wasn’t convinced and contacted the person selling the data to validate the hack. When asked how they got access, the attacker responded with an API endpoint and stated that it had an “access control bug”. Interesting, but what did the attacker mean by an “access control bug”? When Kirk asked, the response was simply: “No authenticate needed”.

Kirk then asked how the attacker got access to so many records through a single API endpoint. None of this sounds particularly sophisticated... indeed, you could do it yourself with off-the-shelf tooling.
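
If the reports are accurate and the endpoint required no authentication, even a trivial smoke test would have flagged it. Here’s a minimal sketch, with hypothetical placeholder URLs standing in for your own API endpoints:

```python
# Minimal sketch: confirm that API endpoints refuse unauthenticated requests.
# The URLs below are hypothetical placeholders - substitute your own endpoints.
import requests

ENDPOINTS = [
    "https://api.example.com/v1/customers/1",
    "https://api.example.com/v1/accounts/1",
]

def requires_auth(url):
    resp = requests.get(url, timeout=10)  # deliberately no credentials
    # Anything other than 401/403 deserves a closer look.
    return resp.status_code in (401, 403)

if __name__ == "__main__":
    for url in ENDPOINTS:
        status = "OK" if requires_auth(url) else "EXPOSED?"
        print(f"{status}: {url}")
```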

Not so sophisticated after all?

While no one is blaming Optus – they're not the bad guys here – Clare O’Neil, Australia’s Cyber Security Minister, didn’t think it was a sophisticated attack either. Speaking to ABC News, she said: “What is of concern is quite a basic hack was undertaken. We should not have a telecommunications provider in this country which has effectively left the window open for data of this nature to be stolen.” It suddenly became very political.

The API wasn’t hosted on some forgotten part of a legacy environment either. According to a reliable source, Kirk discovered it was hosted in Google Cloud/Apigee. Perhaps Optus had been focusing on the wrong things? That vulnerability could have been in place for weeks before it was discovered.

When the clock’s ticking...

In situations like this, when time is running out, you could carry out dark web monitoring, which would alert you once a compromise has already happened and might buy you a few hours. You could monitor egress from your APIs and wait for a spike in the amount of data leaving your endpoints.

OR you could carry out live detection and vulnerability scanning of those endpoints the second they hit the internet – and buy yourself a lot more time.

Because we no longer have the luxury of time

Attackers are compromising systems faster and faster: at Intruder we have seen one threat group automatically compromising systems the second a certificate is registered for a domain.
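
Defenders can watch the same signal. Below is a minimal sketch that polls crt.sh for certificates logged against a domain and treats newly seen hostnames as triggers for a scan; crt.sh’s JSON output is an unofficial, best-effort interface, so the field names here are an assumption worth verifying:

```python
# Minimal sketch: pull certificates logged against a domain from crt.sh and
# surface hostnames that should be queued for scanning. The "name_value"
# field name is based on crt.sh's commonly seen JSON output and may change.
import requests

def certificate_hostnames(domain):
    resp = requests.get(
        "https://crt.sh/",
        params={"q": f"%.{domain}", "output": "json"},
        timeout=60,
    )
    resp.raise_for_status()
    hostnames = set()
    for entry in resp.json():
        for name in entry.get("name_value", "").splitlines():
            hostnames.add(name.strip().lower())
    return hostnames

if __name__ == "__main__":
    # In practice you would persist the previously seen set (file, database)
    # and diff against it on every poll; it starts empty here for illustration.
    previously_seen = set()
    for host in sorted(certificate_hostnames("example.com") - previously_seen):
        print(f"new certificate logged for {host} - queue it for scanning")
```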

The move to continuous deployment means that infrequent pen testing or scanning isn’t enough to catch the latest vulnerabilities. Technology continues to speed up. We’re working in a world that wants to increase the rate of innovation, facilitate lean processes, and reach success sooner.

And we’re doing that by decreasing the overhead to deploy with CI/CD, asking developers to manage their own infrastructure, and releasing prototypes and MVPs more rapidly.

To maintain security, we need detection and visibility to increase at the same rate. Everything is evolving so fast that you simply can't wait for penetration tests to complete. You need to see what’s hitting the internet, and you need to make sure it’s secure.

It’s become so important that CISA has introduced a binding operational directive for US federal agencies, mandating that by the end of Q2 2023 they have asset discovery in place and build a vulnerability detection process on top.

Simplify the chaos and streamline management

Vulnerability management doesn’t need to be difficult. Ask yourself whether your current solutions:

  • Automatically detect new systems that are added to your cloud account
  • Identify vulnerabilities in your systems the second they’re changed or brought online
  • Identify vulnerabilities as soon as a check is available
  • Prioritise detected vulnerabilities so that you know what to fix and when
  • Allow you to add team members so that you don’t have to fix everything yourself
  • Integrate with other solutions so that you can track vulnerabilities

You should also be tracking metrics such as the following (see the sketch after this list for one way to compute them):

  • Time from a system coming online to being scanned for vulnerabilities
  • Time to fix vulnerabilities, broken down by severity
  • Time between a vulnerability check being released and a scan of all your known assets being completed
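
Here’s a minimal sketch of how those metrics might be computed, using made-up field names and example data rather than any particular scanner’s export format:

```python
# Minimal sketch: compute the three metrics above from simple per-finding
# records. Field names and the example data are illustrative only.
from datetime import datetime
from statistics import mean

findings = [
    {
        "severity": "critical",
        "asset_online": datetime(2022, 10, 1, 9, 0),
        "first_scanned": datetime(2022, 10, 1, 13, 0),   # treated as detection time
        "check_released": datetime(2022, 9, 30, 12, 0),
        "scan_completed": datetime(2022, 10, 1, 13, 0),
        "fixed": datetime(2022, 10, 4, 17, 0),
    },
    # ... more findings ...
]

def hours(delta):
    return delta.total_seconds() / 3600

# Time from a system coming online to being scanned
print("online -> scanned (h):",
      mean(hours(f["first_scanned"] - f["asset_online"]) for f in findings))

# Time to fix, broken down by severity
for sev in {f["severity"] for f in findings}:
    times = [hours(f["fixed"] - f["first_scanned"]) for f in findings if f["severity"] == sev]
    print(f"time to fix, {sev} (h):", mean(times))

# Time from a check being released to a scan of all known assets completing
print("check released -> scan complete (h):",
      mean(hours(f["scan_completed"] - f["check_released"]) for f in findings))
```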

We make vulnerability management effortless

We’re actively helping customers deal with these kinds of problems at Intruder. Our vulnerability scanner helps you react faster to emerging threats by proactively scanning as soon as checks are available. We reduce your attack surface by identifying services that are internet facing and shouldn’t be – like remote desktop or direct database access. You can then focus on what’s important by filtering out the non-actionable noise from your scan results.

Sign up for a free trial to get started.

How quickly do scanners release checks?

Each release window below is rated by how ideal it is, along with the share of checks Tenable and Greenbone release in that window:

  • Before CVE details are published (🥳): Limited public information is available about the vulnerability. Red teamers, security researchers, detection engineers and threat actors have to actively research the type of vulnerability and its location in the vulnerable software, and build an associated exploit. Tenable release checks for 47.43% of the CVEs they cover in this window, and Greenbone release 32.96%.

  • Day of CVE publication (😊): Vulnerability information is publicly accessible. Red teamers, security researchers, detection engineers and threat actors now have access to some of the information they previously had to hunt down themselves, speeding up potential exploit creation. Tenable release checks for 17.12% of the CVEs they cover in this window, and Greenbone release 17.69%.

  • First week after CVE publication (😐): Vulnerability information has been publicly available for up to 1 week. The likelihood of exploitation in the wild is steadily increasing. Tenable release checks for 10.9% of the CVEs they cover in this window, and Greenbone release 20.69%.

  • Between 1 week and 1 month after CVE publication (🥺): Vulnerability information has been publicly available for up to 1 month, and some very clever people have had time to craft an exploit. We’re starting to lose some of the benefit of rapid, automated vulnerability detection. Tenable release checks for 9.58% of the CVEs they cover in this window, and Greenbone release 12.43%.

  • More than 1 month after CVE publication (😨): Information has been publicly available for more than 31 days. Any detection released a month after the details are publicly available is decreasing in value for me. Tenable release checks for 14.97% of the CVEs they cover in this window, and Greenbone release 16.23%.

With this information in mind, I wanted to check how long it takes both Tenable and Greenbone to release detections for their scanners. The following section will focus on vulnerabilities which:

  • Have a CVSSv2 rating of 10
  • Are exploitable over the network
  • Require no user interaction

These are the ones where an attacker can point their exploit code at your vulnerable system and gain unauthorised access.

We’ve seen previously that Tenable have remote checks for 643 critical vulnerabilities, and OpenVAS have remote checks for 450 critical vulnerabilities. Tenable release remote checks for critical vulnerabilities within 1 month of the details being made public 58.4% of the time, but Greenbone release their checks within 1 month 76.8% of the time. So, even though OpenVAS has fewer checks for those critical vulnerabilities, you are more likely to get them within 1 month of the details being made public. Let’s break that down further.

In Figure 10 we can see the absolute number of remote checks released on a given day after a CVE for a critical vulnerability has been published. What you can immediately see is that both Tenable and OpenVAS release the majority of their checks on or before the day the CVE details are made public; Tenable have released checks for 247 CVEs, and OpenVAS have released checks for 144 CVEs. Since 2010, Tenable have released remote checks for 147 critical CVEs, and OpenVAS for 79 critical CVEs, on the same day the vulnerability details were published. The number of vulnerabilities then drops off across the first week and drops further after 1 week, as we would hope for in an efficient time-to-release scenario.

Figure 10: Absolute numbers of critical CVEs with a remote check release date from the date a CVE is published

While raw numbers are good, Tenable have a larger number of checks available, so it could be unfair to go on raw numbers alone. It’s potentially more important to understand the likelihood that OpenVAS or Tenable will release a check for a vulnerability on any given day after a CVE for a critical vulnerability is released. In Figure 11 we can see that Tenable release 61% of their checks on or before the date that a CVE is published, and OpenVAS release a shade under 50% of their checks on or before the day that a CVE is published.

Figure 11: Percentage chance of delay for critical vulnerabilities

So, since 2010 Tenable has more frequently released their checks before or on the same day as the CVE details have been published for critical vulnerabilities. While Tenable is leading at this point, Greenbone’s community feed still gets a considerable percentage of their checks out on or before day 0.

I thought I’d go another step further and see if I could identify any trend in each organisation’s release delay: are they getting better year-on-year, or are their releases getting later? In Figure 12 I’ve taken the mean delay for critical vulnerabilities per year and plotted it. The mean as a metric is particularly influenced by outliers in a data set, so I expected some wackiness and limited it to checks released between 180 days before a CVE being published and 31 days after. These seem like reasonable limits, as anything more than 6 months prior to the CVE details being released is potentially a quirk of the check details, and anything after a 1-month delay is less important for us.
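
As a rough illustration of that calculation, here’s a minimal sketch assuming a pandas DataFrame with one row per remote check and hypothetical columns 'vendor', 'cve_published' and 'check_released' (the underlying dataset isn’t reproduced here):

```python
# Minimal sketch of the mean-delay-per-year methodology described above,
# clamped to the same [-180, +31] day window. Column names are assumptions.
import pandas as pd

def mean_delay_per_year(df, lower=-180, upper=31):
    df = df.copy()
    df["delay_days"] = (df["check_released"] - df["cve_published"]).dt.days
    # Ignore checks released more than 180 days before or 31 days after the
    # CVE details were published, as per the limits discussed above.
    df = df[(df["delay_days"] >= lower) & (df["delay_days"] <= upper)]
    df["year"] = df["cve_published"].dt.year
    return df.groupby(["vendor", "year"])["delay_days"].mean().unstack("vendor")

if __name__ == "__main__":
    sample = pd.DataFrame({
        "vendor": ["Tenable", "Greenbone", "Tenable"],
        "cve_published": pd.to_datetime(["2019-03-01", "2019-03-01", "2020-06-15"]),
        "check_released": pd.to_datetime(["2019-02-20", "2019-03-05", "2020-06-15"]),
    })
    print(mean_delay_per_year(sample))
```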

What can we take away from Figure 12?

  • We can see that between 2011 and 2014 Greenbone’s release delay was better than that of Tenable, by between 5 and 10 days.
  • In 2015 things reverse, and for 3 years Tenable is considerably ahead of Greenbone, by a matter of weeks.
  • But then in 2019 things get much closer, and Greenbone seem to be releasing on average about a day earlier than Tenable.
  • For both vendors, the trendline over the 11-year period is very close, with Tenable marginally beating Greenbone.
  • We don’t yet have any 2021 data for OpenVAS checks for critical show-stopper CVEs.

Figure 12: Release delay year-on-year (lower is better)

With the larger number of checks, and a greater percentage of their remote checks for critical vulnerabilities released early, Tenable could win this category. However, with the 2019 and 2020 delay times going to OpenVAS, and the trend lines being so close, I’m going to declare this one a tie.

The takeaway from this is that both vendors are getting their checks out the majority of the time either before the CVE details are published or on the day the details are published. This is overwhelmingly positive for both scanning solutions. Over time both also appear to be releasing remote checks for critical vulnerabilities more quickly.
