
The battle for IoT Security has already been lost

Chris Wallis
Founder & CEO


A few weeks ago, the website of popular cyber security journalist Brian Krebs was taken offline by a previously undiscovered botnet, now known as the Mirai botnet.

The attack was interesting for a number of reasons. First of all, it was, at the time, the largest DDoS attack in history, peaking at 620 Gbps (although there have already been reports of bigger ones). It was also the biggest marketing gaffe of the year, as Akamai dumped Krebs from the pro bono DDoS protection services they’d been offering him, missing an opportunity to claim they’d defended against the world’s biggest DDoS attack, and instantly earning themselves a load of bad press.

Most interesting of all though, it was the first big warning that the battle for security in the IoT space is being lost, and a taste of things to come.

There’s a lot of buzz in the industry at the moment around how we are going to secure all of these IoT devices (which are predicted to reach 20 billion by 2020), and a variety of new startups have sprung up claiming to do exactly that. But unfortunately, as the recent attacks show, it may already be too little, too late. The first battle has clearly been lost, and my prediction is that we will continue to lose the war for a good few years to come.

Why is this?

Well, fundamentally, it’s because IoT device manufacturers are not incentivised to spend money on securing their devices. But it’s not their fault: they aren’t incentivised because consumers don’t reward security with their buying patterns. And in fairness to those consumers, it’s nigh-on impossible to tell the difference between an IoT device that’s been developed securely and one that hasn’t. Consumers can tell what’s simple to use, though, and as security controls generally get in the way of a slick user experience, it’s no wonder we still see devices being shipped with default credentials.

There is also the issue that cyber security is a complex and multi-faceted beast, and fundamentally any device, even one that has been developed ‘securely’, has the potential to have new weaknesses discovered in it later on. That means that unless all IoT devices are hooked up to some kind of automatic patch management service, we will always end up with a problem on our hands.

So instead of focusing our efforts on trying to ‘secure’ something which is, without doubt, never going to be secure, I would argue that we need to start preparing for the future we’ve just had a glimpse of, and be ready to defend ourselves against botnets of ever-increasing size.

As the EU and other nation states become increasingly wary of where their data is physically stored, I can see a future where internet controls are implemented at a national level, at the “cyber border”, rather than being left to individual site owners. For example, traffic from the known-bad IP addresses of botnet members would be filtered at a nation-state firewall in the UK before reaching any of our sites.

For now though, the best way we can influence the war is not by coming up with clever technical solutions, but as consumers: by voting with our wallets and avoiding brands linked to insecure products. And if you’d like to start today, a list of the compromised devices involved in the Mirai botnet can be found here:

https://blog.sucuri.net/2016/09/iot-home-router-botnet-leveraged-in-large-ddos-attack.html


Release date, level of ideal, and comments:

Before CVE details are published (🥳)
  • Limited public information is available about the vulnerability.
  • Red teamers, security researchers, detection engineers and threat actors have to actively research the type of vulnerability and its location in the vulnerable software, and build an associated exploit.
  • Tenable release checks for 47.43% of the CVEs they cover in this window, and Greenbone release 32.96%.

Day of CVE publish (😊)
  • Vulnerability information is publicly accessible.
  • Red teamers, security researchers, detection engineers and threat actors now have access to some of the information they were previously having to hunt down themselves, speeding up potential exploit creation.
  • Tenable release checks for 17.12% of the CVEs they cover in this window, and Greenbone release 17.69%.

First week since CVE publish (😐)
  • Vulnerability information has been publicly available for up to 1 week.
  • The likelihood that exploitation is happening in the wild is steadily increasing.
  • Tenable release checks for 10.9% of the CVEs they cover in this window, and Greenbone release 20.69%.

Between 1 week and 1 month since CVE publish (🥺)
  • Vulnerability information has been publicly available for up to 1 month, and some very clever people have had time to craft an exploit.
  • We’re starting to lose some of the benefit of rapid, automated vulnerability detection.
  • Tenable release checks for 9.58% of the CVEs they cover in this window, and Greenbone release 12.43%.

After 1 month since CVE publish (😨)
  • Information has been publicly available for more than 31 days.
  • Any detection released a month after the details are publicly available is decreasing in value for me.
  • Tenable release checks for 14.97% of the CVEs they cover over a month after the CVE details have been published, and Greenbone release 16.23%.
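
For reference, here's a minimal sketch (my own illustration, not from either vendor) of how a check's release delay could be mapped onto the windows above, assuming the delay is measured as the check release date minus the CVE publication date, in days:

```python
def release_window(delay_days: int) -> str:
    """Map a check's release delay (check release date minus CVE publish
    date, in days) onto the windows used in the table above."""
    if delay_days < 0:
        return "Before CVE details are published"
    if delay_days == 0:
        return "Day of CVE publish"
    if delay_days <= 7:
        return "First week since CVE publish"
    if delay_days <= 31:
        return "Between 1 week and 1 month since CVE publish"
    return "After 1 month since CVE publish"
```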

With this information in mind, I wanted to check how long both Tenable and Greenbone take to release a detection for their scanners. The following section will focus on vulnerabilities which:

  • Have a CVSSv2 rating of 10
  • Are exploitable over the network
  • Require no user interaction

These are the ones where an attacker can point their exploit code at your vulnerable system and gain unauthorised access.
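
As a rough illustration of that filter, here's a minimal sketch assuming the NVD JSON 1.1 feed layout (the field names and example filename reflect that feed and would need adjusting for other data sources):

```python
import json

def is_critical_remote_no_interaction(cve_item: dict) -> bool:
    """True for CVEs with a CVSSv2 base score of 10 that are exploitable
    over the network and require no user interaction."""
    metrics = cve_item.get("impact", {}).get("baseMetricV2", {})
    cvss = metrics.get("cvssV2", {})
    return (
        cvss.get("baseScore") == 10.0
        and cvss.get("accessVector") == "NETWORK"
        and metrics.get("userInteractionRequired") is False
    )

# Example: count matching CVEs in one year's NVD feed
with open("nvdcve-1.1-2020.json") as f:
    items = json.load(f)["CVE_Items"]

critical = [i for i in items if is_critical_remote_no_interaction(i)]
print(len(critical), "critical, network-exploitable, no-interaction CVEs")
```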

We’ve seen previously that Tenable have remote checks for 643 critical vulnerabilities, and OpenVAS have remote checks for 450 critical vulnerabilities. Tenable release remote checks for critical vulnerabilities within 1 month of the details being made public 58.4% of the time, but Greenbone release their checks within 1 month 76.8% of the time. So, even though OpenVAS has fewer checks for those critical vulnerabilities, you are more likely to get them within 1 month of the details being made public. Let’s break that down further.
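
A quick back-of-the-envelope calculation using only the figures quoted above shows why both views matter:

```python
# Rough absolute counts implied by the percentages above
tenable_within_month = 643 * 0.584   # ≈ 376 critical CVEs
openvas_within_month = 450 * 0.768   # ≈ 346 critical CVEs
print(round(tenable_within_month), round(openvas_within_month))  # 376 346
```

So while OpenVAS gets a higher proportion of its checks out within a month, Tenable still appears to cover slightly more critical CVEs in absolute terms over that window.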

In Figure 10 we can see the absolute number of remote checks released on a given day after a CVE for a critical vulnerability has been published. What you can immediately see is that both Tenable and OpenVAS release the majority of their checks on or before the day the CVE details are made public; Tenable have released checks for 247 CVEs, and OpenVAS have released checks for 144 CVEs. On top of that, since 2010 Tenable have released remote checks for 147 critical CVEs, and OpenVAS for 79 critical CVEs, on the same day as the vulnerability details were published. The number of vulnerabilities then drops off across the first week and drops further after 1 week, as we would hope for in an efficient time-to-release scenario.

Figure 10: Absolute numbers of critical CVEs with a remote check release date from the date a CVE is published

While raw numbers are good, Tenable have a larger number of checks available, so it could be unfair to go on raw numbers alone. It’s potentially more important to understand the likelihood that OpenVAS or Tenable will release a check for a vulnerability on any given day after a CVE for a critical vulnerability is published. In Figure 11 we can see that Tenable release 61% of their checks on or before the date that a CVE is published, and OpenVAS release a shade under 50% of their checks on or before that day.

Figure 11: Percentage chance of delay for critical vulnerabilities
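
As a sketch of how those percentages could be derived, assuming each vendor's data is reduced to a list of release delays in days relative to CVE publication:

```python
from collections import Counter

def delay_distribution(delays_in_days):
    """Convert a vendor's list of release delays (days from CVE publication)
    into the percentage of its total checks released on each day, so vendors
    with different totals can be compared."""
    counts = Counter(delays_in_days)
    total = len(delays_in_days)
    return {day: 100 * n / total for day, n in sorted(counts.items())}

# e.g. share of checks released on or before publication day:
# sum(pct for day, pct in delay_distribution(delays).items() if day <= 0)
```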

So, since 2010 Tenable have more frequently released their checks on or before the day the CVE details were published for critical vulnerabilities. While Tenable is leading at this point, Greenbone’s community feed still gets a considerable percentage of their checks out on or before day 0.

I thought I’d go another step further and see if I could identify any trend in each organisation’s release delay: are they getting better year on year, or are their releases getting later? In Figure 12 I’ve taken the mean delay for critical vulnerabilities per year and plotted it. The mean as a metric is particularly influenced by outliers in a data set, so I expected some wackiness and limited it to checks released between 180 days before and 31 days after a CVE being published. These seem like reasonable limits, as anything greater than 6 months before the CVE details are released is potentially a quirk of the check details, and anything after a 1-month delay is less important for us.
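
A minimal sketch of that calculation, assuming each check is represented as a (CVE publication year, release delay in days) pair:

```python
from collections import defaultdict
from statistics import mean

def mean_delay_per_year(checks, lower=-180, upper=31):
    """Mean release delay per CVE publication year, ignoring checks released
    more than 180 days before or 31 days after the CVE was published."""
    by_year = defaultdict(list)
    for year, delay_days in checks:   # checks: iterable of (year, delay_days)
        if lower <= delay_days <= upper:
            by_year[year].append(delay_days)
    return {year: mean(delays) for year, delays in sorted(by_year.items())}
```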

What can we take away from Figure 12?

  • We can see that between 2011 and 2014 Greenbone’s release delay was better than Tenable’s, by between 5 and 10 days.
  • In 2015 things reverse, and for three years Tenable is considerably ahead of Greenbone, by a matter of weeks.
  • Then in 2019 things get much closer, and Greenbone seem to be releasing on average about a day earlier than Tenable.
  • For both, the trendline over the 11-year period is very close, with Tenable marginally beating Greenbone.
  • We don’t yet have any 2021 data for OpenVAS checks for critical, show-stopper CVEs.

Figure 12: Release delay year-on-year (lower is better)

With the larger number of checks, and a greater percentage of their remote checks for critical vulnerabilities released on or before day 0, Tenable could win this category. However, with the 2019 and 2020 delay times going to OpenVAS, and the trend lines being so close, I’m going to declare this one a tie.

The takeaway from this is that both vendors are getting their checks out the majority of the time either before the CVE details are published or on the day the details are published. This is overwhelmingly positive for both scanning solutions. Over time both also appear to be releasing remote checks for critical vulnerabilities more quickly.
