“The Norwegian Government hacked my startup!”
At least, that's how the conversation started, in a WhatsApp message early on a Friday evening in October, when a concerned startup founder contacted me to ask for help.
My initial reaction was that this couldn't be the case: it must be some kind of email hoax designed to scare her into handing over some bitcoins, and it wouldn't take me long to convince her it was nothing to worry about. But the more I listened, the clearer it became that something rather more serious was afoot.
It all began with an investigation into smartwatch security by the Norwegian Consumer Council (NCC), specifically looking at devices intended for children. They paid some security researchers to spend a little time looking for flaws in the most popular products on the market. When they duly found some, the NCC notified the vendors of the flaws, and instructed them that fixes would be required. Initially they told vendors they weren’t going to the media unless their requests for fixes went ignored, but no actual deadlines were given.
So far so good, right? Vendors get a free security test, and consumers get their privacy bolstered by the government. Win-win?
Well, not for long. Just 30 days later, things went from win-win to lose-lose. With no warning whatsoever to the vendors, the NCC decided to simply release the information they had to the media. Security weaknesses in kids' smartwatches make for pretty good headlines, so the story was scooped up by numerous news outlets including the Daily Mail and BBC News, leaving our unfortunate startup founder with a media ****storm to deal with, and immediately losing a number of key contracts with retailers as a result.
You might think that's a fair outcome: serves them right for selling an insecure product, right? But the difficulty is that there's no guidebook for business owners, there are no regulations as to exactly what tests should be performed on products, and these are entrepreneurs, not cyber security experts.
In this case, the founder already thought she had done the right thing. It wasn’t that she had been sloppy about security. She has children of her own, and cares deeply about the security of her customers, and so she had signed up for a crowdsourced ‘bug bounty’ platform, with the promise that security researchers would proactively find flaws for her. Unfortunately, she simply didn’t understand the difference between that and a rigorous and structured security audit.
Fortunately though, Colleen, founder of TechSixtyFour (the EU distributor for the Gator smartwatch), didn't make the same mistake twice. She realised the best way to deal with this situation was head-on, and engaged Intruder to perform a full security audit of the latest watch and its associated mobile application. Needless to say, we found a number of further issues that even the NCC had not, and got straight to work helping the manufacturer to fix them.
She also signed up to our continuous security monitoring service for the cloud servers that the watches communicate with, so that even after the security audit was over, we would continue to ensure no weaknesses were introduced. That’s a level of security that a vast number of companies in the world would struggle to match. In fact the PCI Data Security Standard only mandates quarterly vulnerability scans. And they’re responsible for making sure all our credit cards are safe. Well no wonder our credit cards keep turning up on the internet in droves!
You might think all this is actually a good outcome, but in the cyber security industry what the NCC did is called irresponsible disclosure, and it’s often more harmful than it is helpful. Here’s why:
- By going to the media highlighting active issues with children's smartwatches, the NCC are publicising these weaknesses for anyone to go and exploit before any fixes are available. If they really believed these issues were serious security concerns, then they would not have released the information until the vendors had had a good chance to remediate them. At best their approach is hypocritical; at worst, it is dangerous.
- All companies make mistakes in their software, and mistakes take time to fix. Even "Project Zero" (a team of some of the best security researchers in the world, created by Google to take sloppy software developers to task) decided to cast aside previous norms about waiting until the vendor has fixed issues before disclosing them to the public. But even they give vendors 90 days to fix their flaws, and even that has been deemed a controversially short timescale in the industry.
So why would the NCC do such an unreasonable thing? Well we can only really speculate, but it probably has a lot more to do with hunting headlines than it does protecting consumers.
And what about the retailers? The knee-jerk reaction to drop the supplier doesn't solve any problems long term. Clearly they must have no robust process in place to ensure the products they put on their shelves are in any way secure, and the fact that they drop contracts with suppliers only if the suppliers get caught out in the media says everything about the general state of security in these internet-enabled products (or, as they're now more commonly called, IoT devices).
Instead of attempting to assassinate individual businesses on a whack-a-mole basis, the Norwegian Consumer Council should be working with organisations like the IoT Security Foundation, and other government departments to enforce security seal of approval schemes, define standards for responsible retailers to follow, or provide consumer education on what security features to look for in products.
And instead of dropping contracts with suppliers after the fact, retailers should be working on programmes to ensure robust levels of testing have been performed before these products are allowed on shelves.
As for the Gator smartwatch, my view is that with the fixes they have currently put in place, your child is safer if they have this watch than without it. Realistically, your average nasty individual would not be going to the lengths that we did to (for example) jam the communication between a child’s smartwatch and their parent. That’s like saying a cheap bike lock increases the risk of bike theft because it can be cut with a simple cable-cutter. You’re still better off having it than not.
In fact, this whole situation reminds me of Theo Paphitis famously tearing apart a Trunki in one episode of Dragon’s Den. It’s not that anyone would actually do that, but the fact it is possible plays on people’s minds more than it should.
That’s not to say there weren’t some really serious concerns with the way this watch was originally implemented, it was simply way too easy to tear this particular Trunki apart.
What saddens me though is that the situation is now starting to resemble a witch-hunt, with the NCC recommending that the Gator product is removed from shelves, despite the herculean effort that Colleen and the Gator manufacturer have made to address the most serious issues in an incredibly short timescale. They should now be held up as an example to all other IoT manufacturers. Fixing anything in 30 days can be hard work, but the manufacturer put in a superhuman effort to fix some major architectural issues, not just apply simple patches. In fact, they closed down issues with such speed it would put Oracle and Microsoft to shame, and they are continuing to do more.
For now it's clear that there is still plenty of work to be done on the security of IoT devices. Gator won't be the last product to be caught out. But I don't think we'll get there by over-emphasising risks to provoke media reaction. Those are the kind of hyperbolised scare tactics that previously confined security professionals to office basements and kept us a million miles away from the board agenda. Times have changed: major hacks are in the headlines almost daily, and we don't even need these scare tactics to get attention anymore. The real question is whether we can be mature enough as an industry to make progress without harassing businesses with unrealistic deadlines, and without unnecessarily scaring the public into making ill-informed purchasing decisions.
Thanks to Patrick Craston and Daniel Andrew.
| Detection released | Implication | Tenable | Greenbone |
| --- | --- | --- | --- |
| Before the CVE details are published | Red teamers, security researchers, detection engineers and threat actors have to actively research the type of vulnerability, its location in the vulnerable software, and build an associated exploit. | 47.43% | 32.96% |
| On the day the CVE details are published | Red teamers, security researchers, detection engineers and threat actors now have access to some of the information they were previously having to hunt down themselves, speeding up potential exploit creation. | 17.12% | 17.69% |
| Within a week of the CVE details being published | The likelihood that exploitation in the wild is happening is steadily increasing. | 10.90% | 20.69% |
| Within a month of the CVE details being published | We're starting to lose some of the benefit of rapid, automated vulnerability detection. | 9.58% | 12.43% |
| Over a month after the CVE details have been published | Any detection released a month after the details are publicly available is decreasing in value for me. | 14.97% | 16.23% |
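As an illustration of how percentages like these can be derived, here's a minimal Python sketch that bins check-release delays (days relative to CVE publication) into the same windows. The `delays` list is entirely hypothetical example data, not the real dataset behind the figures above.

```python
from collections import Counter

# Hypothetical delays (in days) between CVE publication and check
# release; negative values mean the check shipped before the CVE
# details were public.
delays = [-12, -3, 0, 0, 2, 5, 14, 30, 45, 90]

def window(delay_days):
    """Assign a delay to one of the release windows used above."""
    if delay_days < 0:
        return "before CVE publication"
    if delay_days == 0:
        return "on publication day"
    if delay_days <= 7:
        return "within 1 week"
    if delay_days <= 31:
        return "within 1 month"
    return "over 1 month"

counts = Counter(window(d) for d in delays)
for label, n in counts.items():
    print(f"{label:>24}: {100 * n / len(delays):.2f}%")
```

Because every delay falls into exactly one window, the five percentages always sum to 100%, which is also true of the published Tenable and Greenbone figures.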
With this information in mind, I wanted to check the delay between CVE details being published and Tenable and Greenbone each releasing a detection for their scanners. The following section focuses on vulnerabilities which:
- Have a CVSSv2 rating of 10
- Are exploitable over the network
- Require no user interaction
These are the ones where an attacker can point their exploit code at your vulnerable system and gain unauthorised access.
We've seen previously that Tenable have remote checks for 643 critical vulnerabilities, and OpenVAS have remote checks for 450. Tenable release remote checks for critical vulnerabilities within 1 month of the details being made public 58.4% of the time, while Greenbone manage it 76.8% of the time. So, even though OpenVAS has fewer checks for those critical vulnerabilities, you are more likely to get them within 1 month of the details being made public. Let's break that down further.
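That within-one-month comparison can be sketched as follows: filter to the critical, network-exploitable, no-user-interaction vulnerabilities defined above, then compute the fraction of their checks released within 31 days. The records here are hypothetical illustrations, not either vendor's actual data.

```python
# Hypothetical records: (cvss_v2, attack_vector, user_interaction,
# delay in days between CVE publication and check release).
checks = [
    (10.0, "network", False, -3),
    (10.0, "network", False, 20),
    (10.0, "network", False, 45),
    (7.5,  "network", False, 2),   # excluded: not CVSSv2 10
    (10.0, "local",   False, 0),   # excluded: not exploitable over the network
    (10.0, "network", True,  1),   # excluded: requires user interaction
]

# Keep only the "critical" vulnerabilities as defined in the text.
critical_delays = [delay for score, vector, interaction, delay in checks
                   if score == 10.0 and vector == "network" and not interaction]

within_month = sum(1 for d in critical_delays if d <= 31) / len(critical_delays)
print(f"{100 * within_month:.1f}% of critical remote checks released within 1 month")
```

With this toy data, two of the three qualifying checks land within a month, giving 66.7%.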
In Figure 10 we can see the absolute number of remote checks released on a given day after a CVE for a critical vulnerability has been published. What you can immediately see is that both Tenable and OpenVAS release the majority of their checks on or before the day the CVE details are made public: before publication, Tenable have released checks for 247 CVEs and OpenVAS for 144. Then, since 2010, Tenable have released remote checks for 147 critical CVEs, and OpenVAS for 79, on the same day as the vulnerability details were published. The number of vulnerabilities then drops off across the first week and drops further after 1 week, as we would hope for in an efficient time-to-release scenario.
While raw numbers are good, Tenable have a larger number of checks available, so it could be unfair to go on raw numbers alone. It's potentially more important to understand the likelihood that OpenVAS or Tenable will release a check for a vulnerability on any given day after a CVE for a critical vulnerability is published. In Figure 11 we can see that Tenable release 61% of their checks on or before the date that a CVE is published, and OpenVAS release a shade under 50% of theirs.
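The cumulative view behind Figure 11 can be reproduced with a small helper: for any day offset, take the fraction of checks released on or before that day. Evaluating it at day 0 gives the "on or before publication" share quoted above. The delay data here is hypothetical.

```python
def cumulative_share(delays, day):
    """Fraction of checks released on or before `day` days after CVE publication."""
    return sum(1 for d in delays if d <= day) / len(delays)

# Hypothetical delays (days relative to CVE publication), as before.
delays = [-12, -3, 0, 0, 2, 5, 14, 30, 45, 90]

for day in (0, 7, 31):
    print(f"on or before day {day}: {100 * cumulative_share(delays, day):.0f}%")
```

Because it is cumulative, this share can only grow as the day offset increases, which is what makes the day-0 figure a useful single-number comparison between the two vendors.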
So, since 2010 Tenable has more frequently released their checks before or on the same day as the CVE details have been published for critical vulnerabilities. While Tenable is leading at this point, Greenbone’s community feed still gets a considerable percentage of their checks out on or before day 0.
I thought I'd go a step further and see if I could identify any trend in each organisation's release delay: are they getting better year-on-year, or are their releases getting later? In Figure 12 I've taken the mean delay for critical vulnerabilities per year and plotted it. The mean as a metric is particularly influenced by outliers in a data set, so I expected some wackiness and limited it to checks released between 180 days before and 31 days after a CVE being published. These seem like reasonable limits: anything more than 6 months before the CVE details were released is potentially a quirk of the check details, and anything after a 1-month delay is less important for us.
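That truncated per-year mean could be computed along these lines; the `(year, delay)` records are made-up examples, and the `[-180, 31]` bounds are the limits described above.

```python
from collections import defaultdict
from statistics import mean

def mean_delay_by_year(records, lo=-180, hi=31):
    """Mean release delay per year, ignoring outliers outside [lo, hi] days."""
    by_year = defaultdict(list)
    for year, delay in records:
        if lo <= delay <= hi:        # drop outliers before averaging
            by_year[year].append(delay)
    return {year: mean(ds) for year, ds in sorted(by_year.items())}

records = [(2019, -10), (2019, 4), (2019, 400),  # 400 is dropped as an outlier
           (2020, 0), (2020, -6)]
print(mean_delay_by_year(records))
```

Truncating before averaging is what keeps a single check released a year late (or dated long before the CVE existed) from swamping that year's figure.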
What can we take away from Figure 12?
- We can see that between 2011 and 2014 Greenbone’s release delay was better than that of Tenable, by between 5 and 10 days.
- In 2015 things reverse and for 3 years Tenable is considerably ahead of Greenbone by a matter of weeks.
- But, then in 2019 things get much closer and Greenbone seem to be releasing on average about a day earlier than Tenable.
- For both the trendline over an 11-year period is very close, with Tenable marginally beating Greenbone.
- We don't yet have any data for 2021 for OpenVAS checks for critical show-stopper CVEs.
With the larger number of checks, and a greater percentage of their remote checks for critical vulnerabilities released on or before publication, Tenable could win this category. However, with the 2019 and 2020 delay times going to OpenVAS, and the trend lines being so close, I am going to declare this one a tie.
The takeaway from this is that both vendors are getting their checks out the majority of the time either before the CVE details are published or on the day the details are published. This is overwhelmingly positive for both scanning solutions. Over time both also appear to be releasing remote checks for critical vulnerabilities more quickly.