Does the OWASP Top 10 still matter?
What is ‘OWASP’?
OWASP is the Open Web Application Security Project, an international non-profit organisation dedicated to improving web application security.
It operates on the core principle that all of its materials are freely available and easily accessible online, so that anyone anywhere can improve their own web app security. It offers a number of tools, videos, and forums to help do this – but its best-known project is the OWASP Top 10.
The Top 10
The OWASP Top 10 is a report outlining the most critical risks to web application security. Put together by a team of security experts from all over the world, the list is designed to raise awareness of the current security landscape and offer developers and security professionals invaluable insights into the latest and most widespread security risks.
It also includes a checklist and remediation advice that experts can fold into their own security practices and operations to minimise and/or mitigate the risk to their own apps.
Why you should use it
OWASP updates its Top 10 every two or three years as the web application market evolves, and it is the gold standard for some of the world’s largest organisations.
As such, you could be seen as falling short of compliance and security if you don’t address the vulnerabilities detailed in the Top 10. Conversely, integrating the list into your operations and software development shows a commitment to industry best practice.
And why you shouldn’t…
Some experts believe the OWASP Top 10 is flawed because the list is too limited and lacks context. By focusing only on the top 10 risks, it neglects the long tail. What's more, the OWASP community often argues about the ranking, and whether the risks ranked 11th or 12th deserve a place on the list instead.
There is merit to these arguments, but the OWASP Top 10 is still the leading forum for addressing security-aware coding and testing. It's easy to understand, it helps users prioritise risk, and it's actionable. And for the most part, it focuses on the most critical threats, rather than specific vulnerabilities.
So what’s the answer?
Web application vulnerabilities are bad for businesses, and bad for consumers. Big breaches can result in huge quantities of stolen data. These breaches aren't always caused by organisations failing to address the OWASP Top 10, but the risks it lists are among the biggest culprits. And there's no point worrying about obscure zero-day flaws in your firewall if you're not going to block injection, session capture, and XSS.
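Blocking injection is mostly a matter of never interpolating untrusted input into queries. As a minimal sketch, here is the parameterised-query pattern using Python's standard-library sqlite3 module (the table, column, and function names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user(conn, name):
    # Vulnerable pattern: f"SELECT ... WHERE name = '{name}'" would let
    # input like "' OR '1'='1" match every row.
    # A parameterised query passes the value out-of-band, so the driver
    # always treats it as data, never as SQL.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user(conn, "alice"))        # [('alice', 'admin')]
print(find_user(conn, "' OR '1'='1"))  # [] - the injection attempt fails
```

The same placeholder-based approach applies to any database driver or ORM; the key design choice is that query structure and user data travel separately.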
So what should you do? Firstly, train everyone in better security hygiene. Do dynamic application security testing, including penetration testing. Ensure admins adequately protect applications. And use an online vulnerability scanner.
Go beyond OWASP
Like most organisations, you may already be using a number of different cyber security tools to protect your organisation against the threats listed by OWASP. While this is a good security stance, vulnerability management can be complex and time consuming.
It doesn’t have to be. Intruder makes it easy to develop secure apps by integrating with your CI/CD pipeline to automate discovery of your cyber weaknesses.
You can perform security checks across your perimeter, including application-layer vulnerability checks such as the OWASP Top 10, XSS, SQL injection, the CWE/SANS Top 25, remote code execution, OS command injection, and more.
Read the latest report for a more in-depth look at the OWASP Top 10. Or if you're ready to discover how Intruder can find the cyber security weaknesses in your business, sign up for a free trial today.
- Raw CVE Coverage
- Risk Rating Coverage
- Remote Check Types
- Check Publication Lead Time
- Local/Authenticated vs Remote Check Prioritisation
- Software Vendor & Package Coverage
- Headline Vulnerabilities of 2021 Coverage
- Analysis Decisions
Red teamers, security researchers, detection engineers and threat actors have to actively research the type of vulnerability and its location in the vulnerable software, and build an associated exploit.
Tenable release checks for 47.43% of the CVEs they cover in this window, and Greenbone release 32.96%.
Red teamers, security researchers, detection engineers and threat actors now have access to some of the information they previously had to hunt down themselves, speeding up potential exploit creation.
Tenable release checks for 17.12% of the CVEs they cover in this window, and Greenbone release 17.69%.
The likelihood of exploitation in the wild is steadily increasing.
Tenable release checks for 10.9% of the CVEs they cover in this window, and Greenbone release 20.69%.
We’re starting to lose some of the benefit of rapid, automated vulnerability detection.
Tenable release checks for 9.58% of the CVEs they cover in this window, and Greenbone release 12.43%.
Any detection released a month after the details are publicly available is of diminishing value to me.
Tenable release checks for 14.97% of the CVEs they cover over a month after the CVE details have been published, and Greenbone release 16.23%.
With this information in mind, I wanted to check the delay for both Tenable and Greenbone in releasing a detection for their scanners. The following section will focus on vulnerabilities which:
- Have CVSSv2 rating of 10
- Are exploitable over the network
- Require no user interaction
These are the ones where an attacker can point their exploit code at your vulnerable system and gain unauthorised access.
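The three criteria above can be read straight out of a CVE's CVSSv2 data. A sketch of that filter, assuming CVE records carry a CVSSv2 base score and vector string (the field names here are illustrative, not any particular feed's schema; in CVSSv2 there is no user-interaction metric, so Au:N, no authentication required, is the closest proxy):

```python
def is_showstopper(cve):
    """True for CVSSv2 10.0 issues exploitable over the network
    with no authentication required."""
    vector = cve["cvss2_vector"]
    return (
        cve["cvss2_score"] == 10.0
        and "AV:N" in vector   # network attack vector
        and "Au:N" in vector   # no authentication needed
    )

# MS08-067 is a classic example of a 10.0-rated, remotely exploitable flaw.
sample = {"id": "CVE-2008-4250", "cvss2_score": 10.0,
          "cvss2_vector": "AV:N/AC:L/Au:N/C:C/I:C/A:C"}
print(is_showstopper(sample))  # True
```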
We’ve seen previously that Tenable have remote checks for 643 critical vulnerabilities, and OpenVAS have remote checks for 450 critical vulnerabilities. Tenable release remote checks for critical vulnerabilities within 1 month of the details being made public 58.4% of the time, while Greenbone release their checks within 1 month 76.8% of the time. So, even though OpenVAS has fewer checks for those critical vulnerabilities, you are more likely to get them within 1 month of the details being made public. Let’s break that down further.
In Figure 10 we can see the absolute number of remote checks released on a given day after a CVE for a critical vulnerability has been published. What you can immediately see is that both Tenable and OpenVAS release the majority of their checks on or before the CVE details are made public; Tenable have released checks for 247 CVEs, and OpenVAS have released checks for 144 CVEs. Since 2010, Tenable have released remote checks for 147 critical CVEs, and OpenVAS for 79, on the same day as the vulnerability details were published. The number of vulnerabilities then drops off across the first week and drops further after 1 week, as we would hope for in an efficient time-to-release scenario.
While raw numbers are good, Tenable have a larger number of checks available, so it could be unfair to go on raw numbers alone. It’s potentially more important to understand the likelihood that OpenVAS or Tenable will release a check for a vulnerability on any given day after a CVE for a critical vulnerability is released. In Figure 11 we can see that Tenable release 61% of their checks on or before the date that a CVE is published, and OpenVAS release a shade under 50% of their checks on or before the day that a CVE is published.
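The per-day likelihood in Figure 11 is just each vendor's distribution of release delays normalised by its total check count. A minimal sketch with made-up delay data (negative values mean the check shipped before the CVE was published):

```python
from collections import Counter

# Hypothetical delays, in days, between CVE publication and check release.
delays = [-3, -1, 0, 0, 0, 1, 2, 5, 14, 40]

counts = Counter(delays)
total = len(delays)

# Share of checks released on each day, and cumulatively on-or-before it.
cumulative = 0.0
for day in sorted(counts):
    share = counts[day] / total
    cumulative += share
    print(f"day {day:+d}: {share:.0%} (cumulative {cumulative:.0%})")
```

Normalising this way is what makes the two vendors comparable despite their different totals: the "on or before day 0" figure is simply the cumulative share at day 0.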
So, since 2010 Tenable has more frequently released their checks before or on the same day as the CVE details have been published for critical vulnerabilities. While Tenable is leading at this point, Greenbone’s community feed still gets a considerable percentage of their checks out on or before day 0.
I thought I’d go a step further and see whether I could identify any trend in each organisation’s release delay: are they getting better year on year, or are their releases getting later? In Figure 12 I’ve taken the mean delay for critical vulnerabilities per year and plotted it. The mean is particularly influenced by outliers in a data set, so I expected some wackiness and limited the data to checks released no more than 180 days before and 31 days after a CVE being published. These seem to me like reasonable limits, as anything more than 6 months prior to the CVE details being released is potentially a quirk of the check details, and anything more than a month late is less important for us.
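The bounded per-year mean described above can be sketched as follows, with hypothetical (year, delay) records; anything outside the -180 to +31 day window is discarded before averaging:

```python
from statistics import mean

# Hypothetical (year, delay_in_days) pairs; negative = check released
# before the CVE details were published.
records = [(2019, -2), (2019, 0), (2019, 400),
           (2020, -700), (2020, 5), (2020, 10)]

def bounded_mean_by_year(records, lo=-180, hi=31):
    # Clip out extreme outliers (quirky very-early checks, very late
    # releases) that would otherwise dominate the mean.
    by_year = {}
    for year, delay in records:
        if lo <= delay <= hi:
            by_year.setdefault(year, []).append(delay)
    return {year: mean(delays) for year, delays in sorted(by_year.items())}

print(bounded_mean_by_year(records))
```

Here the 400-day and -700-day records are dropped, so 2019 averages the remaining two delays and 2020 the remaining two, rather than being skewed by the outliers.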
What can we take away from Figure 12?
- We can see that between 2011 and 2014 Greenbone’s release delay was better than that of Tenable, by between 5 and 10 days.
- In 2015 things reverse and for 3 years Tenable is considerably ahead of Greenbone by a matter of weeks.
- But then in 2019 things get much closer, and Greenbone seem to release on average about a day earlier than Tenable.
- For both vendors the trendline over an 11-year period is very close, with Tenable marginally beating Greenbone.
- We have yet to have any data for 2021 for OpenVAS checks for critical show-stopper CVEs.
With the larger number of checks, and a greater percentage of their remote checks for critical vulnerabilities released on or before day 0, Tenable could win this category. However, with Greenbone ahead on delay time in 2019 and 2020, and the trend lines so close, I am going to declare this one a tie.
The takeaway from this is that both vendors are getting their checks out the majority of the time either before the CVE details are published or on the day the details are published. This is overwhelmingly positive for both scanning solutions. Over time both also appear to be releasing remote checks for critical vulnerabilities more quickly.