What is attack surface management?
Attack surfaces are growing faster than security teams can keep up – to stay ahead, you need to know what’s exposed and where attackers are most likely to strike. With cloud migration dramatically increasing the number of internal and external targets, prioritizing threats and managing your attack surface from an attacker’s perspective has never been more important. In this guide, we’ll look at why your attack surface is growing, and how to monitor and manage it properly with tools like Intruder. Let’s dive in.
What is your attack surface?
First, it’s important to understand what we mean when we talk about an attack surface. An attack surface is the sum of your digital assets that are ‘exposed’ – whether the digital assets are secure or vulnerable, known or unknown, in active use or not. This external attack surface changes continuously over time, and includes digital assets that are on-premises, in the cloud, in subsidiary networks, and in third-party environments. In short, it’s anything that a hacker can attack.
What is attack surface management?
Attack surface management (ASM) is the process of discovering these assets and services and then reducing or minimizing their exposure to prevent hackers exploiting them. Exposure can mean two things: current vulnerabilities – such as missing patches or misconfigurations that reduce the security of a service or asset – and exposure to future vulnerabilities.
Take the example of an admin interface like cPanel or a firewall administration page – these may be secure against all known attacks today, but a vulnerability could be discovered in the software tomorrow, at which point it immediately becomes a significant risk. An asset doesn’t need to be vulnerable today to be vulnerable tomorrow. If you reduce your attack surface, regardless of vulnerabilities, you become harder to attack tomorrow.
So, a significant part of attack surface management is reducing exposure to possible future vulnerabilities by removing unnecessary services and assets from the internet. This is what led to the Deloitte breach (more on that below), and it’s what distinguishes attack surface management from traditional vulnerability management. But to do this, first you need to know what’s there.
Asset management vs vulnerability management
Often considered the poor relation of vulnerability management, asset management has traditionally been a labour-intensive, time-consuming task for IT teams. Even when they had control of the hardware assets within their organization and network perimeter, it was still fraught with problems. If just one asset was missed from the asset inventory, it could evade the entire vulnerability management process and, depending on the sensitivity of the asset, could have far-reaching implications for the business.
Today, it’s a whole lot more complicated. Businesses are migrating to SaaS and moving their systems and services to the cloud, internal teams are downloading their own workflow, project management and collaboration tools, and individual users expect to customize their environments. When companies expand through mergers and acquisitions, they often take over systems they’re not even aware of – take the example of telco TalkTalk, which was breached in 2015 when up to 4 million unencrypted records were stolen from a system it didn’t even know existed.
Moving security from IT to DevOps
Today’s cloud platforms enable development teams to move and scale quickly when needed. But this puts a lot of the responsibility for security into the hands of the development teams – shifting away from traditional, centralized IT teams with robust, trusted change control processes.
This means cyber security teams struggle to see what is going on or discover where their assets are. Similarly, it’s increasingly hard for large enterprises or businesses with dispersed teams – often located around the world – to keep track of where all their systems are.
As a result, organizations increasingly understand that their vulnerability management processes should be baked into a more holistic ‘attack surface management’ process because you must first know what you have exposed to the internet before you think about what vulnerabilities you have, and what fixes to prioritize.
If anything, attack surface management is the recognition that asset management and vulnerability management must go hand in hand. For example, an Intruder customer once told us he’d found a bug in Intruder’s cloud connectors: they were showing an IP address he didn’t think he had. When we investigated, our connector was working fine – the IP address was in an AWS region he didn’t know was in use, hidden in the AWS console. This shows how attack surface management is as much about visibility as vulnerability management.
The 4 steps of attack surface management
By Daniel Thatcher, Intruder Security Research Engineer
1. Discover your assets: start by finding all your assets. Perfect is the enemy of good – do 10% of the perfect solution and build on it.
2. Streamline and automate: ensure that everyone in the company who creates infrastructure can do it in a secure way. This should include reducing barriers so that the secure way becomes the path of least resistance – for example, by automating visibility of new systems rather than relying on users to tell you about them.
3. Ensure visibility: somewhere in your company there is some record of what exists. Make sure that you’re aware of all the cloud accounts in use, and that you’re pulling in data from them all. Try to keep a record of which SaaS applications are in use, and who has access to what.
4. Continuously monitor: members of your organization will be spinning up new infrastructure and services, and registering for new applications. On top of this, the threat landscape is always changing and new vulnerabilities are always being found. Continuous monitoring is a necessity to keep on top of it all.
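The discovery and monitoring steps above boil down to comparing snapshots of your inventory over time: anything that appears or disappears between scans deserves a look. A minimal sketch of that idea in Python (this is not Intruder’s implementation, and the hostnames are hypothetical):

```python
def diff_inventory(previous: set[str], current: set[str]) -> dict[str, set[str]]:
    """Compare two asset snapshots and report what changed between them."""
    return {
        "new": current - previous,       # assets that appeared since the last scan
        "removed": previous - current,   # assets that disappeared
        "unchanged": previous & current,
    }

# Yesterday's known assets vs today's discovery run (illustrative values)
yesterday = {"www.example.com", "mail.example.com", "vpn.example.com"}
today = {"www.example.com", "mail.example.com", "staging.example.com"}

changes = diff_inventory(yesterday, today)
print(sorted(changes["new"]))      # ['staging.example.com']
print(sorted(changes["removed"]))  # ['vpn.example.com']
```

In practice the "current" snapshot would come from automated discovery (cloud APIs, DNS, scanning) rather than a hand-maintained set, but the diff logic is the same.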
Where does the attack surface stop?
If you use a SaaS tool like HubSpot, it will hold a lot of your sensitive customer data, but you wouldn’t expect to scan it for vulnerabilities. You would, however, expect HubSpot to have many cyber security safeguards in place – and you would check it has them. This is where a third-party risk platform comes in.
Where the lines become blurred is with external agencies. Maybe you use a design agency to create a website, but you don’t have a long-term management contract in place. If that website stays live until a vulnerability is discovered and exploited, the breach is still yours to deal with. A famous example is the Deloitte breach: on September 25, 2017, Deloitte revealed that it had detected a breach of the firm’s global email server – via a poorly secured admin account – six months earlier.
In these instances, third-party and supplier risk management software (and insurance) can help manage vendor risk and protect businesses from issues such as data breaches or noncompliance. These tools assess, monitor, and mitigate the risks that may have a negative impact on the relationship between a company and its suppliers.
Features of attack surface management tools
Various tools on the market are good for asset discovery, finding new domains which look like yours and spotting websites with similar content to your own. Your team can then check if this is a company asset or not, choose whether it’s included in your vulnerability management processes, and how it is secured. But this requires an internal resource because the tool can’t do this for you.
Similarly, some tools focus only on the external attack surface. But since a common attack vector is through employee workstations, we believe attack surface management should include internal systems too. Here are our 3 essential features that every attack surface monitoring tool should provide:
1. Asset discovery
You can’t manage an asset if you don’t know it exists. As we’ve seen, most organizations have a variety of “unknown unknowns,” such as assets housed on partner or third-party sites, workloads running in public cloud environments, IoT devices, abandoned IP addresses and credentials, and more. Intruder’s CloudBot runs hourly checks for new IP addresses or hostnames in connected AWS, Google Cloud or Azure accounts.
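Conceptually, this kind of discovery means merging asset records from every source you have – cloud APIs, DNS records, manual inventories – and flagging anything your tracked inventory doesn’t know about. A minimal sketch in Python (the IPs and source names are illustrative, and this is not CloudBot’s actual code):

```python
def merge_sources(*sources: set[str]) -> set[str]:
    """Union of asset records pulled from multiple discovery sources."""
    merged: set[str] = set()
    for source in sources:
        merged |= source
    return merged

# Illustrative discovery sources (all values hypothetical)
aws_ips = {"198.51.100.7", "198.51.100.12"}
dns_records = {"198.51.100.7", "203.0.113.9"}
tracked_inventory = {"198.51.100.7"}

discovered = merge_sources(aws_ips, dns_records, tracked_inventory)
# "Unknown unknowns" relative to what the team thinks it owns
unknown = discovered - tracked_inventory
print(sorted(unknown))  # ['198.51.100.12', '203.0.113.9']
```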
2. Business context
Not all attack vectors are created equal and the ‘context’ – what's exposed to the internet – is a vital part of attack surface management. Legacy tools don’t provide this context; they treat all attack surfaces (external, internal office, internal datacentre) the same, and so it's hard to prioritize vulnerabilities.
Attack surface management tools identify the gaps in your internal and external security controls to reveal the weaknesses in your security that need to be addressed and remediated first.
Intruder takes this a step further and provides insight into any given asset and the business unit the application belongs to. For example, knowing whether a compromised workload is a part of a critical application that manages bank-to-bank SWIFT transactions will help you formulate and execute your remediation plan.
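To illustrate the idea of context-weighted prioritization – this is not Intruder’s actual scoring algorithm, and the weights below are made up for the example – a simple scoring function might combine raw severity with exposure and business criticality:

```python
# Hypothetical weights: internet-facing assets and business-critical
# applications amplify the raw severity score.
EXPOSURE_WEIGHT = {"external": 1.0, "internal_office": 0.6, "internal_datacentre": 0.4}
CRITICALITY_WEIGHT = {"critical": 1.0, "standard": 0.7, "low": 0.4}

def priority_score(cvss: float, exposure: str, criticality: str) -> float:
    """Weight a raw CVSS score by where the asset sits and how much the business depends on it."""
    return round(cvss * EXPOSURE_WEIGHT[exposure] * CRITICALITY_WEIGHT[criticality], 2)

# The same vulnerability ranks very differently depending on context
print(priority_score(7.5, "external", "critical"))        # 7.5
print(priority_score(7.5, "internal_datacentre", "low"))  # 1.2
```

The point is that a legacy tool reporting both findings as "7.5" forces an analyst to supply this context manually, whereas a context-aware tool can rank them for you.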
3. Proactive and reactive scans
You can’t just test your attack surface once. Every day it continues to grow as you add new devices, workloads and services. As it grows the security risk grows too. Not just the risk of new vulnerabilities, but also misconfigurations, data exposures or other security gaps. It’s important to test for all possible attack vectors, and it’s important to do it continuously to prevent your understanding from becoming outdated.
Even better than continuous scanning though is a platform that can scan proactively or reactively depending on the circumstances. For example, reacting to a new cloud service being brought online by launching a scan, or proactively scanning all assets as soon as new vulnerability checks become available.
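A sketch of this proactive/reactive pattern, with hypothetical event types: a "new asset" event triggers a reactive scan of just that asset, while a "new check" event triggers a proactive scan of the whole inventory.

```python
def plan_scans(event: dict, inventory: list[str]) -> list[str]:
    """Decide which assets to scan in response to an event (illustrative logic only)."""
    if event["type"] == "new_asset":
        # Reactive: a new cloud service came online, so scan just that asset
        return [event["asset"]]
    if event["type"] == "new_check":
        # Proactive: a new vulnerability check is available, so re-scan everything
        return list(inventory)
    return []

inventory = ["web-01", "mail-01", "vpn-01"]
print(plan_scans({"type": "new_asset", "asset": "api-01"}, inventory))
# ['api-01']
print(plan_scans({"type": "new_check", "check": "CVE-2021-44228"}, inventory))
# ['web-01', 'mail-01', 'vpn-01']
```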
Reducing your attack surface with Intruder
Attack surface monitoring tools like Intruder do all this and more. We make sure that everything you have facing the internet is supposed to be there – by making it easily searchable and explorable. Our Network View feature shows exactly what ports and services are available, including screenshots of those with websites or apps running on them.
Most automated tools are great at spitting out data for analysts to look at, but not at reducing the ‘noise’. Intruder prioritizes issues and vulnerabilities based on context, or whether they should be on the internet at all. Combined with Intruder’s continuous monitoring and emerging threat scans, this makes it much easier and quicker to find and fix new vulnerabilities before they can be exploited.
Try Intruder for yourself
With our attack surface monitoring capabilities, Intruder is solving one of the most fundamental problems in cybersecurity: the need to understand how attackers see your organization, where they are likely to break in, and how you can identify, prioritize and eliminate risk. Ready to get started with your 14-day trial? Or get in touch for more information.
- Raw CVE Coverage
- Risk Rating Coverage
- Remote Check Types
- Check Publication Lead Time
- Local/Authenticated vs Remote Check Prioritisation
- Software Vendor & Package Coverage
- Headline Vulnerabilities of 2021 Coverage
- Analysis Decisions
Red teamers, security researchers, detection engineers and threat actors have to actively research the type of vulnerability and its location in the vulnerable software, and then build an associated exploit.
Tenable release checks for 47.43% of the CVEs they cover in this window, and Greenbone release 32.96%.
Red teamers, security researchers, detection engineers and threat actors now have access to some of the information they previously had to hunt down themselves, speeding up potential exploit creation.
Tenable release checks for 17.12% of the CVEs they cover in this window, and Greenbone release 17.69%.
The likelihood of exploitation in the wild is steadily increasing.
Tenable release checks for 10.9% of the CVEs they cover in this window, and Greenbone release 20.69%.
We’re starting to lose some of the benefit of rapid, automated vulnerability detection.
Tenable release checks for 9.58% of the CVEs they cover in this window, and Greenbone release 12.43%.
Any detection released a month after the details are publicly available is decreasing in value for me.
Tenable release checks for 14.97% of the CVEs they cover over a month after the CVE details have been published, and Greenbone release 16.23%.
With this information in mind, I wanted to check the delay for both Tenable and Greenbone in releasing a detection for their scanners. The following section will focus on vulnerabilities which:
- Have a CVSSv2 rating of 10
- Are exploitable over the network
- Require no user interaction
These are the ones where an attacker can point their exploit code at your vulnerable system and gain unauthorised access.
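As a sketch, the filter described above can be expressed directly over CVE records. The record fields here are assumptions for illustration, not a real data feed’s schema:

```python
def is_showstopper(cve: dict) -> bool:
    """Critical, network-exploitable, no user interaction required (the criteria from the text)."""
    return (
        cve["cvss_v2"] == 10.0
        and cve["access_vector"] == "network"
        and not cve["user_interaction"]
    )

# Illustrative records (IDs and values are made up)
cves = [
    {"id": "CVE-A", "cvss_v2": 10.0, "access_vector": "network", "user_interaction": False},
    {"id": "CVE-B", "cvss_v2": 10.0, "access_vector": "local", "user_interaction": False},
    {"id": "CVE-C", "cvss_v2": 7.5, "access_vector": "network", "user_interaction": False},
]
print([c["id"] for c in cves if is_showstopper(c)])  # ['CVE-A']
```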
We’ve seen previously that Tenable have remote checks for 643 critical vulnerabilities, and OpenVAS have remote checks for 450 critical vulnerabilities. Tenable release remote checks for critical vulnerabilities within 1 month of the details being made public 58.4% of the time, while Greenbone release their checks within 1 month 76.8% of the time. So, even though OpenVAS has fewer checks for those critical vulnerabilities, you are more likely to get them within 1 month of the details being made public. Let’s break that down further.
In Figure 10 we can see the absolute number of remote checks released on a given day after a CVE for a critical vulnerability has been published. What you can immediately see is that both Tenable and OpenVAS release the majority of their checks on or before the day the CVE details are made public: Tenable have released checks for 247 CVEs, and OpenVAS for 144 CVEs, ahead of publication. Since 2010, Tenable have released remote checks for a further 147 critical CVEs, and OpenVAS for 79, on the same day as the vulnerability details were published. The number of checks then drops off across the first week, and further after 1 week, as we would hope for in an efficient time-to-release scenario.
While raw numbers are good, Tenable have a larger number of checks available, so it could be unfair to go on raw numbers alone. It’s potentially more important to understand the likelihood that OpenVAS or Tenable will release a check for a vulnerability on any given day after a CVE for a critical vulnerability is released. In Figure 11 we can see that Tenable release 61% of their checks on or before the date that a CVE is published, and OpenVAS release a shade under 50% of theirs on or before that day.
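These on-or-before-day-X percentages are straightforward to compute from a list of release delays (days between CVE publication and check release, with negative values meaning the check shipped before publication). A sketch with synthetic data, not the real feeds:

```python
def pct_on_or_before(delays: list[int], day: int) -> float:
    """Percentage of checks released on or before the given day relative to CVE publication."""
    hits = sum(1 for d in delays if d <= day)
    return round(100 * hits / len(delays), 1)

# Synthetic delays in days (negative = released before the CVE was published)
delays = [-30, -2, 0, 0, 1, 3, 6, 10, 25, 40]
print(pct_on_or_before(delays, 0))   # 40.0 -> on or before day 0
print(pct_on_or_before(delays, 7))   # 70.0 -> within the first week
print(pct_on_or_before(delays, 31))  # 90.0 -> within a month
```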
So, since 2010 Tenable has more frequently released their checks before or on the same day as the CVE details have been published for critical vulnerabilities. While Tenable is leading at this point, Greenbone’s community feed still gets a considerable percentage of their checks out on or before day 0.
I thought I’d go another step further and try to see if I could identify any trend in each organisation’s release delay – are they getting better year-on-year, or are their releases getting later? In Figure 12 I’ve taken the mean delay for critical vulnerabilities per year and plotted them. The mean as a metric is particularly influenced by outliers in a data set, so I expected some wackiness and limited it to checks released between 180 days before and 31 days after a CVE being published. These seem like reasonable limits: anything more than 6 months prior to the CVE details being released is potentially a quirk of the check details, and anything after a 1-month delay is less important for us.
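The trimmed per-year mean used for Figure 12 can be sketched like this (synthetic data, not the real check feeds):

```python
from statistics import mean
from collections import defaultdict

def mean_delay_per_year(records: list[tuple[int, int]],
                        lower: int = -180, upper: int = 31) -> dict[int, float]:
    """Mean check-release delay per year, ignoring outliers outside [lower, upper] days."""
    by_year: dict[int, list[int]] = defaultdict(list)
    for year, delay in records:
        if lower <= delay <= upper:  # trim quirky early releases and very late checks
            by_year[year].append(delay)
    return {year: round(mean(delays), 1) for year, delays in sorted(by_year.items())}

# Synthetic (year, delay-in-days) records; -500 is discarded as an outlier
records = [(2019, -5), (2019, 3), (2019, -500), (2020, 0), (2020, 10)]
print(mean_delay_per_year(records))  # {2019: -1.0, 2020: 5.0}
```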
What can we take away from Figure 12?
- Between 2011 and 2014, Greenbone’s release delay was better than Tenable’s, by between 5 and 10 days.
- In 2015 things reverse, and for 3 years Tenable is considerably ahead of Greenbone, by a matter of weeks.
- Then in 2019 things get much closer, and Greenbone seem to be releasing on average about a day earlier than Tenable.
- For both, the trendline over the 11-year period is very close, with Tenable marginally beating Greenbone.
- We don’t yet have any 2021 data for OpenVAS checks for critical show-stopper CVEs.
With the larger number of checks, and a greater percentage of their remote checks for critical vulnerabilities still released promptly, Tenable could win this category. However, with the 2019 and 2020 delay times going to OpenVAS, and the trend lines being so close, I am going to declare this one a tie.
The takeaway from this is that both vendors are getting their checks out the majority of the time either before the CVE details are published or on the day the details are published. This is overwhelmingly positive for both scanning solutions. Over time both also appear to be releasing remote checks for critical vulnerabilities more quickly.