Agent-Based vs Network-Based Internal Vulnerability Scanning
When I originally wrote this article in 2018, agent-based vs network-based vulnerability scanning seemed like a choice between two finely balanced alternatives. In a post-COVID world, where much of the workforce is at best working from home multiple days a week, if not every day, it feels a lot more like agent-based scanning is a must, while network-based scanning is an optional extra.
This article will go into depth on the strengths and weaknesses of each approach, but let’s wind it back for a second for those who aren’t sure why they should even do internal scanning in the first place.
Why perform internal vulnerability scanning?
While external vulnerability scanning can give a great overview of what you look like to a hacker, the information that can be gleaned without access to your systems is limited. Some serious vulnerabilities can be discovered at this stage, so it’s a must for many organisations, but that’s not where hackers stop.
Techniques like phishing, targeted malware, and watering-hole attacks all contribute to the risk that, even if your externally facing systems are secure, you may still be compromised by a cyber criminal. Furthermore, an externally facing system that looks secure from a black-box perspective may have severe vulnerabilities that would be revealed by deeper inspection of the system and the software it runs.
This is the gap that internal vulnerability scanning fills. Protecting the inside like you protect the outside provides a second layer of defence, making your organisation significantly more resilient to a breach. For this reason it’s also seen as a must for many organisations.
If you’re reading this article, though, you are probably already aware of the value internal scanning can bring, but you’re not sure which type is right for your business. This guide will help you in your search.
Types of internal scanner
Generally, when it comes to identifying and fixing vulnerabilities on your internal network, there are two competing (but not mutually exclusive) approaches: network-based internal vulnerability scanning and agent-based internal vulnerability scanning. Let’s go through each one.
What is network-based scanning?
Network-based internal vulnerability scanning is the more traditional approach, running internal network scans from a box known as a scanning ‘appliance’ that sits on your infrastructure (or, more recently, from a virtual machine in your internal cloud).
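To make that concrete, here’s a minimal Python sketch of what an appliance is doing at its core: probing a range of internal addresses from a single vantage point on the network. The subnet and port list below are hypothetical, and a real scanner layers service fingerprinting and thousands of vulnerability checks on top of this.

```python
import socket
from ipaddress import ip_network

# Hypothetical internal range and ports; a real appliance would be pointed
# at your actual subnets and would run vulnerability checks on top.
SUBNET = "10.0.0.0/28"
PORTS = [22, 80, 443, 3389]

def probe(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Sweep the whole range from this single vantage point on the network.
for host in ip_network(SUBNET).hosts():
    open_ports = [p for p in PORTS if probe(str(host), p)]
    if open_ports:
        print(f"{host}: open ports {open_ports}")
```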
What is agent-based scanning?
Agent-based internal vulnerability scanning is considered the more modern approach, running ‘agents’ on your devices that report back to a central server.
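Conceptually, an agent is a small program that inspects the host from the inside and phones home. A minimal sketch, assuming a hypothetical check-in endpoint (scanner.example.internal), might look like this:

```python
import json
import platform
import socket
import urllib.request

# Hypothetical check-in endpoint; a real agent would authenticate, harden
# the transport, and collect far richer data (installed packages, etc.).
SERVER = "https://scanner.example.internal/api/checkin"

report = {
    "hostname": socket.gethostname(),
    "os": platform.platform(),  # e.g. "Linux-5.15..." or "Windows-10..."
}

req = urllib.request.Request(
    SERVER,
    data=json.dumps(report).encode(),
    headers={"Content-Type": "application/json"},
)
# The agent pushes data out, so it works from any network with internet
# access; no inbound connectivity to the device is ever needed.
with urllib.request.urlopen(req) as resp:
    print("check-in accepted:", resp.status)
```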
While “authenticated scanning” allows network-based scans to gather similar levels of information to an agent-based scan, there are still benefits and drawbacks to each approach.
Implementing this badly can cause headaches for years to come. So, for organisations looking to implement internal vulnerability scans for the first time, we’ve highlighted the differences below, based on our experience.
Which internal scanner is better for your business?
Coverage of Network-based vs Agent-based Internal Vulnerability Scanning
It almost goes without saying, but agents can’t be installed on everything.
Devices like printers, routers, switches, and any other specialised hardware on your network, such as HP Integrated Lights-Out (iLO) cards, common in large organisations that manage their own servers, may not run an operating system that an agent supports. They will have an IP address though, which means you can scan them with a network-based scanner.
This is a double-edged sword though. Yes, you are scanning everything, which immediately sounds better. But how much value do those extra results bring to your breach prevention efforts? Those printers and HP iLO devices may infrequently have vulnerabilities, and only some of these may be serious. They may assist an attacker who is already inside your network, but will they help one break into your network to begin with? Probably not.
Meanwhile, will the noise added to your results, in the form of extra SSL cipher warnings and self-signed certificate alerts, plus the management overhead of including these devices in the whole process, be worthwhile?
Clearly the desirable answer over time is yes, you would want to scan these assets; defence in depth is a core concept in cyber security. But security is equally never about the perfect scenario. Some organisations don’t have the same resources that others do, and have to make effective decisions based on their team size and budgets available. Trying to go from scanning nothing to scanning everything could easily overwhelm a security team trying to implement internal scanning for the first time, not to mention the engineering departments responsible for the remediation effort.
Overall it makes sense to consider the benefits of scanning everything, vs the workload it might entail, to decide whether it’s right for your organisation, or more importantly, right for your organisation at this point in time.
Looking at it from a different angle, yes network-based scans can scan everything on your network, but what about what’s not on your network?
Some company laptops get handed out and then rarely make it back into the office, especially in organisations with heavy field sales or consultancy operations. Or what about companies for whom remote working is the norm rather than the exception? Network-based scans won’t see these devices if they’re not on the network, but with agent-based vulnerability scanning you can include assets in your monitoring even when they are offsite.
So if you’re not using agent-based scanning, you might well be gifting the attacker the one weak link they need to get inside your corporate network: an un-patched laptop that might browse a malicious website, or open a malicious attachment. Certainly more useful to an attacker than a printer running a service with a weak SSL cipher.
Our winner: Agent-based scanning, because it will allow you broader coverage and include assets not on your network – key while the world adjusts to a hybrid of office and remote working.
Intruder uses an industry-leading scanning engine that’s used by banks and governments all over the world. With over 67,000 local checks available for historic vulnerabilities, and new ones being added on a regular basis, you can be confident of its coverage.
Attribution with Network-based vs Agent-based Scanning
On fixed-IP networks, such as internal server or external-facing environments, identifying where to apply fixes for vulnerabilities on a particular IP address is relatively straightforward.
In environments where IP addresses are assigned dynamically, though (end-user environments are usually configured like this to support laptops, desktops and other devices), this becomes a problem. It also leads to inconsistencies between monthly reports, and makes it difficult to track metrics in the remediation process.
Reporting is a key component of most vulnerability management programmes, and senior stakeholders will want you to demonstrate that vulnerabilities are being managed effectively.
Imagine taking a report to your CISO, or IT Director, showing that you have an asset intermittently appearing on your network with a critical weakness. One month it’s there, the next it’s gone, then it’s back again…
In dynamic environments like this, using agents that are each uniquely tied to a single asset makes it simpler to measure, track and report on effective remediation activity without the ground shifting beneath your feet.
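A minimal sketch of how an agent can stay tied to one asset regardless of IP churn: generate an identifier once, persist it, and key every report on it. The state-file path and agent name here are hypothetical.

```python
import uuid
from pathlib import Path

# Hypothetical state file; a real agent would keep this somewhere protected.
STATE = Path("/var/lib/example-agent/agent_id")

def agent_id() -> str:
    """Return a stable per-machine ID, generating one on first run.

    Reports keyed on this ID stay attributable to the same laptop even
    when DHCP hands it a different IP address every day.
    """
    if STATE.exists():
        return STATE.read_text().strip()
    new_id = str(uuid.uuid4())
    STATE.parent.mkdir(parents=True, exist_ok=True)
    STATE.write_text(new_id)
    return new_id

print(agent_id())
```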
Our winner: Agent-based scanning, because it will allow for more effective measurement and reporting of your remediation efforts.
Discovery with a Network-based vs Agent-based Vulnerability Scanner
Depending on how archaic or extensive your environments are, or what gets brought to the table by a new acquisition, your visibility of what’s actually in your network in the first place may be very good or very poor.
One key advantage to network-based vulnerability scanning is that you can discover assets you didn’t know you had. Not to be overlooked, asset management is a precursor to effective vulnerability management. You can’t secure it if you don’t know you have it!
Similarly to the discussion around coverage though, if you’re going to discover assets on your network, you must also be willing to commit resources to investigating what they are and tracking down their owners. This can lead to ownership-tennis, where nobody is willing to take responsibility for the asset, and require a lot of follow-up activity from the security team. Again, it simply comes down to priorities. Yes, it needs to be done, but the scanning is the easy bit; you need to ask yourself if you’re also ready for the follow-up.
Our winner: Network-based scanning, but only if you have the time and resource to manage what is uncovered!
Deployment of a Network-based vs Agent-based scanner
The effort of implementing and managing properly authenticated network-based scans is often greater than that of an agent-based rollout, but this heavily depends on how many operating systems you have versus how complex your network architecture is.
Simple Windows networks allow for easy rollout of agents through Group Policy installs. Similarly, a well-managed server environment shouldn’t pose too much of a challenge.
The difficulties of installing agents occur where there’s a great variety of operating systems under management, as this will require a heavily tailored rollout process. Modifications to provisioning procedures will also need to be taken into account, to ensure that new assets are deployed with the agents already installed, or quickly get installed after being brought online. Modern server orchestration technologies like Puppet, Chef and Ansible can really help here.
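As a concrete example of baking the agent into provisioning, a post-build smoke test could refuse to sign off a new Linux host until the agent service is running. This sketch assumes a hypothetical systemd service called example-agent:

```python
import shutil
import subprocess

# Hypothetical agent service name on a systemd-based Linux host.
AGENT_SERVICE = "example-agent"

def agent_healthy() -> bool:
    """Return True if the agent service is installed and active."""
    if shutil.which("systemctl") is None:
        return False  # not a systemd host; would need a different check
    result = subprocess.run(["systemctl", "is-active", "--quiet", AGENT_SERVICE])
    return result.returncode == 0  # `is-active --quiet` exits 0 when active

if not agent_healthy():
    raise SystemExit(f"{AGENT_SERVICE} is not running; provisioning incomplete")
print("agent check passed")
```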
Deploying network-based appliances, on the other hand, requires analysis of network visibility, i.e. from “this” position in the network, can we “see” everything else in the network, so the scanner can scan everything?
It sounds simple enough, but as with many things in technology, it’s often harder in practice than it is on paper, especially when dealing with legacy networks, or those resulting from merger activity. For example, high numbers of VLANs will equate to high amounts of configuration work on the scanner.
For this reason, designing a network-based scanning architecture relies on accurate network documentation and understanding, which is often a challenge, even for well-resourced organisations. Sometimes, errors in understanding up-front lead to an implementation that doesn’t match reality and requires subsequent “patches” and the addition of further appliances. The end result can often be a patchwork that’s just as difficult to maintain, despite the original estimates seeming simple and cost-effective.
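A rough way to sanity-check a candidate appliance position before committing to it is to test reachability to a representative host in each VLAN you intend to scan. The VLAN names, addresses and ports below are hypothetical:

```python
import socket

# One representative target per VLAN we intend to scan; all hypothetical.
REPRESENTATIVE_HOSTS = {
    "vlan10-servers": ("10.0.10.1", 443),
    "vlan20-desktops": ("10.0.20.1", 445),
    "vlan30-printers": ("10.0.30.1", 9100),
}

for vlan, (host, port) in REPRESENTATIVE_HOSTS.items():
    try:
        with socket.create_connection((host, port), timeout=1):
            print(f"{vlan}: reachable from this position")
    except OSError as exc:
        print(f"{vlan}: NOT reachable ({exc}); may need another appliance")
```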
Our winner: It depends on your environment and infrastructure team’s availability!
Maintenance of Network-based vs Agent-based vulnerability scanning
Due to the situation explained in the previous section, practical considerations often mean you end up with multiple scanners on the network in a variety of physical or logical positions. This means that when new assets are provisioned, or changes are made to the network, you have to decide which scanner will be responsible and make changes to that scanner. This can place an extra burden on an otherwise busy security team. As a rule of thumb, unnecessary complexity should be avoided.
Sometimes, for these same reasons, appliances need to be located in places where physical maintenance is troublesome, whether that’s a datacentre, a local office, or a branch. Scanner not responding today? Suddenly the SecOps team are drawing straws for who has to roll up their sleeves and visit the datacentre.
Also, as any new VLANs are rolled out, or firewall and routing changes alter the layout of the network, scanning appliances need to be kept in sync with any changes made.
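In practice this often means maintaining a mapping of network ranges to appliances, something like the hypothetical table below, and updating it (plus the scanner configurations) with every network change:

```python
from ipaddress import ip_address, ip_network

# Hypothetical mapping of ranges to the appliance responsible for them.
# Every new VLAN or routing change means updating this table *and*
# reconfiguring the matching scanner.
SCANNER_MAP = {
    ip_network("10.0.0.0/16"): "appliance-hq",
    ip_network("10.1.0.0/16"): "appliance-datacentre",
    ip_network("192.168.0.0/24"): "appliance-branch",
}

def scanner_for(ip: str) -> str:
    """Return which appliance should scan the given address."""
    addr = ip_address(ip)
    for net, scanner in SCANNER_MAP.items():
        if addr in net:
            return scanner
    raise LookupError(f"no scanner covers {ip}; possible coverage gap")

print(scanner_for("10.1.22.7"))  # -> appliance-datacentre
```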
Our winner: Agent-based scanners are much easier to maintain once installed.
Concurrency and scalability of Network-based vs Agent-based vulnerability scanning
While the concept of sticking a box on your network and running everything from a central point can sound alluringly simple, if you are lucky enough to have such a simple network (many aren’t), there are still some very real practicalities to consider around how that scales.
Take for example the recent Log4shell vulnerability in Log4j, a logging library used by millions of computers worldwide. With such wide exposure, it’s safe to say almost every security team faced a scramble to determine whether they were affected or not.
Even with the ideal scenario of having one centralised scanning appliance, the reality is that this box cannot concurrently scan a huge number of machines. It may run a number of threads, but realistically, processing power and network-level limitations mean you could be waiting many hours before it comes back with the full picture (or in some cases, a lot longer). And that’s assuming all those systems are online when you need them to be.
Agent-based vulnerability scanning on the other hand spreads the load to individual machines, meaning there’s less of a bottleneck on the network, and results can be gained much more quickly.
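The arithmetic behind the bottleneck is easy to sketch. Even a generously threaded appliance funnels every probe through one box; the toy example below (hypothetical targets, one trivial TCP check per host) shows how timeouts multiplied across an estate add up, before you even consider the hundreds of deeper checks a real scan runs per host:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

# Hypothetical estate of ~2,000 addresses and a single TCP check per host;
# a real scan runs hundreds of checks against each one.
HOSTS = [f"10.0.{i // 256}.{i % 256}" for i in range(1, 2000)]

def check(host: str) -> tuple[str, bool]:
    try:
        with socket.create_connection((host, 443), timeout=1):
            return host, True
    except OSError:
        return host, False

# Even at 100 threads, 2,000 hosts with a 1-second timeout can take ~20
# seconds in the worst case for this one trivial check. Multiply by real
# check counts and estate sizes and the wait stretches into hours, all
# funnelled through one box's CPU and network links.
with ThreadPoolExecutor(max_workers=100) as pool:
    reachable = [host for host, up in pool.map(check, HOSTS) if up]

print(f"{len(reachable)} of {len(HOSTS)} hosts reachable")
```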
There’s also the reality that your network infrastructure may grind to a halt under the load of concurrently scanning all of your assets. For this reason, some network engineering teams limit scanning windows to after-hours, when laptops are at home and desktops are turned off. Test environments may even be powered down to save resources.
Our winner: Agent-based scanning can overcome common problems that are not always obvious in advance, while relying on network scanning alone can lead to major gaps in coverage.
Overall, I believe in doing things incrementally, and getting the basics right before moving on to the next challenge. This is a view that I seem to share with the NCSC, the UK’s leading authority on cyber security, which frequently publishes guidance around getting the basics right.
This is because, broadly speaking, having the basic 20% of defences implemented effectively will stop 80% of the attackers out there. In contrast, advancing into 80% of the available defences but implementing them badly will likely mean you struggle to keep out even the classic kid-in-a-bedroom attacker we’ve seen too much of in recent years.
For those organisations on an information security journey, looking to roll out vulnerability scanning solutions, here are our best recommendations:
Step 1: Ensure you have your perimeter scanning sorted with a continuous and proactive approach. Your perimeter is exposed to the internet 24/7, so there’s no excuse for organisations that fail to respond quickly to critical vulnerabilities here.
Step 2: Next, focus on your user environment. The second most trivial route into your network will be a phishing email or drive-by download that infects a user workstation, as this requires no physical access to any of your locations. With remote work being the new norm, you need to be able to keep watch over all laptops and devices, wherever they may be. From the discussion above, it’s fairly clear that agents have the upper hand in this department.
Step 3: Your internal servers, switches and other infrastructure will be the third line of defence, and this is where internal network appliance-based scans can make a difference. Internal vulnerabilities like these can help attackers elevate their privileges and move around inside your network, but they won’t be how attackers get in, so it makes sense to focus here last.
Hopefully this article casts some light on a decision that is never trivial, and that can cause lasting pain for organisations with ill-fitting implementations. There are pros and cons as always, no one-size-fits-all, and plenty of rabbit holes to avoid. But, by considering the above scenarios, you should be able to get a feel for what is right for your organisation.
Intruder offers an agent-based internal vulnerability scanning solution which you can try for free if you sign up here.
This article was written for one of our customers after searching for robust discussions on this subject online and finding none. If you have any views or experiences to add, please do comment or get in touch, we’d love to improve this article with the experiences of others.
- Raw CVE Coverage
- Risk Rating Coverage
- Remote Check Types
- Check Publication Lead Time
- Local/Authenticated vs Remote Check Prioritisation
- Software Vendor & Package Coverage
- Headline Vulnerabilities of 2021 Coverage
- Analysis Decisions
Considering the public disclosure timeline stage by stage:
- Before CVE details are published: red teamers, security researchers, detection engineers and threat actors have to actively research the type of vulnerability and its location in the vulnerable software, and build an associated exploit. Tenable release checks for 47.43% of the CVEs they cover in this window, and Greenbone release 32.96%.
- Once details are published: red teamers, security researchers, detection engineers and threat actors now have access to some of the information they were previously having to hunt down themselves, speeding up potential exploit creation. Tenable release checks for 17.12% of the CVEs they cover in this window, and Greenbone release 17.69%.
- As days pass: the likelihood that exploitation in the wild is happening steadily increases. Tenable release checks for 10.9% of the CVEs they cover in this window, and Greenbone release 20.69%.
- As weeks pass: we’re starting to lose some of the benefit of rapid, automated vulnerability detection. Tenable release checks for 9.58% of the CVEs they cover in this window, and Greenbone release 12.43%.
- Over a month after publication: any detection released this late is decreasing in value for me. Tenable release checks for 14.97% of the CVEs they cover over a month after the CVE details have been published, and Greenbone release 16.23%.
With this information in mind, I wanted to check what the delay is for both Tenable and Greenbone to release a detection for their scanners. The following section will focus on vulnerabilities which:
- Have CVSSv2 rating of 10
- Are exploitable over the network
- Require no user interaction
These are the ones where an attacker can point their exploit code at your vulnerable system and gain unauthorised access.
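For anyone wanting to reproduce this filter, here’s a sketch assuming the NVD JSON 1.1 feed format: keep entries with a CVSSv2 base score of 10, a network access vector, and no user interaction required. The feed filename is hypothetical, and this may not exactly match the filtering used for the figures here.

```python
import json

# Assumes a local copy of an NVD JSON 1.1 feed file (filename hypothetical).
with open("nvdcve-1.1-2021.json") as f:
    items = json.load(f)["CVE_Items"]

def is_showstopper(item: dict) -> bool:
    """CVSSv2 score 10, exploitable over the network, no user interaction."""
    metric = item.get("impact", {}).get("baseMetricV2")
    if not metric:
        return False  # no CVSSv2 data for this CVE
    cvss = metric["cvssV2"]
    return (
        cvss["baseScore"] == 10.0
        and cvss["accessVector"] == "NETWORK"
        and not metric.get("userInteractionRequired", False)
    )

showstoppers = [i["cve"]["CVE_data_meta"]["ID"] for i in items if is_showstopper(i)]
print(f"{len(showstoppers)} critical, network-exploitable, no-interaction CVEs")
```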
We’ve seen previously that Tenable have remote checks for 643 critical vulnerabilities, and OpenVAS have remote checks for 450 critical vulnerabilities. Tenable release remote checks for critical vulnerabilities within 1 month of the details being made public 58.4% of the time, but Greenbone release their checks within 1 month 76.8% of the time. So, even though OpenVAS has fewer checks for those critical vulnerabilities, you are more likely to get them within 1 month of the details being made public. Let’s break that down further.
In Figure 10 we can see the absolute number of remote checks released on a given day after a CVE for a critical vulnerability has been published. What you can immediately see is that both Tenable and OpenVAS release the majority of their checks on or before the day the CVE details are made public; Tenable have released checks for 247 CVEs, and OpenVAS have released checks for 144 CVEs. Then, since 2010, Tenable have released remote checks for 147 critical CVEs, and OpenVAS for 79 critical CVEs, on the same day as the vulnerability details were published. The number of vulnerabilities then drops off across the first week, and drops further after 1 week, as we would hope for in an efficient time-to-release scenario.
While raw numbers are good, Tenable have a larger number of checks available, so it could be unfair to go on raw numbers alone. It’s potentially more important to understand the likelihood that OpenVAS or Tenable will release a check for a vulnerability on any given day after a CVE for a critical vulnerability is released. In Figure 11 we can see that Tenable release 61% of their checks on or before the date that a CVE is published, and OpenVAS release a shade under 50% of theirs on or before that day.
So, since 2010, Tenable have more frequently released their checks for critical vulnerabilities before or on the same day as the CVE details have been published. While Tenable is leading at this point, Greenbone’s community feed still gets a considerable percentage of its checks out on or before day 0.
I thought I’d go a step further and see if I could identify any trend in each organisation’s release delay: are they getting better year-on-year, or are their releases getting later? In Figure 12 I’ve taken the mean delay for critical vulnerabilities per year and plotted it. The mean as a metric is particularly influenced by outliers in a data set, so I expected some wackiness and limited the calculation to checks released between 180 days before and 31 days after a CVE being published. These seem to me like reasonable limits, as anything more than 6 months prior to CVE details being released is potentially a quirk of the check details, and anything after a 1-month delay is less important for us.
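The calculation is simple enough to sketch. With made-up sample data standing in for the real (check release date minus CVE publication date) deltas, the per-year clamped mean looks like this:

```python
from statistics import mean

# Made-up sample deltas: (year CVE published, check delay in days);
# negative means the check shipped before the CVE details went public.
checks = [
    (2019, -3), (2019, 0), (2019, 12),
    (2020, -200), (2020, 5), (2020, 40),
]

FLOOR, CEILING = -180, 31  # clamp out quirky early checks and late stragglers

by_year: dict[int, list[int]] = {}
for year, delay in checks:
    if FLOOR <= delay <= CEILING:
        by_year.setdefault(year, []).append(delay)

for year in sorted(by_year):
    print(year, round(mean(by_year[year]), 1))
# -> 2019 3.0   (mean of -3, 0, 12)
# -> 2020 5.0   (-200 and 40 fall outside the clamp window)
```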
What can we take away from Figure 12?
- We can see that between 2011 and 2014 Greenbone’s release delay was better than that of Tenable, by between 5 and 10 days.
- In 2015 things reverse and for 3 years Tenable is considerably ahead of Greenbone by a matter of weeks.
- But, then in 2019 things get much closer and Greenbone seem to be releasing on average about a day earlier than Tenable.
- For both, the trendline over the 11-year period is very close, with Tenable marginally beating Greenbone.
- We don’t yet have any 2021 data for OpenVAS checks for critical show-stopper CVEs.
With the larger number of checks, and a greater percentage of their remote checks for critical vulnerabilities released early, Tenable could win this category. However, with the 2019 and 2020 delay times going to OpenVAS, and the trend lines being so close, I am going to declare this one a tie.
The takeaway from this is that both vendors are getting their checks out the majority of the time either before the CVE details are published or on the day the details are published. This is overwhelmingly positive for both scanning solutions. Over time both also appear to be releasing remote checks for critical vulnerabilities more quickly.