
Agent-Based vs Network-Based Internal Vulnerability Scanning

Chris Wallis

Generally, when it comes to identifying and fixing vulnerabilities on your internal network, there are two competing (but not mutually exclusive) approaches.

There’s the more traditional approach: running internal network scans from a box known as a scanning ‘appliance’ that sits on your infrastructure (or, more recently, from a virtual machine in your internal cloud). Then there’s the more modern approach: running ‘agents’ on your devices that report back to a central server.

While “authenticated scanning” allows network-based scans to gather similar levels of information to an agent-based scan, there are still benefits and drawbacks to each approach, and implementing either badly can cause headaches for years to come. So for organisations looking to implement internal vulnerability scans for the first time, we’ve highlighted our experience of the differences here.


Coverage

It almost goes without saying, but agents can’t be installed on everything. One benefit of network scanning is that if it has an IP address, you can scan it, even if it doesn’t run an operating system that an agent supports. We’re talking about devices like printers, routers and switches, and any other specialised hardware on your network, such as HP Integrated Lights-Out (iLO), which is common in large organisations that manage their own servers.

This is a double-edged sword, though. Yes, you are scanning everything, which immediately sounds better. But how much value do those extra results add to your breach-prevention efforts? Those printers and HP iLO devices may infrequently have vulnerabilities, some of them serious. They may assist an attacker who is already inside your network, but will they help one break in to begin with? Probably not. Meanwhile, is the noise added to your results, in the form of extra SSL cipher warnings, self-signed certificates, and the management overhead of including these devices in the whole process, worthwhile?

Clearly the desirable answer over time is yes, you would want to scan these assets; defence in depth is a core concept in cyber security. But security is never about the perfect scenario: some organisations don’t have the resources that others do, and have to make effective decisions based on their team size and the budget available. Trying to go from scanning nothing to scanning everything could easily overwhelm a security team implementing internal scanning for the first time, not to mention the engineering departments responsible for the remediation effort.

Overall it makes sense to weigh the benefits of scanning everything against the workload it entails, to decide whether it’s right for your organisation, or, more importantly, right for your organisation at this point in time.

Looking at it from a different angle, yes network-based scans can scan everything on your network, but what about what’s not on your network?

Some company laptops get handed out and then rarely make it back into the office, especially in organisations with heavy field-sales or consultancy operations. And what about companies for whom remote working is the norm rather than the exception? Network-based scans won’t see a device that isn’t on the network, while agents let you keep assets in scope even when they are offsite.

So if you’re not using agent-based scanning, you might well be gifting an attacker the one weak link they need to get inside your corporate network: an unpatched laptop that browses a malicious website or opens a malicious attachment. That’s certainly more useful to an attacker than a printer running a service with a weak SSL cipher.


Asset tracking

On fixed-IP networks, such as internal server or external-facing environments, identifying where to apply fixes for vulnerabilities at a particular IP address is relatively straightforward.

In environments where IP addresses are assigned dynamically, though (end-user environments are usually configured like this to support laptops, desktops and other devices), this becomes a problem. It also leads to inconsistencies between monthly reports and makes it difficult to track metrics across the remediation process.

Reporting is a key component of most vulnerability management programmes, and senior stakeholders will want you to demonstrate that vulnerabilities are being managed effectively. Imagine taking a report to your CISO, or IT Director, showing that you have an asset intermittently appearing on your network with a critical weakness. One month it’s there, the next it’s gone, then it’s back again…

In dynamic environments like this, using agents that are each uniquely tied to a single asset makes it simpler to measure, track and report on effective remediation activity without the ground shifting beneath your feet.
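As a toy illustration of the tracking problem (the hostnames, IPs and CVE IDs below are made up), consider the same laptop appearing under two different DHCP leases in consecutive monthly scans:

```python
# Hypothetical scan results: the same laptop gets a different DHCP lease
# each month, so keying findings by IP address breaks the history.
scan_jan = {"10.0.1.15": {"host": "LAPTOP-42", "vulns": ["CVE-2021-0001"]}}
scan_feb = {"10.0.3.77": {"host": "LAPTOP-42", "vulns": ["CVE-2021-0001"]}}

# Keyed by IP: no overlap between months, so the critical finding looks
# like it vanished and a brand-new one appeared elsewhere.
print(set(scan_jan) & set(scan_feb))  # set()

# Keyed by a stable per-asset identifier (what an agent reports):
# the finding is tracked against one asset across both months.
by_id_jan = {v["host"]: v["vulns"] for v in scan_jan.values()}
by_id_feb = {v["host"]: v["vulns"] for v in scan_feb.values()}
print(set(by_id_jan) & set(by_id_feb))  # {'LAPTOP-42'}
```

Any stable identifier works for the join; the point is that agents report one by design, whereas an IP-keyed network scan has to infer it.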


Asset discovery

Depending on how archaic or extensive your environments are, or what a new acquisition brings to the table, you may have very good or very poor visibility of what’s actually on your network in the first place.

One key advantage of network-based vulnerability scanning is that you can discover assets you didn’t know you had. Not to be overlooked: asset management is a precursor to effective vulnerability management. You can’t secure it if you don’t know you have it!

As with the discussion around coverage though, if you’re going to discover assets on your network, you must also be willing to commit resources to investigating what they are and tracking down their owners. This can lead to ownership tennis, where nobody is willing to take responsibility for the asset, and it requires a lot of follow-up activity from the security team. Again it simply comes down to priorities: yes, it needs to be done, but the scanning is the easy bit, and you need to ask yourself whether you’re also ready for the follow-up.


Deployment

Depending on your environment, properly authenticated network-based scans can require a larger deployment effort and more ongoing management than agents do. However, this depends heavily on how many operating systems you have versus how complex your network architecture is.

Simple Windows networks allow easy rollout of agents through Group Policy installs, and a well-managed server environment shouldn’t pose too much of a challenge either. The difficulty with installing agents is that the more variety there is in the operating systems under management, the more tailoring the rollout process will need. Provisioning procedures will also need modifying, to ensure that new assets are deployed with the agents already installed, or have them installed quickly after being brought online. Modern server orchestration technologies like Puppet, Chef and Ansible can really help here.

Deploying network-based appliances, on the other hand, requires analysis of network visibility: from “this” position in the network, can the scanner “see” everything else, so that it can scan everything? It sounds simple enough, but as with many things in technology it’s often harder in practice than on paper, especially when dealing with legacy networks or those resulting from merger activity. High numbers of VLANs, for example, equate to large amounts of configuration work on the scanner.
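The “can we see everything” question can be sketched with Python’s standard ipaddress module: given the subnets a candidate scanner position can route to, which VLANs remain uncovered? All subnets below are made up for illustration.

```python
import ipaddress

# VLAN subnets we need scanned, and the subnets this candidate scanner
# position can actually route to (all values are illustrative).
vlans = ["10.0.0.0/24", "10.0.1.0/24", "192.168.10.0/24"]
visible_from_scanner = [ipaddress.ip_network("10.0.0.0/16")]

# A VLAN is covered if it falls entirely inside something the scanner
# can reach; anything left over needs a route or another appliance.
uncovered = [
    v for v in vlans
    if not any(ipaddress.ip_network(v).subnet_of(seen)
               for seen in visible_from_scanner)
]
print(uncovered)  # ['192.168.10.0/24']
```

Running this kind of check against real routing tables is exactly where inaccurate network documentation bites: the model says one thing, the network does another.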

For this reason, designing a network-based scanning architecture relies on accurate network documentation and understanding, which is a challenge even for well-resourced organisations. Errors in understanding up front can lead to an implementation that doesn’t match reality and requires subsequent “patches” and further appliances. The end result is often a difficult-to-maintain patchwork, despite the original estimates seeming simple and cost-effective.


Ongoing maintenance

Due to the above, practical considerations often mean you end up with multiple scanners on the network in a variety of physical or logical positions. When new assets are provisioned, or changes are made to the network, you then have to decide which scanner will be responsible and reconfigure it accordingly. This places an extra burden on an otherwise busy security team; as a rule of thumb, unnecessary complexity should be avoided.

Sometimes, for these same reasons, appliances end up in places where physical maintenance is troublesome, whether a datacentre or a local branch office. Scanner not responding today? Suddenly the SecOps team are drawing straws for who has to roll up their sleeves and visit the datacentre.

Similarly, as new VLANs are rolled out, or firewall and routing changes alter the layout of the network, the scanning appliances need to be kept in sync with those changes.

Concurrency and scalability

While the concept of sticking a box on your network and running everything from a central point sounds alluringly simple, even if you are lucky enough to have such a simple network (many aren’t), there are still some very real practicalities to consider around how that scales.

Take for example the recent Meltdown / Spectre scare. While the vulnerabilities may not have quite lived up to their hype, it’s fair to say most security teams will have found themselves needing to rapidly answer some serious questions from senior management about where they were affected.

Even in the ideal scenario of a single centralised scanning appliance, the reality is that one box cannot concurrently scan a huge number of machines. It may run a number of threads, but processing power and network-level limitations mean you could be waiting hours before it comes back with the full picture (or, in some cases, a lot longer). Agents, on the other hand, spread the load across individual machines, so there’s less of a bottleneck on the network and results come back in much shorter timeframes.
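To put rough numbers on the bottleneck, here’s a back-of-envelope sketch. The host count, per-host scan time and thread count are assumptions for illustration, not benchmarks of any product.

```python
# Assumed figures for illustration only: 10,000 hosts, 5 minutes of scan
# time per host, and an appliance that can scan 32 hosts concurrently.
hosts = 10_000
minutes_per_host = 5
appliance_threads = 32

# The appliance works through hosts in batches of `appliance_threads`.
appliance_hours = hosts * minutes_per_host / appliance_threads / 60

# Agents scan their own host, so every host works in parallel.
agent_minutes = minutes_per_host

print(f"appliance: ~{appliance_hours:.0f} hours")  # ~26 hours
print(f"agents:    ~{agent_minutes} minutes")
```

Even before network throughput limits come into play, the serial arithmetic alone turns a “quick question from management” into a day-long wait.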

There’s also the reality that concurrently scanning all of your assets may grind your network infrastructure to a halt. For this reason, some network engineering teams limit scanning windows to after hours, when laptops are at home and desktops are turned off; test environments may even be powered down to save resources. Once again, relying on network scanning alone can lead to major gaps in coverage, and an agent-based solution can overcome common problems that are not always obvious in advance.


Conclusions

Overall, I believe in doing things incrementally, and getting the basics right before moving on to the next challenge. This is a view I seem to share with the NCSC, the UK’s leading authority on cyber security, which frequently publishes guidance on getting the basics right.

This is because, broadly speaking, implementing the basic 20% of defences effectively will stop 80% of the attackers out there. In contrast, advancing to 80% of the available defences but implementing them badly will likely struggle to keep out even the classic kid-in-a-bedroom attacker we’ve seen too much of in recent years.

For organisations on an information security journey looking to roll out vulnerability scanning, step one is getting your perimeter scanning sorted, with a continuous and proactive approach. Your perimeter is exposed to the internet 24/7, so there’s no excuse for failing to respond quickly to critical vulnerabilities there.

The next thing to focus on is your user environment, as the second most trivial route into your network is a phishing email or drive-by download that infects a user workstation, requiring no physical access to any of your locations. From the discussion above, it’s fairly clear that agents have the upper hand in this department.

Your internal servers, switches and other infrastructure form the third line of defence, and this is where internal network appliance-based scans can make a difference. Vulnerabilities here can help attackers elevate their privileges and move around inside your network, but they won’t be how attackers get in, so it makes sense to focus here last.

Hopefully this article casts some light on what is never a trivial decision, one that can cause lasting pain for organisations with ill-fitting implementations. There are pros and cons as always, no one size fits all, and plenty of rabbit holes to avoid. But by considering the scenarios above, you should be able to get a feel for what is right for your organisation.

This article was written for one of our customers after searching for robust discussions on this subject online and finding none. If you have any views or experiences to add, please do comment or get in touch; we’d love to improve this article with the experiences of others.

And for those interested in internal vulnerability scanning, Intruder now offers an agent-based solution in addition to its external vulnerability scanning services. Sign up here to activate your free one-month trial.

Release date and level of ideal
Before CVE details are published
Limited public information is available about the vulnerability.

Red teamers, security researchers, detection engineers and threat actors have to actively research the type of vulnerability and its location in the vulnerable software, and build an associated exploit.

Tenable release checks for 47.43% of the CVEs they cover in this window, and Greenbone release 32.96%.
Day of CVE publication
Vulnerability information is publicly accessible.

Red teamers, security researchers, detection engineers and threat actors now have access to some of the information they were previously having to hunt themselves, speeding up potential exploit creation.

Tenable release checks for 17.12% of the CVEs they cover in this window, and Greenbone release 17.69%.
First week after CVE publication
Vulnerability information has been publicly available for up to 1 week.

The likelihood of exploitation in the wild is steadily increasing.

Tenable release checks for 10.9% of the CVEs they cover in this window, and Greenbone release 20.69%.
Between 1 week and 1 month after CVE publication
Vulnerability information has been publicly available for up to 1 month, and some very clever people have had time to craft an exploit.

We’re starting to lose some of the benefit of rapid, automated vulnerability detection.

Tenable release checks for 9.58% of the CVEs they cover in this window, and Greenbone release 12.43%.
More than 1 month after CVE publication
Information has been publicly available for more than 31 days.

Any detection released a month after the details are publicly available is decreasing in value for me.

Tenable release checks for 14.97% of the CVEs they cover over a month after the CVE details have been published, and Greenbone release 16.23%.
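As a quick sanity check on the figures above (which cover all CVEs each vendor covers, not just critical ones), the five windows should sum to 100%, and summing the first four gives each vendor’s share released within a month of publication:

```python
# Per-window percentages from the breakdown above, in order:
# pre-publication, day of publication, first week, week-to-month,
# and more than a month after publication.
tenable   = [47.43, 17.12, 10.90, 9.58, 14.97]
greenbone = [32.96, 17.69, 20.69, 12.43, 16.23]

for name, pcts in [("Tenable", tenable), ("Greenbone", greenbone)]:
    print(f"{name}: total {sum(pcts):.2f}%, "
          f"within a month {sum(pcts[:4]):.2f}%")
# Tenable: total 100.00%, within a month 85.03%
# Greenbone: total 100.00%, within a month 83.77%
```

Across their whole coverage, then, the two vendors land within about 1.3 percentage points of each other on within-a-month releases.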

With this information in mind, I wanted to check the delay between CVE publication and both Tenable and Greenbone releasing a detection for their scanners. The following section focuses on vulnerabilities which:

These are the ones where an attacker can point their exploit code at your vulnerable system and gain unauthorised access.

We’ve seen previously that Tenable have remote checks for 643 critical vulnerabilities, and OpenVAS have remote checks for 450. Tenable release remote checks for critical vulnerabilities within 1 month of the details being made public 58.4% of the time, while Greenbone release theirs within 1 month 76.8% of the time. So even though OpenVAS has fewer checks for those critical vulnerabilities, you are more likely to get them within 1 month of the details being made public. Let’s break that down further.
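It’s worth translating those percentages into rough absolute counts; the arithmetic below simply combines the totals and within-a-month rates quoted above:

```python
# Totals and within-a-month rates as quoted in the text above.
tenable_total, tenable_rate = 643, 0.584
openvas_total, openvas_rate = 450, 0.768

print(round(tenable_total * tenable_rate))  # 376 checks within a month
print(round(openvas_total * openvas_rate))  # 346 checks within a month
```

So in absolute terms the two are closer than the headline totals suggest: roughly 376 timely critical checks from Tenable versus roughly 346 from OpenVAS.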

In Figure 10 we can see the absolute number of remote checks released on a given day after a CVE for a critical vulnerability has been published. What you can immediately see is that both Tenable and OpenVAS release the majority of their checks on or before the day the CVE details are made public: before publication, Tenable have released checks for 247 CVEs and OpenVAS for 144. Since 2010, Tenable have then released remote checks for 147 critical CVEs, and OpenVAS for 79, on the same day the vulnerability details were published. The number of vulnerabilities then drops off across the first week, and further after 1 week, as we would hope for in an efficient time-to-release scenario.

Figure 10: Absolute numbers of critical CVEs with a remote check release date from the date a CVE is published

While raw numbers are good, Tenable have a larger number of checks available, so it could be unfair to go on raw numbers alone. It’s potentially more important to understand the likelihood that OpenVAS or Tenable will release a check for a vulnerability on any given day after a critical CVE is published. In Figure 11 we can see that Tenable release 61% of their checks on or before the date a CVE is published, and OpenVAS release a shade under 50% of theirs.
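The raw counts from Figure 10 line up with the percentages quoted from Figure 11, which we can verify directly:

```python
# Figure 10's raw counts: checks released before CVE publication and on
# the day of publication, against each vendor's total critical checks.
tenable_before, tenable_day0, tenable_total = 247, 147, 643
openvas_before, openvas_day0, openvas_total = 144, 79, 450

print(f"Tenable on/before day 0: "
      f"{(tenable_before + tenable_day0) / tenable_total:.1%}")  # 61.3%
print(f"OpenVAS on/before day 0: "
      f"{(openvas_before + openvas_day0) / openvas_total:.1%}")  # 49.6%
```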

Figure 11: Percentage chance of delay for critical vulnerabilities

So, since 2010 Tenable has more frequently released their checks on or before the day the CVE details were published for critical vulnerabilities. While Tenable is leading at this point, Greenbone’s community feed still gets a considerable percentage of its checks out on or before day 0.

I thought I’d go a step further and see whether I could identify any trend in each organisation’s release delay: are they getting better year on year, or are their releases getting later? In Figure 12 I’ve taken the mean delay for critical vulnerabilities per year and plotted it. The mean as a metric is particularly influenced by outliers, so I expected some wackiness and limited it to checks released between 180 days before and 31 days after a CVE being published. These seem like reasonable limits: anything more than 6 months before the CVE details are released is potentially a quirk of the check details, and anything after a 1-month delay is less important for us.
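The trimming described above can be sketched as follows; the delay values here are illustrative, not the real data set behind Figure 12:

```python
# A sketch of the trimming: keep only delays between 180 days before and
# 31 days after CVE publication, then average per year. Negative delays
# mean the check shipped before the CVE details were published.
def trimmed_mean_by_year(delays_by_year, lo=-180, hi=31):
    means = {}
    for year, delays in delays_by_year.items():
        kept = [d for d in delays if lo <= d <= hi]
        if kept:  # skip years where every delay was an outlier
            means[year] = sum(kept) / len(kept)
    return means

# Made-up sample: -200 and 400 fall outside the limits and are dropped.
sample = {2019: [-200, -5, 0, 3, 400], 2020: [-10, 0, 1]}
print(trimmed_mean_by_year(sample))
```

A median would resist outliers without needing hard cut-offs, but trimming keeps the metric in the same units and spirit as the untrimmed mean.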

What can we take away from Figure 12?

Figure 12: Release delay year-on-year (lower is better)

With the larger number of checks, and a greater percentage of their remote checks for critical vulnerabilities still released early, Tenable could win this category. However, with the 2019 and 2020 delay times going to OpenVAS, and the trend lines being so close, I am going to declare this one a tie.

The takeaway from this is that both vendors are getting their checks out the majority of the time either before the CVE details are published or on the day the details are published. This is overwhelmingly positive for both scanning solutions. Over time both also appear to be releasing remote checks for critical vulnerabilities more quickly.

