Why attack surface reduction is your first line of defense

Daniel Andrew
Head of Security

Key Points

Security teams can’t control when the next critical vulnerability drops. What they can control is how much of their environment is exposed when it does.

Proactive attack surface reduction is about reducing unnecessary internet exposure before it becomes a problem. This article looks at why this exposure so often gets missed and practical steps for managing it deliberately.

Why does your external attack surface matter?

Your external attack surface is everything you have exposed to the internet - servers, applications, services, APIs. It's what an attacker sees before they've gotten anywhere near your internal network. The larger and less controlled it is, the more opportunities exist for exploitation.

And those opportunities are being acted on faster than ever. One of the biggest challenges of vulnerability management is how short the exploitation window has become - for the most serious vulnerabilities, disclosure to exploitation can be as short as 24 to 48 hours.

That’s not a lot of time when you consider what has to happen before a patch is deployed: running scans, waiting for results, raising tickets, agreeing priorities, implementing and verifying the fix. If disclosure lands out of hours, it will likely take even longer before you are patched.

But in many cases, vulnerable systems don’t need to be internet-facing in the first place. With visibility of the attack surface, teams can reduce unnecessary exposure upfront and avoid the scramble altogether when a new vulnerability drops.

A recent example: ToolShell

ToolShell was an unauthenticated remote code execution vulnerability in Microsoft SharePoint. If an attacker could reach it, they could run code on your server - and because SharePoint is Active Directory connected, they'd be starting in a highly sensitive part of your environment.

This was a zero-day, meaning attackers were exploiting it before a patch was available. Microsoft disclosed on a Saturday, and confirmed that Chinese state-sponsored groups had been exploiting it for up to two weeks before that. By the time most teams knew about it, opportunistic attackers were scanning for exposed instances and exploiting at scale.

Our research found thousands of publicly accessible SharePoint instances at the time of disclosure - despite the fact that SharePoint doesn't need to be internet-facing. Every one of those exposures was unnecessary - and every unpatched server was an open door.

Why exposures get missed

In a typical external scan, informational findings sit beneath hundreds of criticals, highs, mediums, and lows. But those informationals can include detections that represent real exposure risk, such as:

  • An exposed SharePoint server
  • A database exposed to the internet, such as MySQL or Postgres
  • Other protocols which should usually be reserved for the internal network, such as RDP and SNMP

In vulnerability scanning terms, classifying these as informationals sometimes makes sense - the scanner doesn't know it's scanning from the internet, and if it were deployed on the same private subnet as its targets, these findings genuinely would be informational in risk profile. Conversely, a database or remote access service exposed to the internet carries real risk even without a known vulnerability attached to it - yet. The danger is that this risk is commonly buried in a traditional scan report, so it slips through the gaps.
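The idea of promoting these findings based on scan context can be sketched in a few lines. This is a minimal, hypothetical example - the service names and severity mappings are illustrative, not any particular scanner's taxonomy:

```python
# Illustrative mapping of exposed services to severities when seen from
# the internet - these values are assumptions, not a vendor's taxonomy.
EXPOSURE_SEVERITIES = {
    "sharepoint": "medium",
    "mysql": "high",
    "postgres": "high",
    "rdp": "high",
    "snmp": "medium",
}

def reclassify(findings, scanned_from_internet=True):
    """Raise the severity of informational findings that represent
    unnecessary internet exposure. Each finding is a dict with
    'service' and 'severity' keys."""
    if not scanned_from_internet:
        # On an internal subnet, these detections may truly be informational.
        return findings
    return [
        {**f, "severity": EXPOSURE_SEVERITIES.get(f["service"], f["severity"])}
        for f in findings
    ]
```

An exposed MySQL detection that a traditional scan files as informational comes back as high-risk when the scan origin is the internet, while unrelated informationals are left untouched.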

What proactive attack surface reduction actually involves

There are three key elements to making attack surface reduction work in practice.

1. Asset discovery: define your attack surface

Before you can reduce your attack surface, you need a clear picture of what you own and what's externally reachable. That starts with identifying shadow IT - systems your organization owns or operates but isn't currently scanning or monitoring.

It doesn't take much for assets to fall outside the security team's visibility. Domains can be registered independently by different teams, and in cloud environments infrastructure can be spun up in minutes. Without a process to automatically bring those assets into scope, they go unmonitored and unprotected.

Closing that gap is important, and there are three key elements we recommend having in place. First, integrate with your cloud and DNS providers so that when new infrastructure is created, it's automatically picked up and scans are started without needing human input. This is one area where defenders have a genuine advantage: you can integrate directly with your own environments. Attackers can't.
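The core of that integration is a diff between what your DNS provider says exists and what your scanner already covers. A minimal sketch, assuming the records have been fetched from your provider's API (e.g. Route53's ListResourceRecordSets) - the record shape here is a simplified stand-in:

```python
def find_unmonitored_hosts(dns_records, scan_inventory):
    """Return hostnames that appear in the DNS zone but are missing from
    the scanning inventory - candidates for automatic onboarding.
    `dns_records` is a simplified stand-in for a provider API response."""
    # Records that point at reachable infrastructure.
    exposed = {
        r["name"].rstrip(".")
        for r in dns_records
        if r["type"] in ("A", "AAAA", "CNAME")
    }
    return sorted(exposed - set(scan_inventory))
```

Run on a schedule (or triggered by provider events), anything this returns gets a scan started automatically rather than waiting for someone to notice the new host.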

Second, use subdomain enumeration to surface externally reachable hosts that aren't in your inventory - this matters particularly after acquisitions, where you may be inheriting infrastructure you don't yet have visibility of. 
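At its simplest, subdomain enumeration is a wordlist of likely names checked against DNS. This sketch uses a plain lookup via the standard library - real tooling would also draw on certificate transparency logs and passive DNS - and takes an injectable resolver so the logic can be exercised without live DNS:

```python
import socket

def enumerate_subdomains(domain, wordlist, resolve=None):
    """Try candidate subdomains and return those that resolve.
    `resolve` defaults to a real DNS lookup; pass a callable to
    substitute a different resolver (or a fake for testing)."""
    if resolve is None:
        def resolve(host):
            try:
                socket.gethostbyname(host)
                return True
            except socket.gaierror:
                return False
    candidates = (f"{word}.{domain}" for word in wordlist)
    return [host for host in candidates if resolve(host)]
```

Any host this surfaces that isn't already in your inventory is exactly the kind of inherited or forgotten infrastructure worth pulling into scope.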

Third, identify infrastructure hosted with smaller, unknown cloud providers. You may have a security policy that mandates development teams only use your primary cloud provider - but you need to verify that policy is actually being followed.
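One way to verify that is to check where your hosts' IPs actually live. A minimal sketch using the standard library's ipaddress module - the approved ranges below are documentation-reserved examples; in practice you'd load your provider's published CIDR lists (e.g. AWS's ip-ranges.json):

```python
import ipaddress

# Placeholder ranges standing in for your approved provider's CIDR blocks.
APPROVED_RANGES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def off_policy_hosts(host_ips):
    """Return hosts whose IP falls outside every approved provider range -
    a sign infrastructure is hosted somewhere policy doesn't allow.
    `host_ips` maps hostname -> resolved IP string."""
    flagged = []
    for host, ip in host_ips.items():
        addr = ipaddress.ip_address(ip)
        if not any(addr in net for net in APPROVED_RANGES):
            flagged.append(host)
    return sorted(flagged)
```

A host resolving to an address outside every approved range is worth investigating: it may be shadow IT on a provider you've never assessed.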

If you want to go deeper on this, we've put together a full walkthrough - watch it here.

2. Treat exposure as risk

The next step is treating attack surface exposure as a risk category in its own right.

That requires a detection capability that identifies which informational findings represent an exposure and assigns appropriate severity. An exposed SharePoint instance, for example, might reasonably be treated as a medium-risk issue.

It also means carving out space for this work in how you prioritize. If strategic efforts like attack surface reduction are always competing against urgent patching, they will always lose. That might mean setting aside time each quarter to review and reduce exposure, or assigning clear ownership so someone is accountable for it - not just when a crisis hits, but routinely.

3. Continuous monitoring

Attack surface reduction isn't a one-time exercise. Exposure changes constantly - a firewall rule gets edited, a new service gets deployed, a subdomain gets forgotten - and your team needs to detect those changes quickly.

Vulnerability scans take time to complete, and running full scans daily isn't usually practical. Daily port scanning is a better fit. It's lightweight, fast, and means you can detect newly exposed services as they appear. If someone edits a firewall rule and accidentally exposes Remote Desktop, you find out the day it happens - not at the next scheduled scan, which could be up to a month later.
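The daily diff at the heart of this is simple to sketch. This minimal example assumes a plain TCP connect check - real tooling would scan asynchronously and cover far more ports - with the comparison against yesterday's baseline doing the actual alerting:

```python
import socket

def open_ports(host, ports, timeout=0.5):
    """Lightweight TCP connect scan: return which of `ports` accept a
    connection on `host`."""
    found = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                found.add(port)
    return found

def newly_exposed(baseline, today):
    """Ports open today that weren't in yesterday's baseline - e.g. 3389
    appearing means someone just exposed Remote Desktop."""
    return today - baseline
```

Persist each day's result as the next day's baseline, and the moment a firewall change exposes a new service, `newly_exposed` surfaces it.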

Fewer exposed services, fewer surprises

Proactive attack surface reduction comes down to three things: knowing what's externally reachable, reducing what doesn't need to be, and detecting changes as they happen.

When unnecessary services aren't exposed in the first place, they're far less likely to be caught up in the mass exploitation that follows a critical disclosure. That means fewer surprises, less urgent scrambling, and more time to respond deliberately when new vulnerabilities emerge.

Intruder automates this process - from discovering shadow IT and monitoring for new exposures, to alerting your team the moment something changes - so your security team can stay ahead of exposure rather than reacting to it. If you want to see what's exposed in your environment, book a demo.
