
9 minutes to breach: the life expectancy of an unsecured MongoDB honeypot

Daniel Andrew
Head of Security Services

Key Points

Intruder's latest research shows that MongoDB databases are subject to continual attack when exposed to the internet. Attacks are carried out automatically, and on average an unsecured database is compromised less than 24 hours after going online; the fastest breach we measured came just 9 minutes after the database was set up.

A little bit of history repeating...

In 2003 the SANS Institute famously proclaimed that putting an unpatched Windows XP machine on the internet would get it breached within 40 minutes. Fast forward nearly two decades and the technologies may have moved on a bit, but so has the average time to breach.

Thankfully, not many organisations are connecting unpatched Windows XP machines to the internet these days, but it's difficult to escape the number of news stories in the last few years describing how hundreds of millions of records have been leaked from organisations that left databases in a similarly insecure state. In large part this is because modern databases such as MongoDB and Elasticsearch do not require authentication by default, leaving them unsecured unless access control is explicitly configured.

Unsecured databases (ones with no passwords set) are bountiful targets for malicious actors when they're exposed to the internet. Not only are they very easy to find, by sweeping the internet for the usual database ports, they are also very easy to exploit. All it takes is connecting to a database and looting it, then selling the information in some dark corner of the internet, using the contents for credential stuffing attacks, or blackmailing the owner.
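
To illustrate quite how little effort is involved, the sketch below (a hypothetical example for illustration, not a tool used in this research) checks whether a host is listening on MongoDB's default port and, if so, connects without credentials and lists whatever it can see:

```python
# Hypothetical sketch: how trivially an unsecured MongoDB instance can be
# found and enumerated. Requires the pymongo package.
import socket

from pymongo import MongoClient
from pymongo.errors import PyMongoError

TARGET = "203.0.113.10"  # placeholder address (TEST-NET), not a real target
MONGO_PORT = 27017       # MongoDB's default port


def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if port_open(TARGET, MONGO_PORT):
    try:
        # No username or password supplied: this only succeeds against
        # instances with access control disabled.
        client = MongoClient(TARGET, MONGO_PORT, serverSelectionTimeoutMS=2000)
        for db_name in client.list_database_names():
            print(db_name, client[db_name].list_collection_names())
    except PyMongoError as exc:
        print(f"Could not enumerate {TARGET}: {exc}")
```

At internet scale, attackers swap the single socket check for mass-scanning tools, but the principle is exactly the same.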

We can see from the headlines how often this happens, but we wanted to know how fast it happens. So Intruder's R&D team set about creating some honeypots to find out how these attacks happen, where the threats come from, and how fast it all takes place.

Honeypots you say?

Honeypots are systems designed to detect and monitor network intrusions by posing as vulnerable machines and recording their own compromise by malicious actors. The logs generated from honeypots then provide intelligence on the tactics and techniques deployed by those attempting to gain unauthorised access.

We set up a number of unsecured MongoDB honeypots across the web. Each was filled with fake data, some of which was designed to be alluring to hackers: a collection containing password hashes. The honeypots' network traffic was monitored for malicious activity; if those password hashes were exfiltrated and seen crossing the wire, we would know the database had been breached.
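
For illustration, a honeypot could be seeded with decoy records along these lines (a simplified sketch with made-up database, collection and field names, not our exact setup):

```python
# Simplified sketch: seeding a MongoDB honeypot with decoy user records,
# including fake password hashes intended to look worth stealing.
import hashlib
import secrets

from pymongo import MongoClient

client = MongoClient("localhost", 27017)   # the honeypot instance itself
users = client["customers"]["users"]       # decoy database and collection names

decoys = []
for i in range(1000):
    throwaway_password = secrets.token_hex(8)
    decoys.append({
        "username": f"user{i}@example.com",
        "password_hash": hashlib.sha1(throwaway_password.encode()).hexdigest(),
        "created": "2020-01-01T00:00:00Z",
    })

users.insert_many(decoys)
print(users.estimated_document_count(), "decoy records inserted")
```

If those hashes later show up in captured network traffic leaving the host, somebody other than us has read the database.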

What did we find?

Our findings show that on average, a Mongo database is scanned every three hours, and breached within 13 hours of being connected to the internet. The fastest breach we recorded was carried out a mere 9 minutes after the database was set up.

In most cases the gap between an attacker scanning a honeypot and breaching it was under two seconds, and most database connections were made using PyMongo (a Python driver for MongoDB). This indicates that breaches are carried out automatically, and likely indiscriminately – being a ‘worthwhile’ target has nothing to do with it.
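
How do we know PyMongo was involved? MongoDB records the client metadata (including the driver name and version) that each driver sends when it connects. A rough sketch of pulling driver names out of a mongod log is below; the exact log format varies between MongoDB versions, so treat the regex as illustrative:

```python
# Rough sketch: counting which MongoDB drivers connected to a honeypot by
# parsing the mongod log for "client metadata" entries.
import re
from collections import Counter

LOG_PATH = "/var/log/mongodb/mongod.log"  # default location on many Linux installs

# Matches both the older plain-text and newer JSON log formats, e.g.
#   driver: { name: "PyMongo", version: "3.10.1" }
#   "driver":{"name":"PyMongo","version":"3.10.1"}
driver_re = re.compile(r'"?driver"?\s*:\s*\{\s*"?name"?\s*:\s*"([^"]+)"')

driver_counts = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        if "client metadata" not in line:
            continue
        match = driver_re.search(line)
        if match:
            driver_counts[match.group(1)] += 1

for driver, count in driver_counts.most_common():
    print(f"{driver}: {count} connections")
```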

At least one of the honeypots was held to ransom within a minute of being connected. The attacker erased the database's collections and replaced them with a ransom note, demanding payment in Bitcoin for recovery of the data.

It's worth reiterating that beyond just putting these systems on the internet, we did nothing to bring attention to them. So for anyone thinking "we're too small/unknown to be targeted" or "I'll start worrying about security when we're bigger", hopefully this will serve as a reminder that you're never too small to start doing something.

Where did it come from?

Our fastest breach, in less than ten minutes, came from an attacker using the Russian ISP ‘Skynet’, while over a dozen IP addresses were seen interacting with more than one of our globally dispersed honeypots. These IPs were all given a ‘medium’ or (mostly) ‘high’ risk rating by Auth0’s reputational blacklist service.

Some of the honeypots were scanned by the Shadowserver Foundation, an all-volunteer non-profit organisation that carries out investigations into malicious internet activity. Here in particular it seems they were conducting research into publicly accessible devices that are “trivial to exploit or abuse”.

A few of the breaches were carried out via the Tor network, a service which provides anonymity to its users, including the likes of activists, journalists and whistleblowers. For hackers, anonymity enables them to carry out malicious campaigns incognito, without fearing attribution by law enforcement.

Over half of the breaches originated from IP addresses owned by a Romanian VPS provider. Without mentioning names, or pointing fingers, the provider appears to offer bulletproof hosting services –

  • Port scanning “allowed”
  • Machines advertised with the capacity to send tens of thousands of packets per second
  • Payments can be made through anonymous cryptocurrency
  • Services rendered are “unmanaged”

It's quite possible that some of the activity we recorded was from security researchers looking for their next headline, or for data for their breach database. However, when it comes to protecting your company, data security must be a top priority, and so must your reputation: whether your data is breached by a malicious attacker or a well-meaning researcher, you may end up in the headlines either way.

What lessons should we learn?

Even if security or DevOps teams can detect an unsecured database among all the noise of security alerting, and recognise its potential severity, responding to and containing such a misconfiguration within 13 hours may be a tall order, let alone within 9 minutes. Prevention is a much stronger defence than cure.

What can you do to prevent this? For Mongo, access control needs to be set up manually. However, secured or not, databases shouldn't really be exposed directly to the internet: they don't benefit from the multi-factor authentication available to other web services, which leaves them open to credential stuffing and brute-force attacks. Instead, they should be firewalled off or accessed through a VPN.
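
Enabling access control on Mongo means starting mongod with authorisation turned on (security.authorization: enabled in mongod.conf) and creating an administrative user. As a rough sketch, the user-creation step can be done over the localhost exception with PyMongo; the credentials below are placeholders:

```python
# Minimal sketch: create the first admin user via the localhost exception,
# then confirm that anonymous access is rejected. Assumes mongod was started
# with "security.authorization: enabled".
from pymongo import MongoClient
from pymongo.errors import OperationFailure

# 1. Create an administrative user (placeholder name and password).
local = MongoClient("localhost", 27017)
local.admin.command(
    "createUser",
    "dbadmin",
    pwd="change-this-password",
    roles=[{"role": "userAdminAnyDatabase", "db": "admin"}],
)

# 2. An unauthenticated client should now be refused.
anonymous = MongoClient("localhost", 27017, serverSelectionTimeoutMS=2000)
try:
    anonymous.list_database_names()
    print("WARNING: anonymous access still works")
except OperationFailure:
    print("Anonymous access correctly rejected")
```

Even then, binding the service to internal interfaces (net.bindIp) and firewalling port 27017 from the internet should come first.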

Ideally you should have some way of making this impossible via policy, rather than simply trying to detect it when it happens. Most modern cloud platforms can help with this, so organisations that have fully made the jump may benefit here. Detecting it when it happens is the next best thing, and a solid backup for if (or when) the policy control fails, or for environments where policy enforcement is not an option.
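
On AWS, for example, 'detecting it when it happens' can be as simple as periodically flagging security groups that open database ports to the world. A rough sketch using boto3 (assuming read-only credentials with permission to describe security groups):

```python
# Rough sketch: flag AWS security groups that expose common database ports
# to 0.0.0.0/0. Assumes boto3 is installed and configured.
import boto3

DB_PORTS = {27017, 9200, 5432, 3306, 6379}  # MongoDB, Elasticsearch, PostgreSQL, MySQL, Redis

ec2 = boto3.client("ec2")
for group in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in group.get("IpPermissions", []):
        if rule.get("IpProtocol") == "-1":        # "all traffic" rules cover every port
            from_port, to_port = 0, 65535
        else:
            from_port = rule.get("FromPort", -1)
            to_port = rule.get("ToPort", -1)

        open_to_world = any(
            ip_range.get("CidrIp") == "0.0.0.0/0" for ip_range in rule.get("IpRanges", [])
        )
        exposed = [port for port in DB_PORTS if from_port <= port <= to_port]
        if open_to_world and exposed:
            print(f"{group['GroupId']} exposes port(s) {exposed} to the internet")
```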

Intruder can help with the latter. We identify and prioritise any exposed databases on your external infrastructure. For Mongo in particular, we’ll also make sure to let you know if any unsecured instances are detected.

Alongside daily checks for the latest emerging threats, we proactively monitor for more elementary problems, such as when insecure ports and services are opened up to the internet. Our reduced-noise reporting identifies and ranks areas of the perimeter to target for optimal attack surface reduction.

Improve visibility and gain oversight of your external infrastructure today by starting a free trial.


How ideal a check's release date is, broken down by window:

  • Before CVE details are published (🥳): Limited public information is available about the vulnerability. Red teamers, security researchers, detection engineers and threat actors have to actively research the type of vulnerability, its location in the vulnerable software, and build an associated exploit. Tenable release checks for 47.43% of the CVEs they cover in this window, and Greenbone release 32.96%.
  • Day of CVE publish (😊): Vulnerability information is publicly accessible. Red teamers, security researchers, detection engineers and threat actors now have access to some of the information they were previously having to hunt down themselves, speeding up potential exploit creation. Tenable release checks for 17.12% of the CVEs they cover in this window, and Greenbone release 17.69%.
  • First week since CVE publish (😐): Vulnerability information has been publicly available for up to 1 week. The likelihood that exploitation in the wild is happening is steadily increasing. Tenable release checks for 10.9% of the CVEs they cover in this window, and Greenbone release 20.69%.
  • Between 1 week and 1 month since CVE publish (🥺): Vulnerability information has been publicly available for up to 1 month, and some very clever people have had time to craft an exploit. We're starting to lose some of the benefit of rapid, automated vulnerability detection. Tenable release checks for 9.58% of the CVEs they cover in this window, and Greenbone release 12.43%.
  • After 1 month since CVE publish (😨): Information has been publicly available for more than 31 days. Any detection released a month after the details are publicly available is decreasing in value for me. Tenable release checks for 14.97% of the CVEs they cover over a month after the CVE details have been published, and Greenbone release 16.23%.

With this information in mind, I wanted to check the delay for both Tenable and Greenbone in releasing detections for their scanners. The following section focuses on vulnerabilities which:

  • Have a CVSSv2 rating of 10
  • Are exploitable over the network
  • Require no user interaction

These are the ones where an attacker can point their exploit code at your vulnerable system and gain unauthorised access.

We’ve seen previously that Tenable have remote checks for 643 critical vulnerabilities, and OpenVAS have remote checks for 450 critical vulnerabilities. Tenable release remote checks for critical vulnerabilities within 1 month of the details being made public 58.4% of the time, while Greenbone release their checks within 1 month 76.8% of the time. So, even though OpenVAS has fewer checks for those critical vulnerabilities, you are more likely to get a check for them within 1 month of the details being made public. Let’s break that down further.

In Figure 10 we can see the absolute number of remote checks released on a given day after a CVE for a critical vulnerability has been published. What you can immediately see is that both Tenable and OpenVAS release the majority of their checks on or before the day the CVE details are made public: before publication, Tenable have released checks for 247 CVEs and OpenVAS for 144 CVEs. Then, since 2010, Tenable have released remote checks for 147 critical CVEs and OpenVAS for 79 critical CVEs on the same day as the vulnerability details were published. The number of checks released then drops off across the first week and drops further after 1 week, as we would hope for in an efficient time-to-release scenario.

Figure 10: Absolute numbers of critical CVEs with a remote check release date from the date a CVE is published

While raw numbers are good, Tenable have a larger number of checks available, so it could be unfair to go on raw numbers alone. It’s potentially more important to understand the likelihood that OpenVAS or Tenable will release a check for a vulnerability on any given day after a CVE for a critical vulnerability is published. In Figure 11 we can see that Tenable release 61% of their checks on or before the date that a CVE is published, and OpenVAS release a shade under 50% of their checks on or before that day.

Figure 11: Percentage chance of delay for critical vulnerabilities

So, since 2010 Tenable has more frequently released their checks before or on the same day as the CVE details have been published for critical vulnerabilities. While Tenable is leading at this point, Greenbone’s community feed still gets a considerable percentage of their checks out on or before day 0.

I thought I’d go a step further and see if I could identify any trend in each organisation's release delay: are they getting better year-on-year, or are their releases getting later? In Figure 12 I’ve taken the mean delay for critical vulnerabilities per year and plotted it. The mean as a metric is particularly influenced by outliers in a data set, so I expected some wackiness and limited it to checks released between 180 days before and 31 days after a CVE being published. These seem like reasonable limits: anything released more than 6 months before the CVE details is potentially a quirk of the check details, and anything after a 1-month delay is less important for us.
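
A clipped yearly mean like that could be computed roughly as follows (a sketch assuming a hypothetical pandas DataFrame with one row per check and illustrative column names, not the exact analysis code):

```python
# Sketch: mean release delay per year, limited to checks released between
# 180 days before and 31 days after the CVE publication date.
# Assumes hypothetical columns: vendor, cve_published, check_released.
import pandas as pd


def yearly_mean_delay(checks: pd.DataFrame) -> pd.DataFrame:
    checks = checks.copy()
    # Positive delay = check released after the CVE details were published.
    checks["delay_days"] = (checks["check_released"] - checks["cve_published"]).dt.days
    # Clip to the window so extreme outliers don't dominate the mean.
    checks = checks[checks["delay_days"].between(-180, 31)]
    checks["year"] = checks["cve_published"].dt.year
    return checks.groupby(["year", "vendor"])["delay_days"].mean().unstack("vendor")
```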

What can we take away from Figure 12?

  • Between 2011 and 2014, Greenbone’s release delay was better than Tenable’s by between 5 and 10 days.
  • In 2015 things reverse, and for 3 years Tenable is considerably ahead of Greenbone, by a matter of weeks.
  • Then in 2019 things get much closer, and Greenbone seem to be releasing on average about a day earlier than Tenable.
  • For both, the trendline over the 11-year period is very close, with Tenable marginally beating Greenbone.
  • We don't yet have any 2021 data for OpenVAS checks for critical show-stopper CVEs.

Figure 12: Release delay year-on-year (lower is better)

With the larger number of checks, and a greater percentage of their remote checks for critical vulnerabilities released early, Tenable could win this category. However, with the 2019 and 2020 delay times going to OpenVAS, and the trend lines being so close, I’m going to declare this one a tie.

The takeaway from this is that both vendors are getting their checks out the majority of the time either before the CVE details are published or on the day the details are published. This is overwhelmingly positive for both scanning solutions. Over time both also appear to be releasing remote checks for critical vulnerabilities more quickly.

Written by Daniel Andrew
