
Should the data on my server be encrypted?

Chris Wallis
Founder & CEO


Over the last few days I’ve heard a lot of questions in the media asking why TalkTalk didn’t have their customer data encrypted.

While it’s easy to understand why people might ask the question, it’s slightly harder to understand why encryption is not the answer. This blog post will explain the purpose of encryption, and the attacks it prevents, but more importantly, those that it doesn’t.

So why do we encrypt data? Well, encryption is supposed to make the data impossible to read, but not for everyone! Some people will still need to access it, or it would be useless.

So when my laptop has an encrypted disk, and the decryption key is based on a password I keep in my head, that means I can still use it because I know the password, but anyone who steals my laptop will have a very hard time decrypting its data.
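As a concrete sketch of how that works conceptually (this is an illustration of password-based key derivation, not how any particular disk-encryption product implements it): the password is fed through a key-derivation function to produce the actual encryption key, so the key itself never needs to be stored anywhere.

```python
import hashlib

def derive_key(password: str, salt: bytes) -> bytes:
    """Derive a 256-bit encryption key from a password using PBKDF2.

    The salt is stored on the disk; the password stays in the user's head.
    """
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

salt = b"stored-on-the-laptop"  # not secret, just unique per device
key = derive_key("correct horse battery staple", salt)

# Knowing the password reproduces exactly the same key...
assert derive_key("correct horse battery staple", salt) == key
# ...but a thief guessing the wrong password gets a different, useless key.
assert derive_key("password123", salt) != key
```

The thief has the salt (it is on the stolen laptop) but not the password, so they are left brute-forcing guesses through a deliberately slow function.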

The problem with server systems like the TalkTalk customer database is that they are designed to be read not just by one human user, but by many other computer systems: the website that customers use to check their account details; the computers that call centre staff use to look up your records; the accounting system which needs customers' billing details. All of these systems need to access the customer data, and obviously they need it unencrypted.

This changes everything. If I hack into the accounting system, I can use it to read as much customer data as I want. Whether the data is encrypted on the customer database itself becomes irrelevant.

For this reason, encryption should be thought of primarily as a protection for assets which you don’t have good physical controls over, such as laptops and other mobile devices.

However, when it comes to protecting data stored on database servers with many connected systems, there are other layers of security you should be thinking of. For example, limiting access to only the required users; penetration testing connected systems to discover weaknesses that hackers might exploit, and intrusion monitoring to detect when hackers or malicious insiders may be stealing large amounts of data.

The more layers like this that you add, the more secure your data becomes. But unless you are worried about someone breaking into your data centre and physically stealing your database, encryption is not the answer.


Release delay windows, how ideal each one is, and the share of checks each vendor releases in that window:

  • Before CVE details are published (🥳): Limited public information is available about the vulnerability. Red teamers, security researchers, detection engineers and threat actors have to actively research the type of vulnerability, its location in the vulnerable software, and build an associated exploit. Tenable release checks for 47.43% of the CVEs they cover in this window, and Greenbone release 32.96%.

  • Day of CVE publication (😊): Vulnerability information is publicly accessible. Red teamers, security researchers, detection engineers and threat actors now have access to some of the information they previously had to hunt down themselves, speeding up potential exploit creation. Tenable release checks for 17.12% of the CVEs they cover in this window, and Greenbone release 17.69%.

  • First week after CVE publication (😐): Vulnerability information has been publicly available for up to 1 week. The likelihood that exploitation is happening in the wild is steadily increasing. Tenable release checks for 10.9% of the CVEs they cover in this window, and Greenbone release 20.69%.

  • Between 1 week and 1 month after CVE publication (🥺): Vulnerability information has been publicly available for up to 1 month, and some very clever people have had time to craft an exploit. We're starting to lose some of the benefit of rapid, automated vulnerability detection. Tenable release checks for 9.58% of the CVEs they cover in this window, and Greenbone release 12.43%.

  • More than 1 month after CVE publication (😨): Information has been publicly available for more than 31 days. Any detection released a month after the details are publicly available is decreasing in value for me. Tenable release checks for 14.97% of the CVEs they cover in this window, and Greenbone release 16.23%.

With this information in mind, I wanted to check the delay for both Tenable and Greenbone in releasing a detection for their scanners. The following section focuses on vulnerabilities which:

  • Have a CVSSv2 rating of 10
  • Are exploitable over the network
  • Require no user interaction

These are the ones where an attacker can point their exploit code at your vulnerable system and gain unauthorised access.
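That filtering step can be sketched as follows. The field names here are invented for illustration; real CVE feeds (such as the NVD JSON data) use different keys.

```python
# Hypothetical CVE records; real feeds use different field names.
cves = [
    {"id": "CVE-A", "cvss_v2": 10.0, "vector": "NETWORK", "user_interaction": False},
    {"id": "CVE-B", "cvss_v2": 9.3,  "vector": "NETWORK", "user_interaction": False},
    {"id": "CVE-C", "cvss_v2": 10.0, "vector": "LOCAL",   "user_interaction": False},
    {"id": "CVE-D", "cvss_v2": 10.0, "vector": "NETWORK", "user_interaction": True},
]

# Keep only CVEs matching all three criteria from the list above.
critical_remote = [
    c for c in cves
    if c["cvss_v2"] == 10.0
    and c["vector"] == "NETWORK"
    and not c["user_interaction"]
]

print([c["id"] for c in critical_remote])  # only CVE-A passes all three filters
```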

We’ve seen previously that Tenable have remote checks for 643 critical vulnerabilities, and OpenVAS have remote checks for 450 critical vulnerabilities. Tenable release remote checks for critical vulnerabilities within 1 month of the details being made public 58.4% of the time, while Greenbone release their checks within 1 month 76.8% of the time. So, even though OpenVAS has fewer checks for those critical vulnerabilities, you are more likely to get them within 1 month of the details being made public. Let’s break that down further.

In Figure 10 we can see the absolute number of remote checks released on a given day after a CVE for a critical vulnerability has been published. What you can immediately see is that both Tenable and OpenVAS release the majority of their checks on or before the day the CVE details are made public; Tenable have released checks for 247 CVEs, and OpenVAS have released checks for 144 CVEs. Then, since 2010, Tenable have released remote checks for 147 critical CVEs, and OpenVAS for 79 critical CVEs, on the same day as the vulnerability details were published. The number of vulnerabilities then drops off across the first week and drops further after 1 week, as we would hope for in an efficient time-to-release scenario.

Figure 10: Absolute numbers of critical CVEs with a remote check release date from the date a CVE is published

While raw numbers are good, Tenable have a larger number of checks available, so it could be unfair to go on raw numbers alone. It’s potentially more important to understand the likelihood that OpenVAS or Tenable will release a check for a vulnerability on any given day after a CVE for a critical vulnerability is released. In Figure 11 we can see that Tenable release 61% of their checks on or before the date that a CVE is published, and OpenVAS release a shade under 50% of their checks on or before the day that a CVE is published.
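That "percentage chance" view is just the cumulative share of checks released by each day relative to CVE publication. A minimal sketch with made-up delays (negative values mean the check shipped before the CVE was published):

```python
def share_by_day(delays: list[int], day: int) -> float:
    """Fraction of checks released on or before `day` days after CVE publication."""
    return sum(1 for d in delays if d <= day) / len(delays)

# Illustrative delays in days relative to CVE publication (not real data).
delays = [-30, -2, 0, 0, 1, 5, 14, 40]

print(share_by_day(delays, 0))   # share released on or before day 0 -> 0.5
print(share_by_day(delays, 30))  # share released within a month -> 0.875
```

Plotting this fraction for every value of `day` gives a cumulative curve like the one in Figure 11.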

Figure 11: Percentage chance of delay for critical vulnerabilities

So, since 2010 Tenable has more frequently released their checks before or on the same day as the CVE details have been published for critical vulnerabilities. While Tenable is leading at this point, Greenbone’s community feed still gets a considerable percentage of their checks out on or before day 0.

I thought I’d go one step further and see if I could identify any trend in each organisation’s release delay: are they getting better year-on-year, or are their releases getting later? In Figure 12 I’ve taken the mean delay for critical vulnerabilities per year and plotted it. The mean as a metric is particularly influenced by outliers in a data set, so I expected some wackiness and limited the mean to checks released between 180 days prior to a CVE being published and 31 days after. These seem like reasonable limits, as anything released more than 6 months before the CVE details is potentially a quirk of the check details, and anything after a 1-month delay is less important for us.
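That trimming step can be sketched like this (the records below are illustrative only; the real analysis would run over the full check-release dataset):

```python
from collections import defaultdict
from statistics import mean

def yearly_trimmed_mean(records, low=-180, high=31):
    """Mean release delay per year, ignoring delays outside [low, high] days."""
    by_year = defaultdict(list)
    for year, delay in records:
        if low <= delay <= high:  # drop outliers that would skew the mean
            by_year[year].append(delay)
    return {year: mean(delays) for year, delays in sorted(by_year.items())}

# (year, delay in days relative to CVE publication) - made-up records
records = [(2019, -5), (2019, 0), (2019, 400), (2020, 2), (2020, -1), (2020, -200)]

print(yearly_trimmed_mean(records))  # the 400 and -200 outliers are excluded
```

Clipping before averaging is a blunt but simple alternative to a median or a winsorised mean; the cut-offs chosen here mirror the 180-day and 31-day limits described above.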

What can we take away from Figure 12?

  • We can see that between 2011 and 2014 Greenbone’s release delay was better than Tenable’s, by between 5 and 10 days.
  • In 2015 things reverse, and for 3 years Tenable is considerably ahead of Greenbone, by a matter of weeks.
  • But then in 2019 things get much closer, and Greenbone seem to be releasing, on average, about a day earlier than Tenable.
  • For both vendors, the trendline over the 11-year period is very close, with Tenable marginally beating Greenbone.
  • We don’t yet have any 2021 data for OpenVAS checks for critical show-stopper CVEs.

Figure 12: Release delay year-on-year (lower is better)

With the larger number of checks, and a greater percentage of their remote checks for critical vulnerabilities released early, Tenable could win this category. However, with the 2019 and 2020 delay times going to OpenVAS, and the trend lines being so close, I am going to declare this one a tie.

The takeaway from this is that both vendors are getting their checks out the majority of the time either before the CVE details are published or on the day the details are published. This is overwhelmingly positive for both scanning solutions. Over time both also appear to be releasing remote checks for critical vulnerabilities more quickly.
