What Are Fake Clickjacking Bug Bounty Reports?
Beware: there’s a new trend in play involving fake clickjacking bug bounty reports. Read on to understand what to look out for and how to avoid being tricked.
First of all, let’s explain the term. Bug bounties, in case you’re not aware, are where an organization offers financial rewards to people who find bugs or vulnerabilities in their software. Considered by most to be an add-on rather than an alternative to robust testing of software, such a reward scheme can help businesses detect and fix issues for a fraction of what a breach would cost them. When a person finds an issue and flags it to the company offering a reward, that is known as a bug bounty report.
Maybe your organization has been contacted and notified of a bug even though you’re not running a bug bounty program. Realizing the financial benefits such information can bring to both parties, white hat hackers will often not wait for an invitation and will take a more proactive approach to hunting for bugs. If you’re not a security professional or don’t have a security team, it can be difficult to know whether a submitted report is genuine and whether you should be concerned about your security. This article should help you.
Now, to clickjacking. This is a technique cybercriminals use to trick a user into clicking on something they believe to be fairly innocent, only to either perform a dangerous action on another website or reveal sensitive information. As an example, a cybercriminal could load a transparent page over a target website so that when the user clicks a button to perform a seemingly simple task, they instead click a button on the target website.
So what’s the problem?
It can be hard to sort the wheat from the chaff when such reports arrive from people positioning themselves as security experts. With Intruder Vanguard we can review bug bounty reports for you, as well as keep a continuous watch over your systems to identify, analyse, and remediate critical vulnerabilities faster.
As mentioned earlier, clickjacking tricks a user into performing a particular action on a target website. In order for it to work, however, the web application needs to have authenticated areas otherwise there’s no sensitive actions to be performed. If yours does not have authenticated areas, any clickjacking bug bounty report is likely to be false. If your web app does have authenticated areas, be aware that many scanners won’t be able to monitor these areas so will be unable to report clickjacking. For full coverage, our authenticated web application scanner can be used to detect this issue.
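As a quick first-pass sanity check before trusting (or dismissing) such a report, you can inspect your site’s response headers for the standard anti-clickjacking protections. Here’s a minimal sketch in Python; the helper name and example URL are illustrative, and this only checks headers, not whether any framing is actually exploitable:

```python
from urllib.request import urlopen


def framing_allowed(headers):
    """Return True if nothing in these response headers blocks framing
    by other sites. Header lookup here is a plain dict, so keys are
    assumed to be in their canonical capitalisation."""
    xfo = headers.get("X-Frame-Options", "").strip().upper()
    if xfo in ("DENY", "SAMEORIGIN"):
        return False
    csp = headers.get("Content-Security-Policy", "").lower()
    if "frame-ancestors" in csp:
        # CSP frame-ancestors supersedes X-Frame-Options in modern browsers
        return False
    return True


# Usage against a live site (hypothetical URL):
# with urlopen("https://example.com/") as resp:
#     print(framing_allowed(dict(resp.headers)))
```

If this returns True for pages with authenticated, sensitive actions, a clickjacking report may be worth a closer look; if your headers already block framing, be sceptical of any proof-of-concept claiming otherwise.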
In our customer's case, they were using the header "X-Frame-Options: SAMEORIGIN" to prevent clickjacking attacks so were surprised to see that the proof-of-concept included in the report did allow their website to render in an iFrame. While the script in use will allow most websites to be rendered in an iFrame, it doesn't indicate that they're vulnerable to clickjacking. For example, even our own portal renders in an iFrame using our own similar proof of concept:
The proof-of-concept worked by requesting the website through a proxy server which implements a policy to tell browsers to allow the response to be read by the attacker's website. However, the browser would see this proxy server as a different website, so wouldn't include the normal cookies or authentication tokens in requests. This means that users wouldn't be logged in on the website, so they can't be tricked into performing sensitive actions using clickjacking. The report is misleading, and doesn't actually indicate that the website is vulnerable to clickjacking.
Low quality reports like this one are unfortunately quite common, and are often accompanied with a request for money - a practice called "beg bounties". These "issues" are often generated by scanners and then sent to as many organizations as possible without consideration of the real risk. If you would like to find out more, or get clear reports of such issues with risk ratings not designed to induce panic, then you can talk to us on our website.
How to tell a high quality report from a low quality one
While it’s not always easy to determine whether a report contains an issue you should be concerned about, there are a few things you can look out for to help judge the quality of a report.
High quality bug bounty reports will usually be specific to your situation. They will clearly explain the impact of a vulnerability, framing it in terms of what can be done on your website, and how that affects your organisation. This is in contrast to lower quality reports, which are often templated and talk about the risk in generic terms which may not apply to you. The proof-of-concept in high quality reports will often be tailored to your website, and should clearly demonstrate the impact of the vulnerability without causing any damage.
In our customer's case the report was very clearly templated, even copying a couple of sentences directly from an old report on HackerOne:
Proxy protection NOT used , i can bypass X-Frame-Options header and recreate clickjacking on the whole domain. I see that you don't have a reverse proxy protection that allows attackers to proxy your website rather than iframe it.
While not directly related to the quality of the report, we’ve found that bug hunters who enquire about bounties up front without providing any details of their findings usually provide poor reports. If you inform them that you either can’t offer a bounty without first seeing the report, or don’t offer bounties at all, many of them won’t respond.
Here at Intruder, we’re ready to help you avoid costly data breaches through continuous automated scanning, direct access to expert security professionals, and assistance in distinguishing genuine bug bounty submissions from the fakes.
Our ability to probe deeper and find more vulnerabilities - or in this case, validate potential weaknesses - can have a direct and significant impact on your business. Sign up for a 14-day free trial today.
- Raw CVE Coverage
- Risk Rating Coverage
- Remote Check Types
- Check Publication Lead Time
- Local/Authenticated vs Remote Check Prioritisation
- Software Vendor & Package Coverage
- Headline Vulnerabilities of 2021 Coverage
- Analysis Decisions
Red teamers, security researchers, detection engineers and threat actors have to actively research the type of vulnerability and its location in the vulnerable software, and build an associated exploit.
Tenable release checks for 47.43% of the CVEs they cover in this window, and Greenbone release 32.96%.
Red teamers, security researchers, detection engineers and threat actors now have access to some of the information they previously had to hunt down themselves, speeding up potential exploit creation.
Tenable release checks for 17.12% of the CVEs they cover in this window, and Greenbone release 17.69%.
The likelihood of exploitation in the wild is steadily increasing.
Tenable release checks for 10.9% of the CVEs they cover in this window, and Greenbone release 20.69%.
We’re starting to lose some of the benefit of rapid, automated vulnerability detection.
Tenable release checks for 9.58% of the CVEs they cover in this window, and Greenbone release 12.43%.
Any detection released a month after the details are publicly available is decreasing in value for me.
Tenable release checks for 14.97% of the CVEs they cover over a month after the CVE details have been published, and Greenbone release 16.23%.
With this information in mind, I wanted to check how long it takes both Tenable and Greenbone to release a detection for their scanners. The following section will focus on vulnerabilities which:
- Have CVSSv2 rating of 10
- Are exploitable over the network
- Require no user interaction
These are the ones where an attacker can point their exploit code at your vulnerable system and gain unauthorised access.
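The selection criteria above can be expressed as a simple filter. The record field names below are hypothetical, loosely modelled on CVSSv2 vector components, just to make the cut-off concrete:

```python
def is_critical_remote(cve):
    """Keep only the worst-case CVEs: CVSSv2 score of 10, exploitable
    over the network, and requiring no user interaction.
    Field names are illustrative, not from any particular data feed."""
    return (cve["cvss_v2"] == 10.0
            and cve["access_vector"] == "NETWORK"
            and not cve["user_interaction_required"])


# Hypothetical records to show the filter in action
cves = [
    {"id": "CVE-A", "cvss_v2": 10.0, "access_vector": "NETWORK",
     "user_interaction_required": False},
    {"id": "CVE-B", "cvss_v2": 9.3, "access_vector": "NETWORK",
     "user_interaction_required": True},
]
critical = [c["id"] for c in cves if is_critical_remote(c)]
print(critical)  # ['CVE-A']
```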
We’ve seen previously that Tenable have remote checks for 643 critical vulnerabilities, and OpenVAS have remote checks for 450 critical vulnerabilities. Tenable release remote checks for critical vulnerabilities within 1 month of the details being made public 58.4% of the time, but Greenbone release their checks within 1 month 76.8% of the time. So, even though OpenVAS has fewer checks for those critical vulnerabilities, you are more likely to get them within 1 month of the details being made public. Let’s break that down further.
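To make that concrete, a quick back-of-the-envelope calculation with the figures quoted above shows why the two claims aren’t contradictory:

```python
# Remote checks for critical vulnerabilities (figures quoted in this section)
tenable_total, tenable_within_month = 643, 0.584
openvas_total, openvas_within_month = 450, 0.768

tenable_count = round(tenable_total * tenable_within_month)   # ~376 checks
openvas_count = round(openvas_total * openvas_within_month)   # ~346 checks

# Tenable still ships slightly more checks within a month in absolute
# terms, but for any single vulnerability that OpenVAS covers, the odds
# of its check arriving within a month are higher (76.8% vs 58.4%).
print(tenable_count, openvas_count)
```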
In Figure 10 we can see the absolute number of remote checks released on a given day after a CVE for a critical vulnerability has been published. What you can immediately see is that both Tenable and OpenVAS release the majority of their checks on or before the day the CVE details are made public: ahead of publication, Tenable have released checks for 247 CVEs and OpenVAS for 144. Then, since 2010, Tenable have released remote checks for 147 critical CVEs, and OpenVAS for 79, on the same day as the vulnerability details were published. The number of checks then drops off across the first week, and drops further after 1 week, as we would hope for in an efficient time-to-release scenario.
While raw numbers are good, Tenable have a larger number of checks available, so it could be unfair to go on raw numbers alone. It’s potentially more important to understand the likelihood that OpenVAS or Tenable will release a check for a vulnerability on any given day after a CVE for a critical vulnerability is published. In Figure 11 we can see that Tenable release 61% of their checks on or before the date that a CVE is published, and OpenVAS release a shade under 50% of theirs.
So, since 2010 Tenable has more frequently released their checks before or on the same day as the CVE details have been published for critical vulnerabilities. While Tenable is leading at this point, Greenbone’s community feed still gets a considerable percentage of their checks out on or before day 0.
I thought I’d go a step further and see if I could identify any trend in each organisation’s release delay: are they getting better year-on-year, or are their releases getting later? In Figure 12 I’ve taken the mean delay for critical vulnerabilities per year and plotted it. The mean as a metric is particularly influenced by outliers in a data set, so I expected some wackiness and limited the data to checks released between 180 days before and 31 days after a CVE being published. These seem like reasonable limits: anything more than 6 months before the CVE details were released is potentially a quirk of the check details, and anything after a 1-month delay is less important for us.
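The clipped mean described above can be sketched in a few lines. The sample delays here are hypothetical, purely to illustrate how the windowing discards outliers:

```python
def clipped_mean_delay(delays_days, lo=-180, hi=31):
    """Mean release delay in days, ignoring checks released more than
    180 days before or 31 days after the CVE was published.
    Negative values mean the check shipped before the CVE."""
    kept = [d for d in delays_days if lo <= d <= hi]
    return sum(kept) / len(kept)


# Hypothetical delays: one very early outlier, some timely checks,
# and one very late outlier; -400 and 365 fall outside the window.
sample = [-400, -10, 0, 0, 5, 365]
print(clipped_mean_delay(sample))  # mean of [-10, 0, 0, 5] = -1.25
```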
What can we take away from Figure 12?
- We can see that between 2011 and 2014 Greenbone’s release delay was better than that of Tenable, by between 5 and 10 days.
- In 2015 things reverse and for 3 years Tenable is considerably ahead of Greenbone by a matter of weeks.
- But then in 2019 things get much closer, and Greenbone seem to be releasing on average about a day earlier than Tenable.
- For both, the trendline over the 11-year period is very close, with Tenable marginally beating Greenbone.
- We don’t yet have any data for 2021 for OpenVAS checks for critical show-stopper CVEs.
With the larger number of checks, and a greater percentage of their remote checks for critical vulnerabilities released on or before day 0, Tenable could win this category. However, with the 2019 and 2020 delay times going to OpenVAS, and the trend lines being so close, I’m going to declare this one a tie.
The takeaway from this is that both vendors are getting their checks out the majority of the time either before the CVE details are published or on the day the details are published. This is overwhelmingly positive for both scanning solutions. Over time both also appear to be releasing remote checks for critical vulnerabilities more quickly.