Laurens van der Blom

Software architect. Security professional (CISSP). Fitness/bootcamp guru. Obstacle runner. Ski lunatic.



Disclosing responsibly with responsible disclosure

Published: Saturday, February 2, 2019
Last modified: Thursday, October 24, 2019
Word count: 1635. Estimated reading time: 8 minutes.

TL;DR

If you want to contribute to improving the internet's security and play it safe, you basically have two options: 1) work for a security research firm; and/or 2) target only environments run by organisations with a responsible disclosure policy and/or bug bounties. In all other cases, just be sure that you've weighed your options and decided that it's worth your effort.

Introduction

When the following reaches the news, you know it's bad, very bad. (ReBoot, anyone?)

Shocking indeed. It's not the first time something like this has happened. Here's a recap (from a little further back) of similar cases where legal threats were not uncommon:

White hat security researchers, who play an important role in protecting people's digital lives and fighting cybercrime, are not always so lucky: they can take pretty big hits when their publications about security risks and vulnerabilities are not appreciated by those responsible for those risks and vulnerabilities in the first place. They often face legal threats, which can be very difficult to fight.

Such things can quickly turn into nightmares for the people involved, who simply want to do good for everyone. All the more reason to think about how such unfortunate events can be prevented. Luckily, some people have already done that for us. Over the past few years there have been some major, positive shifts in the way these publications are handled by governments and companies, but we're not quite there yet.

Update 2019-02-09: Unfortunately, not everyone gets it. Recently, there have been some pretty bad developments regarding the disclosure of a serious security vulnerability in a casino's server: its API is publicly accessible without authentication and leaks sensitive data. The vendor, Atrient, who owns the technology, denies this, in spite of clear evidence. That leads to all kinds of trouble. You can read the story here. So, yes, we're not quite there yet.

Responsible disclosure

In order to assist security researchers, there's this thing called responsible disclosure, which provides guidelines, rules and means to report a security risk or vulnerability to an organisation in a responsible manner. Cooperation between governments and companies on the one hand and the internet community on the other is very valuable, and it is necessary in order to improve the internet's security. This approach is basically like scaling up resources immensely to provide a better and safer internet.

The key here is that information systems are intertwined with our lives, every single day and night. Of course, they are also connected with each other, every single millisecond. The world operates on such systems all the time. When a problem occurs, such as downtime, it can have a huge impact on our communities and potentially disrupt our lives. Security problems add another dimension to this impact, especially when it comes to personal data; think of identity theft, for instance.

Such information systems are run by governments and companies, and it may be difficult to contact them about security problems, as legal threats may be around the corner. However, organisations can publish their own responsible disclosure policy to make their stance on security matters clear. Acting within the boundaries and rules of this policy helps the security researcher stay away from legal threats, paving the way for constructive cooperation between the security researcher and the organisation.

This is where security.txt comes in, as it allows publication of such a policy in a clear and standardised way. A security researcher won't have to search the inner depths of a website or contact the organisation via ambiguous channels to find the exact policy: it's already there, and security.txt points the right way. This website also has one.
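For illustration, here's a minimal sketch of what such a file might look like, served at /.well-known/security.txt (the field names follow the securitytxt.org draft; the addresses and URLs below are placeholders):

```
# Hypothetical security.txt for example.com; all addresses and URLs
# below are placeholders.
Contact: mailto:security@example.com
Encryption: https://example.com/pgp-key.txt
Policy: https://example.com/responsible-disclosure
```

Contact tells a researcher where to report, Encryption points to a PGP key for confidential correspondence, and Policy links to the organisation's responsible disclosure policy.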

The following tweet shows how it's not done:

Having a security.txt file would have made this much easier, pointing the security researcher in the right direction on security matters. Owners of the affected information systems – these would be the organisations themselves – are responsible for fixing the security problems.

There are also platforms to make the correspondence easier, such as the one from the NCSC in the UK:

In the Netherlands, the Dutch NCSC has provided guidelines as well and helps security researchers do their work. The NCSC acts as an intermediary between the security researcher and a governmental organisation. Basically, when a security risk or vulnerability has been found in a system that is owned and run by a governmental organisation, the researcher can simply contact the NCSC, which will assist in disclosing the details of the problem. That way, the organisation can fix it before the details are made public to share the knowledge with the internet community, which helps prevent similar problems elsewhere on the internet.

Please consult your local security authority (such as the NCSC in the UK or the Netherlands) for the exact guidelines and rules regarding responsible disclosure, as well as any specific responsible disclosure policies of organisations, before doing anything. Never do more than necessary to prove that there is a security problem, do not rely on forbidden means (according to the aforementioned guidelines and rules) to do so, and keep it a secret between you and the affected organisation(s), at least until the problem has been fixed.

Bug bounties

A step further than responsible disclosure is getting paid for it: the so-called bug bounty. The European Commission is funding several bug bounty programmes in order to stimulate research into security risks and vulnerabilities in free open source projects that EU institutions employ in their environments. A monetary reward is given in return for anything found to be of considerable risk, according to their criteria.

Not only that, there are also private organisations, such as Google and Facebook, that fund their own bug bounty programmes in order to improve the security of their own software.

In both cases, researchers receive money for their efforts. There are a few platforms where this is made possible, such as HackerOne and intigriti. It may be worth your while to check them out and help improve the security of free open source software projects or companies' proprietary software. Just be sure to play nice and follow the rules.

Full disclosure or public shaming

There's another way to get organisations moving when a security problem has been found, although it must be used wisely: full disclosure, better known as public shaming. Security expert Troy Hunt has written an excellent article on this subject, so I'll refer you to it.

The major factor here is that no one wants bad publicity, and social media can be extremely effective in generating it. It just might be the nudge that organisations need in order to fix whatever security problems they may have. However, it's only effective when correspondence is kept professional, polite and to the point, so as to help the organisations identify the problem and allow them to fix it, pretty much in the same way that the guidelines and rules of responsible disclosure prescribe. In fact, although tweeting or posting about it makes the problem public instantly, it's roughly the same as responsible disclosure when the problem is approached in a civil manner. The public just adds some extra (positive) pressure.

For larger security problems with more impact, it may still be wise to follow responsible disclosure without going public first. Unfortunately, there's no clear line between “small security problems” and “large security problems”, as it's rather subjective, in spite of any responsible disclosure policy that may be present.

For instance, I don't think there's anything wrong with reporting on social media that a static (portion of a) website is served over HTTP instead of HTTPS (there's an interesting read here about this, by the way; you'd be surprised how many large websites still use insecure HTTP).
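If you're curious whether a site falls into that category, a quick way to find out is to check whether a plain HTTP request gets redirected to HTTPS. Here's a minimal sketch using only Python's standard library (example.com is a placeholder host; a thorough check would also follow the whole redirect chain and look at HSTS headers):

```python
# Minimal sketch: does a site redirect plain HTTP to HTTPS?
# Uses only the standard library; "example.com" is a placeholder.
import http.client

def redirects_to_https(host: str) -> bool:
    conn = http.client.HTTPConnection(host, 80, timeout=10)
    try:
        conn.request("GET", "/")
        resp = conn.getresponse()
        location = resp.getheader("Location", "")
        # A well-configured site answers with a 3xx redirect to an https:// URL.
        return 300 <= resp.status < 400 and location.startswith("https://")
    finally:
        conn.close()

if __name__ == "__main__":
    host = "example.com"
    if redirects_to_https(host):
        print(f"{host} redirects plain HTTP to HTTPS")
    else:
        print(f"{host} serves content over insecure HTTP")
```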

However, it's dangerous to publicly report that some API of some website leaks all personal details, including hashed passwords (see the article referred to at the beginning of this post).

So, for that one I can only say that using your common sense, while following responsible disclosure guidelines and rules where present, may be the safest and most pragmatic way to tackle security related matters together with governmental organisations and companies. If you want sure-fire protection, then you may be better off working for a security firm, as such a firm has the manpower and the financial means to protect its researchers. Of course, there's also the fact that such firms usually offer legal penetration testing as a (paid) service, so their customers always give permission and define the boundaries and rules for the penetration tests.

Update 2019-10-24: One of the readers, Natali Koslorova, mentioned another source that may be interesting for a bit more in-depth coverage about this topic. You can find it here.