Kate O’Flaherty explores the ins and outs of vulnerability disclosure and shines a light on the intricate process of flaw finding
As an increasing number of products are released without security built into their design, the vulnerability disclosure landscape is booming. Many security researchers now make their living by finding and reporting vulnerabilities through bug bounty programs such as those run by Google and Facebook. Others work independently, alongside their day jobs and sometimes without pay.
It’s a tough and competitive job that requires a certain type of mindset. Generally, security researchers like ‘playing’ with things: taking them apart and figuring out how they work. This way of thinking is extremely valuable to vendors, because it means the researcher can often offer a solution to the issue as they report it.
It is also a job that demands dedication: security researchers can face major obstacles when finding and reporting vulnerabilities, and can even expose themselves to significant risk from disgruntled vendors. “Depending on the vendor, you open yourself up to legal action – and some companies don’t take kindly to disclosure,” says Andy Gill, a penetration tester and security consultant.
At the same time, getting through to the right people is a challenge when operating outside of a bug bounty program, says security researcher Chrissy Morgan. Adding to this, contact details for security departments are often missing from firms’ websites. “Sometimes this can make researchers feel uncomfortable,” says Morgan. “We might need to then release details to other staff inside a company who really should not be privy to this information.”
Independent security researcher Sean Wright agrees. “You find an issue, but then you have to work out who to report it to. There is security.txt – a proposed standard which allows websites to define security policies – but it’s not widely used.”
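For reference, security.txt is simply a small plain-text file served from a well-known path on a website, typically /.well-known/security.txt, telling researchers where to send reports. Below is a minimal sketch of what such a file might contain; the domain, addresses and policy URL are hypothetical, and the field names follow the security.txt proposal.

```
# /.well-known/security.txt (illustrative example - all details hypothetical)
Contact: mailto:security@example.com
Contact: https://example.com/security/report
Expires: 2026-12-31T23:00:00.000Z
Policy: https://example.com/security/disclosure-policy
Preferred-Languages: en
```

A researcher who finds an issue can then check one predictable location, rather than guessing at email addresses or hunting down staff on LinkedIn.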
These reporting difficulties can drive researchers to desperate measures. Security researchers often resort to LinkedIn to contact CISOs and inform them of the problem. “You can get a limited response,” concedes Wright. Worse still, he adds, “you sometimes get no reply, but you find the issue has been fixed by the company.”
"You find an issue, but then you have to work out who to report it to"
Challenges in Reporting
It can be hard to find the right person when reporting an issue, agrees Gemma Moore, director at Cyberis – the starting point is often a customer service number. Other companies, however, have a well-defined process, she says. “Firms such as Facebook have bug bounties, so there’s a proper process.”
Large and well-established firms such as Microsoft and Cisco have set up channels to allow people to report vulnerabilities easily, says Jonny Milliken, manager, research team at Alert Logic. However, he points out that “the patching timeline is challenging for bigger organizations if the vulnerability affects a large number of products.”
At the same time, companies are not always receptive to researchers. Gill has had major issues when reporting directly to vendors. “Some of the bugs I’ve reported have taken months – and in one case, years – to fix.”
Gill cites the example of a “pretty critical vulnerability” he reported to a software vendor. “They took almost a year to fix it – this issue affected more than 10,000 installs on hospitality and government applications.”
Morgan has also faced a particularly challenging incident. “A domain registrar was disclosing information. I could run a Python script against their website to retrieve all ‘Whois’ registrant details for European domains. This is classed as ‘thick data’ and domain name registry Nominet had quite clearly stated this was not to be disclosed post-GDPR.
“They were relying upon someone visiting the webpage to trigger an update script. This would then redact data from the website and show ‘thin data.’ The thing is, they said it wasn’t an issue in the way they were doing things, refused to talk to me any further about it, and proceeded to change the specific bit of code the next day. The denial was the concerning part; I’m just glad someone internally saw the flaw in their logic and got it fixed.”
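For illustration only, the kind of check Morgan describes need not be sophisticated: a few lines of Python calling a registrar’s public lookup endpoint can reveal whether ‘thick’ registrant details are still being returned instead of the redacted ‘thin’ record. Everything in the sketch below – the endpoint, the response fields and the sample domains – is hypothetical.

```python
# Hypothetical check for 'thick' vs 'thin' Whois data on a registrar's
# public lookup endpoint. The URL, fields and domains are illustrative only.
import requests

LOOKUP_URL = "https://registrar.example/api/whois"  # hypothetical endpoint
PERSONAL_FIELDS = {"registrant_name", "registrant_email", "registrant_address"}

def is_thick(domain: str) -> bool:
    """Return True if the lookup response still exposes registrant details."""
    resp = requests.get(LOOKUP_URL, params={"domain": domain}, timeout=10)
    resp.raise_for_status()
    record = resp.json()
    # 'Thin' data should have these fields redacted or absent post-GDPR.
    return any(record.get(field) for field in PERSONAL_FIELDS)

if __name__ == "__main__":
    for domain in ["example.co.uk", "example.eu"]:  # sample domains only
        print(domain, "thick" if is_thick(domain) else "thin")
```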
“The worst thing we get is silence,” says Ken Munro, partner at Pen Test Partners. However, he also highlights a positive response from a vendor. “We did a vulnerability disclosure with Cisco recently and they were amazing – they acknowledged it within minutes and kept us posted throughout the process.”
"The worst thing we get is silence"
Disclosure Models
It’s easy to get it wrong, and companies aren’t always as receptive to vulnerability reports as they should be, so what is the best way of doing things? There are essentially three models of vulnerability disclosure, says Marco Rottigni, chief technical security officer EMEA at Qualys.
The first is non-disclosure – where no information about the vulnerability is published or shared with the affected vendor. The next is full disclosure – where information about the vulnerability is published immediately, with or without a patch from the vendor. Lastly, coordinated disclosure sees information about the vulnerability published after the vendor has had the time to prepare a patch.
Of course, the preferred model is coordinated disclosure, Rottigni says. “It allows the vendors to develop and test the patch before the details are made public; end-users can patch their systems as soon as the details are made public and researchers can get credit for their work.”
However, bad practice among security researchers can give others a bad name. Trust can be eroded by researchers who publish without notifying the vendor.
Sometimes a researcher has a vendetta, says Wright, citing the example of a researcher thought to have mental health problems, who released multiple zero-day exploits without giving Microsoft the opportunity to patch. “That’s irresponsible and unprofessional and doesn’t give the company a chance.”
At the same time, exploitation of vulnerabilities by researchers is an “absolute no-no,” says Munro. “It is important to stay on the right side of the law.”
“If I do want to break into anything, I make sure it’s on my own network,” says Wright. “When I do touch companies’ infrastructure, I make sure it’s things that are publicly available. A good example is a database – if you find records open, don’t go and download the whole thing, take one or two. Otherwise, you open yourself up to GDPR implications and other threats.”
"If I do want to break into anything, I make sure it’s on my own network"
Best Practice
It’s clear what shouldn’t be done regarding vulnerability disclosure, but adding to the complexity is the fact that there isn’t an ‘official’ regulated best practice. Morgan points out that there are standards aimed at vendors: ISO/IEC 29147, which covers vulnerability disclosure, and ISO/IEC 30111, which covers vulnerability handling processes. However, she says: “Many companies unfortunately do not have a process in place.
“From a researcher’s point of view, I think it is very circumstance dependent,” adds Morgan. “Many of us rely on our own moral compass, my mantra is: Do no harm, take no sh*t.”
Reporting vulnerabilities ethically can mean varying the time a vendor is given to fix the issue, depending on its seriousness. “The general guideline when it comes to responsible disclosure is typically 60 to 90 days,” says Gill. However, he says, for more serious vulnerabilities, some researchers may opt for a shorter window.
“We give them 90 days and extend that if the organization has good reason,” says Munro. “Most of the time, all we want is a response.”
“There is a lot of nuance around how long you give them to fix it,” agrees Moore. “The more mature vendors will normally be receptive and provide a timeline to fix things. If they are being open with you, you will hold off on publishing until a fix is ready.
“However, sometimes vendors have little interest in fixing the issue. Then you have to decide whether disclosing it publicly with no fix in place is in the best interest to get the vendor to acknowledge it. It’s a judgement call – if you have found it, someone else might too.”
Similar issues need to be taken into account when creating successful bug bounty programs, Hazel Koch, bug bounty advisor at HackerOne, tells Infosecurity. Yet, lots of companies don’t have vulnerability disclosure policies in place, she says. “If companies don’t have a policy and an ethical hacker finds a vulnerability, everyone is in a difficult position. The hacker doesn’t know how to communicate and it puts both the company and the ethical hacker at a disadvantage.”
It’s certainly a complex landscape to navigate, but is the industry better off as a result of vulnerability disclosure? Wright thinks it is. “The vast majority of researchers are finding issues and reporting them for the greater good and helping companies out regardless of how they react. They often do this in their own time and with their own resources.”
“Vulnerability disclosure makes us all safer,” agrees Moore. “It acts as an incentive for vendors to take security seriously, fix issues and bake security into their design.”
Timeline to Disclosure
- First, a security researcher finds an issue. If there is a bug bounty program, they will report it through that program. Otherwise, they will find a preferred point of contact at the vendor and arrange to communicate the vulnerability details securely
- Once communication has been established, ideally the researcher and vendor will agree a timeline. Google’s Project Zero sticks to a 90-day timeline for a fix. Travis Biehn, technical strategist and research lead at Synopsys, says: “45 days is a good rule of thumb, but there isn’t a one-size-fits-all.”
- If a timeline has been agreed, the researcher will regularly check in, while maintaining a balance to ensure contact isn’t too frequent
- If the company does not fix the issue before the agreed time, or does not respond to the researcher’s repeated contact, the researcher must then decide whether to release limited details about the vulnerability or to extend the timeline