Phil Muncaster investigates whether an ongoing dispute between Google and Microsoft could change the way we fix security flaws in the future
At the start of the year, Microsoft and Google became embroiled in a very public spat over vulnerability disclosure. The two computing giants, never the best of friends, became more animated than we’ve seen them for some time, exchanging barbed comments via blog posts and social channels. The reason? Google’s Project Zero initiative, announced last July, and its strict rule of revealing vendors’ software bugs publicly after 90 days if they have not been patched.
So who exactly is the bad guy in all of this? Microsoft, for failing to patch as quickly as Google would like, or the Mountain View giant, for disclosing flaws before security fixes were ready? And is the ongoing dispute likely to change how the industry deals with vulnerability disclosure?
A Bit of History
It all kicked off when Google decided to release details of a Windows flaw just two days before it was due to be fixed in January’s Patch Tuesday. The bug itself was not particularly critical, needing a victim machine to have already been compromised in order to work. However, plenty of commenters let their feelings be known on the related Google Security Research forum post.
“Automatically disclosing this vulnerability when a deadline is reached with absolutely zero context strikes me as incredibly irresponsible and I’d have expected a greater degree of care and maturity from a company like Google,” wrote one user.
Microsoft then waded in with a strongly worded blog post from Chris Betz, senior director of the Microsoft Security Response Center.
“Although following through keeps to Google’s announced timeline for disclosure, the decision feels less like principles and more like a ‘gotcha’, with customers the ones who may suffer as a result. What’s right for Google is not always right for customers,” Betz wrote in that post. “We urge Google to make protection of customers our [combined] primary goal.”
This didn’t seem to deter Google, which released details of several additional Microsoft product flaws in the weeks that followed. Here’s the twist, though: one batch of disclosures was made before the 90-day deadline had even expired, after Microsoft effectively told the web giant that the flaws were so minor they were not worth patching. This was despite several of them – including an elevation of privilege issue and an information disclosure bug – being marked as ‘high severity’ by the Project Zero researcher in question.
The waters have been further muddied by Microsoft’s somewhat controversial decision in January to effectively make its Advance Notification Service (ANS) private. Redmond claims the decision was taken to meet customers’ evolving needs – in other words, that most firms have automatic updates or proper patching regimes which render the public blog posts and notices irrelevant. However, experts argue it was a retrograde step that could at best be viewed as an attempt to hamper transparency into product flaws, and at worst a cynical move designed to make money by forcing customers to upgrade to ‘premier’ status.
Who’s Right?
Google relented recently and allowed vendors a further 14-day grace period on top of the mandatory 90 if a patch is already slated for release, as well as promising not to disclose flaws on US public holidays or at the weekend. But there’s still a fair bit of bad blood about how it has handled the whole affair.
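In practice, the revised policy boils down to a simple deadline calculation. The sketch below is only a rough illustration of how such a timeline might be worked out, not Google’s actual tooling: the function name is invented, weekend handling simply rolls the date forward to Monday, and US public-holiday checks are omitted.

```python
from datetime import date, timedelta

# Rough sketch of a 90-day disclosure deadline with an optional 14-day grace
# period, loosely modelling the policy described above. Illustrative only.
def disclosure_date(reported: date, patch_scheduled: bool) -> date:
    deadline = reported + timedelta(days=90)
    if patch_scheduled:
        # The grace period applies only when a fix is already slated for release
        deadline += timedelta(days=14)
    # Never disclose at the weekend: push the date to the following Monday
    while deadline.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        deadline += timedelta(days=1)
    return deadline

# Example: a flaw reported on 1 October 2014 with a patch already scheduled
print(disclosure_date(date(2014, 10, 1), patch_scheduled=True))
```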
So is this a dispute we should really take sides on? For Nigel Stanley, cybersecurity practice director at consultancy OpenSky, neither firm has covered itself in glory.
“Both Microsoft and Google need to grow up and understand that great care needs to be taken in disclosing vulnerabilities in a calm, controlled way,” he tells Infosecurity. “This will reduce the opportunities for exploits to be developed and give over-worked sysadmins a chance to test and then patch their systems. Instead of throwing stones, those that live in glass houses need to give their neighbors support for the benefit of the broader industry.”
For Ed Skoudis, SANS Institute fellow, Google needs to be a bit more aware of the sheer complexity involved and the huge resources that are needed to create and test fixes for certain vulnerabilities.
“As [Google’s] systems are in the cloud with code they control, there are few hurdles to them throwing resources at a problem and getting fixes out in 90 days or less. Project Zero is a way of Google draining a swamp very quickly,” he tells Infosecurity.
“However, they don’t have the extended enterprise customer base with lots of on-premise software and legacy systems along with strict controls around applying patches,” Skoudis adds. “In some cases, 90 days is just not reasonable and a rushed fix might actually lead to more problems than it solves.”
In fact, that exact scenario has occurred several times of late, most notably in August 2014, when a Patch Tuesday fix locked computers with the dreaded Blue Screen of Death.
Responsible Disclosure
Most commentators, software vendors, and security researchers agree that responsible disclosure is the best way forward. The problem is, they don’t agree on exactly what ‘responsible’ means.
Some take the extreme view that unless a flaw is made public immediately, the vendor will procrastinate, downplay its importance and possibly even use legal means to silence the researcher – while the bad guys are working on crafting attacks in the meantime. Others say the vendor should be informed privately and given a decent amount of time to fix the flaw.
However, once again the debate rages over how much time should be allowed, and for which kinds of flaws, according to Secunia director of research and security, Kasper Lindgaard.
"Instead of throwing stones, those that live in glass houses need to give their neighbors support"Nigel Stanley, OpenSky
Infosecurity asked Lindgaard what represents ‘a timely fashion’ when it comes to giving vendors a chance to fix a vulnerability before disclosing it.
“Our policy is to give vendors six months to fix the vulnerability and issue a patch, and for a huge majority of the vendors that is plenty of time,” he says. “But it is also necessary to be flexible and adapt to circumstances: you have to look at the individual vulnerability – at how critical it is, how complex it is to fix, and how widespread the vulnerable product is.”
Jim Fox, director in KPMG’s cyber team, is actively involved in pen-testing and vulnerability identification. He argues that the most important thing from the vendor’s point of view is to be transparent with its customers.
Even if a patch is not immediately available, he explains, the vendor could quickly produce a mitigation to keep customers secure in the meantime – or customers could come up with their own temporary workarounds. Either way, Fox believes the Common Vulnerability Scoring System (CVSS) provides a ready-made, commonly understood framework that could help prioritize newly discovered flaws.
This is essential given the sheer volume of black hats out there researching flaws, he tells Infosecurity.
“People are taking a methodical approach to identifying and exploiting vulnerabilities in widespread systems. To think only one person will find them is crazy,” Fox adds. “You don’t need to put out a press release each time you find a flaw – that’s irresponsible. But at the same time, if you alert a vendor, say they have a week or 10 days to tell their customers and announce a patch or at least mitigation, that’s fair. The vendors don’t move faster because it’s disruptive for them, so you need to make it in their best interests to do it.”
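As a rough illustration of the kind of CVSS-based triage Fox describes, the short sketch below sorts a backlog so that the highest-scoring unpatched issues surface first. The data class, field names and scores are invented for the example and do not correspond to any real advisories.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    identifier: str
    cvss_base_score: float   # CVSS base score, from 0.0 (none) to 10.0 (critical)
    patch_available: bool    # has the vendor shipped a fix or mitigation yet?

def prioritize(vulns):
    """Handle unpatched flaws first, then order by descending CVSS score."""
    return sorted(vulns, key=lambda v: (not v.patch_available, v.cvss_base_score), reverse=True)

# Illustrative backlog: identifiers and scores are placeholders, not real CVEs
backlog = [
    Vulnerability("FLAW-001", 9.8, patch_available=False),
    Vulnerability("FLAW-002", 4.3, patch_available=True),
    Vulnerability("FLAW-003", 7.5, patch_available=True),
]
for v in prioritize(backlog):
    print(v.identifier, v.cvss_base_score, "patched" if v.patch_available else "unpatched")
```

In a real patching programme the scores would of course come from vendor advisories or a vulnerability database rather than being hard-coded.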
ISACA international vice president, Ramsés Gallego, agrees that greater transparency is the way forward.
“The most important thing to do in the vulnerability management dimension, from a vendor perspective, is communication,” he tells Infosecurity. Gallego believes that in the cyber era, threats will always exist – it’s not a matter of if a company faces a vulnerability, it’s when and how quickly they’ll then recognize and mend it.
A Troubled Future?
Yet for Fox, Microsoft is moving not towards greater transparency, but away from it, as witnessed by its decision in January to end its Advance Notification Service. He argues that failing to inform all customers through notifications means many won’t even be aware of vulnerabilities which malware writers are actively developing exploits for.
“The only people to get hurt will be those who need to defend themselves. Less transparency is a mistake; I rarely learn of a vulnerability through a press release,” he adds.
So what of the future for vulnerability disclosure? Can the vendor community afford to pour more resources into developing timely patches, or will the quality of security fixes suffer and patching times inevitably lengthen as the sheer volume of identified flaws mounts?
Skoudis thinks Google’s gung-ho approach could yet have negative consequences.
“Unfortunately, there is a risk that Google may incite copycats that are maybe less wedded to a ‘don’t be evil’ philosophy,” he argues. “In future, we could have others pushing out zero days into the public forums that are incredibly dangerous without warning. And then what started out as a positive approach could turn into a major issue for everybody.”
In the meantime, it’s the sysadmins – the “poor bloody infantry” – who will be forced to pick up the pieces, according to Stanley.
“Some vendors forget that there is a world outside of their products and that sysadmins are having to test and apply patches from multiple vendors, often at the same time,” he says.
It’s difficult to foresee a time when this will ever change.
This feature was originally published in the Q2 2015 issue of Infosecurity – available free in print and digital formats to registered users