How do you know the code you’re using can be trusted? It’s a very important question – organizations and developers need to know code is genuine and hasn’t been tampered with, or they could risk inadvertently introducing malicious software into their organization.
However, establishing this trust is difficult – how do you verify that a piece of software developed by a third party in a far-flung part of the world hasn’t been tampered with? That’s where code signing comes in. Code signing is used to digitally verify a program or piece of software, confirming who developed the code and guaranteeing it hasn’t been corrupted or altered since it was signed.
Whether it came from a software provider like Microsoft, or a member of your DevOps team, a code-signing certificate should mean the code is safe to use.
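Under the hood, code signing is an asymmetric-signature scheme: the developer signs a digest of the artifact with a private key, and consumers verify the detached signature with the corresponding public key. A minimal sketch using the openssl CLI (all file names and contents here are purely illustrative):

```shell
# Illustrative RSA keypair -- in production the private key would live
# in an HSM or signing service, never loose on a build machine.
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out signer.key 2>/dev/null
openssl pkey -in signer.key -pubout -out signer.pub

# A stand-in "release" artifact and its detached signature.
printf 'release v1.0 binary contents' > app.bin
openssl dgst -sha256 -sign signer.key -out app.sig app.bin

# Consumers verify with the public key: the genuine file passes...
openssl dgst -sha256 -verify signer.pub -signature app.sig app.bin

# ...while even a one-byte change makes verification fail.
cp app.bin app_tampered.bin
printf 'x' >> app_tampered.bin
openssl dgst -sha256 -verify signer.pub -signature app.sig app_tampered.bin \
  || echo "tampered artifact rejected"
```

Real code signing tools (Authenticode, jarsigner, codesign and the like) wrap this same primitive in certificate handling and timestamping, which is what ties a signature to a verified publisher identity.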
Unfortunately, because code signing certificates provide this level of trust in the associated code, they’ve become a target for cyber-criminals, and a process designed to help protect the enterprise has itself become a source of risk. If hackers get hold of code signing certificates from legitimate companies, they can sign any code they like – such as malicious code they’ve developed to siphon user information.
Thanks to the code signing certificate, unsuspecting users will assume the application is from the legitimate source, and is therefore safe to install and use. This puts businesses at risk of a host of crimes, such as fraud and data theft, all of which can have serious repercussions for an organization’s reputation, not to mention the bottom line. So how can companies combat this threat and ensure that code signed software is the real deal?
The reality of code signing compromise
Code signing is a vital security tool when used properly, but the key question for companies is how to maintain control and guard against threats as hackers look to manipulate the code signing process.
Some businesses have tried heavily restricting developers’ ability to sign code, hoping this will help stem the tide. Yet this approach doesn’t address the core problem: the code signing system runs on implicit trust, so any signed code is waved through without further investigation.
In recent years, there have been several examples of companies with poor code signing processes being attacked – perhaps the most notorious being Avast. Two years ago, hackers broke through Avast’s defences and found unprotected code signing keys, allowing them to infect Avast’s CCleaner programme with malware and then code sign it so it appeared to be legitimate software. The hackers were then able to distribute this malicious software to Avast customers while evading detection by security tools.
The value of such attacks to hackers is clear from the fact that, two years later, Avast’s CCleaner is still being targeted as a way to distribute code signed malware.
One of the reasons this attack was so successful, and remained under the radar for so long, was that most of Avast’s security was focused on the border wall, looking for external attackers; the company hadn’t considered that its code signing certificates could be at risk.
Looking for infected applications doesn’t prevent these types of attacks, and securing border walls alone won’t protect code signing credentials. Ultimately, the only way to maintain control and defend against these attacks is to strengthen the internal processes around code signing itself.
The DevOps disconnect
The code signing system has also run into challenges due to evolutions in the software development process, namely DevOps. For organizations looking to stay competitive in a fierce market, DevOps has become essential to speedy innovation, allowing companies to release more updates and products than ever before.
However, the speed at which DevOps teams work often creates security holes: processes are skipped and corners cut to get the next release out as fast as possible. Rather than waiting for an approved code signing certificate, many developers either leave code unsigned or sign it with a self-signed certificate from an ad-hoc certificate authority inside their environment.
As DevOps environments aren’t particularly well protected, any unsigned or self-signed code could potentially have come from anyone, anywhere. Developers using this code can’t be sure it wasn’t built by a hacker with access to their environment.
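The trust problem with self-signed certificates is easy to demonstrate: anyone with openssl can mint one claiming any identity, because no third party vouches for the name inside it. The organization and file names below are invented for illustration:

```shell
# Mint a self-signed certificate claiming to be a trusted vendor.
# Nothing stops an attacker inside the environment from doing the same.
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/O=Trusted Vendor Inc/CN=release-signing" \
  -keyout selfsigned.key -out selfsigned.crt 2>/dev/null

# Subject and issuer are identical: the certificate vouches for itself.
openssl x509 -in selfsigned.crt -noout -subject -issuer
```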
That’s a security problem in itself. Organizations rarely consider that the code they use transmits data and performs functions that need to be secured – even in a development environment.
If organizations aren’t correctly tracking and managing all their DevOps certificates, they won’t know whether any have been revoked, who is signing with the active certificates, or whether code is being signed at all. Some organizations don’t even know whether they’re using certificates in DevOps, or whether their microservices and containers carry certificates at all.
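Getting that visibility can start small: for every certificate discovered in a DevOps environment, record its subject, issuer and expiry, and flag anything expired or self-issued. A rough sketch, where the certificate generated here simply stands in for one found on a container:

```shell
# Stand-in for a certificate discovered on a container or microservice.
openssl req -x509 -newkey rsa:2048 -nodes -days 7 \
  -subj "/CN=ci-container-cert" -keyout tmp.key -out found.crt 2>/dev/null

# Inventory the basics: who it claims to be, who issued it, when it expires.
openssl x509 -in found.crt -noout -subject -issuer -enddate

# -checkend 0 exits non-zero if the certificate has already expired,
# making it easy to flag candidates for revocation and replacement.
openssl x509 -in found.crt -noout -checkend 0 && echo "not yet expired"
```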
Knowing the who, what, when, where and why
Keeping code signing keys safely locked up has never been enough. Organizations need processes in place that guarantee code signing keys are only used in authorized situations: for authorized code, with authorized certificates, by authorized signatories, and with specific people required to approve each use.
Businesses must be able to track every code signing operation that happens anywhere in their company, but given the speed of development and the need for visibility across the entire enterprise, securing the code signing process manually is not viable. Companies need to automate code signing so that certificates are used at the necessary pace across the workflow. Automation also allows organizations to spot anomalies quickly enough to remediate them without impacting employees or customers, while maintaining the pace of development.
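What such automated tracking looks like in miniature: given a log of signing events, cross-check each signer against an approved list and flag the rest. Every name and the log format below are hypothetical:

```shell
# Hypothetical signing-event log: signer, certificate ID, artifact.
cat > events.log <<'EOF'
release-bot cert-0007 app-1.2.0
jdoe-laptop cert-0007 hotfix-x
build-svc cert-0042 app-1.2.1
EOF

# Only these identities are authorized to sign.
printf 'release-bot\nbuild-svc\n' > approved.txt

# Flag any event whose signer (field 1) is not on the approved list.
awk 'NR==FNR { ok[$1]; next } !($1 in ok) { print $3 ": unapproved signer " $1 }' \
  approved.txt events.log
```

In a real deployment the events would stream from signing services and CI pipelines into a monitoring system rather than a flat file, but the check is the same.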
In today’s digital world, it is imperative that businesses protect their code signing process – knowing what code was signed, with which certificate, using which code signing tool, on what machine and by whom.
An organization that can’t trust its own code is operating in a cybersecurity wild west, with its reputation and bottom line both at risk.