In a world where hard data and agents powered by Artificial Intelligence algorithms increasingly drive decisions, we still lack a way to quickly measure the ‘security’ of a software solution. Software security has many aspects, including the set of practices in place to deal with incoming vulnerabilities.
The principal element to consider, which affects the resilience of the code itself, is the attack surface. This is defined as the sum of the open interfaces presented by a software component.
For example, the attack surface of an operating system kernel is the set of system calls it supports. For a hypervisor, it is the set of virtual hardware interfaces and hypercalls it exposes. The size of the attack surface is inversely related to the security of the software: a smaller attack surface translates into more robust software.
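To make the notion concrete, here is a minimal sketch of what a crude proxy for a kernel's attack surface could look like: counting the system call entry points the running kernel exports through /proc/kallsyms. The __x64_sys_ symbol prefix is an assumption that holds on recent x86-64 Linux kernels, and the raw count says nothing about the complexity hiding behind each call, which is part of why measuring the attack surface properly remains an open problem.

```python
# A minimal sketch of one crude attack-surface proxy on Linux: counting the
# system call entry points the running kernel exports. The __x64_sys_ prefix
# is an x86-64 convention of recent kernels (an assumption; other
# architectures and older kernels name these differently), and this counts
# entry points only, ignoring the complexity behind each one.
def count_syscall_symbols(path: str = "/proc/kallsyms") -> int:
    count = 0
    with open(path) as symbols:
        for line in symbols:
            # Each line reads: "<address> <type> <name> [module]"
            fields = line.split()
            if len(fields) >= 3 and fields[2].startswith("__x64_sys_"):
                count += 1
    return count

if __name__ == "__main__":
    print(f"exported syscall entry points: {count_syscall_symbols()}")
```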
Almost everybody is aware of this fundamental principle, yet almost everybody ignores it. For example, containers are today an unstoppable force of disruption, replacing virtual machines by the dozens. Yet containers have a far larger attack surface than virtual machines: a container shares the host kernel and can invoke its hundreds of system calls, while a virtual machine can only reach the hypervisor’s much narrower set of hypercalls and virtual hardware interfaces.
Nonetheless, the direction of the industry is clear. Experts claim that we should not worry because, with the right configuration, containers are just as secure as virtual machines. In reality, we have no way to measure the attack surface, and thus no way to measure security.
Anybody can make any claim, because claims are unverifiable. Without a clear, well-understood, and uncontroversial metric, we keep developing unproven security ‘improvements’. When we switch from 1 GbE to 10 GbE, we know exactly what we are getting. When we turn on SELinux, we do not know for certain. It is unsurprising, then, that so many users switch it off, which creates a whole new set of problems.
Researchers are working intensively on novel approaches and tools to measure the attack surface, but the problem is difficult to solve, and it is made even more complicated by the multitude of tunables in modern software configurations. We are still years away from a turnkey solution. In the meantime, security management is left to gut feeling.
Although the attack surface is still not measurable, security vulnerabilities certainly are. We can assign an unequivocal number to the count and severity of the security vulnerabilities affecting a software project over time. Using past data is not as effective as measuring the current code base, because it only allows us to make assessments a posteriori.
The code keeps changing; past vulnerabilities are, by definition, not weaknesses in the present. However, they are significant from a statistical perspective. As Daniel Kahneman showed in "Thinking, Fast and Slow," even simple statistical algorithms outperform humans in complex decision-making.
In our case, the algorithm is based on measuring the past advisories affecting a software project: if their number and severity are higher than those of comparable projects, the project is less secure. When choosing a software component for the company stack, due diligence should include vulnerability measurements, and we should pay attention to their results.
However, this is arduous to put into practice today. Not all projects maintain a publicly accessible list of the security vulnerabilities that have affected their code bases. The searchability of the CVE database is improving, but extracting all the issues relevant to a particular version and configuration is still challenging.
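Still, even a rough query shows what such due diligence could look like. Here is a hedged sketch against the public NVD REST API 2.0; the endpoint and JSON field names follow NVD's published schema, while the keyword search and the example project names are assumptions of convenience, and keyword matching is exactly as noisy as the paragraph above warns.

```python
# A sketch of a severity-weighted vulnerability metric built on the public
# NVD REST API 2.0. The endpoint and JSON field names follow NVD's published
# schema; keyword search is an assumption of convenience and will also match
# unrelated products, which is the searchability problem described above.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def vulnerability_metric(keyword: str) -> dict:
    """Count CVEs matching a keyword and weight them by CVSS base score."""
    response = requests.get(
        NVD_URL,
        params={"keywordSearch": keyword, "resultsPerPage": 2000},
        timeout=60,
    )
    response.raise_for_status()
    data = response.json()

    scores = []
    for item in data.get("vulnerabilities", []):
        metrics = item["cve"].get("metrics", {})
        # Prefer CVSS v3.x, fall back to v2 for older advisories.
        for key in ("cvssMetricV31", "cvssMetricV30", "cvssMetricV2"):
            if key in metrics:
                scores.append(metrics[key][0]["cvssData"]["baseScore"])
                break

    return {
        "total_cves": data.get("totalResults", 0),
        # Sum of base scores: count and severity folded into one number.
        "weighted_score": round(sum(scores), 1),
        "mean_cvss": round(sum(scores) / len(scores), 2) if scores else 0.0,
    }

if __name__ == "__main__":
    for project in ("xen", "runc"):  # hypothetical comparison pair
        print(project, vulnerability_metric(project))
```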
Some open source projects do not even tag commits as security patches, making it impossible to distinguish security fixes from any other commit: the open source equivalent of security by obscurity. In these cases, it is up to Linux distributions (distros) to keep security records.
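By contrast, when a project does tag its security fixes, the record can be mined mechanically. The following sketch relies only on git's standard --grep, --format, and -z options; the assumption that fixes mention a CVE identifier somewhere in the commit message is the whole point, and the repository path is a placeholder.

```python
# A hedged sketch of mining a git history for tagged security fixes, for
# projects that reference CVE identifiers in their commit messages.
import re
import subprocess

CVE_RE = re.compile(r"CVE-\d{4}-\d{4,}")

def security_commits(repo: str) -> list[tuple[str, str]]:
    """Return (commit, CVE id) pairs for commits mentioning a CVE."""
    # -z separates commits with NUL bytes, so multi-line bodies parse safely.
    raw = subprocess.run(
        ["git", "-C", repo, "log", "-z", "--grep=CVE-", "--format=%H %B"],
        capture_output=True, text=True, check=True,
    ).stdout
    pairs = []
    for record in filter(None, raw.split("\0")):
        sha, _, message = record.partition(" ")
        pairs.extend((sha[:12], cve) for cve in CVE_RE.findall(message))
    return pairs

if __name__ == "__main__":
    # "." is a placeholder: run inside a clone of the project under scrutiny.
    for sha, cve in security_commits("."):
        print(sha, cve)
```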
When record keeping falls to the distros, however, the situation hardly improves: distros typically concentrate all the advisories for their full software stack into a single list that is not searchable by component. Besides, each distro ships its own version of a given piece of software, with various patches applied on top, so its advisories do not straightforwardly apply to upstream releases.
Thus, projects that push the burden of publishing security advisories onto distros leave their power users, the ones who consume the software directly from source, without any security support. Meanwhile, the few open source projects that do maintain a well-kept list of security advisories, such as the Xen Project, are rewarded with alarming articles in high-profile magazines about their security record.
This negative publicity gives newer open source projects a strong incentive to hide their advisories and obfuscate their security issues.
It is up to us, as decision makers, to reverse this perverse cycle by supporting projects that maintain an open and unambiguous record of their security issues. It is up to us, as thought leaders, to influence other projects to introduce transparent security processes and record keeping. With publicly and easily accessible data, we will be able to make fully informed decisions and verify claims made by industry experts.