Despite decades of building software and the security and audit efforts that have grown up around it, we still seem to be treading water. Cyber-criminals are still wreaking havoc, as evidenced by high-profile incidents such as Kaseya and SolarWinds, as well as less prominent attacks, often tied to supply chains, that nonetheless inflict severe damage on enterprises and their customers. So why, despite all the great tools and increased automation we have at our disposal, do these types of incidents keep happening? Until there is a better understanding, every software developer on the planet should be thinking, “I could have been just like Kaseya, but I was lucky.”
One vital reality to begin with: implementing DevSecOps is part of the answer to producing secure software, but it’s not enough. Even companies on the DevSecOps path need to add more ingredients to their CI/CD (continuous integration and deployment) pipelines to shore up defenses against the supply chain risks that so often prove problematic. That means injecting more capability into the pipeline, leveraging capability maturity models, and adopting quicker processes.
When new features are added to software, organizations use regression tests to show that the software still performs as expected with the updated features. Unfortunately, complete testing, including full regression testing, can take weeks or even longer to run. Hence, a central challenge companies encounter is releasing patches for discovered vulnerabilities more quickly. This leaves practitioners in an unenviable position: do they run their full test suite and accept the risk that the vulnerability is exploited in the meantime, or do they “patch and pray” without working through the full test suite? Unfortunately, hope is not a sound strategy. Practitioners who choose the latter route end up playing whack-a-mole with new problems that surface because the update was released without sufficient testing.
Static application security testing (SAST) and dynamic application security testing (DAST) should be considered table stakes for addressing these challenges. What is also needed are methodologies, such as network comparison application security testing (NCAST), that can test more quickly than large suites of traditional regression and other tests. NCAST provides a quick, side-by-side comparison of requests and responses at the network level, using either test or production traffic. Any differences that are detected are either confirmed as expected or flagged for further probing. Once no unexplained disparities remain, the product can be released with peace of mind.
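To make the comparison concrete, here is a minimal sketch of the idea in Python: the same recorded requests are replayed against a baseline build and a patched candidate, and any response differences not covered by an allow-list of expected changes are flagged for investigation. The URLs, endpoints, and allow-listed headers are illustrative placeholders, not part of any particular NCAST tool.

```python
# Minimal sketch of a network-comparison check: replay the same requests against
# a baseline build and a patched candidate, then diff the responses.
# All endpoints, ports, and allow-list entries below are hypothetical.
import requests

BASELINE = "http://localhost:8080"   # hypothetical: last released build
CANDIDATE = "http://localhost:8081"  # hypothetical: patched build under test

# Requests to replay; in practice these might come from recorded test or production traffic.
REQUESTS = [
    ("GET", "/api/v1/health", None),
    ("GET", "/api/v1/orders/42", None),
    ("POST", "/api/v1/orders", {"sku": "ABC-123", "qty": 1}),
]

# Headers that are allowed to differ between builds (timestamps, build metadata, etc.).
EXPECTED_DIFF_HEADERS = {"date", "x-build-id", "server"}

def snapshot(base_url, method, path, body):
    """Issue one request and capture the parts of the response we want to compare."""
    resp = requests.request(method, base_url + path, json=body, timeout=10)
    headers = {k.lower(): v for k, v in resp.headers.items()
               if k.lower() not in EXPECTED_DIFF_HEADERS}
    return {"status": resp.status_code, "headers": headers, "body": resp.text}

def compare_builds():
    """Return the requests whose responses differ in ways not on the allow-list."""
    flagged = []
    for method, path, body in REQUESTS:
        old = snapshot(BASELINE, method, path, body)
        new = snapshot(CANDIDATE, method, path, body)
        if old != new:
            flagged.append((method, path))
    return flagged

if __name__ == "__main__":
    disparities = compare_builds()
    if disparities:
        print("Unexpected differences found; investigate before release:")
        for method, path in disparities:
            print(f"  {method} {path}")
    else:
        print("No unexplained disparities between baseline and candidate.")
```

A real deployment would replay far more traffic and normalize known-variable fields in response bodies as well, but the core loop is the same: capture, compare, and only release once every disparity is explained.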
As a professional community, we have embraced the idea of using lots of vendors and open-source software, creating huge supply chain exposure. Software composition analysis tools are commonly used to identify the third-party components in your code and warn of potential problems, especially from a licensing standpoint. However, they tend to be less effective at calling out security considerations. For example, suppose a piece of open-source software is being used by one of your third parties. Practitioners generally don’t know whether that third-party software has unexpected vulnerabilities or whether something was deliberately inserted during its build process to distribute malicious code. Whether inadvertent or not, this can cause significant supply chain problems. In addition to utilizing NCAST, another technique that can help here is performing multiple builds in separate locations, with separate administrators, and then comparing the results. That makes it harder for an insider to make a mistake or for the build to be tampered with undetected.
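As a rough illustration of that multiple-build comparison, the sketch below assumes two independently administered build environments have written their artifacts to separate directories, and that the build is reproducible (or that expected differences such as embedded timestamps have already been normalized). The paths are hypothetical; any artifact whose hash differs between the two builds is flagged before release.

```python
# Minimal sketch of a multi-build comparison: hash every artifact produced by two
# independently administered builds and flag anything that does not match.
# The directory paths are hypothetical placeholders.
import hashlib
from pathlib import Path

BUILD_A = Path("/builds/site-a/output")  # hypothetical: build run at site A
BUILD_B = Path("/builds/site-b/output")  # hypothetical: build run at site B

def digest_tree(root: Path) -> dict:
    """Map each artifact's relative path to its SHA-256 digest."""
    digests = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            digests[str(path.relative_to(root))] = hashlib.sha256(path.read_bytes()).hexdigest()
    return digests

def compare_builds() -> list:
    """Return artifacts that are missing from one build or differ in content."""
    a, b = digest_tree(BUILD_A), digest_tree(BUILD_B)
    return [name for name in sorted(set(a) | set(b)) if a.get(name) != b.get(name)]

if __name__ == "__main__":
    diffs = compare_builds()
    if diffs:
        print("Artifacts differ between independent builds; investigate before release:")
        for name in diffs:
            print("  " + name)
    else:
        print("Independent builds produced identical artifacts.")
```

If the build is not fully deterministic, the same approach still works, but the comparison has to operate on normalized artifacts rather than raw hashes.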
Most software producers understand this root cause conundrum well. However, many security and IT audit professionals are unaware of why software vulnerabilities continue to be exploited. Their natural inclination is to ask third-party suppliers a lot of questions about the security of their code. Yet they are often not attuned to the underlying cause: the vendor’s dilemma between patching and praying or being more conservative and conducting thorough testing, thereby extending the window in which the vulnerability can be exploited.
Security and IT audit professionals should instead focus their questions on the DevSecOps pipeline: what is being done to ensure that additional ingredients such as NCAST and quicker, automated testing are in place, and whether multiple builds and comparisons are being considered. Until measures like these are more uniformly implemented, all the traditional tools and automation in the world won’t be enough to prevent the continuing barrage of attacks that take advantage of vulnerable pipelines and products, and companies will need impeccable luck to avoid calamity.