Famed venture capitalist Marc Andreessen coined the phrase "software is eating the world" in a Wall Street Journal op-ed back in 2011. More than a decade later, it rings truer than ever: more devices run more software, creating a larger and more complex attack surface to manage and defend. Yet we still rely on cryptographic methods created nearly half a century ago.
As the complexity of a system grows, so does the complexity of its failures. The defense industry is perfectly capable of building dependable machines out of unreliable parts. Why, then, do we struggle so much when software is involved?
Across industries, better systems have been built by standing on the shoulders of prior successes. Just as advances in materials science led to higher-speed turbines and, in turn, lighter, faster, more maneuverable aircraft, modern software systems are built on prior advances in technology. In the software world these advances come in the form of packages, or libraries. By splitting functionality into libraries, we are able to differentiate and specialize.
A simple example reinforces the point. The very first applications were monolithic, running on a specific hardware platform. Differentiation gave rise to specialization: the database was separated from the presentation layer and user interface, and the business logic was separated out in turn. This allowed databases to advance in speed and scale independently of UIs, networking, and even the storage layer underneath.
This specialization drove costs down and performance up. It also gave rise to a supply chain, in which independent vendors create and contribute different parts of the overall product. At a macro level, complex systems such as an aircraft comprise many components, each with its own software systems, all standing on the fruits of prior success. As system complexity grows, our ability to understand the knock-on effects of an individual component's failure diminishes. But that same structure also presents an opportunity for remediation.
In an aircraft there are several different ways to measure key telemetry such as airspeed and altitude. Each has different measurement criteria, different vendors, and different software stacks. This diversifies away the risk of any single component being unreliable.
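In software terms, that diversification can be sketched as a simple vote over independent readings. The sketch below is purely illustrative (it is not drawn from any real avionics system) and assumes three independent sensors reporting the same quantity:

```python
def vote(readings: list[float]) -> float:
    """Fuse independent sensor readings by taking the median.

    With three independent sensors, the median masks one faulty
    or compromised reading with no downtime at all.
    """
    ordered = sorted(readings)
    return ordered[len(ordered) // 2]

# Three independent airspeed readings; one sensor has failed low.
print(vote([251.0, 249.5, 0.0]))  # prints 249.5 -- the faulty 0.0 is outvoted
```

The point is not the arithmetic but the architecture: no single reading, and therefore no single vendor or software stack, can silently take the system down.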
Software systems, unfortunately, are rarely designed with redundancy built in. Instead, modern software relies on patching and updates as flaws are found. This approach has consequences: system downtime is often incurred while a patch is created and applied, and only if the failure is noticed at all. Cryptography is hit especially hard, because a cryptographic failure allows an adversary to eavesdrop, and an adversary who is successfully spying tends to keep that fact a secret.
In the world of cryptography there is a widespread belief that it is somehow unbreakable because it is based on mathematics. This couldn't be further from the truth. The merits of the math notwithstanding, implementations have bugs: on average 10 to 20 per 1,000 lines of code. Keys and certificates sometimes leak. Human error is usually present, whether in the form of insufficient programming skill, a lack of ongoing training, or simple implementation mistakes.
The fact is that single points of failure in cryptography exist and are commonplace. Yet the industry suffers from crypto amnesia and a scattered view of cryptography, which have allowed breaches and attacks to happen.
This raises the question: how can we build resilient software systems out of unreliable parts? The earlier observation applies directly: redundancy in algorithms, implementations, and software components diversifies away the risk, just as it does in physical engineering where lives are at stake. Engineering disciplines outside of software are acutely aware of the need to build in redundancy and to create a management layer to administer the added complexity. In software, no such management exists.
The answers lie in policy control and interoperability.
Policy encodes the rules of governance; interoperability enables the redundancy and agility needed to continue operating under degraded conditions. System integrators, most of whom write software, must place a deliberate emphasis on resiliency.
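In cryptography, one concrete form of this redundancy is hybrid key establishment: combining secrets from two independent algorithms so that an adversary must break both to recover the session key. The following is a minimal sketch using only Python's standard library; the random tokens stand in for secrets negotiated by two real key-exchange algorithms (for example, one classical and one post-quantum), and `hybrid_key` is a name chosen here for illustration:

```python
import hashlib
import secrets

def hybrid_key(secret_a: bytes, secret_b: bytes) -> bytes:
    """Derive one session key from two independently negotiated secrets.

    A flaw in either underlying algorithm alone does not expose the key;
    an attacker must compromise both exchanges.
    """
    # A fixed label gives this derivation domain separation from other uses.
    return hashlib.sha3_256(b"hybrid-v1" + secret_a + secret_b).digest()

# Stand-ins for the outputs of two different key-exchange algorithms.
classical_secret = secrets.token_bytes(32)
post_quantum_secret = secrets.token_bytes(32)

session_key = hybrid_key(classical_secret, post_quantum_secret)
```

Production hybrid schemes use full key-derivation functions rather than a bare hash; the concatenate-and-hash above is only meant to show the shape of the idea, namely that redundancy in algorithms can be composed rather than merely swapped.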
The software engineering culture, starting at the product management level, must demand that a software system continue to serve even as individual components are defeated. Software engineers, too, must evolve their thinking, asking themselves, "What if the component I use no longer serves as I expect?"
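That question can be made concrete as a fallback pattern: treat interchangeable implementations as a prioritized list and keep serving when the primary one fails. The sketch below is a generic illustration; the provider functions are hypothetical stand-ins, not any particular product's API:

```python
from typing import Callable, Sequence

def first_working(providers: Sequence[Callable[[bytes], bytes]],
                  data: bytes) -> bytes:
    """Try each interchangeable provider in order, falling back on failure."""
    failures = []
    for provider in providers:
        try:
            return provider(data)
        except Exception as exc:  # a real system would log and alert here
            failures.append(exc)
    raise RuntimeError(f"all providers failed: {failures}")

def defeated_provider(data: bytes) -> bytes:
    raise RuntimeError("component no longer serves as expected")

def backup_provider(data: bytes) -> bytes:
    return data.upper()

# The system keeps serving even though the primary component is defeated.
print(first_working([defeated_provider, backup_provider], b"telemetry"))
```

The fallback only exists if product management demanded a second, interoperable implementation in the first place, which is exactly the cultural shift argued for above.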
And finally, as it pertains to cryptography, keep in mind that one might never know that a component has failed. Only as we evolve our thinking in software engineering will we move away from the culture of panic patching and updating.
Dr. Vincent Berk is Chief Strategy Officer at Quantum Xchange, a provider of crypto-diverse security products and services.
Have an Opinion?
This article is an Op-Ed and the opinions expressed are those of the author. If you would like to respond, or have an editorial of your own you would like to submit, please email C4ISRNET Senior Managing Editor Cary O’Reilly.