As technology advances, so does our ability to encrypt data, with neural networks now capable of learning how to keep data safe. With so much innovation at our fingertips, Davey Winder explores what the future of encryption might hold.
The neural nets of Google Brain have worked out how to create encryption without being taught the specifics of any cryptographic algorithm. We do know the result was shared-key, symmetric encryption; but we don’t know exactly how it works, which limits the practical applications. Still, encryption built on machine learning alone is impressive enough to make us wonder where else encryption might go in the future.
Any security model, cryptographic or otherwise, is conceptually an arms race between the adversaries that wish to subvert the technology and those that rely on the technology to secure their information. “For a very long time the Data Encryption Standard (DES) was the gold standard for symmetric cryptography,” says Ed Moyle, director of thought leadership and research at ISACA, “but it’s considered far too weak for most commercial applications today.”
Was it a solid and reliable choice when it was developed back in the 1970s? Absolutely. Fast forward to today, though, and we appear to be seeing more and more problems with crypto models. Certificate validation checking can be challenging, as can trust in the underlying certificate authorities that issue the digital certificates underpinning the transactional security we all rely upon for online business. However, rather than there being something wrong with the current technology, Moyle sees this as the natural evolution, refinement and innovation of that technology. With time, he adds, we get better at it and develop new techniques.
Unfortunately, so do our adversaries; and identifying them is not always as easy as you might imagine. “The UK Government, amongst others, is following an approach to security that seemingly aims to weaken encryption,” argues Giuseppe Sollazzo, a senior systems analyst at St George’s, University of London, who recently won the Open Data Champion award from the Open Data Institute. Sollazzo is referring to the Investigatory Powers Act, passed by Parliament and granted Royal Assent, which contains legal requirements for software companies to insert backdoors into code that would essentially bypass any encryption.
What’s more, this is not an issue just affecting the UK. Senator Richard Burr has been re-elected as chair of the Senate intelligence committee and supported legislation last year to force similar backdoors in the US, a move that is seemingly backed by Donald Trump.
Something has got to change as far as encryption is concerned, and one of the greatest challenges crypto faces is that of usability. Maarten Van Horenbeeck, director of the Forum of Incident Response and Security Teams (FIRST), says that while failures of encryption technology do happen – most often because of implementation errors rather than new mathematical advances – they’re usually not critical. “The security and crypto communities have a habit of rolling up our sleeves, getting to work, and fixing the bug,” he says. That community is also investing a lot of time and money in developing the future shape of encryption; so what does it look like?
Quantum Crypto
Quantum cryptography can best be described by way of the Heisenberg Uncertainty Principle, which says, broadly, that you cannot observe something without changing what you are looking at. At the heart of the system is the photon, the basic unit of light with quantum properties. Single photons are fired along a fiber optic cable, at a rate of a million per second, between network nodes. Light detectors derive a secret key from these photons, which is then used to encrypt data across the communications channel. But here’s the thing: if anyone tries to eavesdrop on that channel, they create a disturbance; Heisenberg kicks in, the photons are scrambled and the presence of an observer is revealed. That link is closed down and another attempt is made to establish a connection, and this can go on until either the eavesdropper goes away or a predetermined cut-off point is reached. In effect, it’s a method of not encrypting and sending data until it’s known that nobody is watching the process.
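The basis-matching and eavesdropper detection described above can be sketched as a classical simulation, loosely in the spirit of the BB84 quantum key distribution protocol. Everything here – the function name, the photon count, the intercept-and-resend eavesdropper – is illustrative, not real quantum physics:

```python
import random

def bb84_error_rate(n_photons=2000, eve_present=False, seed=1):
    """Toy classical simulation of BB84-style key distribution.

    Alice sends bits encoded in randomly chosen bases; Bob measures in
    randomly chosen bases. Measuring in the wrong basis yields a random
    bit, so an eavesdropper who measures and resends every photon
    introduces errors that Alice and Bob can detect by comparing a
    sample of their sifted key."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_photons)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_photons)]

    sent_bits, sent_bases = alice_bits, alice_bases
    if eve_present:
        # Eve measures each photon in a random basis and resends it.
        eve_bases = [rng.randint(0, 1) for _ in range(n_photons)]
        sent_bits = [b if eb == ab else rng.randint(0, 1)
                     for b, ab, eb in zip(alice_bits, alice_bases, eve_bases)]
        sent_bases = eve_bases

    bob_bases = [rng.randint(0, 1) for _ in range(n_photons)]
    bob_bits = [b if bb == sb else rng.randint(0, 1)
                for b, sb, bb in zip(sent_bits, sent_bases, bob_bases)]

    # Sifting: keep only positions where Alice's and Bob's bases matched.
    sifted = [i for i in range(n_photons) if alice_bases[i] == bob_bases[i]]
    errors = sum(alice_bits[i] != bob_bits[i] for i in sifted)
    return errors / len(sifted)   # error rate on the sifted key
```

With no eavesdropper the sifted key matches perfectly; with an intercept-and-resend eavesdropper the error rate jumps to roughly 25%, which is the disturbance that gives Eve away.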
The trouble is that for quantum cryptography to work in the real world, it needs every entity to possess a direct and uninterrupted optical link with every other entity. If there were just 100 endpoints in all, that’s 4,950 fiber optic cables to connect every pair. Even if you put aside the practical problems of quantum crypto, there are theoretical ones to overcome. Not least that the whole point of it is to reduce our reliance on mathematical cryptographic algorithms when, truth be told, these remain pretty trustworthy. The problem with modern crypto is implementation: that’s where the failures usually occur, and implementation is likely to be far more complex in quantum systems.
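That cable count follows from simple combinatorics: a full mesh needs one link per unordered pair of endpoints, i.e. n(n-1)/2. A one-function sketch (the function name is ours):

```python
from math import comb

def full_mesh_links(endpoints: int) -> int:
    """Point-to-point links needed so every endpoint has a direct
    link to every other: one per unordered pair, i.e. C(n, 2)."""
    return comb(endpoints, 2)
```

For 100 endpoints that gives 4,950 links, and the number grows quadratically: a modest 1,000-endpoint network would already need 499,500 dedicated optical links.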
Post-Quantum Key-Exchange
A bigger risk sits with the impact of quantum computing on modern-day crypto. The development of computers capable of breaking the encryption we use today, and doing so with ease, poses a major problem for data that is supposed to stay secret for the next 20 years. “Cryptographers have been investigating algorithms that would be resistant to quantum computers,” Van Horenbeeck says, “making progress using new techniques such as lattice-based cryptography and hash-based digital signatures.” Google has already started working on how to secure the connections between Chrome on the desktop and its servers in a post-quantum world. If you use Chrome Canary, you might already be part of that experiment with the ‘New Hope’ post-quantum key-exchange algorithm.
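To get a feel for the hash-based signatures Van Horenbeeck mentions, here is a minimal sketch of a Lamport one-time signature, a classic construction whose security rests only on the hash function rather than on the number-theoretic problems quantum computers threaten. The names are ours, and – crucially – each key pair must only ever sign a single message:

```python
import hashlib
import secrets

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # Secret key: 256 pairs of random values, one pair per bit of the
    # message digest. Public key: the hash of every secret value.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(_h(a), _h(b)) for a, b in sk]
    return sk, pk

def _digest_bits(msg: bytes):
    d = _h(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Reveal one secret preimage per digest bit. Reusing the key for a
    # second message leaks enough preimages to allow forgeries.
    return [sk[i][bit] for i, bit in enumerate(_digest_bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(_h(s) == pk[i][bit]
               for i, (s, bit) in enumerate(zip(sig, _digest_bits(msg))))
```

A forger would need hash preimages that were never revealed, which is exactly what makes the scheme attractive in a post-quantum world; the trade-offs are large keys and one-time use, which stateful schemes build on top of this idea address.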
Fully Homomorphic Encryption
Then there’s the possibility of fully homomorphic encryption (FHE), which could solve some pressing privacy problems in the cloud, for example. Data within an FHE ecosystem could be handed to an untrusted party to perform arbitrary computations upon, without ever having to be decrypted, and so would remain private throughout. If only it were here now; but it isn’t. Craig Gentry, who constructed the first FHE scheme, admits it would take around a trillionfold increase in computing time to perform a simple Google search using encrypted keywords. The promise is partially real, in that partially homomorphic encryption (PHE) solutions do exist, but they can only do a fraction of what a proper FHE solution could. FHE allows arbitrary computations on encrypted data, while PHE does not: a PHE scheme can multiply or add, but cannot do both.
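The multiply-or-add limitation is easy to demonstrate: textbook RSA is multiplicatively homomorphic, meaning the product of two ciphertexts decrypts to the product of the plaintexts. A toy sketch with deliberately tiny, completely insecure parameters:

```python
# Textbook RSA as a partially homomorphic scheme (toy parameters).
p, q = 61, 53
n = p * q            # modulus 3233
e, d = 17, 413       # e * d ≡ 1 (mod lcm(p-1, q-1))

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
# Multiply the two ciphertexts without ever decrypting them...
product_cipher = (enc(a) * enc(b)) % n
# ...and the result decrypts to the product of the plaintexts.
assert dec(product_cipher) == a * b   # 42
```

There is no corresponding trick for addition in RSA, which is precisely the gap between PHE and the arbitrary computation FHE promises.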
Honey Encryption
Finally, taking a rather different approach, is Honey Encryption. Being developed by Ari Juels, a former chief scientist at security vendor RSA, and Thomas Ristenpart of the University of Wisconsin, Honey Encryption serves up a bunch of fake data whenever an attacker gets a key or password wrong. That fake data is close enough to the real data that the attacker can’t tell whether it is real or not. Indeed, even if the attacker does hit the right password and access the real data, they won’t know it, as it will simply be lost among all the very similar fakes. The problem holding it back right now is working out how to produce believable fake data for every data type. Get that right, and Honey Encryption could reduce the threat surface significantly.
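The core idea can be sketched for a message space of four-digit PINs: every key, right or wrong, decrypts to some plausible PIN, so a wrong guess yields a convincing fake rather than obvious garbage. This is a toy illustration of the concept, not Juels and Ristenpart’s actual scheme, and all names are ours:

```python
import hashlib

SPACE = 10_000  # message space: all four-digit PINs, 0000-9999

def _shift(key: str) -> int:
    # Derive a deterministic pseudorandom shift within the message
    # space from the key.
    return int.from_bytes(hashlib.sha256(key.encode()).digest(), "big") % SPACE

def encrypt(pin: int, key: str) -> int:
    return (pin + _shift(key)) % SPACE

def decrypt(cipher: int, key: str) -> int:
    # The right key recovers the real PIN; any wrong key still lands
    # somewhere in the message space, producing a believable decoy.
    return (cipher - _shift(key)) % SPACE
```

Because every candidate key produces a valid-looking PIN, a brute-force attacker gets thousands of equally plausible answers and no signal as to which one is real. The hard open problem the researchers face is building such believable-decoy encodings for richer data types than PINs.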
Google Brain: Can Neural Networks Create Workable Encryption?
The Google Brain team, based in Mountain View, set up three separate neural networks with different goals. Alice had to send a secure message to Bob, which in turn had to decrypt it; Eve’s task was to eavesdrop and decrypt the message. Alice and Bob were given a shared key, but none of the AIs was told how encryption works or which cryptographic techniques to use. What they were given was a loss function defining failure: Eve failed if it couldn’t guess the original plaintext closely enough, as did Bob, while Alice failed if Eve’s guesses were better than random. Beyond sharing the same, independently initialized, neural network architecture, they were left to their own devices. Alice got the plaintext and the key, Bob got the ciphertext and the key, and Eve got just the ciphertext. The result was not immediately a new type of encryption, but over time Alice and Bob evolved a system that produced very few errors and allowed Bob to correctly reconstruct the plaintext. Eve never got close, and whenever it did do better than random guessing, Alice and Bob upped their game and tweaked their cryptosystem. The takeaway: neural networks can learn to protect their own communications with a decent measure of success, simply by being instructed to value secrecy highly enough.
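The loss functions described above can be sketched roughly as follows, with plaintext bits encoded as 0/1 and all names ours. This is a simplified reading of the setup, not the exact formulation used by the Google Brain team:

```python
import numpy as np

def reconstruction_error(plaintext, guess):
    # Mean absolute per-bit error; bits are 0 or 1, guesses may be
    # soft values in between.
    return float(np.mean(np.abs(plaintext - guess)))

def eve_loss(plaintext, eve_guess):
    # Eve simply minimises her reconstruction error: she fails when
    # she cannot guess the plaintext closely enough.
    return reconstruction_error(plaintext, eve_guess)

def alice_bob_loss(plaintext, bob_guess, eve_guess):
    # Alice and Bob jointly minimise Bob's reconstruction error while
    # pushing Eve towards the error of random guessing (0.5 per bit).
    # The penalty term is zero exactly when Eve does no better than
    # chance, so Alice fails when Eve's guesses beat random.
    bob_err = reconstruction_error(plaintext, bob_guess)
    eve_err = reconstruction_error(plaintext, eve_guess)
    eve_penalty = ((0.5 - eve_err) ** 2) / 0.25
    return bob_err + eve_penalty
```

Training then alternates between updating Eve against this first objective and updating Alice and Bob against the second, the adversarial push-and-pull that eventually left Bob accurate and Eve stuck at chance.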
Dr Raghav Bhaskar, who holds a PhD in cryptography, worked in Microsoft Research’s Security and Cryptography group until 2014 and is currently CEO of ‘frictionless 2FA’ start-up AppsPicket, believes that the secrecy of the shared key was what really mattered here, and that’s no surprise. “A more interesting output of the experiment,” he says, “would have been if the neural networks had come up with encryption algorithms that were either drastically different from what we have today, or very similar.”