Benjamin David examines the potential future impact – both good and bad – of emerging technologies on cybersecurity
Cybersecurity is at a critical point as 2021 comes to an end. As new technologies continue to gain traction, many organizations are recognizing the weaknesses of standard authentication, static networking and trust-based security models. Additionally, organizations are beginning to wake up to the realities of artificial intelligence (AI) weaponization and the future challenges posed by quantum computing.
Alongside the broader trend toward digitalization, specific developments this year have undoubtedly bolstered the need to look to technological solutions. For example, consider the critical findings of IBM’s Cost of a Data Breach Report 2021. It found that the average cost of a data breach rose from $3.86m to $4.24m in 2021, the highest average total cost recorded in the 17-year history of the report.
Remote work was cited as a critical factor in the rising cost of a data breach, with compromised credentials blamed for the largest share of breaches. On the plus side, AI delivered the largest cost-mitigating effect, and adopting a zero trust approach helped reduce breach costs.
Cybercrime is also becoming far more scalable, making it easier for threat actors to mount more organized and sophisticated attacks. The transition to remote and hybrid working has hugely impacted the cyber-threat landscape over the past two years: security policies have been rewritten, best practices reshaped and new challenges have emerged. This year, cyber-attacks have gone well beyond minor operational disruption, causing significant financial and reputational losses.
So, what does the future of cybersecurity look like? Despite the challenges seen in 2021, evolving technologies can help progress cybersecurity and assuage the increasingly nefarious and disruptive cyber-threat landscape.
Artificial Intelligence and Machine Learning
One of the most promising technologies in cybersecurity is undoubtedly artificial intelligence (AI), particularly machine learning (ML). For those unfamiliar with the terms, AI is the broader concept of machines carrying out tasks in a way that we would consider “smart.” ML is a subset of AI: it provides statistical methods and algorithms that enable machines to learn automatically from prior experience and data, allowing a program to change its behavior accordingly.
AI has had a colossal impact on cybersecurity in recent years. It can automate tasks and make decisions far more quickly than humans ever could, and it can analyze user behavior faster and at a much larger scale.
A recent study by Capgemini underscores this point, showing that 69% of organizations see AI as integral to speedy and timely responses to cyber-attacks. Joseph Carson, chief security scientist and advisory CISO of Thycotic, echoes this view, emphasizing that it is only through AI and automation that organizations can “scale to meet the needs from the continuous acceleration of our digital society.” Additionally, AI can detect vulnerabilities in computer systems and business networks in real time, helping organizations focus on essential security tasks.
When focusing on the benefits of AI, cybersecurity experts often cite the need to move away from the traditional checkbox security approach. Joseph Carson points out that this traditional security approach “only focuses on resolving the known risks of the past.” Conversely, he argues that what is needed is a “more adaptive cybersecurity, which is dynamic like a living organism that can evolve and reduce future risks.” Carson also argues that organizations must automatically increase “security controls when the threats are high and reduce them when the threats are low.”
"Machine learning and deep learning will only be successful if we are sharing threat intelligence globally so that origins of cyber-criminals can be quickly identified..."Joseph Carson
ML already offers a wealth of benefits. Fraud detection, for example, becomes possible because ML algorithms can learn from historical fraud patterns and spot them in future transactions. Moreover, ML can protect an organization by learning what normal behavior between users and devices looks like. Baselining the ‘normal’ within an organization in this way provides a strong defense against unknown-unknowns, stopping attackers in their tracks.
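To make the behavioral-baselining idea concrete, the following is a minimal sketch of anomaly detection on login telemetry using scikit-learn’s Isolation Forest. The features, thresholds and data here are hypothetical illustrations rather than a production detection pipeline.

```python
# Illustrative sketch: anomaly detection on login telemetry with an
# Isolation Forest. Feature names, values and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Historical "normal" behaviour: [login_hour, bytes_transferred_mb, failed_logins]
normal = np.column_stack([
    rng.normal(10, 2, 1000),      # logins cluster around mid-morning
    rng.normal(50, 15, 1000),     # typical data volumes
    rng.poisson(0.2, 1000),       # failed logins are rare
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New events to score: one routine, one suspicious (3 a.m., bulk transfer, many failures)
events = np.array([
    [11, 48, 0],
    [3, 900, 12],
])
print(model.predict(events))  # 1 = looks normal, -1 = flagged as anomalous
```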
Yet, cybersecurity experts often cite other shifts in practices that need to go hand-in-hand with AI adoption. Carson argues, “machine learning and deep learning will only be successful if we are sharing threat intelligence globally so that origins of cyber-criminals can be quickly identified, and nation-states can put more pressure on governments who provide safe havens for cyber-criminals.”
Hardware Authentication
Standard password authentication is notorious for its problems. Password cracking has not been quelled in the last two decades: even as users have become better at picking passwords, and even as organizations have become better at forcing users to select strong ones through complex password policies, cracking techniques have kept pace. As a result, businesses are starting to realize en masse that a more secure form of authentication is required. This is where hardware authentication enters the equation.
Hardware authentication is an approach that authenticates a user through a device – a smartphone, laptop or other hardware system held by an authorized user. This authentication could take the form of a facial recognition system or fingerprint scanner that grants access to a device. This will eliminate the need for passwords, argues Amy Stokes-Waters, co-founder of ZeroFX and senior business development manager at Cognisys. She stresses that “as we move towards the need for a passwordless society, one critical piece of technology we can look at is hardware-based authentication. Tokens that users can carry with them can eliminate the need for one-time passwords sent via text messages, and when coupled with a single sign-on solution, can prevent users from ever having to enter a password again.”
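The mechanics behind such tokens are typically a public-key challenge-response, in the spirit of FIDO2/WebAuthn. The sketch below illustrates that flow with the Python ‘cryptography’ package; in a real deployment the private key lives inside the token’s secure element and never touches the host.

```python
# Minimal sketch of the challenge-response idea behind hardware tokens,
# using Ed25519 keys from the 'cryptography' package. In practice the
# private key never leaves the token's secure element.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrolment: the token generates a key pair; the server stores only the public key.
token_key = Ed25519PrivateKey.generate()
registered_public_key = token_key.public_key()

# Login: the server issues a random challenge, which the token signs.
challenge = os.urandom(32)
signature = token_key.sign(challenge)

# The server verifies the signature against the registered public key.
try:
    registered_public_key.verify(signature, challenge)
    print("Authenticated - no password or one-time code required")
except InvalidSignature:
    print("Authentication failed")
```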
Of course, these technologies have been around for a few years and are particularly prevalent on personal mobile devices. Yet, it’s likely that multi-mode biometric authentication will become increasingly common. This approach combines identifiers such as facial characteristics, fingerprints, hand geometry, retinal patterns, iris, signature, voice and gait analysis to achieve high-reliability biometric matching. Combining biometric factors in this manner can deliver near fault-proof matching that is considerably more reliable than single-factor authentication.
Yet, several obstacles are often flagged when discussing multi-mode biometric authentication. For starters, an attacker might present a fake biometric to a sensor, alter hardware or software in the data-capture device, or alter hardware or software in the signal-processing device. Additionally, an attacker might try to extract and change stored biometric information.
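One common way to combine modalities is score-level fusion: each matcher produces a confidence score, and a weighted combination drives the accept/reject decision. The snippet below is an illustrative sketch; the weights and threshold are hypothetical and would be calibrated against false-accept and false-reject targets in practice.

```python
# Illustrative score-level fusion for multi-mode biometrics. The modality
# scores, weights and acceptance threshold are hypothetical placeholders.

def fuse_biometric_scores(scores: dict[str, float],
                          weights: dict[str, float],
                          threshold: float = 0.85) -> bool:
    """Combine per-modality match scores (0..1) into a single accept/reject decision."""
    total_weight = sum(weights[m] for m in scores)
    fused = sum(scores[m] * weights[m] for m in scores) / total_weight
    return fused >= threshold

weights = {"face": 0.4, "fingerprint": 0.4, "voice": 0.2}

# Strong face and fingerprint matches carry a weak voice match.
print(fuse_biometric_scores({"face": 0.95, "fingerprint": 0.92, "voice": 0.60}, weights))  # True
# A single strong (possibly spoofed) modality is not enough when the others score poorly.
print(fuse_biometric_scores({"face": 0.97, "fingerprint": 0.30, "voice": 0.25}, weights))  # False
```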
Adaptive Networks
Rethinking how networks operate is another promising avenue for technological advancement in cybersecurity, and adaptive networks are one such area. Adaptive networks support growing bandwidth demands and the ever-increasing requirement to modernize and secure an organization’s workforce, providing high-caliber connectivity and quicker services.
But what is an adaptive network? An adaptive network combines analytics and intelligence, control and automation applications, and programmable infrastructure. These automated, programmable networks can monitor and manage themselves, reconfiguring and adjusting to shifting needs.
An essential layer in an adaptive network is the programmable infrastructure layer, which acts as a sensor and generates real-time data on network vulnerabilities and efficiency. Another is the analytics layer, which provides critical network insights: it uses ML to inspect performance data, giving cybersecurity teams more precise insight into network complications and threats. The control and automation applications layer, which simplifies how teams manage their networks, is equally important. Sitting atop each domain, software-defined networking and multi-domain service orchestration (MDSO) let organizations orchestrate services from end to end.
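A toy closed-loop sketch can make these layers concrete: telemetry from the programmable infrastructure feeds an analytics step, which in turn drives an automated control action. Every class, threshold and remediation below is a hypothetical placeholder rather than any vendor’s actual API.

```python
# Toy closed-loop sketch of the three adaptive-network layers described above.
# All names, thresholds and actions are hypothetical.
from dataclasses import dataclass

@dataclass
class LinkTelemetry:           # programmable infrastructure layer: real-time sensor data
    link_id: str
    utilisation: float         # 0..1
    packet_loss: float         # 0..1

def analyse(samples: list[LinkTelemetry]) -> list[str]:
    """Analytics layer: flag links that look congested or degraded."""
    return [s.link_id for s in samples if s.utilisation > 0.9 or s.packet_loss > 0.05]

def remediate(flagged: list[str]) -> None:
    """Control/automation layer: push a configuration change (stubbed out here)."""
    for link in flagged:
        print(f"re-routing traffic away from {link} and raising an alert")

telemetry = [
    LinkTelemetry("core-1", utilisation=0.55, packet_loss=0.001),
    LinkTelemetry("edge-7", utilisation=0.97, packet_loss=0.08),
]
remediate(analyse(telemetry))
```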
Zero Trust Model
Zero trust is a critical framework in which security teams can reconcile the highly complex threat landscape and mobile workforce to protect remote and in-office users. It might seem strange to term zero trust a technology since, in many ways, it is more of a concept, even a paradigm. Yet, solutions that assist organizations in achieving a zero trust model are becoming increasingly present and promising.
Zero trust is, in many ways, a reaction to a breakdown in traditional security models. With remote working becoming more ubiquitous, it is no longer optional for organizations to consider new approaches to bolster their security controls. Indeed, Stokes-Waters points out that zero trust has become a necessity for organizations: “Zero trust is becoming increasingly crucial as attackers find their way through initial defenses and embed themselves within networks for weeks, or often months, at a time.”
Zero trust is built on the principle of stringent access controls and withholding trust by default, even inside the network perimeter. The benefit is constant vigilance: access to information for employees and computer processes is reduced to a need-to-know basis. As a result, users and processes have access only to the resources they need, and that access is terminated as soon as the requirement ends.
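In practice, this means every request passes through a policy decision point that weighs user, device and resource attributes, with no implicit trust granted by network location. The following is a minimal, hypothetical sketch of such a decision function; real deployments use far richer signals and dedicated policy engines.

```python
# Minimal sketch of a zero trust policy decision point: every request is
# evaluated on its own merits. Attributes and rules are hypothetical.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_role: str
    mfa_passed: bool
    device_compliant: bool      # e.g. patched, disk-encrypted, EDR running
    resource_sensitivity: str   # "low", "medium", "high"

def decide(req: AccessRequest) -> str:
    if not req.mfa_passed or not req.device_compliant:
        return "deny"
    if req.resource_sensitivity == "high" and req.user_role != "finance-admin":
        return "deny"           # need-to-know: role must match the resource
    return "allow"              # access is time-boxed and re-evaluated per request

print(decide(AccessRequest("engineer", True, True, "low")))          # allow
print(decide(AccessRequest("engineer", True, True, "high")))         # deny
print(decide(AccessRequest("finance-admin", True, False, "high")))   # deny
```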
To create a zero trust framework, security teams must implement controls and technologies across the IT estate. A pioneering approach that will likely become more common sees cybersecurity teams leverage secure access service edge (SASE) architecture, a type of cloud-based unified threat management (UTM) service. The benefit is that it can continually evaluate the security and compliance of network traffic, remote devices and users, applying restrictions whenever the corresponding policy is triggered. Other technologies help achieve a zero trust model by segmenting key elements such as applications and the corporate network.
Quantum Computing
Quantum mechanics is the field of physics that explores how the world works at a fundamental level. At the quantum scale, particles can be in more than one state at any given time, and those states can be correlated even when separated over colossal distances. What has become known as ‘quantum computing’ leverages these strange phenomena to process information in profoundly innovative ways.
"Like a ticking quantum time bomb, quantum computers of the future could crack today's encrypted messages"
Consider classical computing. The smallest element is a ‘bit,’ which can be either 0 or 1. The quantum equivalent is called a qubit, which can also be 0 or 1. However, a qubit can also be in superposition – any combination of 0 and 1. To evaluate a function over every combination of two bits, a classical computer needs four separate calculations; a quantum computer can, in a sense, operate on all four states simultaneously. This scales exponentially: a register of 1000 qubits would represent more states than the most powerful supercomputer could ever handle.
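A quick back-of-the-envelope calculation shows why this scaling matters: fully describing an n-qubit state classically requires 2^n complex amplitudes, which rapidly outstrips any conceivable memory. The short snippet below simply tabulates that growth, assuming 16 bytes per amplitude.

```python
# An n-qubit state is described by 2**n complex amplitudes. Assuming 16 bytes
# per amplitude, this tabulates the memory a full classical description needs.
for n in (2, 10, 50, 300):
    amplitudes = 2 ** n
    print(f"{n:>4} qubits -> {amplitudes:.3e} amplitudes, ~{amplitudes * 16:.3e} bytes")
```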
Why, though, is this relevant to cybersecurity? Even though large-scale quantum computers are not commercially available, considering quantum-era cybersecurity solutions is an urgent priority. For example, a threat actor can capture encrypted communications of interest today and store them; once quantum computers become available, that computing power could be used to crack the encryption and reveal the contents. After all, as Brad LaPorte, partner at High Tide Advisors, highlights, “encrypted data has significantly more value when it might eventually be decrypted and exploited.”
LaPorte also points to the dangers around the weaponization of quantum computing: “Like a ticking quantum time bomb, quantum computers of the future could crack today’s encrypted messages. Meaning any encrypted data that is collected now could be cracked later for malicious use and abuse. Cyber-criminals and nation-state threat actors may be (or most likely are) collecting data on a mass scale to be used in the future once this technology is available. As double extortion tactics continue to increase, this may find its way into ransomware demands sooner rather than later while this tech is being developed.”
Putting aside the risks, quantum technology also offers the chance to build more resilient ways to secure essential and personal data – for example, quantum-secure communications, particularly quantum key distribution (QKD). Here, superposition plays a key role: when a hacker tries to observe qubits in transit, their quantum state ‘collapses’ to either 1 or 0. The benefit for cybersecurity is that a hacker cannot tamper with the qubits without leaving a trail, which matters enormously for transmitting highly sensitive data over an ultra-secure network.
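The intuition can be illustrated with a toy, purely classical simulation of the BB84 protocol that underpins most QKD schemes: whenever an eavesdropper measures in the wrong basis, the qubit’s state is disturbed, so the legitimate parties see an elevated error rate on the rounds where their bases matched. This is a simplified sketch, not real quantum hardware.

```python
# Toy classical simulation of the BB84 idea behind quantum key distribution.
# Measuring in the wrong basis randomises the outcome, so an eavesdropper
# introduces a detectable error rate (~25%) on the rounds Alice and Bob keep.
import random

random.seed(1)
N = 2000
alice_bits = [random.randint(0, 1) for _ in range(N)]
alice_bases = [random.choice("+x") for _ in range(N)]  # '+' rectilinear, 'x' diagonal

def measure(bit: int, prep_basis: str, meas_basis: str) -> int:
    """Same basis -> faithful result; mismatched basis -> random outcome."""
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def error_rate(eavesdrop: bool) -> float:
    errors = matched = 0
    for original_bit, alice_basis in zip(alice_bits, alice_bases):
        bit, basis_in_flight = original_bit, alice_basis
        if eavesdrop:  # Eve measures in a random basis and resends the result
            eve_basis = random.choice("+x")
            bit = measure(bit, basis_in_flight, eve_basis)
            basis_in_flight = eve_basis
        bob_basis = random.choice("+x")
        bob_bit = measure(bit, basis_in_flight, bob_basis)
        if bob_basis == alice_basis:          # keep only matching-basis rounds
            matched += 1
            errors += bob_bit != original_bit
    return errors / matched

print(f"error rate without eavesdropper: {error_rate(False):.1%}")  # ~0%
print(f"error rate with eavesdropper:    {error_rate(True):.1%}")   # ~25%
```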
Fight Fire with Fire
The upswing in technology and digital connectivity, together with the growing number of cyber-threats, has underscored the need for innovative cybersecurity. Evolving technologies can manage cyber-risk by filling security gaps left by manual processes, an enduring cybersecurity skills shortage and the administrative strain of managing data security. With technologies such as AI, hardware authentication, adaptive networking, zero trust and quantum computing, cybersecurity teams can strengthen their posture and better manage an increasingly dangerous and disruptive cyber-threat landscape.