Each October marks Cyber Security Awareness Month, which aims to raise awareness of cybersecurity and cyber threats. This year, a big focus is social engineering attacks – among the most insidious and prevalent of these are phishing attacks, which aim to deceive people into disclosing sensitive information so that criminals can gain unauthorized access to critical systems.
Threat actors thrive on probability, relying on the unwitting clicks of employees. It’s time for companies to fortify their defenses and provide employees with training and resources to help them spot and report a suspected attack. With AI and its associated technologies emerging as a possible game-changer in the evolution of cyber-attacks, we need investment in humans who can develop, deploy and manipulate this technology as a force for good, or we risk losing out to malicious actors.
Adapting to a Digital Age
Phishing attacks have evolved significantly over the years. Research has found a 61% increase in the rate of phishing attacks in 2022 compared with 2021. Slowly but surely, we are all growing more wary of seemingly innocent requests, usually via email or on websites, and are more cautious in sharing personal information and data. But with the aid of new techniques, cybercriminals are more cunning, employing a wider range of tactics to breach organizations’ defenses.
Hackers have become adept at crafting convincing emails, messages and websites that impersonate trusted entities. They manipulate human psychology, using fear, curiosity or urgency to coerce employees into taking actions that compromise security. From convincing an employee to click on a malicious link to tricking them into divulging login credentials, the art of social engineering is reaching new heights.
Moreover, AI is becoming a powerful tool in the arsenal of cybercriminals. Attackers can leverage generative AI like ChatGPT to create convincing, personalized messages, often in multiple languages, that cater to the victim’s interests, making it even more challenging to discern the fraudulent from the genuine.
This technology also makes the average cybercriminal much more dangerous, as AI helps threats "pass the smell test," resulting in an uptick in successful attacks. A recent report confirms these fears about generative AI's potential to create more sophisticated email attacks: many security leaders say they have already received AI-generated email attacks, or strongly suspect they have.
In this rapidly evolving landscape, organizations should realize that security breaches originating from these increasingly deceptive social engineering tactics cannot be blamed squarely on unwitting employees. The sophistication of phishing attacks means that even well-informed individuals can fall victim. It is the attackers' adaptability and ingenuity that we must confront, and the onus is on businesses to ensure critical systems are adequately protected.
Shifting the Blame
To best protect critical systems, it is crucial to acknowledge that cybersecurity is not solely the responsibility of the IT department but a collective effort that touches all employees. Awareness and training programs are essential. Companies should educate their staff about the evolving threat landscape, teaching them to recognize phishing attempts and respond appropriately.
Additionally, adopting a zero trust model is crucial. This approach assumes that nobody, inside or outside the organization, should be trusted by default, meaning every access request is thoroughly authenticated and authorized.
For example, setting up multi-factor authentication on all devices is a good start, but organizations shouldn't stop there. It is about understanding where critical data is stored, what the security environment looks like and ensuring that users are granted only the minimum access they need to do their jobs. At the end of the day, zero trust is an approach, not a product you can buy off the shelf.
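To make the deny-by-default principle concrete, here is a minimal sketch of a zero-trust-style access check. All names, roles and permissions are illustrative assumptions, not a real product's API: every request must present a verified MFA factor, and the role must explicitly hold the requested permission, otherwise access is refused.

```python
# Illustrative zero-trust access check: deny by default.
# Roles, permissions and the MFA flag are hypothetical examples.

ROLE_PERMISSIONS = {
    "billing_clerk": {"invoices:read", "invoices:write"},
    "support_agent": {"tickets:read", "tickets:write", "invoices:read"},
}

def is_authorized(role: str, action: str, mfa_verified: bool) -> bool:
    """Allow a request only if MFA is verified AND the role
    explicitly holds the permission; everything else is denied."""
    if not mfa_verified:
        return False
    return action in ROLE_PERMISSIONS.get(role, set())

# A support agent may read invoices but not write them,
# and nothing is allowed without a verified MFA factor.
print(is_authorized("support_agent", "invoices:read", mfa_verified=True))   # True
print(is_authorized("support_agent", "invoices:write", mfa_verified=True))  # False
print(is_authorized("support_agent", "invoices:read", mfa_verified=False))  # False
```

The key design choice is that the absence of an entry means "no": an unknown role or unlisted action falls through to a denial rather than an exception or an implicit grant.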
The Role of Human Experience
As nation-states collaborate to combat AI-enabled cyber threats and invest in advanced technologies, it is important to remember that human intelligence remains irreplaceable. While tools and technology can scale defensive capabilities, it is the human factor that provides nuance and context.
Back in 2021, there was a serious attack on a water supply facility in Florida: attackers managed to hack a remote access application, allowing them to increase the level of sodium hydroxide in the water to dangerous amounts. Although automated operations were in place to secure the facility, it was an employee who noticed the issue and sounded the alarm after seeing someone else controlling his computer.
This is a perfect example of why the human experience remains vital in recognizing and responding to threats. While AI is designed to analyze patterns and trends, an experienced cybersecurity team will be able to interpret the subtleties of an attack, understand the tactics employed and adapt strategies to counter them effectively. They provide the human touch necessary to identify and address threats that machines might overlook.
The adaptability of human intelligence is a crucial asset. As the threat landscape continues to evolve, cybercriminals will devise new techniques and methods. Human experts can learn, adapt and stay one step ahead, anticipating the attackers’ next move. It is the human that will use this intelligence to train the learning model that can then work in the background to support security efforts.
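This human-in-the-loop idea can be sketched very simply. The snippet below is a toy illustration under stated assumptions, not a real phishing detector: an automated keyword filter flags suspicious emails, a human analyst confirms or rejects each flag, and only the analyst's labels enter the training set that would later refine the model.

```python
# Toy human-in-the-loop sketch: the machine flags, the human labels,
# and the human labels become the training data. All terms and
# weights here are made-up illustrations.

SUSPICIOUS_TERMS = {"urgent": 1.0, "verify": 0.8, "password": 0.9}

def score(email: str) -> float:
    """Sum the weights of suspicious terms present in the email."""
    words = email.lower().split()
    return sum(w for term, w in SUSPICIOUS_TERMS.items() if term in words)

def analyst_review(email: str, is_phishing: bool, training_set: list) -> None:
    # The human judgment, not the machine score, is what gets recorded.
    training_set.append((email, is_phishing))

training_set = []
email = "URGENT please verify your password now"
flagged = score(email) > 1.0          # machine raises the alert
if flagged:
    analyst_review(email, is_phishing=True, training_set=training_set)

print(flagged, len(training_set))  # True 1
```

The point of the sketch is the division of labor: the model supplies scale by scoring every message, while the expert supplies the ground truth the model is retrained on.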
Phishing attacks will only grow more sophisticated with advanced social engineering tactics being augmented by AI. It’s time to shift the blame away from employees when security breaches occur due to phishing. Investing in AI-literate humans is essential because human intelligence will always be indispensable in this battle. It’s not just about technology; it’s about the people who wield it.