In a paper released today, the Information Security Forum is urging organizations to capitalize on the opportunities offered by artificial intelligence while taking sensible steps to reduce the risks posed by this still immature technology.
Demystifying Artificial Intelligence in Information Security defines exactly what AI is, then offers a realistic assessment of what the technology can do now, and will soon be able to do, for legitimate organizations and criminals alike.
While detailing AI's potential to significantly improve cyber-defenses, especially around early threat detection, the ISF's research recognizes that the technology carries the disease as well as the cure.
Researchers wrote: "No matter the function for which an organization uses AI, such systems and the information that supports them have inherent vulnerabilities and are at risk from both accidental and adversarial threats. Compromised AI systems make poor decisions and produce unexpected outcomes.
"Simultaneously, organizations are beginning to face sophisticated AI-enabled attacks—which have the potential to compromise information and cause severe business impact at a greater speed and scale than ever before."
According to the researchers, companies that adopted AI early, while the technology is still in its infancy, have enjoyed benefits that include being able to counter existing threats more easily. But as threat actors develop malicious versions of the same technology, that early advantage will erode.
"An arms race is developing," said ISF's managing director, Steve Durbin. "AI tools and techniques that can be used in defense are also available to malicious actors including criminals, hacktivists, and state-sponsored groups.
"Sooner rather than later these adversaries will find ways to use AI to create completely new threats such as intelligent malware—and at that point, defensive AI will not just be a 'nice to have.' It will be a necessity."
Asked how far away the world is from intelligent malware, ISF senior research analyst Richard Absalom told Infosecurity Magazine: "Back in January 2018, in our publication Threat Horizon 2020, we predicted that intelligent malware would emerge by 2020. I don’t think that prediction is far off but can’t be sure—I wouldn’t bet my house on it!
"What we do know is that attackers can already use AI tools to identify vulnerabilities—although human hackers are still better at exploiting them. As soon as that intelligent malware emerges, AI tools will be required to spot anomalous activity on the network and identify well-hidden malware.
"For example, social engineering attacks that use deepfake videos and automated vishing are likely to make it impossible for human eyes and ears to identify what is real and what is fake—it may be that intelligent systems will be required to analyze all types of digital communications to establish source and authenticity."
Asked if the benefits of AI will always outweigh the risks, Absalom said: "Yes—if (big IF) the risks are managed properly. AI promises some really exciting developments for information security. The risks are not insurmountable but do require serious thought and investment to manage."