Over 80% of security professionals are concerned about the prospect of attackers using artificial intelligence (AI) against their organization, according to new research from Neustar.
The global information services provider polled 301 IT and security professionals across EMEA and the US to compile its latest International Cyber Benchmarks Index.
It found that although 87% of respondents agreed AI would help them improve cyber-defenses, a similar number (82%) claimed to be nervous about the prospect of it being used by black hats against them in the future.
Some 60% claimed to be apprehensive about using AI in their organization at all, citing “security reasons.”
Respondents feared the prospect of stolen data (50%) resulting from attacks most of all, although loss of customer trust (19%), unstable business performance (16%) and extra costs (16%) all ranked highly.
The findings chime somewhat with a Webroot study last December which found that 87% of US cybersecurity professionals are currently using AI to help defend their organization, but 91% feared the impact of its use by attackers.
Neustar SVP, Rodney Joffe, claimed the security industry is at a "crossroads" with AI.
“Organizations know the benefits, but they are also aware that today’s attackers have unique capabilities to cause destruction with that same technology. As a result, they’ve come to a point where they’re unsure if AI is a friend or foe,” he said.
“What we do know is that IT leaders are confident in AI’s ability to make a significant difference in their defenses. So what’s needed now is for security teams to prioritize education around AI, not only to ensure that the most efficient security strategies have been implemented, but to give organizations the opportunity to embrace — and not fear — this technology.”
That education piece appears to have been lacking so far. An ESET poll of European and US IT pros in August found that a disappointing 75% believe AI is a “silver bullet” for tackling online threats.
NTT Security EMEA SVP, Kai Grunwitz, has listed some of the ways AI could be abused in the wrong hands here.