Cybersecurity Teams Largely Ignored in AI Policy Development

Cybersecurity teams are being left out of the development of policies governing the use of AI in their enterprises, new research published by ISACA during its 2024 Europe Conference has found.

Just 35% of the 1800 cybersecurity professionals surveyed said they are involved in the development of such policies.

Meanwhile, 45% reported no involvement in the development, onboarding or implementation of AI solutions.

This is against a backdrop of many organizations now permitting the use of generative AI in the workplace.

According to the EY report AI Adoption Key to Corporate Growth, 35% of senior leaders said their organization is creating a roadmap to implement AI fully and at scale.

Chris Dimitriadis, Chief Global Strategy Officer at ISACA, noted that in 2023 ISACA saw very few policies being applied to AI, but that trend is now changing as more organizations look to create AI governance frameworks.

He noted that while it is positive to see governments across the globe working towards governance and regulation of AI, it is concerning that cybersecurity is not necessarily being embedded into those governance systems.

“In our thirst to innovate and adopt AI very fast in order to create a new product or service or to improve customer experience, usually we focus on the ethical or compliance side of AI without taking into account cybersecurity, which is key,” Dimitriadis said.

Speaking to Infosecurity about where in the organization AI risk lies and who is ultimately responsible for governance, Erik Prusch, CEO, ISACA, said that while traditionally it would lie with the CISO or CIO, now the answer is “everybody within the team.”

“This is because all of your points of vulnerability rely on the individual as much as the systems and security that the enterprise has,” he said.

Adoption of AI in Cybersecurity

The survey also found that automating detection and response (28%) and endpoint security (27%) were the areas where cybersecurity professionals most used AI.

This was followed by automating routine tasks (24%) and fraud detection (13%).

“They seem like two logical areas to expand from, but I don't think we've scratched the surface on it yet,” Prusch said.

“I love the idea of being able to be more systemically in control using technology, if we can point it in the right direction. And if we can utilize it within our internal systems to give us greater comprehension, instead of periodic reviews.”

Dimitriadis said it is important that AI is already being embedded in these tools.

“We cannot audit AI, secure AI and we cannot protect the privacy related to AI if we do not have an AI system enabling our processes,” he said.