CISOs are becoming more confident that generative AI is being used securely in their organization, according to a new survey led by professional association ClubCISO.
Two years after generative AI's emergence into the mainstream, nearly half (45%) of CISOs surveyed by ClubCISO said their organizations now allow the use of some generative AI tools for specific applications.
The same share of respondents said that the CISO's office makes the final decision on which AI uses are permitted.
Empowered with such oversight, CISOs appear confident about the risks involved with AI. Over half of the respondents (54%) shared that they know how AI tools will use or share the data fed to them, and nearly six out of ten (57%) said their staff is aware and mindful of the data protection and intellectual property implications of using AI tools.
Additionally, only a small minority of surveyed CISOs (9%) say they neither have a policy governing the use of AI tools nor have set out a direction on their use.
Overall, half of the respondents (51%) believe generative AI is a force for good and a security enabler, whereas only 25% think it presents a risk to their organizational security.
Rob Robinson, Head of Telstra Purple EMEA, sponsor of the ClubCISO community, commented: “While we do still hear examples of proprietary data being fed to AI tools and then that same data being resurfaced outside of an organization’s boundaries, what our members are telling us is that this is a known risk, not just in their teams, but across the employee population too.”
He said that the ClubCISO survey suggests that CISOs “have taken the time to understand and educate their organizations about the risks associated with using such tools.”