Only a third of organizations are adequately addressing security, privacy and ethical risks with AI, despite surging use of these technologies in the workplace, according to new ISACA research.
The survey of 3,270 digital trust professionals found that just 34% believe organizations are paying sufficient attention to AI ethical standards, while under a third (32%) said organizations are adequately addressing AI concerns in deployment, such as data privacy and bias.
This is despite 60% of respondents stating that employees at their organization are using generative AI tools in their work, and 70% saying that staff are using some form of AI.
In addition, 42% of organizations now formally permit the use of generative AI in the workplace, up from 28% six months ago, according to ISACA.
The three most common uses of AI cited by respondents are increasing productivity (35%), automating repetitive tasks (33%) and creating written content (33%).
Lack of AI Knowledge and Training
The research, dated May 7, identified a lack of AI knowledge among digital trust professionals, with only 25% declaring themselves as extremely or very familiar with AI.
Nearly half (46%) classified themselves as a beginner when it comes to AI.
Digital trust professionals overwhelmingly recognize the need to improve their AI knowledge for their roles, with 85% acknowledging they will need to increase their skills and knowledge in this area within two years to advance or retain their job.
Most organizations do not have measures in place to address the lack of AI knowledge among IT professionals and the general workforce. Two in five (40%) offer no AI training at all, and 32% of respondents said the training that is offered is limited to staff in tech-related positions.
Additionally, only 15% of organizations have a formal, comprehensive policy governing the use of AI technology.
Speaking to Infosecurity, Rob Clyde, past ISACA board chair and board director at Cybral, said this is directly tied to the lack of expertise and training in AI.
“Cybersecurity governance professionals are the people who make the policies. If they’re not very comfortable with AI, they’re going to be uncomfortable coming up with an AI policy,” he noted.
Clyde advised organizations to utilize available AI frameworks to help build an AI governance policy, such as the US National Institute of Standards and Technology’s (NIST) AI Risk Management Framework.
In the meantime, organizations should at least put in place some clear rules around the use of AI, such as not inputting confidential information into public large language models (LLMs), added Clyde.
“You do not have a long time to figure this out; now is the time,” he warned.
ISACA also revealed that it has released three new online AI training courses, covering topics such as auditing and governing these technologies.
How AI Will Impact Cybersecurity Jobs
IT professionals surveyed in the research also highlighted the significant impact they expect AI to have on jobs generally. Nearly half (45%) believe many jobs will be eliminated due to AI over the next five years, and 80% think many jobs will be modified as a result of these technologies.
However, 78% believe AI will have a neutral or positive impact on their own careers.
Clyde told Infosecurity that he expects AI to essentially replace certain cybersecurity roles in time. This includes security operations center (SOC) analysts, as AI is far better than humans at pattern recognition. He also expects AI to substantially reduce the human role in writing policies and reports.
However, Clyde agreed with the vast majority of respondents that AI will have a net positive impact on cybersecurity jobs, creating many new roles related to the safe and secure use of AI in the workplace.
These could include specialists who vet that an AI model does not contain bias or has not been compromised, or who ensure that AI-based disinformation isn’t getting into the environment.
“If you think about it, there’s whole new opportunities for us,” said Clyde.
Tackling AI-Based Threats
The respondents also expressed significant concern about malicious actors using AI tools to target their organizations.
More than four in five (81%) highlighted misinformation/disinformation as the biggest threat. Worryingly, just 20% of IT professionals said they are confident in their own ability to detect AI-powered misinformation, and 23% in their company’s ability to do so.
Additionally, 60% said they are very or extremely worried that generative AI will be exploited by malicious actors, for example, to craft more believable phishing messages.
Despite this, only 35% believe that AI risks are an immediate priority for their organization to address.