# Infosec2024: Organizations Urged to Adopt Safeguards Before AI Adoption

Organizations must urgently put safeguards in place before deploying generative AI tools in the workplace, cybersecurity leaders pleaded during Infosecurity Europe 2024.

Kevin Fielder, CISO at NatWest Boxed & Mettle, noted that while AI adoption is surging in organizations, this is often done without addressing significant security risks.

One major risk is prompt injection attacks on large language models (LLMs), which are particularly hard to prevent because they revolve around threat actors posing questions to the model for malicious purposes, such as coaxing a customer service chatbot into sharing users' private account details.
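As a rough illustration of one mitigating layer, the sketch below filters a chatbot's reply for account-number-like patterns before it is returned to the user. The function name and regular expressions are hypothetical, and pattern matching alone will not stop a determined prompt injection; it is one defensive layer among several.

```python
import re

# Hypothetical patterns for data the chatbot must never return verbatim.
# Real deployments would derive these from the organization's own data classification.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{8,12}\b"),                       # account-number-like digit runs
    re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),   # IBAN-like strings
]

def redact_sensitive(reply: str) -> str:
    """Redact anything that looks like account data from an LLM reply.

    This is an output-side guardrail: even if a prompt injection persuades
    the model to reveal data, the filter strips it before the response
    reaches the user. It is a partial control, not a complete defence.
    """
    for pattern in SENSITIVE_PATTERNS:
        reply = pattern.sub("[REDACTED]", reply)
    return reply

# Example: a reply the model should never have produced in the first place.
print(redact_sensitive("Your account 12345678 has a balance of £250."))
# -> "Your account [REDACTED] has a balance of £250."
```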

Fielder added that if LLMs are not properly trained and managed, they could damage the business, for example by embedding biases that lead to poor customer advice.

He noted that AI is often “creeping in” to organizations without them realizing it, as many of the apps and other tools they already use add AI features.

“Unless you’re very careful, you’ll be using a lot of AI without knowing it,” Fielder warned.

Erhan Temurkan, Director of Security and Technology at Fleet Mortgages, told Infosecurity that this is an issue security leaders are now regularly observing.

He noted that many services procured by the business, such as software-as-a-service (SaaS) tools that previously passed a risk assessment, have since added an AI element.

“How many tools are we using where we don’t know they are using AI at the back end? That’s going to be a big picture issue that we’re going to have to learn to live with and understand what that means for our data coverage overall,” Temurkan explained.

## A Risk-Based Approach to AI Security

Fielder advocated a risk-based approach to AI safety measures, with controls scaled to the risk level of the tasks these tools are used for; a simple policy mapping is sketched after the list.

  • Low-risk use, such as generating code snippets. Here, standard security processes such as the software development life cycle (SDLC) should be sufficient, with regular penetration tests and quality assurance testing.
  • Medium-risk use, including customer help chatbots. For these uses, testing should be more frequent and more thorough.
  • High-risk use, such as advising on customer decisions like mortgages. Fielder said these cases warrant a thorough analysis of how the AI model is built and how it reaches a decision.
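A minimal sketch of how such a tiering might be encoded as policy is below. The tier names and example uses mirror Fielder's three levels, but the data structure, field names and `controls_for` helper are assumptions introduced purely for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class RiskTier:
    """Controls required for an AI use case at a given risk level."""
    name: str
    example_use: str
    required_controls: list[str] = field(default_factory=list)

# Tiering loosely based on the three levels described in the list above.
AI_RISK_TIERS = [
    RiskTier(
        name="low",
        example_use="code snippets",
        required_controls=["standard SDLC", "regular pen tests", "QA testing"],
    ),
    RiskTier(
        name="medium",
        example_use="customer help chatbots",
        required_controls=["standard SDLC", "more frequent and thorough testing"],
    ),
    RiskTier(
        name="high",
        example_use="advising on customer decisions such as mortgages",
        required_controls=[
            "analysis of how the model is built",
            "explainability of how it reaches a decision",
        ],
    ),
]

def controls_for(tier_name: str) -> list[str]:
    """Look up the controls expected for a named risk tier."""
    for tier in AI_RISK_TIERS:
        if tier.name == tier_name:
            return tier.required_controls
    raise ValueError(f"unknown risk tier: {tier_name}")

print(controls_for("high"))
```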

Temurkan said it is vital that security leaders work with businesses to find a balance between security and usability. For private generative AI tools, if there is control and oversight over the data being input into them, then their use should be encouraged.

For public generative AI tools, where there is a lack of visibility of the data leaving the organization, he noted the approach of many CISOs is to lock them down.
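One way to operationalise that distinction is an egress check that only lets prompts leave for approved, organization-controlled endpoints. The sketch below is a simplification under assumed names: the endpoint allow-list and the `contains_classified_data` flag are hypothetical stand-ins for whatever data-classification tooling the organization actually runs.

```python
# Hypothetical allow-list of private, organization-controlled GenAI endpoints.
APPROVED_PRIVATE_ENDPOINTS = {"https://genai.internal.example.com"}

def may_send_prompt(endpoint: str, contains_classified_data: bool) -> bool:
    """Decide whether a prompt may leave the organization for a given endpoint.

    Private, overseen endpoints are allowed; public tools are blocked whenever
    the prompt carries data the organization cannot see leaving its boundary.
    """
    if endpoint in APPROVED_PRIVATE_ENDPOINTS:
        return True
    # Public endpoint: block anything containing classified or customer data.
    return not contains_classified_data

print(may_send_prompt("https://public-chatbot.example.com", contains_classified_data=True))
# -> False: classified data must not leave for an unvetted public tool.
```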

## Taking a Step Back to Establish Controls

Fielder argued that organizations should be more cautious than they currently are about deploying AI tools, but acknowledged that businesses do not want to be left behind by rivals in AI adoption.

He believes organizations should be very careful about how they train their internal models, ensuring the data used is as broad as possible to prevent issues like bias occurring.

Security teams need to be on hand to advise on this training to reduce the risk of prompt injection attacks.

Fielder added that the top security challenge around AI is to understand all the tools that use this technology throughout the enterprise and control the flow of data in and out of them.

Rik Ferguson, VP Security Intelligence at Forescout, told Infosecurity that this is not a new problem: deployments of data loss prevention (DLP) tools have been hindered in the past because data identification and classification were not undertaken prior to rollout.

The same issue will occur with AI, he noted, as organizations realize they need to undertake these processes first.

“You can’t start rolling AI out within an organization that’s going to be trained on your data until you understand what data you have, who should have access to it, and have labelled it as such. Otherwise, your risk of unintended exposure or breach of the training data and corporate secrets is way too high to be acceptable,” Ferguson explained.
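A minimal sketch of the kind of pre-training gate Ferguson describes appears below, assuming documents already carry a classification label. The label names, the `Document` structure and the "allowed for training" set are hypothetical; the point is simply that unlabelled or sensitive data is excluded before it can reach an internal model.

```python
from dataclasses import dataclass

@dataclass
class Document:
    path: str
    classification: str  # e.g. "public", "internal", "confidential", "secret"

# Hypothetical policy: only these labels may enter an internal model's training set.
ALLOWED_FOR_TRAINING = {"public", "internal"}

def training_corpus(documents: list[Document]) -> list[Document]:
    """Return only the documents whose classification permits use in training.

    Documents with an unknown label are excluded by default: if the data has
    not been classified, it is not understood well enough to train on.
    """
    return [doc for doc in documents if doc.classification in ALLOWED_FOR_TRAINING]

docs = [
    Document("handbook.pdf", "internal"),
    Document("customer-ledger.csv", "confidential"),
    Document("press-release.txt", "public"),
]
print([d.path for d in training_corpus(docs)])
# -> ['handbook.pdf', 'press-release.txt']
```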
