Organizations should stick to risk management basics, including well-written policies, effective training and clear accountability, if they want to ensure AI is used safely and securely, experts have argued.
A panel of experts on day two of Infosecurity Europe this morning agreed that accountability and training should go hand in hand.
“One of the things that’s really important [to build into] your policies is accountability. You have to make people accountable for the choices they make. It’s fine to give them the tools … but they need to be accountable for understanding that they’re doing it right,” argued University College London (UCL) CISO, Sarah Lawson.
“If you’re going to use it, then know how to use it. And then as a business, provide the right ways for them to do it, so give people that training.”
Others also emphasized the need for updated training programs so that employees can use AI responsibly and safely within the organization.
“There’s going to have to be some sort of awareness training to get the most effective use out of [GenAI] because if you ask it a question in the wrong way, you’ll not necessarily get the right answer,” said Blockmoor director of information and cyber security, Ian Hill.
Both Hill and Guildhawk CEO, Jurga Zilinskiene, agreed that prompt engineering will be a key skill going forward.
“We’re talking about machine learning but actually we need to be focusing on human learning. That’s probably the weakest link when we’re talking about AI,” Zilinskiene argued.
Guildhawk has created a prompt engineering task force in which business professionals, rather than technology teams, collaborate to decide on the kinds of output they want from GenAI and then design the prompts accordingly.
“Yes, technology people have a role to play in this, but it’s about the actual [business] professionals here,” Zilinskiene added.
Data Quality Is Key
Zilinskiene also pointed to data quality and governance as a key but often under-appreciated foundation for the safe, secure and optimized use of AI.
“One of the biggest weaknesses is the foundation … that we’re building this technology on. We need to have a strong dataset that is verified and that we can rely on. But who wants to invest in this, because it takes 80-90% of your development,” she argued.
UCL’s Lawson claimed that much of the early hype about AI is starting to drop off as a result, as users realize that key data models “are not as good as they’re cracked up to be.”