The jury of the UK’s Most Innovative Cyber SME competition has crowned Mindgard as the 2024 winner.
The award was announced during Infosecurity Europe by Saima Poorghobad, portfolio director at Reed Exhibitions.
She said the panel of cybersecurity experts judging the competition praised the AI security provider for “delivering a very articulate pitch that displayed their originality, impact and market need.”
The startup was selected because it “tackles some of the most pressing challenges in AI deployment,” she added.
An AI Red Teaming Platform Powered by Academic Research
Mindgard is a UK-based startup offering a platform for continuous red teaming and remediation of AI vulnerabilities so that enterprise security teams can deploy different AI use cases.
Speaking to Infosecurity after the ceremony, Nipun Gupta, Mindgard’s head of product, explained: “We help enterprises unlock value from AI by securely testing the AI models they use.”
Mindgard offers a software-as-a-service (SaaS) platform that helps organizations implementing AI use cases in their workflows test these different models against specific adversarial attacks and find particular vulnerabilities.
“Traditionally, red teaming has been associated with simulating adversary-like attacker behavior against IT systems. We are trying to do this with AI models,” Gupta said.
Although the startup emerged from stealth mode while generative AI was taking the world by storm, its head of product said the red teaming platform’s capabilities extend beyond generative AI and large language models (LLMs).
“We are capable of understanding adversarial machine learning attacks against all types of neural networks, including deep learning models and GenAI applications beyond LLMs. Our platform can replay attacks in a way that allows us to cut through the black box effect of most AI models and report on the security posture of the models based on the risks they are vulnerable to.”
Stefan Trawicki, founding machine learning engineer at Mindgard, continued: “Our understanding of the fundamentals of machine learning means that we don’t really mind about which AI models we are dealing with.”
Some of the red teaming methods the Mindgard platform uses include model input/output analysis and application programmable interface (API) analytics.
“We can identify AI prompt hacking vulnerabilities like jailbreaks, but also more fundamental flaws in the AI models such as ‘extract model’ and ‘extract training data’ vulnerabilities,” Trawicki added.
Gupta said Mindgard’s platform is designed to complement rather than replace internal red teaming work. “We can supercharge them and allow them to focus on the specific attacks AI models that their organizations are relying on are vulnerable to.”
From the Lab to the Startup
The story of Mindgard dates back to 2018, when Peter Garraghan, a professor of computer science at Lancaster University, started to study adversarial machine learning at his research lab, created in 2014.
“Initially, we started by realizing how disparate the research in adversarial machine learning was. Most of the projects at the time were written in a very academic way and were not fit for the industry. We started unifying our own work into a user-friendly platform on which we could rapidly test AI models,” said Trawicki, who was part of Garraghan’s academic team.
The Lancaster University researchers then realized that no commercial product existed like the one they were building and created Mindgard in 2022.
The startup has raised £3 m ($3.8m) from Lakestar, IQ Capital and Osney Capital. Its customers are primarily large enterprises across all sectors.