The UK government has announced a new AI safety research program that it hopes will accelerate adoption of the technology by improving resilience to deepfakes, misinformation, cyber-attacks and other AI threats.
The first phase of the AI Safety Institute’s Systemic Safety Grants Programme will provide researchers with up to £200,000 ($260,000) in grants.
Launched in partnership with the Engineering and Physical Sciences Research Council (EPSRC) and Innovate UK, it will support research into mitigating AI threats and potential major systemic failures.
The hope is that this scientific scrutiny will identify the most critical risks of so-called “frontier AI” adoption in sectors such as healthcare, energy and financial services, along with potential solutions that can feed into practical tools for mitigating those risks.
Science, Innovation and Technology Secretary Peter Kyle said his focus is to accelerate AI adoption in order to boost growth and improve public services.
“Central to that plan though is boosting public trust in the innovations which are already delivering real change. That’s where this grants programme comes in,” he added.
“By tapping into a wide range of expertise from industry to academia, we are supporting the research which will make sure that as we roll AI systems out across our economy, they can be safe and trustworthy at the point of delivery.”
In this first phase, the Systemic Safety Grants Programme will back around 20 projects with up to £200,000 each, roughly half of the £8.5m announced by the previous government at May’s AI Seoul Summit. Additional funding will become available as further phases are launched.
“By bringing together researchers from a wide range of disciplines and backgrounds into this process of contributing to a broader base of AI research, we’re building up empirical evidence of where AI models could pose risks so we can develop a rounded approach to AI safety for the global public good,” said AI Safety Institute chair, Ian Hogarth.
Research released in May revealed that 30% of information security professionals had experienced a deepfake-related incident in the previous 12 months, the second most commonly cited incident type after “malware infection.”