The broad adoption of generative AI has come with an onslaught of misleading content online.
In a bid to help restore integrity to digital information, the UK’s National Cyber Security Centre (NCSC) and Canada’s Centre for Cyber Security (CCCS) have released a new report on public content provenance.
Provenance refers to a piece of content’s place of origin. To build stronger trust with external audiences, organizations need to improve how they address the public provenance of their information, the report says.
Commenting on the publication, Ollie Whitehouse, NCSC chief technology officer, said: “This new publication examines the emerging field of content provenance technologies and offers clear insights using a range of cyber security perspectives on how these risks may be managed.”
“While there is no single solution for ensuring and assuring trust in digital content, this collaboration looks to introduce key concepts and strategies which bear further investigation to help protect our collective digital security and prosperity.”
The Future of Content Integrity
There are ongoing efforts within industry to tackle content provenance, such as the Coalition for Content Provenance and Authenticity (C2PA), which benefits from the support of generative AI and big tech firms like Google, OpenAI, Meta and Microsoft.
However, there is now an emerging need for interoperable standards across various media types, including video, image and text documents. While there are content provenance technologies available, the area remains immature.
The key technologies involve trusted timestamps and cryptographically secure metadata, which help prove content hasn’t been tampered with. However, there are challenges around the development of these secure technologies, including how and when they are implemented.
Today’s technology also places an unfair burden on the end user, relying on them to understand provenance data. To assess the legitimacy of content they have received, users must essentially read and review its metadata themselves, rather than making that assessment from a watermark or similar signal.
A provenance system should allow a user to check who or what created the content, when it was created and if any edits or changes have been made.
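As a rough illustration of that model, a provenance record could bundle a content hash with the creator’s identity, a creation timestamp and an edit log, all bound by a cryptographic signature so that tampering is detectable. The sketch below uses a symmetric HMAC from Python’s standard library purely for brevity; real schemes such as C2PA’s Content Credentials rely on certificate-based asymmetric signatures, and every name and value here is illustrative, not taken from any actual specification.

```python
import hashlib
import hmac
import json

# Illustrative shared key; production systems use asymmetric keys and certificates.
SIGNING_KEY = b"demo-signing-key"

def make_provenance_record(content: bytes, creator: str,
                           created_at: str, edits: list) -> dict:
    """Bundle a content hash with creator, timestamp and edit history, then sign it."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "created_at": created_at,
        "edits": edits,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the record's signature is valid and the content matches its hash."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record.get("signature", ""))
            and unsigned["content_sha256"] == hashlib.sha256(content).hexdigest())

photo = b"original image bytes"
rec = make_provenance_record(photo, "news-desk", "2024-05-01T09:00:00Z", edits=[])
print(verify_provenance(photo, rec))       # untouched content verifies
print(verify_provenance(b"edited", rec))   # tampered content fails verification
```

The essential property is that both the content and its history are cryptographically bound together: altering either the media or the metadata invalidates the signature, which is what lets a checking tool answer the who, when and what-changed questions above.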
As cybercriminals increasingly use AI-generated images, videos and text to make scams more convincing, the ability to trace the origin and edit history of digital media offers a critical defense.
The NCSC and CCCS’s publication looks to help others navigate this complex space with confidence and clarity.
It also offers practical steps for organizations considering the use of provenance technologies.
