# Infosec2024: Deepfake Expert Warns of “AI Tax Havens”

Global AI and deepfake regulations could be seriously undermined if some countries deliberately allow irresponsible products to be built within their jurisdictions, a leading AI expert has warned.

Speaking at the opening keynote of Infosecurity Europe 2024 this morning, Henry Ajder argued that although regulation is “fundamentally changing the landscape” of AI development, there are potentially major hurdles ahead.

“There will be different landscapes,” he said. “Different countries will have different attitudes and my concern is we might see the equivalent of AI tax havens – countries that intentionally do not put in place legislation, [in order] to attract investment … but it leads to irresponsible products being built which go on to have a global impact.”

## A Perfect Fake Storm

This could have major implications for democracy. Ajder, who described himself as a deepfake/generative AI “cartographer,” claimed that the world is facing a “perfect fake storm”: fake audio of public figures “leaked” to journalists as if it were a genuine hidden recording.


Such a recording may already have swung the Slovakian election last year in favor of a populist challenger to the pro-EU Progressive Slovakia Party.

The tactic was on show again when faked audio of Keir Starmer purported to capture the Labour Party leader launching a foul-mouthed tirade at an aide.

This trend will put increasing pressure on journalists, who must decide what is in the public interest to publish and what is merely mischief-making, or worse, Ajder argued.

The challenge is that as deepfakes become more commonplace and harder to spot, they also give bad actors plausible deniability for real things they’ve done. This “liar’s dividend” will lead to a “poisoning of the well, a corrupting of the information ecosystem,” Ajder argued.

## Need for Better Detection Tools

Unfortunately, “the FBI doesn’t really know what to do about this” and “detection tools are often not particularly robust, particularly around critical context,” he added.

False positives and false negatives remain a problem, leading Ajder to argue that – while they have a role – detection tools are not the panacea many assume them to be.

Watermarking technology is more robust but could still be undermined by “a high degree of compression on a piece of media,” Ajder said, adding that tools for removing and corrupting watermarks will inevitably appear.
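To make the compression point concrete, here is a minimal sketch (an illustration, not from the talk) of a naive least-significant-bit watermark. The embedded bits survive a lossless round trip but are typically scrambled by a single JPEG re-encode of the kind platforms routinely apply when media is shared. The LSB scheme, Pillow and NumPy are assumptions for the demo; production watermarks are far more robust, but lossy re-encoding degrades them for the same underlying reason.

```python
# Illustration only (not from the talk): a naive least-significant-bit (LSB)
# watermark, and how a lossy JPEG re-encode typically scrambles it.
# Assumes Pillow and NumPy are installed: pip install pillow numpy
import io

import numpy as np
from PIL import Image

rng = np.random.default_rng(seed=42)
WATERMARK = rng.integers(0, 2, size=64, dtype=np.uint8)  # 64-bit payload

def embed_lsb(img: Image.Image, bits: np.ndarray) -> Image.Image:
    """Hide `bits` in the least-significant bits of the first channel values."""
    pixels = np.asarray(img.convert("RGB")).copy()
    flat = pixels.reshape(-1)  # view onto the copy
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return Image.fromarray(pixels)

def extract_lsb(img: Image.Image, n_bits: int) -> np.ndarray:
    """Read the least-significant bits back out."""
    return np.asarray(img.convert("RGB")).reshape(-1)[:n_bits] & 1

original = Image.fromarray(rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8))
marked = embed_lsb(original, WATERMARK)
assert np.array_equal(extract_lsb(marked, WATERMARK.size), WATERMARK)  # intact

# Re-encode with lossy compression, as a social platform or messenger might.
buf = io.BytesIO()
marked.save(buf, format="JPEG", quality=75)
recompressed = Image.open(io.BytesIO(buf.getvalue()))

errors = int(np.sum(extract_lsb(recompressed, WATERMARK.size) != WATERMARK))
print(f"watermark bits corrupted by JPEG round trip: {errors}/{WATERMARK.size}")
```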

A more sophisticated solution to the challenge of deepfakes is “content provenance” – cryptographically secured metadata which is attached to media the moment it’s captured on a device or generated using an algorithm.

One noteworthy initiative is the Adobe-led C2PA standard, which provides a “nutrition label” for digital content to enhance transparency.
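To illustrate the underlying mechanism, here is a simplified sketch (not the actual C2PA manifest format): metadata is bound to a hash of the media bytes and signed at capture time, so any later edit to the file invalidates the signature. The Ed25519 keys, the JSON record layout and the field names here are illustrative assumptions.

```python
# Simplified sketch of the content-provenance idea (not the real C2PA
# format): bind capture metadata to a file's hash with a digital signature.
# Assumes the `cryptography` package is installed: pip install cryptography
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def _record(media: bytes, metadata: dict) -> bytes:
    """Canonical record tying the metadata to the exact media bytes."""
    return json.dumps(
        {"sha256": hashlib.sha256(media).hexdigest(), "metadata": metadata},
        sort_keys=True,
    ).encode()

def sign_provenance(media: bytes, metadata: dict, key: Ed25519PrivateKey) -> bytes:
    """Sign the media hash together with its 'nutrition label'."""
    return key.sign(_record(media, metadata))

def verify_provenance(media: bytes, metadata: dict, signature: bytes,
                      pub: Ed25519PublicKey) -> bool:
    """Recompute the record and check the signature; any edit breaks it."""
    try:
        pub.verify(signature, _record(media, metadata))
        return True
    except InvalidSignature:
        return False

# A device (or generator) would sign at capture time...
device_key = Ed25519PrivateKey.generate()
media = b"...raw image bytes..."
label = {"device": "example-camera", "captured_at": "2024-06-04T09:00:00Z"}
sig = sign_provenance(media, label, device_key)

# ...and anyone downstream can check the file is untouched.
pub = device_key.public_key()
print(verify_provenance(media, label, sig, pub))         # True
print(verify_provenance(media + b"!", label, sig, pub))  # False: file edited
```

In a real deployment the device key would chain to a trusted certificate rather than being generated ad hoc; the point of the sketch is simply that verification fails the moment a single byte of the media or its label changes.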

“This is the dynamic we need to be looking for moving forward. A world where we look for these secure standards. It’s going to take time and scaling, but this is the way,” Ajder concluded. “But there is no silver bullet when it comes to these challenges.”
