The 2024 US presidential race is officially underway. Candidates on both sides are gearing up for battle, arming themselves with campaign ads targeted at their competitors and press tours designed to sway voters in their direction.
From printing dubious newspaper articles to pushing false narratives on social media, candidates have used – and will continue to use – various avenues to stretch the truth and place themselves in a more favorable position. Political propaganda, as these tactics are so often called, has played a role in elections for centuries. Even Octavian ran a propaganda campaign against Mark Antony more than 2,000 years ago in his rise to become Rome's first emperor.
It’s safe to say that political propaganda is here to stay, but its role in the upcoming presidential election may look a little different. AI deepfakes have been on the rise in recent years, with the number increasing at an annual rate of 900%, and there’s concern about how they will impact the upcoming presidential elections.
Deepfakes Enter the Race
Deepfakes are synthetic media, such as images and videos, that have been digitally manipulated to impersonate a person's likeness. You've likely seen a seemingly harmless example already, like Jim Carrey taking on the role of Jack Torrance in The Shining or Luke Skywalker's guest appearance in an episode of The Mandalorian; but for every one of these, there are many far more dangerous examples.
On the political front, AI-generated media has taken the shape of former President Donald Trump giving a speech about the Paris climate agreement and current President Joe Biden responding to questions at a press conference. With deepfake technology becoming more accessible, there is rising concern that supporters from both political parties can use deepfakes to spread false information and make the opposing candidate look bad.
As deepfake technology advances, it’s getting more and more difficult to distinguish what is real from what is fake. Not being able to identify a genuine image, video or soundbite from an artificial one can have serious consequences for voters, especially those on the fence about who to vote for. For example, a damaging deepfake believed to be authentic and viewed by millions of people could cost a candidate the election.
This isn't the first time, however, that concern about deepfakes has found its way into the political ring. The difference today is how they're being used. Before the 2020 presidential election, there was concern about deepfakes being used to spread misinformation about the modified electoral processes put in place as a result of the COVID-19 pandemic. Widespread misinformation about mail-in ballots or voting locations could disenfranchise a significant number of voters, but deepfakes falsely impersonating candidates were less of a concern at the time. Now, we're seeing an influx of fake videos and images that could influence people's decisions come election time.
A Step in the Right Direction
Because deepfake technology is still relatively new, there are few established safeguards against its use in spreading misinformation – and, adding fuel to the fire, the tech industry's recent AI arms race has only accelerated the release of deepfake generation tools. In an attempt to battle the rise of manipulated content online, companies and governmental bodies alike have taken steps to curb the effects of deepfakes.
On one hand, tech giants are working to implement strategies and roll out initiatives to keep deepfake development at bay. Last year, Google started prohibiting deepfake-generating AI on its Colaboratory platform, while Meta has been removing deepfakes and AI-generated content from its social networks since 2020.
On the other hand, state legislatures are pushing to pass new bills and to expand existing legal frameworks for issues like copyright infringement and defamation to cover the misuse of deepfakes. For example, earlier this year, New York congressman Joseph Morelle announced a bill to make AI-generated deepfakes illegal, while nine other states have already enacted laws that regulate deepfakes to some degree.
Fighting AI With AI
Machines are getting better and better at imitating reality. If we hope to keep detecting deepfakes and AI-generated content as they become more advanced, we have to turn to technology. While AI might be the catalyst behind deepfakes, it is also an essential element in detecting them. Because it can verify the liveness and legitimacy of an image, video or audio soundbite at a scale and level of detail humans cannot match, AI is far better equipped to decipher what is real and what is fake.
Liveness detection uses advanced AI and computer vision to identify spoofing attempts. By analyzing thousands of data points and tracing evidence that would go unnoticed by the human eye, AI can detect artificially generated content within seconds. Since AI models ingest massive amounts of data, their algorithms can compare potential deepfakes with known videos and images to flag when something has been modified. For example, AI can identify that Joe Biden’s press conference has been artificially altered by comparing it to videos that have already been authenticated and picking up on suspicious or unusual elements of the modified version.
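The comparison idea above can be sketched in miniature. Production deepfake detectors rely on deep neural networks trained on huge datasets, but the core notion of checking a suspect image against an authenticated reference can be illustrated with a simple perceptual "average hash": each bit records whether a pixel is brighter than the image's mean, and a large Hamming distance between hashes signals that the content has been altered. Every function name and the threshold below are hypothetical choices for this toy illustration, not part of any real detection library.

```python
# Toy sketch of comparing a suspect image against an authenticated
# reference. Real systems use neural networks; this uses a simple
# perceptual "average hash" on small grayscale images (2D lists of
# 0-255 brightness values). All names and thresholds are illustrative.

def average_hash(pixels):
    """One bit per pixel: 1 if the pixel is brighter than the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count of bit positions where the two hashes disagree."""
    return sum(a != b for a, b in zip(h1, h2))

def looks_manipulated(authentic, suspect, threshold=0.25):
    """Flag the suspect if its hash differs from the authenticated
    reference in more than `threshold` of its bit positions."""
    h1, h2 = average_hash(authentic), average_hash(suspect)
    return hamming_distance(h1, h2) / len(h1) > threshold

# A known-authentic 4x4 frame, a noisy-but-genuine copy, and a
# structurally altered fake (bright and dark regions swapped).
reference = [[10, 20, 200, 210],
             [15, 25, 205, 215],
             [12, 22, 202, 212],
             [14, 24, 204, 214]]
unchanged = [[11, 21, 201, 211],
             [16, 26, 206, 216],
             [13, 23, 203, 213],
             [15, 25, 205, 215]]
altered   = [[200, 210, 10, 20],
             [205, 215, 15, 25],
             [202, 212, 12, 22],
             [204, 214, 14, 24]]

print(looks_manipulated(reference, unchanged))  # False: minor noise only
print(looks_manipulated(reference, altered))    # True: content was swapped
```

A real pipeline would compare far richer signals (facial landmarks, compression artifacts, frame-to-frame consistency) rather than raw brightness, but the decision structure is the same: score the suspect against trusted material and flag it when the divergence crosses a threshold.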
Political propaganda has been around for centuries, but, because of how easily they blur the lines between real and fake, artificially generated videos and images have the potential to influence voters and sway their decisions more than previous tactics. Considering experts predict as much as 90% of online content could be artificially generated in the next few years, now is the time to harness AI's power to minimize the impact deepfakes could have on our most sacred democratic process.