The United States is one of many nations around the world gearing up for a major election cycle in 2024. With the advent of publicly accessible artificial intelligence (AI) tools, there has been a rise in political deepfakes, requiring voters to acquire new skills to distinguish what is real.

On Feb. 27, Senate Intelligence Committee Chair Mark Warner (D-Va.) said America is “less prepared” for election fraud in the upcoming 2024 election than it was for the previous one in 2020.

This is in large part due to the rise of AI-generated deepfakes in the U.S. over the last year. According to data from SumSub, an identity verification service, North America saw a 1,740% increase in deepfakes, alongside a tenfold increase in the number of deepfakes detected worldwide in 2023.

Between Jan. 20 and 21, New Hampshire residents reported receiving robocalls imitating the voice of U.S. President Joe Biden that told them not to vote in the Jan. 23 primary.

The incident prompted regulators in the U.S. to ban AI-generated voices in automated phone scams a week later, making them illegal under U.S. telemarketing laws.

However, as with scams of all kinds, where there is a will, there is a way, regardless of any laws in place. As the U.S. gears up for Super Tuesday on March 5 — when the greatest number of U.S. states hold primary elections and caucuses — concern is high over false AI-generated information and fakes.

Cointelegraph spoke with Pavel Goldman-Kalaydin, the head of AI and machine learning at SumSub, to better understand how voters can prepare themselves to spot deepfakes and handle instances of deepfake identity fraud.

How to spot a deepfake

Kalaydin said that despite the tenfold increase already seen in the number of deepfakes worldwide, he expects the figure to grow even further as election seasons get underway.

He stressed two types of deepfakes to be aware of: those produced by “tech-savvy teams” utilizing advanced technology and hardware, such as high-end GPUs and generative AI models, which are often more difficult to detect, and those made by “lower-level fraudsters” using commonly available tools on consumer computers.

“It’s important that voters are vigilant in scrutinizing the content in their feed and remain cautious of video or audio content,” he said, adding:

“Individuals should prioritize verifying the source of information, distinguishing between trusted, reliable media and content from unknown users.”

According to the AI specialist, there are a number of telltale signs to watch for in deepfakes:

“If any of the following features are detected — unnatural hand or lips movement, artificial background, uneven movement, changes in lighting, differences in skin tones, unusual blinking patterns, poor synchronization of lip movements with speech, or digital artifacts — the content is likely generated.”

However, Kalaydin warned that the technology will continue to “advance rapidly” and said soon it will be “impossible for the human eye to detect deepfakes without dedicated detection technologies.”

Deepfake generation, distribution and solution

Kalaydin said the real problem is rooted in the generation of deepfakes and their subsequent distribution. While AI accessibility has opened the door of opportunity for many, that same accessibility is to blame for the increase in fake content. He added:

“The democratization of AI technology has granted widespread access to face swap applications and the ability to manipulate content to construct false narratives.”

Distribution of this deepfaked content then follows, with the lack of clear legal regulations and policies making it easier to spread such misinformation online.

“This, in turn, leaves voters ill-informed, fostering the risk of making poorly informed decisions,” Kalaydin warned.

As a potential solution, he urged mandatory checks for AI-generated or deepfaked content on social media platforms to inform users.

“Platforms need to leverage deepfake and visual detection technologies to guarantee content authenticity, protecting users from misinformation and deepfakes.” 

Another potential approach he suggested would involve employing user verification on platforms in which “verified users would bear responsibility for the authenticity of the visual content while non-verified users would be distinctly marked, urging others to exercise caution in trusting the content.”

This uneasy climate has prompted governments around the world to begin contemplating appropriate measures. India released an advisory to local tech firms stating that approval is needed before releasing new “unreliable” AI tools for public use ahead of its own 2024 elections.

In Europe, the European Commission created AI misinformation guidelines for platforms operating in the region in light of the multiple election cycles taking place there. Shortly after, Meta, the parent company of Facebook and Instagram, released its own strategy for the EU to combat the misuse of generative AI in content on its platforms.
