
[Editorial] Deepfake risks in election

Election watchdog, portals urged to step up deepfake detection amid advance of AI video

The threat of "deepfake" videos and photos is mounting ahead of South Korea’s parliamentary election slated for April 10, posing a serious challenge to both election watchdog officials and voters, as forged content is easy to create and circulate with fast-evolving artificial intelligence tools.

The National Election Commission said Monday it had caught 129 deepfakes in violation of election law between Jan. 29 and Feb. 16, a significant figure that deserves public attention, especially as the count is feared to rise as election day approaches.

The NEC spotted the “politically motivated” deepfakes -- fake images or videos that appear to be real -- by screening major social media and online community platforms. The election watchdog took on this duty after the National Assembly revised the Public Official Election Act in December to ban the use of deepfake images in election campaigns.

The danger of deceptive AI-generated content was revealed in the 2022 local elections, when an AI-generated video circulating on social media falsely showed President Yoon Suk Yeol endorsing a local candidate from the ruling party.

Under the revised law, which came into effect on Jan. 29, those who use deepfake videos, photos or sound in connection with the election could face up to seven years in prison or a fine of up to 50 million won ($37,500).

Legal penalties and warnings, however, can only go so far against deepfake producers armed with sophisticated AI tools and techniques for remaining digitally anonymous.

Given the relentless pace at which online posts are produced and the growing availability of AI-based editing tools, it is only a matter of time before a torrent of political deepfakes spreads through online communities and mobile messengers.

Outside Korea, global tech firms have joined forces to deal with the negative impact of powerful AI-generated deepfakes, especially as elections are held in 76 countries this year amid increasing attempts at AI-driven interference.

Last Friday, companies including Adobe, Amazon, Google, Meta, Microsoft, OpenAI and TikTok signed a pact at the Munich Security Conference to voluntarily take preventive measures against the misuse of AI in connection with democratic elections.

The joint move came as instances of AI-generated misinformation, like fake robocalls and audio recordings impersonating candidates, threatened to disrupt elections. However, the accord is largely symbolic, as it does not commit the companies to removing or banning deepfakes. Instead, it merely outlines methods they will use to detect and label deceptive AI content on their platforms. It remains to be seen when such labeling will be widely implemented, given that global tech firms have yet to roll out specific solutions to identify and label AI-generated content.

On Tuesday, Korea’s two major portals announced they would take steps against AI-generated deepfakes. Naver, the country’s biggest portal, said its chatbot-based AI service would not respond to users’ requests to generate “inadequate content,” such as composite images of faces. Naver said it is running a monitoring team dedicated to detecting online posts that violate election regulations, while also analyzing new patterns of abusive content such as deepfakes.

Kakao, operator of Korea’s dominant mobile messenger app, said it is considering adopting watermark technology for content generated by its AI service, adding that a specific implementation schedule has not yet been determined.

Last week, OpenAI shocked the world with Sora, a new AI model that turns text descriptions into photorealistic videos, demonstrating the alarming pace of advances in AI-based video technology. The model is drawing not only praise from experts but also concern about the risks of video deepfakes during global elections in 2024.

Considering the rapid evolution of deepfake technology, Korea’s election officials, portals and AI service operators are urged to make joint efforts to enhance deepfake detection and verification solutions.

By Korea Herald (khnews@heraldcorp.com)