
Naver, Kakao strive to combat deepfake porn spreading online

(Getty Images Bank)

The rise of explicit, nonconsensual deepfake pornography being shared illegally online is prompting Korean platform operators to step up their monitoring while delaying the rollout of generative AI services.

Naver, the country's No. 1 search engine, said Wednesday it is running Clova GreenEye 2.0, an image analysis solution built on its AI model's vision technology, across its platforms to identify harmful and sexually explicit images.

While the solution is not specifically designed to detect deepfakes or other AI-generated content, it monitors all obscene content uploaded to the portal and its blog and community spaces, and automatically deletes content identified as inappropriate, Naver said.

The AI behind Clova GreenEye classifies images and content into four categories -- obscene, adult, pornographic and normal -- according to the internet content rating system of the Korea Communications Standards Commission. It compares images against the millions it was trained on to decide whether to delete or keep an image.
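The model itself is proprietary, so the following is only a minimal Python sketch of the decision step the article describes: a classifier emits a confidence score per category, and anything scoring above a removal threshold in a non-normal category is deleted. The category names follow the article; the threshold values and function names are illustrative assumptions.

```python
from enum import Enum


class Rating(Enum):
    # The four categories Naver cites, per the KCSC content rating system
    NORMAL = "normal"
    ADULT = "adult"
    OBSCENE = "obscene"
    PORNOGRAPHIC = "pornographic"


# Hypothetical cutoffs; Naver has not disclosed its actual thresholds.
REMOVE_IF = {
    Rating.OBSCENE: 0.80,
    Rating.PORNOGRAPHIC: 0.80,
    Rating.ADULT: 0.90,
}


def moderate(scores: dict) -> str:
    """Decide keep/delete from per-category confidence scores.

    `scores` stands in for the output of a vision classifier such as
    Clova GreenEye; only the thresholding logic is sketched here.
    """
    for rating, cutoff in REMOVE_IF.items():
        if scores.get(rating, 0.0) >= cutoff:
            return f"delete ({rating.value}, score {scores[rating]:.2f})"
    return "keep"


print(moderate({Rating.PORNOGRAPHIC: 0.93, Rating.NORMAL: 0.05}))  # delete
print(moderate({Rating.NORMAL: 0.97, Rating.ADULT: 0.02}))         # keep
```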

After first introducing the program in 2017 for use on its own platforms, the company upgraded the model in 2022 and began selling it as a solution to other companies.

The tech giant, which leads the country's generative AI field with Hyper Clova X, remains cautious about introducing image generation. The company recently announced an update to Hyper Clova X that enables the generative AI chatbot to recognize images and respond to user inquiries about them.

Naver has been developing AI image generation technology to compete with OpenAI's Dall-E, but it is still weighing whether to launch a similar service through its chatbot.

Naver also announced Clova Speech X, a voice synthesis program that can re-create voices from voice data inputs. The company reiterated that it has only developed the technology and has no immediate plans to launch a service based on it.

"When it comes to image and voice generation by AI, user safety is a bigger issue than the technology itself," a Naver official said.

"We will only consider launching services with those technologies after we come up with proper solutions to address the risks of misuse, and when we know those features are profitable businesswise."

In an effort to keep users from falling victim to phishing attacks, Kakao has introduced a "fake signal" feature in its anti-abuse system that automatically detects suspicious accounts on the KakaoTalk messenger app by analyzing account information and user history with AI and machine learning.
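Kakao has not disclosed which signals its system actually inspects, but detection of this kind typically scores accounts on behavioral features. A toy sketch, with every feature, weight and threshold an assumption for illustration (real systems would use a trained model rather than hand-set weights):

```python
from dataclasses import dataclass


@dataclass
class AccountStats:
    # Illustrative features only; not Kakao's actual signals.
    account_age_days: int
    messages_last_hour: int
    distinct_recipients: int
    prior_reports: int


def risk_score(a: AccountStats) -> float:
    """Toy linear risk score in [0, 1] built from common abuse heuristics."""
    score = 0.0
    if a.account_age_days < 7:
        score += 0.35  # brand-new accounts are a common phishing trait
    if a.messages_last_hour > 50:
        score += 0.25  # bulk messaging
    if a.distinct_recipients > 30:
        score += 0.20  # spraying many strangers
    score += min(a.prior_reports * 0.10, 0.20)  # capped report history
    return min(score, 1.0)


suspect = AccountStats(account_age_days=2, messages_last_hour=120,
                       distinct_recipients=80, prior_reports=1)
score = risk_score(suspect)
print(f"risk {score:.2f}: {'flag for review' if score >= 0.6 else 'ok'}")
```

Note that this scores account metadata, not message contents, which is consistent with Kakao's point below that chats themselves are off-limits.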

Kakao, however, noted there are limitations to its monitoring efforts, since conversations between individuals and in open chatrooms are completely protected from the company's surveillance.

"We cannot scan all the chats to find out if individuals are sending inappropriate content to others and prevent them from sending the content in advance. That would be infringing on privacy," a Kakao official said.

"But we are aware of the seriousness of obscene content including deepfake images and working with authorities when such crimes are reported."

Kakao has kept a low profile in the AI sector since it absorbed the AI research team of Kakao Brain in June. In July, Kakao Brain shut down its generative AI service that created images at users' request.

While closing the AI image generation service was part of a broader business strategy, Kakao stressed that the move also prevents users from creating explicit content.

"We do not have plans to launch such services soon. But we do have the technology, and if we were to roll out such a service in the future, we will make sure to have filtering solutions and an antiabuse system to protect users from explicit content," the Kakao official added.



By Jo He-rim (herim@heraldcorp.com)