(Yonhap)
An increasing number of IT companies in South Korea are developing artificial intelligence-based software to counter abusive online comments.
The AI-based profanity filters help them improve user experience by hiding offensive comments on their platforms, according to industry sources.
One of them is the nation’s top game company, Nexon, which in 2017 created an in-house AI research and development team called Intelligence Labs. The team of around 200 engineers has developed its own program that filters abusive comments and unauthorized advertisements. According to the company, the engine can also recognize newly coined words and offensive words disguised with symbols.
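As a rough illustration of how a filter can catch words disguised with symbols, a very simplified approach is to normalize look-alike characters before matching against a blocklist. The Python sketch below is a hypothetical example for readers, not Nexon’s actual engine; the symbol map and blocklist terms are placeholders.

```python
import re

# Hypothetical illustration only: normalize common symbol substitutions,
# then check the cleaned text against a small blocklist.
SYMBOL_MAP = str.maketrans({"@": "a", "$": "s", "1": "i", "0": "o", "3": "e"})
BLOCKLIST = {"badword"}  # placeholder term; a real list would be far larger

def is_abusive(comment: str) -> bool:
    normalized = comment.lower().translate(SYMBOL_MAP)
    # Strip spacing and punctuation tricks such as "b a d w o r d"
    normalized = re.sub(r"[^a-z]", "", normalized)
    return any(term in normalized for term in BLOCKLIST)

print(is_abusive("b@d w0rd!"))  # True: symbols and spacing no longer hide the term
```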
The technology has been applied to in-game chats for Nexon’s games since 2019.
Watcha, the operator of the over-the-top streaming service Watcha Play, also has its own profanity filter designed to monitor the comments and reviews that users leave on the film and TV titles the platform offers.
Watcha said its machine learning engine can monitor user comments and automatically hide those it deems inappropriate. The company hopes to improve the engine so it can remove malicious language and hate speech from its service, where many users come for useful and enjoyable comments.
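To give a sense of how such an engine might score and hide comments, the sketch below trains a toy text classifier and hides any comment whose predicted probability of being inappropriate crosses a threshold. It is a minimal, assumed example using scikit-learn with made-up training data, not Watcha’s actual system.

```python
# Minimal sketch, assuming scikit-learn and a small labeled sample.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "great film, loved it",
    "you are an idiot",
    "what a moving story",
    "trash review, trash person",
]
train_labels = [0, 1, 0, 1]  # 1 = inappropriate (toy labels for illustration)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

def moderate(comment: str, threshold: float = 0.7) -> str:
    # Hide the comment when the model is sufficiently confident it is abusive.
    prob_inappropriate = model.predict_proba([comment])[0][1]
    return "hidden" if prob_inappropriate >= threshold else "visible"

print(moderate("you are an idiot"))
```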
Korea’s top portal operator Naver developed an AI-enabled chatbot in April last year to stop users from being exposed to distasteful comments and prevent people from manipulating online comments in order to sway public opinion on political issues.
Naver has now introduced the bot on its webtoon and online news platforms to filter comments. It is also capable of identifying users who repeatedly post malicious comments on purpose, the company added.
“There are around 100,000 variations that one can make with a single swear word, and it is not 100 percent guaranteed that the AI-based filter is able to catch them all. More diverse models and big data will help construct a more accurate filter,” an official said.
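The variation count is easy to believe: if each character of a word can be written in several look-alike ways, the number of possible spellings multiplies quickly. The snippet below is my own illustration of that combinatorial growth, not the official’s exact figures.

```python
# Illustration of how substitutions multiply into many variants of one word.
from itertools import product

lookalikes = {"b": ["b", "8"], "a": ["a", "@", "4"], "d": ["d", "cl"]}
word = "bad"
variants = {"".join(p) for p in product(*(lookalikes.get(c, [c]) for c in word))}
print(len(variants))  # 2 * 3 * 2 = 12 variants from just three characters
```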
By Shim Woo-hyun (ws@heraldcorp.com)