Artificial intelligence is expected to deliver not only innovative new capabilities but also unknown risks. Little preparation is in place to deal with the potentially destructive threats that could emerge at the cutting edge of AI -- a dangerous "frontier" of the technology's own making.
The global AI Safety Summit, held at Bletchley Park north of London from Nov. 1-2, explored the concept of frontier AI. Participating countries agreed that substantial risks may arise from intentional misuse of frontier AI or from unintended failures to control it, with particular concerns about cybersecurity, biotechnology and disinformation.
At the first summit on AI safety, hosted by the UK government, 28 countries -- including South Korea, the US and China -- and the European Union signed and published the "Bletchley Declaration," in which the delegates affirmed "the urgent need to understand and collectively manage potential risks through a new joint global effort to ensure AI is developed and deployed in a safe, responsible way for the benefit of the global community."
The governments also noted risks beyond frontier AI, including bias and privacy, and jointly recognized the need to deepen their understanding of risks and capabilities that are not yet fully understood.
The two-day summit marks a step forward, as countries have finally begun discussing the unknown territory of frontier AI, which could disrupt a wide range of sectors if safety standards and regulations are not strengthened to keep pace with emerging technological breakthroughs.
Korea also contributed to the event, with President Yoon Suk Yeol offering his view by videoconference from Seoul.
“The emergence of generative AI, such as ChatGPT, has enhanced convenience in our lives and raised industrial productivity,” Yoon said.
“But the digital gap can worsen economic gaps and the rapidly increasing disinformation can undermine our freedom and threaten our democratic systems, including elections,” he added.
ICT Minister Lee Jong-ho, who attended the summit in Britain along with other delegates, stressed the need for global efforts to raise AI ethics standards, arguing that reliability and safety must be prioritized given the technology's transformative power.
The participation of Yoon and Lee in the summit was part of the country's efforts to tackle the potential risks of AI, even though Korea is only beginning to understand the perils of frontier AI and to formulate basic policy responses.
As Yoon noted during the summit, the Korean government has unveiled the Digital Bill of Rights, a set of guidelines designed to ensure that AI and other technologies advance, rather than hinder, human freedom, and to lay the foundation for preventing AI-generated fake news, among other objectives.
Critics say the Digital Bill of Rights has a long way to go before government agencies and major industry players translate it into specific actions. Still, it is meaningful that the government is actively engaged in global efforts to minimize AI-related risks, including by co-hosting a mini virtual summit with Britain in May 2024 as part of the AI Safety Summit. In addition, Korea is set to host the AI Global Forum to support the international organization launched under the United Nations, which will establish a global AI governance system.
But the government and companies have yet to come up with specific rules and principles to block AI-based scams, such as voice deepfakes targeting banks and account holders. The government is also urged to address disputes over copyright and privacy infringement in the AI-based generation of text, images and video.
Given the breakneck pace of advances in AI technology, the government must work closely with lawmakers to pass AI-related bills that have long been stalled at the National Assembly, and update policies and standards in a way that balances promoting innovation with preventing the negative aspects of frontier AI.