The European Union agreed Friday to a set of new controls aimed at regulating artificial intelligence, marking the world’s first comprehensive attempt to put limits on a fast-evolving technology that has drawn both enthusiasm and alarm.
The EU’s new law, called the “AI Act,” is yet to be formally approved by the European Parliament and the bloc’s 27 member states, but Friday’s political agreement signals that the legislation’s key points have been determined.
Thierry Breton, the EU’s internal market chief, said in a statement that the deal strikes a balance between nurturing AI’s potential and protecting people’s fundamental rights.
Under the AI Act, the EU would ban biometric scanning that categorizes people by certain characteristics, and would require general-purpose AI systems, such as those that underpin OpenAI’s ChatGPT, to meet new transparency requirements. In addition, the use of facial recognition would be allowed only under safeguards and exemption rules, and people would have to be clearly informed when they are viewing chatbot-generated images or “deepfakes.”
Under the proposed legislation, companies that violate the rules could face fines of up to 7 percent of global sales, a penalty steep enough that South Korean companies should review the rules in advance whenever their products and services involve AI technology.
The AI Act can be seen as a regulatory breakthrough, the product of forward-looking policy debates among EU member states. But skeptics question the law’s effectiveness and how quickly it can be applied. Indeed, it is far from straightforward to draw a regulatory line for the rapidly shifting field of AI, where revolutionary tools can emerge seemingly overnight and reshape how people write letters, code programs and draw pictures, among the myriad human activities being affected.
The EU’s pioneering AI Act itself went through heated internal debates due to the recent emergence of generative AI systems such as ChatGPT. Since the legislation’s first draft in 2021, ChatGPT, Google’s Bard chatbot and other general-purpose AI services -- also known as large language models -- have hit the market, pushing EU lawmakers to catch up and beef up the legislation.
Given that new AI solutions beyond the regulatory scope of the AI Act are bound to be developed in the coming years, EU policymakers will likely have to grapple with a frustrating mismatch between slow-paced regulation and galloping advances in AI technology.
Beyond the lag in the regulation’s implementation, some EU members had expressed concerns that over-regulating general-purpose AI systems could undermine the market position of European competitors to OpenAI in an entirely new market where trillions of dollars are estimated to be at stake.
Although the EU has taken the initial lead in formulating AI regulations, there is no doubt that companies in the US are spearheading major technological breakthroughs in chatbot services that can instantly create diverse content on command.
At least for a while, OpenAI’s ChatGPT, Microsoft’s Bing and Google’s Bard are likely to dominate chatbot-based services, easily outpacing smaller competitors.
With the regulatory and technological changes in AI sweeping the entire world, Korea is lagging behind on both fronts. The country is known for solid broadband networks and a host of leading technology firms, but it does not have a major digital platform or AI service that is widely used globally.
Korea is also stuck with outdated laws regarding AI. Experts call for speedy revisions to personal data laws and other rules so that domestic AI developers can better compete in the global market. But all 12 bills related to AI technology that have been filed over the past three years are idling at the National Assembly.
Given the pace at which AI technology is evolving, Korean policymakers must realize that the country risks missing out on major opportunities in the new era of generative AI.