South Korea's "Framework Law on Artificial Intelligence Development and Trust Building"
Does Asia really not regulate AI?
Reporting on governance and digital regulation is a constant fight against clichés, which take root in the public conversation in ways that are rarely innocent. Last month another such cliché fell: the idea that Asia "does not regulate" AI, in pursuit of technological development (which, of course, is supposedly never compatible with guaranteeing fundamental rights).
Well, it is no longer just China or Vietnam: the Asian landscape of artificial intelligence governance now includes a decisive milestone. This January, South Korea's "Framework Law on Artificial Intelligence Development and Trust Building" came into force. And when analyzing the text, it is impossible not to think of the "Brussels Effect": the architecture of the Korean law largely mirrors the fundamental pillars of the EU AI Act.
I highlight only a few obvious similarities between the two texts:
1. Risk classification: The Korean law introduces the concept of "High-Impact AI". The definition is strikingly similar to the European "high-risk" category, covering systems that affect human life, safety, or fundamental rights in areas such as energy, health, transport, employment assessments, and government decisions. As in Europe, providers of these systems are required to take enhanced safety, reliability, and risk-management measures before deployment.
2. Transparency: The Korean law requires labeling of artificially generated content and obliges operators to notify users when they interact with generative AI or when they consume content that could be mistaken for reality (audio, images, video).
3. Governance and institutional supervision: Korea establishes a "National AI Committee", tasked with deliberating on key policies, and creates an "AI Safety Research Institute" to define risks and technical standards.
4. Extraterritorial scope and mandatory local representation: The Korean law explicitly applies to acts performed abroad that affect the domestic market or users in Korea, mirroring the scope of the EU Regulation, which reaches third-country providers if the output of their system is used in the EU. Korea further requires that AI operators without an establishment in Korea designate a "Domestic Representative" to act as a liaison with the government.
5. Digital literacy: The Korean law places a strong emphasis on education as a foundation for the "AI Society". The government must establish policies for education and for promoting a "safe and trustworthy AI society" and the practice of AI ethics. Educational projects are envisaged to improve awareness of the safe development of AI, as well as specific educational support for executives and employees of small and medium-sized enterprises (SMEs) in adopting and using the technology.
6. Penalty regime: For companies that do not comply with its provisions, Korea relies mainly on administrative fines and corrective orders, generally lower than the European ones; but there are also specific criminal sanctions for breaches of confidentiality (here it goes much further than the AI Act).
It is true that the Korean law contains many pro-business and development-oriented measures that the European law lacks (with the exception of the sandbox system), but it is also true that the EU is promoting its strategic investment and development plans through other instruments (such as the AI Continent Action Plan).
Are we witnessing the birth of a global regulatory consensus on AI, from which only the US federal government seems to stand apart?
Let's see if we Europeans are the only ones who have believed the Silicon Valley tale about our "over-regulation" of AI...