On September 8, 2025, the Ministry of Science and ICT (MSIT) released the draft Enforcement Decree of the Framework Act on Artificial Intelligence Development and Establishment of a Foundation for Trustworthiness (the AI Framework Act), together with directions for the enactment of subordinate legislation.
The draft Enforcement Decree (comprising 34 provisions including the Addenda) was prepared following more than 70 rounds of consultations with diverse stakeholders, including industry, academia, civil society, and relevant ministries. The MSIT emphasized that the draft aims to provide greater clarity on the scope of regulated entities and the criteria for determining whether specific AI systems are subject to statutory obligations, while striking a balance between global regulatory trends and the realities of Korea’s AI industry. In particular, the MSIT highlighted that the draft Enforcement Decree is designed to prioritize promotion of AI development while mitigating regulatory uncertainty and compliance burdens.
Public explanatory sessions and consultations with relevant stakeholders are scheduled for the second through fourth weeks of September, during which interested parties may submit comments.
This article highlights the key provisions of the draft Enforcement Decree and their implications for AI businesses.
I. Transparency, Safety, and High-Impact AI Obligations
1) Transparency Obligations
- Obligation to provide advance notice
An AI business intending to provide a product or service utilizing high-impact AI or generative AI must provide advance notice by one of the following methods (Article 22(1) of the draft Enforcement Decree):
(1) stating such fact directly on the product or service (the Products, etc.), or setting it out in a contract, user manual, terms of service, etc.;
(2) displaying such fact on the user’s screen or device;
(3) posting such fact at the place where the Products, etc. are provided (including places reasonably related thereto) in a manner that is easy to recognize; or
(4) any other method recognized by the Minister of Science and ICT, taking into account the characteristics of the Products, etc.
- Obligation to label outputs generated by generative AI
An AI business may label outputs generated by generative AI in a format that can be recognized by humans or machines (Article 22(2) of the draft Enforcement Decree). The MSIT has indicated that such labeling may take the form of an invisible watermark, and it plans to issue transparency guidelines by December 2025, including unit standards for labeling and examples of the use of invisible watermarks.
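To make the idea of a machine-recognizable label concrete, the sketch below embeds a simple invisible marker in text output using zero-width Unicode characters. This is an illustrative toy only: the names and the labeling scheme are the author's assumptions, and the actual unit standards and watermark formats will be set by the MSIT transparency guidelines expected by December 2025.

```python
# Toy "invisible watermark" for generative-AI text output, sketched with
# zero-width Unicode characters. Hypothetical scheme for illustration only;
# it does not reflect the forthcoming MSIT labeling standards.

ZW0 = "\u200b"  # zero-width space      -> encodes bit 0
ZW1 = "\u200c"  # zero-width non-joiner -> encodes bit 1

def embed_label(text: str, label: str = "AI") -> str:
    """Append the label as an invisible (zero-width) bit sequence."""
    bits = "".join(f"{ord(ch):08b}" for ch in label)
    marker = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    return text + marker

def extract_label(text: str) -> str:
    """Recover the invisible label from marked text, if present."""
    bits = "".join("1" if ch == ZW1 else "0"
                   for ch in text if ch in (ZW0, ZW1))
    return "".join(chr(int(bits[i:i + 8], 2))
                   for i in range(0, len(bits), 8))

marked = embed_label("Generated paragraph.", "AI")
print(extract_label(marked))  # AI
```

The marker is invisible to human readers but detectable by software, which is the distinction the draft Enforcement Decree draws between human- and machine-recognizable labeling.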
- Obligation to label deepfake outputs
With respect to deepfake outputs (i.e., virtual sounds, images, or videos generated by an AI system that are difficult to distinguish from reality), an AI business shall provide notification or labeling in a manner that enables users to clearly recognize such outputs, taking into account the following (Article 22(3) of the draft Enforcement Decree):
(1) notification or labeling by a method through which users can easily confirm the contents by means of vision, hearing, or by using software, etc.; and
(2) notification or labeling by a method that takes into account the age, physical conditions, and social conditions of the principal users.
- Exemptions from transparency obligations
The transparency obligations shall not apply in cases where (i) it is evident, taking into account the product or service name, statements displayed on the user’s screen, or indications on the exterior of the product, that the product or service is operated on the basis of high-impact AI or generative AI, or (ii) the AI system is used solely for the internal business purposes of the AI business (Article 22(4) of the draft Enforcement Decree).
2) Obligation to Ensure Safety
The draft Enforcement Decree defines an AI system subject to the obligation to ensure safety as an AI system whose cumulative computation used for learning is not less than 10²⁶ floating-point operations and which falls within the criteria publicly notified by the Minister of Science and ICT, taking into account the level of development of AI technology and the degree of risk (Article 23(1) of the draft Enforcement Decree).
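To put the 10²⁶ figure in context, the sketch below estimates cumulative training compute with the widely used industry heuristic of roughly 6 FLOPs per parameter per training token. The heuristic and the example model size are the author's illustrative assumptions, not part of the decree; only the 10²⁶ threshold comes from Article 23(1).

```python
# Rough estimate of cumulative training compute using the common heuristic
# FLOPs ~ 6 x N (parameters) x D (training tokens). The 1e26 threshold is
# from Article 23(1) of the draft Enforcement Decree; the heuristic and the
# example figures below are illustrative assumptions only.

THRESHOLD_FLOPS = 1e26

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense model."""
    return 6.0 * params * tokens

# Hypothetical example: a 70B-parameter model trained on 15T tokens.
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e}")            # 6.30e+24
print(flops >= THRESHOLD_FLOPS)  # False: below the 1e26 threshold
```

Under this heuristic, only models trained at a scale well beyond today's typical large models would cross the 10²⁶ threshold, which is consistent with the MSIT's stated intent to confine the safety obligation to frontier-scale systems.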
3) Criteria for High-impact AI
An AI system that may cause a significant impact on, or pose a risk to, human life, physical safety, or fundamental rights, and that is utilized in specified sectors such as energy, healthcare, nuclear power, transportation, and education, shall constitute high-impact AI (Article 2(4) of the AI Framework Act).
The draft Enforcement Decree specifies the criteria for determining whether an AI system constitutes high-impact AI, and provides that the Minister of Science and ICT shall determine such status by comprehensively taking into account (i) the area of use, (ii) the extent, severity, and frequency of risks to fundamental rights, and (iii) the particular characteristics of the area of application (Article 24(2) of the draft Enforcement Decree).
Where an AI business provides products or services utilizing high-impact AI, the AI business shall post on its website, etc., the following matters (Article 26(1) of the draft Enforcement Decree):
(1) the key contents of the risk management plan, including the risk management policies and organizational framework, under Article 34(1)(i) of the AI Framework Act;
(2) the key contents of the standards and explanation methods under Article 34(1)(ii) of the AI Framework Act;
(3) measures for the protection of users; and
(4) the name and contact information of the person who supervises and manages the relevant high-impact AI.
4) AI Impact Assessment
Where an AI business provides products or services utilizing high-impact AI, the AI business shall endeavor, in advance, to assess the impacts on fundamental rights of individuals (Article 35(3) of the AI Framework Act). Pursuant to the draft Enforcement Decree, the matters to be included in such impact assessment are as follows (Article 27(1) of the draft Enforcement Decree):
(1) identification of the subjects who may potentially be affected in their fundamental rights by the products or services utilizing the relevant high-impact AI (including the identification of individuals or groups with certain characteristics);
(2) identification of the types of fundamental rights that may be affected in connection with the relevant high-impact AI; and
(3) the contents and scope of the social and economic impacts on fundamental rights of individuals that may arise from the relevant high-impact AI.
II. Operation of a Guidance Period for Administrative Fines and Incentives for Impact Assessments
Under the AI Framework Act, an administrative fine of up to KRW 30 million may be imposed: (i) where advance notice related to transparency has not been made; (ii) where a foreign business exceeding certain thresholds fails to designate a local representative; or (iii) where a corrective order issued for a violation of the AI Framework Act is not complied with (Article 43(1) of the AI Framework Act).
The MSIT has announced that, in order to minimize confusion for companies during the initial enforcement of the AI Framework Act and to achieve an effect substantially equivalent to a regulatory grace period, it will operate a guidance period for administrative fines. The specific duration and details of such guidance period will be finalized through consultation with stakeholders.
In addition, the MSIT has stated that it intends to reduce corporate burdens and provide incentives by offering consulting and financial support for safety and trustworthiness certifications and for the conduct of impact assessments, thereby encouraging voluntary participation. The MSIT also plans to support business operators in fulfilling their obligations, including the confirmation of high-impact AI and the implementation of transparency measures.
III. Subjects and Criteria for Support to Foster the AI Industry
The AI Framework Act sets out provisions to foster the development of AI technology and the AI industry, including support for R&D projects for AI technology development (Article 13 of the AI Framework Act), the establishment of policies related to training data (Article 15 of the AI Framework Act), and support for companies in the introduction and utilization of AI technology (Article 16 of the AI Framework Act).
The draft Enforcement Decree specifies the subjects and criteria for each of the foregoing promotion provisions, and the main contents are as follows:
<Main contents of subjects and criteria for support for fostering the AI industry>
- Businesses eligible for support for training data (Article 12 of the draft Enforcement Decree)
  - Businesses for the development of technologies for the production and processing of training data
  - Businesses related to the production, collection, management, distribution, and utilization of training data for the development of AI services
  - Businesses related to the development of standards and guidelines for training data, etc.
- Support measures for the introduction and utilization of AI technology (Article 15 of the draft Enforcement Decree)
  - Provision of information on AI technology
  - Education and technical support necessary for the protection of users or affected persons
  - Establishment and provision of AI computing infrastructure, etc.
IV. Implications and Future Plans
The MSIT has announced that, based on the results of stakeholder consultations, it plans to proceed with administrative legislation procedures beginning in October, with the aim of completing the enactment of the Enforcement Decree by December 2025. The MSIT also intends to publish, around December, final versions of key guidelines, including:
- Guideline on Criteria and Examples of High-Impact AI
- Guideline on Responsibilities of High-Impact AI Businesses
- Guideline on Safety Obligations for AI
- Guideline on Transparency Obligations for AI
- Guideline on AI Impact Assessment
The Enforcement Decree is expected to be continuously supplemented through stakeholder consultation, and because it contains detailed standards for determining obligations of AI businesses, companies should carefully monitor developments. It will also be important to track the finalized guidance period for administrative fines and to actively refer to the forthcoming guidelines once published.
If you have any questions regarding this article, please contact the authors listed below:
Hwan Kyoung KO ([email protected])
Sunghee CHE ([email protected])
Tae Joo KIM ([email protected])
Kyung Min SON ([email protected])
Il Shin LEE ([email protected])
Jaeyoung CHANG ([email protected])
Matt Younghoon MOK ([email protected])
For more information, please visit our website: www.leeko.com