South Korea Finalizes Framework for AI Basic Act: Legislative Notice for Enforcement Decree Concludes
The public notice period for the Enforcement Decree of the “Framework Act on the Advancement of Artificial Intelligence and Establishment of Trust” (commonly referred to as the 'AI Basic Act'), which establishes the institutional standards for the AI industry, concluded on December 22, 2025. With the Act set to take effect on January 22, 2026, the South Korean government is finalizing detailed regulations, prompting both domestic and international AI service providers to accelerate their compliance efforts.
The Ministry of Science and ICT (MSIT) released the draft Enforcement Decree on November 12. Compared to the initial draft unveiled in September, the final version specifies more concrete methods for fulfilling obligations and includes adjustments to minimize overlap with existing laws, such as the Personal Information Protection Act (PIPA). Industry experts note that the decree has moved beyond mere recommendations to become an essential standard that must be built in from the initial planning and design stages of AI services.
Mandatory Labeling for Generative AI
A cornerstone of the Enforcement Decree is the mandatory labeling of generative AI outputs. Under Article 31 of the AI Basic Act, AI providers must notify users that content has been generated by artificial intelligence in a manner that is easily recognizable.
The decree categorizes labeling methods into "Human-Perceptible" and "Machine-Readable" formats. Machine-readable formats include technical measures such as C2PA or metadata embedding. Even when opting for machine-readable methods, providers are required to inform users at least once via text or audio prompts that the content is AI-generated.
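As a rough illustration of the two formats the decree distinguishes, a provider might pair a machine-readable metadata payload (embedded alongside the content) with a human-perceptible notice shown to the user at least once. The sketch below is a minimal, hypothetical model: the field names and notice wording are illustrative assumptions, not language prescribed by the decree or the C2PA specification.

```python
import json
from datetime import datetime, timezone

def build_ai_content_labels(content_id: str, provider: str) -> dict:
    """Sketch of the two labeling formats the decree distinguishes.
    Field names are illustrative, not prescribed by the decree."""
    machine_readable = {  # embedded with the content, e.g. as file metadata
        "content_id": content_id,
        "generator": provider,
        "ai_generated": True,
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }
    human_perceptible = (  # surfaced to the user at least once, per the decree
        "This content was generated by artificial intelligence."
    )
    return {
        "machine_readable": machine_readable,
        "human_perceptible": human_perceptible,
    }

labels = build_ai_content_labels("img-0001", "ExampleAI")
print(json.dumps(labels["machine_readable"], indent=2))
print(labels["human_perceptible"])
```

In practice the machine-readable portion would be carried by a provenance standard such as C2PA rather than a bespoke JSON blob; the point of the sketch is only that both layers must exist, and that choosing the machine-readable route does not remove the one-time user-facing notice.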
However, to reduce the administrative burden on businesses, the decree waives redundant labeling if the content (such as deepfakes likely to be confused with reality) has already been labeled or disclosed in accordance with other relevant statutes.
Regulations on "High-Impact AI"
The decree also clarifies the regulation of "High-Impact AI," defined as systems that may significantly affect human life, physical safety, or fundamental rights. It establishes a procedure where providers can apply to the MSIT to confirm whether their service falls under the High-Impact category, with the government required to respond within 60 days.
Industry insiders view this confirmation process as a critical benchmark, as the classification of a service as High-Impact AI significantly increases the required level of risk management and documentation.
Integration with Existing Laws and Operational Accountability
A notable point for businesses is the coordination with other legal frameworks. The decree specifies that if a provider faithfully fulfills its obligations under the Personal Information Protection Act (PIPA), it is deemed to have satisfied the safety and reliability requirements of the AI Basic Act within the scope of personal data processing.
While this is expected to ease the burden of duplicate regulation, it does not exempt providers from duties regarding algorithmic risk management or accountability for AI outputs outside the realm of personal data.
Furthermore, the decree governs the overall operational systems of AI providers. Businesses must retain documents regarding risk management, explanation protocols, and user protection measures for five years. For overseas providers, the obligation to appoint a domestic agent to protect Korean users has been formalized.
Safety Requirements for High-Compute AI Models
For large-scale AI models with cumulative training computation exceeding 10²⁶ FLOPs, the establishment of risk identification and management systems is now mandatory. While few commercial models currently meet this threshold, discussions regarding the scope of application are expected to continue as hyper-scale AI development accelerates.
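To give a sense of scale for the 10²⁶ FLOPs threshold, a back-of-the-envelope check can use the common "training FLOPs ≈ 6 × parameters × tokens" heuristic from the scaling-law literature. This heuristic is an assumption for illustration only; the decree does not prescribe this measurement method, and the model sizes below are hypothetical.

```python
# Rough check against the cumulative-compute threshold using the
# "6 * N * D" heuristic (6 FLOPs per parameter per training token).
# This is an estimate, not the statutory measurement method.

THRESHOLD_FLOPS = 1e26

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6.0 * params * tokens

def exceeds_threshold(params: float, tokens: float) -> bool:
    return estimated_training_flops(params, tokens) >= THRESHOLD_FLOPS

# Hypothetical: a 1-trillion-parameter model on 15 trillion tokens
flops = estimated_training_flops(1e12, 15e12)
print(f"{flops:.1e} FLOPs -> exceeds threshold: {exceeds_threshold(1e12, 15e12)}")
```

Under this heuristic, even a trillion-parameter model trained on 15 trillion tokens lands at roughly 9 × 10²⁵ FLOPs, just under the bar, which is consistent with the article's observation that few commercial models currently meet the threshold.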
A key remaining issue is the extent of liability for companies that do not develop their own models but provide generative or high-impact services using global models via APIs.
The Path Forward: "Compliance by Design"
Although the government plans to implement a grace period of approximately one year after the law takes effect, industry tension remains high. Modifying the UX or system architecture of an AI service after launch incurs significant time and cost.
Jin Hyeonsu, Managing Partner at DECENT Law Firm, commented, "From the conclusion of this public notice period, it is essential for businesses to diagnose whether their services fall under the High-Impact or Generative AI categories. The core of preparation before the 2026 enforcement will be determining how to reflect labeling obligations on service screens and establishing a systematic framework for document management."
As the 2026 enforcement of the AI Basic Act approaches, the domestic AI industry faces the dual challenge of technological competition and trust management. Whether these institutional standards act as a barrier to innovation or a foundation for market stability will depend on how effectively enterprises respond.