Hyeonsu “Elliot” Jin
Managing Partner | elliot@decentlaw.io

Elliot served as a corporate lawyer at Pyeongan Lawfirm and as in-house counsel for Chai Corporation, providing a wide range of corporate advisory services.
- Corporate / Startups
- Cross-border / Dispute Resolution
- Crypto
- VC / Financial Advisory
- IP Litigation
- Sports
- Education
- New York University, B.A. in Political Science
- Inha University School of Law, J.D.
- POSTECH Blockchain Expert Program
- Experience
- Legal Advisor to the Ministry of Gender Equality and Family
- Pyeongan Lawfirm (Corporate, Crypto, Criminal, Data)
- Chai Corporation (Legal Counsel)
- Kim & Chang (Intern)
- Yulchon (Intern)
- Korean Air (Intern)
- Licenses
- Attorney at Law, Korea
- MyData Manager
- Regular Member of the Blockchain Law Society
- Languages
- English
- Korean
- CASES
[Corporate / Startups]
- Corporate criminal litigation involving embezzlement, breach of trust by CEOs, drug-related offenses, and sexual crimes.
- Litigation and advisory for domestic and international mid-sized companies and startups on corporate damages and lawsuits.
- M&A, legal due diligence, investment agreements, and VC/PE corporate legal advisory.
- Startup investment agreements, terms of service, and personal data legal advisory.
- Inter-corporate dispute resolution and civil/criminal litigation.

[Cross-border / Dispute Resolution]
- Multinational civil, criminal, and IP dispute resolution and litigation.
- Establishment of corporations and opening of bank accounts in Singapore, the BVI, and Switzerland.
- Review of and advisory on English-language supply contracts for international electric vehicle company T.
- English-language contract drafting, review, and translation for international record label W.
- English-language contract drafting, review, and translation for fintech company K.

[Crypto]
- Comprehensive tax audit advisory for H, Korea's largest virtual asset investment company.
- Comprehensive business structure consulting for P, a P2E company issuing virtual assets.
- ICO, SAFT, and exchange acquisition contracts for virtual asset issuer B.
- Review of and advisory on white papers for virtual asset and NFT issuers.
- Tax investigation response advisory for algorithmic trading companies U and B.
- Business model structure review and advisory for K's NFT trading platform operations.
Business Advisory
Legal Review Case: AI Healthcare Service Terms of Use & Privacy Policy
Client Information: Corporate Client / Service Operator
Case Details: Decent Law Firm's Corporate Legal Team conducted a comprehensive legal review of the Terms of Use and Pr...
Delivery of Legal Review Documents
Civil Litigation
Successful Defense in a Damages Claim Lawsuit | KRW 630 Million Claim Fully Dismissed
Client Information: Individual / Defendant
Case Details: The defendant had been in a de facto marital relationship with the deceased since approximately 2005. The deceased was ...
Full Dismissal of All Claims
Crypto Advisory
Legal Review Opinion on the Compliance of a Crypto Asset Exchange Service (Kimchi Premium Arbitrage)
Client Information: Corporate Client / Business Entity
Case Details: The client operates a business overseas and had been utilizing a cryptocurrency arbitrage strategy commonly...
Provision of a legal review opinion
Corporate / Startups Advisory
Medical Device Act Compliance Review for AI Healthcare App
Client Information: Corporate / Service Provider
Case Details: The Corporate Practice Group at Decent Law Firm conducted a comprehensive legality review for the new service of...
Provision of Legal Opinion Letter
Related News
Blogs
Recovery Strategies Every Blockchain Scam Victim Must Know
Why Blockchain Scams Cause Such Severe Damage

Unlike conventional financial fraud, blockchain scams are rooted in the abuse of technological characteristics. Due to the inherent nature of blockchain technology, once a virtual asset transaction is completed, it is technically irreversible. This non-reversible structure makes post-incident recovery extremely difficult. When overseas exchanges or foreign wallets are involved, jurisdictional issues further complicate the situation, significantly raising the difficulty of any legal or practical response.

A major problem is that many victims mistakenly perceive these cases as mere investment losses. Technical jargon such as whitepapers, smart contracts, and algorithms is often used to disguise intentional deception, causing victims to miss the critical window for effective action. In practice, early response timing plays a decisive role in determining whether meaningful recovery is possible in blockchain fraud cases. For those already experiencing anxiety and fear due to such losses, we hope this guide helps provide some clarity and direction.

Common Types of Blockchain Scams (From the Victim's Perspective)

In practice, blockchain scams tend to follow recurring patterns:
- Fraud disguised as coin investments or private sales
- Schemes posing as crypto loans, staking programs, or yield products
- Fake exchanges, wallets, or phishing sites designed to steal login credentials or assets
- Signal groups, automated trading services, or guaranteed-profit schemes

The critical issue is distinguishing between ordinary investment risk and losses caused by deceptive conduct. This distinction directly affects whether criminal fraud charges may be established and whether civil liability for damages can be pursued. If this assessment is incorrect, the entire response strategy may be fundamentally misguided.
Immediate Actions to Take Once You Recognize the Fraud

If fraud is suspected, the first step is to immediately stop any further transactions, change passwords for all related accounts, and prevent secondary damage. At the same time, evidence preservation is the highest priority. You must securely retain and back up original materials, including:
- transaction hashes and wallet addresses;
- screenshots and original files of Telegram, KakaoTalk, or email communications;
- website URLs and whitepapers provided by the counterparty;
- contracts and transfer records.

Failure at this stage can create decisive limitations in both criminal and civil proceedings later on. You may request withdrawal freezes from virtual asset service providers (exchanges), ask investigative authorities to suspend payments related to fraudulent accounts, and consider filing a report under the Act on the Prevention of Telecommunications-Based Financial Fraud and Refund of Damage Proceeds. The earlier the response, the greater the possibility of practical measures being taken. Waiting passively only reduces the available options; active intervention is essential.

How Decent Law Firm Makes a Practical Difference

Blockchain scam cases cannot be resolved simply by filing a criminal complaint. They require a coordinated strategy that integrates structural analysis of the scam, clear legal issue identification, and parallel criminal and civil actions. Decent Law Firm conducts in-depth reviews of transaction flows and technical structures to precisely assess whether fraud is legally established, and prepares structured criminal complaints and legal memoranda for submission to investigative authorities. We also develop realistic, case-specific strategies involving exchanges, overseas platforms, and potential civil recovery depending on the feasibility of identifying the perpetrators. Ultimately, blockchain scams are not merely technical issues; they are problems of legal structure and accountability.
If damage has already occurred, it is crucial not to proceed alone. Early, accurate legal intervention is the most effective way to define the proper course of action. In blockchain fraud cases, speed and precision make a tangible difference. We strongly encourage you to prepare and act without further delay.
2025-12-25
Media Coverage
South Korea Finalizes Framework for AI Basic Act: Legislative Notice for Enforcement Decree Concludes
The public notice period for the Enforcement Decree of the “Framework Act on the Advancement of Artificial Intelligence and Establishment of Trust” (commonly referred to as the 'AI Basic Act'), which establishes the institutional standards for the AI industry, concluded on December 22, 2025. With the Act set to take effect on January 22, 2026, the South Korean government is finalizing detailed regulations, prompting both domestic and international AI service providers to accelerate their compliance efforts.

The Ministry of Science and ICT (MSIT) released the draft Enforcement Decree on November 12. Compared to the initial draft unveiled in September, the final version provides more specific methods for fulfilling obligations and includes adjustments to minimize overlapping regulations with existing laws, such as the Personal Information Protection Act (PIPA). Industry experts evaluate that this decree has evolved beyond mere recommendations to become an essential standard that must be integrated from the initial planning and design stages of AI services.

Mandatory Labeling for Generative AI

A cornerstone of the Enforcement Decree is the mandatory labeling of generative AI outputs. Under Article 31 of the AI Basic Act, AI providers must notify users that content has been generated by artificial intelligence in a manner that is easily recognizable. The decree categorizes labeling methods into "Human-Perceptible" and "Machine-Readable" formats. Machine-readable formats include technical measures such as C2PA or metadata embedding. Even when opting for machine-readable methods, providers are required to inform users at least once via text or audio prompts that the content is AI-generated. However, to reduce the administrative burden on businesses, the decree waives redundant labeling if the content (such as deepfakes likely to be confused with reality) has already been labeled or disclosed in accordance with other relevant statutes.
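As a rough illustration of the "machine-readable plus human-perceptible" pairing described above, a provider might attach a structured provenance record to generated content alongside a visible notice. The field names and schema below are purely hypothetical; the decree points to standards such as C2PA rather than prescribing a format.

```python
import json
from datetime import datetime, timezone

def build_ai_disclosure(model_name: str, provider: str) -> dict:
    """Hypothetical machine-readable disclosure record for AI-generated
    content. Field names are illustrative, not mandated by the decree."""
    return {
        "ai_generated": True,
        "generator": model_name,
        "provider": provider,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

def human_notice() -> str:
    # The decree still requires at least one human-perceptible notice
    # (text or audio) even when a machine-readable label is embedded.
    return "This content was generated by artificial intelligence."

record = build_ai_disclosure("example-model-v1", "Example Co.")
print(json.dumps(record))  # e.g., embed in image metadata or a C2PA manifest
print(human_notice())
```

The key design point for compliance teams is that the two channels are cumulative, not alternatives: embedding metadata alone does not discharge the one-time text or audio notification duty.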
Regulations on "High-Impact AI"

The decree also clarifies the regulation of "High-Impact AI," defined as systems that may significantly affect human life, physical safety, or fundamental rights. It establishes a procedure where providers can apply to the MSIT to confirm whether their service falls under the High-Impact category, with the government required to respond within 60 days. Industry insiders view this confirmation process as a critical benchmark, as the classification of a service as High-Impact AI significantly increases the required level of risk management and documentation.

Integration with Existing Laws and Operational Accountability

A notable point for businesses is the coordination with other legal frameworks. The decree specifies that if a provider faithfully fulfills its obligations under the Personal Information Protection Act (PIPA), it is deemed to have satisfied the safety and reliability requirements of the AI Basic Act within the scope of personal data processing. While this is expected to ease the burden of double regulation, it does not exempt providers from duties regarding algorithm risk management or accountability for AI outputs outside the realm of personal data.

Furthermore, the decree governs the overall operational systems of AI providers. Businesses must retain documents regarding risk management, explanation protocols, and user protection measures for five years. For overseas providers, the obligation to appoint a domestic agent to protect Korean users has been formalized.

Safety Requirements for High-Compute AI Models

For large-scale AI models with cumulative training computation exceeding 10²⁶ FLOPs, the establishment of risk identification and management systems is now mandatory. While few commercial models currently meet this threshold, discussions regarding the scope of application are expected to continue as hyper-scale AI development accelerates.
A key remaining issue is the extent of liability for companies that do not develop their own models but provide generative or high-impact services using global models via APIs.

The Path Forward: "Compliance by Design"

Although the government plans to implement a grace period of approximately one year after the law takes effect, industry tension remains high. Modifying the UX or system architecture of an AI service after launch incurs significant time and cost. Jin Hyeonsu, Managing Partner at DECENT Law Firm, commented, "From the conclusion of this public notice period, it is essential for businesses to diagnose whether their services fall under the High-Impact or Generative AI categories. The core of preparation before the 2026 enforcement will be determining how to reflect labeling obligations on service screens and establishing a systematic framework for document management."

As the 2026 enforcement of the AI Basic Act approaches, the domestic AI industry faces the dual challenge of technological competition and trust management. Whether these institutional standards act as a barrier to innovation or a foundation for market stability will depend on how effectively enterprises respond.
2025-12-24
Blogs
Korea’s AI Basic Act: Key Compliance Checkpoints for AI Businesses
The public notice period for the Enforcement Decree of the “Framework Act on the Advancement of Artificial Intelligence and Establishment of Trust” (hereinafter the 'AI Basic Act') concluded on December 22, 2025. With the Act set to take effect on January 22, 2026, the South Korean government has finalized the institutional framework. For AI service providers, these regulations are not merely post-launch checklists but essential specifications that must be integrated from the initial planning and design stages.

1. Labeling Obligations for Generative AI and UX Integration

Transparency requirements under Article 31 of the AI Basic Act have been further refined through the Enforcement Decree.
- Labeling Methods: Providers must choose between a "Human-Perceptible Format" (visible text/watermarks) and a "Machine-Readable Format" (C2PA, metadata, etc.) to identify AI-generated content.
- Mandatory Notification: Even when adopting machine-readable formats, providers are obligated to notify users at least once via text or audio prompts.
- Avoidance of Double Regulation: If content (e.g., deepfakes) has already been labeled in accordance with other relevant laws, it may be exempt from redundant labeling duties under this Act.

Practical Impact: This is not a simple notification task; it directly impacts UX/UI design. Legal reviews should be conducted during the design phase to avoid the prohibitive costs of post-launch modifications.

2. High-Impact AI Confirmation and Launch Risk Management

The Act defines "High-Impact AI" as systems that significantly affect human life, physical safety, or fundamental rights (e.g., healthcare, transportation, recruitment, credit scoring).
- Confirmation Procedure: Businesses can apply to the Ministry of Science and ICT (MSIT) to confirm whether their service qualifies as "High-Impact AI." The government must respond within 60 days (30 days plus a possible 30-day extension).
- Business Risk: The government's response timeline is a critical variable for service launch schedules. If a service is retroactively classified as High-Impact AI, the provider may face the risk of redesigning the entire system architecture to meet enhanced safety standards.

3. Integration with the Personal Information Protection Act (PIPA)

The Enforcement Decree reflects efforts to resolve overlapping regulations with the existing PIPA.
- Deemed Compliance: If a business faithfully fulfills its obligations under PIPA, it is deemed to have satisfied the safety and reliability requirements of the AI Basic Act regarding the processing of personal information.
- Limitations: Note that this "deemed compliance" applies only to personal data processing. Obligations regarding algorithm transparency and accountability for AI outputs must still be addressed separately under the AI Basic Act.

4. Post-Management Accountability: 5-Year Data Retention & Domestic Agents

The Decree formalizes accountability measures to verify regulatory compliance.
- Record-Keeping: Documents including risk management plans, explanation protocols, and user protection measures must be retained for 5 years. These serve as crucial evidence during disputes or regulatory investigations.
- Domestic Agent Appointment: Overseas AI providers are now explicitly required to appoint a domestic agent in Korea. Domestic companies utilizing APIs from global Big Tech firms must also conduct supply-chain compliance checks.

5. Safety Obligations for Large-Scale AI Models (High-Compute AI)

The Decree imposes obligations to establish risk identification and management systems for developers of large-scale AI models with cumulative training computation exceeding 10²⁶ FLOPs. Domestic businesses providing services based on these hyper-scale models must also review the legal structure of liability and risk-sharing.
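For a first-pass sense of whether a model approaches the 10²⁶-FLOP threshold, engineers often use the rule-of-thumb estimate that training compute is roughly 6 × (parameter count) × (training tokens). This is a common engineering approximation, not the statute's definition of "cumulative training computation," and the model sizes below are hypothetical.

```python
def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute via the common 6*N*D rule of thumb
    (covering forward and backward passes). An estimate only; the Act's
    threshold is defined in terms of actual cumulative computation."""
    return 6 * params * tokens

THRESHOLD = 1e26  # high-compute threshold under the AI Basic Act

# Hypothetical example: a 1-trillion-parameter model on 20 trillion tokens
flops = training_flops(1e12, 20e12)
print(f"{flops:.2e} FLOPs; exceeds threshold: {flops > THRESHOLD}")
```

Under this estimate, the hypothetical trillion-parameter run lands around 1.2 × 10²⁶ FLOPs and would fall within the rule, while typical mid-size commercial models remain orders of magnitude below it, which is consistent with the observation that few current models meet the threshold.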
Conclusion: A One-Year Grace Period, but the Time to Prepare Is Now

The government intends to provide a one-year grace period following the enforcement of the Act. However, given the nature of the AI industry, reactive adjustments can lead to immense technical costs and legal exposure. Enterprises must now view "Compliance by Design" as a core element of their service, moving beyond mere technological development. We recommend a thorough diagnostic of whether your services fall under the "High-Impact" or "Generative AI" categories to ensure full readiness by the enforcement date.

DECENT Law Firm provides tailored legal counsel and solutions to navigate the evolving regulatory landscape of the AI industry. If you require a detailed review or a compliance audit, please contact us.
2025-12-24