The regulation of artificial intelligence (AI) technologies in Asia Pacific (APAC) is evolving rapidly, with at least 16 jurisdictions having some form of AI guidance or regulation in place. Some countries have implemented AI-specific laws or regulations, while others have adopted a “soft” law approach that relies on non-binding principles and standards. While regulatory approaches differ across the region, they share common policy drivers, including responsible use, data security, end-user protection, and human autonomy.
Below we provide updates on recent AI regulatory developments in the Asia-Pacific region. More developments are expected on the horizon, especially with the entry into force of the European Union Artificial Intelligence Act (EU AI Act) on August 1, 2024. The EU AI Act is the world's first comprehensive “hard” law on AI, governing the entire lifecycle of the production, distribution, and use of AI systems. The EU AI Act has broad extraterritorial reach, extending to (1) anyone who places an AI system on the market in the EU, and (2) providers or deployers of AI systems (wherever located) whose output is used in the EU. As a result, the EU AI Act is expected to influence the direction and nature of the regulations being developed across various Asia-Pacific countries.
Businesses are seeing immediate impacts from these AI-related developments. First, APAC businesses need to develop AI governance frameworks to assess the scope of their use of AI technologies and the impacts of that use, and to ensure they comply with applicable legal and regulatory requirements. Additionally, the prevalence of AI systems across almost every industry has cross-functional implications for business decisions such as mergers and acquisitions, investment transactions, joint ventures, and the sourcing or outsourcing of critical services and supplies. This increases the need to carefully consider AI governance and compliance as part of transaction management, including by conducting risk assessments and due diligence and ensuring that AI-related issues (and associated risks) are appropriately addressed in relevant business agreements.
Recent AI Regulatory Developments in the Asia-Pacific Region
India was expected to include AI regulations as part of the proposed Digital India Bill, but a draft of the bill has yet to be released. However, a new AI advisory group has reportedly been formed, tasked with (1) developing a framework to foster AI innovation (including India-specific guidelines for the development of trustworthy, fair, and inclusive AI) and (2) minimizing the misuse of AI. In March 2024, the government also released recommendations on due diligence by intermediaries/platforms. The recommendations advise platforms and intermediaries to ensure that no illegal content is hosted or published using AI software or algorithms, and require platform providers to identify content generated by AI and explicitly inform users about possible errors in such output.

Indonesia's Deputy Minister of Communications and Information Technology announced in March 2024 that preparations for AI regulations are underway, targeted for implementation by the end of 2024. The regulations are expected to focus on sanctions for the misuse of AI technology, including violations of existing laws on personal information protection, copyright, and electronic information.

Japan is preparing an AI law known as the Basic Act for the Promotion of Responsible AI. The government aims to finalize and propose the bill by the end of 2024. The bill is likely to cover only so-called “specific AI foundation models” that have significant societal impacts, and addresses aspects such as accuracy and reliability (e.g., safety verification and testing), cybersecurity of AI models and systems, and disclosure of AI capabilities and limitations to users. The framework also contemplates working with the private sector to set standards for implementing these measures.

Malaysia is developing an AI Code of Ethics for users, policymakers, and developers of AI-based technologies.
The code outlines seven principles of responsible AI, focusing primarily on transparency of AI algorithms, preventing bias and discrimination by including diverse datasets during training, and evaluating automated decisions to identify and correct harmful outcomes. There are currently no indications that the government is considering AI-specific legislation.

Singapore has similarly not announced plans to develop AI-specific legislation. However, in May 2024 the government introduced the Model AI Governance Framework for Generative AI, which lays out best-practice principles for how companies across the AI supply chain can responsibly develop, deploy, and use AI technologies. Relatedly, the government-backed AI Verify Foundation released AI Verify, a testing toolkit that developers and owners can use to assess and benchmark AI systems against internationally recognized principles of AI governance. The government also recently revealed plans to introduce safety guidelines for developers and adopters of generative AI models. These guidelines aim to advance end-user rights by promoting transparency regarding the behavior of AI applications (e.g., data used, results of tests, limitations of AI models) and to outline safety and reliability attributes that must be tested before deployment.

South Korea's AI law, the Act on the Framework for Promoting AI Industry and Establishing Trustworthy AI, has passed the committee voting stage and is currently being debated in the National Assembly. Following the principle of “permit first, regulate later,” it aims to promote the growth of the country's AI industry, but still imposes strict notification requirements for certain “high-risk AI” (systems that have significant impacts on public health and fundamental rights).

Taiwan has published a draft AI law entitled the “Basic Act on Artificial Intelligence.” The draft bill will be open for public comment until September 13, 2024.
The bill outlines a set of principles for research on AI development and applications and proposes certain mandatory standards aimed at protecting user privacy and security, including specific AI security standards, disclosure requirements, and accountability frameworks.

Thailand is developing AI legislation: a draft bill on the promotion and support of artificial intelligence (which would create an AI regulatory sandbox) and a draft royal decree on business operations using artificial intelligence systems (which would take a risk-based approach, establishing differentiated obligations and penalties for each category of AI used by companies). Similar to the approach adopted in the EU AI Act, the draft royal decree classifies AI systems into three categories: unacceptable risk, high risk, and limited risk. However, progress on both bills is unclear, with no significant developments reported in 2024.

Vietnam's AI bill, the Digital Technology Industry Law, is open for public comment until September 2, 2024. The bill sets out policies aimed at developing the country's digital technology industry, including government financial support for companies that participate in or host programs aimed at improving research and development capabilities, as well as a regulatory sandbox framework. It also outlines prohibited AI practices, such as using AI to classify individuals based on biometric data or social behavior. If passed, the bill would apply to companies operating in the digital technology industry (including information technology, AI systems, and big data companies).
Check out Sidley's AI Monitor, your central resource for AI content, including Sidley thought leadership, the latest legislation and regulations, and access to our AI lawyers. If you'd like to receive AI-related news from Sidley, click here to sign up for our AI mailing list.
This post is current as of the posting date shown above, and Sidley Austin LLP undertakes no obligation to update this post or to post any subsequent developments relating to this post.