China
Artificial Intelligence
Introduction
The AI industry in China has been growing rapidly for years, supported by strong government backing, ongoing technological innovation, and rising demand across different sectors. China’s ambition to be one of the global leaders in AI technology has been stable and consistent through the years. It has a vibrant and growing AI market focused on commercialisation of the technology, with diverse applications across various industries. Following the COVID-19 pandemic, the emergence of large language models, exemplified by ChatGPT, and their rapid gains in capability took the world by surprise. In response to such emerging AI technology and its capabilities, China is rethinking its AI strategy and absorbing the new technology according to its own logic.
On the government policy front, the State Council of China released the New Generation Artificial Intelligence Development Plan in July 2017. This plan served as a comprehensive blueprint outlining China’s objectives to become one of the global leaders in AI innovation by 2030. For global coordination on AI governance, the Chinese government published its Global AI Governance Initiative in 2023. It advocates for an aligned, open and fair approach — shared by all citizens of the world — to AI regulation. In the domestic domain of AI regulation, the Chinese government has implemented three administrative regulations to maintain governance, stability and social harmony: the 2021 regulation on recommendation algorithms, the 2022 rules for deep synthesis, and the 2023 measures on generative AI. In addition, the Chinese government is stepping up efforts to develop dedicated legislation addressing the development and regulation of AI technology and businesses.
1. Constitutional law and fundamental human rights
1.1. Domestic constitutional provisions
In China, fundamental human rights are codified in the Constitution of the People’s Republic of China (PRC). Article 33 of the PRC Constitution provides that all PRC citizens are equal under the law, and the country respects and protects human rights. The right to privacy, an integral part of personal dignity, is protected by Article 38 of the PRC Constitution.
The PRC Civil Code devotes an independent chapter, titled “Protection of Privacy and Personal Information”, to asserting that natural persons’ right to privacy and personal information is protected, and it is prohibited for any organisation or individual to cause harm to other individuals’ rights of privacy and personal information.
1.2. Human rights decisions and conventions
AI is creating new dimensions of human rights. For example, discussions regarding “digital human rights” have become more intensive, even in the traditional areas of the right to privacy, freedom, and non-discrimination. In addition to the PRC Constitution, China has released national-level policy documents signalling its commitment to protect fundamental human rights in the development and deployment of AI in various sectors.
China’s Code of Ethics for New Generation Artificial Intelligence requires the integration of ethical considerations into the administration, research and development, supply and use of artificial intelligence. AI-related activities should follow this code of ethics to improve human welfare, promote fairness and equality, protect privacy and ensure reliability and controllability in AI.
In addition, the Interim Measures on Generative AI Services provide, in Article 4, that providers of generative AI services must adopt effective measures to prevent discrimination on the basis of ethnicity, belief, nationality, region, gender, age, occupation, and health.
The above-mentioned legislation establishes basic principles of prioritising human protection and aims to prevent harm caused to humans by AI technologies and their application.
2. Intellectual property
2.1. Patents
AI-related inventions are patentable in China subject to satisfaction of certain conditions, including those set out below.
On 1 February 2020, the revised Patent Examination Guidelines (2020) (“2020 Guidelines”) came into effect, adding rules in Chapter 9, Part 2 for determining the patentability of applications concerning computer programs that include algorithms. The 2020 Guidelines explicitly set out rules for reviewing special and cutting-edge patent applications concerning artificial intelligence, specialised applications offering online services in different industries, and big data closely related to AI technologies.
In 2023, the Patent Examination Guidelines (2023) were released and sought to strengthen protection of innovations arising from new industries and business models, while also responding to calls from innovation stakeholders to improve patent application review rules and standards, particularly for those applications in the fields of AI and big data. For example, the 2023 Guidelines clarified that AI systems were not capable of being recognised as inventors; instead, inventors could only be a real person (or persons). Building on the existing examination criteria, the 2023 Guidelines introduced refined standards to evaluate patent applications involving AI and big data, added more relevant examination examples, and made efforts to streamline the examination process.
2.2. Copyright
Output: Copyrightability of AI-Generated Contents (AIGC)
The emergence of AIGC has sparked considerable debate regarding the copyrightability of the contents created by AI under the existing legal framework in China. According to current law and judicial practice, AI-generated content may be copyrightable if such content meets the requirements for copyrighted works, but there are debates as to who should be the author of such content in practice.
According to the PRC Copyright Law, “works” are defined as original intellectual achievements in the fields of literature, art, and science that can be expressed in a tangible form. In the absence of more detailed legislation, courts across the country are striving to answer the question of AIGC copyrightability in emerging disputes.
In the landmark case of Beijing Film Law Firm v. Baidu (case reference number: (2019) Beijing 73 civil final No.2030 ((2019) 京73民终2030号)), the Beijing Internet Court initially established the “human authorship” test, which serves as a pivotal criterion in determining the “originality” of AIGC output.
In Shenzhen Tencent v. Shanghai Yingxun (case reference number: (2019) Guangdong 0305 civil trial No.14010 ((2019) 粤0305民初14010号)), the Shenzhen Nanshan District People’s Court held that the choice and setting of the parameters from the writer or editor shall constitute “human authorship” and, although the work in dispute is produced with the assistance of software, that work is subject to copyright protection in China.
In the case of Li v. Liu (case reference number: (2023) Beijing 0491 civil trial No.11279 ((2023) 京0491民初11279号)), China’s first case concerning the copyrightability of AI-generated images, the Beijing Internet Court ruled that the AI-generated picture was copyrightable because the plaintiff had exerted “aesthetic choices and personal judgment” in the entire generation process.
Therefore, AIGC output may be eligible for copyright protection in China, if it meets the statutory criteria, particularly the presence of substantial “human authorship”. However, the standard of sufficient “human authorship” remains in development and is in practice subject to the courts’ determination on a case-by-case basis.
Regarding the copyright ownership of AIGC output, the prevailing consensus in the current practice is that AI cannot own copyright because AI is not human. In the case of Li v. Liu, the court ruled that the plaintiff, who used AI to create the image, was the author because the image resulted from the plaintiff’s intellectual contribution and his personal expressions. In addition, the court further clarified that neither the developers nor the providers of AI tools or services should be considered as the author.
Input: Copyright Infringement by AIGC
On 10 July 2023, the Cyberspace Administration of China (CAC) released dedicated regulations on generative AI, namely the Interim Measures on Generative AI Services (“the Measures”), which mandate respect for and protection of intellectual property rights during the deployment of generative AI services and prohibit service providers from infringing the intellectual property rights of others.
On 8 February 2024, the Guangzhou Internet Court issued the first judgment in China concerning copyright infringement by AI-generated content. In this case, the defendant, a provider of a text-to-image AIGC tool, was found liable for infringing the copyright in the famous Ultraman IP. The court emphasised that a generative AI service provider owes a “reasonable duty of care” regarding the protection of others’ intellectual property rights in accordance with the Measures and other applicable laws in China. In fulfilling such “reasonable duty of care”, for example, AIGC service providers are required to establish reporting mechanisms, alert users to potential risks, and provide prominent labelling, among other actions.
At the same time, the court acknowledged that the generative AI industry in China is still at its infancy. As such, it is important to strike a balance between copyright protection and industry growth, and excessive burdens should not be placed on the service providers. In practice, such a balance will have to be subject to the court’s discretion on a case-by-case basis.
2.3. Trade secrets and confidentiality
The Interim Measures on Generative AI Services stipulate that the provision and utilisation of generative AI services must maintain the confidentiality of trade secrets. In practice, as a general rule of thumb, users are discouraged from using AI products and services to process confidential information unless strictly necessary.
2.4. Notable cases
The following landmark cases are introduced above in Sections 2.1 and 2.2:
- Beijing Film Law Firm v. Baidu: the first AIGC copyright dispute in the PRC.
- Shenzhen Tencent v. Shanghai Yingxun: the first ruling that an AI-generated article constitutes a “work” within the legal framework of PRC.
- Li v. Liu: the inaugural AIGC image copyright case in the PRC.
3. Data
3.1. Domestic data law treatment
Legitimacy of data collection and processing
Data collection plays an important role throughout the entire lifecycle of AI technologies, particularly in data training and AI service provision. Under the current regulatory framework on data, legality stands as the cornerstone principle applicable to data processors in China. China’s sector-specific regulations pertaining to AI impose analogous requirements on entities offering distinct AI technologies. For instance, Article 7 of the Interim Measures on Generative AI Services explicitly requires that training data and foundation models come from legitimate sources.
Therefore, companies engaged in training and deploying AI models should ensure the legitimacy of their data sources. Use of inappropriate or unlawful data can result in significant consequences, such as IP infringement, harm to personal information, threat to national security, etc.
Data quality
Data quality is another critical aspect that significantly impacts the effectiveness, fairness and reliability of AI technologies, in particular generative AI. It encompasses the authenticity, accuracy, objectivity, and diversity of the data. High-quality data is characterised by its freedom from errors, inconsistencies, and biases, allowing AI algorithms to make informed and reliable decisions.
As such, AI developers must prioritise the quality of their data sets, which is essential for accurate and robust AI models. Although there is not yet a specific legal requirement on training data quality, there are national and industry standards detailing its principles and requirements.
Data labelling
Data labelling is a crucial step in the data preprocessing phase. Proper data labelling ensures that AI models can effectively understand and interpret the input data, leading to more accurate and reliable outcomes.
Effective data labelling requires careful consideration of the specific task and objectives of the relevant AI system, as well as assessment of the quality and consistency of the labels assigned to the data. According to Article 8 of the Interim Measures on Generative AI Services, companies should establish clear, specific, standardised and effective labelling procedures, conduct quality evaluation on the data labelling and verify the accuracy of labelled content by sampling, train annotators to improve their awareness of legal compliance, and supervise annotators in performing such data labelling activities.
3.2. General Data Protection Regulation
At the level of national laws, data is subject to the following fundamental statutes: the PRC Cyber Security Law (CSL), the PRC Data Security Law (DSL), and the PRC Personal Information Protection Law (PIPL). The CSL, DSL and PIPL are deemed to be the three primary statutes governing cybersecurity and data processing activities, and are reinforced by a number of regulations, rules and national standards.
3.3. Open data and data sharing
Starting from 2023, the PRC has cautiously and gradually taken steps to test open data and data-sharing practices in domestic and cross-border scenarios. For example, Article 6 of the Interim Measures on Generative AI Services calls for the development of generative AI infrastructure and public training data resources platforms, to advance orderly and graded open data. With the release of the “Data Twenty Measures”, cities such as Shanghai and Beijing have established data exchange platforms to explore and encourage the sharing and transaction of data.
However, industry regulations and local policies on AI open data vary, and there is a lack of specific, clear top-down guidance or a practical implementation framework.
3.4. Biometric data: voice data and facial recognition data
Under the PIPL, biometric data is categorised as sensitive personal information, subject to more stringent protection. The PIPL provides that processors may process sensitive personal information only if: (1) there is a specific purpose and a clear need to do so; and (2) strict protection measures are in place. The PIPL also underscores the protection of individuals’ rights and imposes strict requirements for obtaining consent, ensuring data security, and prohibiting unauthorised access or misuse of biometric information.
Additionally, specific guidelines and standards may apply to industries or sectors utilising biometric technologies. Notably, Article 14 of the Provisions on the Administration of Deep Synthesis of Internet Information Services requires providers of deep synthesis services to ensure adequate transparency and obtain separate consent from individuals when collecting and editing facial data, voice data, etc.
As reported in public news, on 23 April 2024, the Beijing Internet Court delivered the first instance ruling in the case of infringement of personal rights related to AI-generated voices. The court determined that the scope of protection afforded to the rights and interests of natural persons’ voices can be extended to AI-generated voices, provided they are identifiable.
4. Bias and discrimination
4.1. Domestic anti-discrimination and equality legislation treatment
A growing body of evidence suggests that AI technology may generate biases and discrimination against individuals in certain application scenarios. In online trading, some platform companies — especially online retailers and travel service providers — collect and analyse customers’ consumption habits and preferences, and use algorithms to apply differentiated pricing to different groups of consumers (a practice known in China as “big data ripping-off”). These tactics usually result in price discrimination against the relevant consumers. In human resource management, AI can be used for recruitment and performance evaluation, significantly affecting the rights and interests of candidates and employees.
Aware of the harm to society and consumers’ interests caused by algorithmic bias and discrimination, the Chinese authorities are ramping up efforts to rectify such issues. Article 24 of the PIPL provides that data subjects have the right to refuse a decision made solely through automated decision-making. Similar rules have been set by other laws and regulations on automated decision-making, where transparency, controllability and fairness requirements are explicitly stipulated. For example, the Interim Provisions on the Administration of Online Operation of Tourism Services prohibit travel service providers from setting unfair trade conditions by abusing big data analysis or other technical means.
5. Cybersecurity and resilience
5.1. Domestic technology infrastructure requirements
In 2017, the CSL imposed comprehensive obligations regarding cybersecurity protection on network operators. Public AI service providers, such as generative AI service providers, are network operators under the CSL and are subject to such obligations. Key examples of such obligations include:
- Ensuring security of network operation. AI providers must fulfil their security obligations in accordance with a Multi-Level Protection Scheme to safeguard their networks from interference, damage or unauthorised access, and to prevent data leakage or loss.
- Ensuring security of online information. This mainly involves ensuring online content complies with Chinese laws and regulations as well as maintaining the security of users’ personal information.
To fulfil the above obligations, AI service providers must adopt adequate administrative and technical measures to contain risks and respond to cybersecurity events. The requirements of such administrative and technical measures are spread across different laws, national standards and technical documents. AI service providers need to have good advisers to identify, consolidate and fully comply with such requirements.
6. Trade, anti-trust and competition
6.1. AI related anti-competitive behaviour
In recent years, new types of AI-related anti-competitive behaviour (such as algorithmic collusion, dark patterns, predatory pricing, and data manipulation) have come to the attention of Chinese regulators. With the use of AI technology, undertakings competing in the same or similar markets may use data, algorithms, technologies, and platform rules to conclude monopoly agreements or engage in abusive behaviour, which undermines the competitive order of the market and infringes on the rights of consumers. Compared with traditional anti-competitive behaviour, AI-related behaviour is more complex, covert, unpredictable and challenging, requiring regulatory agencies and legal systems to innovate and improve continuously.
6.2. Domestic regulation
China’s antitrust and competition authority, the State Administration for Market Regulation (SAMR), promulgated a series of rules and measures to control the harm to the competitive order and consumer benefit arising from AI technology. For example, the newly amended PRC Anti-Monopoly Law emphasises antitrust violations in the digital sectors by virtue of technological means such as data, algorithms, and platform rules. The Antitrust Guidelines for the Platform Economy state that concerted practice may result from coordination through data, algorithms, platform rules, or other means, without an agreement necessarily being entered into.
In regulatory practice, the SAMR and its local branches are taking action to deal with AI-related competition issues. For example, under Administrative Penalty Decision (Guo Shi Jian Chu [2021] No. 28) of 10 April 2021, the SAMR fined Alibaba Group for abuse of a dominant market position by using data, algorithms and other technologies to restrict the trading parties of online merchants for online platform services within China. The fine totalled CNY 18.228 billion, which constituted 4% of Alibaba’s domestic sales revenue for 2019.
7. Domestic legislative developments
7.1. Proposed and/or enacted AI legislation
Although AI as an innovative technology is highly regarded by the Chinese government, this does not mean that a promising technology is immune from regulation in its infancy. At present, however, the regulation of typical AI applications and key AI issues relies primarily on sector-specific legislation. Below is a non-exhaustive list of significant regulations relating to AI that have come into effect.
- Administrative Provisions on Algorithmic Recommendations of Internet Information Services, issued by the CAC, the Ministry of Industry and Information Technology (MIIT), the Ministry of Public Security (MPS) and the SAMR with effect from 1 March 2022. The provisions are applicable to algorithm recommendation service providers, i.e. enterprises that provide internet information services to users by applying algorithm technologies such as generation-synthesis, personalised push, sorting and selection, retrieval and filtering, and scheduling and decision-making.
- Provisions on the Administration of Deep Synthesis of Internet Information Services, issued by the CAC, MIIT and MPS with effect from 10 January 2023. The provisions primarily focus on the governance of deep synthesis and emphasise that deep synthesis services may not be utilised for illegal activities (i.e. where prohibited by laws and regulations).
- Interim Measures on Generative AI Services, issued by the CAC and six other governmental agencies with effect from 15 August 2023. The interim measures require generative AI service providers to assume responsibilities as producers of online content and processors of users’ personal information, and contain provisions regarding content censorship and management, training data processing activities, user rights protection, security assessment, etc.
It is worth noting that, at a higher level, the Artificial Intelligence Law has been included in the State Council’s 2023 legislative work plan, although no progress has been made so far. In the private sector, a few think-tank units and academics in China have drafted and released template proposals such as the “Model Law on Artificial Intelligence (Expert Proposal Draft, Version 2.0)” and the “Artificial Intelligence Law (Scholar Proposal Draft)”. Whilst not binding, these drafts reflect their drafters’ academic insights into AI governance, and could serve as valuable references and provide direction for future AI legislation in China.
7.2. Proposed and/or implemented Government strategy
AI technology is highly regarded as a promising technology in China, and its development attracts special attention and encouragement from the Chinese government. This began in 2017 with a development plan for artificial intelligence introduced by the State Council, which states that AI has become a new focus of international competition and a new engine of economic development, that AI will become a national strategy of China, and that it will be strongly encouraged by the Chinese government.
As a follow-up to the development plan, the MIIT released the “Three-Year Action Plan to Promote the Development of a New Generation of Artificial Intelligence Industry (2018–2020)”, the Ministry of Science and Technology released the notice on “Supporting the Construction of New-Generation Artificial Intelligence Demonstration Application Scenarios”, and the Supreme People’s Court issued the opinions on “Regulating and Strengthening the Judicial Application of Artificial Intelligence”, among others. Meanwhile, China’s national ministries, commissions, provincial governments, and provincial capital cities have issued hundreds of policies and documents that support AI technology development and provide frameworks for implementing business applications of the technology.
China, with promotion from both the central and the local governments, is attempting to incorporate AI into its development strategies to enhance and expand the country’s domestic regulation, global competitiveness and influence.
8. Frequently asked questions
8.1. What are the national authorities for AI regulation in China?
In China, there is no unified regulatory authority responsible for the specialised regulation of AI. It is common practice for the CAC and other departments to cooperate in rulemaking in this field. Most important AI-related rules and policies are jointly created and issued by multiple regulatory authorities, including the CAC, the MIIT, the Ministry of Science and Technology and the SAMR, among others.
The general regulatory framework for AI is primarily led by the CAC, with other regulatory departments in different sectors establishing rules and enforcing the laws within their respective areas. The following is a brief overview of their respective roles in AI regulation:
- CAC: the CAC is responsible for the overall planning, coordinating and supervising of cybersecurity, data security, personal information protection, algorithm governance and content censorship, and has issued a number of regulations regarding AI governance.
- MIIT: the Ministry of Industry and Information Technology oversees the telecommunication and information technology industry in China and is the regulator of AI technology development and service solutions/applications. It is also responsible for promoting the development and growth of the AI industry and has released several positive industrial policies and action plans boosting the AI industry.
- The Ministry of Science and Technology: in addition to promoting the innovation and demonstration of AI technology, the Ministry of Science and Technology in recent years has focused on preventing the ethical risks of science and technology associated with AI technology and its applications. In September 2023, the Ministry of Science and Technology took the lead to issue the Measures for Ethical Review of Science and Technology, which provide ethical review requirements for certain AI technology activities.
- SAMR: as China’s market regulator, the State Administration for Market Regulation has been very active in maintaining market order and consumer protection. The use of algorithms and other automated decision-making measures to mislead consumers or infringe their legal rights, or to engage in unfair competition or discrimination, is subject to regulation by the SAMR.
8.2. What are the most significant concerns for Chinese regulators on generative AI?
Generative AI, one of the most typical and popular applications of AI technology in recent years, has brought significant risks alongside its explosive development. Currently, the primary concerns of Chinese regulators about generative AI include, but are not limited to, the following three aspects:
- Data security risks. Training a large model involves vast amounts of data. The data used as training material may be difficult to verify as true or false and may implicate the complex rights and interests of various parties. For example, the use of personal information may invade data subjects’ privacy and cause them harm, while the use of enterprise data may infringe stakeholders’ trade secrets and copyrights. Additionally, users of generative AI services may face security risks from data leakage and outbound data transfer.
- Algorithmic risk. Algorithms are often considered to have black-box attributes. They can be opaque, biased and discriminatory, and can deeply influence or dominate actors’ choices, which poses unpredictable risks to the world. Preventing algorithmic risk is therefore a common challenge for all governments around the world, including Chinese regulators. The Chinese authorities have implemented various measures to control the risk, including security assessments, algorithmic filing, and ethical reviews.
- Risks associated with AI-generated content. Content generated by generative AI services may contain factual errors or false information. Such deep-fake information could be easily used for illegal activities such as defamation, creating and spreading rumours, disrupting network and social order, and committing fraud. Chinese regulators attach great importance to this, because this kind of risk not only concerns individual rights, but also involves social stability and even national/political security. Therefore, Chinese regulators require service providers to improve the accuracy and reliability of generated content and to prominently label or watermark the generated content.
8.3. What are the key compliance considerations and strategies for starting an AI-related business in China?
This depends on what role a company plays in the AI industry chain and what specific services (AI business services or technical support) the company provides. Taking generative AI service providers as an example, compliance considerations regarding content censorship, algorithmic governance and data protection are the top priorities that providers should focus on. In addition, to provide generative AI services in China, providers must identify and obtain applicable qualifications and licences in accordance with relevant laws and regulations, such as applying for applicable value-added telecoms licence(s) and going through the algorithmic filing procedures. Lastly, it is worth noting that the regulatory changes in China are almost as dynamic and fluid as the changes in AI.
To navigate the complex and multi-faceted Chinese legal and regulatory requirements, it is imperative for generative AI service providers to adopt proactive compliance strategies, including but not limited to:
- ensuring product compliance by design and by default;
- establishing solid systems and internal procedures on data protection, user protection, and content management, etc.;
- fulfilling the necessary regulatory processes (such as security assessment, algorithmic filing, ethical review, where applicable) and obtaining applicable business qualifications/licences; and
- keeping close watch on the dynamic legislative efforts, administrative actions and judicial determination, and making the compliance system as resilient and flexible as possible to accommodate the potential changes.
The authors would like to thank Vincent Wang for editorial work on this chapter.