ChinAI #262: Expert Draft AI Law Changelog
Saad Siddiqui tracks the evolution of a key expert draft AI law
Greetings from a world where…
the chess candidates tournament consumed my weekend, so I’m grateful to Saad Siddiqui for contributing this week’s analysis
…***Thanks folks for getting the paid subscriptions back up a bit in the past few weeks; please consider subscribing here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors). As always, the searchable archive of all past issues is here.
Feature Analysis: CASS Draft AI Law v1.0-to-1.1-to-2.0
***Saad Siddiqui is an AI governance researcher focused on identifying similarities and differences between Chinese and Western approaches to AI governance and safety. He was previously a Winter Fellow at the Centre for the Governance of AI in Oxford. What follows is Saad’s analysis (lightly edited by me).
Context: In August 2023, a team of scholars from the Chinese Academy of Social Sciences (CASS) published a draft AI law. DigiChina covered this, inviting input from various experts and featuring a guest translation by Jason Zhou and Kwan Yee Ng of Concordia AI. At the time, the experts noted that the draft AI law was the start of a longer process of policymaking and regulation in China, as expert drafts of laws tend to have varying degrees of impact on the laws that are eventually passed. The CASS scholars who wrote the draft indicated that they would update it, and since August they have done so twice: a minor revision (v1.1) in September 2023, and a more substantial update (v2.0) in April 2024 (available in Mandarin and English).
Summary of Draft AI Law 2.0 and Version Changes
The law itself calls for the creation of a China Administration of AI (CAAI) as the authority that leads the regulation of AI, with responsibility for a categorized oversight system for AI developers and providers. The Chinese version of the draft law suggests that the CAAI would be housed under a high-level leading small group on AI.
Under this categorized oversight system, the draft law sets up a new governance tool called the Negative List. This creates tiered obligations: products and services deemed to be on the negative list are subject to a more stringent licensing oversight system. Their developers and providers would also have to take steps to ensure that AI remains controllable (e.g., by ensuring that humans can always intervene). Developers and providers of services that fall outside the negative list, by contrast, are subject to registry oversight, which largely resembles the current algorithm registry system that generative AI providers must abide by. This approach is similar to the end-use-specific risk tiering in the EU AI Act.
Across the board, model developers and providers face a wide range of obligations, including pre-deployment safety/security assessments, audits every two years, security vulnerability management, and incident reporting and management. Foundation model developers have additional obligations on top of these, including ensuring adequate compute for safety and establishing an external oversight board that publishes an annual corporate social responsibility report. The law also proposes a system of shared tort liability, with exemptions possible for developers and providers that implement all required measures.
The law also builds in significant flexibility for lighter-touch implementation. For example, it allows the China Administration of AI (CAAI) to exempt providers or developers from punishment, despite a violation, if the provider or developer is deemed to have adequate safety and governance measures in place. This light-touch approach is especially clear for open-source models, which can be fully exempt or face diminished liability if they have the right governance measures in place (what exactly this entails is unclear).
Version 2 also contains a clear update with respect to frontier AI safety and governance, namely a provision for tax credits for AI safety efforts (at least 30% of investment in safety would receive a tax credit). It is also worth noting that some other updates to the law, in addition to the tiered obligation system, suggest the Brussels effect (the extraterritorial influence of EU regulations) at work to some degree. Two of the exemptions added in v2 (for open-source and military AI) mirror similar provisions in the recently finalized EU AI Act.
Finally, the law would also have some impact on domestic governance arrangements, by shifting the aim and scope of previous regulations to some degree. First, it may move the authority for the algorithm registry from the Cyberspace Administration of China to the newly created CAAI. Second, it raises the question of what distinct role the CAAI's Ethics Committee would play that isn't already covered by the Ministry of Science and Technology's AI Ethics Committee.
See Saad’s full detailed changelog for the various versions of the CASS Draft AI Law: CASS Draft Law v1.0-1.1-2
ChinAI Links (Four to Forward)
Must-attend: 16th Annual GW China Conference
This Friday (4/26), the GW China Conference has a great line-up of panels on U.S.-China economic relations and China's economic development. I'll be chairing the 11:30-12:45PM panel on technological competition and decoupling, which features three scholars who I think are doing the most stimulating work in this area: Yeling Tan (Oxford), Roselyn Hsueh (Temple), and Ling Chen (Johns Hopkins). Pre-register at the link above and stay for free lunch after!
Should-read: China keeps generative AI on simmer
Excellent commentary by Wendy Chang of MERICS on China’s generative AI ecosystem, which highlights a “more commercial and application-specific approach.”
Should-read: The Global AI Talent Tracker 2.0
Way back when we were still youngsters, Matt Sheehan and I worked together on the first iteration of MacroPolo’s Global AI Talent Tracker. In the 2.0 version, Ruihan Huang, AJ Cortese, Graham Chamness and a team of RAs have made some great updates to show how the AI talent landscape has changed in the past three years.
Should-read: Future of Humanity Institute 2005-2024: Final Report
I want to use this space to pour one out for FHI, home to the Governance of AI Program where I got my start as a researcher and wrote my first publication: “a comprehensive report on China’s AI capabilities, debunking several widespread misconceptions about US-China competition.” I met some incredible people and still miss the random conversations in the kitchen over seaweed snacks.
From Anders Sandberg’s epitaph, this tidbit about the university bureaucracy’s suffocating pace was especially revealing and lamentable:
One of our administrators developed a joke measurement unit, “the Oxford”. 1 Oxford is the amount of work it takes to read and write 308 emails. This is the actual administrative effort it took for FHI to have a small grant disbursed into its account within the Philosophy Faculty so that we could start using it - after both the funder and the University had already approved the grant.
Thank you for reading and engaging.
These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is an Assistant Professor of Political Science at George Washington University.
Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).
Also! Listen to narrations of the ChinAI Newsletter in podcast format here.
Any suggestions or feedback? Let me know at chinainewsletter@gmail.com or on Twitter at @jjding99
China's AI legislation has unique characteristics different from legislation in other fields, and thus, AI legislation in China must follow three basic principles:
Firstly, it must be highly adaptable to uncertainty. The disruptive and rapidly iterative nature of AI technology means that future technological developments and potential risks are highly uncertain. This conflicts with the relative stability and lag of traditional legislation. Therefore, China's AI legislation needs to accommodate and adapt to this high level of uncertainty in AI development.
Secondly, it must clearly respond to local needs. Given the unique international status of AI technology and industry, China's AI legislation must express localized demands that arise from its specific international environment, distinguishing it significantly from technology-leading countries like the United States and governance-focused Europe.
Thirdly, it must improve the AI legal system to effectively respond to international competition.