ChinAI #156: AI Risk Research in China

Have you lit the lamp, swept the house, and searched carefully?

Greetings from a world where…

this issue almost didn’t happen because season 3 of Sex Education just dropped

…As always, the searchable archive of all past issues is here. Please please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors).

Feature Translation: AI Risk Research: A Field to be Explored Urgently

A common view holds that Chinese debates about AI safety are less advanced than Western discussions on managing the risks of AI. This perception of underdeveloped thinking about AI risks includes several assumptions, including that Chinese conversations take these risks less seriously and that these conversations started much later than in other countries.

Let’s put some quotes on the table because I don’t find subtweets productive:

  • Here’s a former VentureBeat reporter interviewing Michael Kanaan, previous chairperson of AI for the U.S. Air Force. The reporter asks: “So when you look at China’s persecution of Uighurs using facial recognition, doing the morally right thing is not the point. I suppose that could mean that because China doesn’t have these ethical qualms, they probably aren’t slowing down and building ethical AI, which is to say, it’s possible they’re being very careless with the efficacy of their AI. And so how can they expect to export that AI and beat the U.S. and beat Russia and beat the EU when they may not have AI that actually works very well?”

  • Graham Allison, Harvard Professor, and Eric Schmidt, Chair of the National Security Commission on AI: “China’s government, laws and regulations, public attitudes about privacy, and thick cooperation between companies and their government are all green lights for its advance of AI. In the United States and Europe, yellow and red lights abound.”

  • Hong Kong University of Science and Technology professor Pascale Fung on East Asian countries lagging behind in AI ethics: “Our prime concern is to look at the ethical adoption of AI in terms of setting up standards. Do we also need regulations; if so, what? This conversation has not happened in this region yet. . .There is no transparency about dataflow. And there is no certification of AI safety.”

I’m not here to convince you that this view is wrong. There’s validity to some parts of these quotes. My goal is much more modest. I hope to convince you that there’s a very simple question worth asking, directed not just at people who make sweeping statements about Chinese views on AI safety but also to myself: Have you really looked?

Like the woman who lost her coin in Luke 15, have you lit a lamp, swept the house, and searched carefully?

Context: This week’s feature translation is an April 2020 article (link to original in Mandarin; requires CNKI subscription to access, but feel free to message me for the pdf) published in the Journal of Engineering Studies by Wang Yanyu, an Associate Professor at the Institute for the History of Natural Sciences, Chinese Academy of Sciences. The article advocates for a specialized research field for AI risk and governance. Founded in 2004, the Journal of Engineering Studies tackles engineering ethics from an interdisciplinary perspective, analyzing the complex relationship between engineering and societal issues from multiple frames, including history, law, management, design studies, and economics. For more on engineering ethics studies in China, see Qin Zhu’s article which “maps approaches to engineering ethics . . .in a nation that now graduates more engineers than any other in the world.”

Key Takeaways:

  • According to Wang, Chinese think tanks and academia are very concerned about AI risks. To support this claim, he reviews some recent papers and reports. Of the 11 Chinese-language cited texts (sources in the full translation), only one has been translated and analyzed in English-language coverage — the CAICT white paper on AI security. I follow this topic pretty closely, but I had not read the other ten publications, all written by different authors. I’d wager the same applies to others in this field.

  • When outlining different risks associated with AI, Wang highlights a few China-specific cases that I had not heard of before. For example, the section on cybersecurity risks mentions Tencent’s Zhuque Lab’s demonstration of the risks of backdoor attacks against deep neural networks. Regarding societal risks, Wang relates the tale of a Chinese platform called “快啊,” which apparently leveraged AI to crack into personal accounts, stealing nearly 1 billion batches of citizens’ personal information. These cases were new to me, and I couldn’t find any relevant English-language coverage.

  • Among those who think a lot about AI and existential risks, I think there’s a background understanding that these conversations started in the West, so there’s a need to transfer that knowledge to China. This article prompted me to question that assumption. In a section about the Strong AI problem (强人工智能议题), Wang cites research by Liu Yidong, also at the Institute for the History of Natural Sciences. He states that Liu “was the first to reveal” that humanity is facing serious challenges on two lines: scientific and technological risks have intensified, but prevention and control measures have failed. Liu proposed the concept of “destruction-causing knowledge [致毁知识]” in 1999, and he wrote about the possibility that AI could threaten human security without fully surpassing human intelligence in 2002.

  • Wang ends the article with a call to control AI through measures stronger than vague ethics guidelines. He emphasizes the need for red lines on certain R&D activities. His conclusion: “We need to take ‘AI risk and its governance issues’ as an independent research object and research field, actively build up ‘AI risk science’ or ‘machine risk science,’ and improve people's awareness of the severity of AI risk issues and the importance of AI risk governance.”

There’s a lot more I haven’t unpacked from the full translation: Wang’s views on possible forms, risks, and pathways to Strong AI; language asymmetries on AI governance — Wang references a wide range of English-language publications on the subject, including the Malicious Use report, Jessica Newman’s report on AI Security, and Allan Dafoe’s claim (in his AI Governance research agenda) that AI governance “may be the most important global issue of the 21st century.”

Lastly, while we should be careful to not overstate the influence of academic papers, the broader point is that one could ask the “have we really looked” question to other types of writing and thinking as well. Also, this excellent MERICS report on AI ethics and governance in China does point out: “Chinese academia seems to be gaining influence in official government efforts to govern AI.”

FULL(ish) TRANSLATION: AI Risk Research: A Field to be Explored Urgently

ChinAI Links (Four to Forward)

Must-read: AI Education in China and the United States

By Dahlia Peterson, Kayla Goode, and Diana Gehlhaus, this CSET brief is an essential contribution to understanding how Chinese and U.S. approaches differ when it comes to AI education:

Both countries’ approaches could result in uneven levels of AI workforce competitiveness . . . China’s centralized push could lead to widespread integration of AI education, but the resulting curricula could be shoddy for the sake of participating in the ‘AI gold rush.’ This risk is especially pronounced in under-resourced areas, which could produce underwhelming results. The United States’ varied, decentralized approach may allow for greater experimentation and innovation in how AI curricula are developed and implemented, but diverse approaches may exacerbate disparities in curriculum rigor, student achievement standards, and educator qualifications. As for similarities, the two countries share hurdles such as the rural-urban divide, equitable access to quality AI education, and teacher quality.

Should-read: The Scientist and the A.I.-Assisted, Remote-Control Killing Machine

This New York Times piece, by Ronen Bergman and Farnaz Fassihi, investigates the killing of Iran’s top nuclear scientist:

Iranians mocked the story as a transparent effort to minimize the embarrassment of the elite security force that failed to protect one of the country’s most closely guarded figures…

Except this time there really was a killer robot.

The straight-out-of-science-fiction story of what really happened that afternoon and the events leading up to it, published here for the first time, is based on interviews with American, Israeli and Iranian officials, including two intelligence officials familiar with the details of the planning and execution of the operation, and statements Mr. Fakhrizadeh’s family made to the Iranian news media.

Should-read: Securitization of Artificial Intelligence in China

In an article published in The Chinese Journal of International Politics last month, Jinghan Zeng shows that “the Chinese central government is performing a securitizing move by labeling AI as a security matter in order to convince local states, market actors, intellectuals, and the general public.” He also argues that this move may have unintended consequences: “this securitization trend could undermine Chinese key AI objectives by heading in an inward-looking, techno-nationalistic direction that may be seriously detrimental to China’s AI industry and leadership ambitions.”

Should-read: China Initiative aims to stop economic espionage. Is targeting academics over grant fraud ‘overkill’?

If a mistrial and six cases dismissed within several weeks weren’t damning enough for the China Initiative, how about this statement from John Hemann, former federal prosecutor who worked the flagship China Initiative case, quoted in this Washington Post article by Ellen Nakashima and David Nakamura. Pressure to “show statistics” for the initiative’s success “has caused a program focused on the Chinese government to morph into a people-of-Chinese-descent initiative,” including Chinese-born scientists working in the United States.

Thank you for reading and engaging.

These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a PhD candidate in International Relations at the University of Oxford and a researcher at the Center for the Governance of AI at Oxford’s Future of Humanity Institute.

Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).

Any suggestions or feedback? Let me know at chinainewsletter@gmail.com or on Twitter at @jjding99