An interview with Yi Zeng, director of a new AI ethics and safety research center
Jun 10 at 6:53 pm | Public post
Welcome to the ChinAI Newsletter!
These are Jeff Ding's (sometimes) weekly translations of writings on AI policy and strategy from Chinese thinkers. I'll also include general links to all things at the intersection of China and AI. Please share the subscription link if you think this stuff is cool. Here's an archive of all past issues. *Subscribers are welcome to share excerpts from these translations as long as my original translation is cited.
I'm a grad student at the University of Oxford where I'm based at the Center for the Governance of AI, Future of Humanity Institute.
AI Expert Yi Zeng: Value Alignment with Humanity is AI’s Biggest Challenge
This week’s feature translation comes from Helen Toner, Director of Strategy at Georgetown’s Center for Security and Emerging Technology. What follows is some context on the translation from Helen:
I came across an interview with Zeng Yi about the Beijing AI Principles (in English), which were launched a couple of weeks ago. Zeng Yi is the head of the new "AI Ethics and Safety Research Center" founded by the Beijing Academy of Artificial Intelligence (BAAI); the center was announced at the same time as the principles.
BAAI, in turn, was launched by the Ministry of Science and Technology and the Beijing Municipal Government in November 2018. It is supported by Peking University, Tsinghua University, the Chinese Academy of Sciences, Baidu, Xiaomi, Bytedance, Meituan Dianping, and Megvii, though it is not entirely clear what that support entails. In addition to his new post, Zeng Yi is a Research Fellow at the Institute of Automation within the Chinese Academy of Sciences.
This interview, and the release of these principles in general, presents an interesting conundrum for those of us used to thinking of China as not having much of an active conversation about the ethics of AI. On the one hand, it's easy to be cynical and say that this is window dressing, intended to let China present a good face to the world while continuing to use AI in unethical ways behind closed doors. On the other hand, I don't think it's fair to assume that Zeng Yi and others involved in this effort are acting in bad faith.
I'm reminded of this article on privacy by Samm Sacks and Lorand Laskai, where they lay out how it is possible - though counterintuitive - for Chinese people to be concerned about privacy, and for the Chinese government to be responding to those concerns by building up data privacy protections on the corporate side, even while the Chinese state itself makes no concessions whatsoever to the idea that citizens should have privacy from their own government. Similarly, it seems to me that it should be possible both to believe that many AI researchers in China care about the ethics of the technology, and want to see it used for good, while also keeping in mind that the CCP will continue to use these technologies in repressive and highly unethical ways.
Where does this leave us? I don't know. At a minimum, I hope that Western AI researchers who have the chance to work with mainland Chinese counterparts can use these principles as a starting point for discussions of the ethics of AI. Beyond that, it's not clear how these principles are supposed to affect what happens on the ground - but then again, isn't that true of all the many principles documents floating around?
Big thanks to Helen for doing the heavy lifting on this insightful interview. I'm always keen to feature more translations like this one from other contributors!
This Week's ChinAI Links
Helen and I, along with Elsa Kania, testified last week before the U.S.-China Economic and Security Review Commission on “U.S.-China Competition in Artificial Intelligence.” In my written testimony, I argued that “China is not poised to overtake the U.S. in the technology domain of AI; rather, the U.S. maintains structural advantages in the quality of S&T inputs and outputs, the fundamental layers of the AI value chain, and key subdomains of AI.”
MacroPolo continues its series on Chinese AI talent, with Matt Sheehan’s breakdown of NeurIPS 2018 papers and the following three takeaways:
1. Chinese-born researchers conduct a relatively small portion of the most elite AI research but a substantial portion of upper-tier AI research;
2. A majority of Chinese-born researchers conducting upper-tier AI research do so at US institutions;
3. The majority of Chinese-born researchers conducting upper-tier research attended graduate school in the United States, and the majority of them work in the United States after graduation.
Really good reporting by Buzzfeed on how U.S. capital is tied up in Chinese facial recognition companies. Good details about how institutional investors make decisions: “One Silicon Valley investor, who declined to be named, noted that institutional investors can include clauses in agreements with venture funds that prevent them from investing in certain industries like firearms or gambling. That investor, however, noted that they had never seen clauses that directly address issues around human rights.”
MIT Tech Review piece with more background on the Beijing AI Principles, along with similarities and differences with other ethical frameworks laid out by Western companies and governments.
Thank you for reading and engaging.
Shout out to everyone who is commenting on the translations - the idea is to build up a community of people interested in this stuff. You can contact me at firstname.lastname@example.org or on Twitter at @jjding99