ChinAI #115: White Paper on AI Governance (2020)
From the China Academy for Information and Communications Technology and the AI Industry Alliance
Jeffrey Ding | Oct 12, 2020
Greetings from a world where…
another white paper roams free
…As always, the searchable archive of all past issues is here. Please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content, but those who can pay support access for all AND compensation for awesome ChinAI contributors).
Feature Translation: White Paper on AI Governance Summary Slides
Context: This comes from two entities we’ve featured quite a bit in past issues: CAICT and the AI Industry Alliance. I had some serious difficulty actually getting the full pdf of the White Paper, which was just published in September — if anybody else has more luck, let me know. The article announcing the white paper directs you to follow CAICT’s WeChat account and get the pdf link from there: http://www.caict.ac.cn/kxyj/qwfb/bps/202009/P020200928368250504705.pdf.
Unfortunately, I had a real tough time downloading the full white paper from this link. The original article did include some summary slides, though, so I converted them into Google Slides and added some selective translations as comments. Every slide I translated should have a comment icon with a 1 in the middle.
Slide 5 highlights 3 technical characteristics of AI that make it more difficult to govern than other technological domains: 1. general-purpose nature (leads to risks that are more widespread); 2. dependency on data (leads to results that are more uncontrollable); 3. algorithmic blackbox (generates processes that are more difficult to explain).
Slide 6 emphasizes the risk of AI influencing the political process and smearing (抹黑) political figures. This was one of three risks highlighted at the societal level. The other two were: intensification of unemployment and the wealth gap, and the frequent occurrence of infringement incidents (侵害实践频发), which I take to refer to the manipulation of security videos?
The paper places a lot of priority on self-regulation by companies, which is not surprising with AIIA as a co-drafter. See slides 10 and 22 (the latter identifies companies as the main governance entity in the near-term stage of AI governance).
Perceptions of other countries’ approaches to AI governance (slides 14 and 19):
The paper frames the U.S. as using ethical norms to guarantee national security. It recognizes the U.S. as having “published the world's first AI ethics principles for military uses, grasping hold of the ‘power over explanation’ (解释权) for (military AI) regulations.” Germany’s goals for “AI Made in Germany” are mentioned. China’s approach is described as advocating for the development of responsible AI.
The paper also analyzes countries’ efforts in subdomain-specific governance. In autonomous vehicles, it evaluates the U.S. as having a very clear autonomous vehicle strategy. South Korea is judged to be the first to put forward autonomous driving safety standards. Other subdomains covered are deepfakes, smart finance, and smart medicine.
EXCERPTED TRANSLATION: SUMMARY SLIDES ON AI GOVERNANCE WHITE PAPER
ChinAI Links (Four to Forward)
A CSET data brief by Simon Rodriguez, Autumn Toney, and Melissa Flagg drills down into the subdomain of computer vision and finds that China has overtaken the U.S. in patent filings in computer vision (based on data from 1790 Analytics).
Note: I think we should be pretty skeptical about drawing too many conclusions from patent data, especially since this brief makes no mention of patent quality. Studies have shown that China’s patent stats are inflated by both universities and companies taking advantage of patent subsidies to produce large quantities of low-quality patents. Only 4 percent of patent applications filed in China are then filed in other jurisdictions, which is a key marker of quality. The comparable figure for the U.S. is 32 percent. I cited these points in past testimony to the U.S.-China Economic and Security Review Commission, in which I concluded: “China is not poised to overtake the U.S. in the technology domain of AI.”
That being said, the approach of focusing on one domain of AI is one we could all learn from. The authors capture this well: “Research and reporting on this topic tend to generalize AI, yet treating it as a singular entity with a homogenous development landscape loses sight of variability in both research and potential applications. In order to effectively compare AI production between countries, it is necessary to drill down into the subdomains of AI and identify exactly where and how nations truly lead.”
Should-watch: Bridging AI's Proof-of-Concept to Production Gap
Andrew Ng, founder & CEO of Landing AI and founder of deeplearning.ai, discusses key challenges facing AI deployments and possible solutions, ranging from techniques for working with small data to improving algorithms' robustness and generalizability to systematically planning out the full cycle of machine learning projects.
Should-read: The Future of Military Applications of Artificial Intelligence: A Role for Confidence-Building Measures?
In Orbis, Michael C. Horowitz, Lauren Kahn, and Casey Mahoney explore confidence-building measures as a form of information-sharing and transparency-enhancing arrangements to enhance strategic stability. Their aim is to “speed the learning process about the implications of military applications of AI in ways that reduce the risk that states’ uncertainty about changes in military technology undermine international security and stability.”
Should-read: Techie Software Soldier Spy
Longread by Sharon Weinberger on Palantir, as it prepares to IPO. Exposes some creation myths, and gets into the nitty-gritty of how tech is actually deployed in the military. Concludes with this: “So why are people still so excited about Palantir? One former national-security official told me the company is now famous for being famous, sort of like the Kardashians.”
Thank you for reading and engaging.
These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a PhD candidate in International Relations at the University of Oxford and a researcher at the Center for the Governance of AI at Oxford’s Future of Humanity Institute.
Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).
Any suggestions or feedback? Let me know at email@example.com or on Twitter at @jjding99