ChinAI #83: AI Security Standardization White Paper 2019

Some interesting excerpts from the Oct 2019 White Paper

Welcome to the ChinAI Newsletter!

As always, the archive of all past issues is here and please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors).

Feature Translation: Excerpts from AI Security Standardization White Paper 2019

Context:

  • Issued in October 2019 by the National Information Security Standardization Technical Committee (全国信息安全标准化技术委员会) and its task force on big data security standards

  • Builds on the White Paper on AI Standardization published in January 2018 (first translated excerpts in ChinAI #1). The Jan 2018 White Paper is the first reference in the bibliography of this White Paper focused on AI security.

  • Writing units: China Electronics Standardization Institute, Tsinghua University, Baidu, Huawei, Alibaba, China Mobile, Renmin University, Ant Financial, IBM (China), etc. *In the 2018 WP, Panasonic, Intel, and IBM all participated, whereas this time IBM appears to be the only international company.

Interesting Excerpts:

Recommendation #5 pushes for increasing China’s influence over international standards in this domain, as well as developing international exchanges and cooperation:

  • “Continuously increase China’s international standards influence in the domain of AI security, strongly support Chinese work units and experts to participate in international standardization work, strengthen research on proposals for AI security standards, and encourage Chinese experts to serve in international standardization organizations and serve as editors of international standards projects.”

  • “fully develop China's international standardization exchanges and cooperation mechanisms, integrated with the rich application scenarios of China's AI industry, to develop cooperation and exchange mechanisms in emphasized and difficult areas of artificial intelligence security, borrowing the strength of international and foreign forces to enrich China's artificial intelligence security standardization work.”

Appendix B has an interesting list of practical applications:

  • First is Baidu’s: “Adversarial example attacks have gone from the laboratory environment into actual cyber confrontations, increasing threats to personal privacy, property security, traffic safety, and public safety, and restricting the credible and healthy development of artificial intelligence in various industries. There are currently no thorough solutions to the basic problems of the generation of adversarial examples involving the interpretability of deep neural networks. In the process of “landing” artificial intelligence, Baidu gradually formed an overall solution covering security verification, model reinforcement, detection of adversarial examples, and formal verification of model robustness—AdvBox.” This arXiv article has more on AdvBox in English.

  • Tsinghua University is also working on AI security but with a different model (open source, free, but for non-commercial purposes): “Directed at typical issues with adversarial attacks, Tsinghua University’s RealSafe algorithm library has been developed for different application scenarios, different threat scenarios, etc., covering a standard program library at the three levels of system, algorithm, and application. RealSafe is an open source standard library. Industry actors can use it for free for non-commercial purposes, providing platform support for R&D and standards work on the theory and algorithms of artificial intelligence security in China.”
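For readers unfamiliar with the adversarial examples that both AdvBox and RealSafe target, here is a minimal sketch of the fast gradient sign method (FGSM), one of the standard attacks such toolkits implement and defend against. The weights and data below are hypothetical, purely for illustration; this is not code from either toolkit:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical fixed weights for a toy 4-feature binary classifier.
w = np.array([2.0, -1.0, 0.5, 1.5])
b = -0.25

def predict_prob(x):
    """Probability that x belongs to class 1 under the toy model."""
    return sigmoid(w @ x + b)

def fgsm(x, y_true, eps):
    """Fast Gradient Sign Method: nudge x in the direction that most
    increases the cross-entropy loss, bounded by eps per feature."""
    p = predict_prob(x)
    # For logistic regression, the gradient of the binary cross-entropy
    # loss with respect to the input x is (p - y) * w.
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

x = np.array([0.4, 0.1, 0.3, 0.2])    # clean input, true label 1
p_clean = predict_prob(x)             # model is confident: p > 0.5
x_adv = fgsm(x, y_true=1.0, eps=0.3)  # small per-feature perturbation
p_adv = predict_prob(x_adv)           # prediction flips: p < 0.5
```

On deep networks the same idea applies, with the gradient computed by backpropagation; the "model reinforcement" and "detection of adversarial examples" components Baidu describes are countermeasures against exactly this kind of perturbation.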

Dig Deeper: Samm Sacks, Paul Triolo, and I did some analysis for New America re: China’s interests in developing AI standards back when the first white paper was released.

ChinAI Links (Four to Forward)

Should-read: Attacking Artificial Intelligence: AI’s Security Vulnerability and What Policymakers Can Do About It

For more on the policy implications of AI security, this Belfer report by Marcus Comiter, a PhD candidate in computer science at Harvard, gives a nice overview of AI security risks from input attacks (manipulating what is fed into the AI system) and poisoning attacks (corrupting the process during which the AI system is created). It also proposes “AI Security Compliance” programs.
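To make the second category concrete, here is a minimal sketch of a label-flipping poisoning attack on a toy nearest-centroid classifier. The data and model are hypothetical, not drawn from the report; they just show how corrupting training data corrupts later predictions:

```python
import numpy as np

def fit_centroids(X, y):
    """A nearest-centroid classifier: store the mean of each class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    """Predict the class whose training-set mean is closest to x."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Clean training data: class 0 clusters near (0, 0), class 1 near (4, 4).
X = np.array([[0.0, 0.0], [0.5, 0.2], [0.1, 0.6],
              [4.0, 4.0], [3.8, 4.2], [4.3, 3.9]])
y = np.array([0, 0, 0, 1, 1, 1])

clean = fit_centroids(X, y)

# Poisoning attack: the attacker flips labels on part of the training
# set before the model is built, dragging the class-0 centroid toward
# class 1's region and changing predictions on borderline inputs.
y_poisoned = y.copy()
y_poisoned[3:5] = 0   # two class-1 points relabeled as class 0
poisoned = fit_centroids(X, y_poisoned)

test_point = np.array([2.6, 2.6])
pred_clean = predict(clean, test_point)       # 1 with clean labels
pred_poisoned = predict(poisoned, test_point) # 0 after poisoning
```

An input attack, by contrast, leaves the trained model alone and perturbs the input at inference time, as in the FGSM example earlier in this issue's feature translation.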

Should-read: Who Owns Artificial Intelligence? A Preliminary Analysis of Corporate IP Strategies — GovAI Working Paper

Nathan Calvin and Jade Leung examine the competitive strategies employed by corporate AI developers and national governments regarding the protection of their IP, introducing an interesting “hybrid strategy” in which companies use selective open-source licensing agreements. Really interesting stuff on how open-source software may produce implicit and explicit lock-in effects as it relates to cloud services.

Should-read: A Brief Examination of Chinese Government Expenditures on Artificial Intelligence R&D

The team of Thomas J. Colvin, Irina Liu, Talla F. Babou, and Gifford J. Wong with the Institute for Defense Analyses argue, “The lack of credible information on Chinese government expenditures on AI R&D can lead to confusion and uneven comparisons between Chinese and U.S. expenditures on AI, which in some cases have caused alarm among U.S. policy makers and observers.” I particularly enjoyed this passage about Tianjin’s $15 billion fund (often referenced as an unfavorable comparison with US government spending on AI R&D): “Such a comparison is likely to be misleading because, unlike the U.S. Federal Government expenditures with which it was compared, the announced expenditure from the Tianjin government does not appear to be annualized, focused on R&D, come from the central government, or consist of an actual outlay of money.”

Should-listen: 80,000 Hours Podcast on China, its AI Dream, and What We Get Wrong about Both

Rob Wiblin leads us through a wide range of topics, and my talk at Effective Altruism Global (London) is appended to the end. Thanks to the team for really excellent production work, and there’s a full transcript available as well.

Thank you for reading and engaging.

These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a PhD candidate in International Relations at the University of Oxford, Researcher at GovAI/Future of Humanity Institute, and non-resident Research Fellow at the Center for Security and Emerging Technology.

Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).

Any suggestions or feedback? Let me know at chinainewsletter@gmail.com or on Twitter at @jjding99