ChinAI #171: Tencent Goes Inside the Black Box
Daniel Zhang translates Tencent's Explainable AI report
Greetings from a world where…
Chinese Wordle exists, and it is humbling
…As always, the searchable archive of all past issues is here. Please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content, but those who can pay support access for all AND compensation for awesome ChinAI contributors).
Feature Translation: Explainable AI Development Report 2022
Much thanks to Daniel Zhang, an AI policy researcher, for contributing both this week’s feature translation and the accompanying analysis (very lightly edited by me). Follow him on Twitter here.
Context: On January 17, Tencent published what it called the (Chinese) industry’s first report on explainable AI. The report has five chapters, starting with an overview of the challenge and a definition of explainability. It then examines recent policy on explainable AI and the appropriate industry response, followed by a review of existing explainability tools in the industry (Google’s model cards, IBM’s AI fact sheets, Microsoft’s datasheets for datasets, etc.) and how Tencent has approached the challenge. Finally, the report offers five thoughts on the future development of explainable AI (see the translation) and concludes with brief takeaways.
Key takeaways:
In this report, Tencent’s key policy message is that there should not be a one-size-fits-all regulatory approach to AI explainability. The goals of requiring algorithmic explanation vary across AI systems, application areas, audiences, and contexts. It is critical to balance the need for algorithmic explainability against other factors, such as privacy, cybersecurity, and intellectual property rights, while exploring alternative mechanisms like routine monitoring and human auditing. The report also argues that efforts on explainable AI should be led by industry rather than by government-mandated regulations, as market forces will continue to incentivize private companies to improve explainability.
Within this past year, we have seen the Chinese government roll out a series of policy documents and guiding opinions on AI governance. Most notably, the Personal Information Protection Law of 2021 and recent algorithm management regulations require companies both to provide information on how their AI algorithms work and to explain certain algorithmic decision-making processes when needed. Tencent’s report is part of the industry response to this growing body of AI governance requirements from the Chinese government. While Tencent’s report is the only one so far that focuses exclusively on explainability, other Chinese tech giants, such as Alibaba-affiliated Ant Group and JD.com (in collaboration with the China Academy of Information and Communications Technology), published guidelines and white papers on trustworthy AI in 2021.
For interesting technical details, as well as references to how Chinese platform companies began disclosing the principles behind their algorithms last year, see FULL TRANSLATION: Chapter 4 of Explainable AI Development Report 2022.
ChinAI Links (Four to Forward)
Must-read: ‘In the End, You’re Treated Like a Spy,’ Says M.I.T. Scientist
Ellen Barry, in The New York Times, interviews Professor Gang Chen after the U.S. government dismissed its case against him, marking yet another setback for the China Initiative. After the dismissal, his colleagues congratulated him, but he remained sorrowful. From the article: “We are all losers, right?” Dr. Chen said. “My reputation got ruined. My students, my post-docs, they changed their career. They changed to other groups. M.I.T., the country, the U.S., we lose. I can’t calculate the loss. That loss cannot be calculated.”
Should-read: Five longreads on large language models like OpenAI's GPT-3 and the future of writing
Longreads has compiled this great list: “These five longreads dive into large language models created by OpenAI, Google, and others, examine how sophisticated OpenAI’s current third-generation version has become, and highlight a few ways that writers have experimented with language generators in creative ways. I also appreciate the light interactive elements in these stories — typed text animation that signals the AI’s input and touch, which visualizes the interplay between human and machine on the page.”
Should-apply: Senior Research Associate at Centre for the Study of Existential Risk (Cambridge, UK)
CSER is looking for a new Senior Research Associate to work on the long-term impacts, risks, and governance of AI. This post offers a unique opportunity to help lead an ambitious group of researchers who are highly motivated to have a real impact on the safe and beneficial development of AI. Deadline to apply is February 27, 2022.
Should-read: The Lamplighters by Emma Stonex
I really enjoyed reading this novel. The central conceit is based on a historical event: the mysterious disappearance of three lighthouse keepers back in 1900. The novel also ponders automation and the replacement of lighthouse keepers.
Thank you for reading and engaging.
These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a postdoctoral fellow at Stanford's Center for International Security and Cooperation, sponsored by Stanford's Institute for Human-Centered Artificial Intelligence.
Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).
Any suggestions or feedback? Let me know at chinainewsletter@gmail.com or on Twitter at @jjding99