Plus, my rambles on the "information arbitrage" basis behind each week's 4 ChinAI links
Jul 15 | Public post
Welcome to the ChinAI Newsletter!
***Very grateful for those who are now supporting ChinAI through subscriptions, to refresh: we’re doing a Guardian-style model of tipping where there’s no difference in content access between those on the free email list and those who are subscribers (two options to tip: $12/month or $30/year). Revenues go to my hot Cheetos fund and contributing translators (like Jordan Schneider who did awesome work for this week’s feature translation). Link to subscribe here and archive of all past issues here. These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a Rhodes Scholar at Oxford, PhD candidate in International Relations, Researcher at GovAI/Future of Humanity Institute, and Research Fellow at the Center for Security and Emerging Technology.
This week’s feature translation is a joint work by Jordan Schneider (who found this insightful piece on Tencent’s tango with “Tech for Social Good”) and myself. Written by Zhou Yu, an editor with Huxiu (a major platform for science & tech thinkpieces), this article delightfully wanders from online dispute resolution to AI chess and GDPR, all the while meditating on what responsibility tech giants have to society more broadly. Jordan was a Yenching Scholar at Peking University and previously worked at Bridgewater Associates. He now hosts the ChinaEconTalk podcast (I would highly recommend his recent conversation with Neil Thomas on China’s aviation industry) as well as the weekly ChinaEconTalk newsletter (more on this in ChinAI links section).
What follows are our key takeaways from this breakdown of Tencent’s ethical psyche:
In 2019, after the government took a huge bite out of Tencent’s earnings by freezing video game monetization approvals, CEO Pony Ma changed the firm’s mission statement to “technology for social good” — though the adoption of this mission has been messy and confusing at times. The article also highlights the oft-marginalized role of Tencent Research Institute (TRI), which has been quite outspoken about tech and AI ethics (the very first ChinAI issue translated chapters from a book on AI strategy and ethics coauthored by TRI). Zhou extends his analysis beyond Tencent, provoking questions like: Is China in the midst of its own techlash? Last year’s government crackdowns would point in that direction. After a handful of DiDi drivers murdered their passengers, local governments severely tightened regulation on who could drive a rideshare. Amid addiction concerns, last year government licensing bodies put the entire Chinese gaming industry in a deep freeze.
But a widespread consumer-driven techlash has yet to materialize. Why? Perhaps because Tencent — whose WeChat product is intertwined with Chinese lives on a par with Google or Facebook in the West — has yet to have a mega-scandal of its own. While Tencent claims that it doesn’t store messaging data, police clearly have access to chat records, which means hackers eventually will get to them as well. For now, the focus for Tencent is more on appeasing the government than the people. But once WeChat faces its own Cambridge Analytica, it will have much more to worry about than delayed game monetization. Zhou states that companies with foresight, like Tencent, are preparing to set up dispute resolution mechanisms on their platforms.
Other topics discussed: “Tech for Good” as a way for companies to manage government relations; the extra-legal domain of the digital world; rumors of Tencent deploying a chess AI as an anti-addiction measure to stop players’ hot streaks in QQ chess games.
When I’m fishing for my four links each week amongst a sea of information overload/hot takes/views from nowhere/and bad memes, I’m looking at places neglected by the herd and the blob. The four weekly ChinAI links I recommend, as well as the whole foundation of ChinAI, are rooted in the notion of “information arbitrage.” ChinAI’s thesis is a bet that there’s a huge gold mine of Chinese-language work on AI-related issues just waiting to be discovered. Jordan’s ChinaEconTalk newsletter aligns with the ChinAI template of valuing this knowledge arbitrage, translating articles from Chinese media about tech, business, and the political economy on a biweekly basis. This ChinaEconTalk issue was one of my favorites, a jam-packed one that tackled Meetup Music (or 音遇) - China’s latest AI/KTV-hybrid app addiction - and WeChat’s new “have a look” feature (看一看), AKA “Top Stories” in its English version. *Sidenote: there should be a ChinAI-type platform for every topic (e.g. hungry, up-and-coming folks should take a ChinAI approach to China’s foreign policy or to what top Chinese political scientists are writing about, etc.).
Here’s another example of knowledge arbitrage in the tech + politics domain. In this field right now, there are a whole bunch of smart people who know politics and are trying to learn tech, and there are very few smart people who know tech and are trying to translate their technical knowledge into political frames. Basic rational optimization means you should be trying to take advantage of the latter group. There is also a very, very small sliver of people who know both tech and politics very well — take for example this recent OpenAI paper by Amanda Askell, Miles Brundage, and Gillian Hadfield that discusses industry cooperation on safety norms in AI, including factors that will address or exacerbate collective action problems related to AI systems. One area for improvement in this piece is to clearly delineate where AI introduces unique collective action problems, contrasting these with existing cooperation hurdles associated with digital social networks or other tech subsectors.
A third type of knowledge arbitrage: even the smartest people will always be biased toward advocating for a change of some sort, so we always undervalue perspectives that defend the status quo. Last week I linked the wrong submission to the U.S. National Institute of Standards and Technology RFI re: federal engagement in AI standards. This submission by Chan, Jensen, and Zhong made a refreshing, strong case against too much government activism in standard-setting, in which they reference the “Blind Giant’s Quandary,” a phrase coined by Stanford economist Paul David that refers to the risk of government failure in standard-setting: “the time when the government is the most powerful (i.e., being a giant) in influencing the future trajectory of a technology is often the time when the government knows the least about what should be done (i.e., being blind-sighted).”
Last but not least, investing in reading and supporting longform journalism is another area of knowledge arbitrage in our short-attention-span society. See this piece by Yoojin Wuertz, an American immigrant from South Korea, on her struggles with raising a bilingual son (and so much more).
Chinese Phrase of the Week: 啼笑皆非 -- ti2xiao4jie1fei1 -- not knowing whether to laugh or cry — ridiculous
Thank you for reading and engaging.
Shout out to everyone who is commenting on the translations - the idea is to build up a community of people interested in this stuff. You can contact me at email@example.com or on Twitter at @jjding99