ChinAI #101: The Demise of Technology Neutrality (Part I)

Plus, Takeaways from the 2020 Beijing Academy of AI Conference

Greetings from a land beautiful enough for hyphenated Americans

…as always, the archive of all past issues is here and please please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors).

Feature Translation: The Demise of “Technology Neutrality” (Part 1)

CONTEXT: The news that India was blocking 59 Chinese apps, including TikTok, WeChat, and Weibo, sparked a lot of discussion in Chinese media. This piece (original Mandarin) on the evolution of “technology neutrality” over the years was one of the most thought-provoking. It’s written by Liu Bohan for 放大灯 (enlarging light), a new S&T media platform under guokr, a popular Chinese S&T education community.


  • Some glaring gaps in high-level framing: to motivate the essay, the author calls out how “highly nationalistic Indians once again trampled on the corpse of technology neutrality” but doesn’t mention China’s treatment of int’l tech companies — a point made in comments sections of the Huxiu version of the article

  • It’s very interesting how the author views recent U.S. tech developments against the backdrop of technology neutrality: 1) he discusses Facebook’s attempt to maintain a neutral attitude re: hate speech on the platform, and 2) he is very sympathetic to Yann LeCun, who suffered a “tortuous interrogation” over his comments attributing algorithmic discrimination to bias in the dataset. The author also reviews how tech neutrality was established in cases involving Sony’s Betamax and Napster’s p2p sharing, but concludes that “proponents of technology-as-value-laden have gradually won an overwhelming victory in the Western intellectual community.”

  • According to the author, the contest over technology neutrality in China has been slower to go down the same road: there is still a basis of technological optimism, expressed in the various slogans of the times such as “Mr. Science” (“赛先生”), championed by the May Fourth New Culture Movement, and later “Science and technology are the primary productive forces,” a saying by Deng Xiaoping in 1988.

  • A challenge for readers: I only translated the first half of this lengthy essay this week, but next week will fill in the gaps on why technology neutrality has also been doomed in China, which covers the TD-SCDMA (3G standard), Qvod, and He Jiankui cases. One call for help/challenge for ChinAI readers and contributors: I hit an absolute dead end on the 2nd to 6th paragraphs in part 3 on Chinese views toward technology in the late 19th century. If anyone wants to give it a shot just comment in the doc. Here’s a taste of that nasty (but probably super interesting) 2nd paragraph:


FULL TRANSLATION: The Demise of “Technology Neutrality” (Part 1)

4 Takeaways from the 2020 BAAI Conference

*Huge shoutout to Kwan Yee, a Summer Research Fellow at the Future of Humanity Institute and an incoming Yenching Scholar — below are her thoughts:

The Beijing Academy of Artificial Intelligence (BAAI) hosted its annual BAAI Conference from June 21-24. BAAI was established in 2018 by the Beijing Municipal Science and Technology Commission and Haidian District government, with the support of some of the most influential academic and industry players in AI such as Peking University, Tsinghua University, the Chinese Academy of Sciences, Baidu, Xiaomi, and Megvii. BAAI serves as an experimental hub for cooperation between the academic and corporate sectors while also receiving support from the government, including funding and government data. The 2020 BAAI Conference was held as a live broadcast and joined by around 30,000 online viewers. Four major takeaways from the conference:

  1. The Chinese take on privacy: Speakers emphasized the need to build safe, reliable, and trustworthy AI while challenging a Western, personal consent-oriented conception of privacy. Speakers were optimistic that privacy regulations will drive Chinese developers to innovate better privacy protection technologies and embed such protections into their products; to this end, Yang Qiang presented his work on federated learning and highlighted the need to keep humans-in-the-loop when developing AI. Further, some suggested that the U.S. was lagging behind in privacy requirements and relying on unsustainable, corner-cutting growth as a result. Zhang Bo, the dean of Tsinghua’s AI lab, stressed the need to take into account public as well as private interest perspectives when considering issues of privacy and criticized Western definitions of privacy that sacrifice the public interest in favour of personal consent.

  1. Going beyond DL: AI scientists discussed possible futures in AI development and the need to move beyond the current deep learning paradigm to attain further breakthroughs towards human-level AI. Reviewing the past six decades of AI development with Bart Selman and John Hopcroft, Zhang Hongjiang observed the retreat of GOFAI techniques amidst the deep learning revolution and questioned the future of AI paradigms. Zhang Bo and Bart Selman pointed to the scaling limitations of deep learning in accessing higher cognitive functions such as language and causal reasoning. Both scientists predicted that a ‘hybrid’ approach between deep learning and GOFAI techniques would be required, while John Hopcroft suggested looking towards other disciplines such as neuroscience. Further, Zoubin Ghahramani expected machine learning to progress within a framework of probabilistic modelling, and Yi Wu encouraged contemplating the development of intelligence from an evolutionary perspective when introducing OpenAI’s paper on Emergent Tool Use from Multi-Agent Interaction.

  1. Challenges and opportunities for Chinese AI development: Speakers acknowledged the ‘publish-or-perish’ hurdle facing young Chinese researchers and pointed to opportunities for advancing basic AI research in China. Speakers advised early-career AI researchers to choose their research topics wisely and look beyond the current trends to identify where they can contribute. While Zhang Bo pointed out that China had made vast progress from barely being able to publish on AI at all to the Chinese academic field now being oversaturated with AI papers, he noted that unlike their Western counterparts, Chinese students lacked economic and career affordances to take risks with their research. In a fireside chat moderated by Zhang Hongjiang, John Hopcroft and Alan Kay suggested adjusting grantmaking metrics so that researcher funding will not be so heavily conditioned on the quantity of their publications. Bart Selman cited the example of deep learning: around 2010, only 5-10 people were working on the topic, and it was the continued investment in deep learning research and, more broadly, a diversified portfolio of AI methods that enabled a huge breakthrough in the field.

  1. BAAI launches the AI4SDGs think tank: The Research Center for AI Ethics and Sustainable Development, housed in the Beijing Academy of Artificial Intelligence, is leading the AI4SDGs Think Tank to promote the use of AI technologies for the UN Sustainable Development Goals. The nonprofit is self-described as “an online open service for everyone, a global repository and an analytic engine of AI projects and proposals that impacts UN Sustainable Development Goals” and has an associated Research Program that is currently open to applications. Kai-Fu Lee is on the board of the think tank.
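On the federated learning work mentioned in the privacy takeaway above: the core idea is that clients train on their own data locally and share only model updates, which a server averages. Here is a minimal, hypothetical sketch of federated averaging (FedAvg) on a toy linear regression problem; all names and data are illustrative, not drawn from Yang Qiang's actual systems:

```python
import numpy as np

def local_step(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: linear regression via gradient descent.
    Raw data (X, y) never leaves the client; only updated weights are shared."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(global_w, clients):
    """Server step: aggregate client updates, weighted by local dataset size."""
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    updates = np.stack([local_step(global_w, X, y) for X, y in clients])
    return (sizes[:, None] * updates).sum(axis=0) / sizes.sum()

# Two clients hold disjoint private datasets drawn from the same relation y = 2x.
rng = np.random.default_rng(0)
clients = []
for _ in range(2):
    X = rng.uniform(0, 1, size=(50, 1))
    clients.append((X, X @ np.array([2.0])))

w = np.zeros(1)
for _ in range(20):  # communication rounds between server and clients
    w = federated_average(w, clients)
print(w)  # converges toward [2.0] without pooling any raw data
```

The point of the sketch is the division of labor: `local_step` runs on-device with private data, while the server only ever sees weight vectors, which is the privacy-preserving property speakers highlighted.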

ChinAI (Four to Forward)

In all honesty I’m falling behind on reading, so send me recommendations please!

Should-read: China & AI: What the World Can Learn And What It Should Be Wary of

Hessy Elliott for thequint summarizes China’s AI development through the lens of the good (fast-paced and pragmatic approach to AI development and implementation), the bad (use of AI to enable surveillance and detention of ethnic minorities), and the unexpected (the important role of local AI ecosystems and decentralized policies on AI development). A useful readout of the Nesta essay collection she organized.

Should-read: Translation: China’s ‘Data Security Law [Draft]’

For DigiChina, Emma Rafaelof et al. translate a draft Data Security Law for public comment, which “marks a significant evolution in China’s data protection regime” and “is set to specify new responsibilities and authorities for government offices and private actors.” Article 19 discusses how to regulate data based on different levels of importance as it pertains to economic and social development.

Should-read: Data-driven Covid Management in China

Very balanced, informative MERICS report on digital solutions China has used in combatting Covid. It highlights how some contact tracing and data sharing tools have been successful to some extent, but also notes some of the drawbacks that aren’t discussed enough: “However, the swift roll out of data-driven solutions to manage public health also highlighted several kinds of risks. Technological solutions like the QR health codes proved only partially functional or serviceable. Personal data has been misused by companies to collect data for their own commercial interest. Local cadres have also abused personal data in the drive to detect infected people and reduce new cases.”

*Last week I participated in a MERICS Webinar: China as an AI superpower? Quantifying China’s AI progress against the US and Europe — good people and good conversations. Here’s the video.

Should-read: In cloud clash with Alibaba, underdog Tencent adopts more aggressive tactics

By Pei Li and Josh Horwitz for Reuters: once you do this sort of thing for a while, you start to notice which journalists covering China tech actually have good, well-placed sources, and it shows in this article about the competition heating up between Tencent and Alibaba in cloud services. The story draws on insights from two Tencent sources in the company’s cloud division and gives a good overview of the current state of play.

Thank you for reading and engaging.

These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a PhD candidate in International Relations at the University of Oxford and a researcher at the Center for the Governance of AI at Oxford’s Future of Humanity Institute.

Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).

Any suggestions or feedback? Let me know at or on Twitter at @jjding99

ChinAI #100: Re-igniting an age-old debate: Data vs. Algorithms

Plus, riffing on "The Industrialization of AI"

Greetings from a land where, on the one hand, it’s the 100th issue of ChinAI, but on the other hand, 100+ years after The Jungle shed light on the experience of Lithuanian immigrants in Chicago’s meatpacking district, we see horrific Covid outbreaks among meatpacking workers, many of whom are immigrants or refugees…

…as always, the archive of all past issues is here and please please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors).

Feature Translation: The Algorithms vs. Data Debate provokes deeper thinking about AI

CONTEXT: In my opinion one of the most talented reporters on the AI beat (the lack of qualifiers in that statement is intentional) is 四月 for jiqizhineng. Their article from last week revisits a Jan 2020 Economist piece, which sparked some back and forth on Weibo among some Chinese AI professors regarding the relative importance of data vs. algorithms as drivers in China’s success in AI.

KEY TAKEAWAYS — Three points: 1) What sparked the debate, 2) Where are we at with data vs. algorithms, 3) Where are we going?

  • 1. What sparked the debate? Recently, Yungang Bao, a professor at the Chinese Academy of Sciences, shared an Economist piece about 莫比嗨客 (MBH), a representative data labeling company. On Weibo he wrote that “The Economist featured MBH in an article on the same high level as Sensetime and Megvii, even spilling more ink on MBH.” He then called for China’s “new infrastructure” policy (covered in ChinAI #91) to give more support to the data labeling industry, citing how MBH employs 300,000 people in poorer regions of China, as well as analogizing Megvii/Sensetime to Apple and MBH to Foxconn.

  • Many netizens responded to Bao’s Weibo post, arguing that many types of data used in AI cannot just be outsourced to companies like MBH for labeling (e.g. the labeling of network data requires high levels of expert knowledge). Zhou Zhihua, who leads a top AI lab at Nanjing University (ChinAI #5 covered some of his previous writings on Strong AI), also chimed in on the side of algorithms: “A powerful company must have something in terms of algorithms, but it is not like everyone can see it when the paper is published. Often, the algorithm applier does not want to be exposed, and in particular the algorithm scheme cannot be disclosed. So what you can see is only on the surface level.”

  • Bao’s response cites the Economist article again: “Many of the algorithms used contain little that is not available to any computer-science graduate student on Earth. Without China’s data-labelling infrastructure, which is without peer, they would be nowhere.”

  • 2. Where are we at with data vs. algorithms? The answer is always it depends…but we can do better than leaving it at that. Personally, I think it’s very difficult to make the case that China’s data labeling industry provides a significant comparative advantage for China’s AI success. First of all, companies like MBH don’t just supply Sensetime/Megvii; I’d wager international firms make up a fair amount of the customer base. Second of all, advances in unsupervised learning — even in fields like image recognition where data labeling may be most salient — lessen the demand for labeled data. In many smart manufacturing systems, the constraint is not the amount of sensor data but rather the talent/skills to develop ML algorithms to make the most of that data.

  • 3. Where are we going? If algorithms are the bottleneck, is the solution, then, the mass production of AI algorithms? This process involves “algorithm factories,” which Xu Bing (co-founder of Sensetime) describes as “a factory where data is continuously refined in a furnace of computing power, and where batches of algorithm models are produced at a lower cost and are continuously brought into the market.” Sensetime’s “SenseParrots,” for instance, is a prototype of this “algorithm factory.” Megvii also recently open-sourced MegEngine, a deep learning framework meant to help diffuse the mass production of algorithms by university students, teachers, and AI developers in SMEs and traditional industries. The article’s conclusion: “AI technology must move towards industrialization.”

FULL TRANSLATION: Algorithms vs. data, which plays the decisive role? A North-South "debate" among big shots provokes deeper thinking about AI

Reflections: The Industrialization of AI

Let’s spitball a little bit about this idea of “The Industrialization of AI (IoAI).” First, it’s important to clarify what the IoAI is not. It’s neither the application of AI to industry nor the application of AI to industrialize industry.

  • Application of AI to industry = facial recognition as an improved product in the existing identity authentication industry.

  • Application of AI to industrialize industries = the application of machine quality inspection to make production lines more automated (ChinAI #58)

The Industrialization of AI, rather, refers to a transition in the methods of producing “AI.” In a January 2020 issue of importAI, Jack Clark described IoAI as “what happens when AI goes from an artisanal, craftsperson-based profession to a repeatable, professional-based profession.” Two examples:

  • He notes, for instance, how AI software frameworks have evolved from tools built by random university students (e.g. Theano) to industry-developed systems (e.g. TensorFlow, Pytorch). Relatedly, this week’s feature translation describes Sensetime’s “SenseParrots” as something that has evolved from a technical framework to “an industrial-grade model production platform.”

  • When Jack first mentioned IoAI in an October 2018 issue, he stated that the “emergence of new large-scale benchmarks for applied AI applications represent further evidence for the current era being ‘the Industrialization of AI’.” At that time, “AI Benchmark,” which tests the performance of AI software on different smartphones, had just been released. NIST’s Facial Recognition Vendor Test is in this vein.

This naturally leads us to ask: Can we use the industrialization of the 19th century American machine tool industry as a useful analogy for the industrialization of AI? Why not? It’s the 100th issue, let’s get wild. What was involved in the process of industrializing machine tools — the widespread adoption of milling machines and lathes that cut and shape metal, wood, or other materials?

  • Vertical specialization: Before the 1820s the machine tool industry wasn’t really a separate industry. If you were making sewing machines, you would build your own tools to cut up metal to make a sewing machine. Do we have a separate “AI industry”?

  • Resource endowments: the American machine tool industry was very resource-intensive (this approach required a lot of wood and metal), and the U.S. had a more abundant supply of wood and metals than European competitors, which some argue explains why the U.S. took better advantage of the industrialization of machine-making. Advances upstream, like with high-speed steel, were crucial to advancing machine tool development. A lot of parallels to mull over: we know it’s more complicated than China just has more “wood” than the U.S.; what are the upstream advances that will be crucial to IoAI (e.g. in cloud computing?)

  • Standardization: This connects to the benchmark stuff that Jack is talking about. Standardization was at the heart of machine-tool-enabled mass production — the idea that with more precise machine tools you could make standardized, interchangeable parts. Each manufacturer, however, had their own standards, so there was a need for a broader “standardization” that connected different firms, communities, markets, and states.

  • Technological convergence: this cluster of methods — using a sequential series of special-purpose machine tools to make stuff — could be applied to the manufacture of sewing machines, clocks, bicycles, automobiles, etc. This concept is at the heart of AI as a general-purpose technology.

More questions than answers but maybe progress is made by asking better questions. Much of this is drawn from Rosenberg’s 1963 article in The Journal of Economic History on the American machine tool industry. If there’s one article that has transformed the way I think about technological development, it would be this one.

ChinAI (Four to Forward)

Should-read: GovAI Submission to European Commission’s Consultation on AI White Paper: a European approach to excellence and trust

Big shout-out to Stefan Torges, a GovAI fellow, who researched and drafted GovAI’s submission to the European Commission’s consultation on an AI White Paper. A key point is that excellence and trust can be mutually beneficial: “Trustworthy technology also contributes to the long-term competitiveness of the European AI sector. Accidents and misuse would risk undermining the trust necessary for this industry to flourish.” The paper goes on to outline concrete recommendations to improve the regulatory scope, types of requirements to address particular failure modes of AI applications, and more flexible AI governance.

Should-read: Wrongfully Accused by an Algorithm

By Kashmir Hill for NYT, “In what may be the first known case of its kind, a faulty facial recognition match led to a Michigan man’s arrest for a crime he did not commit.” “This is not me,” Robert Julian-Borchak Williams told investigators. “You think all Black men look alike?”

Should-read: Technology Quarterly January 2020

That January 2020 Economist article that sparked all this discussion comes from their “Technology Quarterly” section. There are six other articles in that section, all well worth reading. Still the gold standard in terms of no-nonsense, concise, punchy writing.

Should-listen: My webinar on Tech Buzz China podcast

T’was really fun to do a webinar/Q&A on Tech Buzz China, run by Rui Ma and Ying-Ying Lu. It’s a really efficient 30-min. distillation of my big-picture thinking about China’s AI landscape, structured around 10 points. I’m a little nervous in the beginning but it gets more “listenable” throughout.

Thank you for reading and engaging.

These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a PhD candidate in International Relations at the University of Oxford and a researcher at the Center for the Governance of AI at Oxford’s Future of Humanity Institute.

Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).

Any suggestions or feedback? Let me know at or on Twitter at @jjding99

ChinAI #99: Chinese Reactions to NIH probe results

Plus, momentum toward more DC think tank transparency?

Greetings from a land where the house don’t fall when the bones are good…

…as always, the archive of all past issues is here and please please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors).

Feature Translation: Reactions to NIH report on investigation of scientists’ foreign ties

This week’s feature translation bounces off a recent must-read CSET Media Reaction Brief, authored by Emily Weinstein and Dahlia Peterson, on Chinese Reactions to the U.S.’s proclamation that “forbids the entry of graduate students or researchers who have past or current affiliations with Chinese entities that ‘implement or support’ China’s military-civil fusion (军民融合; or MCF) strategy.”

It’s a really neat approach to looking at the responses of:

  1. the Chinese government (critical but relatively muted)

  2. Chinese experts (uncertain about which entities qualify)

  3. Study abroad consultancies (some state that Chinese students will go to EU, UK, and other countries instead of US; others say US will remain welcoming to Chinese students)

Let’s take a look at the reactions of a different group of people (relatively informed netizens) on a related subject: a recently released readout of the NIH investigation into undisclosed foreign ties by scientists.

CONTEXT: In a departure from the “never read the comments” world we live in, the comments section of a 6/13 zhishifenzi article illuminates some of the key throughlines of Chinese discourse on Chinese students in the U.S. and tech transfer. ChinAI #46 introduced zhishifenzi as a media platform dedicated to discussing the state of science in China, founded by three big-shot scholars: Rao Yi, Lu Bai, and Xie Yu. Rao Yi, an outspoken figure on these issues, was a faculty member at WashU and Northwestern before taking up the deanship of life sciences at Peking University. The article has been read 100k+ times, and a bunch of my Wechat friends favorited it.


  • The article itself was mainly a summary of a Science magazine article about NIH deputy director Lauer’s readout of the NIH investigation on June 12. There were a few framing changes, though: a) more emphasis on how only 4 percent of scientists were involved with IP transfer issues. “The publication of this report means that the researchers under investigation did not, as previously claimed by the US NIH and FBI, systematically transfer intellectual property rights to China or other countries,” writes yeshuisong; b) some quotes from Rao Yi pushing back on restrictions to scientific collaborations: “If there are competitions, the Olympic Games have shown us how to compete.”

  • Main throughlines from the comments: 1) a fair bit of techno-nationalist and scientific/tech zero-sum competition thinking (see: Jie, Wotainanle); 2) criticism of both the scientists for trying to play both sides AND the talent programs for being ineffective [this is a point rarely brought up in English-language discussions of the topic because it doesn’t fit a neat narrative]; 3) criticism of zhishifenzi for not covering the legal aspects of this issue in enough depth.

Here are my informal translations of the 8 most upvoted comments, as of Saturday June 21 (all screenshots in the full translation):

Jie says: “The scientific research competition between China and the United States cannot be turned into the Olympic Games because it, at its essence, is a competition for global markets and comprehensive national strength. This is an elimination game. The loser doesn’t even get the chance to hug the champion. They can only slide into decline. The Americans have already seen the essence of this, but unfortunately many Chinese are still doing wishful thinking.”

Qinghe says: “To be honest, just because China’s talent project funds have been declared, it doesn’t mean that there are many who are doing effective scientific research in China. Some people just set up this so-called ‘cooperation’ to get a sum of money from the Chinese government. Once the money gets into their hands, they just come back to China and hold a meeting/conference and complete the cooperation. I also hope that China can do more strict screening for their talent plans to attract overseas talents to return.”

Thisisyao says: Eating from both sides, and only making a pretence when in China. Please learn from Professor Rao Yi, either come back full-time, or don't come back.

QC: Regardless of who you are -- if you hide from your boss that you are doing another job, your boss won’t be happy.

该用户已胖(This user is already fat) says: “Regarding what Lauer said, ‘Going forward, China should make the Thousand Talents Plan more transparent, the U.S. should be more transparent about its investigations,’ -- this is no use. If you are out to condemn somebody, you can always trump up a charge [欲加之罪何患无辞]. Make preparations for the other side to completely tear off your face and fight a protracted battle.”

Zhanggongshuo says: “In American history, the Chinese are the only ethnic group that have been systematically excluded (from entering). And at that time, it had nothing to do with the (relationship between) the governments. For any group, if you do something (wrong), in the end all the Chinese will have to cover the damage. Since the end of the last century, this pattern has also been reflected in the relationship between the two countries.”

Zifeiyu says: “All along, I have liked the zhishifenzi (The Intellectual) Wechat public account, but I was disappointed by the content and topic of this article. I was thinking about switching the topic, for instance: analyzing the results of America’s investigation of the relevant scientists through a legal perspective. Isn’t that better?”

Wotainanle comments: “When scientific research meets…it’s hard to explain in a few words. America has already taken the knife against our throats, wanting to destroy us. Even if your comprehensive strength is not that of America’s, you still need to clench your teeth and stand firm, go all out to block this knife back, for once you admit defeat, they will take the knife and directly chop off your neck. Under these conditions, many Chinese people engage in wishful thinking and believe that as long as we admit defeat, America will let this knife go. Now we see this type of thinking is truly intolerable.”

Lastly, in the full translation, I threw up some screenshots of top Twitter comments that linked to the same Science magazine article — for a very unscientific comparison. Here’s one from Sen. Portman that gives a flavor of the general reaction. The different angles are interesting, with the Twitter reactions focusing so much on IP transfer and espionage, which wasn’t really a point of emphasis in most of the zhishifenzi comments.

FULL TRANSLATION: Reactions to NIH Investigation Results -- Latest report: 54 Professors Lose their Jobs in the US, mainly of Chinese ethnicity, and very few transfer IP

ChinAI (Four to Forward)

Should-read: SMIC bets on Shanghai listing — by Yuan Yang and Nian Liu for FT

Great overview of SMIC’s current situation. “Technology wise, SMIC is at least five years away from TSMC,” said Xu Tao, semiconductor analyst at Citic Securities. Last week’s ChinAI covered TSMC’s historical rise, and made similar comparisons with SMIC.

[Chart: revenues, government funding, and R&D expenses at China’s SMIC and Taiwan’s TSMC]

Yuan and Nian also capture a dilemma faced by SMIC: “It remains a question whether Chinese chipmakers such as SMIC will comply if the US Department of Commerce strictly enforces its ban on selling to Huawei. Doing so would not hit SMIC’s sales critically, but would seem to defeat the point of creating a domestic chip champion. But refusing to comply could see it cut from US technology, which is present in all stages of the chip supply chain. SMIC’s largest vulnerability in such a scenario would be its reliance on what are known as EDA tools, the software needed to design chips and turn them into customised sets of instructions for specific plants to carry out, said Velu Sinha, telecoms partner at Bain & Co in Shanghai.”

Should-read: Taiwan funding of think tanks: Omnipresent and rarely disclosed

Eli Clifton for Quincy Institute for Responsible Statecraft argues: “Hudson may be the most extreme in its policy proposals, but the consistent behavior from the five think tanks [Brookings, Center for American Progress, CNAS, CSIS, Hudson Institute] is unmistakable: General support funding from Taiwan’s government is never disclosed when experts, whose salaries may well be partially funded by TECRO dollars, offer policy recommendations regarding U.S.-Taiwan relations.”

Bonnie Glaser pushed back on the reporting re: CSIS.

Beyond this specific case, I think the American national interest would benefit from more think tank transparency, as Matt Schrader argues.

Per Transparify’s 2018 report: the only DC think tanks that cover China rated as highly transparent were Stimson Center, New America Foundation, and International Crisis Group (technically HQ-ed in Belgium but with a DC office).

Should-read: Politico China Watcher

Edited by David Wertime with contributions from an impressive team at Politico, this new weekly newsletter on US-China relations is really hitting the ground running. Densely packed, comprehensive in coverage, remarkable synthesis of diverse expert voices each week, and a section on “translating China” which looks at what Chinese social media is buzzing about.

Should-read: ChinaFile Conversation on Zoom’s Closures of US-based activist accounts

ChinaFile convened some folks to answer this question: What is the right way to ensure that companies following China’s laws don’t violate the rights of consumers using their products outside of China’s borders? My contribution to the conversation argues: “This focus on protecting people outside of mainland China from Chinese government censorship is too narrow for two reasons. First, it disregards the fact that the Chinese people are the main targets of Zoom-enabled censorship and surveillance. Second, it further reinforces a U.S. human rights approach that wavers according to geopolitical whims.”

Thank you for reading and engaging.

These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a PhD candidate in International Relations at the University of Oxford and a researcher at the Center for the Governance of AI at Oxford’s Future of Humanity Institute.

Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).

Any suggestions or feedback? Let me know at or on Twitter at @jjding99

ChinAI #98: Techlore - The Historical Rise of TSMC & Samsung in Semiconductors

Plus, the Trump admin's continued efforts to undermine US supply of AI talent

Greetings from a land where the residents ask “What Were Sports?”...


***I added some quick housekeeping edits to last week’s issue based on feedback from readers. The end of last week’s newsletter now has a “6.14.20 NOTE: This post was edited to adjust 1) a typo in GPT-3’s number of parameters, 2) the translated article’s title, and 3) a distinction between hyperparameters vs. parameters.” This will be standard operating procedure from now on. Since ChinAI issues are being cited in reports and academic writing, I want to maintain a more accountable system of editing. Thanks to Sören, Karson, and Max for pointing these out.

Feature Translation: TSMC at the head of history’s tide: two high walls (US & China) and one sharp knife (Samsung)

This week’s feature translation is a joint work with Joy Dantong Ma, who pitched this epic piece. My term for these types of pieces is “techlore”: longform pieces that read, at times, like epic poems in which the heroes (tech company leaders) wage battle over the commanding heights of the economy. “Development bloggers,” or the “Industrial Party,” usually people who have experience working in the tech industry and espouse techno-nationalist views, are emerging as a formidable force in Chinese media, and the semiconductor industry is especially fertile ground for techlore.

Previously, ChinAI has featured techlore articles by other development bloggers, such as Boss Dai and Saidong, on the history and prospects of the mainland’s semiconductor industry. This week, we turn our attention to an article edited by Boss Dai on the historical rise of TSMC, the dominant Taiwanese company in control of critical components of the semiconductor supply chain.

All credit goes to Joy on flagging this piece. We previously worked together on a ChinAI project at MacroPolo/Paulson Institute back in 2018, where she led digital product development. Nowadays she works as a data scientist in Chicago, while continuing to write her observations on the AI industry and its implications for bilateral relations. Below is our analysis and summary of this TSMC-inspired techlore:

***If you’ve been meaning to subscribe for a while but haven’t gotten around to it, please do so. These longform articles take a lot of effort. The full translation is 23 pages and 6000+ words in English, and I want to establish a norm in this space that translation work is valued and compensated fairly. So please subscribe here to help support and more generously compensate contributors like Joy***


  • Let’s start with just an incredible passage from the last part of the article on TSMC’s Nightingale Army, an initiative to keep pace with Samsung. It’s the perfect embodiment of this techlore style of writing: “The company assembled an unprecedented R&D team in the industry: the Nightingale Army - a team that worked at night. TSMC learned from Foxconn’s assembly lines and built a three-shift R&D department to ensure 24-hour uninterrupted R&D. Nightingale salaries were much higher than those of assembly workers or regular R&D personnel - a 30% increase in base salary and 50% in dividends. Attracted by the rewards, the Nightingale Army quickly gathered more than 400 people. Because staying up late harms the liver, the Nightingale model is also called ‘liver buster.’ A few sayings started to spread in Taiwan: ‘100,000 young people, 100,000 livers,’ and ‘the tougher the liver, the more money.’ In 2014, the total annual working hours of Taiwanese laborers was 2,135 hours, far exceeding the rest of the world. When Intel was defeated by TSMC’s technology in 2017, some Intel employees went to TSMC to figure out why, and the answer was: you snooze you lose; you’ve been sleeping too much for too long.”

  • Five-part structure of the full essay: 1) TSMC’s rise to its central position in the semiconductor industry; 2) Samsung’s challenge; 3) Apple’s support for TSMC against Samsung; 4) Samsung poaches a key TSMC talent and gains a technical advantage; 5) TSMC’s counterattack

  • Re: TSMC’s rise: Yes, TSMC benefited greatly from U.S.-educated/trained returnees (including founder Morris Chang, who was the No. 3 at Texas Instruments), U.S.-licensed technology, and large sales orders from U.S. chip companies, BUT ALSO the U.S. semiconductor industry benefited greatly from TSMC’s growth: many of today’s giants (Qualcomm, NVIDIA, Marvell, etc.) were startups that rode the wave of TSMC’s fast growth

  • The best example of this is the story of how Nvidia, then still a small company, called upon TSMC for an urgent order of wafers. This partnership helped Nvidia gain a foothold in the market, and its CEO Jensen Huang commemorated his initial call with Morris Chang in a comic (image in full translation)

  • Unlike the current two-player game/zero-sum thinking that pervades Washington, the dominance of TSMC and Samsung, a Taiwanese company and a Korean company, bears out the complicated, cross-cutting alliances in the semiconductor industry. Behind the TSMC-Samsung rivalry is an invisible force from across the Pacific: American IT giants such as Apple and Qualcomm. Their ever-shifting support for TSMC and Samsung decided the outcome of this ongoing battle on multiple occasions, maintaining a balance between the two and thus avoiding any technological hegemony.

  • It’s hard to summarize much more without just telling you to read the full translation. Here’s a sneak preview of some interesting details you’ll find: the difficulties TSMC faces in maintaining neutrality b/t US and China; the movement of key TSMC talent to the mainland; the story of how Samsung’s chairman Lee Kun-hee tried to poach TSMC founder Morris Chang for Samsung’s semiconductor division two years after TSMC was established; how 1987 revelations of Toshiba’s private sale of milling machines to the Soviet Union triggered a 301 investigation into Japanese semiconductors (a key opportunity for Samsung); a lot of intrigue re: talent poaching and key technical breakthroughs

FULL TRANSLATION: TSMC at the head of history’s tide: two high walls and one sharp knife

ChinAI Links (Four to Forward)

Must-read: MacroPolo’s Global AI Talent Tracker

Great to see Matt Sheehan and Ishan Banerjee’s really essential work on the global flows of AI research talent, based on papers at NeurIPS 2019 (the premier AI/ML conference). For its article on the release of this tracker, the NYT graphics team produced the following image, which summarizes the findings well:


This is what I told Paul Mozur in his reporting for this story:

Should-read: Petra Moser’s (Prof of Econ at NYU) thread on a forthcoming Trump XO

The thread goes through some of her research, an impressive blend of econometrics and economic history, which shows that keeping out foreign scientists and students just hurts the overall U.S. innovation system.

Should-read: Jay Kang on Tou Thao, the Hmong cop who stood by, and the Myths of Asian American Solidarity

I first started reading Jay’s work when he was at Grantland (RIP) and have been reading it ever since. For The Time to Say Goodbye newsletter, his distinctive take on Asian-Americans “calling out” anti-blackness in our communities: “It shouldn’t surprise anyone that these declarations almost always come from elite-educated, upwardly mobile East Asians, and they’re almost always directed at poorer, or, at the very least, less genteel immigrants, whether nail salon workers, beauty shop owners, or, in this case, a Hmong-American policeman. There is almost no overlap between these groups. They might each have representatives at a summit or panel discussion in an academic setting, but Hmongs and other poorer Asian groups really only become ‘Asian American’ when they fuck up and do something racist, or when they unexpectedly do something that falls in line with the sort of elite multiculturalism promoted by the professional ‘Asian-Americans.’”

Should-read: ASPI Report on the CCP’s United Front system

By Alex Joske, this detailed report examines the united front (UF) system, a network of Chinese party and govt agencies that is increasingly trying to influence diaspora communities, MNCs, and foreign political parties. The technology transfer section is a little light (understandable, as ASPI will have a forthcoming report on this subject), and I think we need to have more debates about the potential chilling effects of overemphasizing UF-related risks vs. more vigilance re: UF activities. But this debate can’t happen without understanding the UF in more detail, which makes Alex’s report essential reading.


ChinAI #97: Reactions to OpenAI's GPT-3

Breakdown of ACL Publications and Trends

Greetings from a land where yellow peril supports black power


Feature Translation #1: Reactions to GPT-3

Context: xinzhiyuan (AI era), a media portal that focuses on AI, collected some reactions from Chinese netizens to GPT-3, OpenAI’s latest language model, which has 175 billion parameters (GPT-2 had 1.5 billion). The piece takes a lighter approach in terms of reactions, so we’ll unpack some fun memes, but it also features some interesting quotes from the Zhihu thread on GPT-3.

Key Takeaways:

  • Zhihu user Li Ru summarized the advantages of GPT-3 over BERT (Google’s language model): BERT’s task-specific fine-tuning relies too heavily on labeled data and is prone to overfitting, while GPT-3 requires only a small number of labeled examples and no fine-tuning.

  • Earlier iterations of GPT and GPT-2 lagged BERT in terms of natural language understanding; GPT-3 was better at a few reading comprehension benchmarks but still lagged the fine-tuned BERT/state-of-the-art models in contextual vocab analysis and answering middle school/high school exam questions

  • Some light-hearted poking fun at OpenAI from netizens: the Zhihu thread (Chinese Quora) on GPT-3 was tagged as 炫富 (wealth-flaunting/show-off-y). Compute used was 2000x that of BERT, and Zhihu netizen “Jsgfery” pointed out that there was a bug in a filtering component of OpenAI’s training process, but due to the cost they couldn’t retrain. In the words of Jsgfery: “The landlord doesn’t have surplus grain to let you train the model again” (地主家也没有余粮再训练一次了).

  • This article was the first time I came across the slang term “调参侠” (“hyperparameter-tuning knights”) used to refer to ML engineers. The joke is that all AI engineers do is tune hyperparameters, so now that models like GPT-3 suggest such tuning may not be necessary, these parameter-tuning knights have nothing left to do.

  • Some serious stuff in the piece as well: It reflects on how OpenAI did not release GPT-2 and states that “many people agree with the prudent approach of OpenAI.”
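The fine-tuning-vs-prompting contrast in Li Ru’s comparison above can be sketched in a few lines. This is a hypothetical illustration (the helper function, task, and example texts are mine, not from the article or OpenAI’s API): BERT-style usage updates model weights on a labeled dataset per task, while GPT-3-style “in-context learning” just packs a handful of labeled examples into the prompt.

```python
def build_few_shot_prompt(task_description, examples, query):
    """Assemble a GPT-3-style few-shot prompt: a task description,
    a handful of labeled examples, then the unlabeled query."""
    lines = [task_description, ""]
    for text, label in examples:
        lines.append(f"Input: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Input: {query}")
    lines.append("Label:")  # the model continues from here; no weights are updated
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each input as positive or negative.",
    [("The film was wonderful.", "positive"),
     ("A tedious, joyless slog.", "negative")],
    "I couldn't stop smiling.",
)
```

The BERT-style alternative would instead need thousands of such labeled pairs and a per-task fine-tuning run, which is where the overfitting risk Li Ru flags comes from.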


Feature Translation #2: Breakdown of the ACL 2020 Accepted Papers

Context: Another xinzhiyuan (AI era) piece, this one on stats and trends from the papers accepted to ACL 2020 — the Association for Computational Linguistics conference, a premier venue for publishing NLP research.

Key Takeaways:

  • Growth of this subfield in recent years is just remarkable: This year’s 3429 paper submissions was an increase of 523 over the previous year; overall, the number of paper submissions more than doubled in the past two years.

  • Authors at Chinese institutions submitted the most papers and had the second-most accepted papers, but their acceptance rate fell outside the top 10 among countries; the US had the most accepted papers and a very high acceptance rate. Stats for the top three countries by accepted papers follow:

  • This is from the original article’s breakdown of author affiliations by country (all countries are in the original). Some caveats: the methodology wasn’t that clear, so I assume they scraped affiliations rather than manually identifying all authors, which means there could be some discrepancies from scraping.

  • The last half of the article summarizes ACL 2010-2020 research trends, per Professor Che Wanxiang (of Harbin Institute of Technology):

    • Stark rise in publications on human-machine dialogue starting in 2016 (virtual assistants eating the Internet)

    • Other subjects that have risen in popularity: new tasks and resources for challenging AI systems, Q&A systems, and text generation

    • Very surprising to me: ACL publications on machine translation have actually declined since 2013; Che’s explanation is that these publications have been taken over by Transformer models, which can be applied to a variety of NLP tasks (including translation) and can often outperform dedicated neural machine translation models on specific tasks.
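On the scraping caveat above: here is a minimal, hypothetical sketch of how an affiliation-by-country tally like the article’s might be produced. The keyword map, helper function, and affiliation strings are invented for illustration; this is my assumption about the methodology, not the article’s actual code.

```python
from collections import Counter

# Hypothetical keyword -> country map; a real pipeline would need a proper
# institution database covering thousands of affiliation strings.
COUNTRY_KEYWORDS = {
    "Tsinghua": "China",
    "Peking University": "China",
    "Stanford": "USA",
    "Carnegie Mellon": "USA",
    "Edinburgh": "UK",
}

def count_countries(affiliations):
    """Tally papers per country by naive substring matching on affiliation strings."""
    counts = Counter()
    for aff in affiliations:
        for keyword, country in COUNTRY_KEYWORDS.items():
            if keyword in aff:
                counts[country] += 1
                break
        else:
            # Unmatched strings are one source of the scraping discrepancies noted above
            counts["unknown"] += 1
    return counts
```

Naive matching like this is exactly where discrepancies creep in: ambiguous, multilingual, or multi-institution affiliation strings end up miscounted or dumped in the unknown bucket.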

***READ FULL(ish) TRANSLATION: ACL Breakdown *** I mostly threw up the images from the slides and ACL stats and tried to add some translations/annotations. Just comment in the Google doc if you have questions about anything in particular.

ChinAI Links (Four to Forward)

Must-read: Overcoming Barriers to Cross-cultural Cooperation in AI Ethics and Governance

A group of researchers from the Cambridge Centre for the Study of Existential Risk and the Beijing Academy of Artificial Intelligence published a paper on cross-cultural cooperation in AI ethics and governance in Philosophy and Technology.

In the paper, they argue that international cooperation will be essential for ensuring the global benefits of AI, and discuss some of the barriers to such cooperation and how they might be overcome. They particularly emphasize the important role that academia may play in cross-cultural cooperation, since academics can often have conversations or engage in collaborations that would be difficult in government or industry, and they make a number of practical recommendations for academics, including translating key papers and organizing research exchanges. I think it’s a really nice counterbalance to the increasing number of articles and reports framing these issues adversarially (especially with respect to US-China tech competition).

The full paper is here, and a short blog post describing the key points is available here.

Should-read: AI Definitions Affect Policymaking

How many think tanks in the world have the capabilities to do this: “CSET developed a functional AI definition using SciBERT—a recent neural network-based technique for natural language processing trained on scientific literature (p. 14).” The problem they try to solve: How you define “AI” will significantly affect any claims you make about AI governance, politics, and policymaking. I’ve called this the AI abstraction problem in previous writing.

Should-read: The Innovation Gap by NESTA (2006)

A blast from the past but still relevant, especially to the current US nat sec community’s obsession with achieving tech dominance over China, which I have tastefully analogized to a glorified dick-measuring contest. NESTA convened some of the smartest thinkers on S&T policy (including many from the University of Sussex Policy Research Unit, which is just an incredible hub for brilliant thinkers on this), and tried to develop a more comprehensive innovation strategy for the UK. Coolest part here is that they unpack how traditional innovation indicators (patents, scientific papers) miss “hidden innovation” which doesn’t show up in these statistics. Does anybody know if the U.S. has undertaken a similar effort? Folks beyond my pay-grade should seriously consider just replicating this for the US context.

Should-read: Blitzkrieg, the Revolution in Military Affairs and Defense Intellectuals

Rolf Hobson for Journal of Strategic Studies in 2010:

“It is hard to explain why the defense intellectual milieu has received so little academic attention when it plays so obvious a role in forming American strategic culture and exerts an undeniable influence over both foreign and domestic policy. It must represent one of the most powerful, unelected groups within the American polity, and the purposes served by its research should presumably be the subject of public interest and scrutiny. In any other field, it is assumed – or suspected – that the products of research institutions are influenced by funding, political affiliations, institutional rivalry and individual career structures. In this one, however, their impact on theory can only be guessed at. One insider has compared the quick succession of High Concepts in American strategic debate to the workings of the fashion industry. If that is a valid comparison, it is also justified to ask what mechanisms exist within the industry to weed out the bad theory that fads and market forces will inevitably produce.”

  • The U.S. needs more of these mechanisms. Gatekeep the gatekeepers. Be very critical and skeptical of “High Concepts”: e.g., Decoupling!, New Tech Cold War, “China Reckoning,” etc.


6.14.20 NOTE: This post was edited to adjust 1) a typo in GPT-3’s number of parameters, 2) the translated article’s title, and 3) a distinction between hyperparameters vs. parameters.
