ChinAI #76: Meta-Critique of Chinese Academic Papers in the Field of AI and Education

Plus, Is BIGness in Tech Necessary for Competing with China?

Welcome to the ChinAI Newsletter!

Quick correction on last week’s list of top ten queried items for waste sorting:

1. dog poop wrapped in a napkin
2. dog poop wrapped in a sack
3. cat poop wrapped in a sack
4. dog poop wrapped in a newspaper
5. condoms
6. takeaway bags
7. bubble tea cups
8. wet wipes (h/t to Ryan Soh, ChinAI contributor, for correcting his own correction — we had 湿厕纸 translated as soiled toilet paper in last week’s issue)
9. crayfish (the only item labeled as wet waste)
10. nose booger wrapped in a napkin

As always, the archive of all past issues is here and please please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors).

Feature Translation: A Research Outline for AI in China’s Education Field

This week’s translation and analysis are brought to you by Kwan Yee Ng, a researcher at the Center for Long-term Priorities. Written by a group of academics and researchers (including two professors at Huazhong Normal University AKA Central China Normal University), the piece is a meta-analysis of 176 Chinese academic papers on AI in education published from 2017 onwards and a critique of the quality of this output. Analysis by Kwan follows:

The increasing volume of Chinese papers on AI has been held up by some as another piece of evidence that China is set to lead the world in AI (see here, here, and here). This article should serve to unpack this trend for further scrutiny. While the article explicitly pertains to Chinese research on the intersection of AI and education, as someone who's been doing their fair share of trawling Chinese academic platforms to research Artificial General Intelligence (AGI), I think the findings also apply to Chinese academic papers on AI more generally. Some takeaways:

  • A dearth of references and a high frequency of cross-citations: as per the article, there is a “network of cross-referential research” and “in 2017 an all-round flourishing [in the volume of publications] began. It is important to point out that between 2016 and 2017, the volume of publications increased sharply but the number of references dropped significantly. This is worthy of vigilance because the relative lack of references means that the research is less objective and scientific.”

  • A lot of papers read more like op-eds and do not suggest an informed understanding of AI technology: the article laments "irregular academic terminology" and finds that "from the perspective of the whole, there is more qualitative research and less quantitative research, more normative research and less empirical research."

  • A lack of breadth in case studies, among other things, indicates the strength of influence these cases have on the collective memory: “Most examples or quotes are limited to a few "star" cases, such as AlphaGo, autonomous driving, IBM Watson, Google Brain cat face recognition, and ImageNet image recognition competitions.”

Dig Deeper: The two lead authors, Liu Kai and Hu Xiangen, are notable proponents of NARS, the Non-Axiomatic Reasoning System, an AGI model theorised by Wang Pei. More on NARS here, along with Wang’s own description of the system.

FULL TRANSLATION: A Research Outline for AI in China’s Education Field

ChinAI Links (Four to Forward)

Should Read: China Due to Introduce Face Scans for Real-Name ID when Registering a New Mobile Contract

The Ministry of Industry and Information Technology announced regulations in September requiring telecom operators to verify people’s real-name ID with facial recognition when they get a new SIM card for their phone. I provided some comments on how there has been more pushback against China’s widespread adoption of facial recognition technology. Next week’s translation will highlight some of this pushback.

Should Read: Automation Impacts on China’s Polarized Job Market

arXiv preprint by Chen et al. (researchers from the Commonwealth Scientific and Industrial Research Organisation, Monash University, Sun Yat-sen University, and the MIT Media Lab) — h/t to Remco Zwetsloot for sharing this.

Summary: "China’s top-down, centrally planned specialization of cities makes large Chinese cities less resilient to impact from automation technologies," as compared to large U.S. cities, which are more resilient to the impacts of automation because of a more diversified job market. Related analysis by Frey et al. (2016) of the Oxford Martin School found that 47% of U.S. jobs were at risk of computerization, compared to 77% in China. The authors state that well-known large cities in China, such as Beijing, Shanghai, Guangzhou, and Shenzhen, exhibit resilience to automation technologies, whereas other large “specialty cities” such as Nanyang, which specializes in farming, are more susceptible to the impacts of automation.

I think it’s a useful line of inquiry, but I’m not convinced by their method. On a quick scan of the paper, I didn’t find how they proxied city size. The authors claim that Nanyang (a prefecture-level city in Henan province) is “the fifth largest city in China,” but that doesn’t square with existing lists of largest cities. I do think there’s a huge gap in our knowledge about cities like Nanyang and Zhumadian compared to the more well-known large cities of China.

Two Links on the We Can’t Break Up Big Tech Companies Because We Need to Compete with China Narrative:

This narrative got very high-profile coverage when Zuckerberg’s leaked notes for his Congressional testimony mentioned it as a key talking point. Zuckerberg also repeated this argument in October’s hearings about Facebook’s digital currency project. Two well-argued, opposing views on the issue follow, looking at Facebook and Qualcomm:

  • Two researchers and social entrepreneurs argue that breaking up Facebook could open paths for Chinese-led alternatives that are more prone to surveillance, with a focus on the situation in Myanmar.

  • The Qualcomm antitrust case also features the Too Big and Necessary to Compete with China narrative. Matt Stoller breaks down why more robust antitrust may actually help U.S. companies compete better.

Thank you for reading and engaging.

These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a Rhodes Scholar at Oxford, PhD candidate in International Relations, Researcher at GovAI/Future of Humanity Institute, and Research Fellow at the Center for Security and Emerging Technology.

Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).

Any suggestions or feedback? Let me know at or on Twitter at @jjding99