Greetings from a world where…
“The opposite of war isn’t peace, it’s creation.”
…Please please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors). As always, the searchable archive of all past issues is here.
Feature Translation: People Sit, AI Watches
I’m really excited to have a special guest editor for this issue of ChinAI: Shazeda Ahmed, a PhD candidate at Berkeley and a Visiting Scholar at the AI Now Institute who recently co-published a report on China’s emotion recognition market with Vidushi Marda, senior program officer at Article 19. It’s a must-read report with a really rigorous methodology — systematic lit review of emotion recognition in two Chinese academic databases, combined with in-depth research into twenty-seven Chinese tech companies that work on emotion recognition. I’ll put it this way: To anyone doing future reporting or research on emotion recognition in China, if I can tell from a quick scan that you haven’t read this report, I’m closing the tab.
Sourced from her report, this week’s feature translation (“People Sit, AI Watches”) is a 2019 Qbit AI report on netizen backlash against emotion recognition applications in schools, centered on a leaked image of an interface from Chinese AI unicorn Megvii that tracks student expressions and concentration levels in classrooms.
I’m turning the rest of the issue over to Shazeda. What follows is her analysis, plus her “Four to Forward” recommendations!
This article is one of many illuminating Chinese sources I found while researching the recent report I co-authored with human rights lawyer Vidushi Marda at Article 19, “Emotional Entanglement: China’s emotion recognition market and its implications for human rights.” The report illustrates how Chinese academic researchers, tech companies, and government actors have worked together to develop and implement technologies that purport to detect emotions based largely on facial expressions, and in some cases on other physiological data like facial blood flow, heart rate, vocal tone, and body movement. Drawing on literature that demonstrates how inaccurate, biased, and culturally reductive methods of determining people’s emotional states are being used in these technologies globally, we argue that their development and sale should be banned everywhere.
“People Sit, AI Watches” reflects some of the diversity of opinion around emotion and behavior recognition technologies in China that our report also uncovers. I appreciate that the article’s author highlights how privacy concerns are becoming prevalent in debates around new technologies in China (including the backlash against Zao, a deepfake face-swapping app that went viral there in 2019). Yet there are still questions and critiques I wish I had encountered while doing the research. For one, beyond the “creepiness” of the behavior and gesture recognition cameras described in the article, my coauthor and I found ourselves asking, “What if a student is looking up a term they don’t know on their phone (rather than ‘playing with’ their phone)? Or, conversely, writing notes to each other rather than taking notes on a lecture?” Moreover, a picture of the ‘ideal user’ of these technologies began to emerge, prompting us to wonder about the people who don’t fit that model: “How will this work on students with disabilities? What about students with ADHD?”
The translated article mentions Brazil and Sweden as countries that have trialled other technologies for surveilling and assessing students. In our report, we saw the same dynamics at play in China and abroad: educational technology (‘edtech’) companies would in some instances market an emotion recognition product as solving a student safety problem, and in other instances would claim the same product serves allegedly educational goals, “compressing the space for slacking off,” as the article phrases it. The article’s critique that some of these edtech tools aim “not to cultivate people, but to cultivate graduation rates” echoes one I’ve also seen in the United States with regard to platforms that algorithmically nudge college students toward majoring in subjects in which they can earn passing grades, rather than those that appeal to them.
All of these cases left us with more questions than answers: What counts as a ‘good result’ of a pilot test, and who decides? Are students asked for consent? How would they opt out? Researchers have developed guides to the questions schools should ask before acquiring new tech, foregrounding concerns about the racial and socioeconomic inequalities that stratify schools. The response to these technologies can’t come only from civil society and social scientists, however; it also has to come from within the fields where they are being developed. We found one example in which a large number of technical researchers signed a petition letter that blocked the publication of an article claiming to infer criminality from faces. My hope is that with reports like mine and Vidushi’s, we can urge researchers who develop emotion and behavior recognition systems like the ones in this article and our report to think through the potential harms these technologies might cause if used in the settings they propose, and to cease their production.
FULL TRANSLATION: PEOPLE SIT, AI WATCHES
ChinAI Links (Four to Forward)
For critical scholarship on emotion recognition tech, Luke Stark’s body of work and his most recent paper, “The Ethics of Emotion in Artificial Intelligence Systems,” are must-reads.
Taking one step back from assumptions about emotions as discrete, measurable phenomena, there’s rich academic literature drawing upon broader concepts of affect and affective labor in the Chinese context. Three sources for thinking about affect, technology, and China:
Angela Xiao Wu’s paper “Chinese Computing and Computing China as Global Knowledge Production” builds from a description of how a tech firm provided Chinese authorities with textual sentiment analysis tools to monitor online opinions about the pandemic, and asks crucial questions about how big data research employing similar methods “yield specters of China that are untethered to the lived realities of those whose data are taken.” I live-texted most of this paper to Vidushi while reading it, in excitement to see this argument out in the world!
I'm making my way through Silvia Lindtner’s Prototype Nation: China and the Contested Promise of Innovation, an ethnography that I can see becoming a classic in so many fields where people are studying China’s tech landscape. Her arguments about the “happiness labor” women conduct in sustaining start-up incubators have been especially helpful to me in thinking about the social context of why ‘emotion recognition’ technologies might be used as a form of workplace surveillance.
Finally, Xiao Liu’s Information Fantasies: Precarious Mediation in Postsocialist China taught me a lot about the history of cybernetics and AI in China, while skillfully drawing upon Chinese science fiction literature and films to talk about affect and the ways people interact with information.
Thank you for reading and engaging.
These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a PhD candidate in international relations at the University of Oxford, where he’s part of the research team at the Centre for the Governance of AI. He’s also a pre-doctoral fellow at Stanford's Center for International Security and Cooperation.
Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).
Any suggestions or feedback? Let me know at chinainewsletter@gmail.com or on Twitter at @jjding99