ChinAI #148: The AI Wolf Refuses to Play the Game
A Misspecified AI System Goes Viral on the Chinese Web
Greetings from a world where…
homeland elegies sometimes read truer than hillbilly elegies
…Please please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors). As always, the searchable archive of all past issues is here.
Feature Translation: The AI Wolf that refuses to play the game goes viral
Context: Back in a March issue of ChinAI, as one of the recommended Four to Forward links, I flagged an interesting example of misspecified AI systems that had gone viral on the Chinese web. Researchers had set up a wolf vs. sheep game. Instead of trying to eat as many sheep as possible, which was the intent of the game, after many rounds of training, the wolf learned to run into a rock and ending its life. This week, we translate the xinzhiyuan (AI Era) article on the topic.
How the game works:
Reward for wolves catching a sheep: +10
Penalty for running into a rock: -1
Time penalty for each second of delay in catching sheep: -.1
What happened: As one of the game designers, Sdust, describes, the researchers started by training the algorithm (20,000 iterations) to guide the movements of the wolf. However, they discovered that the wolves were getting worse and worse at catching the sheep. In those initial training sessions, the AI wolf had learned that it would lose less points for running into the boulder than if it chased the sheep and didn’t catch them (due to the time penalties).
Netizen reactions: The article collects many Weibo posts in which people saw their burnout reflected in the AI wolf. As one netizen expressed: “This (example of) AI training tells you why so many young people are no longer willing to work hard anymore.”
Another person wrote:
“The wolves are temporary workers…what they lose every second is their youth and time. The sheep are the never attainable ‘promotions, bonuses, marrying someone fair-skinned, rich, and attractive, reaching the pinnacle of life,’ running into the rock is the workers who choose to be lazy and tangping.”
*tangping (躺平) = Internet buzzword that refers to the choice of young people choosing to stop working overtime and rebel against overly competitive society.
Others questioned how the designers set up the reward mechanisms. In a Billibilli video, Sdust stated that the main issue was that the number of iterations in training was too small. They started seeing improvements in the system once they increased the number of training iterations for the neural network. Eventually, the wolf did learn to catch the sheep.
Lastly, this example sparked a lots of discussion about AI safety. This Zhihu thread, for instance, collects 28 examples of AI systems behaving in unpredictable ways, including the following experiment with simulating digital organisms:
As part of a project studying the evolution of (simulated) organisms, computer scientist Charles Ofria wanted to limit the replication rate of a digital organism. So, he programmed the system to pause after each mutation, measure the mutant’s replication rate in an isolated test environment, and delete the mutant if it replicated faster than its parent. However, the organisms evolved to recognize when they were in the test environment and “play dead” (pause replication) so they would not be eliminated and instead be kept in the population where they could continue to replicate outside the test environment. Once he discovered this, Ofria randomized the inputs of the test environment so that it couldn’t be so easily detected, but the organisms evolved a new strategy, to probabilistically perform tasks that would accelerate their replication, thus slipping through the test environment some percentage of the time and continuing to accelerate their replication thereafter.
This comes from Luke Muehlhauser’s blog post about worries of “treacherous turns” by AI systems, where I first saw the Zhihu thread linked.
Over the past couple years, I’ve gotten a lot of questions that can essentially be boiled down to: Do Chinese people talk about technical AI safety issues? Now, there’s an easy response: Yes, and moreover, it’s gone viral.
***Much thanks to Zixian Ma for help with translating some of the tricky technical sections in the FULL TRANSLATION: The AI Wolf that Refuses to Play the Game Goes Viral
ChinAI Links (Four to Forward)
Must-read: Putting the China Initiative on Trial
Current FBI director Christopher Wray — who, by the way, is President “Restore the Soul of America” Biden’s continuing choice to lead the FBI — once said Chinese espionage poses the “greatest long-term threat” to the future of the U.S.
After reading this detailed report by Karen Hao and Eileen Guo for MIT Tech Review, it’s hard to come to any other conclusion than this one: the FBI’s China Initiative poses the greater long-term threat to the future of the U.S. than Chinese espionage. Hao and Guo write about Anming Hu’s case:
Observers say the details of the case echo those of others brought as part of the China Initiative: a spy probe on an ethnically Chinese researcher is opened with little evidence, and the charges are later changed when no sign of economic espionage can be found.
According to German, the former FBI agent, this is due to the pressure “on FBI agents across the country, every FBI field office, [and] every US Attorney’s office to develop cases to fit the framing, because they have to prove statistical accomplishments.”
Should-read: A Global Smart-City Competition Highlights China’s Rise in AI
Good coverage by Khari Johnson, for Wired, of an international AI City Challenge, where“Chinese tech giants Alibaba and Baidu swept the AI City Challenge, beating competitors from nearly 40 nations. Chinese companies or universities took first and second place in all five categories. TikTok creator ByteDance took second place in a competition to identify car accidents or stalled vehicles from freeway videofeeds.”
Should-read: Facial recognition tech has been widely used across the US government for years, a new report shows
Rachel Metz, for CNN Business, distills key findings from a recent report by the U.S. Government Accountability Office on the use of facial recognition systems in federal agencies. “At least 20 federal agencies used or owned facial-recognition software between January 2015 and March 2020. . . In addition to being used to monitorcivil unrest following Floyd's death, the report indicated that three agencies used the technology to track down rioters who participated in the attack on the US Capitol in January.”
Should-read: Good thread on WeChatization
Zichen Wang provides a good overview about how Chinese public discourse is increasingly going “firstly if not exclusively” to the WeChat part of the internet. Relates to what I wrote about in a previous issue (ChinAI #92) re: a simple test to tell if someone is actually informed on China’s __insert topic X here___. Ask them to list at least five Wechat accounts on topic X that they follow regularly.
Thank you for reading and engaging.
These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a PhD candidate in International Relations at the University of Oxford and a Predoctoral Fellow at Stanford’s Center for International Security and Cooperation, sponsored by Stanford’s Institute for Human-Centered Artificial Intelligence.
Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).
Any suggestions or feedback? Let me know at chinainewsletter@gmail.com or on Twitter at @jjding99