Greetings from a world where…
experimenting with the King’s Indian Defense is not going well
…As always, the searchable archive of all past issues is here. Please please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors).
Feature Translation: One month after the implementation of new regulations on algorithms, how many personalized recommendation services still intrude on personal information
Context: In March, China passed stringent regulations on personalized recommendation algorithms. This article, published in April 2022, conducts a one-month check-in on company responses to the regulations, as well as future policy directions in this domain. A previous Around the Horn issue flagged this article by Compute Think Tank (算力智库), a public account that publishes original articles on algorithmic regulation.
Key Takeaways: The article frames an investigative report on algorithmic constraints imposed on delivery workers, published by People (人物) magazine in 2020, as a tipping point for public pressure re: algorithmic regulation.
Here’s how the article starts: “‘Delivery Drivers, Trapped in the System.’ A year ago, an investigative report from People (人物) magazine swept across the entire internet, tearing open the cold truths behind the algorithms.”
I translated this investigative report in ChinAI #111. The full translation (15,000 words and 37 pages) is available in this Google Doc.
Since the implementation of the regulations, here’s what some companies have done:
WeChat, Meituan, Bilibili, Douyin, Taobao, Weibo, and Toutiao have added buttons for users to turn off personalized recommendation functions.
Some apps now let users view the specific personal info collected by the app as well as the frequency of collection. A few provide an option for users to clear previous activities on the app with one click.
The regulations also require companies to increase transparency about how their recommendation algorithms work. Meituan, one of the companies called out by that People exposé on food deliveries, now provides more information about how they calculate “estimated arrival time” for deliveries and builds in more time for delivery workers.
The past of algorithmic regulation in China provides a roadmap for the future: before the March regulations were adopted, recommendation algorithms were regulated through scattered provisions in the “Personal Information Protection Law (PIPL)” and the “Online Data Security Management Regulations (Draft for Comment),” which was issued in November last year.
Take the PIPL, for instance. The PIPL already requires an option for users to turn off personalized recommendations, so the March regulations “mainly refine the requirements and legal responsibilities.”
More intriguing is the draft regulation for online data security management. Article 49 of that document contained some very strict provisions. For instance: “if Internet platform operators use personal information and personalized push algorithms to provide information to users, they should be responsible for the authenticity, accuracy, and legality of the source of the information pushed.” It also proposed a clear opt-in approach for personalized recommendations (user must independently give consent beforehand rather than opt-out of the default personalization settings).
This article’s author states, “Article 49 reflects the latest policy trends in China in terms of personalized push, and these are also the most stringent requirements. Enterprises can use it as a reference for compliance.”
Noting that these types of strict clauses will arouse a lot of debate if they are implemented, the article concludes on a perhaps overly rosy note: “However, the author believes that as long as the data collection is transparent and standardized enough, and the content is high-quality and rich enough, users will voluntarily turn on personalized recommendations.”
FULL TRANSLATION: One month after the implementation of new regulations on algorithms, how many personalized recommendation services still intrude on personal information
ChinAI Links (Four to Forward)
Must-read: Mapping U.S.-China Technology Decoupling and Dependence
I’ve ranted a fair amount in the past about the lack of clear measures re: DECOUPLING! The team at Stanford Center on China’s Economy and Institutions (SCCEI) has put together an informative brief that maps U.S.-China tech decoupling based on patent citation patterns. Some key findings: “Newer technology fields generally exhibit both more decoupling and a steeper drop in China’s dependence on the U.S….However, greater decoupling in a field is followed by more dependence of China on the U.S. after a few years, suggesting a tension between China’s desire and its ability to progress independently.” This brief is based on a forthcoming Columbia Business School research paper. SCCEI’s China briefs will be released twice a month, so look out for more of these in the future.
Should-read: Poll: Distrust of Asian Americans is rising
For Axios, Shawna Chen and Hope King report out the troubling results of a recent survey: 33% of Americans say they believe “Asian Americans are more loyal to their country of origin than to the United States.” This is up from 20% last year. The survey is based on the Social Tracking of Asian Americans in the U.S. (STAATUS) Index (see full report here).
Should-watch: Georgetown Univ. Initiative for U.S.-China Dialogue on Global Issues
Last Friday, I talked about that survey result at the very end of this dialogue on Chinese Artificial Intelligence and the Future of Technology and Trade, sponsored by a Georgetown University organization. It was great to learn from Matt Sheehan, Helen Toner, Emily Weinstein, and Tim Hwang, who were also on the panel.
Should-read: The U.S. and China Need Ground Rules for AI Dangers
In Foreign Policy, Ryan Fedasiuk, a fellow at the Center for a New American Security, makes a strong case for three measures that the U.S. and China should cooperate on re: AI dangers: “enforcing rigorous testing processes for their respective military AI systems, formalizing a written channel for crisis communication, and refusing to integrate AI with nuclear command and control systems.” Cool to see that part of this analysis was based on his discussions with retired Chinese military leaders about AI risks.
Thank you for reading and engaging.
These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a postdoctoral fellow at Stanford's Center for International Security and Cooperation, sponsored by Stanford's Institute for Human-Centered Artificial Intelligence.
Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).
Any suggestions or feedback? Let me know at chinainewsletter@gmail.com or on Twitter at @jjding99