ChinAI #144: Artificial Challenged Intelligence [人工智障]

Plus, my first journal article: The Logic of Strategic Assets

Greetings from a world where…

we’re all voting 5x a day for Shohei Ohtani to be in the MLB All-Star game, right?

…Please please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors). As always, the searchable archive of all past issues is here.

Reflections: Working Through Bob Work’s Thoughts on how Chinese People Think About AI

Back in April, an Atlantic Council event featured a conversation with Bob Work, former Deputy Secretary of Defense and Vice Chair of the National Security Commission on Artificial Intelligence. When discussing the U.S.’s strategic competition with China, here’s how he described the Chinese public’s views on AI (about 10 minutes into the video):

"If you go to China and talk about AI, and I haven't been to China myself, but everyone who goes talks about the optimism Chinese citizens have about an AI-enabled future. It’s not necessarily the case in the United States..I think we've been conditioned by our movies and our TVs and our books...[leading to] a more skeptical and possibly fearful view of an AI-enabled future.”

Setting aside the . . . interesting . . . method by which Work reached his conclusion, there is some evidence that Chinese people are exceptionally optimistic about AI. Based on survey data from 142 countries, including 3,700+ face-to-face interviews in China, the 2019 World Risk Poll concluded:

“Enthusiasm and optimism around the potential of AI in decision making runs highest in China, where only a small proportion of respondents believe that the development of intelligent machines or robots that can think and make decisions in the next twenty years will mostly cause harm (9%).”

Here’s the specific question and breakdown of the responses for the U.S. and China:

At first glance, there’s a clear “enthusiasm gap” between the American and Chinese publics. A deeper dive into the data, however, uncovers two caveats. How seriously one takes these caveats may depend on how much one actually cares about what people in China think about certain issues.

  1. First, as outlined in the World Risk Poll methodology appendix, Xinjiang and Tibet were excluded from the China sample. That’s about 5 percent of the population.

  2. Second, as highlighted in the table above, the China sample includes a very high proportion of “don’t know” responses, whereas the US sample includes a shockingly low proportion. As previous research has shown, cross-country cultural variation in willingness to answer “don’t know” can make it hard to draw strong conclusions from cross-national public opinion surveys.

Moreover, other survey results further call this supposed enthusiasm gap into question. One Gallup/Northeastern University poll, for instance, found that Americans are quite optimistic about AI: 77 percent are “mostly positive” or “very positive” about the impact AI will have on the way people work and live in the next 10 years.

Consider, as well, a cross-national survey on public perceptions of facial recognition in China, Germany, the UK, and the US, conducted by Kostka et al. It, too, paints a more complicated picture of how the Chinese public thinks about AI relative to the rest of the world: Chinese support for facial recognition use by private enterprises (17%) was lower than the American figure (30%).

To conclude this mini-reflection, let me say a few things that I don’t know and a few things that I do know. Just like the 20% of Chinese respondents in the 2019 World Risk Poll who answered “don’t know” to the question about AI, I don’t know whether the Chinese public is more optimistic about AI than the American public.

I do know that sweeping claims of an enthusiasm gap should be supported by careful assessments of evidence. I also know that if you have no idea how Chinese movies and books depict AI, you probably shouldn’t make comparative claims about how American movies and books have conditioned the American public to be more fearful of AI than the Chinese public.

I also know that there are ways to better understand how Chinese people think about AI — if you’re actually interested in the Chinese public’s views beyond leverage as a geopolitical football. Like let’s say — and this is purely a hypothetical — you’re someone who has never been to China and relies on the opinions of “everyone who goes” to form your views about the issue. It might be helpful to — and I’m just spitballin’ here, so forgive me if this idea is too crazy — read English-language translations of what Chinese people are writing about an AI-enabled future. And, who knows, that could even lead you to translations that complicate your notion of Chinese people’s unbridled enthusiasm for AI. Like this week’s feature translation . . .

Feature Translation: Artificial Challenged Intelligence [人工智障]

CONTEXT: AI in Chinese is four characters: 人工智能. The third character, 智, stands for wisdom. Published back in January, this week’s article (link in Mandarin) is titled “人工智能，智障的智?” Basically, the title asks: “AI: Does the 智 character actually stand for 智障 (a phrase that means intellectual disability)?” The article runs through a bunch of examples of AI failures, making fun of AI’s capabilities.

It’s written by 当时我就震惊了, a humor blogger with 30 million+ followers on Weibo. I saw it on a WeChat link, where it had racked up 100k+ views and a lot of engagement (screenshot below):

One more piece of context: I’ve seen the phrase “人工智障,” which I translate as “Artificial Challenged Intelligence (ACI),” appearing more frequently in Chinese media recently. See, for example, this CCTV post titled: “We want AI, not ‘人工智障’ (Artificial Challenged Intelligence).” *Note: I struggled to settle on the best translation for 人工智障. I also considered “artificial unintelligence,” but I was concerned that this option falsely equates intellectual disability with stupidity.

KEY TAKEAWAYS:

  • Just like all human beings, Chinese people can have complex views about the complex effects of AI on society. Some people are enthusiastic in some contexts. Some people are fearful in others. And, sometimes, as was the case with this week’s feature translation, some people make fun of the limitations in our “AI-enabled future.”

  • There were even some memes making fun of facial recognition-enabled surveillance. From the article:

"I tell you, anyone who violates the law should not even think of escaping my eyes!"

This includes:

Advertisements ▽

(This looks like one of those billboard-sized displays meant to deter jaywalking. The image shows that the system has identified someone from a bus ad as a suspected lawbreaker.)

Here’s another poking fun at a school’s blacklist system:

"I announce that this stranger has been added to our school's blacklist and will never be allowed in!"

Image text: After the school installed facial recognition. The bottom right shows a dog labeled as “stranger.”

For many more memes, see FULL TRANSLATION: AI: Does the 智 character actually stand for 智障 (a phrase that means intellectual disability)

ChinAI Links (One to Open)

Last Friday, my first journal article, “The Logic of Strategic Assets: From Oil to AI” (co-authored with Allan Dafoe), was published in Security Studies. It’s available here, open access for all. I did a quick Twitter thread on the article:

We try to answer a thorny question: How should national leaders identify “strategic” technologies? In a post for the Washington Post’s Monkey Cage blog, we applied some of the findings from the article to the Biden administration’s worries about China’s control of “strategic technologies.”

I’ve already packed a bunch into this week’s issue, so we’ll save a detailed breakdown for another day. Please do read and share, and let me know what you think!

Thank you for reading and engaging.

These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a PhD candidate in International Relations at the University of Oxford and a Predoctoral Fellow at Stanford’s Center for International Security and Cooperation, sponsored by Stanford’s Institute for Human-Centered Artificial Intelligence.

Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).

Any suggestions or feedback? Let me know at chinainewsletter@gmail.com or on Twitter at @jjding99