ChinAI #150: Guojie Li on AI Macrotrends
A tide-maker turned tide-watcher of AI gives views on limitations of deep learning, prospects for general artificial intelligence, and risks of another AI winter
Greetings from a world where…
article #10 wins the fourth edition of Around the Horn, but others may work on translations of segments of the trustworthy AI white paper
…As always, the searchable archive of all past issues is here. Please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors).
Feature Translation: CCCF Column | Guojie Li: Several cognitive issues about AI
Context: Guojie Li is the honorary chairman of the China Computer Federation (CCF). Candidly admitting that he’s no longer at the research frontier in this field, Academician Li offers perspectives from “someone who’s been around the block” [过来人]. Thirty years ago, he was a “tide-maker” of AI, playing a key role in some of the first national conferences on AI in China; now, he positions himself as a “tide-watcher” of AI technology and applications. This column is from the Communications of the China Computer Federation, which was also the source for Professor Zhihua Zhou’s article on the dangers of Strong AI [feature translation in ChinAI #13].
Regarding the possibility of another AI winter:
Li points to impressive results for GPT-3 and BERT in the Winograd Schema Challenge, which tests the common-sense understanding of AI systems through ambiguous pronoun references (e.g., who does the “he” in a given sentence refer to?). At the same time, he notes that AI applications have a long way to go in complex industries that require high precision and safety, referencing IBM’s plans to sell its Watson Health business. He concludes, “AI may spend a relatively long period of time in ‘autumn.’”
On concerns that deep learning has reached a ceiling, Li acknowledges that we need new paradigms. Still, he points to plenty of space for exploration in deep learning, including a transition from “big data, small tasks” to “small data, big tasks.” He’s optimistic about improvements in algorithms. One statistic in the article: since 2012, the required computing power to train an AI model to meet the benchmark in ImageNet classification has been reduced by half every 16 months. At present, it is 1/44 of the 2012 figure.
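As a quick sanity check on that statistic, the “halved every 16 months” rate and the “1/44 of the 2012 figure” claim can be cross-checked with simple arithmetic. The sketch below is my own back-of-the-envelope calculation, not from Li’s article; it just shows how long a 44x reduction takes at that halving rate:

```python
import math

HALVING_MONTHS = 16      # reported halving period for required training compute
REDUCTION_FACTOR = 44    # compute needed now is 1/44 of the 2012 requirement

# A 44x reduction implies log2(44) ~ 5.5 halvings; at one halving
# every 16 months, that corresponds to roughly 7.3 years.
months = HALVING_MONTHS * math.log2(REDUCTION_FACTOR)
print(f"~{months:.0f} months, i.e. ~{months / 12:.1f} years")
```

That span (roughly 2012 to 2019–2020) is consistent with the two numbers in the article describing the same trend.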
Li writes, “Interpretability should not be the primary goal of AI research. Human intelligence itself is also a black box. Compared with the inexplicability of the human brain, artificial neural networks may be able to explain more decision-making processes. . . techniques with weak explanatory power will continue to develop, such as Chinese medicine. For AI, what people are most worried about is not the unclear explanation of ‘how the output is generated,’ but not knowing when it will make mistakes. What is more important than interpretability is error-proofing technology for AI.”
Related to this last point about error-proofing, Li acknowledges that “AI ethics and AI supervision are obvious shortcomings in China.” He calls for “error prevention research” [防错研究] to become a critical research direction of AI.
Some really interesting comments on general artificial intelligence:
After citing one survey of 23 top AI scholars that gave timelines for when we would reach general AI, Li gives his own estimate: after 2200. This is later than the most pessimistic expert in that survey. He pans Tsinghua University’s recent announcement that it would train virtual robots that reach the level of intelligence of university graduates in about three years. In doing so, he cites Professor Noriko Arai’s research on developing robots that could get admitted to the University of Tokyo.
Li also questions the popular classifications of “Strong AI” [强人工智能] and “Weak AI” [弱人工智能]. He outlines two dimensions: generalizability and intelligentization level. Strong vs. weak can apply to both of these dimensions. You can have an AI system that is very strong in terms of intelligentization level for a specialized application but very weak in terms of generalizability. He states, “Humans do not urgently need smart products that are as versatile as humans. If you are studying general artificial intelligence, it is best to set a 20-30 year research goal, immerse yourself in long-term basic research, and keep your head down. Short and fast research cannot solve the problem of general artificial intelligence.”
On how to balance AI development across these dimensions and timeframes, Li points to DARPA’s “AI Next Initiative” as a model. He highlights the 9:3:2 ratio among applied AI projects, advanced AI projects, and frontier exploration projects [note: not sure how he got this ratio.] “It is an integrated layout that comprehensively considers short-term, medium-term and long-term needs. It is worth learning from,” he writes.
Lastly, Li mentions that the CCF’s 启智会 platform (InspiringNewIdeas) has begun a fierce debate on the limitations and future development trends of AI. As a reminder, I’m always open to pitches from folks who want to contribute translations and analysis to ChinAI, from sources such as these.
***FULL TRANSLATION: CCCF Column | Guojie Li: Several cognitive issues about AI
ChinAI Links (Four to Forward)
Should-read: Bridging the Gap, One Word at a Time
For U.S.-China Perception Monitor, Zichen Wang and Yang Liu unpack exaggerations and misinterpretations of Chinese idioms. Their starting point is how some Western media and “China-watchers” covered Xi Jinping’s speech earlier in July, which featured the idiom 头破血流 — a good reminder that translation is a contested, political act.
Should-read: AI Accidents: An Emerging Threat
Zachary Arnold and Helen Toner’s CSET policy brief: “As modern machine learning systems become more widely used, the potential costs of malfunctions grow. This policy brief describes how trends we already see today—both in newly deployed artificial intelligence systems and in older technologies—show how damaging the AI accidents of the future could be. It describes a wide range of hypothetical but realistic scenarios to illustrate the risks of AI accidents and offers concrete policy suggestions to reduce these risks.” I want to call attention to one of their important recommendations: invest in AI standards development and testing capacity.
For Ranking Digital Rights, Zak Rogoff, Veszna Wessenauer, and Jie Zhang conducted a study on ByteDance, which owns twin video-sharing services TikTok and Douyin (its Chinese counterpart). They dig into really interesting questions, including: Are TikTok’s U.S. policies substantively different from those of similar U.S.-based platforms? Are TikTok users subject to greater human rights risks, given that the platform’s parent company, ByteDance, is headquartered in China?
From the blurb for this fascinating video: “The Wall Street Journal created dozens of automated accounts that watched hundreds of thousands of videos to reveal how the social network knows you so well. [The investigation] found that TikTok only needs one important piece of information to figure out what you want: the amount of time you linger over a piece of content. Every second you hesitate or rewatch, the app is tracking you.”
Thank you for reading and engaging.
These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a PhD candidate in International Relations at the University of Oxford and a researcher at the Center for the Governance of AI at Oxford’s Future of Humanity Institute.
Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).
Any suggestions or feedback? Let me know at email@example.com or on Twitter at @jjding99