Greetings from a world where…
the Hawkeyes are the No. 2 team in the land
…As always, the searchable archive of all past issues is here. Please please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors).
Feature Translation: White Paper on China’s Computing Power Development Index
Context: First of many white papers this week. This white paper on China’s computing capacity, by CAICT (a think tank under China’s Ministry of Industry and Information Technology), was the runaway vote-getter from last week’s Around the Horn. I’ve translated a few key sections in the Google Doc. Fortunately, it looks like Georgetown’s Center for Security and Emerging Technology will do a full translation of this white paper, so keep an eye out for that in the future.
Key Takeaways:
***In many ways, ChinAI is just an open-source notebook. In trying to digest this white paper, I had to learn a lot of background info on computing capacity, so this week’s notes are more extensive and technical than usual. Definitely a lot of gaps in my knowledge, so please fill in the blanks if you have expertise in this area.
China’s scale of computing power continues to expand
In 2020, China’s computing power reached 135 exaFlops (EFlops), an increase of 48 EFlops over the previous year, maintaining a 55% year-on-year growth rate that is about 16% higher than the global average growth rate.
Flops = floating point operations per second. An exaFlop (EFlops) is one quintillion (10^18) Flops. The white paper likens one EFlops to the computing power of 5 Tianhe-2A supercomputers (one of the world’s fastest) or 2 million mainstream notebooks. Since EFlops is a rate, the electricity analogy is kilowatts (a measure of power) rather than kilowatt-hours (a measure of energy usage): a country’s annual total computing output (in floating point operations) is its EFlops figure multiplied by roughly 31.536 million, the number of seconds in a year, just as kilowatts times hours gives kilowatt-hours.
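To make that rate-to-annual-total conversion concrete, here is a minimal Python sketch using only the figures quoted above, plus a quick sanity check of the reported 55% growth rate:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000 seconds

def annual_operations(eflops: float) -> float:
    """Total floating point operations over a year for a sustained rate given in EFlops."""
    return eflops * 1e18 * SECONDS_PER_YEAR

# China's 2020 figure from the white paper: 135 EFlops
print(f"{annual_operations(135):.3e} floating point operations per year")

# Sanity-check the reported ~55% year-on-year growth:
# 135 EFlops in 2020 vs. (135 - 48) = 87 EFlops the year before
print(f"year-on-year growth: {(135 - 87) / 87:.0%}")
```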
According to the CAICT white paper, China ranks second globally in the scale of computing power: the United States, China, Europe, and Japan account for 36%, 31%, 11%, and 6% of global computing power, respectively.
The paper positions computing power as a new focal point of strategic competition
It emphasizes the strong positive correlation between countries’ computing power levels and their GDPs: 17 of the top 20 countries by computing power are also among the world’s top 20 economies.
It also references other countries’ efforts to target computing power as a strategic asset, including the U.S. report “Pioneering the Future Advanced Computing Ecosystem,” released in November 2020.
Crucially, competition is just as much about internal circulation as it is about global linkages. The CAICT paper highlights the “东数西算” project, which I had never heard of before. My imperfect translation is the “East-West division of computing labor” project. The motivation behind the project: most of China’s Internet and big data companies are located in eastern, coastal regions (Beijing, Tianjin, Shanghai, Zhejiang, Jiangsu, etc.), so naturally that’s where they base their data centers too. But electricity costs are very high in these areas, whereas western provinces such as Inner Mongolia, Gansu, and Guizhou have much lower electricity costs, so the idea is to construct more data centers there. This Zhihu thread adds one caveat: real-time computations, like serving search results, may require data centers close by, so the western data centers might take on tasks like background processing, offline analysis, and system backup.
The composition of China’s computing demands is changing rapidly, and AI is at the center of it all
The white paper divides compute into three categories: basic, smart, and supercomputing. It calculates basic computing power by estimating the total number of servers in stock: since the typical service life of a server is six years, it sums the past six years of annual server shipments. Smart computing power is calculated by the same process, but limited to AI accelerator chips. Finally, supercomputing capacity is based on data from the TOP500 ranking of the world’s fastest supercomputers and other relevant data from supercomputing manufacturers.
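As a rough sketch of that stock-based approach: the shipment figures and per-server performance below are made-up placeholders (the translated excerpts don’t publish these inputs), and the final conversion step is my assumption about how a server-stock count would feed into an EFlops estimate.

```python
# Hypothetical annual server shipments in China (millions of units), oldest first.
# These are placeholder numbers for illustration only.
annual_shipments = [2.6, 2.9, 3.3, 3.5, 3.8, 3.5]

# With a typical service life of six years, the installed base ("stock")
# is approximated as the sum of the most recent six years of shipments.
server_stock_millions = sum(annual_shipments[-6:])
print(f"estimated server stock: {server_stock_millions:.1f} million units")

# Converting stock into computing power presumably requires an assumed average
# per-server performance; the figure below is hypothetical.
ASSUMED_TFLOPS_PER_SERVER = 7
total_flops = server_stock_millions * 1e6 * ASSUMED_TFLOPS_PER_SERVER * 1e12
print(f"implied computing power: {total_flops / 1e18:.0f} EFlops")
```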
Here’s where it gets remarkable: from 2016 to 2020, the proportion of basic computing power in China’s overall computing power dropped from 95% to 57%, while the share of smart computing power increased from 3% to 41%. In terms of national shares of global smart computing power, China leads with 52%, with the U.S. at 19%. There are a lot of reasons to tread carefully and probe the methodology and appropriateness of these indicators, but these stats surprised me. Here’s one more: the white paper estimates that the proportion of smart computing power in China’s overall compute stock will increase to 70% by 2023.
Nvidia still dominates the supply of China’s smart computing power: in 2020, Nvidia’s GPU chips accounted for about 95% of the Chinese market for AI servers. The white paper lists China’s efforts to develop smart computing centers, including Shenzhen’s Pengcheng Lab and SenseTime’s Shanghai center, both of which close readers of ChinAI should be familiar with. I’ll leave you with this observation: if you read the translated excerpts in the Google Doc, you’ll see that many of the example applications for smart computing power allude to surveillance. For instance, SenseTime’s new computing center is described as being able to “meet the needs of 4 ultra-large-scale cities at the same time, providing capabilities to access 8.5 million video channels.”
TRANSLATED EXCERPTS: White Paper on China’s Computing Power Development Index
ChinAI Links (Four to Forward)
Must-read: Building a National Research Cloud
In this white paper by Daniel Ho, Jennifer King, Russell Wald, and Christopher Wan, Stanford’s Law School and Institute for Human-Centered AI (HAI) team up on a plan for creating, implementing, and maintaining a national research cloud. It is the result of a policy practicum that brought together researchers across many departments. Shout out to Tina Huang, policy program manager at HAI, who I remember first pitched the idea back in summer 2020 (winning the People’s Choice Award for her pitch at the CNAS National Security Conference).
Should-read: White Paper on Trustworthy Artificial Intelligence
A full translation of this white paper by CAICT and JD Explore Academy is now available, edited by Ben Murphy, CSET’s Translation Lead. This paper was included in a previous Around the Horn issue (ChinAI #149).
Should-read: Inspur releases “Source 1.0”
Last week, I noted the release of yet another Chinese GPT-3-esque system but mentioned that I didn’t know who created it. Thanks to the readers who pointed out that Inspur developed this model. This links to a machine-translated version of the Xinzhiyuan article from last week’s Around the Horn issue.
Should-read: The Myth of Asian American Identity
The piece I read this past week that made me think the most.
Thank you for reading and engaging.
These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a PhD candidate in International Relations at the University of Oxford and a researcher at the Center for the Governance of AI at Oxford’s Future of Humanity Institute.
Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).
Any suggestions or feedback? Let me know at chinainewsletter@gmail.com or on Twitter at @jjding99