ChinAI #235: GPT Medicine Beyond Imagination
Greetings from a world where…
sitting with Philly fans at a DC United game was an incredible experience
…As always, the searchable archive of all past issues is here. Please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors).
Feature Translation: This Time, The AI Doctor is on Duty
Context: How has China’s medical field reacted to recent breakthroughs in large language models? Based on many interviews with doctors and experts, this week’s article (link to original Chinese) examines trends in this (potential) early adopter domain. Huxiu [虎嗅] is an influential platform that shares user-generated content but also publishes its own pieces on China’s science and technology ecosystem. This article is from Huxiu’s medical reporting team.
Key Takeaways: Fun translation note — some of the article’s reporting came at an event for the Chinese-language version of the book The AI Revolution in Medicine: GPT-4 and Beyond, which was co-authored by Peter Lee (Microsoft’s Corporate VP).
The Chinese translation’s title for the book is 超越想象的GPT医疗, which roughly translates to “GPT Medicine Beyond Imagination” (which I’ve employed as the title for this issue).
At the book event, when asked if medical care would be among the first batch of industries where GPT lands, Dalei Zhang, founder of Airdoc [鹰瞳科技], responded: “Medical care is not the first batch, it is the zeroth batch.”
According to VBData estimates, from 2020 to 2025, the compound annual growth rate of China's medical AI sector will be about 40%, and the total market size will exceed 30 billion RMB by 2025.
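As a quick sanity check on those VBData figures, the 40% growth rate and the 30 billion RMB 2025 endpoint together imply a 2020 market size of roughly 5.6 billion RMB. A minimal sketch of that arithmetic (the 40% CAGR and 30 billion RMB endpoint come from the article; treating the rate as compounding over exactly five years is my assumption):

```python
# Implied 2020 base of China's medical AI market, working backward
# from VBData's cited 2025 size (>30 billion RMB) and ~40% CAGR.
cagr = 0.40
market_2025 = 30e9  # RMB
years = 5           # assumed compounding window, 2020 -> 2025

implied_2020_base = market_2025 / (1 + cagr) ** years
print(f"Implied 2020 market size: {implied_2020_base / 1e9:.1f} billion RMB")
# → Implied 2020 market size: 5.6 billion RMB
```

In other words, the estimate implies the sector more than quintuples in five years, which gives a sense of how aggressive these projections are.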
Per Huxiu’s compilation, after ChatGPT’s viral launch, Chinese groups have now produced at least 18 medical models (based on the large model paradigm). The full translation includes details about all 18.
In May 2023, China hosted “the world's first ‘double-blind trial’ in which AI doctors and human doctors face real human patients at the same time.” In a competition scored by seven experts, Medlinker’s MedGPT scored just 0.3 points lower than doctors from top tertiary hospitals (comprehensive general hospitals, the highest tier in China).
I thought it was very interesting to compare this coverage about smart medicine in China to a series of translations we did on Medical AI back in November 2021 (ChinAI #162: The Misfires — How BAT All Stumbled in Medical AI).
Back then, all the buzz was about the tech giants launching medical imaging systems (e.g., Tencent’s Miying). Here’s what one Alibaba VP said about its Doctor You AI system for medical imaging diagnosis: “Doctor You will soon enter many medical institutions across the country to serve as the best assistant for doctors. We expect medical AI to take away half of doctors’ workload within 10 years.”
That was not even two years ago. Now, the term “medical imaging” only appears once in this week’s feature translation, and most of the coverage is about chat-based medical applications. Covering this field is a humbling thing: trying to keep up with all the trends without letting them take you for a ride.
FULL TRANSLATION: This Time, The AI Doctor is on Duty
ChinAI Links (Four to Forward)
Must-read: AI Safety in China Newsletter (Issue #1)
Concordia AI (安远AI), a Beijing-based social enterprise focused on artificial intelligence (AI) safety and governance, has started a biweekly newsletter focused on China’s governance of frontier AI models, covering its international governance stances, domestic governance efforts, and technical research on safety and alignment. I learned a lot from the first issue, including a very interesting paper by Chinese researchers that assesses the alignment of Chinese models with human values.
Should-read: Two links on an Expert Proposal for China’s AI Law
First, in an effort led by Kwan Yee Ng and Jason Zhou, Concordia AI translated a scholars’ draft of this law, written by a team from the Chinese Academy of Social Sciences. It’s a remarkably ambitious proposal, based on my initial read, proposing a fleshed-out Negative List system for certain AI products/services as well as a general stipulation that AI providers have to disclose any security incidents to a government body.
DigiChina invited researchers to comment on the scholars’ draft law and also provided this helpful context behind such proposals by scholars:
A team from the Chinese Academy of Social Sciences (CASS) this month released a scholars' draft of an AI Law for China. When the Chinese government announces that it will draft a law, the future of the effort is uncertain. In some cases, one or more groups of scholars drafts up a proposal. These sometimes feed directly into legislative work and their influence is seen in an official National People's Congress draft for public comment. Sometimes—as, for example, with an early 2000s effort toward a Personal Information Protection Law (PIPL)—the process falters. In the case of the PIPL, CASS scholar Zhou Hanhua described in a DigiChina interview how a team was asked by a government office to work on the issue, producing a draft in 2005, before the PIPL drafting effort stalled for years. By the late '10s when the effort was picked up again, technology and law had changed so much that their 2005 draft was not fit for purpose, and little of its content is visible in the law that went into effect in 2021.
In the case of this scholars' draft of an AI Law, the accompanying explanation notes that it is to serve as a reference for legislative work and is expected to be revised in a 2.0 version. Although the connection between this text and any eventual Chinese AI Law is uncertain, its publication from a team led by Zhou Hui, deputy director of the CASS Cyber and Information Law Research Office and chair of a research project on AI ethics and regulation, makes it an early indication of how some influential policy thinkers are approaching the State Council-announced AI Law effort.
Should-read: 2023 Annual Projects for the Major Research Program on Explainable and Generalizable Next-Generation Artificial Intelligence Methods
For CSET, Kevin Wei translated a document on China’s 2023 research priorities in explainable and generalizable artificial intelligence methods. The National Natural Science Foundation of China will fund 25-30 fostered projects (~800,000 RMB/project) and 6-8 key support projects (3,000,000 RMB/project). Some of the focus areas include privacy-preserving machine learning methods and AI-driven next-generation micro[-scale] scientific computing platforms.
Thank you for reading and engaging.
These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is an Assistant Professor of Political Science at George Washington University.
Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).
Any suggestions or feedback? Let me know at chinainewsletter@gmail.com or on Twitter at @jjding99