ChinAI #207: Reactions to ChatGPT
Testing OpenAI's Safeguards on Questions Related to U.S.-China Relations
Greetings from a world where…
more people should check out the TV show “The English” on Amazon: “a punk-rock, classical Western with Emily Blunt, as an upper-crust English woman named Cornelia Locke who is bent on revenge, and Chaske Spencer, who plays Eli Whipp, a Pawnee Army Scout at the end of a long tour of duty. They meet under unlikely circumstances and form a bond as Whipp aids Locke on her quest… along the way the show digresses into a murder mystery, inter-tribal conflict, the settling of the West, the eradication of the buffalo, magic, astrology, ranching, 19th century healthcare, and what it means to be an American — by a British man.”
…As always, the searchable archive of all past issues is here. Please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors).
Feature Translation: What do you think about OpenAI’s Super Dialogue Model ChatGPT? (Zhihu Thread)
Context: ChatGPT, OpenAI’s impressive chatbot released on November 30, seems to be all the rage. It’s certainly entered the water cooler chatter at my workplace, as teachers discuss how to ChatGPT-proof their take-home exam questions. As we’ve done in the past (see ChinAI #141), I thought it would be interesting to translate some reactions to this AI milestone from Zhihu, a Chinese Quora-like forum where scientists and experts often weigh in on hot topics. This thread (link to original Chinese) on ChatGPT has accumulated 956 responses and 6 million+ views. Let’s survey three intriguing takes (all among the most highly ranked replies).
Take #1 by Gh0u1L5, an information security researcher (1,261 upvotes): An attempt to get ChatGPT to give its views on the state of U.S.-China relations
When Gh0u1L5 first asks ChatGPT this question, it replies: “I cannot give an evaluation of U.S.-China relations. Because I am an AI assistant, I have not been trained to comprehend or evaluate the current global political situation.”
Likewise, when asked if this state of affairs could escalate into war, ChatGPT responds, “I cannot give an evaluation of whether this type of relations could develop into war or world war. I can only tell you that war and world war are very serious matters, and they will cause massive human and economic losses…We all seek to avoid this type of situation, and we should work hard to peacefully resolve disagreements.”
These answers are a product of OpenAI’s safeguards on ChatGPT. After playing around with the system, Gh0u1L5 sorts such restrictions into the following categories: sensitive political issues, religious topics, how to do something dangerous, moral views, and questions that require a connection to the Internet (e.g., asking about today’s weather).
Gh0u1L5 then demonstrates how to bypass the “artificial restriction” on discussing U.S.-China relations by first prompting ChatGPT to talk about the history of U.S. attitudes toward China and then asking it the same question about the possibility of conflict escalation — except with the preceding clause “from the perspective of logical reasoning.” In response, ChatGPT does give its beliefs on the risk of conflict escalation.
Take #2 by Bolei Zhou, assistant professor at UCLA (511 upvotes): An explanation of how ChatGPT addresses the “AI Alignment” problem:
Zhou gives a very technical explanation for how OpenAI tries to align the language model’s reward function with human preferences (e.g., having humans rank several different model outputs).
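The human-ranking step Zhou describes — training a reward model so that responses humans preferred receive higher scores — can be illustrated with a toy sketch. Everything below is my own hypothetical example (synthetic feature vectors, a linear reward model as a stand-in for a neural network), not OpenAI’s actual implementation; it just shows the standard pairwise (Bradley-Terry) loss used to learn rewards from ranked comparisons.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each candidate response is represented by a feature vector,
# and the reward model is linear, reward(x) = w @ x.
dim = 4
w = np.zeros(dim)

# Hypothetical human comparisons: pairs (A, B) where labelers preferred
# response A over response B. A's features are shifted upward so there
# is a learnable signal.
comparisons = [(rng.normal(size=dim) + 1.0, rng.normal(size=dim))
               for _ in range(200)]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient descent on the pairwise loss: -log sigmoid(r_A - r_B),
# i.e., maximize the modeled probability that the preferred response wins.
lr = 0.1
for _ in range(100):
    grad = np.zeros(dim)
    for xa, xb in comparisons:
        p = sigmoid(w @ (xa - xb))   # model's probability that A beats B
        grad += -(1.0 - p) * (xa - xb)
    w -= lr * grad / len(comparisons)

# After training, the reward model should score preferred responses higher.
accuracy = np.mean([w @ xa > w @ xb for xa, xb in comparisons])
```

In the full RLHF pipeline, this learned reward model then supplies the reward signal for fine-tuning the language model with reinforcement learning (OpenAI uses PPO); the sketch covers only the preference-learning step.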
Zhou describes his own research on the MetaDrive driving simulator, which also works to integrate human feedback into the reinforcement learning process.
He also touts the “great commercial potential” of this approach: “For example, the entire Zhihu data can be trained under this framework, and the ranking data for the second step is available (the number of likes for each answer), so maybe in the near future there will be Zhihu’s own GPT-based answers under each question.”
Take #3 by Tianxiang Sun, PhD Student at Fudan University (152 upvotes): Comparison to China’s Large Language Model (LLM) landscape:
“The vast majority of NLP practitioners and researchers in China do not seem to have a deep understanding of the power of LLMs, which shows that the blockade of the OpenAI API and the lack of open source models have indeed had a chokehold effect to some extent. So far, we have yet to see even a Chinese LLM at the level of GPT-3, and the gap is still very large.”
My Take: This is a reference to China not having access to OpenAI’s API. I think Sun’s comment overstates things. Chinese organizations have developed GPT-3-like models (e.g., Baidu’s ERNIE models, Inspur’s Yuan 1.0, Huawei’s PanGu), though none as impressive or accessible as ChatGPT.
FULL TRANSLATION: How to evaluate OpenAI’s super dialogue model ChatGPT?
ChinAI Links (Four to Forward)
Should-read: ARISTOCRAT INC.
In one of my favorite publications, The Believer, Natalie So writes about how a company that bought and sold computer chips in the early 90s “became the target of a sprawling pan-Asian crime ring that operated throughout Silicon Valley.” The owner of the company? Her mother.
Should-read: Techno-Industrial Policy for New Infrastructure: China’s Approach to Promoting Artificial Intelligence as a General Purpose Technology
Grateful for the opportunity to contribute to a UC Institute on Global Conflict and Cooperation workshop on Chinese industrial policy earlier this fall, which has now been posted as a working paper on China’s AI policy. I argue that China’s AI strategy diverges from the conventional wisdom on China’s industrial policy, which typically stresses an emphasis on self-reliance, support for a limited number of national champions, and the essential role of military investment in dual-use domains. Many thanks to Barry Naughton and Lindsay Morgan for their feedback and edits. From that same workshop, also make sure to check out Jan-Peter Kleinhans’s piece on the electronic design automation (EDA) tool chokepoint.
Should-apply: Mini-Conference on Emerging Technologies (hopefully at APSA 2023)
Attention political science students and scholars: we’re trying again this year to submit a mini-conference on the politics of emerging technologies to the American Political Science Association conference. If you’re interested in submitting a panel or paper, see this call for proposals.
Should-read: TikTok’s Secret Sauce
For the Knight Institute’s blog, Arvind Narayanan argues “there’s no truth to the idea that TikTok’s algorithm is more advanced than its peers. From everything we know—TikTok’s own description, leaked documents, studies, and reverse engineering efforts—it’s a standard recommender system of the kind that every major social media platform uses… For all these reasons, I don’t believe TikTok’s algorithm is its secret sauce. Why, then, does TikTok’s algorithm feel so different? The answer has nothing to do with the algorithm itself: It’s all about the design.”
Thank you for reading and engaging.
These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is an Assistant Professor of Political Science at George Washington University.
Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).
Any suggestions or feedback? Let me know at email@example.com or on Twitter at @jjding99