ChinAI #145: Enlightenment via Large Language Models
Writing poetry w/ WuDao 2.0, the World's Largest Language Model
Greetings from a world where…
we’re still all voting 5x a day for Shohei Ohtani to be in the MLB All-Star game, right?
…Please please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors). As always, the searchable archive of all past issues is here.
WuDao Turing Test
I’ve had a few readers ask me to write about the Beijing Academy of Artificial Intelligence’s (BAAI) release of WuDao 2.0 (悟道 means to attain enlightenment), the latest Chinese version of GPT-3.
Alex Friedland, in CSET’s policy.ai newsletter, had a good summary of the English-language coverage to date:
Chinese Researchers Announce the Largest “Large Language Model” Yet: A new natural language processing (NLP) model announced last week by the state-funded Beijing Academy of Artificial Intelligence (BAAI) is the largest ever trained. Wu Dao 2.0 has 1.75 trillion parameters — dwarfing GPT-3’s 175 billion parameters and even the 1.6 trillion parameters of Google’s Switch Transformer — and while the relationship between parameters and sophistication is not one-to-one, it is generally a good indicator of a model’s power. In addition to its high parameter count, Wu Dao 2.0 does more than just NLP — it is a multimodal system trained on 4.9 TB of text and images, meaning it can perform image recognition and generation tasks in addition to the text processing and generation tasks of traditional NLP. While BAAI has yet to publish a paper elaborating on the performance of Wu Dao 2.0, a handful of released results showed impressive performance: The model achieved state-of-the-art results on nine common benchmarks, surpassing previous juggernauts such as OpenAI’s GPT-3 and CLIP and Microsoft’s Turing-NLG.
So, what else? Without the published paper or any examples of WuDao 2.0 output, there’s only so much we can learn. Let’s try anyways, using Chinese-language coverage of the release and examples from WuDao 1.0, a much smaller model (2.6 billion parameters) released three months earlier.
How we got here: In March 2021, BAAI released WuDao 1.0, which they deemed China’s first super-large-scale model system. Note: see ChinAI #141 for another Chinese GPT-3-esque model released in May from a Huawei-led team.
In an interview with AI科技评论(aitechtalk) about WuDao 1.0, Tsinghua Professor Jie Tang, who leads the WuDao team, previewed what was coming next: “We will also propose a hundred billion-level (parameter) model this year.” Three months later, enter WuDao 2.0, clocking in at 1.75 trillion parameters.
*From my initial read of things, WuDao 1.0 is to WuDao 2.0 as GPT-2 is to GPT-3. Put simply, WuDao 1.0 introduced most of the new innovations in model training (e.g. FastMoE), and then WuDao 2.0 added many times more parameters and was trained on more data. Recall that GPT-2 had 1.5 billion parameters, in the same ballpark as WuDao 1.0’s 2.6 billion.
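To make the FastMoE point a bit more concrete, here is a minimal mixture-of-experts sketch in PyTorch. This is not FastMoE’s actual API (FastMoE is the open-source library the WuDao team built for efficient MoE training); it only illustrates the core idea that routing each token to one of many expert networks lets total parameter counts grow far faster than the compute spent per token. All sizes and names below are toy values chosen for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    """Toy top-1 gated mixture-of-experts feed-forward layer.

    Each token is routed to a single expert MLP, so total parameters grow
    with num_experts while the compute per token stays roughly constant.
    """
    def __init__(self, d_model=64, d_hidden=256, num_experts=8):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        ])

    def forward(self, x):  # x: (num_tokens, d_model)
        gate_probs = F.softmax(self.gate(x), dim=-1)
        top_prob, top_idx = gate_probs.max(dim=-1)  # one expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i
            if mask.any():
                out[mask] = top_prob[mask].unsqueeze(-1) * expert(x[mask])
        return out

layer = TinyMoELayer()
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64])
```

Eight experts here means roughly eight times the feed-forward parameters of a dense layer, yet each token still passes through only one of them; scale that pattern across a large transformer and you get to the trillion-parameter counts quoted above.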
All of which means that learning more about WuDao 1.0 can help us understand its successor better. Here are some key points from the aitechtalk piece linked earlier:
The WuDao team emphasizes the significance of cross-lingual language model pretraining. Here’s Professor Tang again: “This is very different from GPT-3. We are trying some new methods, such as fusing together pre-trained models of different languages. The fusion method is to use cross-lingual language models to connect the expert models of different languages together, so that the model can be gradually expanded.”
In that same piece, Zhilin Yang, a key member of the WuDao team and co-founder of Recurrent AI, outlined three other key achievements in WuDao 1.0. I’ve linked the corresponding arxiv papers.
A more general-purpose language model (GLM). Applies one model to all NLP tasks rather than using one pre-trained language model for classifying text and another model for generating text.
P-tuning: claims to be a better way to fine-tune GPT-like models for language understanding.
Inverse prompting. The intuition: use the generated text to predict the original prompt, and prefer candidates from which the prompt is most recoverable (a rough sketch follows below).
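Since inverse prompting is the easiest of the three to gesture at in code, here is a minimal reranking sketch. This is my own illustrative reconstruction under stated assumptions, not the paper’s implementation: GPT-2 via Hugging Face transformers is just a small stand-in scorer, and the inverse template and function names are hypothetical.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: any causal language model that can score text would work here;
# GPT-2 is only a small, readily available stand-in.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def log_likelihood(context: str, target: str) -> float:
    """Sum of token log-probs of `target` given `context` under the LM."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    tgt_ids = tokenizer(target, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, tgt_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Logits at position i predict token i+1, so the target tokens are
    # scored by the positions just before them.
    log_probs = torch.log_softmax(logits[0, ctx_ids.size(1) - 1 : -1], dim=-1)
    return log_probs.gather(1, tgt_ids[0].unsqueeze(1)).sum().item()

def inverse_prompt_score(prompt: str, candidate: str) -> float:
    # Hypothetical inverse template: wrap the generated candidate and ask the
    # model how plausible the *original prompt* is as its description.
    inverse_context = f'"{candidate}" is a passage written about: '
    return log_likelihood(inverse_context, prompt)

def rerank(prompt: str, candidates: list[str]) -> list[str]:
    # Prefer candidate generations from which the prompt is most predictable.
    return sorted(candidates, key=lambda c: inverse_prompt_score(prompt, c),
                  reverse=True)
```

If I’m reading the linked paper right, the real method applies this kind of reverse scoring inside beam search (e.g. for classical poetry generation); the sketch above only captures the reranking intuition.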
So, let’s look at some examples of WuDao 1.0 output from this WuDao Turing Test site, which I found on this Zhihu thread about the release of WuDao 2.0. Basically, it’s a platform that tests whether the average online user can distinguish between human-generated and WuDao-generated text AND images across a range of tasks, including poetry composition, Q&A, making drawings based on captions, etc.
*Remember, we don’t have any examples of WuDao 2.0 output yet, at least to the best of my knowledge, but we can expect it to have better performance than the examples below, just like GPT-3 significantly outperformed GPT-2.
With that qualification in mind, let’s read some enlightened poetry. Can you tell which one was written by a real human poet, and which one was generated by WuDao 1.0?
Here’s my attempt to not completely butcher the translations for both.
Same title and author for both: Reading《尉迟鄂公敬德》* ; Author: Bai Juyi
*I think “鄂公” is a reference to this Tang-era work by another author. Bai Juyi is a renowned Chinese poet of the Tang dynasty.
Top poem translation:
First-class thirty years ago,
To date, the temple remains at the east end.
Since ancient times, it is said of Master Wen,
Why go here to study the distressed and imprisoned.*
*楚囚 originally referred to a figure from the Chu State who was captured during the Spring and Autumn Period. Later, it became a general term that referred to imprisoned and distressed people.
Bottom poem translation:
Lying high and dwelling deep — seeing no one,
fame can be shaken off like dust.
All that’s left is a piece of Qing Shang music,*
under the moon before a breeze accompanying my old body.
*Qing Shang music (清商乐) is a genre of music that emerged back in the Three Kingdoms period.
So, which one was written by Bai Juyi, and which one was written by WuDao? Reply or comment your guess, and I’ll share the answer next week.
Now, let’s turn to the Q&A section. Same questions, two answers: one by a human, one by WuDao 1.0:
Here’s the question text: Two people like each other but know nothing can come from it, should they still stay together?
Top answer:
First off ~ I can’t understand it. You’re close but you’re also distant. And then there is guilt, regret, and then ~ tears and disturbance. Ultimately, you realize it, that actually life is just as you see it at first. The ideal is not necessarily to be an outstanding person, and to get happiness proportional to what you give is probably the only happiness. After all, people can't get the perfection that fills their lives, but despite their losses at least there is no place to feel sorry about yourself. Lastly, I’ll attach the text 《Starting with “A Man and Woman Losing Contact”》. Ruth Benedict, the author of this text, is an author I often quote when writing essays. Also, thank you for your question. Thanks to everyone who reads, likes, or responds.
Bottom answer:
I’ve seen this question on occasion. I’m a little bit at a loss for words. Emotional people will be like moths, throwing oneself at the fire without asking about the results. Rational people will leave in time to find the right person. I’m the type that’s halfway in between, unable to find the words and only able to cry choked tears. How can I find an answer that I couldn’t express myself for years? Ugh. I don’t think even someone who could solve Goldbach’s conjecture would be able to figure out this answer.
Let me know which one you think is the human answer! If any readers want to practice their Chinese, significantly improve upon my efforts, and play around with the WuDao Turing Test site, I’d love to include better examples in future issues.
ChinAI Links (Four to Forward)
Should-reread: China reportage recognized by Pulitzer
Go reread work by teams at BuzzFeed, The New York Times, The Wall Street Journal, and the Associated Press (international reporting and investigative reporting categories). Paul Mozur said it best.
Should-read: Behind the painstaking process of creating Chinese computer fonts
In MIT Tech Review, Stanford professor of Chinese history Tom Mullaney gives us an intricate view into how designers created digital bitmaps of Chinese characters, and all the attendant challenges.
Should-read: Artificial intelligence in China’s revolution in military affairs
For Journal of Strategic Studies, Elsa Kania examines the People’s Liberation Army’s strategic thinking about AI. She argues, “The PLA’s approach to leveraging emerging technologies is likely to differ from parallel American initiatives because of its distinct strategic culture, organisational characteristics, and operational requirements.” The paper builds on her meticulous analysis of the PLA’s approach to AI based on military textbooks and writings by researchers in the PLA Academy of Military Science.
Should-read: Attitudes Towards Science, Technology, and Surveillance in 49 Countries
Yiqin Fu has a new blog post that covers public opinion on science, technology, and surveillance across 49 countries. It’s relevant to last week’s ChinAI issue on cross-national differences in enthusiasm and optimism toward AI.
Thank you for reading and engaging.
These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a PhD candidate in International Relations at the University of Oxford and a Predoctoral Fellow at Stanford’s Center for International Security and Cooperation, sponsored by Stanford’s Institute for Human-Centered Artificial Intelligence.
Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).
Any suggestions or feedback? Let me know at chinainewsletter@gmail.com or on Twitter at @jjding99