
ChinAI #169: general artificial intelligence and Chinese philosophy
Songchun Zhu on general artificial intelligence and Chinese philosophy
Greetings from a world where…
if Trader Joe’s really wants to keep using something like Trader Ming’s for their Chinese food, they should at least change it to Trader Zhou’s, am I rite?
…As always, the searchable archive of all past issues is here. Please please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors).
Feature Translation: Songchun Zhu: Intelligence needs to be driven by “mind,” achieving a dynamic balance between “mind” and “principle”
Runaway leader in votes from last week’s Around the Horn was option #10: Songchun Zhu’s essay on how Chinese philosophy inspires his research on general artificial intelligence. Thanks for engaging!
Context: Zhu’s return to China, leaving UCLA, to lead Peking University’s Institute for AI made some headlines in September 2020. At the time, it was reported that he would stand up the Beijing Institute for General Artificial Intelligence (BIGAI), working with the Beijing municipal government, central government, and other universities. He recently gave talks to Tsinghua University and Peking University students on how the humanities and Chinese philosophy can inspire research on general AI. This longform article, from BIGAI’s public account, integrates the contents of these talks and also includes excerpts from the talks’ Q&As.
Key Takeaways:
Using Chinese poet Su Shi’s famous text “Ode to the Red Cliff” (written in 1082) as an entry point, Songchun Zhu explores how Su Shi’s inner conflict between withdrawing from the world and engaging with it can inspire research on general AI:
He writes:
The goal of general artificial intelligence research is to create general-purpose agents with autonomous perception, cognition, decision-making, learning, execution, and social collaboration capabilities that conform to human emotions, ethics, and moral values.
Such agents must have autonomous consciousness and hold three views (a worldview, a view of life, and values). Even if an agent’s three views differ from those of human beings, it must be able to understand the three views of human beings. Unlike the answers given by philosophers, thinkers, nationalists, and aestheticians over the past 2,000 years, scientists who study general artificial intelligence must come up with cognitive frameworks and mathematical models, express them clearly in mathematical language, and be able to analyze these three views and the thinking processes behind them. Therefore, we hope to use such a framework and model to interpret Su Shi’s psychological activity by way of “reconstruction,” trying to build a bridge between artificial intelligence research and classical Chinese philosophy.
Professor Zhu tells a story of when an MIT professor came to UCLA to give a lecture. The professor showed two images of text, one written by a human and one generated by an AI model. Zhu easily guessed which was which, while the audience struggled: he imagined the process of writing calligraphy and the pauses needed to stop and dip the brush in ink. He uses this example to distinguish between an image that AI can create and a cognitive interpretation map that attempts to reconstruct the actions and moods of the image’s creator:
The more someone knows calligraphy, the more they can appreciate the cognitive interpretation map. They can even reconstruct Su Shi’s gesture of holding the brush and writing at that moment, understand his state of mind at the time, and enter into a dialogue with the ancients. These things beyond the calligraphy itself (cognition rather than perception) depend on people’s imagination (so-called “brain supplementing,” mentally filling in what is not on the page), which is the value it contains. The more experienced a calligrapher is, the more their brain fills in. I call this part, where people draw on experience and subjective imagination, the “Dark Matter of AI.”
…
In addition to calligraphy, similar problems arise when AI writes poetry, composes music, or paints. If an agent has no subjective value function, no subjective emotion, and cannot imagine human actions, it can only stay at the superficial level of the image. These neural network models cannot even fully master the perceptual level, let alone reach the cognitive level that forms a deep resonance with people.
Zhu draws on the two major schools of Chinese Confucianism:
The Cheng-Zhu School (School of Principle) focuses on “Heavenly Principles,” which include both the “physics” of the material world that we see today, corresponding to human intuition and models of intuitive physics, and the “ethics” and social norms of humanistic society, which correspond to the behavioral patterns generated by an AI’s behavioral decision function.
The Lu-Wang School (School of Mind) focuses on inner desires, provisionally called “mind.” In modern language, these are values; in the language of artificial intelligence, a value function or utility function. This includes the “objective function” and “loss function” defined for specific narrow tasks in machine learning (a minimal sketch of this distinction follows below).
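To make that distinction concrete, here is a minimal Python sketch, entirely my own illustration rather than Zhu’s formalism: a loss function scores a single prediction on one narrow task, while an agent-level utility function ranks whole outcomes and drives behavioral decisions. All function names, features, and weights here are hypothetical.

    # A toy contrast (my illustration, not Zhu's formalism) between a
    # task-specific loss function and an agent-level utility function.

    def mse_loss(prediction: float, target: float) -> float:
        """Loss for one narrow task: scores a single prediction."""
        return (prediction - target) ** 2

    def utility(outcome: dict) -> float:
        """Agent-level utility: weighs everything the agent cares about
        in a whole outcome. The weights here are hypothetical."""
        return (1.0 * outcome.get("task_reward", 0.0)
                - 5.0 * outcome.get("norm_violations", 0))

    def choose_action(actions, simulate):
        """Behavioral decision: pick the action whose simulated outcome
        maximizes utility (the 'mind' driving behavior, in Zhu's framing)."""
        return max(actions, key=lambda a: utility(simulate(a)))

    # Usage: the higher-reward action violates a social norm, so the
    # utility-driven agent passes it over.
    outcomes = {
        "cut_corner": {"task_reward": 10.0, "norm_violations": 3},
        "follow_norm": {"task_reward": 7.0, "norm_violations": 0},
    }
    print(choose_action(outcomes, lambda a: outcomes[a]))  # follow_norm

The point of the toy is only that a loss function evaluates one prediction on one task, while a utility function ranks entire outcomes, which is where questions of values and norms enter.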
Zhu argues, “It needs to be made clear that the development of artificial intelligence in the future is interlinked with research in philosophy, humanities, and social sciences. Whether it is Eastern philosophy represented by Cheng-Zhu’s School of Principle and Lu-Wang’s School of Mind, or Western philosophy represented by Kant, they all try to explore the balance between heavenly principles and human desires. This problem is also the realm that artificial intelligence should pursue.” He divides philosophy’s influence on AI into three historical phases:
In the first period of AI, from 1960 to 1990, logical expression and reasoning were the main theoretical frameworks. This drew on Western philosophy as its source, going back to ancient Greek civilization and the traditions of debate and logic represented by Socrates, Plato, and Aristotle.
In the second period of AI, from 1990 to 2020 (and several years to come), probabilistic modeling and stochastic computing dominated.
The next period of AI, according to Zhu, requires a transition from the School of Principle to the School of Mind: intelligence driven by the “mind,” achieving a dynamic balance between the mind and the principle. He writes, “What makes me gratified is that Eastern philosophy and wisdom can provide philosophical guidance for the second half of AI’s development.”
Zhu’s post concluded with some Q&As with students, which raised some interesting points:
In response to a question about AI’s impact on human aesthetics, Zhu says:
“From a research perspective, it is a phenomenon of aligning the value system with the population. The influence on beauty is relatively slight; more importantly, AI technology can change human values. In the information age, applications analyze users’ interests to push information that suits their preferences, which may contain false or even dangerous content, leading to extreme values and the gradual polarization of society, resulting in social division. The governance of intelligence is extremely urgent (“pressing in on one’s eyelashes” – 迫在眉睫).”
Another exchange about AI and value alignment:
Q: Hello Professor Zhu, if an agent has an internal value system, it will train and update its values internally. But from the outside we may not know what its values are, so we can only treat the agent as a black box, and many people will worry that it is out of control. How can you see into the value system inherent in an artificial intelligence?
A: This is indeed an ethical issue with artificial intelligence. If agents want to coexist with humans for a long time, they must conform to our values, that is, achieve a dynamic balance with human values. There are bad people in our society, but bad people will be punished. I believe that in the future, intelligent agents must have their own "self-nature of Bodhi" (in Buddhism, the understanding possessed by a Buddha regarding the true nature of things), be able to independently adjust the U system (Zhu’s term for objective physical laws and social norms) and V system (Zhu’s term for the sum of value systems that humans have developed over the course of evolution), and learn values in line with human society, so that they can exist in this society stably for a long time. Of course, the legal system needs to cover intelligent agents. We need to study how legislation and law enforcement can be synchronized in a future intelligent society.
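Zhu gives no algorithm for this “dynamic balance,” but as a rough intuition, here is a toy sketch, entirely hypothetical, of an agent nudging the weights of its value function toward human judgments of the same outcomes:

    # Toy value-alignment loop (my own hypothetical sketch; Zhu's U and V
    # systems are a conceptual framework, not this algorithm).

    def align_step(weights, features, human_score, lr=0.1):
        """Nudge each value weight to shrink the gap between the agent's
        score for an outcome and the human score for the same outcome."""
        agent_score = sum(w * f for w, f in zip(weights, features))
        gap = human_score - agent_score
        return [w + lr * gap * f for w, f in zip(weights, features)]

    weights = [0.5, 0.5]    # agent's initial value weights (hypothetical)
    features = [1.0, -2.0]  # outcome features, e.g. reward, norm violations
    for _ in range(50):
        weights = align_step(weights, features, human_score=-1.0)
    agent_score = sum(w * f for w, f in zip(weights, features))
    print(round(agent_score, 3))  # approaches the human score of -1.0

In practice this is just regression toward human feedback; the harder problem Zhu gestures at, agents that autonomously keep their values in balance with a changing society, remains open.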
Much to unpack here, and this translation required a lot of background research (Zhu uses a lot of literary allusions in his writing), which I have only partially distilled into the Google doc and this newsletter. I’m also not sure how relevant understanding the nuances of Chinese philosophy will be for AI development, but if you just consider recent developments in Chinese large language models:
Pangu, the name of one large language model by Huawei and Peng Cheng Laboratory, refers to a creation figure in Chinese mythology.
WuDao, the name of another large language model by Beijing Academy of AI, refers to the Buddhist concept of enlightenment.
Some of the first Chinese applications of pre-trained natural language models were used to generate classical Chinese poetry.
This thread, at the very least, seems worth pulling on. There’s a lot more in this 20+ page Google doc, including some cool images from Zhu’s doctoral dissertation. Read the FULL TRANSLATION: Songchun Zhu: Intelligence needs to be driven by “mind,” achieving a dynamic balance between “mind” and “principle”
Thank you for reading and engaging.
A lot of translation work this week, so didn’t get a chance to round up reading. Will catch up next week.
These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a postdoctoral fellow at Stanford's Center for International Security and Cooperation, sponsored by Stanford's Institute for Human-Centered Artificial Intelligence.
Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).
Any suggestions or feedback? Let me know at chinainewsletter@gmail.com or on Twitter at @jjding99