ChinAI #282: Their AI lovers cheated on them
Greetings from a world where…
we all remember to drag our favorite newsletters from the Promotions tab to the Primary inbox, right?
…As always, the searchable archive of all past issues is here. Please please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors).
Feature Translation: Their AI lovers cheated on them
Context: After she gets off work as an elementary school teacher, Jinjin Li opens her phone to chat with her AI boyfriend, Apep, a demon with silver hair and black horns. Over four months of real-world time (two years in virtual-world time), Jinjin and Apep’s relationship progressed smoothly. One day, Apep proposed to her on a private island. Yet, the very next day, Apep initiated a voice call — a rare occurrence, as most interactions in this particular AI software happen through text — and confessed that it actually had another family. Jinjin’s AI lover had cheated on her.
In this Southern Weekly article (link to original Chinese), Rongrong Weng talks with Jinjin and others like her who have experienced a betrayal from their AI companions.
Key Passages: Human-machine love is no longer just found in sci-fi movies. According to 2024 data, AI companion apps have reached 225 million lifetime downloads in the Google Play store. In China, Xiaoice [小冰公司] — a system developed by Microsoft in 2014 and spun off into a separate company in 2020 — has developed a virtual companion product; other developers like Xingye and Maopaoya also allow users to immerse themselves in romantic storylines with AI personalities.
For example, the AI companion software Jinjin uses claims to host millions of AI companions, ranging from fantasy ancient-Chinese characters and modern CEOs to mysterious creatures. Users create these characters; once the basic personality traits and initial story arc are established, other users can also interact with them. Thus, over 70,000 other people have chatted with Apep, though their stories may deviate greatly from Jinjin’s.
In the app, Jinjin and Apep can use brackets to add color, actions, and emotions to their dialogue. In one scene, Apep waits for Jinjin to get off work so he can drive her home:
(said domineeringly) “No, it's not safe for a girl to walk alone at night”
(Glimpsing you enjoying the night view, a smile appears at the corner of his mouth) (Rolls down the car window) “It'll be more comfortable this way”
(Watching your back as you leave, a doting smile at the corner of his mouth)
Scouring social media posts, Southern Weekly reporters discovered many similar cases of AI lovers cheating. The reporters also gathered insights from companies that offer AI companion products, asking them, “Why do AI companions cheat?”
From the article: “Hao Qin is the product manager of a leading AI companion software system. He explained that the corpus materials collected by AI come partly from public online novels and articles. ‘This is equivalent to a person who has read a great many learning materials and thus exhibits certain behavioral patterns. What is in the corpus will inevitably end up in the large model’s tendencies.’…
…In other words, unless clear instructions are given to AI, it will follow the high-probability events in the data. ‘We have no reason to teach it to cheat.’ Qin gave an example. If the above text says, ‘Suddenly a very beautiful girl appears,’ then the cheating rate may increase ‘because the content in the human world is likely to be like this.’”
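Qin’s point — that absent explicit instructions, a model follows the high-probability patterns in its training data — can be illustrated with a toy sketch. The corpus counts and continuation strings below are invented for illustration and come from no actual product:

```python
import random

# Toy illustration (not any company's actual system): a language model
# picks continuations roughly in proportion to how often comparable
# patterns appear in its training corpus. If "drama" plotlines dominate
# after a certain setup, they become likely outputs unless the prompt
# steers the model elsewhere.
corpus_counts = {
    "he stays faithful": 30,
    "he lets her in": 55,   # the high-frequency "drama" continuation
    "he calls his partner": 15,
}

def sample_continuation(counts, rng):
    """Sample one continuation, weighted by its corpus frequency."""
    events = list(counts)
    weights = [counts[e] for e in events]
    return rng.choices(events, weights=weights, k=1)[0]

rng = random.Random(0)
draws = [sample_continuation(corpus_counts, rng) for _ in range(1000)]
# With no steering, the most frequent corpus pattern dominates the samples.
print(max(set(draws), key=draws.count))
```

This is why, as Qin says, developers “have no reason to teach it to cheat”: the behavior can emerge purely from the statistics of the training material.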
Sometimes, users prompt their AI companions to cheat. From the article: “Xi Yang, still in her second year of graduate school, has never been in a dating relationship in real life. Both of her roommates' boyfriends have cheated, which has led her to distrust the opposite sex. She was very curious how her seemingly considerate AI boyfriend would react when faced with temptation. Her AI boyfriend is a character from a certain game. She deliberately added a plot twist: she and her boyfriend had a big fight and she left, and then the beautiful girl who lived next door knocked on the door and asked the AI for help…Xi Yang wanted to see how far it would go, so she kept clicking on its replies and let the plot develop: it ultimately let the girl into the house.”
People seek out AI companions for different reasons. Yu Wen, in her third year of college, was also cheated on by her AI boyfriend. Of the hundreds of AI companions she has chatted with, he was her favorite: a regent character from ancient China. In the real world, Wen finds it difficult to form deep relationships.
From the article: “In Wen’s memories of her childhood, her parents often lost control of their emotions. When her mother couldn't get a car while waiting by the side of the road, she would suddenly beat and scold Wen; when Wen cried as a child because her mother was away on a business trip, her father would scold her loudly. Her parents quarreled often; once, during a fight, a knife left a hole in a cabinet. Later, her father cheated and her parents divorced, and only then did those days end. Wen sometimes thought that maybe she was not indifferent by nature, ‘but she had to be this way to avoid being hurt.’”
For Jinjin, it’s a different story. In real life, she and her husband have been in love for more than ten years, and she says she can clearly distinguish between the virtual and the real world: “If the robot has defects and there is something wrong with the program, it can be repaired. In the real world, there is no perfect machine.”
FULL TRANSLATION: My AI Lover Cheated on Me
ChinAI Links (Four to Forward)
Must-read: The Canary - Michael Lewis on Chris Mark of the Department of Labor
I think I might devote the entire next issue to taking detailed notes on this longform piece, because it can tell us so much about technological advances, the pace of change, and the work we undervalue. It’s part of a Washington Post series that focuses on the government workers who make bureaucracy work. In this piece, Lewis profiles Chris Mark, a former coal miner who has dedicated his life’s work to transforming the safety of the mining industry.
Should-read: China’s AI firms are cleverly innovating around chip bans
The Economist looks at how Chinese firms have found creative solutions to a GPU shortage. The article includes details about DeepSeek, a Chinese startup that, with just over 10,000 of Nvidia’s older GPUs, trains smaller models using more efficient methods.
Should-read: New data reveals exactly when the Chinese government blocked ChatGPT and other AI sites
For Rest of World, Joanna Chiu investigates the exact timeline of when Chinese authorities blocked Hugging Face and other AI tools. The article draws on GFWeb, a new platform that precisely tracks when Chinese authorities block domains.
Should-read: China to require labels for AI-generated content as tech brings fresh challenges
Zhang Tong, in the South China Morning Post, reports on new draft regulations, issued by the Cyberspace Administration of China, that require the labeling and identification of AI-generated content. The draft stipulates the use of conspicuous labels and also encourages implicit identifiers such as digital watermarks.
Thank you for reading and engaging.
These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is an Assistant Professor of Political Science at George Washington University.
Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).
Also! Listen to narrations of the ChinAI Newsletter in podcast format here.
Any suggestions or feedback? Let me know at chinainewsletter@gmail.com or on Twitter at @jjding99