ChinAI #79: A Mother and her AI Daughter
Welcome to the ChinAI Newsletter!
An early Happy Chinese New Year to all — if you haven’t called your mom or daughter recently, this week’s piece will really make you want to do that.
As always, the archive of all past issues is here and please please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors).
Feature Translation: A Mother Who Lost Her Only Daughter Decides to Make Her Into an AI
This week’s translation, of a piece in 人物 (a Chinese magazine that profiles celebrities and contemporary figures), tells the story of Li Yang and her daughter Chen Jin, who passed away due to T-lymphoblastic lymphoma at age 14. If there was a way to have your loved ones stay by your side forever, what choice would you make? This week’s piece follows Li Yang as she tries to restore Chen Jin in the form of an AI companion.
Some excerpts follow:
In Li Yang's imagination, she can take AI Chen Jin anywhere: go to the cafe, to Jeju Island to look at the sea, to Australia, where there are lazy koalas and bouncing kangaroos, or Turkey, to take a ride in a hot air balloon to look at the scenery... they can travel together, talk and laugh together and share food, just like before.
According to research by the Chinese Academy of Social Sciences, China currently has at least one million families whose only child has passed away. According to data from the Ministry of Health, this number is increasing at an annual rate of 76,000.
In September, Alibaba AI Labs received a private letter from Li Yang asking for help: "Hello, I have something that I hope you can help me with. My daughter has died, but I miss her so much. Can I send photos and videos of her to you so that you can make them into software that interacts with me in her likeness?"
On the synthesized recording of Chen Jin’s voice: The recording was of an essay written by Chen Jin, which recounted the story of her going hiking with her mother. When the girl climbed halfway up the mountain, she was exhausted and wanted to give up and go down the mountain. At this moment, "Mom smiled and answered meaningfully: 'Child, remember that a famous person once said that success is persistence, and success depends not on the size of your strength but on how long you can persist. As long as you persist, you will be able to climb to the top of the mountain in no time!'" After listening to her mother's encouragement, she "pulled her mother's hand and rushed upwards, her fighting spirit re-ignited. Repeatedly gritting her teeth in persistence, exhausting her body’s chaotic energy. Finally, I climbed to the top of the mountain. I was so excited that I danced for joy, jumping up and down, just like a general who had won the battle."
FULL TRANSLATION: A Mother Who Lost Her Only Daughter Decides to Make Her Into an AI
ChinAI Links (Four to Forward)
An All-GovAI week of links, featuring some work I didn’t get to cover in previous issues:
Must-read: GovAI 2019 Annual Report
It’s been an incredibly fruitful year for the team here at GovAI. This annual report provides a summary of what we got up to in this past year, expertly compiled by our head of ops and policy engagement, Markus Anderljung. As our Director, Allan Dafoe, writes in his note, “As part of our growth ambitions for the field and GovAI, we are always looking to help new talent get into the field of AI governance, be that through our Governance of AI Fellowship, hiring researchers, finding collaborators, or hosting senior visitors. If you’re interested, visit www.governance.ai for updates on our latest opportunities, or consider reaching out to Markus Anderljung (markus.anderljung@philosophy.ox.ac.uk).”
Should-Read: Who Will Govern AI? Learning from the history of strategic politics in emerging technologies
Jade Leung’s D.Phil thesis examines how control over previous strategic general-purpose technologies – aerospace technology, biotechnology, and cryptography – changed over each technology’s lifecycle. Specifically, she highlights how the relationships among the state, private actors, and researchers evolved as the technology matured, drawing out key implications for how political dynamics may play out in the AI space.
Should-Read: Near term versus long term AI risk framings
Unpacks the divide between near-term and long-term AI risks along four dimensions: what kinds of technological capabilities the issues relate to; whether to focus on the immediate impacts of AI or possible impacts much further into the future; how well-understood or speculative the issues are; and whether to focus on impacts at all scales or to prioritize those that may be particularly large in scale. Interestingly, they note that projects focused on the intermediate scale of AI impacts may be receiving relatively less attention.
This paper by Carina Prunkl, a senior research scholar at FHI, and Jess Whittlestone (Centre for the Study of Existential Risk, Cambridge) was accepted to the AAAI AI Ethics & Society Conference taking place in Feb 2020.
Should-Read: The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?
Toby Shevlane and Allan Dafoe’s paper, also accepted to the AIES conference, examines publication norms in AI through an offense-defense framework. Crucially, they show that the existing conversation around AI has heavily borrowed concepts and conclusions from one particular field: vulnerability disclosure in computer security. They conclude, “We caution against AI researchers treating these lessons as immediately applicable. There are important differences between vulnerabilities in software and the types of vulnerabilities exploited by AI. It is therefore important to explore analogies with multiple fields and to consider any properties that may make AI unique. Ultimately, we suggest that the security benefits of openness are likely weaker within AI than in computer security.”
Thank you for reading and engaging.
These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a Rhodes Scholar at Oxford, PhD candidate in International Relations, Researcher at GovAI/Future of Humanity Institute, and Research Fellow at the Center for Security and Emerging Technology.
Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).
Any suggestions or feedback? Let me know at chinainewsletter@gmail.com or on Twitter at @jjding99