ChinAI #88: Deepfake Drama

The First Appearance of AI Face-Swapping on a Chinese Web TV Series

Welcome to the ChinAI Newsletter!

Greetings from a land where they promised us hover cars and we got deepfake porn instead…

…as always, the archive of all past issues is here and please please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors).

Feature Translation: First Appearance of AI Face-Swapping in Chinese Web Series

If you want a pretty clear example of how society shapes technology just as much as, if not more than, technology shapes society, look no further than deepfakes. This week we look at the face-swapping special effects of the web drama “Love of Thousand Years” (三千鸦杀), which did not work well (distorted faces, stiff expressions, dissonance between neck and head). It’s no Tiger King, but the show did invite a recent wave of ridicule comparing the face-swapping effects to a horror movie, which ultimately led more people to watch it just to see the face-swap (which means it kinda worked out for the show in the end, I guess).

Context: After filming the drama, one of the actresses (Liu Lu) was involved in a kerfuffle with public transport authorities over the allowable number of cans of flammable compressed gas. She was reportedly blacklisted on a “bad artists list” [劣迹艺人], and Mango TV terminated its contract with her. To keep the new drama from being suppressed, the producer swapped her face with that of another actress. Sourced from a longstanding ChinAI favorite, jiqizhineng (Synced), this week’s article digs deeper into this “first” case of deepfakes used in TV and film.

Key Nuggets:

  • Traces the history of DeepFakes back to 2017, when a Reddit user swapped Gal Gadot’s face into an adult film. References important developments in China, such as in February 2019 when a video swapping the face of one of China’s best-known actors, Yang Mi, into a classic Hong Kong TV drama, The Legend Of The Condor Heroes, went viral, picking up an estimated 240m views before it was removed by Chinese authorities (see related Guardian article). Subsequently, ZAO, a face-swapping app, also got really hot in China.

  • Face-Swap Black Market Industries: a lowered threshold for face-swapping has gradually created an industrial chain where face-swap software and technology are provided upstream, video and photo customization is supplied midstream, and the finished erotic videos are sold downstream. “Relevant products are available for sale in Tieba, QQ groups…the prices of finished erotic videos range from 2 RMB for 1; to 30 RMB for 46; 100 RMB for 150; and 100 RMB for 200, etc. Generally, they are sold in packages, and the videos mainly feature domestic first- and second-tier female stars.”

  • The piece notes the possible application of China’s May 2019 data security measures to govern DeepFake-altered videos: In China, at the end of May 2019, the Cyberspace Administration of China and relevant departments issued the ‘Data Security Management Measures (Draft for Comment),’ which requires network operators who use big data, artificial intelligence, and other technologies to automatically synthesize news, blog posts, forum posts, comments, etc., to clearly mark such information as “synthesized”; they should not automatically synthesize information with the aim of seeking benefits or harming other people’s interests. Note: It seems like whenever I come across a relevant regulation for a particular piece of Chinese tech news, DigiChina has already done the translation. Check out their excellent translation of the draft data security management measures, which I drew from for this section, here.

  • Still large obstacles to high-performance applications of face-swaps in TV and film: this is not something ordinary folk can do, as it requires a relatively high-quality PC, a very good graphics card, and a decent amount of server usage. Problems related to celebrities’ personal privacy and the black market industry are being magnified as well.

Go Deeper:

  • Interested in the difference between the various face-swap architectures? The piece references three: DeepFakes, the “classic” face-swap; Face2Face, which focuses on “facial expression” manipulation (examples include high-quality videos of a person, e.g. Obama, appearing to say something different in a target video); and CycleGAN, which does unpaired image-to-image translation with a generative adversarial network architecture (e.g. translating a summer landscape to winter rather than translating specific person X’s face onto specific person Y). A minimal sketch of the classic DeepFakes setup appears after these bullets.

  • Want to get into the weeds of the economics of GAN-powered face-swaps vs. “manual” face-swaps (i.e. going frame by frame and manually editing, I assume)? The market price for manual face-swaps is a couple hundred thousand RMB per minute, whereas the price of AI face-swapping is about 15,000 RMB per minute.
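To picture what the “classic” DeepFakes approach actually trains, here is a minimal, illustrative PyTorch sketch (my own, not from the Synced piece) of the usual shared-encoder, two-decoder autoencoder setup: a single encoder learns a common face representation from crops of both people, each decoder learns to reconstruct one person, and the swap at inference time is just encoding person A and decoding with person B’s decoder. All module names, image sizes, and hyperparameters below are placeholders.

```python
# Minimal, illustrative sketch of the "classic" DeepFakes setup (not production code).
# One shared encoder plus one decoder per identity; at inference, encode a frame of
# person A and decode it with person B's decoder to get the swap.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 512),                           # latent "face code"
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 128, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.L1Loss()

def train_step(faces_a, faces_b):
    """faces_a / faces_b: batches of 64x64 RGB face crops of person A and person B."""
    recon_a = decoder_a(encoder(faces_a))
    recon_b = decoder_b(encoder(faces_b))
    loss = loss_fn(recon_a, faces_a) + loss_fn(recon_b, faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def swap_a_to_b(faces_a):
    """The swap itself: encode person A, decode with person B's decoder."""
    with torch.no_grad():
        return decoder_b(encoder(faces_a))
```

Real pipelines (DeepFaceLab and the like) wrap this core with face detection, alignment, color correction, and blending back into the original frame; those surrounding steps, plus limited training data, are where artifacts like the distorted faces and neck/head dissonance mocked in “Love of Thousand Years” tend to creep in.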

FULL(ish) TRANSLATION: First Appearance of AI Face-Swapping in Chinese Web Series

Must-read: The Precipice: Existential Risk and the Future of Humanity

At the beginning of this month, I was whining about our narrow view of national security and recommended this book by Toby Ord, a Senior Research Fellow at the Future of Humanity Institute. I feel like the events of this past month have inspired many of us to widen our views considerably. This book, now available in the US & Canada, presents a grand vision of the potential human flourishing of the future as well as a wake-up call about the existential catastrophes (e.g. climate change, engineered pathogens, nuclear weapons, unaligned artificial intelligence) from which we could never come back. Taken together, ending these existential risks is among the most pressing moral issues of our time. The link lets you subscribe to Toby’s newsletter to download the first chapter now.

Should-read: OpenMined for Maximizing Privacy and Effectiveness in COVID-19 Apps

There is a boom in COVID-19-related apps right now, and the team at OpenMined is trying to help those who are building/auditing/procuring such apps do so in a way that helps reduce economic and epidemic threats to society while also checking against the erosion of privacy in the process. See their live document here. OpenMined is a project that aims to “decentralize AI,” led by Andrew Trask, a PhD student at the University of Oxford and a research affiliate at GovAI.

Should-read: ChinAI Syllabus

Together with Sophie-Charlotte Fischer, Brian Tse, and Chris Byrd, I compiled a preliminary syllabus of readings on China’s AI landscape, which covers a range of topics. It was inspired by Remco Zwetsloot’s really useful syllabus on AI and International Security. Grateful to Emmie Hine for her help and for proposing the tutorial that sparked the syllabus in the first place, as well as to others, esp. Jade Leung, for suggestions. The hope is for this to be a living document, so please send recommendations for things to add.

Should-read: Megvii Open-Sources Deep Learning Framework

For SCMP, Sarah Dai and Minghe Hu in Beijing report on Megvii’s open sourcing of its deep learning framework (MegEngine). Dai and Hu get some really interesting quotes from Gao Wen, a Peking University professor who is also director-general of the country’s New Generation AI Technology Innovation and Strategic Alliance: “The latest tide of AI cannot live without deep learning technologies, whose development has everything to do with open-sourced infrastructures.” The piece cites Gao saying that TensorFlow and PyTorch hold a combined 95 per cent of market share for deep learning frameworks. MegEngine, which contains 300,000 lines of code in its alpha version, is available for download on the company’s website and GitHub.

Thank you for reading and engaging.

These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a PhD candidate in International Relations at the University of Oxford and a researcher at the Center for the Governance of AI at Oxford’s Future of Humanity Institute.

Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).

Any suggestions or feedback? Let me know at chinainewsletter@gmail.com or on Twitter at @jjding99

ChinAI #87: Chinese Academy of Sciences 2019 AI Development White Paper

Another day another white paper, daylight comes I'm on my way

Welcome to the ChinAI Newsletter!

Greetings from a land in which the residents seek to derive meaning and purpose amidst a stream of endless technocratic white papers…

…as always, the archive of all past issues is here and please please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors).

Feature Translation: 2019 AI Development White Paper

Context: Published last month by the Key Laboratory of Big Data Mining and Knowledge Management, Chinese Academy of Sciences, this White Paper provides a good overview-style update on China’s AI ecosystem divided into four parts:

  1. Key technological breakthroughs: slide 6 has some good stats on Chinese submissions at top computer vision conferences as well as mentions of key papers (e.g. the “DenseNet” architecture that won the best paper award at one of the aforementioned conferences). Interestingly, this is framed as a Tsinghua University team accomplishment, even though the primary author was a postdoctoral fellow at Cornell (he is now an assistant professor at Tsinghua).

  2. AI-empowered industry verticals: there’s a nice table that assesses various verticals by the degree of diffusion of AI technologies (slide 19), with security, finance, retail, transport, and medicine ranking the highest across a range of interesting variables (e.g. data cleanliness, maturation of data storage processes)

  3. AI Open Innovation Platforms: some cool case studies here — I highlighted two slides on Tencent Miying’s auxiliary diagnosis platform (34-35), which make some really strong claims about Miying’s capabilities and also provide an illustrative graphic of Miying’s industrial ecosystem and its interaction pathways with medical software providers and hospitals. Side note: I’m not that convinced by these “open innovation platforms” associated with companies. Isn’t Miying still a proprietary framework, with data sharing really the only thing that’s different here? Open-source toolkits for neural machine translation, like THUMT developed by Tsinghua, seem more important as open innovation platforms, since they actually help a bunch of other companies build on top of open source code (slide 9).

  4. List of world’s top AI companies: obviously very arbitrary but interesting to see what researchers from this CAS key lab highlight and their selection criteria (45-46).

For those interested in digging more into the Google slide deck, here is a quick guide to my approach. Whenever I replace text directly in the PPT deck, it's my attempt at a direct translation (all the text in the table). Whenever I comment on a slide, those are paraphrased summaries. Only slides that are marked with a comment were translated/analyzed in depth because I thought there was interesting, new information, so skip the others unless you read Mandarin or something catches your eye in particular:

(very) PARTIAL TRANSLATION: 2019 AI Development White Paper

ChinAI Links (Four to Forward)

Must-read: China Neican Newsletter for policy-focused China analyses

Have recently started following the China Neican 内参 newsletter, edited by Yun Jiang and Adam Ni, two experienced China researchers who provide weekly briefs of commentary, analysis, and policy recommendations on a range of China-related topics, such as geopolitical competition, trade dependence, technology competition, foreign interference, regional security, and human rights. As both Adam and Yun have advised the Australian government on these issues, it’s particularly refreshing to get a non-U.S.-centric view. I particularly enjoyed the recent March 15 newsletter, which highlighted how Chinese netizens evaded censorship of an interview with a Wuhan doctor via creative “translations” of the article into emoji and oracle bone versions.

Should-read: The Economist on China’s Use of High-tech Surveillance Tools to Curb Covid-19

A sobering analysis about the impact of high-tech surveillance on social distancing in China. “Much of China’s success so far in containing the virus’s spread outside Hubei has depended on mobilising legions of people to man checkpoints armed with clipboards and thermometer guns, or to go door-to-door making note of sniffles…For now, China’s digital monitoring methods for covid-19 are a hodgepodge of disjointed efforts by city and provincial governments…”

Should-read: (somewhat) Friendly Explanation of DenseNet paper

A helpful explainer, by Mukul Khanna, of the DenseNet paper mentioned above.

Should-read: The Education China Hands Need, But Most Do Not Get

Tanner Greer, for The Scholar’s Stage Forum, argues for a text-based approach to the study of China’s Communist Party politics — drawing lessons from Simon Leys’s 1990 review of Laszlo Ladany’s book on the Chinese Communist Party. Ladany published a newsletter, called China News Analysis, that “was drawn exclusively from official Chinese sources (press and radio).” H/t to Ben Garfinkel for pointing me to this post.

Thank you for reading and engaging.

These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a PhD candidate in International Relations at the University of Oxford and researcher at GovAI/Future of Humanity Institute.

Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).

Any suggestions or feedback? Let me know at chinainewsletter@gmail.com or on Twitter at @jjding99

ChinAI #86: Privacy in the Time of Coronavirus (Part 2)

Plus, a case and model for privacy optimism

Welcome to the ChinAI Newsletter!

Greetings from a land where social distancing brings us all a little closer together…

…as always, the archive of all past issues is here and please please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors).

Feature Translation: Part II of The Public Interest and Personal Privacy in a Time of Crisis

Context: A March 8 follow-up blog post authored by Hu Yong, a Professor at Peking University’s School of Journalism and Communication, and a well-known new media critic and active blogger/microblogger whose microblog has 800,000 followers. In last week’s issue, we covered Hu’s view that “the infringement of privacy by public health surveillance can be described as shocking” in the response to the coronavirus. In this week’s issue, Hu analyzes a Feb. 4 notice by the Cyberspace Administration of China (CAC) on data privacy in coronavirus response.

Rui Zhong, Rogier Creemers, and Graham Webster did a great analysis and full translation of the notice for DigiChina here.

Key Takeaways:

  • Hu structures this essay around three principles for balancing the public interest and personal privacy. 1) Treat public interest (concerns) as exceptions to (the protection of) privacy. Hu writes, “Any law or policy that interferes with basic human rights must prove its legitimacy. Legitimacy requires that it i) complies with the law, ii) is necessary to achieve a legitimate goal, and iii) is commensurate with that goal. From this point of view, in the process of preventing the epidemic, many policies implemented throughout China violated people’s basic human rights and were inherently illegitimate.” (emphasis mine)

  • 2) If it is really necessary to manage (restrict) privacy for the sake of public interest, then we must establish appropriate guarantees for basic civil rights and personal interests in the process of managing (restricting) privacy. Hu argues here, “If an individual's right to privacy is restricted during a particular crisis, this does not mean that he or she must yield to the public interest in an unlimited fashion. For example, (imposing) isolation via restrictions on the right to freedom of movement is legal and humane only if it is carried out on a basis that is reasonable, time-limited, and necessary for purpose as well as in a method that is voluntary and non-discriminatory wherever possible. Otherwise, it could extremely easily result in large-scale discrimination and stigmatization, and cause irreparable social harm to the targets of discrimination and stigma.”

  • 3) Insist on fair use of information. Under this provision, Hu argues “We can see that the large-scale violation of citizens' rights under the premise of preventing and controlling the epidemic clearly violated the second provision of the "Notice" (Cyberspace Administration of China notice referenced above), which states: "The collection of personal information required for joint prevention and joint control shall occur with reference to the national standard "Personal Information Security Specification," uphold the principle of minimal scope, and limit the targets of collection in principle to diagnosed individuals, suspected individuals, individuals having come in close contact, and other such focus groups. Collection is generally not aimed at all groups in a particular locality, and actual discrimination against groups in particular locations must be prevented.”

  • Full translation also has a screenshot of a social media post by a public security bureau in Zhejiang that calls for people to stop leaking the information of people returning from Wuhan to their homes. In the full translation, there’s also another screenshot from Hu’s Wechat Moments (I think) where his friends debate this topic.

  • One more interesting line, framed in the context of discussing the San Bernardino and Pensacola cases in the U.S.: “But you might as well ask yourself: Has history ever shown that once the government has surveillance tools, it will maintain modesty and caution when using them?” [但是,不妨再问一下自己:历史何曾显示,政府一旦拥有监视工具,会在使用它们时保持谦虚谨慎?]

  • Finally, his powerful closing: “Sometimes we think that technology will inevitably erode privacy; however, ultimately, humans (not "technology") choose whether or not to set default settings that permit routine access to information. The saying that the erosion of privacy is an inevitable development should be greatly scrutinized. The loss of privacy is not inevitable, just as its reconstruction is far from certain. We do not lack in our capacity to rebuild the private spaces we have lost. The key is: Do we have the will?”

FULL(ish) TRANSLATION: Part II of Hu Yong on Protecting Personal Privacy in a Time of Crisis

ChinAI Links (Four to Forward)

Must-read: The Case for Privacy Optimism

In his run-through of an extremely condensed history of privacy over the past couple hundred years, Ben Garfinkel, my colleague at GovAI, argues: "I think that the historical effect of technological progress on privacy, while certainly very mixed, has been much more positive than standard narratives suggest. It’s enhanced our privacy in a lot of ways that typically aren’t given very much attention, but that do hold great significance both practically and morally.

I think that the long-run trend seems to be one of rising social privacy [e.g. protection from gossiping neighbors] and declining institutional privacy [e.g. protection from state surveillance]. Whether or not this corresponds to a “net improvement” depends on the strength of institutional safeguards. So, insofar as technology has given certain people more “overall privacy,” technology has not done this on its own. Good governance and good institution design have also been essential.

One might expect AI to continue the long-run trend, further increasing social privacy and further decreasing institutional privacy. I don’t think it’s unreasonable, though, to hope and work toward something more. It’s far too soon to rule out a future where we have both much more social privacy and much more institutional privacy than we do today.

In short: You don’t need to be totally nuts to be an optimist about privacy."

Should-read: The Taiwan Model on Fighting the Coronavirus

There has been good critical coverage of the notion that China’s success in slowing down the coronavirus proves the validity of the “China model.” This isn’t a binary choice between the China model and the “Western” approach. This ABC News article by Stacy Chen highlights Taiwan’s impressive efforts to contain the disease: “Taiwan has only had 49 confirmed cases and one death, an astonishingly low number considering its proximity to China.” Audrey Tang, Taiwan's digital minister, has led efforts to map local supplies of face masks and to integrate big data streams from the National Health Insurance Administration and Immigration Agency to identify high-risk individuals. See her views on Taiwan as a model for digital democracy in this Economist article.

Should-read: Cross-national survey on facial recognition technology

In this paper, Genia Kostka, Léa Steinacker, and Miriam Meckel present results from a cross-national survey on facial recognition technology:

  • 67% of Chinese respondents either strongly or somewhat accept the use of facial recognition technology in general while only 38% of Germans do (50% and 48% for the UK and U.S., respectively)

  • Chinese support for facial recognition technology use by private enterprises is only 17% compared to 30% for Americans, but support for the central government as a provider rises to 60% in China, whereas it is only 35% in the U.S.

  • Full paper contains even more great insights and highlights limitations to consider

Also, Professor Kostka’s project at the Free University of Berlin is hiring for a new postdoc position, so please share with those who’d be interested in this type of work.

Should-read: Tsinghua Prof on the Hidden Dangers of Facial Recognition Technology

In ChinAI #77 we featured a personal essay by Tsinghua law professor Lao Dongyan that criticized the proposition to install facial recognition technology in the Beijing subway system. Thanks to Professor David Ownby at the Université de Montréal for editing the translation and featuring it on “Reading the China Dream,” which is a project that provides translations of Chinese establishment intellectuals in an effort to understand intellectual life in contemporary China.

Thank you for reading and engaging.

These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a PhD candidate in International Relations at the University of Oxford and researcher at GovAI/Future of Humanity Institute.

Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).

Any suggestions or feedback? Let me know at chinainewsletter@gmail.com or on Twitter at @jjding99

ChinAI #85: Privacy in the Time of Coronavirus

Plus, A Critique of the Narrow View of National Security

Welcome to the ChinAI Newsletter!

Greetings from a land where drifters are allowed to find their way back home…

…as always, the archive of all past issues is here and please please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors).

Feature Translation: The Public Interest and Personal Privacy in a Time of Crisis

Context: A March 6 blog post authored by Hu Yong, a Professor at Peking University’s School of Journalism and Communication, and a well-known new media critic and active blogger/microblogger whose microblog has 800,000 followers. Hu writes that “the infringement of privacy by public health surveillance can be described as shocking” in the response to the coronavirus.

Key Takeaways:

  • Hu draws from two reports — the first, by Southern Metropolis Daily, details how the personal information of more than 7,000 people who returned home from Wuhan (for the holidays) was leaked. For instance, Wu Xiao, a freshman at a university of geosciences in Wuhan, returned home to Ningdu County (Jiangxi Province) for the holidays on January 10. Two weeks later, she saw on her family's Wechat group that there was a "Data Sheet of People who Returned to Ningdu from Wuhan," separated into four types of transportation methods into Ningdu (e.g. flight, rail). Apart from her, there were 400 or 500 other people who had their personal information leaked, including their identification numbers, phone numbers, specific home addresses, train information, etc. Feeling angry and helpless, Wu Xiao said, "College students and workers returning home for the winter holidays is normal, how come some people say we are sinners for returning?"

  • The second is a Xinhua Daily Telegraph column telling the story of Xu Chang, a native of Hubei, who went with her husband to the countryside in her husband’s native Xuzhou to spend the New Year. When they returned to their apartment community in the Shunyi District of Beijing on February 16th, the neighborhood security staff said Xu Chang could not enter because her ID showed she was from Hubei (Wuhan is the capital of that province). Even after they tried to prove they had not gone to Hubei by texting their telecom provider to get mobile location data for the past 14 days (these types of queries exceeded 50 million in the first week the big data platform was available), they were denied entry to their homes. Commenting on this type of ID card/hukou discrimination — many Hubei netizens have posted that they have not been able to return home because “their ID cards start with 420” — Professor Hu writes, “Here, private spaces have vanished without a trace.”

FULL(ish) TRANSLATION: Hu Yong on Protecting Personal Privacy in a Time of Crisis

Mini-Reflections and ChinAI Links (Four to Forward)

I’ve been thinking a lot lately about how so much of my research and thinking on technology & international politics has been co-opted by such a narrow view of national security — namely, that the U.S. needs to be more technologically dominant than China. The proliferation of the U.S.-Sino AI Arms Race meme is a clear byproduct of this view. To put it bluntly, it’s a glorified dick-measuring contest. A lot of my work tries to provide more clarity on what technological dominance even means — my testimony before the U.S.-China Commission to present a better framework for comparing “national AI capabilities,” for instance — but all of that is still operating within the confines of such a narrow view of national security. All I’m doing is offering a better ruler, if you will. It genuinely makes me sad how much of the mental space of other young bright minds has also been sucked up by this narrow view — it shapes which research projects we pursue, and it means the justification for certain points has to be couched in this narrow view (the U.S. should be open to Chinese immigrants not because it’s a guiding principle rooted in our nation’s founding but because it increases our technological capabilities)!

I know there are a diversity of views in the U.S. national security community (and I’m painting with a broad, crude brush), but it sometimes feels like we are running a campaign to “Make America Technologically Great Again (MATGA).” Although, we may need to choose a different color for the campaign hats, since the red may be confused for support of the Soviets — oops, excuse me, the Chinese.

To be clear, understanding how AI affects U.S.-Sino military dynamics is obviously very important to national security. But military competition is affected by factors other than technological size, and the conversation about national security should not be dominated by military concerns. What would a broader vision of national security look like? How about one that values factors such as the health of our institutions in protecting individual rights like privacy just as much as the strength of our national innovation system? Or one that believes that a society that avoids both digital authoritarianism and surveillance capitalism — even if that third way slightly reduces intelligence capabilities or economic productivity — is more secure. Or one that recognizes that actions like the internment of Japanese-Americans during WWII, even if the justifications of military necessity were actually true (which they weren’t), were still net-negative for the security of the American nation.

Or maybe it could start with this week’s must-read, which helped me break out a little from this narrow view of national security.

Must-read: The Precipice: Existential Risk and the Future of Humanity

Toby Ord, a Senior Research Fellow at the Future of Humanity Institute, presents a grand vision of the potential human flourishing of the future as well as a wake-up call about the existential catastrophes (e.g. climate change, engineered pathogens, nuclear weapons, unaligned artificial intelligence) from which we could never come back. Taken together, ending these existential risks is among the most pressing moral issues of our time. The link lets you subscribe to Toby’s newsletter to download the first chapter now. ***Available in the UK, Australia, and New Zealand — available for pre-order in the US & Canada (comes out March 24th)

Should-listen: Institute for Freedom and Community Streamed Talk on China: Big Data, AI, and Privacy

It was really powerful to participate alongside Joy Ma and Xiao Qiang in this panel discussion at St. Olaf’s. There’s a striking moment at around the 1 hr. 1 min. mark where Xiao notes how all three of us are connected to China but in different ways (Xiao Qiang, for instance, is an exiled dissident) — and how our personal relationships to China inevitably seep into our differing perspectives on AI-enabled surveillance in China. Personally, I see the acknowledgement of one’s personal biases as a sign of strength, not weakness, and I think there should be more of these types of reflections on how one’s personal experiences in China/with China frame how one views China.

  • Consider Trump’s trade advisor Peter Navarro. Let’s just say that my hypothesis is that the relationship between these two is statistically significant: his views on U.S.-China relations and his belief that “you’ve got to be nuts to eat Chinese food.”

Should-read: Chinese Hospitals Deploy AI to Help Diagnose Covid-19

Tom Simonite for Wired looks at the use of Infervision’s Covid-19 diagnostic tool in Zhongnan Hospital of Wuhan University. A project that makes one medical imaging research director “both skeptical and cautiously optimistic” — it’s plausible that the algorithm could help staff reading CT scans to work faster but it would only make a significant difference if radiologists’ time is the major bottleneck in a hospital’s operations.

Should-read: Chinese Citizens are Racing Against Censors to Preserve Coronavirus Memories on Github

Jane Li for Quartz reports on how Chinese citizens are using sites like GitHub to preserve memories of the coronavirus epidemic erased from the Internet by censors. Just another example of how Quartz is consistently providing some of the best China tech coverage.

Thank you for reading and engaging.

These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a PhD candidate in International Relations at the University of Oxford and a researcher at the Center for the Governance of AI at Oxford’s Future of Humanity Institute.

Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).

Any suggestions or feedback? Let me know at chinainewsletter@gmail.com or on Twitter at @jjding99

ChinAI #84: Biometric Recognition White Paper 2019

More from white papers on technical standards

Welcome to the ChinAI Newsletter!

Greetings from a land in which the residents seek to derive meaning and purpose amidst a stream of endless technocratic white papers…

…as always, the archive of all past issues is here and please please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors).

Feature Translation: Biometric Recognition White Paper 2019

Context: Published in November 2019 by China Electronics Standardization Institute (CESI) & the Subcommittee on Biometric Recognition, National Information Technology Standardization (NITS) Technical Committee

  • Writing units included Sensetime, Xiaomi, Pingan, Ant Financial, Cloudwalk, Fudan University, etc. Notable absentees were Megvii and Yitu, two of the “big four” facial recognition giants.

  • Sensetime’s leading role: it was appointed in December 2019 to lead a working group in charge of drawing up national standards for the facial recognition industry. This is one of six working groups under the NITS Biometric Recognition Sub-Committee, established in 2013. The other five mentioned in the White Paper cover iris recognition, vein recognition, behavior recognition, genome recognition, and mobile device biometric recognition standards.

  • In Sensetime’s Wechat post about the announcement, it claims to have been the chief compiler of 55 international, national, industry, and group standards and to have obtained important seats in international standards organizations such as ISO and IEEE.

  • A lot of the big-picture implications of China’s push to shape AI standards were covered in an earlier DigiChina article I wrote with Paul Triolo and Samm Sacks.

Key Takeaways:

  • The rapidly growing biometric recognition market in China: more than 4,000 companies in the field of biometric recognition, with 558 new companies in 2018. The size of China’s biometrics market has increased from 80 million RMB in 2002 to 17.01 billion RMB in 2018.

  • Government guidance has played an important role: “In 2012, Chinese investment and financing in biometrics was only 9 million RMB. With the government’s policy support for biometrics in China, the scale of biometrics investment and financing reached 16.381 billion RMB by the end of 2018.”

  • Why does standardization matter? The White Paper notes that there is currently uneven product quality and inadequate ways to test/compare which algorithms are better. Section 5 of the report discusses CESI’s efforts to serve as a third-party testing agency that tests whether biometric products comply with standards.

  • The Appendices reveal the systematic nature of this standards effort — Appendix 1 tracks 118 international standards in biometrics (along with which ones have been adopted domestically); A2 lays out the 43 biometric recognition standards published domestically; and A3 highlights 5 standards currently being developed, the most recent of which (2019) was “Information Technology: Biometric Recognition Presentation Attack Detection -- Part 1: Framework.” Note: think of presentation attack detection as anti-spoofing mechanisms. For instance, how to detect if someone is holding up an artifact sample (e.g. a printed photo) to try and fool the facial recognition system (see the rough sketch after this list).
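To make the anti-spoofing idea concrete, here is a rough, hypothetical sketch (not drawn from the White Paper or the standard itself) of how presentation attack detection is commonly framed: a binary classifier that scores a face crop as live vs. spoofed, which the recognition pipeline consults before accepting a match. Every name, threshold, and network choice below is a placeholder.

```python
# Hypothetical sketch of presentation attack detection (PAD) as a binary classifier:
# given a face crop, estimate the probability that it shows a live face rather than
# a spoof artifact (printed photo, replayed video, mask). All names are made up.
import torch
import torch.nn as nn

class SpoofDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 112 -> 56
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 56 -> 28
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)  # single logit: live vs. spoof

    def forward(self, face_crop):
        h = self.features(face_crop).flatten(1)
        return torch.sigmoid(self.classifier(h))  # P(live)

def accept_match(face_crop, recognizer_says_match: bool, detector, live_threshold=0.9):
    """Only accept a recognition match if the PAD score clears the liveness threshold."""
    p_live = detector(face_crop).item()
    return recognizer_says_match and p_live >= live_threshold

# Usage with a dummy tensor standing in for a single 112x112 RGB face crop:
detector = SpoofDetector()
crop = torch.rand(1, 3, 112, 112)
print(accept_match(crop, recognizer_says_match=True, detector=detector))
```

In practice, PAD systems also lean on cues that a printed photo or replayed video cannot easily fake, such as depth or infrared imaging and blink/motion analysis; the standards effort is partly about agreeing on how to test such mechanisms consistently.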

Blast from the past — let’s compare some of the stats from the White Paper to a 2006 presentation on China’s biometrics industry by a Professor at the Center for Biometrics and Security Research (part of the Chinese Academy of Sciences and the largest biometrics team in China at the time):

  • Gives a really nice way to track how the biometrics industry has evolved over the last decade or so: In 2006, fingerprint was 95.2% of market share, facial recognition comprised 1.1% of market share, and iris recognition was .5% of market share; in 2019, fingerprint recognition still leads with over 1/3 of the overall market share, but facial recognition is up to 16%, iris recognition is at 11%, and voiceprint recognition is at 11%.

  • China had just joined SC37 two years earlier, in 2004 (SC37 is the ISO/IEC joint technical committee’s subcommittee focused on biometrics)

EXCERPTED TRANSLATION: BIOMETRIC RECOGNITION WHITE PAPER 2019

ChinAI Links (Four to Forward)

Must-read: China Digital Times Roundup on Concerns Over Adoption and Export of Biometric Surveillance

For more coverage on China’s use of biometrics in surveillance, see Samuel Wade’s excellent roundup in China Digital Times. For the other three links in this week’s Four to Forward, I’ll highlight a few from CDT’s roundup I found especially interesting.

Should-read: Facial Recognition: How China Cornered the Surveillance Market

From an FT piece by Yuan Yang and Madhumita Murgia on Chinese companies serving a growing international market, this paragraph caught my eye:

“US companies, for all the lip service they pay to technology and ethics, are also building surveillance tech, and indeed supplying Chinese companies that produce it,” says Stephanie Hare, an independent researcher of facial recognition technology and ethics, and a former employee of Palantir. “This leaves everyone else with a decision: be spied on by the US or by China? This point was made in the German parliament last week, and the US was very upset about it, saying there can be no moral equivalence between China’s authoritarianism and US values.”

Should-read: China’s Genetic Research on Ethnic Minorities Sets Off Science Backlash

By Sui-Lee Wee and Paul Mozur of NYT, this piece unpacks backlash to high-profile journals that publish papers by Chinese scientists affiliated with surveillance agencies — including a call for retraction of papers written by scientists backed by Chinese security agencies that focus on the DNA of minority ethnic groups.

Should-read: Concerns Over Facial Recognition Use High, China Survey Says

Cai Xuejiao of Sixth Tone reports on a survey conducted by Nandu Personal Information Protection Research Center (a favorite of past ChinAI newsletters): “More than 73% said they would prefer alternatives to sharing their facial data, and 83% said they wanted a way to access or delete the data.” Nandu surveyed 6000+ people between October and November 2019.

Thank you for reading and engaging.

These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a PhD candidate in International Relations at the University of Oxford, Researcher at GovAI/Future of Humanity Institute, and non-resident Research Fellow at the Center for Security and Emerging Technology.

Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).

Any suggestions or feedback? Let me know at chinainewsletter@gmail.com or on Twitter at @jjding99
