ChinAI #86: Privacy in the Time of Coronavirus (Part 2)
Plus, a case and model for privacy optimism
Welcome to the ChinAI Newsletter!
Greetings from a land where social distancing brings us all a little closer together…
…as always, the archive of all past issues is here and please please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors).
Feature Translation: Part II of The Public Interest and Personal Privacy in a Time of Crisis
Context: A March 8 follow-up blog post authored by Hu Yong, a Professor at Peking University’s School of Journalism and Communication, and a well-known new media critic and active blogger/microblogger whose microblog has 800,000 followers. In last week’s issue, we covered Hu’s view that “the infringement of privacy by public health surveillance can be described as shocking” in the response to the coronavirus. In this week’s issue, Hu analyzes a Feb. 4 notice by the Cyberspace Administration of China (CAC) on data privacy in coronavirus response.
Rui Zhong, Rogier Creemers, and Graham Webster did a great analysis and full translation of the notice for DigiChina here.
Key Takeaways:
Hu structures this essay around three principles for balancing the public interest and personal privacy. 1) Treat public interest (concerns) as exceptions to (the protection of) privacy. Hu writes, “Any law or policy that interferes with basic human rights must prove its legitimacy. Legitimacy requires that the measure i) complies with the law, ii) is necessary to achieve a legitimate goal, and iii) is commensurate with that goal. From this point of view, in the process of preventing the epidemic, many policies implemented throughout China violated the people's basic human rights and were inherently illegitimate.” (emphasis mine)
2) If it is really necessary to manage (restrict) privacy for the sake of public interest, then we must establish appropriate guarantees for basic civil rights and personal interests in the process of managing (restricting) privacy. Hu argues here, “If an individual's right to privacy is restricted during a particular crisis, this does not mean that he or she must yield to the public interest in an unlimited fashion. For example, (imposing) isolation via restrictions on the right to freedom of movement is legal and humane only if it is carried out on a basis that is reasonable, time-limited, and necessary for purpose as well as in a method that is voluntary and non-discriminatory wherever possible. Otherwise, it could extremely easily result in large-scale discrimination and stigmatization, and cause irreparable social harm to the targets of discrimination and stigma.”
3) Insist on fair use of information. Under this provision, Hu argues “We can see that the large-scale violation of citizens' rights under the premise of preventing and controlling the epidemic clearly violated the second provision of the "Notice" (Cyberspace Administration of China notice referenced above), which states: "The collection of personal information required for joint prevention and joint control shall occur with reference to the national standard "Personal Information Security Specification," uphold the principle of minimal scope, and limit the targets of collection in principle to diagnosed individuals, suspected individuals, individuals having come in close contact, and other such focus groups. Collection is generally not aimed at all groups in a particular locality, and actual discrimination against groups in particular locations must be prevented.”
The full translation also includes a screenshot of a social media post by a public security bureau in Zhejiang that calls for people to stop leaking the information of people returning home from Wuhan. There’s also another screenshot, from Hu’s WeChat Moments (I think), where his friends debate this topic.
One more interesting line, framed in the context of discussing the San Bernardino and Pensacola cases in the U.S.: “But you might as well ask yourself: Has history ever shown that once the government has surveillance tools, it will maintain modesty and caution when using them?” [但是,不妨再问一下自己:历史何曾显示,政府一旦拥有监视工具,会在使用它们时保持谦虚谨慎?]
Finally, his powerful closing: “Sometimes we think that technology will inevitably erode privacy; however, ultimately, humans (not "technology") choose whether or not to set default settings that permit routine access to information. The saying that the erosion of privacy is an inevitable development should be greatly scrutinized. The loss of privacy is not inevitable, just as its reconstruction is far from certain. We do not lack in our capacity to rebuild the private spaces we have lost. The key is: Do we have the will?”
FULL(ish) TRANSLATION: Part II of Hu Yong on Protecting Personal Privacy in a Time of Crisis
ChinAI Links (Four to Forward)
Must-read: The Case for Privacy Optimism
In his run-through of an extremely condensed history of privacy over the past couple hundred years, Ben Garfinkel, my colleague at GovAI, argues: "I think that the historical effect of technological progress on privacy, while certainly very mixed, has been much more positive than standard narratives suggest. It’s enhanced our privacy in a lot of ways that typically aren’t given very much attention, but that do hold great significance both practically and morally.
I think that the long-run trend seems to be one of rising social privacy [e.g. protection from gossiping neighbors] and declining institutional privacy [e.g. protection from state surveillance]. Whether or not this corresponds to a “net improvement” depends on the strength of institutional safeguards. So, insofar as technology has given certain people more “overall privacy,” technology has not done this on its own. Good governance and good institution design have also been essential.
One might expect AI to continue the long-run trend, further increasing social privacy and further decreasing institutional privacy. I don’t think it’s unreasonable, though, to hope and work toward something more. It’s far too soon to rule out a future where we have both much more social privacy and much more institutional privacy than we do today.
In short: You don’t need to be totally nuts to be an optimist about privacy."
Should-read: The Taiwan Model on Fighting the Coronavirus
There has been good critical coverage of the notion that China’s success in slowing down the coronavirus proves the validity of the “China model.” This isn’t a binary choice between the China model and the “Western” approach. This ABC News article by Stacy Chen highlights Taiwan’s impressive efforts to contain the disease: “Taiwan has only had 49 confirmed cases and one death, an astonishingly low number considering its proximity to China.” Audrey Tang, Taiwan's digital minister, has led efforts to map local supplies of face masks and to integrate big data streams from the National Health Insurance Administration and Immigration Agency to identify high-risk individuals. See her views on Taiwan as a model for digital democracy in this Economist article.
Should-read: Cross-national survey on facial recognition technology
In this paper, Genia Kostka, Léa Steinacker, and Miriam Meckel present results from a cross-national survey on facial recognition technology:
67% of Chinese respondents either strongly or somewhat accept the use of facial recognition technology in general while only 38% of Germans do (50% and 48% for the UK and U.S., respectively)
Chinese support for facial recognition technology use by private enterprises is only 17% compared to 30% for Americans, but support for the central government as a provider rises to 60% in China, whereas it is only 35% in the U.S.
The full paper contains even more great insights and highlights limitations to consider.
Also, Professor Kostka’s project at the Free University of Berlin is hiring for a new postdoc position, so please share with those who’d be interested in this type of work.
Should-read: Tsinghua Prof on the Hidden Dangers of Facial Recognition Technology
In ChinAI #77 we featured a personal essay by Tsinghua law professor Lao Dongyan that criticized the proposition to install facial recognition technology in the Beijing subway system. Thanks to Professor David Ownby at the Université de Montréal for editing the translation and featuring it on “Reading the China Dream,” which is a project that provides translations of Chinese establishment intellectuals in an effort to understand intellectual life in contemporary China.
Thank you for reading and engaging.
These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a PhD candidate in International Relations at the University of Oxford and a researcher at GovAI/Future of Humanity Institute.
Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).
Any suggestions or feedback? Let me know at chinainewsletter@gmail.com or on Twitter at @jjding99