ChinAI #125: Top 10 Lists for 2020 & 2021

Where'd you come from and where'd you go

Greetings from a world where…

the best Plan B will emerge from the multitudes

…Please please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors). As always, the searchable archive of all past issues is here.

***Translation note: Thanks to many readers for commenting on a translation question from the previous ChinAI issue re: 有线通信, for which suggestions converged on “fixed-line communications” as the best translation.

Feature Translation: The Ten Biggest Technological Advances in AI in 2020

The Beijing Academy of Artificial Intelligence asked its scholars to come up with a joint top 10 list (links to the original Mandarin) of the most important AI advances in 2020. Instead of a Google doc, I’ve just translated what they came up with below:

10: Controlling Fairness and Bias in Dynamic Learning-to-Rank

Cornell University proposed an unbiased and fair ranking model to alleviate “Matthew effect” issues with online search rankings

In recent years, the fairness of retrieval and recommendation models based on counterfactual learning has become an important research direction in the field of information retrieval. Related research results have been widely used in click-data correction, offline model evaluation, etc., and some of these technologies have already been implemented in recommendation and search products from companies such as Alibaba and Huawei. In July 2020, the team of Professor Thorsten Joachims of Cornell University published FairCo, a fair and unbiased learning-to-rank model, which won the SIGIR 2020 Best Paper Award (the conference on Research and Development in Information Retrieval)…The work has received wide attention and praise from the industry.
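
The controller idea at the heart of FairCo can be sketched in a few lines: at each ranking step, every item's relevance estimate gets a corrective boost proportional to how far its accumulated exposure lags behind what its merit warrants. The function below is my own simplified illustration of this proportional-control scheme, not the paper's exact formulation; the name `fairco_scores` and the exact error term are assumptions.

```python
import numpy as np

def fairco_scores(relevance, exposure, lam=0.01):
    """One ranking step of a FairCo-style proportional controller (sketch).

    Items whose accumulated exposure lags behind what their relevance
    (merit) warrants receive a corrective boost, counteracting the
    rich-get-richer "Matthew effect" of ranking purely by relevance.
    """
    merit = relevance / relevance.sum()            # each item's fair share
    share = exposure / max(exposure.sum(), 1e-12)  # actual exposure share
    # Positive error means the item is under-exposed relative to its merit
    error = merit - share
    return relevance + lam * exposure.sum() * error
```

Ranking by `np.argsort(-fairco_scores(relevance, exposure))` then lets under-exposed but relevant items climb back up over time.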

9: Google and Facebook teams each proposed new unsupervised representation learning algorithms

At the beginning of 2020, Google and Facebook proposed SimCLR and MoCo, respectively, both of which can learn representations of images from unlabeled datasets. The framework behind the two algorithms is contrastive learning, whose core training signal is the "distinguishability" of images: the model must determine whether two inputs are different views of the same image or come from two entirely different images. This task requires no human annotation, so a large amount of unlabeled data can be used for training. Although Google and Facebook handle many training details differently, both works show that unsupervised learning models can approach or even match the results of supervised models.
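
The "distinguishability" signal is typically implemented as an InfoNCE-style contrastive loss: embeddings of two augmented views of the same image form a positive pair, and every other image in the batch acts as a negative. The NumPy sketch below is a simplified illustration, not the exact SimCLR or MoCo objective (those differ in details such as projection heads and momentum encoders):

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Contrastive (InfoNCE-style) loss over a batch of paired embeddings.

    z1[i] and z2[i] are embeddings of two augmented views of the same
    image; every other pairing in the batch is treated as a negative.
    """
    # L2-normalize so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (N, N) similarity matrix
    # Row i's positive is column i; all other columns are negatives
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

In practice `z1` and `z2` would come from an encoder network applied to two random augmentations of the same batch of images.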

8: MIT uses only 19 brain-like neurons to control self-driving cars

Inspired by the small brains of animals such as the nematode, teams from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), TU Wien, and IST Austria used only 19 brain-like neurons to control autonomous vehicles, whereas a conventional deep neural network requires millions of neurons. In addition, this neural network is capable of imitation learning and has the potential to be extended to warehouse automation robots and other application scenarios. The results were published on October 13, 2020 in Nature Machine Intelligence, a Nature family journal.

7: Peking University achieves for the first time a neural network high-speed training system based on phase-change memory

In December 2020, the team of BAAI scholar Yuchao Yang at Peking University proposed and implemented a high-speed neural network training system based on phase-change memory (PCM), which effectively reduces the time and energy costs of training artificial neural networks, a process that has been difficult to implement on-chip. The system builds on the direct feedback alignment (DFA) algorithm and uses the randomness of PCM conductance to naturally generate the random weights used to propagate errors, effectively reducing the system's hardware overhead as well as the time and energy consumed during training. The system performs well in training large-scale convolutional neural networks, providing a new direction for applying artificial neural networks on terminal platforms and realizing on-chip training.

***Comment: I couldn’t find the paper for this, so I linked to Professor Yang’s research page.
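
For readers unfamiliar with direct feedback alignment: instead of propagating errors backward through the transposed forward weights as backpropagation does, DFA sends the output error straight to each hidden layer through a fixed random matrix, which is exactly the kind of random projection that PCM conductance variability can supply in hardware. The toy example below is my own illustration of plain DFA on a made-up task, not the Peking University system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer network trained with direct feedback alignment (DFA):
# the output error reaches the hidden layer through a FIXED random
# matrix B rather than through W2.T as in backprop. Sizes and the
# task (predicting the sign of one input feature) are made up.
X = rng.standard_normal((64, 4))
y = (X[:, :1] > 0).astype(float)  # simple binary target

W1 = rng.standard_normal((4, 16)) * 0.1
W2 = rng.standard_normal((16, 1)) * 0.1
B = rng.standard_normal((1, 16))  # fixed random feedback matrix

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

lr = 0.5
for _ in range(300):
    h = np.tanh(X @ W1)      # hidden activations
    out = sigmoid(h @ W2)    # network output
    e = out - y              # output error
    # DFA step: a random projection of the error replaces backprop's W2.T
    dh = (e @ B) * (1 - h ** 2)
    W2 -= lr * h.T @ e / len(X)
    W1 -= lr * X.T @ dh / len(X)

final_loss = float(np.mean((out - y) ** 2))
```

Despite the feedback weights never matching the forward weights, the forward weights tend to align with the random feedback over training, so the loss still falls.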

6: Tsinghua University first proposes the concept of neuromorphic completeness of brain-like computing and a corresponding system hierarchy

In October 2020, a team including BAAI scholars Zhang Youhui, Li Guoqi, and Song Sen of Tsinghua University put forward the concept of "neuromorphic completeness" and a corresponding system hierarchy, which addresses the compatibility between software and hardware, and demonstrated through theoretical analysis and prototype experiments the hardware completeness and compiling feasibility of this type of system, expanding the application range of brain-like computing systems to support general-purpose computing. The research results were published in the journal Nature on October 14, 2020. Nature commented that the new concept of "completeness" promotes neuromorphic computing, constituting a "breakthrough solution" for the tight coupling of software and hardware in brain-like systems.

5: Baylor College of Medicine in the United States achieves high-efficiency "visual cortex implanting" through dynamic intracranial electrical stimulation

For more than 40 million blind people around the world, seeing the light again is an almost unattainable dream. In May 2020, researchers at Baylor College of Medicine in the United States used the new technique of dynamic intracranial electrical stimulation, forming a visual prosthesis with an implanted microelectrode array that drew the shapes of letters such as W, S, and Z in the human primary visual cortex, successfully allowing blind people to "see" these letters. Combined with the high-bandwidth, fully implantable brain-computer interface system released by Neuralink, the brain-computer interface company founded by Elon Musk, next-generation visual prostheses may accurately stimulate every neuron in the primary visual cortex of the brain, helping the blind "see" more complex information and realize their dream of seeing the world clearly.

***Comment: Anything related to Elon Musk gets a lot of coverage in China.

4: DeepMind and others use deep neural networks to solve the Schrödinger equation, promoting the development of quantum chemistry

I’ve excerpted their summary of FermiNet (I’ve linked DeepMind’s blog post). They also summarize related research: “In addition, in September 2020, several scientists from the Free University of Berlin, Germany, also proposed a new deep learning method, which can obtain nearly exact solutions to the electronic Schrödinger equation. Related research was published in Nature Chemistry. This type of research shows not only the application of deep learning in solving a specific scientific problem, but also a great prospect for deep learning to be widely used in scientific research in various fields such as biology, chemistry, materials, and medicine.”

3: Gordon Bell Prize for Deep Potential Molecular Dynamics Research

On November 19, 2020, at the SC20 conference in Atlanta, USA, the “Deep Potential” team, which included BAAI scholar Wang Han of the Beijing Institute of Applied Physics and Computational Mathematics, won the “Gordon Bell Prize,” the highest award in the field of international high-performance computing applications…

***Comment: Press release of the award states, “ACM, the Association for Computing Machinery, named a nine-member team, drawn from Chinese and American institutions, recipients of the 2020 ACM Gordon Bell Prize for their project, ‘Pushing the limit of molecular dynamics with ab initio accuracy to 100 million atoms with machine learning.’” Practical applications include accelerating drug development. Here’s a question: How many people researching AI and U.S.-China relations know about this accomplishment? Until reading this article, I was not one of them. A reminder that U.S.-China cooperation in AI happens all the time, and a lot of it occurs for the greater good.

2: DeepMind's AlphaFold2 solves the problem of protein structure prediction

1: OpenAI releases the world's largest pre-trained language model GPT-3

Comment: Probably would be 1-2 on any top ten list. Chose not to include the article’s summaries for these two, as these have been amply covered elsewhere.

Feature Translation: IDC’s 10 Big Predictions for China’s AI Market in 2021

The International Data Corporation (IDC) published a report, “IDC FutureScape: Global AI Market 2021 Predictions — China Implications.” (links to original summary in Mandarin) Their 10 big predictions are as follows:

Prediction 1: By 2023, regulations requiring explanations of analysis and decision-making processes will be introduced for more than 15% of consumer-centric AI decision-making systems in finance, healthcare, government, and other regulated public sectors.

Prediction 2: By 2021, more than 50% of organizations will add AI capabilities to environments that process incoming calls.

Prediction 3: By 2024, 45% of repetitive tasks will be automated or enhanced through the use of “digital workers” supported by AI, robotics, and robotic process automation (RPA).

Prediction 4: By 2023, the number of data analysts and data scientists using an end-to-end machine learning platform from data preparation to model deployment that is encapsulated using automated machine learning (AutoML) technology will double.

Prediction 5: By 2024, AI-powered IT operations (AIOps) will become the new normal of IT operations, with at least 50% of large enterprises adopting AIOps solutions to automate major IT system and service management processes.

Prediction 6: By 2025, 10% of artificial intelligence solutions will move closer to artificial general intelligence (AGI), using neuro-symbolic techniques that combine deep learning with symbolic methods to create AI that is more reliable and closer to human decision-making.

Prediction 7: By 2021, at least 65% of China’s top 1000 companies will use AI tools such as natural language processing (NLP), machine learning (ML) and deep learning (DL) to empower 60% of use cases in business areas such as customer experience, security, operations management, and procurement.

Prediction 8: By 2024, more than 30% of China’s top 1000 companies will deploy AI workloads more evenly across endpoint, edge, and cloud. These workloads will be managed by artificial intelligence software platform providers to make AI infrastructure “invisible.”

Prediction 9: By 2023, 30% of enterprises will run different analysis and AI models on the edge. Among them, 30% of edge AI applications will be accelerated by heterogeneous acceleration solutions.

Prediction 10: By 2022, 80% of China's top 1000 companies will invest in internal learning platforms and third-party training services to meet the needs of new skills and work style changes brought about by the adoption of AI.

Comment: I find these predictions useful mostly because they set out some interesting indicators that can be measured. As for the veracity of these predictions, I would say they’re overly optimistic and under-specified — though, to be fair, you can’t go through every condition in a one-sentence prediction. I would say that Roy Amara’s quote applies here: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

ChinAI Links (Four to Forward)

Should-read: Machine learning is going real-time

On the “MLOps race between the US and China,” Chip Huyen writes: “Few American Internet companies have attempted online learning, and even among these companies, online learning is used for simple models such as logistic regression. My impression from both talking directly to Chinese companies and talking with people who have worked with companies in both countries is that online learning is more common in China, and Chinese engineers are more eager to make the jump.”

Comment: This was a thought-provoking read. I still have a lot to learn here and haven’t talked to many insiders on the ground about this issue, so it causes me to update my views slightly on China’s adoption speed of MLOps. Two caveats spring to mind: 1) MLOps is more than just online learning; 2) MLOps applies to more than just mobile apps (which is what most of the anecdotal evidence seems to draw on).
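
For context, the "simple models such as logistic regression" that Huyen mentions are attractive for online learning because a single new event (say, a click) can update the model in one cheap SGD step, with no batch retraining. A minimal sketch, my own illustration rather than any company's production setup:

```python
import numpy as np

class OnlineLogisticRegression:
    """Minimal online (streaming) logistic regression via per-example SGD.

    Weights are updated from each new (features, label) event as it
    arrives, so the model adapts continuously instead of waiting for a
    periodic batch retrain.
    """
    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x):
        return 1.0 / (1.0 + np.exp(-(x @ self.w + self.b)))

    def partial_fit(self, x, y):
        # One SGD step on the log-loss for a single streamed event
        err = self.predict_proba(x) - y
        self.w -= self.lr * err * x
        self.b -= self.lr * err
```

The engineering challenge Huyen describes is less this update rule than everything around it: feature pipelines, monitoring, and rollback for a model whose weights change by the second.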

Should-read: Nordic lights? National AI policies for doing well by doing good

Jacob Dexe and Ulrik Franke for Journal of Cyber Policy. Here’s the abstract: Getting ahead on the global stage of AI technologies requires vast resources or novel approaches. The Nordic countries have tried to find a novel path, claiming that responsible and ethical AI is not only morally right but confers a competitive advantage. In this article, eight official AI policy documents from Denmark, Finland, Norway and Sweden are analysed according to the AI4People taxonomy, which proposes five ethical principles for AI: beneficence, non-maleficence, autonomy, justice and explicability…

Should-read: Year-in-review Recs from Robot Humanities (机器人人文) Account (in Mandarin)

Came across a cool account that recommends excellent articles at the intersection of robotics and the humanities. Happy to feature translations from this list if there are any ChinAI readers wanting to contribute in 2021.

Should-read: Why Chinese youngsters are embracing a philosophy of “slacking-off”

Jane Li writes for Quartz:

The intense anxiety felt by younger people, and exacerbated by the pandemic, prompted a wider discussion on a once niche academic concept: neijuan. Translated as “involution,” the anthropological term was first applied to agriculture, and has come to describe conditions in which a society ceases to progress, and instead starts to stagnate internally. Increased output and competition intensify but yield no clear results or innovative, technological breakthroughs.

Neijuan has become a hot topic on the Chinese internet and in media reports this year as a word that “captures urban China’s unhappiness.” Complaints of their work becoming too “involuted”—more competitive with little corresponding rewards—are as likely to be discussed on Weibo by white-collar workers as food delivery drivers.

H/t to Doug Orr for sharing.

Thank you for reading and engaging.

These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a PhD candidate in International Relations at the University of Oxford and a researcher at the Center for the Governance of AI at Oxford’s Future of Humanity Institute.

Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).

Any suggestions or feedback? Let me know at chinainewsletter@gmail.com or on Twitter at @jjding99

***1.6.21: this post was edited to fix a typo in a comment on the 3rd ranked development in the first translation.