ChinAI #143: 2021 AI Company Rankings

Plus, a history of Russian machine translation

Greetings from a world where…

I'll be presenting my dissertation at a CISAC seminar this Wednesday. RSVP here

…Please please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay support access for all AND compensation for awesome ChinAI contributors). As always, the searchable archive of all past issues is here.

Feature Translation: eNet & Ciweek Rankings

Context: In early May, eNet and Ciweek teamed up to rank the top Chinese AI companies across various verticals, such as facial recognition and speech recognition. For each company, they derived an overall score based on factors including the company’s market position, word-of-mouth reputation, and technology. Both eNet and China Internet Weekly (Ciweek) are influential IT business media portals — the 2021 AI Company Rankings had 22.5k views on my WeChat link when I looked at it this weekend. I think the rankings provide a useful panoramic view of China’s AI ecosystem:

eNet & Ciweek produce rankings for many IT industries. Two others I found intriguing: a 2020 Top 50 list of Chinese industrial software companies; a 2021 Top 50 list of Chinese big data middleware companies.

Key Takeaways: To survey recent trends, let’s use the 2019 version of the rankings as a point of comparison:

  • In some areas, the top companies have maintained their lead. For instance, the top two remained the same in security applications (Hikvision and Dahua) and drones (DJI and XAG).

  • In other domains where the technical roadmap is constantly shifting, there has been significant upheaval. For example, the top 5 AI chip companies in 2019 were: Cambricon, Allwinner Technology, Intellifusion, Horizon Robotics, and Baidu. Two years later, the top 5 are: HiSilicon (Huawei’s chip subsidiary), Horizon Robotics, Pingtouge (Alibaba’s chip subsidiary), Unisoc, and Vimicro.

Forgive a little boasting before we dig in. Three years ago (ChinAI #10), I highlighted Mininglamp (明略数据) as an up-and-coming AI company. Now it’s ranked #1 in knowledge graphs, followed by Baidu and Alibaba in that category.

eNet and Ciweek slice up the AI ecosystem into 7 layers: Cognition (technology); Perception (technology); Computing (technology); Infrastructure; Intelligent Terminals; Scenario Applications; Comprehensive. What follows are some rankings I found particularly interesting:

In the cognition layer, here’s which companies are best at getting computers to understand meaning from text:

In the perception layer, two things that caught my eye: i) hardware-facing companies like Dahua and Hikvision succeeding in facial recognition software; ii) “Social responsibility” is one of the factors on which companies were ranked…

Again, here are the full rankings (in Mandarin), which often extend to 20 to 50 companies. I’ll leave you with the top 10 in AI chips:

Jeff Jots: The Forgetting and Rediscovery of Soviet Machine Translation

Switching out this week’s Four to Forward with some notes on an article published in the summer 2020 issue of Critical Inquiry, written by Professor Michael D. Gordin. Gordin argues that our “sense of history” with respect to new neural-net-based translation programs is “not very deep,” perhaps only reaching as far back as the birth of Google Translate in 2006. But we can rewind further back to the story of Russian machine translation in the mid-1950s. Some fascinating discoveries:

  • In 1954, a Georgetown-IBM experiment produced a translation program that generated “almost perfect translations of specially constrained Russian sentences into English every few seconds.” This made a splash, attracting the attention of Soviet researchers. In 1956, a Soviet team at the Steklov Institute developed impressive French-Russian translations.

  • A funny line about American perceptions of Soviet machine translation capabilities: “Nevertheless, the leaders of American programs were still nervous about Soviet progress, ironically compounded by the fact that their Russian-language knowledge was limited and they were not always able to read the relevant publications.” Now, why does that sound so familiar?

  • After the news about the Soviet French-Russian translation, the NSF gave a substantial grant to the Georgetown MT program: “Money began pouring into programs across the United States and its allies, but to Georgetown more than any other American institution: ‘There exists no other group in the United States, or in England for that matter, which has been working on such a broad front.’ Although some of the collaborations included work on German, French, Chinese, and Japanese, the bulk of the research, unsurprisingly, concentrated on Russian. This was a race and a competition between two superpowers, their languages, and their computers.” (emphasis mine)

  • Here’s what Washington State linguist Erwin Reifler said about MT in 1960: “It is clear that the impact of MT on human culture and civilization will by far surpass that of the invention of book printing.”

One more point to entice you to read the whole article. One of the coolest recent papers in neural machine translation is work by a Google team that was able to do “zero-shot translation” — translate between language pairs that the model had never been trained on. The authors argue that this hints at “a universal interlingua representation in our models.” Now read what Gordin writes about how the Russian MT approach differed from the Western one:

  • “For practitioners, besides the differences in the languages, the most obvious contrast with Western research was a shift of emphasis from “direct” approaches—hard-coding a specific language pair, often in a single direction, as had been the case for Georgetown-IBM as well as the Kulagina French-to-Russian pilot program—in favor of what its most vigorous advocate, Igor Mel’čuk (born in 1932, only recently retired in Montreal, where he emigrated in 1977 after being fired for political dissidence), called interlingual methods. Instead of building an algorithm that would transfer morphological, syntactic, and semantic features on a one-to-one basis, thus needing to be redesigned for every new language, Mel’čuk insisted on developing a machine interlingua, the same for all the linguistic codes, into which each language would be translated into and then out of.”

Thank you for reading and engaging.

These are Jeff Ding's (sometimes) weekly translations of Chinese-language musings on AI and related topics. Jeff is a PhD candidate in International Relations at the University of Oxford and a Predoctoral Fellow at Stanford’s Center for International Security and Cooperation, sponsored by Stanford’s Institute for Human-Centered Artificial Intelligence.

Check out the archive of all past issues here & please subscribe here to support ChinAI under a Guardian/Wikipedia-style tipping model (everyone gets the same content but those who can pay for a subscription will support access for all).

Any suggestions or feedback? Let me know at or on Twitter at @jjding99