Who Will Be the Next “China’s NVIDIA” After the US Tightens Restrictions? Alibaba Unveils the World’s Largest AI Computing Center; Baidu’s Text-to-Image AI Hailed by Japanese Pixiv Lovers

Weekly China AI News from Aug 29 to Sep 4

Recode China AI
6 min read · Sep 6, 2022

Dear readers, I hope you enjoyed the Labor Day weekend!

In this week’s issue, we discuss the latest tensions between the U.S. and China over high-end AI chip export bans, and which Chinese companies may fill the gap. The news came out two days after Alibaba unveiled the world’s largest AI computing center. Plus, Baidu’s new text-to-image AI model gained huge traction on Twitter among Japanese netizens. Please note that we will skip this week’s Rising Startup section, as no major funding news was spotted.

News of the Week

Chinese Chip Makers Scramble to Fill the Gap After the US Restricts High-end AI Training Chips

What’s new: The U.S. government is doubling down on curbing China’s rise in AI by restricting sales of high-end AI chips to China. Nvidia said in a Wednesday SEC filing that the company is prohibited from selling A100 and H100 GPUs, its most advanced chips for training complex machine learning models, to China and Russia without licenses. The U.S. government claimed China could use such chips for military research. The new restrictions also cover AMD’s MI250 accelerator chip.

How big is the impact: The A100 and H100 are Nvidia’s most advanced high-performance AI chips for supercomputers, data centers, and cloud computing servers. Both can significantly accelerate the training of sophisticated large-scale AI models and other scientific workloads. Nvidia claims H100 GPUs provide up to 9X faster training over the prior generation for mixture-of-experts (MoE) models. While the restrictions will not affect most Chinese companies’ business in the near term, they will hamper China’s competitiveness in advancing state-of-the-art AI and scientific research.

Who will fill the gap? While Nvidia’s stock price tumbled by 9 percent on Wednesday, the Chinese semiconductor market welcomed the news with a stock price spike. Cambricon, a Chinese chip maker that produces processors for smart cloud servers, jumped by 20 percent.

The reality, unfortunately, is that no domestic chip maker can yet replace Nvidia’s high-end AI chips in China. But a number of Chinese GPU and ASIC makers are striving to fill the gap in case the worst-case scenario transpires (a rough comparison against the A100 follows the list):

  • Iluvatar CoreX, founded in 2015, develops high-performance computing solutions. Last year, the company unveiled its first 7nm general-purpose GPU for cloud training, “Tiangai 100”, which delivers 147 TeraFLOPS for FP16.
  • Biren Technology, founded in 2019, designs and develops processors for GPU and DSA computation. Its newly released BR100 is a 7nm chip for both training and inference that packs 77 billion transistors, featuring 256 TeraFLOPS for FP32 and over 1,000 TeraFLOPS for BF16, but it does not support FP64.
  • Huawei released its AI training chip Ascend 910 in 2019, which delivers 256 TeraFLOPS for FP16 and 512 TeraOPS for INT8.
  • Baidu’s 2nd-generation AI chip, Kunlun II, is used for both training and inference, delivering 128 TeraFLOPS for FP16.
  • Cambricon, once Huawei’s AI core supplier, last year introduced the MLU370, a 7nm AI chip for both training and inference. An accelerator card equipped with the MLU370 can deliver 24 TeraFLOPS for FP32.
  • The Shanghai-headquartered Enflame Tech introduced its 2nd-gen AI training chip, “Suisi 2.0”, which features 40 TeraFLOPS for FP32 and 160 TeraFLOPS for TF32, slightly better than the A100.
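For a rough sense of where these parts sit, here is a minimal sketch comparing the vendor-claimed peak throughputs above against Nvidia’s public A100 datasheet numbers (19.5 TFLOPS FP32, 156 TFLOPS TF32, 312 TFLOPS FP16/BF16 tensor). These are marketing peaks, not measured performance, and precisions are not always apples-to-apples:

```python
# Vendor-claimed peak throughput in TFLOPS, from the list above plus
# Nvidia's public A100 datasheet. Peak numbers only -- real workloads
# depend heavily on memory bandwidth and software stacks.
CHIPS = {
    "Nvidia A100":          {"FP32": 19.5, "TF32": 156.0, "FP16": 312.0, "BF16": 312.0},
    "Iluvatar Tiangai 100": {"FP16": 147.0},
    "Biren BR100":          {"FP32": 256.0, "BF16": 1000.0},
    "Huawei Ascend 910":    {"FP16": 256.0},
    "Baidu Kunlun II":      {"FP16": 128.0},
    "Cambricon MLU370":     {"FP32": 24.0},
    "Enflame Suisi 2.0":    {"FP32": 40.0, "TF32": 160.0},
}

A100 = CHIPS["Nvidia A100"]
for name, specs in CHIPS.items():
    for precision, tflops in specs.items():
        baseline = A100.get(precision)
        ratio = f"{tflops / baseline:.2f}x A100" if baseline else "n/a"
        print(f"{name:24s} {precision:5s} {tflops:7.1f} TFLOPS  ({ratio})")
```

The printout makes Enflame’s “slightly better than A100” claim concrete (40 vs. 19.5 TFLOPS FP32, 160 vs. 156 TFLOPS TF32) while also showing how far most of these chips trail the A100 at FP16.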

Alibaba Announces New Computing Center Featuring 12 EFLOPS

What’s new: Alibaba Cloud last week unveiled a computing center that provides a peak aggregate performance of 12 EFLOPS, claiming the world’s top spot ahead of machine learning clusters from Google and Tesla. Located in Zhangbei, a county in northern China’s Hebei province, the computing center is designed to train and run inference on sophisticated models that power applications such as autonomous driving and spatial geology.

The Zhangbei computing center is part of a new offering Alibaba Cloud announced to serve enterprise clients who need powerful cloud computing to accelerate their AI workloads. For example, Xpeng, a Chinese Tesla-like EV maker, has sped up the training of its autonomous driving models by 170 times thanks to Alibaba Cloud.

Alibaba Cloud says its solution achieves 90% scaling efficiency when running a thousand processors in parallel, leading to an 11-fold boost in training efficiency and a 6-fold lift in inference efficiency.
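To unpack the 90% figure: scaling efficiency is actual speedup divided by ideal speedup (the processor count). The sketch below is the standard textbook definition with hypothetical runtimes; the 90% claim is Alibaba’s, the arithmetic is ours:

```python
def parallel_efficiency(t_serial: float, t_parallel: float, n_procs: int) -> float:
    """Scaling efficiency = actual speedup / ideal speedup (= n_procs)."""
    return (t_serial / t_parallel) / n_procs

# Hypothetical numbers: a job taking 900 hours on one processor and
# 1 hour on 1,000 processors matches the claimed 90% efficiency.
print(parallel_efficiency(t_serial=900.0, t_parallel=1.0, n_procs=1_000))  # 0.9
# Equivalently, 1,000 processors at 90% efficiency act like ~900 processors.
```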

Why it matters: 58% of Chinese companies are already using AI, said Cai Yinghua, President of Alibaba Cloud Global Sales and a former Huawei executive, and the volume of data in China is projected to reach 48.6 zettabytes by 2025. Meanwhile, the compute used in the largest AI training runs doubled every 3.4 months, according to OpenAI’s 2018 study.

Cai added that China’s total computing power increased nearly fivefold between 2016 and 2020: general-purpose computing power roughly tripled, while intelligent (AI) computing power grew nearly a hundredfold.
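These growth rates are easier to grasp with a little arithmetic (ours, not Cai’s or OpenAI’s), based only on the figures quoted above:

```python
# Implications of the growth figures above (arithmetic only, no new data).

# OpenAI's 2018 estimate: training compute for the largest runs doubles
# every ~3.4 months, i.e. roughly an order of magnitude per year.
per_year = 2 ** (12 / 3.4)
print(f"3.4-month doubling -> ~{per_year:.1f}x growth per year")  # ~11.5x

# Cai's figure: China's total computing power grew ~5x from 2016 to 2020,
# which works out to about a 50% compound annual growth rate over 4 years.
cagr = 5 ** (1 / 4) - 1
print(f"5x over 4 years -> ~{cagr:.0%} compound annual growth")  # ~50%
```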

Baidu’s New Text-to-Image AI Demo Hailed by Japanese Pixiv Lovers

What’s new: In the past week, thousands of Twitter accounts posted AI-generated pictures of Japanese anime characters with the hashtag #ernievilg. Japanese netizens hailed the AI model’s knack for creating good-looking female figures in the style of pixiv, a Japanese online community for artists. Yet ERNIE-ViLG is not a Japanese version of DALL-E; it was created in Japan’s neighboring country, China.

Early last week, Hugging Face, a startup known for open-sourcing AI models and demos, released a publicly available web demo of Baidu’s text-to-image generative model, ERNIE-ViLG. Similar to DALL-E 2, Imagen, and Stable Diffusion, ERNIE-ViLG can create imaginative images in different artistic styles from either Chinese or English text prompts.

The Japanese community quickly discovered that ERNIE-ViLG particularly excels at painting pixiv-style characters when “pixiv” is simply added to the text prompt. Below is a library of pixiv-style girls generated by ERNIE-ViLG (credit: @AIGirlsSelfie). As China’s young generation embraces ACG (anime, comics, games) and Japanese anime culture, it’s unsurprising that ERNIE-ViLG is well trained on massive amounts of anime data.
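For readers who want to try the pixiv trick themselves, here is a minimal sketch using the PaddleHub module Baidu published for ERNIE-ViLG. The module name and the generate_image signature follow PaddleHub’s published example at the time; treat both as assumptions that may have changed, and check the current docs:

```python
# A minimal sketch of calling ERNIE-ViLG through PaddleHub. Module name
# and arguments follow PaddleHub's published example at the time and may
# have changed -- verify against current documentation.
import paddlehub as hub

module = hub.Module(name="ernie_vilg")

# Appending "pixiv" to the prompt is the community trick for anime-style
# output; ERNIE-ViLG works best with Chinese prompts.
images = module.generate_image(
    text_prompts=["银发少女，精致的眼睛，pixiv"],  # "silver-haired girl, detailed eyes, pixiv"
    style="二次元",  # "anime" style preset (assumed preset name)
    output_dir="./ernievilg_out",
)
```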

Trending Papers

Researchers from Tsinghua University and Meta AI discovered that “the key ingredients behind the vision Transformers, namely input-adaptive, long-range and high-order spatial interactions, can also be efficiently implemented with a convolution-based framework.” In the paper HorNet: Efficient High-Order Spatial Interactions with Recursive Gated Convolutions, they presented “the Recursive Gated Convolution (gnConv) that performs high-order spatial interactions with gated convolutions and recursive designs.” Based on that, they introduced HorNet as a visual backbone that outperforms Swin Transformers and ConvNeXt by a significant margin in multiple benchmark tasks.
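The core idea of gnConv is easy to sketch: gate the features with depthwise-convolved spatial context, then repeat the gating recursively at growing channel widths. Below is a simplified PyTorch sketch of that recursion; it is illustrative only, and the official HorNet code (github.com/raoyongming/HorNet) adds scaling and other details:

```python
import torch
import torch.nn as nn

class GnConvSketch(nn.Module):
    """Simplified recursive gated convolution (gnConv) from HorNet.

    Illustrative sketch only; the official implementation differs in
    normalization and scaling details.
    """

    def __init__(self, dim: int = 64, order: int = 3):
        super().__init__()
        # Channel widths per recursion level, doubling toward the output.
        self.dims = [dim // 2 ** i for i in range(order)][::-1]
        self.proj_in = nn.Conv2d(dim, 2 * dim, 1)
        # One depthwise conv supplies local spatial context for every gate.
        self.dwconv = nn.Conv2d(sum(self.dims), sum(self.dims), 7,
                                padding=3, groups=sum(self.dims))
        # Pointwise convs lift p_k to the next (wider) level between gatings.
        self.pws = nn.ModuleList(
            nn.Conv2d(self.dims[i], self.dims[i + 1], 1)
            for i in range(order - 1)
        )
        self.proj_out = nn.Conv2d(dim, dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        p, q = torch.split(self.proj_in(x), [self.dims[0], sum(self.dims)], dim=1)
        contexts = torch.split(self.dwconv(q), self.dims, dim=1)
        p = p * contexts[0]                      # 1st-order spatial gating
        for i, pw in enumerate(self.pws):
            p = pw(p) * contexts[i + 1]          # recursive higher-order gating
        return self.proj_out(p)

x = torch.randn(1, 64, 56, 56)
print(GnConvSketch(64, order=3)(x).shape)  # torch.Size([1, 64, 56, 56])
```

Each recursion level multiplies in another layer of spatial context, which is how the design mimics the input-adaptive, high-order interactions of self-attention while staying purely convolutional.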

Alibaba this week announced its family of large AI models spanning NLP, multimodal, and computer vision. The underlying multimodal model, OFA, is a unified sequence-to-sequence pretrained model (supporting English and Chinese) that unifies modalities (cross-modal, vision, language) and tasks (with both fine-tuning and prompt tuning supported): image captioning, VQA, visual grounding, text-to-image generation, text classification, text generation, image classification, and more. The paper, OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework, was accepted at ICML 2022, and the code is open-sourced on GitHub.
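As a taste of the “one interface, many tasks” design, the sketch below follows the image-captioning usage example in the OFA repository (github.com/OFA-Sys/OFA): every task is cast as an instruction-plus-image that decodes to text. Note that OFATokenizer and OFAModel come from the authors’ own transformers fork, not stock Hugging Face transformers, and the checkpoint ID and preprocessing details here are assumptions:

```python
# Image captioning with OFA, adapted from the usage example in the OFA
# repo. OFATokenizer/OFAModel live in the authors' transformers fork,
# not stock transformers -- exact names and arguments may differ.
from PIL import Image
from torchvision import transforms
from transformers import OFATokenizer, OFAModel  # from the OFA-Sys fork

CKPT = "OFA-Sys/ofa-large"  # assumed checkpoint ID
tokenizer = OFATokenizer.from_pretrained(CKPT)
model = OFAModel.from_pretrained(CKPT, use_cache=False)

# OFA casts every task as text-to-text: captioning is just this instruction.
prompt = " what does the image describe?"
inputs = tokenizer([prompt], return_tensors="pt").input_ids

# Images enter as normalized 480x480 patches alongside the instruction.
preprocess = transforms.Compose([
    transforms.Resize((480, 480),
                      interpolation=transforms.InterpolationMode.BICUBIC),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])
patch_img = preprocess(Image.open("example.jpg").convert("RGB")).unsqueeze(0)

out = model.generate(inputs, patch_images=patch_img, num_beams=5,
                     no_repeat_ngram_size=3)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```

Swapping the instruction string (for example, a question for VQA or a region query for visual grounding) reuses the same model and decoding loop, which is the whole point of the unified architecture.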

