💯 China’s Friendlier Regulation on Generative AI; Intel Launches AI Chip in China to Challenge NVIDIA; Meet 100 Chinese LLMs

Weekly China AI News from July 10 to July 16

Recode China AI
7 min read · Jul 18, 2023

Dear readers, this week we dive into China’s new Interim Measures regulating generative AI services. We’ll also look at Intel’s launch of its Gaudi 2 AI accelerator in China as the AI chip race heats up. Finally, an impressive compilation reveals that over 100 large language models (LLMs) have already emerged in China this year.

China Issues New Rules on Oversight of Generative AI

What’s new: China’s top Internet regulator, the Cyberspace Administration of China (CAC), together with six other ministries and departments, released the widely anticipated Interim Measures for the Administration of Generative Artificial Intelligence Services (Interim Measures). The Interim Measures will take effect on August 15, 2023.

What’s changed: The new regulation is less stringent than the draft version, the Measures for the Administration of Generative Artificial Intelligence Services (Draft for Comments), released this April. Below are noteworthy highlights:

  • Scope: The Interim Measures exempt industry organizations, enterprises and others that develop generative AI but do not offer services to the public. This likely excludes open-source models.
  • Classification: Generative AI services shall be regulated in an “inclusive, prudent and classified” manner, according to the Interim Measures. My speculation is that, instead of a one-size-fits-all regime, regulators will sort services into different regulatory categories based on factors like service type, user base and capabilities.
  • User identity: The real-name registration requirement has been removed, dialing back the level of user monitoring.
  • Dataset: While the Draft demanded that generative AI providers ensure the “authenticity, accuracy, objectivity, and diversity” of training data, the Interim Measures soften this to taking “necessary steps to improve” data quality. Legal sourcing of data is still required.
  • Disinformation: The Draft’s mandate to prevent disinformation/hallucination is absent from the Interim Measures, signaling more leeway for providers.
  • Assessment: Safety assessments are now required only for services with public opinion attributes or social mobilization capabilities, rather than for all providers.
  • Licensing: Generative AI service providers must obtain administrative licenses where laws and regulations so require.
  • Responsibility: Providers’ responsibility is narrowed from policing all generated content to bearing “cyber information security” obligations under the Interim Measures.

What remains unchanged in the Interim Measures: generative AI services must uphold socialist core values and abide by laws, regulations, social morals and ethics, with discriminatory, violent, and pornographic content prohibited.

Expert’s take: Matt Sheehan, a China tech policy researcher at the Carnegie Endowment for International Peace, shared his first take on the regulation. Read his full thread on Twitter.

The final version is much less strict than the April draft version. This reflects a very active policy debate in China amid economic concerns…This regulation is explicitly “provisional” (暂行). Chinese regulators are taking an iterative approach, trying things out, getting feedback, making changes. More to come here.

Angela Zhang, director of the Center for Chinese Law at the University of Hong Kong, told Bloomberg:

The final version of the law significantly watered down many stringent requirements in the earlier draft released by the CAC, sending a strong signal of a cautious and tolerant approach to the oversight of generative AI.

The East is Read, a blog hosted at the Center for China and Globalization (CCG), a leading non-governmental think tank in Beijing, commented:

The gist of the changes between Call for Comments and Interim Measures is that Chinese decision-makers want to “balance development and security,” instead of over-regulating Generative AI Services and thus stifling potentially important progress from innovation.

Intel Launches Gaudi 2 AI Accelerator in China, Challenging Nvidia

What’s new: Intel officially launched its second-generation deep learning accelerator, the Habana Gaudi 2, in the Chinese market at a press conference in Beijing on July 11.

More details: On MLPerf Training 3.0, the Gaudi 2 delivered strong training performance on computer vision models such as ResNet-50 (8 cards) and UNet3D (8 cards), as well as the natural language processing model BERT (8/64 cards), surpassing Nvidia’s A100 on each model and nearing the H100 on some tasks.

It was also one of only two chips (the other being Nvidia’s H100) with submitted GPT-3 LLM training results, completing GPT-3 training in 311 minutes on 384 Gaudi 2 accelerators and achieving 95% scaling efficiency when going from 256 to 384 accelerators.
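To unpack those scaling numbers: going from 256 to 384 accelerators is a 1.5× increase in hardware, so 95% scaling efficiency implies a measured speedup of roughly 0.95 × 1.5 ≈ 1.43×. A quick back-of-the-envelope check (simple arithmetic on the reported figures, not MLPerf’s exact methodology):

```python
# Back-of-the-envelope check on the reported Gaudi 2 scaling figures.
# Simple arithmetic on the published numbers, not MLPerf's methodology.
cards_small, cards_large = 256, 384
time_large_min = 311        # reported GPT-3 training time on 384 cards (minutes)
efficiency = 0.95           # reported scaling efficiency from 256 to 384 cards

ideal_speedup = cards_large / cards_small      # 1.5x under perfectly linear scaling
actual_speedup = ideal_speedup * efficiency    # ~1.43x in practice

# Implied training time had the same run used only 256 cards:
time_small_min = time_large_min * actual_speedup
print(f"speedup: {actual_speedup:.3f}x, implied 256-card time: {time_small_min:.0f} min")
# -> speedup: 1.425x, implied 256-card time: 443 min
```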

Intel’s Habana Labs launched the Gaudi 2 AI training and inference chip last year. Compared with the first generation, Gaudi 2 moves from a 16nm to a 7nm fabrication process and increases the tensor processor core count to 24.

Why it matters: The China launch of the Gaudi 2 signals Intel’s intent to compete with Nvidia on AI acceleration. Its strong benchmark results against Nvidia chips and near-linear scalability up to 384 accelerators could make it an appealing option for companies looking to train LLMs in China.

Intel said the new China version of the Gaudi 2 also complies with the latest U.S. export restrictions on AI/GPU chips for China.

Meet 100+ Chinese Large Language Models

What’s new: A comprehensive list compiled by Adina Yakup (@AdeenaY8) on Twitter reveals that over 100 LLMs had emerged in China as of July 13. While the original source is unclear, Yakup’s list of 103 Chinese LLMs appears legitimate, with only minor errors.

The list includes each model’s name, details, affiliations, and location across provinces and cities. I translated and summarized the first 20 entries to provide a snapshot. Below are my observations:

  • Almost half of the LLMs come from the capital, Beijing.
  • All LLM services open to the public are still in trial mode.
  • More than half of the LLMs don’t even have websites.
  • A majority of the open-source models are built on LLaMA or BLOOM.

Why it matters: The list demonstrates the rapid proliferation of Chinese LLMs beyond flagship models like ERNIE and Wudao. While model capabilities remain unclear, the sheer volume highlights the vibrant AI race underway in China.

Weekly News Roundup

🀄️ Tencent’s AI “Fine Art” (LuckyJ) has reached the master level on the Japanese mahjong platform Tenhou and topped the platform’s rankings. Tenhou has 238,000 active users, of whom only 27 have reached the master level.

🆓 On July 14, Zhipu AI and Tsinghua’s KEG lab announced that their models ChatGLM-6B and ChatGLM2-6B are available for commercial use without licensing fees. Other commercially free open-source LLMs include Baichuan-13B from Baichuan Intelligent Technology and Aquila from BAAI.
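For readers who want to kick the tires, here is a minimal sketch of running ChatGLM2-6B with Hugging Face Transformers, following the usage pattern documented in the THUDM/chatglm2-6b model card (a CUDA GPU with roughly 13 GB of memory is assumed):

```python
# Minimal sketch: querying ChatGLM2-6B via Hugging Face Transformers,
# following the usage documented in the THUDM/chatglm2-6b model card.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True).half().cuda()
model = model.eval()

# chat() is a convenience helper shipped with the model that tracks multi-turn history.
response, history = model.chat(tokenizer, "What is ChatGLM2-6B?", history=[])
print(response)
```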

🎨 Meitu’s new “AI Image Expanding” feature can predict and generate the missing parts of an image based on context and texture, giving the original photo a wider frame and perspective.

🛍 JD.com has officially unveiled its Yanxi large language model, along with the Yanxi AI Development Platform, which has opened reservations ahead of its August launch. JD.com CEO Sandy Ran Xu said LLMs have already achieved tangible results inside the company.

Trending Research

Skilful nowcasting of extreme precipitation with NowcastNet (Nature)

  • Affiliations: Tsinghua University, China Meteorological Administration
  • The new NowcastNet model combines physical evolution schemes and conditional learning in a neural network optimized to forecast extreme precipitation, producing sharp multiscale patterns over large regions for up to three hours ahead. Evaluated by meteorologists across China, NowcastNet ranked first 71% of the time against leading methods, skillfully forecasting light and heavy rain, including previously intractable advective and convective extremes.

RAPHAEL: Text-to-Image Generation via Large Mixture of Diffusion Paths

  • Affiliations: The University of Hong Kong, SenseTime Research
  • The new text-to-image model RAPHAEL uses stacked mixture-of-experts layers that enable billions of diffusion paths, each acting as a “painter” depicting a textual concept onto an image region over time (the path arithmetic is sketched below). It outperforms leading models like Stable Diffusion in image quality, in style switching across comics, realism, cyberpunk, and ink, and in human evaluation on ViLG-300. The 3B-parameter model, trained on 1,000 A100 GPUs for two months, achieves a state-of-the-art COCO FID of 6.61, advancing image generation research.
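The “billions of diffusion paths” figure follows from simple combinatorics: if each of L stacked mixture-of-experts layers routes a token through one of E experts, there are E^L distinct expert paths. A toy illustration (the expert and layer counts here are made up for illustration, not taken from the paper):

```python
# Toy combinatorics behind "billions of diffusion paths" in a stacked-MoE model.
# Expert and layer counts below are illustrative, NOT RAPHAEL's actual configuration.
experts_per_layer = 8
num_moe_layers = 12

# Each layer independently routes to one expert, so distinct paths multiply per layer:
num_paths = experts_per_layer ** num_moe_layers
print(f"{num_paths:,} possible expert paths")  # 68,719,476,736 (~6.9e10)
```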

xTrimoPGLM: Unified 100B-Scale Pre-trained Transformer for Deciphering the Language of Protein

  • Affiliations: BioMap Research, Tsinghua University
  • The new 100B-parameter protein language model unifies autoencoding and autoregressive pretraining to handle understanding and generation tasks concurrently. By jointly optimizing the two objectives, the model, called xTrimoPGLM, significantly outperforms other methods on 13 of 15 protein understanding benchmarks and generates structurally valid novel sequences. A 1B-parameter antibody-focused version sets new records in predicting antibody naturalness and structure for drug design, with faster inference than AlphaFold2, underscoring xTrimoPGLM’s versatility (a toy sketch of the joint objective follows below).
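To make the unified objective concrete, here is a toy sketch of jointly optimizing a masked (autoencoding) loss and a next-token (autoregressive) loss in a single model. It is a generic illustration of the idea under assumed interfaces (the `bidirectional` keyword and the `alpha` mixing weight are hypothetical), not xTrimoPGLM’s actual GLM-based implementation:

```python
# Toy sketch: jointly training one transformer on an autoencoding (masked) objective
# and an autoregressive (next-token) objective. Generic illustration only; the
# model interface and alpha weighting are hypothetical, not xTrimoPGLM's code.
import torch.nn.functional as F

def joint_loss(model, masked_batch, causal_batch, alpha=0.5):
    # Autoencoding: recover masked residues from bidirectional context.
    mlm_logits = model(masked_batch["input_ids"], bidirectional=True)
    mlm_loss = F.cross_entropy(
        mlm_logits.view(-1, mlm_logits.size(-1)),
        masked_batch["labels"].view(-1),
        ignore_index=-100,  # positions that were not masked are ignored
    )
    # Autoregressive: predict each residue from the ones before it.
    ar_logits = model(causal_batch["input_ids"], bidirectional=False)
    ar_loss = F.cross_entropy(
        ar_logits[:, :-1].reshape(-1, ar_logits.size(-1)),
        causal_batch["input_ids"][:, 1:].reshape(-1),
    )
    # One backward pass then improves both capabilities at once.
    return alpha * mlm_loss + (1 - alpha) * ar_loss
```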
