Updated May 2, 2026

AI Power Rankings

Independent · Frontier AI lab index

Zhipu AI

Beijing, China

First pure-play foundation model company to IPO (Jan 2026); Tsinghua spinout.

Efforts: GLM-5, GLM-5.1, ChatGLM, Z.ai platform
Rank: #15
Structural: 6.24
Unweighted: 6.20
vs #1: -3.22

Profile across ten areas

Area breakdown

🧠 Model Quality
Core Engine · 18%
GLM-5 (744B parameters, 44B active, MoE) and GLM-5.1 (open-sourced April 8, 2026) reached global #3 and Chinese #1 on combined SWE-bench Pro / Terminal-Bench / NL2Repo, and can run an autonomous task for over 8 hours; performance outside coding is less distinguished, and the benchmarks are not yet broadly verified.
— Claude Opus 4.7
Claude: 6.9 · Claude: 6.6 · Gemini: 7.6 · GPT-5.5: 7.8
Panel mean: 7.2
🔬 Research
Core Engine · 12%
As a Tsinghua University spinout, Zhipu has a strong academic research foundation; GLM-5 was trained entirely on domestic Huawei Ascend chips (a notable compute-research result), and 8-hour autonomous task execution is a real capability. Foundational research output still trails DeepSeek and Alibaba's Qwen team.
— Claude Opus 4.7
Claude: 6.0 · Claude: 5.6 · Gemini: 8.0 · GPT-5.5: 7.4
Panel mean: 6.8
๐Ÿ—๏ธProduct
Delivery ยท 10%
Z.ai and GLM APIs support assistant, websites, slides, data analysis, and developer workflows; the limitation is that the platform is smaller and less globally embedded than Alibaba Cloud, ByteDance, Baidu, OpenAI, or Anthropic.
โ€” GPT-5.5 Pro
Claude: 4.8Claude: 4.8Gemini: 7.8GPT-5.5: 7.1
6.1
panel mean
💰 Business
Accelerants & Stabilizers · 5%
Zhipu is a significant Chinese AI startup with visible model and platform releases, but public revenue, valuation durability, and customer-scale data are limited; the counterpoint is that its government and enterprise positioning could be stronger than public data reveals.
— GPT-5.5 Pro
Claude: 5.6 · Claude: 5.7 · Gemini: 7.7 · GPT-5.5: 5.5
Panel mean: 6.1
⚡ Compute
Core Engine · 14%
Zhipu has enough compute access to train and serve competitive GLM models, but there is little public evidence of owned hyperscale infrastructure or custom silicon; the gap is especially large versus Alibaba, ByteDance, Baidu, or U.S. hyperscalers.
— GPT-5.5 Pro
Claude: 4.0 · Claude: 3.8 · Gemini: 7.4 · GPT-5.5: 6.3
Panel mean: 5.4
๐Ÿ›ก๏ธSafety
Accelerants & Stabilizers ยท 6%
Zhipu operates in a heavily regulated Chinese AI environment and provides model documentation, but public red-team, system-card, and alignment-research disclosures are limited; the counterpoint is that enterprise/government compliance may be stronger than externally visible.
โ€” GPT-5.5 Pro
Claude: 3.7Claude: 3.7Gemini: 7.3GPT-5.5: 6.4
5.3
panel mean
👥 Talent
Delivery · 10%
A strong Tsinghua University connection provides a steady talent pipeline and academic prestige; the team is relatively small, and competing for top Chinese AI talent against better-funded ByteDance, Alibaba, and DeepSeek is increasingly difficult.
— Claude Opus 4.7
Claude: 5.8 · Claude: 5.8 · Gemini: 8.3 · GPT-5.5: 7.0
Panel mean: 6.7
๐ŸŒDistribution
Delivery ยท 8%
Growing in China with competitive pricing (~$0.80/M input tokens), but limited global presence; the IPO provides visibility but the platform has not achieved the user scale of Doubao, Ernie Bot, or Kimi.
โ€” Claude Opus 4.7
Claude: 4.2Claude: 4.2Gemini: 7.5GPT-5.5: 6.6
5.6
panel mean
๐ŸฐData & Moats
Core Engine ยท 12%
First-mover advantage as a publicly traded AI model company and the Tsinghua academic network provide some differentiation; but the company lacks proprietary data sources and the open-source strategy means model-level moats are thin.
โ€” Claude Opus 4.7
Claude: 3.9Claude: 3.9Gemini: 7.3GPT-5.5: 6.8
5.5
panel mean
🚀 Momentum
Accelerants & Stabilizers · 5%
The GLM-5.1 open-source release, the 19% stock pop, and the proven Huawei-only training stack are real wins; but compute shortages, capped product reach, and intense Chinese-market competition keep momentum middle-of-the-pack.
— Claude Opus 4.7
Claude: 5.9 · Gemini: 7.9 · GPT-5.5: 8.1
Panel mean: 7.3

Zhipu AI — News Log

Tracked for AI Power Rankings scoring. Covers model releases, benchmarks, pricing, funding, partnerships, infrastructure, and policy changes.


2026-04-08 — Zhipu AI open-sources GLM-5.1; GLM-5 at 744B parameters trained on Huawei chips

Published: 2026-04-08 | Logged: 2026-05-02T09:00Z | Area: Model Quality, Research & Innovation, Compute & Infra

Zhipu AI released GLM-5.1 as open source on April 8, following the GLM-5 launch in February 2026. GLM-5 features 744 billion parameters (44B active) in an MoE architecture, trained entirely on domestic Huawei Ascend chips. GLM-5.1 is positioned as the "world's strongest open-source model," achieving third place globally and first among Chinese/open-source models on combined SWE-bench Pro, Terminal-Bench, and NL2Repo benchmarks. The model can work independently on a single task for over 8 hours. Zhipu stock surged 19% intraday on the announcement. API pricing was raised 10% but remains approximately $0.80/M input tokens — roughly 6x cheaper than Claude Opus 4.6.

Scoring impact: Strong Model Quality improvement (frontier-competitive performance, #1 open-source). Research & Innovation boosted (domestic chip training, 8-hour autonomous tasks). Compute & Infra notable for Huawei-only training infrastructure.



โ† Higher rank
๐Ÿ‡จ๐Ÿ‡ณ Moonshot AI
#14 ยท 6.50