First pure-play foundation model company to IPO (Jan 2026); Tsinghua spinout.
GLM-5 (744B params, 44B active, MoE) and GLM-5.1 (open-sourced April 8, 2026) reached the global #3 / Chinese #1 spot on combined SWE-bench Pro / Terminal-Bench / NL2Repo and can run an autonomous task for over 8 hours; performance outside coding is still less distinguished and benchmarks are not yet broadly verified. · Claude Opus 4.7
As a Tsinghua University spinout, Zhipu has a strong academic research foundation, GLM-5 was trained entirely on domestic Huawei Ascend chips (a notable compute-research result), and 8-hour autonomous task execution is a real capability; foundational research output still trails DeepSeek and Alibaba's Qwen team. · Claude Opus 4.7
Z.ai and GLM APIs support assistant, websites, slides, data analysis, and developer workflows; the limitation is that the platform is smaller and less globally embedded than Alibaba Cloud, ByteDance, Baidu, OpenAI, or Anthropic. · GPT-5.5 Pro
Zhipu is a significant Chinese AI startup with visible model and platform releases, but public revenue, valuation durability, and customer-scale data are limited; the counterpoint is that its government and enterprise positioning could be stronger than public data reveals. · GPT-5.5 Pro
Zhipu has enough compute access to train and serve competitive GLM models, but there is little public evidence of owned hyperscale infrastructure or custom silicon; the limitation is especially large versus Alibaba, ByteDance, Baidu, or U.S. hyperscalers. · GPT-5.5 Pro
Zhipu operates in a heavily regulated Chinese AI environment and provides model documentation, but public red-team, system-card, and alignment-research disclosures are limited; the counterpoint is that enterprise/government compliance may be stronger than externally visible. · GPT-5.5 Pro
Strong Tsinghua University connection provides a steady talent pipeline and academic prestige; the team is relatively small, and competing for top Chinese AI talent against better-funded ByteDance, Alibaba, and DeepSeek is increasingly difficult. · Claude Opus 4.7
Growing in China with competitive pricing (~$0.80/M input tokens), but limited global presence; the IPO provides visibility but the platform has not achieved the user scale of Doubao, Ernie Bot, or Kimi. · Claude Opus 4.7
First-mover advantage as a publicly traded AI model company and the Tsinghua academic network provide some differentiation; but the company lacks proprietary data sources and the open-source strategy means model-level moats are thin. · Claude Opus 4.7
GLM-5.1 open-source release, the 19% stock pop, and the proven Huawei-only training stack are real wins; but compute shortages, capped product reach, and intense Chinese-market competition keep momentum middle-of-the-pack. · Claude Opus 4.7
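The MoE figures above (744B total parameters, 44B active) imply that only a small fraction of the network fires per token. A minimal sketch of that arithmetic, using the standard ~2 FLOPs-per-active-parameter rule of thumb for a forward pass (the rule of thumb is an assumption, not from this log):

```python
# Parameter counts reported in the log.
total_params = 744e9    # 744B total parameters
active_params = 44e9    # 44B active per token (MoE routing)

# Fraction of weights exercised on any single token.
active_fraction = active_params / total_params
print(f"active fraction: {active_fraction:.1%}")  # ~5.9%

# Rough per-token inference compute, assuming the common
# ~2 FLOPs per active parameter heuristic for a forward pass.
flops_per_token = 2 * active_params
print(f"~{flops_per_token:.2e} FLOPs per token")
```

The point of the sketch: per-token compute tracks the 44B active parameters, not the 744B total, which is how a model this large can be served at the aggressive pricing noted below.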
Tracked for AI Power Rankings scoring. Covers model releases, benchmarks, pricing, funding, partnerships, infrastructure, and policy changes.
Published: 2026-04-08 | Logged: 2026-05-02T09:00Z | Area: Model Quality, Research & Innovation, Compute & Infra
Zhipu AI released GLM-5.1 as open source on April 8, following the GLM-5 launch in February 2026. GLM-5 features 744 billion parameters (44B active) in an MoE architecture and was trained entirely on domestic Huawei Ascend chips. GLM-5.1 is positioned as the "world's strongest open-source model," placing third globally and first among Chinese/open-source models on the combined SWE-bench Pro, Terminal-Bench, and NL2Repo benchmarks, and it can work independently on a single task for over 8 hours. Zhipu stock surged 19% intraday on the announcement. API pricing was raised 10% but remains approximately $0.80/M input tokens, roughly 6x cheaper than Claude Opus 4.6.
Scoring impact: Strong Model Quality improvement (frontier-competitive performance, #1 open-source). Research & Innovation boosted (domestic chip training, 8-hour autonomous tasks). Compute & Infra notable for Huawei-only training infrastructure.
Sources: