Knowledge Atlas Technology Joint Stock Company Limited
2513 · HKEX · China
Trains Chinese-language AI models on domestic Chinese data and sells access to those models through a compliance-ready cloud platform.
Knowledge Atlas Technology trains large Chinese-language AI models — called GLM — on internet data held inside China's regulatory perimeter, then deploys the resulting models through its Zhipu MaaS Platform, where enterprises build custom applications on top of them. The Chinese-language performance advantage comes directly from that domestically held training corpus, which foreign competitors cannot legally acquire at scale because doing so requires ICP licensing, domestic data residency, and algorithmic audit approval — a sequence no foreign provider has completed. Once an enterprise builds a custom application on the MaaS Platform, switching to a different provider means rebuilding that application from scratch and then putting it through a full regulatory requalification process, so the compliance overhead that made Zhipu attractive in the first place is the same thing that makes leaving expensive. The part of the business that cannot scale freely is training each new generation of GLM: that requires exponentially more compute from Beijing-based GPU clusters, and US semiconductor export controls prevent Zhipu from simply buying more advanced chips to expand that pool.
How does this company make money?
Zhipu charges businesses and individual users a recurring subscription fee to access Zhipu QingYan and the MaaS Platform. It also charges usage-based fees tied to the volume of API calls made against GLM models. Larger enterprises that want to run GLM on their own servers rather than in the cloud pay a separate licensing fee for on-premise deployments. Developers who use CodeGeeX for code generation pay a subscription for that tool.
What makes this company hard to replace?
Any enterprise that has built a custom application on the Zhipu MaaS Platform would have to rebuild that application from scratch if it moved to a different AI provider. Chinese regulations then require the rebuilt application to go through a full requalification process before it can be used — a slow and costly approval cycle. Research institutions connected to the AMiner platform face additional data migration costs because their academic data and workflows are already embedded in that system.
What limits this company?
Training each new version of GLM requires more computing power than the last, but Zhipu can only use GPU clusters inside China. US export controls prevent Zhipu from freely purchasing NVIDIA A100 and H100 chips on the open market, so the pool of hardware available for training new models is capped and cannot simply be expanded by spending more money.
What does this company depend on?
Zhipu cannot operate without NVIDIA A100 or H100 GPUs for training its models, Chinese internet crawl data as the raw material for those models, Beijing datacenter infrastructure to store and process that data, China's ICP licensing to legally run internet services, and ongoing compliance with China's Cybersecurity Law to process user data at all.
Who depends on this company?
Chinese enterprises using AutoGLM-Web would lose their localized AI agent tools for automated web tasks. Chinese software developers using CodeGeeX would lose a code generation tool built specifically for Chinese-language programming contexts. Chinese researchers using the AMiner platform would lose tools for analysing Chinese academic papers. Chinese government agencies would lose a ready-made option for compliant, on-premise AI deployment.
How does this company scale?
Once a version of GLM is trained, the resulting model weights can be deployed across as many users or applications as needed at very low additional cost — adding one more customer does not require retraining the model. What does not get cheaper as the company grows is building the next model: each new generation of GLM requires exponentially more compute from a GPU supply that US export controls keep constrained, and the Chinese-language training data cannot be processed outside China's regulatory perimeter.
What external forces can significantly affect this company?
US semiconductor export controls directly limit how many advanced AI training chips Zhipu can obtain, which slows every new model release. China's own evolving AI regulation framework requires algorithmic audits and content controls that add compliance overhead and can change without warning. Broader geopolitical tensions between China and Western countries affect cross-border data flows and technology partnerships, potentially cutting off access to tools or research collaborations that the company relies on.
Where is this company structurally vulnerable?
If Chinese regulators changed the rules so that companies could no longer freely crawl domestic internet data for AI training — or required that all training data come from state-controlled exchanges — Zhipu would lose access to the corpus that makes GLM good at Chinese. That would stop new model development and leave all the enterprise applications built on the MaaS Platform with no path to improvement.