MetaX Integrated Circuits (Shanghai) Co., Ltd.
688802 · SSE · China
Builds Chinese GPU chips and the software that makes them run AI code written for NVIDIA hardware.
MetaX Integrated Circuits builds GPU chips for Chinese AI companies, but its chips cannot natively run CUDA, the NVIDIA programming platform that nearly every AI training framework targets. The company therefore also built a compiler and runtime layer that intercepts CUDA calls and translates them into instructions its own hardware understands, so customers can run existing models without rewriting them. That translation layer is what makes the hardware usable, but each new NVIDIA GPU generation adds CUDA features the layer must learn to translate, turning the toolchain into a permanent maintenance obligation that always runs one step behind its target. The chips themselves must be manufactured at TSMC or Samsung on advanced process nodes, and US export controls can block access to those nodes, which means the performance ceiling of each chip generation is decided in Washington before the design even reaches the factory. If the translation layer ever accumulates enough compatibility gaps that AI models produce different numerical results on MetaX hardware than on NVIDIA's, Chinese cloud providers would have a concrete technical reason to tolerate the compliance risk of switching back, and the entire structure collapses.
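The interception-and-translation idea can be sketched as a dispatch table that routes calls written against a foreign API to a native backend. This is a toy illustration only: every name in it (NativeBackend, intercept, the cuda* keys) is invented for the sketch, and the real MetaX toolchain works at the compiler and driver level, not in Python. The KeyError path models the "compatibility gap" the profile describes: an API the layer has not yet learned to translate.

```python
# Toy sketch of a runtime translation layer. All names are hypothetical;
# this only illustrates the dispatch idea, not any real vendor toolchain.

class NativeBackend:
    """Stand-in for the vendor's own hardware runtime."""
    def alloc(self, n):
        return bytearray(n)          # pretend device allocation
    def copy_in(self, buf, data):
        buf[:len(data)] = data       # pretend host-to-device copy
        return buf
    def launch(self, kernel, buf):
        return kernel(buf)           # pretend kernel launch

_backend = NativeBackend()

# Translation table: foreign API name -> native implementation.
_dispatch = {
    "cudaMalloc": _backend.alloc,
    "cudaMemcpy": _backend.copy_in,
    "cudaLaunchKernel": _backend.launch,
}

def intercept(api_name, *args):
    """Route a foreign API call to the native backend. Calls with no
    translation entry surface as compatibility gaps."""
    try:
        handler = _dispatch[api_name]
    except KeyError:
        raise NotImplementedError(f"no translation for {api_name!r}")
    return handler(*args)
```

The maintenance burden the profile describes maps onto this sketch directly: every new CUDA release adds names that must gain a `_dispatch` entry, and anything missing fails loudly at runtime.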
How does this company make money?
The company earns money primarily by selling GPU chips directly to system integrators and OEMs on a per-unit basis. It also charges licensing fees when other companies use its GPU IP cores to design their own chips. Nearly all of this revenue comes from Chinese customers, both because domestic preference policies steer buyers toward local suppliers and because US export restrictions limit which competing NVIDIA products can enter that market.
What makes this company hard to replace?
Switching away from the company's chips would not be simple even if a customer wanted to. Integrating the GPU drivers and compiler toolchain into a working AI development environment takes months of validation and tuning. Any existing CUDA-based models must be recompiled and tested on the proprietary architecture before they can be trusted in production. On top of that, Chinese regulations actively favor domestic semiconductors in sensitive applications, which pushes customers toward this hardware and makes switching back to US suppliers a compliance risk.
What limits this company?
Each new chip generation must be manufactured at an advanced foundry — either TSMC or Samsung — and getting approved to use a new fabrication process takes two to three years. US export controls can block or delay that approval entirely, which means the performance ceiling of every chip is set by whatever Washington permits at the time of manufacturing, not by what the company's engineers designed.
What does this company depend on?
The company cannot operate without TSMC or Samsung to physically manufacture its chips. It also depends on its own proprietary GPU compiler toolchain — without it, customers cannot run their code. Memory controller IP is required to connect the chips to HBM and GDDR memory. Packaging and assembly suppliers are needed to put multi-chip GPU modules together. And Chinese government approvals govern what semiconductor technology the company can import or export.
Who depends on this company?
Chinese AI companies training large language models would lose their main domestic source of GPU compute, a setback for strategic computing independence. Autonomous vehicle developers in China would lose the locally sourced inference chips their regulatory compliance requires. Chinese cloud service providers would lose the only domestic alternative to NVIDIA's data center GPUs for running customer AI workloads.
How does this company scale?
The compiler toolchain and developer tools, once built, can serve an unlimited number of customers at almost no extra cost; a new user just downloads and runs them. What does not scale easily is the chip itself: each new GPU architecture requires a two- to three-year design and foundry qualification cycle, and that timeline stays fixed no matter how fast the company grows.
What external forces can significantly affect this company?
US-China semiconductor export controls are the most direct pressure — they restrict which foundry process nodes and EDA software tools the company can access, directly limiting how advanced its chips can be. Chinese government self-sufficiency policies work in the opposite direction, pushing domestic buyers toward the company's products. Meanwhile, NVIDIA keeps expanding CUDA with every new GPU generation, which forces the company's compiler to keep chasing a moving target it never fully catches.
Where is this company structurally vulnerable?
If the translation layer develops gaps wide enough that mission-critical AI models produce different numerical results on the domestic chip than they would on a native NVIDIA GPU, cloud providers and large language model developers would face a serious accuracy problem. At that point, no government policy favoring domestic chips could convince them to stay — the toolchain's entire value would collapse, and the foundry investment behind the chips would have nothing to justify it.