Nvidia's Climb to $4T and the Peaks Ahead

Nvidia is now worth $4 trillion. Peak? Nope. Just a checkpoint. This company has a habit of turning summits into basecamps.

To see why $4T is a milestone rather than the peak, we’ll retrace Nvidia’s rise from dominating the training era to powering the macro shift to inference. Then we’ll look ahead. Agentic AI, autonomy, and robotics are each shaping up to become a trillion-dollar market in its own right.

We’ll also explore why constrained supply has quietly been a strength and why Nvidia’s long-term growth will hinge on access to China, the world’s most strategically important market for physical AI.

If you’re tired of Nvidia being caught in geopolitical crossfire… brace yourself. Its future is even more entangled with China.

The First Trillion

Nvidia’s first trillion-dollar milestone was the payoff of being the first and best platform for large-scale AI training.

Chips alone weren’t enough. Nvidia had to control the full system to make clusters scale. It already had the parallel compute (GPUs) and the software (CUDA). The missing piece was high-bandwidth interconnect.
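To make those three layers concrete, here’s a minimal sketch (ours, not Nvidia’s) of one distributed training step in PyTorch; the model and sizes are placeholders. The point is that every step exercises all three layers: CUDA kernels for the math, Nvidia’s software stack underneath, and NCCL collectives riding the interconnect to synchronize gradients.

```python
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def train_step():
    # NCCL is Nvidia's collective-communication library: it routes the
    # gradient all-reduce over NVLink within a node and over InfiniBand
    # (the Mellanox piece) across nodes.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)

    # Parallel compute: the forward/backward math runs as CUDA kernels.
    model = torch.nn.Linear(4096, 4096).cuda()

    # Interconnect: DDP all-reduces gradients every step, so step time is
    # bounded by communication bandwidth as much as by raw FLOPs.
    model = DDP(model, device_ids=[local_rank])

    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    x = torch.randn(32, 4096, device="cuda")
    loss = model(x).square().mean()  # software: CUDA/cuBLAS under the hood
    loss.backward()                  # gradients synchronized across GPUs here
    opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    train_step()  # e.g. torchrun --nproc_per_node=8 train_sketch.py
```

As clusters grow, that per-step all-reduce is exactly where the interconnect becomes the bottleneck.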

That came with the acquisition of Mellanox, announced in 2019. From the press release:

“Datacenters in the future will be architected as giant compute engines with tens of thousands of compute nodes, designed holistically with their interconnects for optimal performance.”

That vision came true faster than expected.
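How fast? A rough back-of-the-envelope (our figures, purely illustrative) using the widely cited ~6·N·D approximation for training FLOPs shows why frontier training pushed clusters into that regime:

```python
# Illustrative frontier-training arithmetic (figures are ours, not from
# the article): total FLOPs ~ 6 * N * D, divided by sustained throughput.
N = 175e9                # model parameters (GPT-3 scale)
D = 300e9                # training tokens
train_flops = 6 * N * D  # ~3.15e23 FLOPs

gpus = 1_000
peak_per_gpu = 312e12    # A100 BF16 peak, FLOP/s
utilization = 0.4        # sustained fraction of peak
seconds = train_flops / (gpus * peak_per_gpu * utilization)
print(f"~{seconds / 86400:.0f} days on {gpus:,} GPUs")  # ~29 days
```

At GPT-3-like scale, even a thousand well-utilized GPUs are occupied for about a month per run, and labs wanted many runs at ever-larger N and D. That is the “tens of thousands of compute nodes” regime the press release described.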

As transformer-based LLMs took off, training needs exploded. Only two companies had the AI stack to meet that scale: Nvidia and Google.

Yes, Google.

Over the past decade, Google built its own accelerated computing platform, including high-bandwidth interconnects. As Google Cloud’s blog recounts:

“In late 2014, when TPU v1 was being fabbed, we realized training capability was the limiting factor... So we built an interconnected machine with 256 TPU chips connected with a very high-bandwidth, custom interconnect to really get a lot of horsepower behind training models.”

But Google doesn’t sell silicon. TPUs were kept internal for years and only became available through Google Cloud in 2018. GCP remains a small slice (~10%) of Google’s ad-driven business. And while TPUs delivered strong performance on internal workloads, they lacked a robust external-facing software ecosystem. Add Google’s track record of shelving side projects, and TPUs were never a serious option for the broader market.

Nvidia, by contrast, exists to sell silicon.

Nvidia was the first, best, and only option for labs training at the frontier.

OpenAI, Meta, xAI, and others all built ...

Read full article on Chipstrat →