Stellar Financial Performance
On November 19, NVIDIA announced record-breaking third-quarter fiscal 2026 revenue of $57 billion, up 22% from the previous quarter and 62% year over year. The surge was fueled by robust demand for accelerated computing and AI infrastructure. The company’s data center segment led the charge, generating $51.2 billion, up 66% from the same period last year.
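For readers who want to sanity-check the growth figures, the short sketch below back-calculates the prior-quarter and prior-year revenue implied by the $57 billion result and the stated 22% and 62% growth rates. It is a rough illustration; the rounded outputs are derived here, not NVIDIA-reported figures.

```python
# Back-of-envelope check of the reported growth rates (figures in $ billions).
Q3_FY26_REVENUE = 57.0        # reported Q3 FY2026 revenue
QOQ_GROWTH = 0.22             # 22% increase over the previous quarter
YOY_GROWTH = 0.62             # 62% increase over Q3 FY2025

implied_q2_fy26 = Q3_FY26_REVENUE / (1 + QOQ_GROWTH)   # roughly 46.7
implied_q3_fy25 = Q3_FY26_REVENUE / (1 + YOY_GROWTH)   # roughly 35.2

print(f"Implied Q2 FY2026 revenue: ~${implied_q2_fy26:.1f}B")
print(f"Implied Q3 FY2025 revenue: ~${implied_q3_fy25:.1f}B")
```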
Blackwell Architecture in High Demand
CEO Jensen Huang emphasized that the demand for NVIDIA’s Blackwell architecture continues to surpass supply. “Blackwell sales are off the charts, and cloud GPUs are sold out,” Huang said in the earnings release. He noted that computing demand is growing exponentially, both in AI training and inference, signaling that the industry has entered what he calls a “virtuous cycle of AI.”
Data Center Growth and Strategic Drivers
During the earnings call, CFO Colette Kress highlighted the company’s record Q3 data center revenue. “Record Q3 data center revenue of $51 billion increased 66% year over year, a significant feat at our scale,” she said. Kress credited the growth to the ramp of the GB300 GPU, strong networking demand, and broader adoption of AI across hyperscalers and model developers.
Kress further stated, “The clouds are sold out, and our GPU installed base, including Blackwell, Hopper, and Ampere generations, is fully utilized.”
Addressing AI Bubble Concerns
Responding to market concerns about a potential AI bubble, Huang dismissed the notion, stating, “There’s been a lot of talk about an AI bubble. From our vantage point, we see something very different.”
Blackwell Performance and Platform Transition
The GB300 GPU now contributes roughly two-thirds of total Blackwell revenue, surpassing the GB200 as clients transition to the new platform. Huang highlighted significant performance improvements, noting that Blackwell Ultra delivers five times faster training compared to Hopper. Additionally, on DeepSeek R1 benchmarks, it offers ten times higher performance per watt and ten times lower cost per token than the H200.
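To put the stated multipliers in concrete terms, the sketch below applies the reported ten-times gains to a purely hypothetical H200 baseline; the baseline cost and efficiency values are illustrative assumptions, not figures from NVIDIA or the DeepSeek R1 benchmark.

```python
# Applying the article's stated Blackwell Ultra multipliers to a hypothetical
# H200 baseline. The baseline values are illustrative assumptions only.
H200_TOKENS_PER_JOULE = 1.0        # normalized baseline efficiency (assumed)
H200_COST_PER_M_TOKENS = 2.00      # assumed $ per million tokens served

PERF_PER_WATT_GAIN = 10            # "ten times higher performance per watt"
COST_PER_TOKEN_GAIN = 10           # "ten times lower cost per token"

ultra_tokens_per_joule = H200_TOKENS_PER_JOULE * PERF_PER_WATT_GAIN
ultra_cost_per_m_tokens = H200_COST_PER_M_TOKENS / COST_PER_TOKEN_GAIN

print(f"Relative efficiency vs. H200: {ultra_tokens_per_joule:.0f}x")
print(f"Implied cost per million tokens: ${ultra_cost_per_m_tokens:.2f} "
      f"(vs. ${H200_COST_PER_M_TOKENS:.2f} assumed baseline)")
```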
Rubin Platform and Future Roadmap
NVIDIA’s next-generation Rubin platform remains on track for a 2026 rollout, with first silicon already in hand. “Our ecosystem will be ready for a fast Rubin ramp,” Kress confirmed, underscoring the company’s preparation for rapid deployment.
Expanding Customer Base and AI Deployments
NVIDIA revealed involvement in AI factory projects totaling five million GPUs, spanning cloud providers, sovereign initiatives, enterprises, and supercomputing centers. Large-scale deployments include xAI’s Colossus 2, a gigawatt-scale data center, as well as an expanded collaboration with AWS and HUMAIN that plans up to 150,000 NVIDIA AI accelerators.
Under a new strategic partnership, Anthropic will adopt NVIDIA’s architecture for the first time, with a commitment of up to one gigawatt of compute for future systems.
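To give a rough sense of what a one-gigawatt commitment implies, the sketch below converts facility power into an approximate accelerator count. The per-accelerator power budget and overhead factor are assumptions for illustration, not figures disclosed by NVIDIA, xAI, or Anthropic.

```python
# Rough scale estimate for a one-gigawatt AI deployment.
# Per-accelerator draw and facility overhead are assumptions for illustration,
# not figures disclosed on the earnings call.
FACILITY_POWER_W = 1_000_000_000   # 1 gigawatt of total facility power
POWER_PER_ACCELERATOR_W = 1_200    # assumed IT power per accelerator (GPU + host + networking share)
PUE = 1.3                          # assumed power usage effectiveness (cooling and facility overhead)

it_power_w = FACILITY_POWER_W / PUE
accelerator_count = it_power_w / POWER_PER_ACCELERATOR_W
print(f"Roughly {accelerator_count:,.0f} accelerators per gigawatt under these assumptions")
```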
“We run every AI model—OpenAI, Anthropic, xAI, Gemini, science models, biology models, robotics models,” Huang emphasized, showcasing NVIDIA’s wide-ranging influence across the AI ecosystem.
