CoreWeave Becomes First Cloud Provider to Deploy Nvidia GB300 NVL72 Platform
Accelerating its cloud AI service
This is a Press Release edited by StorageNewsletter.com on July 9, 2025, at 2:01 pm.

CoreWeave, an AI Hyperscaler, announced it is the first AI cloud provider to deploy the latest Nvidia GB300 NVL72 systems for customers, with plans to scale deployments worldwide.
The Nvidia GB300 NVL72 represents a significant leap in performance for AI reasoning and agentic workloads, delivering up to a 10x boost in user responsiveness, a 5x improvement in throughput per watt compared to the previous-generation Nvidia Hopper architecture, and a 50x increase in output for reasoning model inference.
“CoreWeave is constantly working to push the boundaries of AI development further, deploying the bleeding-edge cloud capabilities required to train the next gen of AI models,” said Peter Salanki, co-founder and CTO, CoreWeave. “We’re proud to be the first to stand up this transformative platform and help innovators prepare for the next exciting wave of AI.”
CoreWeave collaborated with Dell, Switch, and Vertiv to build the initial deployment of Nvidia GB300 NVL72 systems, enabling greater speed and efficiency in bringing the latest accelerated computing offerings from Nvidia to CoreWeave’s cloud platform.
The deployment of the GB300 NVL72 is tightly integrated with CoreWeave’s cloud-native software stack, from its CoreWeave Kubernetes Service (CKS) and Slurm on Kubernetes (SUNK) to its deep observability and custom-designed Rack LifeCycle Controller (RLCC). CoreWeave recently announced that hardware-level data and cluster health events are now integrated directly into the Weights & Biases developer platform, which CoreWeave acquired earlier this year.
This achievement continues CoreWeave’s legacy of delivering first-to-market access to the world’s most advanced AI infrastructure demanded by the world’s leading AI labs and enterprises. This initial deployment of Nvidia GB300 NVL72 rack-scale systems expands on CoreWeave’s existing Blackwell fleet, which also includes HGX B200 and GB200 NVL72 systems. Last year, CoreWeave was among the first to offer H200 GPUs and was the first AI cloud provider to make GB200 NVL72 systems available.
In June 2025, CoreWeave, in collaboration with Nvidia and IBM, submitted the largest-ever MLPerf Training v5.0 benchmark using nearly 2,500 GB200 Grace Blackwell Superchips, achieving a breakthrough result on the most complex model, Llama 3.1 405B, in just 27.3 minutes. CoreWeave is the only hyperscaler to achieve the highest Platinum rating in SemiAnalysis’s GPU Cloud ClusterMAX Rating System, an independent AI cloud industry benchmark.