Oracle and NVIDIA Help Enterprises and Developers Accelerate AI Innovation
NVIDIA AI Enterprise available natively through Oracle Cloud Infrastructure Console, enabling customers to easily access 160+ AI tools for training and inference
This is a Press Release edited by StorageNewsletter.com on June 16, 2025 at 2:01 pm
Oracle International Corp. has expanded its collaboration with NVIDIA Corp. to help customers streamline the development and deployment of production-ready AI, develop and run next-gen reasoning models and AI agents, and access the computing resources needed to further accelerate AI innovation.
As part of the initiative, NVIDIA AI Enterprise, an end-to-end, cloud-native software platform, is available natively through the Oracle Cloud Infrastructure (OCI) Console. In addition, NVIDIA GB200 NVL72 systems on OCI Supercluster are generally available with up to 131,072 NVIDIA Blackwell GPUs. Oracle has also become one of the 1st hyperscalers to integrate with NVIDIA DGX Cloud Lepton, an AI platform with a compute marketplace that connects developers with a global network of GPU compute.
“Oracle has become the platform of choice for AI training and inferencing, and our work with NVIDIA boosts our ability to support customers running some of the world’s most demanding AI workloads,” said Karan Batta, SVP, Oracle Cloud Infrastructure. “Combining NVIDIA’s full-stack AI computing platform with OCI’s performance, security, and deployment flexibility enables us to deliver AI capabilities at scale to help advance AI efforts globally.”
“Developers need the latest AI infrastructure and software to rapidly build and launch innovative solutions,” said Ian Buck, VP, hyperscale and HPC, NVIDIA. “With OCI and NVIDIA, they get the performance and tools to bring ideas to life, wherever their work happens.”
Oracle Expands Distributed Cloud Capabilities with NVIDIA AI Enterprise
Unlike other NVIDIA AI Enterprise offerings, which are available through a marketplace, OCI makes the platform natively available through the OCI Console and lets customers purchase it with their existing Oracle Universal Credits. This reduces deployment time and allows customers to benefit from direct billing and support. In addition, with NVIDIA AI Enterprise on OCI, customers can quickly and easily access 160+ AI tools for training and inference, including NVIDIA NIM microservices, a set of optimized, cloud-native inference microservices designed to simplify the deployment of GenAI models. Customers can combine these end-to-end training and inference capabilities with OCI services for building applications and managing data across a range of distributed cloud deployment options.
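For readers curious what consuming a deployed NIM microservice looks like in practice, the sketch below shows a minimal client call against a NIM inference endpoint using its OpenAI-compatible API. The endpoint URL, model name, and credential are placeholders for illustration only; the actual values depend on which NIM microservice is deployed in a given OCI tenancy.

```python
# Minimal, illustrative sketch: querying a NIM inference endpoint through
# its OpenAI-compatible API. base_url, api_key, and model are placeholders,
# not values from the press release.
from openai import OpenAI

client = OpenAI(
    base_url="https://<your-nim-endpoint>/v1",  # hypothetical NIM endpoint running on OCI
    api_key="<your-api-key>",                   # placeholder credential
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # example model name; use the model your NIM serves
    messages=[
        {"role": "user", "content": "Summarize the benefits of GPU-accelerated inference."}
    ],
    max_tokens=128,
)

print(response.choices[0].message.content)
```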
By making NVIDIA AI Enterprise available through the OCI Console, Oracle is helping customers to deploy it across OCI’s distributed cloud, which includes OCI’s public regions, Government Clouds, OCI sovereign cloud solutions, OCI Dedicated Region, Oracle Alloy, OCI Compute Cloud@Customer, and OCI Roving Edge Devices. This helps customers address security, regulatory, and compliance requirements when developing, deploying, and operating their enterprise AI stack.
NVIDIA Blackwell on OCI Enables AI Anywhere
To help meet the increasing need for AI training and inference, Oracle and NVIDIA continue to evolve AI infrastructure with new NVIDIA GPU types across Oracle’s distributed cloud. For example, OCI now offers liquid-cooled NVIDIA GB200 NVL72 systems on OCI Supercluster that can scale up to 131,072 NVIDIA Blackwell GPUs. In addition, customers can now use 1,000s of NVIDIA Blackwell GPUs on NVIDIA DGX Cloud and OCI to develop and run next-gen reasoning models and AI agents.
Oracle’s distributed cloud, AI infrastructure, and GenAI services, combined with NVIDIA accelerated computing and GenAI software, are enabling governments and enterprises to deploy AI factories. These new AI factories leverage the NVIDIA GB200 NVL72 platform, a rack-scale system that combines 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs, to help deliver exceptional performance and energy efficiency for agentic AI accelerated by advanced AI reasoning models.
Connecting Developers with Global GPU Compute
To help developers easily access the advanced GPU resources they need to further accelerate AI development and deployment, Oracle is one of the 1st hyperscalers to integrate with NVIDIA DGX Cloud Lepton. This integration enables developers to access OCI’s high-performance GPU clusters and the scalable compute needed for AI training and inference, digital twins, and massively parallel HPC applications. It also helps developers support strategic and sovereign AI goals by allowing them to tap into GPU compute capacity in specific regions for both on-demand and long-term computing.
Expansive AI and Cloud Options Help Customers Accelerate AI Capabilities
Oracle and NVIDIA are enabling organizations in Europe and around the world to take advantage of accelerated computing and AI.
Almawave, an Italian AI leader, is using OCI AI infrastructure and NVIDIA Hopper GPUs to run training and inferencing workloads for Velvet, a family of multilingual GenAI models. Designed and built in Italy, the Velvet models focus on the Italian language and content while also supporting several major European languages, including English, French, German, Portuguese, and Spanish. The models are being integrated into Almawave’s broad portfolio of vertical AI applications as part of its AIWave service platform.
“Our commitment is to accelerate innovation by building a high-performing, transparent, and fully integrated Italian foundational AI in a European context—and we are only just getting started,” said Valeria Sandei, CEO, Almawave. “Oracle and NVIDIA are valued partners for us in this effort, given our common vision around AI and the powerful infrastructure capabilities they bring to the development and operation of Velvet.”
Cerebriu, a leading Danish health tech company, is using OCI and NVIDIA Hopper GPUs to build an AI-accelerated tool that aims to transform the clinical analysis of brain MRI scans. Developed by training Cerebriu’s proprietary deep learning models on 1,000s of multi-modal MRI images, the tool helps medical professionals accelerate clinical diagnosis of many time-sensitive medical conditions by enabling significantly faster interpretation of MRI scans than current methods allow.
“AI plays an increasingly critical role in how we design and differentiate our products,” said Marko Bauer, ML researcher, Cerebriu. “OCI and NVIDIA offer AI capabilities that are critical to helping us advance our product strategy, giving us the computing resources we need to discover and develop new AI use cases quickly, cost-effectively, and at scale. Finding the optimal way of training our models has been a key focus for us. While we’ve experimented with other cloud platforms for AI training, OCI and NVIDIA have provided us the best cloud infrastructure availability and price performance.”
Resources:
OCI AI infrastructure
Oracle and NVIDIA partnership
NVIDIA AI Enterprise with OCI
Oracle AI Infrastructure capabilities with NVIDIA Blackwell