HPE Deepens Integration with NVIDIA on AI Factory Portfolio

HPE Private Cloud AI, co-developed with NVIDIA; an Alletra Storage MP X10000 SDK for the NVIDIA AI Data Platform; HPE AI servers including ProLiant Compute DL380a Gen12; and OpsRamp software

Summary:

  • HPE Private Cloud AI, co-developed with NVIDIA, will support feature branch model updates from NVIDIA AI Enterprise and the NVIDIA Enterprise AI Factory validated design.
  • Alletra Storage MP X10000 offers an SDK for NVIDIA AI Data Platform to streamline unstructured data pipelines for ingestion, inferencing, training and continuous learning.
  • HPE AI servers rank No.1 in over 50 industry benchmarks and ProLiant Compute DL380a Gen12 will be available to order with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs starting June 4.
  • HPE OpsRamp software expands accelerated compute optimization tools to support NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs.

Hewlett Packard Enterprise announced enhancements to the portfolio of NVIDIA AI Computing by HPE solutions that support the entire AI lifecycle and meet the unique needs of enterprises, service providers, sovereigns and research and discovery organizations.

These updates deepen integration with NVIDIA AI Enterprise, expanding support for HPE Private Cloud AI with accelerated compute and launching the Alletra Storage MP X10000 SDK for the NVIDIA AI Data Platform. HPE is also releasing compute and software offerings with the NVIDIA RTX PRO 6000 Blackwell Server Edition GPU and the NVIDIA Enterprise AI Factory validated design.

“Our strong collaboration with NVIDIA continues to drive transformative outcomes for our shared customers,” said Antonio Neri, president and CEO, HPE. “By co-engineering cutting-edge AI technologies elevated by HPE’s robust solutions, we are empowering businesses to harness the full potential of these advancements throughout their organization, no matter where they are on their AI journey. Together, we are meeting the demands of today, while paving the way for an AI-driven future.”

“Enterprises can build the most advanced NVIDIA AI factories with HPE systems to ready their IT infrastructure for the era of generative and agentic AI,” said Jensen Huang, founder and CEO, NVIDIA Corp. “Together, NVIDIA and HPE are laying the foundation for businesses to harness intelligence as a new industrial resource that scales from the data center to the cloud and the edge.”

HPE Private Cloud AI adds feature branch support for NVIDIA AI Enterprise
Private Cloud AI, a turnkey, cloud-based AI factory co-developed with NVIDIA, includes a dedicated developer solution that helps customers proliferate unified AI strategies across the business, enabling more profitable workloads and significantly reducing risk. To further aid AI developers, HPE Private Cloud AI will support feature branch model updates from NVIDIA AI Enterprise, which include AI frameworks, NVIDIA NIM microservices for pre-trained models, and SDKs. Feature branch model support will allow developers to test and validate software features and optimizations for AI workloads. In combination with existing support for production branch models, which feature built-in guardrails, HPE Private Cloud AI will enable businesses of every size to build developer systems and scale to production-ready agentic and GenAI applications while adopting a safe, multi-layered approach across the enterprise.

HPE Private Cloud AI, a full-stack solution for agentic and GenAI workloads, will support the NVIDIA Enterprise AI Factory validated design.

Newest storage solution supports NVIDIA AI Data Platform

Alletra Storage MP X10000 will introduce an SDK that works with the NVIDIA AI Data Platform reference design. Connecting HPE’s newest data platform with NVIDIA’s customizable reference design will offer customers accelerated performance and intelligent pipeline orchestration to enable agentic AI. Part of the company’s growing data intelligence strategy, the X10000 SDK enables the integration of context-rich, AI-ready data directly into the NVIDIA AI ecosystem. This empowers enterprises to streamline unstructured data pipelines for ingestion, inference, training, and continuous learning across NVIDIA-accelerated infrastructure.

Primary benefits of SDK integration include:

  • Unlocking data value through flexible inline data processing, vector indexing, metadata enrichment, and data management.
  • Driving efficiency with remote direct memory access (RDMA) transfers between GPU memory, system memory, and the X10000 to accelerate the data path with the NVIDIA AI Data Platform.
  • Right-sizing deployments with modular, composable building blocks of the X10000, enabling customers to scale capacity and performance independently to align with workload requirements.
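The ingestion flow these bullets describe — inline data processing, vector indexing, and metadata enrichment — can be illustrated with a minimal Python sketch. The function names and the hash-based embedding below are illustrative stand-ins only, not part of the actual X10000 SDK or the NVIDIA AI Data Platform APIs:

```python
import hashlib

def chunk(text, size=200):
    """Split raw text into fixed-size pieces for inline processing."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed(text, dim=8):
    """Toy deterministic embedding (hash-based stand-in).
    A real pipeline would call a GPU-accelerated embedding model."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255.0 for b in digest[:dim]]

def ingest(documents):
    """Build a vector index with enriched metadata from raw documents."""
    index = []
    for doc_id, text in documents.items():
        for n, piece in enumerate(chunk(text)):
            index.append({
                "doc_id": doc_id,           # provenance metadata
                "chunk": n,                 # position within the document
                "vector": embed(piece),     # vector index entry
                "meta": {"chars": len(piece)},  # enrichment example
            })
    return index
```

An agentic application could then answer queries with nearest-neighbor search over the `vector` fields, while `doc_id` and `meta` preserve the provenance that makes the data "context-rich" in the sense used above.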

Customers will be able to use raw enterprise data to inform agentic AI applications and tools by seamlessly unifying storage and intelligence layers through RDMA transfers. HPE and NVIDIA are working together to enable a new era of real-time, intelligent data access for customers from the edge to the core to the cloud.

Additional updates about this integration will be announced at HPE Discover Las Vegas 2025.

AI server levels up with NVIDIA RTX PRO 6000 Blackwell support

ProLiant Compute DL380a Gen12

The company’s ProLiant Compute DL380a Gen12 servers featuring NVIDIA H100 NVL, H200 NVL and L40S GPUs topped the latest round of MLPerf Inference: Datacenter v5.0 benchmarks in 10 tests, including GPT-J, Llama2-70B, ResNet50 and RetinaNet. This AI server will soon be available with up to 10 NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, which will provide enhanced capabilities and deliver performance for enterprise AI workloads, including agentic multimodal AI inference, physical AI, model fine-tuning, as well as design, graphics and video applications.

Key features include:

  • Advanced cooling options: ProLiant Compute DL380a Gen12 is available in both air-cooled and direct liquid-cooled (DLC) options, supported by HPE’s liquid cooling expertise (1), to maintain optimal performance under heavy workloads.
  • Enhanced security: HPE Integrated Lights Out (iLO) 7, embedded in the ProLiant Compute Gen12 portfolio, features built-in safeguards based on Silicon Root of Trust and makes these the first servers with post-quantum cryptography readiness that meet the requirements for FIPS 140-3 Level 3 certification, a high-level cryptographic security standard.
  • Operations management: HPE Compute Ops Management provides secure and automated lifecycle management for server environments featuring proactive alerts and predictive AI-driven insights that inform increased energy efficiency and global system health.

Two additional servers topped MLPerf Inference v5.0 benchmarks, providing third-party validation of the firm’s leadership in AI innovation and showcasing the capabilities of the HPE AI Factory. Together with the ProLiant Compute DL380a Gen12, these systems lead in more than 50 scenarios.

Highlights include:

  • ProLiant Compute DL384 Gen12 server, featuring the dual-socket NVIDIA GH200 NVL2, ranked first in 4 tests including Llama2-70B and Mixtral-8x7B.

  • Cray XD670 server, with 8x NVIDIA H200 SXM GPUs, achieved the top ranking in 30 different scenarios, including large language models (LLMs) and computer vision tasks.

Advancing AI infrastructure with new accelerated compute optimization
The company’s OpsRamp software is expanding its AI infrastructure optimization solutions to support the upcoming NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs for AI workloads. The firm’s SaaS solution will help enterprise IT teams streamline operations as they deploy, monitor and optimize distributed AI infrastructure across hybrid environments. OpsRamp enables full-stack AI workload-to-infrastructure observability, workflow automation, and AI-powered analytics and event management. Deep integration with NVIDIA infrastructure – including NVIDIA accelerated computing, NVIDIA BlueField, NVIDIA Quantum InfiniBand and Spectrum-X Ethernet networking, and NVIDIA Base Command Manager – provides granular metrics to monitor the performance and resilience of AI infrastructure.

OpsRamp gives IT teams the ability to:

  • Observe overall health and performance of AI infrastructure by monitoring GPU temperature, utilization, memory usage, power consumption, clock speeds and fan speeds.
  • Optimize job scheduling and resources by tracking GPU and CPU utilization across the clusters.
  • Automate responses to certain events, for example, reducing clock speed or powering down a GPU to prevent damage.
  • Predict future resource needs and optimize resource allocation by analyzing historical performance and utilization data.
  • Monitor power consumption and resource utilization in order to optimize costs for large AI deployments.
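The threshold-driven automation described above (for example, reducing clock speed or powering down an overheating GPU) can be sketched in a few lines of Python. The field names and temperature thresholds below are illustrative assumptions, not OpsRamp defaults, and real telemetry would come from NVML/DCGM rather than a plain dict:

```python
def evaluate_gpu(sample, power_down_at=92, throttle_at=85):
    """Map one GPU telemetry sample to an automated response.

    `sample` holds metrics like those listed above (temperature,
    utilization, memory, power); the thresholds are illustrative
    assumptions, not vendor defaults.
    """
    if sample["temp_c"] >= power_down_at:
        return "power_down"    # last resort: prevent hardware damage
    if sample["temp_c"] >= throttle_at:
        return "reduce_clock"  # proactive throttling under heavy load
    if sample["util_pct"] < 10:
        return "reschedule"    # idle GPU: candidate for job packing
    return "ok"
```

A monitoring loop would apply a rule like this to each GPU in the cluster on every polling interval, feeding the resulting actions into the platform's event-management and automation layer.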

Availability:

  • Private Cloud AI will add feature branch support for NVIDIA AI Enterprise by summer 2025.
  • Alletra Storage MP X10000 SDK and direct memory access to NVIDIA accelerated computing infrastructure will be available starting Summer 2025.
  • ProLiant Compute DL380a Gen12 with NVIDIA RTX PRO 6000 Server Edition will be available to order starting June 4, 2025.
  • OpsRamp software will support the NVIDIA RTX PRO 6000 Server Edition at its launch.

(1) HPE has built and delivered the world’s fastest direct-liquid cooled supercomputers per the November 2024 TOP500 list.

Resources:
Supercharge AI by unleashing object storage data intelligence and performance    
HPE ProLiant Compute Gen12: HPE sets new AI inference world records: Continued excellence in performance    
HPE Cray XD670: HPE delivers leadership performance for AI inferencing in latest MLPerf benchmarks
