
Nvidia GTC 2026: Wiwynn Showcases Nvidia Vera Rubin NVL72 AI Factory Infrastructure

Bringing the power of the Nvidia processing platform to data centers worldwide

Wiwynn, an innovative cloud IT infrastructure provider, showcased the latest Nvidia-powered AI solutions, developed in collaboration with Wistron, at GTC 2026. From board-level innovation to rack-scale integration and validation, Wiwynn’s end-to-end expertise in accelerated computing, storage, and liquid cooling delivers future-ready AI factories with breakthrough performance, exceptional energy efficiency, and faster time-to-value.

“As AI accelerates, customers need a trusted partner to rapidly build out integrated, rack-scale solutions that harmonize compute, storage, networking, and liquid cooling,” said William Lin, president and CEO, Wiwynn. “With end-to-end integration capability and manufacturing excellence, Wiwynn brings the latest platforms and innovations powered by Nvidia to market faster, unlocking performance-per-watt gains and building a robust AI infrastructure layer that keeps customers at the forefront of the AI era.”

Technology highlights

  • Nvidia Vera Rubin NVL72: Wiwynn and Wistron are among the first ready to deliver the fully liquid-cooled, rack-scale platform unifying 72 Nvidia Rubin GPUs and 36 Nvidia Vera CPUs. Optimized for frontier AI model training, inference, and reasoning, the platform delivers up to 10x higher performance per watt through extreme co-design, bringing breakthrough performance and efficiency to AI factories
  • Nvidia HGX Rubin NVL8: A top-tier modular accelerated computing platform: a fanless, high-density 2U system with eight Nvidia Rubin GPUs and 100% liquid cooling for excellent power usage effectiveness (PUE). Scales from 8 to 16 systems per rack over Nvidia Spectrum-X Ethernet or Nvidia Quantum-X800 InfiniBand, adapting to evolving compute needs at scale
  • Nvidia RTX PRO Server: A compact 2U Arm-based platform that pairs dual Nvidia Vera CPUs with two Nvidia RTX PRO 4500 Blackwell Server Edition GPUs (32GB GDDR7 each) to power neural rendering and AI-driven design
  • Storage-Next: Part of Nvidia’s Storage-Next initiative, the GPU-initiated storage architecture leverages Nvidia SCADA to orchestrate I/O directly from the GPU across a 96-drive NVMe array, delivering ultra-high IOPS, sub-millisecond tail latencies, and petabyte-class density for GNN, LLM inference, and RAG workloads. Direct liquid cooling, integrated per-drive telemetry, and multi-zone leak detection enable high efficiency and hot-serviceability

“The next generation of AI innovation will run on rack-scale accelerated computing platforms designed for extreme performance and efficiency,” said Kaustubh Sanghani, VP, product management, Nvidia. “With deep expertise in liquid-cooled infrastructure and rack-scale system integration, Wiwynn is bringing the power of the Nvidia Vera Rubin platform to data centers worldwide, enabling customers to build AI factories that can scale training and inference for the agentic AI era.”
