Computex 2025: Gigabyte to Present End-to-End AI Portfolio
From scalable solutions to full-stack AI infrastructure and storage
This is a Press Release edited by StorageNewsletter.com on May 16, 2025, at 2:01 pm.
Gigabyte Technology Co., Ltd. will return to Computex 2025 from May 20 to 23 in Taipei, Taiwan, under the theme Omnipresence of Computing: AI Forward. The company will demonstrate how its complete spectrum of solutions spanning the AI lifecycle, from data center training to edge deployment and end-user applications, reshapes infrastructure to meet next-gen AI demands.
As GenAI continues to evolve, so do the demands for handling massive token volumes, real-time data streaming, and high-throughput compute environments. Gigabyte’s end-to-end portfolio – ranging from rack-scale infrastructure to servers, cooling systems, embedded platforms, and personal computing – forms the foundation to accelerate AI breakthroughs across industries.
Scalable AI Infrastructure Starts Here: GIGAPOD with GPM Integration
At the heart of the firm’s exhibit is the enhanced GIGAPOD, a scalable GPU cluster designed for high-density data centers and large-scale AI model training. Built for high-performance AI workloads, GIGAPOD supports the latest accelerator platforms, including AMD Instinct MI325X and NVIDIA HGX H200. It is now integrated with GPM (Gigabyte POD Manager), the company’s proprietary infrastructure and workflow management platform, which enhances operational efficiency, streamlines management, and optimizes resource utilization across large-scale AI environments.
This year will also see the debut of the GIGAPOD Direct Liquid Cooling (DLC) variant, incorporating Gigabyte’s G4L3 series servers and engineered for next-gen chips with TDPs exceeding 1,000W. The DLC solution will be demonstrated in a 4+1 rack configuration in partnership with Kenmec, Vertiv, and nVent, featuring integrated cooling, power distribution, and network architecture. To help customers deploy faster and smarter, Gigabyte offers end-to-end consulting services, including planning, deployment, and system validation, accelerating the path from concept to operation.
Built for Deployment: From Super Compute Module to Open Compute and Custom Workloads
As AI adoption shifts from training to deployment, Gigabyte’s flexible system design and architecture ensure seamless transition and expansion. Gigabyte presents the cutting-edge NVIDIA GB300 NVL72, a fully liquid-cooled, rack-scale design that unifies 72 NVIDIA Blackwell Ultra GPUs and 36 Arm-based NVIDIA Grace CPUs in a single platform optimized for test-time scaling inference. Also shown at the booth are two OCP-compliant server racks: an 8OU AI system with NVIDIA HGX B200 integrated with Xeon processors, and an ORV3 CPU-based storage rack with a JBOD design to maximize density and throughput.
The company also exhibits a diverse range of modular servers, from high-performance GPU systems to storage-optimized platforms, to address different AI workloads:
- Accelerated Compute: Air- and liquid-cooled servers for the latest AMD Instinct MI325X, Intel Gaudi 3, and NVIDIA HGX B300 GPU platforms, optimized for GPU-to-GPU interconnects
- CXL Technology: CXL-enabled systems unlock shared memory pools across CPUs for real-time AI inference
- High-density Compute and Storage: Multi-node servers packed with high-core count CPUs and NVMe/E1.S storage, developed in collaboration with Solidigm, Adata, Kioxia, and Seagate
- Cloud and Edge Platforms: Blade and node solutions optimized for power, thermal efficiency, and workload diversity – for hyperscalers and managed service providers
Bringing AI to the Edge – and to Everyone
Extending AI to real-world applications, Gigabyte introduces a new generation of embedded systems and mini PCs that bring compute closer to where data is generated.
- Jetson-Powered Embedded Systems: Featuring NVIDIA Jetson Orin, these rugged platforms power real-time edge AI in industrial automation, robotics, and machine vision.
- BRIX Mini PCs: Compact yet powerful, the latest BRIX systems include onboard NPUs and support Microsoft Copilot+ and Adobe AI tools, perfect for lightweight AI inference at the edge.
Expanding its leadership from cloud to edge, the company delivers on-premises AI acceleration with advanced Z890/X870 motherboards and GeForce RTX 50 and Radeon RX 9000 Series graphics cards. The AI TOP local AI computing solution simplifies complex AI workflows through memory offloading and multi-node clustering. This AI innovation extends throughout the consumer lineup – from Microsoft-certified Copilot+ AI PCs and gaming powerhouses to high-refresh OLED monitors. On laptops, the exclusive ‘Press and Speak’ GIMATE AI agent enables intuitive hardware control, enhancing both productivity and everyday AI experiences.
The company invites everyone to explore the AI Forward era, defined by scalable architecture, precision engineering, and a commitment to accelerating progress.
Resource:
Gigabyte at Computex 2025: Connect the Dots from Data to Inspiration