Wiwynn Delivers Best MLPerf Training v5.1 Llama 2 70B LoRA Results at YTL Malaysia Data Center
Best verified results on Llama 2 70B LoRA using 1x and 8x Nvidia GB200 NVL72 systems, production-deployed to power APAC AI training at YTL
This is a Press Release edited by StorageNewsletter.com on November 27, 2025, at 2:00 pm

Wiwynn announced best results in the MLPerf Training v5.1 Llama 2 70B LoRA benchmark (Closed division) [1], earning best performance on both 1x and 8x Nvidia GB200 NVL72 configurations. The submissions were executed on production systems already deployed by YTL AI Cloud, spanning a 1-rack Nvidia GB200 NVL72 (with 72 Nvidia Blackwell GPUs) and an 8-rack Nvidia GB200 NVL72 integrating 576 GPUs, demonstrating leadership from single-rack to multi-rack scale.
The verified MLPerf scores highlight Wiwynn’s strengths in system design, manufacturing, liquid cooling, multi-rack integration, and hardware/software co-optimization, combined with YTL’s excellence in AI infrastructure integration and operations. Together, the partners demonstrate how close collaboration between system manufacturers and data center operators can deliver production-grade, benchmark-verified AI training performance.
“Wiwynn designs for workload optimization and real-world deployment,” said William Lin, president and CEO, Wiwynn. “Our collaboration with YTL spans from L11 system integration to L12, delivering expanded infrastructure and software integration. Our solution built on Nvidia GB200 NVL72 infrastructure at YTL shows how system engineering and software tuning unlock the full potential of large-scale GPU clusters.”
“At YTL AI Cloud, we are building the region’s most advanced AI infrastructure to serve as a premium hub for AI training and inferencing across APAC,” said Philip Lin, CEO, YTL AI Cloud. “Our collaboration with Wiwynn demonstrates how the right facility design, infrastructure readiness, and system partnership can deliver world-class AI capability at production scale.”
“Congratulations to Wiwynn on their strong achievements in MLPerf Training v5.1. We appreciate their active, transparent participation and knowledge sharing, which strengthens the ecosystem and supports our mission of open collaboration to improve AI systems’ accuracy, safety, speed, and efficiency,” said David Kanter, founder and head of MLPerf, MLCommons.
YTL AI Cloud’s large-scale clusters are purpose-built with the most advanced GPUs, liquid-cooled high-density racks, redundant power architecture, and low-latency interconnects. As a strategic AI hub for the APAC region, YTL AI Cloud in Johor, Malaysia, enables global and regional customers to deploy, train, and scale frontier AI models such as Llama 2 70B efficiently and sustainably.
By combining Wiwynn’s cutting-edge system integration with YTL’s robust AI data center foundation, the collaboration establishes a new standard for high-performance, scalable, and sustainable AI training infrastructure in the region.
Footnote: [1] MLPerf Training v5.1 Closed division, Llama 2 70B LoRA; systems: 1x Nvidia GB200 NVL72 (72 GPUs) and 8x Nvidia GB200 NVL72 (576 GPUs). Official results verified by MLCommons Association. Retrieved from the MLCommons results site on Nov. 12, 2025. The MLPerf name and logo are registered and unregistered trademarks of MLCommons Association in the United States and other countries. All rights reserved. Unauthorized use strictly prohibited.