IBM Storage Scale 6.0.0 Introduces Data Acceleration Tier
Delivering extreme IOPS and ultra-low latency for AI inferencing
By Francis Pelletier | November 7, 2025
Blog post written by Mike Kieran, AI and HPC storage, IBM Corp., published Oct. 22, 2025
IBM Corp. announced IBM Storage Scale 6.0.0, a major evolution in our global data platform, purpose-built to meet the demands of AI-driven enterprises.
Scale System 6000
At the heart of this release is support for the company’s Storage Scale System data acceleration tier (DAT), a high-performance NVMe-oF-based storage layer that works with the Scale System 6000 to deliver extreme IOPS and ultra-low latency for real-time AI inferencing workloads.
AI inference is only as effective as the data pipeline that feeds it. If storage cannot sustain the throughput and low latency required, organizations risk delayed fraud detection, missed insights in healthcare, and unreliable decision-making in other critical domains. These challenges typically arise from latency spikes, dropped data during ingestion, or inconsistent performance in shared environments. By ensuring predictable, high-performance access to data, enterprises can maintain trust in AI outcomes and accelerate time-to-insight.
The Storage Scale System data acceleration tier is designed to help organizations tackle the I/O bottlenecks that can constrain real-time AI inferencing by delivering fast, stable, and predictable performance.
With IBM Storage Scale System 6000 NVMe systems serving more than 20 clients configured with the centralized DAT, you can achieve up to 28 million IOPS without Protection Information (PI) and up to 25 million IOPS with PI. (1) A Storage Scale System 6000 NVMe system configured with the centralized DAT achieved a read throughput of 164 GB/s. (2) This level of performance is ideal for AI inferencing use cases where delays or dropped data carry significant business costs.
In a decentralized DAT deployment, the performance data replica resides on client-local storage, so performance is determined by the disk configuration and compute capabilities of the client nodes. A Storage Scale System 6000 NVMe system configured with the decentralized DAT achieved up to 1.4 million IOPS per client node. (3)
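For context on what a 4 KiB random-read workload looks like in practice (the access pattern behind footnotes 1 and 3), here is a minimal single-threaded Python sketch. It is not IBM's benchmark tooling; the file path is hypothetical, and real measurements would use direct I/O and many threads across many client nodes:

```python
# Minimal 4 KiB random-read microbenchmark sketch (not IBM's test harness).
# Illustrates the access pattern behind the cited IOPS figures only.
import os
import random
import time

PATH = "/gpfs/fs1/testfile"   # hypothetical file on a Scale file system
BLOCK = 4096                  # 4 KiB records, matching the cited workload
READS = 100_000

fd = os.open(PATH, os.O_RDONLY)
blocks = os.fstat(fd).st_size // BLOCK

start = time.perf_counter()
for _ in range(READS):
    # Read one 4 KiB record at a random aligned offset
    os.pread(fd, BLOCK, random.randrange(blocks) * BLOCK)
elapsed = time.perf_counter() - start
os.close(fd)

print(f"{READS / elapsed:,.0f} reads/s (single thread, page cache not bypassed)")
```

The cited multi-million IOPS figures are aggregates across dozens of clients and many threads; a sketch like this only illustrates the pattern, not the scale.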
The data acceleration tier is optimized for the small, random I/O patterns of AI workloads, eliminating queuing and instability under load. By integrating DAT into their infrastructure, customers gain a scalable, AI-optimized storage foundation that enables faster decision-making, higher model throughput, and more reliable AI services, whether deployed on-premises or in hybrid cloud environments.
Storage Scale 6.0.0 is engineered to accelerate AI on-prem and in the cloud, with new capabilities that demonstrate IBM’s ability to deliver certified, high-performance solutions for GPU-accelerated workloads:
- Content Aware Storage (CAS): Adds async notifications to enable faster, event-driven data ingestion into AI inferencing workflows (see the event-consumer sketch after this list);
- Expanded NVIDIA integration: Includes CNSA support for GPUDirect Storage, enhanced Base Command Manager support, and NVIDIA Nsight integration (a GPUDirect Storage read sketch also follows);
- Additional NVIDIA certifications and reference architectures: Aligned with NVIDIA BasePOD/SuperPOD and Grace Blackwell platforms, ensuring performance and compatibility.
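As a rough illustration of event-driven ingestion, here is a minimal consumer sketch. It assumes, as with Storage Scale's existing clustered watch folder feature, that file events are published to a Kafka topic; the topic name and event fields shown are hypothetical, not the documented CAS schema:

```python
# Minimal event-driven ingestion sketch. Assumes file events arrive on a
# Kafka topic, as with Storage Scale clustered watch folders; the topic
# name and JSON event fields below are hypothetical.
import json
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "scale-file-events",                        # hypothetical topic
    bootstrap_servers="broker.example.com:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for record in consumer:
    # Hypothetical fields: react only to files newly closed after writing
    if record.value.get("event") == "IN_CLOSE_WRITE":
        path = record.value["path"]
        print(f"enqueue for inference: {path}")
        # hand off to the inference pipeline here
```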
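And here is a sketch of a GPUDirect Storage read issued from Python via RAPIDS KvikIO, which wraps NVIDIA's cuFile API. The file path on the CNSA-managed Scale mount is hypothetical, and KvikIO transparently falls back to a bounce-buffer path on systems where GDS is unavailable:

```python
# Sketch of a GPUDirect Storage read using RAPIDS KvikIO (cuFile wrapper).
# The file path on the Scale mount is hypothetical.
import cupy
import kvikio

PATH = "/gpfs/fs1/model.bin"                          # hypothetical model file
buf = cupy.empty(16 * 1024 * 1024, dtype=cupy.uint8)  # 16 MiB GPU buffer

f = kvikio.CuFile(PATH, "r")
nbytes = f.read(buf)   # DMA from storage directly into GPU memory
f.close()

print(f"read {nbytes} bytes into GPU memory")
```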
Storage Scale 6.0.0 also introduces improvements designed to reduce operational overhead and accelerate time-to-value for enterprise IT teams:
- One-button GUI upgrades, enhanced prechecks, and unified protocol deployment simplify operations;
- API-driven control plane enhancements enable automation of advanced features such as quotas (see the REST sketch after this list);
- Improved problem determination diagnostics for expels and snapshots streamline root cause analysis and remediation.
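As an illustration of that API-driven automation, the sketch below sets a fileset quota through the Scale management REST API. The endpoint shape follows the existing /scalemgmt/v2 API, but the host, credentials, and payload fields here should be treated as illustrative; consult the 6.0.0 API reference for the exact schema:

```python
# Illustrative quota automation against the Scale management REST API.
# Host, credentials, fileset name, and payload fields are hypothetical.
import requests

BASE = "https://gui.example.com:443/scalemgmt/v2"  # hypothetical GUI node
AUTH = ("admin", "password")                       # use real credentials

resp = requests.post(
    f"{BASE}/filesystems/fs1/quotas",
    auth=AUTH,
    verify=False,   # lab setting; use proper TLS certificates in production
    json={
        "operationType": "setQuota",
        "quotaType": "FILESET",
        "objectName": "inference-data",
        "blockSoftLimit": "1T",
        "blockHardLimit": "1.2T",
    },
)
resp.raise_for_status()
print(resp.json())
```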
It also adds protocol innovation to ensure seamless, high-speed data access across diverse environments:
- NFS nconnect enables high-throughput AI workloads over standard Ethernet (see the example mount entry after this list);
- SMB Multichannel supports high-speed data acquisition from Windows-based instruments.
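For reference, nconnect is a standard Linux NFS client mount option (kernel 5.3 and later) that opens multiple TCP connections to a single server. A hypothetical /etc/fstab entry against a Scale CES export might look like this (server name and paths are illustrative):

```
# NFSv4.1 mount with eight TCP connections via nconnect
ces.example.com:/gpfs/fs1  /mnt/ai-data  nfs  vers=4.1,nconnect=8,rsize=1048576,wsize=1048576  0 0
```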
With these new capabilities, Storage Scale 6.0.0 helps organizations enable AI at scale and unlock the full potential of their AI investments on-prem and in the cloud.
Talk to your IBM representative or partner today to explore how Storage Scale 6.0.0 can accelerate your AI outcomes on-prem, in the cloud, or across hybrid environments.
(1) This result is based on IBM internal testing using a 4 KiB random-read workload configuration. Performance may vary depending on system configuration, workload, and operating environment.
(2) This result is based on IBM internal testing using the gpfsperf benchmark with a 16 MiB sequential-read workload configuration. Performance may vary depending on system configuration, network environment, and workload characteristics.
(3) This result is based on IBM internal testing using a 4 KiB random-read workload configuration. In decentralized Data Acceleration Tier configurations, the performance data replica is deployed on client-local storage, and performance may vary depending on the disk configuration and compute capabilities of the client nodes.