When Market Realism Forces Storage Players to Adopt Tactical Solutions and Approaches
Thanks to pNFS for some of them
By Philippe Nicolas | November 19, 2025 at 2:03 pm

AI has changed a lot of things for all of us. It is trivial to say that today, but the reality is that we are living through a real tsunami, at least those of us able to see it, live with it, and participate by surfing this big wave.

For storage, it shakes established positions: some players are more visionary than others, and some continue to ignore reality even though they know they have to do something… And when you scratch the surface, you see players with no real interest in HPC, and HPC storage in particular, left completely naked when AI landed on the planet. As AI is a superset of HPC, if you ignore HPC it is immediately tough to fill the gap and be considered a serious player in the domain. AI has put high pressure on the storage stack and requires high I/O performance to sustain service, and feeding GPUs becomes paramount.
HPC storage players appear to be better placed in the AI storage battle, having addressed high I/O requirements for many years. Among them, DDN clearly occupies a special place, alongside Weka as a pure player with a modern parallel file system and Vast Data, which finally proved that file storage performance limitations or bottlenecks don't come from NFS, thanks to its DASE architecture. HPE has an interesting DNA thanks to its several HPC acquisitions, Cray and SGI, indirect ones such as ClusterStor, and, a long time ago, Convex. For some deals HPE even resells Weka or relies on Vast Data, but Antonio Neri, president and CEO, HPE, recently announced the strategic decision to stop working with Weka, and also with Qumulo, in the file storage segment. IBM has strong roots with Storage Scale, renamed several times, illustrating that modern workloads require modern solutions and designs.
This point is key as even DDN, a reference in HPC storage, reinvents itself with Infinia, confirming this reality.
The rest of the storage players, NetApp, Pure Storage and the Dell EMC storage business, which never seriously developed HPC storage, had to make decisions and answer tactically as well; the absence of solutions was visible. They chose to extend their product offerings with pNFS: Pure Storage announced FlashBlade//EXA at its Accelerate show before the summer, NetApp unveiled AFX during its recent Insight conference, and Dell did the same for PowerScale at this SC25, all 3 with some special configurations and limitations. We are still waiting for the full Project Lightning introduced at Dell Technologies World 2024, 18 months ago. But at least these 3 players now have something to say. In this pNFS game, we also find Hammerspace and Peak:AIO, both with a comprehensive model. Hammerspace also initiated the Open Flash Platform initiative, and we invite our readers to read what we wrote about it a few weeks ago.
Vdura, the new name of Panasas, a real pioneer in HPC storage, took time to jump on the AI train. Other players exist as well, such as ThinkParQ or Quobyte, both from Germany, with interesting “classic” parallel file storage solutions, but they’re super small. Lenovo is absent, having chosen to partner with DDN, IBM, ThinkParQ, Vast Data and Weka, so finally almost everyone…
All this confirms once again that speed is mandatory for AI, and parallelism represents a way to deliver it. But it appears that file striping at the client level, a key feature to sustain high throughput at scale, is not offered by all players. This criterion is, in the end, the real definition of a parallel file system. Conversely, consider pNFS without file striping: in that case, our feeling is that it acts as a file I/O orchestrator of multiple 1-to-1 NFS client–NFS server relations at file granularity. For each file, an operation is established between an NFS client and an NFS server; every file could touch a different server, but the entire file is sent and stored on only one server. Of course distribution policies are in place, but still at the file level. There is no parallelism within a file, only between files, and the performance of a single file transfer is delivered by 1 NFS server. If you now aggregate file transfer numbers across several NFS servers, things look good globally, but the key question is the performance received for each individual operation. In other words, it comes down to sending a 1GB file to 1 NFS server versus sending 10MB chunks of that same 1GB file to 10 or 20 NFS servers in parallel. Who wins? Which is faster? And you can imagine the pain when you have multi-TB files…
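To make that comparison concrete, here is a minimal back-of-envelope sketch. It is not tied to any vendor implementation; the per-server throughput, stripe size and server count are illustrative assumptions.

```python
# Back-of-envelope model: whole-file transfer vs. client-side striping.
# All figures are illustrative assumptions, not vendor measurements.
import math

FILE_SIZE_GB = 1.0        # the 1GB file from the example above
PER_SERVER_GBPS = 3.0     # assumed usable throughput of one NFS data server, GB/s
STRIPE_SIZE_MB = 10       # assumed stripe/chunk size written by the client
SERVERS = 20              # assumed number of pNFS data servers

# Case 1: file-granular placement (no striping) - the whole file lands on 1 server.
t_whole_file = FILE_SIZE_GB / PER_SERVER_GBPS

# Case 2: client-side striping - 10MB chunks are spread round-robin over all
# servers, so each server only has to absorb its share of the file.
chunks = math.ceil(FILE_SIZE_GB * 1024 / STRIPE_SIZE_MB)
chunks_per_server = math.ceil(chunks / SERVERS)
t_striped = (chunks_per_server * STRIPE_SIZE_MB / 1024) / PER_SERVER_GBPS

print(f"Whole file to 1 server : {t_whole_file:.3f} s")
print(f"Striped over {SERVERS} servers: {t_striped:.3f} s "
      f"(~{t_whole_file / t_striped:.0f}x faster, before overheads)")
```

Real deployments add client NIC limits, metadata and layout traffic, and protocol overheads, but the gap only widens as files grow toward multiple TBs, which is exactly the point above.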
On top of this, we can even add file-level protection computed at the client level, with an erasure coding scheme that distributes data and “parity” chunks across all back-end data servers, here pNFS data servers, normally just “classic” NFS entities.
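As a toy illustration of that idea, and purely a sketch rather than how any particular pNFS layout or product implements it, the snippet below splits a buffer into k data chunks, computes a single XOR parity chunk on the client, and assigns each chunk of the stripe to a different hypothetical data server; real systems would use richer k+m erasure codes.

```python
# Toy client-side protection: k data chunks + 1 XOR parity chunk per stripe,
# each chunk placed on a different back-end data server.
# The k+1 XOR scheme and server names are assumptions for illustration only.

K = 4                                        # data chunks per stripe (assumed)
SERVERS = [f"ds{i}" for i in range(K + 1)]   # hypothetical pNFS data servers

def xor_bytes(chunks):
    """XOR equally sized byte strings together."""
    out = bytearray(len(chunks[0]))
    for c in chunks:
        for i, b in enumerate(c):
            out[i] ^= b
    return bytes(out)

def make_stripe(buf: bytes, k: int = K):
    """Split buf into k equal data chunks (zero-padded) plus one parity chunk."""
    chunk_len = -(-len(buf) // k)            # ceiling division
    padded = buf.ljust(chunk_len * k, b"\x00")
    data = [padded[i * chunk_len:(i + 1) * chunk_len] for i in range(k)]
    return data + [xor_bytes(data)]

def place(chunks, servers=SERVERS):
    """Map each chunk of a stripe to a distinct data server."""
    return {servers[i]: c for i, c in enumerate(chunks)}

# Usage: losing any single chunk (one server) is recoverable by XORing the rest.
stripe = make_stripe(b"example payload " * 100)
layout = place(stripe)                # {'ds0': ..., 'ds1': ..., ..., 'ds4': parity}
rebuilt = xor_bytes([c for i, c in enumerate(stripe) if i != 2])
assert rebuilt == stripe[2]
```

The XOR parity used here is the simplest possible scheme; the principle is the same with wider k+m codes spread across more data servers, at the cost of more client-side computation.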
So we see plenty of work remains to be done, and it appears that some of these iterations will come from agile players…