Xinnor and the University of Utah’s SCI Institute Replace Aging PostgreSQL Backend with xiRAID
Delivering 2.65x database performance gains
This is a Press Release edited by StorageNewsletter.com on May 11, 2026 at 2:00 pm
Xinnor, developer of the data protection solution xiRAID, announced the successful deployment of xiRAID at the Scientific Computing and Imaging (SCI) Institute at the University of Utah.
The project replaced an aging single-drive PostgreSQL backend that had become a bottleneck for one of the world’s largest research data environments, cataloging over 3 billion files across several petabytes of storage.
The SCI Institute uses Starfish to manage its research storage infrastructure. As the institute’s metadata catalog grew, the PostgreSQL database running on a single NVMe drive became the limiting factor: queries were sluggish, scans were exceeding their allotted time windows, and disk capacity had been exhausted. The team needed a solution that addressed both performance and capacity simultaneously, while keeping operational overhead to a minimum.
After evaluating free alternatives including mdadm and ZFS, SCI selected xiRAID for a new server built around ten 3.84 TB enterprise NVMe drives in a RAID10 configuration.
Independent fio benchmarks run by the SCI team across four configurations tell the story clearly. Ten raw NVMe drives running in parallel established the hardware ceiling at 8.03 million read IOPS and 125GB/s. xiRAID RAID10 on a raw block device matched and slightly exceeded that ceiling, reaching 8.24 million read IOPS and 129GB/s — a result consistent with xiRAID’s lockless read path benefiting from striping across mirrors. Adding the ext4 filesystem on top, which is the actual production configuration, brought read performance to 7.71 million IOPS at 120GB/s, representing only about 6% overhead versus the raw RAID device — exceptionally low for a journaled filesystem on a fast block device. The old server, a single NVMe drive with ext4, managed 200K read IOPS and 3.1GB/s. The new production system therefore delivers roughly 39 times more read IOPS and 5 times more write IOPS at peak, while keeping latency sub-millisecond under the same high-concurrency workloads that previously caused the old server’s response times to collapse to 24 milliseconds.
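The headline ratios above follow directly from the quoted figures; a quick sanity check of the arithmetic (using the IOPS numbers exactly as reported):

```python
# Sanity check of the ratios quoted in the fio benchmark summary.
raw_raid_iops = 8.24e6   # xiRAID RAID10 on the raw block device
ext4_iops = 7.71e6       # production config: xiRAID RAID10 + ext4
old_iops = 200e3         # old server: single NVMe drive with ext4

# Filesystem overhead versus the raw RAID device: about 6%.
fs_overhead = 1 - ext4_iops / raw_raid_iops

# Read-IOPS gain of the new production system over the old server:
# roughly 39x, as stated in the release.
read_speedup = ext4_iops / old_iops

print(f"ext4 overhead: {fs_overhead:.1%}")
print(f"read IOPS speedup: {read_speedup:.1f}x")
```

Both results match the article's rounding: about 6% filesystem overhead and roughly a 39x read-IOPS gain.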
In production database testing using Starfish’s standard pgbench benchmark, the new system running PostgreSQL 17 on xiRAID recorded 15,069 transactions per second at a mean latency of 0.265 milliseconds, compared to 5,690 transactions per second at 0.703 milliseconds on the old system — a 2.65x improvement in both throughput and latency. In live operation, the system sustained approximately 76,600 metadata transactions per second across concurrent Starfish diff scans spanning multiple namespaces and billions of files.
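A pgbench run of this general shape could reproduce such a comparison. The client count, thread count, duration, and database name below are illustrative assumptions, not SCI's published test parameters:

```shell
# Hypothetical pgbench invocation; "starfish" as the database name,
# and the -c/-j/-T values, are assumptions for illustration only.

# One-time initialization of the pgbench tables (scale factor 100 assumed):
pgbench -i -s 100 starfish

# Benchmark run: 16 clients, 4 worker threads, 120 seconds,
# progress reported every 10 seconds. pgbench prints transactions
# per second and average latency at the end of the run.
pgbench -c 16 -j 4 -T 120 -P 10 starfish
```

pgbench reports both throughput (tps) and mean latency per transaction, which is how the 15,069 tps / 0.265 ms and 5,690 tps / 0.703 ms figures above would have been obtained.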
Todd Green, IT director, SCI Institute, described the experience: “Operationally, xiRAID has been a non-event in the best possible way: initialization completed in minutes not days like mdadm, every lifecycle operation we have needed is one xicli command, and Xinnor’s support was responsive and technically sharp during evaluation. We would choose xiRAID again.”
Green also outlined what drove the selection: “We had a short, focused list. A vendor we trust. A single, growable RAID volume with quick growth times. Operational simplicity. And documentation and support — Xinnor has been very responsive and available to us outside of normal business hours.”
Dmitry Livshits, CEO, Xinnor, commented: “The SCI Institute represents exactly the kind of demanding, data-intensive environment that xiRAID was built for. When a research institution managing billions of files and petabytes of data needs storage that simply gets out of the way, we are proud to deliver that. This deployment demonstrates that enterprise-grade RAID performance and operational simplicity are not a trade-off — you can have both.”
The full case study is available here.