
Panasas Joins MLCommons to Advance ML Storage Innovation

Company software architect Curtis Anderson co-chairs MLPerf Storage working group to develop ML storage benchmarks.

Panasas, Inc. announced its collaboration with MLCommons, the consortium behind MLPerf, to create industry-wide benchmarks for ML storage.


Panasas will work with MLCommons to help steer these benchmarks by establishing best practices for measuring ML storage performance, ultimately helping to develop the next generation of storage systems for AI/ML.

Panasas builds price/performance storage solutions that drive innovation across a range of computing environments. The Panasas ActiveStor portfolio features the all-NVMe ActiveStor flash system, which delivers small and random file performance and enhanced support for AI/ML projects. All of the firm's solutions are powered by the company's PanFS parallel file system, a reliable and autonomic data engine that orchestrates networked servers into a single file system serving data to clients at up to hundreds of gigabytes per second.

MLCommons, the open and global engineering consortium dedicated to making ML better, promotes widespread ML adoption and democratisation through benchmarks, large-scale public datasets, and best practices.

Panasas approached MLCommons to discuss the storage challenge in the ETL (extract, transform, and load) process and its impact on the overall performance of the ML pipeline. The discussion was timely, as MLCommons had been in the early stages of forming the MLPerf Storage working group to develop a storage benchmark that evaluates performance for ML workloads including data ingestion, training, and inference phases.
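The storage challenge described above comes down to how much of a training run's wall-clock time is spent waiting on data ingestion rather than compute. As a rough illustration only (this is not the MLPerf Storage benchmark; the load_batch and train_step functions are hypothetical stand-ins), one can instrument a training loop like so:

```python
# Minimal sketch: measure the fraction of a training loop spent on data
# ingestion versus compute. load_batch/train_step are hypothetical
# stand-ins, not real MLPerf Storage components.
import time

def load_batch():
    # stand-in for reading one batch from the storage system
    time.sleep(0.002)
    return [0.0] * 1024

def train_step(batch):
    # stand-in for accelerator compute on one batch
    time.sleep(0.001)

def io_fraction(steps=50):
    io_time = compute_time = 0.0
    for _ in range(steps):
        t0 = time.perf_counter()
        batch = load_batch()          # ingestion phase
        t1 = time.perf_counter()
        train_step(batch)             # training phase
        t2 = time.perf_counter()
        io_time += t1 - t0
        compute_time += t2 - t1
    return io_time / (io_time + compute_time)

print(f"fraction of wall time spent on ingestion: {io_fraction():.0%}")
```

A benchmark such as the one the working group is developing would standardize this kind of measurement across frameworks and accelerators, rather than leaving each practitioner to ad-hoc instrumentation.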

MLCommons invited Panasas to attend the foundational meetings, after which Curtis Anderson, software architect at Panasas, was named co-chair.

“The end goal of the MLPerf Storage working group is to create a storage benchmark for the full ML pipeline that is compatible with diverse software frameworks and hardware accelerators,” said David Kanter, founder and executive director, MLCommons. “I’d like to thank Panasas for contributing their extensive storage knowledge, and Curtis specifically for the leadership he is providing as a co-chair of this working group.”

“It is an honor to be a co-chair of the MLPerf Storage working group, and I look forward to the meaningful progress that this team will accomplish,” said Anderson. “AI/ML practitioners are doing some of the most revolutionary work today, and I am excited to help develop the benchmarks that will enable them to determine the storage systems they need to carry out their projects.”
