Fastest Runtimes Recorded by SAS Labs for Grid Execution
On DDN SFA 10K-E platform with the GRIDScaler parallel file system
This is a Press Release edited by StorageNewsletter.com on September 19, 2012 at 2:58 pm

DataDirect Networks, Inc. announced that its SFA 10K-E platform with its GRIDScaler parallel file system produced the fastest runtimes recorded to date by SAS labs for a SAS Grid execution of a highly parallel model calibration workload.
SAS – DDN GRIDScaler network diagram
SAS engineers also concluded in recent testing that DDN GRIDScaler offered the most consistent and predictable performance of any shared file system tested to date with SAS Grid Computing. The environment supports applications such as SAS Drug Development, SAS Warranty Analysis and SAS Enterprise Miner for fraud detection, risk management and enterprise business optimization.
"DDN and SAS share a common mission: to help organizations extract the greatest possible value from their data," said Jean-Luc Chatelain, EVP, Strategy and Technology at DDN. "Paired together, DDN and SAS technologies unleash the incredible potential of the big data era and accelerate the speed of business."
The DDN SFA 10K-E is a unified virtual server and storage appliance featuring DDN’s In-Storage Processing technology. This virtualized environment eliminates externalized data processing systems and allows data-intensive applications to exist within the storage engine – expediting data access, minimizing latency and lowering the cost and complexity of data-intensive computing.
"The DDN SFA 10K-E platform with the DDN GRIDScaler parallel file system is an excellent choice for SAS Grid deployments," said Cheryl Doninger, senior director, Research and Development, SAS. "I/O-intensive SAS Grid workloads have demonstrated excellent performance characteristics utilizing this storage appliance. The choice of shared file system and storage is a critical component of high-performance SAS Grid deployments."
SAS Grid Computing enables organizations to create a managed, shared environment to process large volumes of data and analytic programs more efficiently, making it easier and more cost-effective to scale compute-intensive applications to greater complexity and more users.