52% of Respondents Consider Movement of Large Data Between Storage Systems, Storage Devices and Servers Significant Problem – NGD Systems/G2M Research
Need for 'intelligent storage'
This is a Press Release edited by StorageNewsletter.com on December 13, 2017 at 2:36 pm
G2M Research, an analyst firm covering the NVMe marketplace, released the results of its recent survey on the need for 'intelligent storage' for applications with large data sets.
The survey, sponsored by NGD Systems, Inc. (formerly NxGnData), polled 112 respondents from organizations involved in big data, artificial intelligence/machine learning (AI/ML), and the Internet of Things (IoT). The purpose of the study was to gauge whether the movement of large data sets across existing processing and storage architectures negatively impacts the cost and usability of the data by applications. The results show that existing compute and storage architectures adversely impact the performance and cost of these applications, and that new architectures are needed if these applications are to continue to scale in size and capabilities.
“Datasets for applications such as big data, AI/ML, and IoT continue to grow at an exponential rate,” said Mike Heumann, managing partner, G2M Research. “Our research study shows that the majority of users in these application spaces are very concerned about how this growth will impact their ability to use these applications over the next 12 months. The majority of these end-users also believe that new approaches like processing data within storage devices will be necessary to overcome existing data movement bottlenecks.”
The movement of very large data stores is increasingly critical for real-time analytics in a variety of applications. However, this data movement is not without cost or impact.
Key findings of the survey include:
- 52% of respondents believe that the movement of large data stores between storage systems, storage devices, and servers is or will be a significant problem for their organization, either today or within the next 12 months.
- 92% of respondents expect that data movement will adversely impact their organization, with 62% responding that it will impact server, networking, or storage costs, 48% saying it will impact application performance, and 29% saying it will limit the way data can be used.
- Only 21% of respondents believe that current processing/storage architectures will be able to handle the amount of data in their industry in the next 5 years.
- 64% of respondents believe that processing or preprocessing data inside storage systems/devices could help solve the data movement problem.
“As the capacity of each drive is increasing exponentially, along with the number of drives within a server, moving the data will become exponentially harder and more cumbersome,” said Nader Salessi, president and CEO, NGD. “The G2M Research survey clearly illustrates the issues that large data stores present to application architects for big data, IoT, and AI/ML, among others. In-situ processing like that of the NGD Systems Catalina 2 SSD provides a compelling alternative to moving large amounts of data between storage systems, storage devices, and servers/CPU complexes.”
One of the most promising concepts to address the storage-CPU bottleneck is the use of in-situ processing within storage devices. In-situ processing changes the deployment of a variety of applications that today require huge clusters of expensive multi-socket servers with large amounts of RAM. By reducing the amount of data that has to be moved between storage systems/devices and servers/CPUs/GPUs, in-situ processing within NVMe flash SSDs can reduce network size/complexity, CPU/GPU workload, and power consumption for applications that drive high numbers of IO/s.
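To make the data-movement argument concrete, the minimal Python sketch below contrasts the conventional model, in which every record crosses the storage-to-host path before the CPU can filter it, with a pushdown model in which the filter runs near the data and only the result returns to the host. The DeviceClient class and its pushdown_count method are purely hypothetical illustrations of the in-situ concept, not an actual NGD Systems or NVMe interface.

```python
# Conceptual sketch only: conventional host-side processing vs. a hypothetical
# in-situ (computational storage) pushdown. The DeviceClient API is invented
# for illustration and does not correspond to any real product interface.

def host_side_count(records, predicate):
    """Conventional model: every record is transferred to the host,
    then the host CPU evaluates the predicate."""
    moved_bytes = 0
    matches = 0
    for rec in records:                 # all data crosses the storage-to-host path
        moved_bytes += len(rec)
        if predicate(rec):
            matches += 1
    return matches, moved_bytes


class DeviceClient:
    """Hypothetical computational-storage device that can run a filter
    near the data and return only a small result."""
    def __init__(self, records):
        self._records = records         # data resident on the drive

    def pushdown_count(self, predicate):
        # The predicate runs on the device's embedded processor, so only
        # the result (not the raw records) moves to the host.
        matches = sum(1 for rec in self._records if predicate(rec))
        moved_bytes = 8                 # roughly the size of the returned count
        return matches, moved_bytes


if __name__ == "__main__":
    data = [b"sensor,%d" % i for i in range(100_000)]
    wanted = lambda rec: rec.endswith(b"7")

    host = host_side_count(data, wanted)
    device = DeviceClient(data).pushdown_count(wanted)
    print("host-side: matches=%d, bytes moved=%d" % host)
    print("in-situ:   matches=%d, bytes moved=%d" % device)
```

The point of the sketch is the difference in bytes moved: the host-side path transfers the full data set on every query, while the pushdown path transfers only the answer, which is the effect the survey respondents identify as relieving server, network, and storage cost pressure.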
The NGD Systems Catalina 2 NVMe SSD embodies the concept of in-situ processing and is a product designed to help close the CPU/GPU-storage gap and improve data center TCO. The company's NVMe SSDs also offer high capacities and low power consumption per terabyte.