Watch Out, Intel: Computation Defined Storage Has Arrived
By Jon Toigo on Symbolic IO Iris
This is a Press Release edited by StorageNewsletter.com on February 28, 2017, at 2:37 pm. This article was published on the blog of Jon Toigo, business information technology advisor and author, on February 23, 2017.
In a few hours, there will be a crescendo of noise around, of all things, a hardware platform. Yup, in these days of disdain for all commodity hardware and widespread embrace of software-defined everything, a major hardware event is about to happen.
The evangelists for the new tech are three faces that have been around the storage industry for about 30 years: Brian Ignomirello, CEO and founder of Symbolic IO Corporation, Rob Peglar, Symbolic’s SVP and CTO, and Steve Sicola, adviser and board member of the company. Together, they are introducing an extraordinary advance in server and storage technology that could well change everything in the fields of HPC, silicon storage and hyper-converged infrastructure. They call their innovation ‘Iris.’
Iris stands for Intensified RAM Intelligent Server and it is trademarked for good reason.
Under the hood, there is so much IP that I had to sign a pile of NDAs just to get an advance look when I flew to Symbolic IO's HQ, in what used to be Bell Labs, last week. Fortunately, you don't need the non-disclosure because, as of midnight tonight, Iris is going to get a lot of exposure from the usual news outlets and analyst houses. (It goes into general availability next month.)
Why?
Simply put, Iris changes the game on so much of what we take for granted today in computer design, server architecture and storage operations. Collectively, the innovations in Iris, which have been in development since before the company’s formal founding in 2012, stick a hot poker in the eye of Intel, NVMe, and the whole HCI crowd.
With the introduction of Iris, it is as though server and storage technology just went through what Gail Sheehy called a 'passage' or what Erikson, Piaget and Kohlberg termed a 'stage of psychosocial development.' Just as healthy humans move through stages in life, usually signaled by a crisis, in which they reconsider past assumptions, discarding those acquired from parents, peers and society that no longer seem relevant and embracing new truths and directions for the future, so it is with Iris and the tech industry.
The crisis is real. Things are in disarray for tech consumers and vendors alike. We are creating data much faster than we can create the capacity to store it with current technology. We want to be able to share and collaborate using data, but the latencies of reads, writes and copies are getting in the way, and hypervisor virtualization has stressed out the I/O bus. We grasp at straws, allowing Intel to define NVMe as a de facto standard because vendors want to push silicon into data centers tomorrow, and because relying on each flash storage maker to define its own device drivers and controller logic was delaying adoption, compromising vendor profitability and exposing the whole silicon storage market to rampant balkanization.
Iris is what happens when the crisis above forces good engineers to question old assumptions and to discard those that no longer apply.
For example:
- Why are we using simplistic common binary to store (logically and physically) bits on storage media? Why not use a more elastic and robust algorithm, using fractals for example, to store more data in the same amount of space? That is analogous to the way data is stored using DNA, which packs far more content into a much smaller space.
- Why are we pushing folks to deploy flash memory on a PCIe bus and calling that a ‘huge improvement’ over installing flash behind a PCIe bus-attached SAS/SATA controller? While doing so yields a performance improvement, isn’t that the same dog, with different fleas? Why not put storage directly in the memory channel instead?
- Why do we continue to use cumbersome and self-destructive file systems that overwrite the last valid copy of data with every new save, a reflection of a time when storage cost several hundred thousand dollars per gigabyte? Why not use a richer recording algorithm that expedites the first write, then records change data for subsequent versions in a space-optimized manner (a toy sketch of this idea appears after this list)?
- And in these days of virtual servers and hypervisor computing, why don’t we abandon silos of compute and storage created by proprietary hypervisors and containers in favor of a universal, open workload virtualization platform that will run any VM and store any data?
- And finally, why pretend that flash is as good or as cheap as DRAM for writing data? Why not deliver write performance at DDR4 speeds (around 68GB/s) instead of PCIe Gen 3 throughput speeds (around 4.8GB/s)?
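To put rough numbers behind that last question, the sketch below shows where figures of that order come from. It is back-of-the-envelope arithmetic only, not a benchmark: the quad-channel DDR4-2133 configuration and the four-lane PCIe 3.0 link are my own assumptions, chosen simply to illustrate the gap (the exact PCIe figure depends on lane count and protocol overhead).

```python
# Back-of-the-envelope peak-bandwidth comparison (theoretical, not measured).
# Assumptions: quad-channel DDR4-2133 with a 64-bit bus per channel,
# versus a PCIe 3.0 x4 link with 128b/130b line encoding.

DDR4_TRANSFERS_PER_SEC = 2133e6   # DDR4-2133
BYTES_PER_TRANSFER = 8            # 64-bit channel width
CHANNELS = 4

PCIE3_RAW_RATE = 8e9              # 8 GT/s per lane
PCIE3_ENCODING = 128 / 130        # 128b/130b encoding overhead
LANES = 4

ddr4_gb_s = DDR4_TRANSFERS_PER_SEC * BYTES_PER_TRANSFER * CHANNELS / 1e9
pcie_gb_s = PCIE3_RAW_RATE / 8 * PCIE3_ENCODING * LANES / 1e9

print(f"DDR4-2133, 4 channels: ~{ddr4_gb_s:.1f} GB/s")   # ~68.3 GB/s
print(f"PCIe 3.0 x4:           ~{pcie_gb_s:.2f} GB/s")   # ~3.94 GB/s
print(f"ratio:                 ~{ddr4_gb_s / pcie_gb_s:.0f}x")
```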
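The file-system question a few bullets up also lends itself to a small illustration. Below is a deliberately minimal Python sketch of the general idea of a full first write followed by space-optimized change records for later versions: the base copy is never overwritten, and each subsequent save stores only the byte ranges that differ. The class name, the delta format and every other detail here are illustrative assumptions of mine, not how Iris or its OS actually records data.

```python
# Minimal illustration of "full first write, then space-optimized change
# records": later versions are stored as (offset, new_bytes) deltas against
# the base copy instead of overwriting it. Toy assumptions throughout;
# this is not Symbolic IO's design.

class VersionedBlob:
    def __init__(self, base: bytes):
        self.base = base      # first write: stored in full, never overwritten
        self.deltas = []      # one (length, patches) record per saved version

    def save(self, new: bytes):
        """Record only the byte ranges that changed relative to the base copy."""
        patches, i = [], 0
        while i < len(new):
            if i >= len(self.base) or new[i] != self.base[i]:
                j = i
                while j < len(new) and (j >= len(self.base) or new[j] != self.base[j]):
                    j += 1
                patches.append((i, new[i:j]))   # changed range only
                i = j
            else:
                i += 1
        self.deltas.append((len(new), patches))

    def load(self, version: int) -> bytes:
        """Rebuild any version: the base copy plus that version's patches."""
        if version == 0:
            return self.base
        length, patches = self.deltas[version - 1]
        data = bytearray(self.base[:length].ljust(length, b"\0"))
        for offset, chunk in patches:
            data[offset:offset + len(chunk)] = chunk
        return bytes(data)
```

For example, saving a document that changed one sentence would add a patch of a few dozen bytes rather than a second full copy, which is the space-versus-history trade-off the bullet above is getting at.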
Ladies and gentlemen, welcome to Iris. Those who read this blog regularly know that I am as critical of proprietary hardware as the next guy and have welcomed the concept, if not always the implementation, of software-defined storage as a hedge against vendor greed. But, from where I am standing, this 'Computation Defined Storage' idea from Symbolic IO has so much going for it, I can't help but find myself enamored with the sheer computer science of it.
They had me at I/O. But they are holding my attention with the many other innovations they have put into the kit, including a remarkable new DRAM/3D NAND hybrid storage target, a rich open hypervisor, an OS that changes the game with respect to data encoding and data placement, and a really cool technology for data protection via replication called BLINK.