Leander Yu, 49, born in Taiwan, is CEO of Graid Technology, a Santa Clara, CA-based company he co-founded in 2019. Before that, he was VP at Silicon Motion for a little more than 2 years following its acquisition of Bigtera, a developer of an SDS solution based on Ceph. He also spent some time at EZ Cloud Tech and TCloud Computing. As an engineer, he likes new technologies, plays with new tech gadgets, and thinks about what's missing or how to make them better and cooler.
StorageNewsletter: Graid is an American company with R&D in Taiwan, founded in 2020. Could you share with us some figures and information regarding your financial trajectory and employees?
Leander Yu: Graid now has almost 60 employees in the USA, EU, and Taiwan. We are still growing to meet strong market demand, especially in R&D. We have more than 5 active OEM customers and more than 40 distributors and channel partners selling our product in the USA, EMEA, Korea, Japan, Australia, etc.
I see you have raised $18 million so far. Any plans to raise a new round? It appears a bit low for a company playing in your domain.
We are thrilled to announce that we are in the final stages of closing a successful A+ funding round this month, raising $7.5 million and bringing our total funding to $25 million to date. We stand out from other players in our domain by being a pure software company rather than a hardware-based one. Unlike our hardware-focused competitors, we are highly capital efficient, as we do not face the same resource-intensive demands of hardware design, manufacturing, and inventory management. Our R&D center in Taiwan adds another advantage to our operations. If you were to compare our company to hardware-based companies in terms of bringing their first product to market, you would be amazed at the efficiency of Graid.
What are the backgrounds of you and your co-founders?
The co-founders, including myself, have strong backgrounds in the SDS industry. In fact, I had previously founded an SDS company that was later acquired by a leading SSD controller company. It was during this experience that we realized the limitations of relying solely on software for driving SSD performance. Looking at the market, we didn't see any viable solutions emerging from the major RAID players in the industry. This realization, combined with our domain knowledge in both SDS and SSD technology, led us to form our current venture. We recognized that we had the ideal team to address this problem by combining our expertise in software-defined solutions with our understanding of SSD technology.
What are the roots of the project? What was the trigger that invited you and your peers to start the company?
The trigger for the development of SupremeRAID was the ever-growing proliferation of NVMe in the data center. Companies are investing in NVMe for performance reasons, and we quickly realized that traditional methods of protecting the data stored on NVMe cause serious I/O bottlenecks. So, we set out to deliver a data protection product that not only gives CIOs and IT managers peace of mind that their valuable data is protected, but also eliminates the I/O bottlenecks, allowing access to the full performance capabilities of the NVMe drives – ultimately providing the ROI that customers are looking for from their IT investments.
We understand you deliver a new RAID model based on an off-load CPU PCIe board running a GPU supporting NVMe SSDs, could you detail your product line?
Our product line is elegantly simple. The SupremeRAID SR-1000 is a single-slot GPU, perfect for 1U servers with limited PCIe slots. The SR-1010 is a double-width card perfect for 2U and larger server platforms. Both offerings support PCIe Gen 3, 4, and 5 servers, run the identical code set, and support RAID-0, -1, -5, -6, and -10 on all major Linux distributions and Windows Server. Both offerings also support an HA option for failover in the event of a card failure.
Ok but how does it work, both in terms of write and read operations and RAID mechanisms?
The unique thing about how SupremeRAID is implemented is that we stay out of the data path for both read and write operations. Being out of the data path allows access to the full performance of the NVMe drives, since the I/O does not have to pass through our card and the only potential limitation is the PCIe bus. Additionally, SupremeRAID does not consume CPU cycles, as the entire RAID stack runs on the GPU. What better calculation engine is there than a GPU?
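To make the "RAID stack as calculation" point concrete, here is a minimal sketch of the parity math behind RAID-5 in plain Python. This is illustrative only: SupremeRAID performs these computations on the GPU at massive scale (and RAID-6 uses more elaborate Reed-Solomon coding), but the core idea of computing and using parity is the same.

```python
# Illustrative RAID-5 parity math (not Graid's implementation;
# their RAID engine runs these computations on the GPU).
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def raid5_parity(data_chunks):
    # Parity for one stripe is the XOR of all its data chunks.
    return xor_blocks(data_chunks)

def raid5_rebuild(surviving_chunks, parity):
    # A single lost chunk is the XOR of the parity with the survivors.
    return xor_blocks(surviving_chunks + [parity])

# Example stripe: 3 data chunks + 1 parity chunk.
chunks = [b"\x01\x02", b"\x10\x20", b"\x0f\xf0"]
parity = raid5_parity(chunks)

# Simulate losing the second drive and rebuilding its chunk.
rebuilt = raid5_rebuild([chunks[0], chunks[2]], parity)
assert rebuilt == chunks[1]  # data recovered
```

On a CPU this XOR work competes with application threads for cycles; offloading it to a massively parallel processor is what keeps the host CPU free.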
What is the performance of such an approach, and what other benefits can adopters expect?
As shown in our recent white paper, traditional hardware or software RAID data protection bottlenecks the valuable performance capabilities of NVMe SSDs. The test results demonstrate that with SupremeRAID, customers can realize the full benefits of PCIe Gen 5 and enterprise-grade data protection without sacrificing SSD performance.
The performance numbers measured for the different workload types are shown in the chart below. All scenarios were RAID-5.
| I/O type (8 jobs, 64 depth) | Theoretical raw RAID performance | SupremeRAID |
|---|---|---|
| Sequential read 128KB | | |
| Sequential write 128KB | | |
| Random read 4KB | 16 million IO/s | 16 million IO/s |
| Random write 4KB | 1.9 million IO/s | 1.9 million IO/s |
The hardware selected for benchmarking consists of commercially available components: one ASUS RS520A-E12-RS24U server, one AMD EPYC 9334 32-core processor, twelve Micron MTC40F2046S1RC48BA1 DRAM modules, and sixteen Kioxia CM7 SSDs.
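For readers who want to run a comparable measurement, a workload like the 4KB random-read case above can be expressed as an fio job file along these lines. This is a sketch, not the exact configuration used in the white paper; the device path and runtime are assumptions to adjust for your system.

```ini
; 4KB random read, 8 jobs, queue depth 64 - matching the chart's workload shape.
; filename should point at the RAID device under test (path is an assumption).
[global]
ioengine=libaio
direct=1
bs=4k
iodepth=64
numjobs=8
runtime=60
time_based
group_reporting

[randread]
rw=randread
filename=/dev/nvme0n1
```

Run it with `fio randread.fio` and read the aggregate IOPS from the group report.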
How do you see the competition?
While there are certainly other RAID providers in the marketplace, given the way that we address RAID we don’t believe that we have any competition, if I can be so bold! No one can do what Graid does and even if someone were to try starting today, we have a 3-year head start in the market.
What are your unique differentiators, in other words why do users prefer Graid cards vs. other solutions?
There are several unique differentiators that users of SupremeRAID tell us they enjoy when using our product. In no particular order, these are:
- When SupremeRAID is deployed, there is less drag on their server CPU than other data protection methods.
- With SupremeRAID, users enjoy absolutely stunning I/O performance while still protecting their data.
- SupremeRAID is deployed on a COTS (commercial off-the-shelf) platform.
- SupremeRAID is easy to deploy and maintain.
- SupremeRAID is highly scalable, supporting up to 32 NVMe drives in a single server (we'll be adding more to that in the near future!).
- SupremeRAID supports NVMe-oF out of the box.
- Adding new features or OS support is easy with SupremeRAID – we're future-ready!
- Users finally have a way to get a full ROI on their investment in NVMe infrastructure.
What about your cloud strategy? Any interest from hyperscalers?
SupremeRAID is ideally positioned in the "bare metal" marketplace, and we are currently in discussions with several hyperscalers in both the USA and EMEA. These discussions include some active PoCs as well. In the meantime, we are extending this technology into a decentralized clustering RAID architecture for datacenters. This will be a complementary product line to make SupremeRAID the preferred storage infrastructure for hyperscalers.
How do you sell your products? It appears to be a perfect fit for OEMs. Any specific partners to mention?
Our preferred route to market is through OEMs and tiered distribution. In a very short time, we have signed 5 major distribution agreements, 40+ reseller agreements, and one very large OEM agreement with Supermicro. We are excited about the Supermicro partnership, as they have a keen focus on NVMe platforms, and utilizing SupremeRAID to protect their customers' data gives them a competitive advantage.
What is the associated pricing model? Any MSRP price you can mention?
Pricing for SupremeRAID can be obtained through one of our OEM, distribution, or reseller partners. We are priced very competitively in the marketplace.
What are the next steps for Graid, both as a company and in terms of products? What can we expect for the rest of 2023? You told us a few months ago that you plan to release erasure coding.
Graid has a robust short-term and long-term roadmap. We already support PCIe Gen 5 infrastructure seamlessly. Erasure coding is in our near-term view, as are ESXi support, K8s, and support for more physical drives on a single card – along with the clustering design I mentioned earlier. The future is very bright for us from a product perspective. You can expect to see more product lines coming out this year.