
Exclusive Interview With Adi Gelvan, CEO and Co-Founder, SpeeDB

Developing a next-gen storage engine for modern databases

Adi Gelvan, 49, is CEO and co-founder of Speedb, a company he co-founded in November 2020 in Israel with Hilik Yochai and Mike Dorfman, 2 other former Infinidat employees. Previously, he spent time at SQream, Spot, Infinidat, Actifio, IBM, XIV, and EMC, and he continues to advise companies such as Weka and Statehub. Passionate about sport, he tries to run 4 to 5 times a week to keep believing he's still in his 30s.

StorageNewsletter: Speedb is a young company. What was the trigger to start such an adventure?
Adi Gelvan: I was working as a CRO at storage vendor Infinidat where we had experienced metadata storage scalability issues. We were experimenting with various settings and performance tuning in the RocksDB key value storage engine, but we quickly found that when the volume of metadata reached more than 100GB, RocksDB could not keep up. My 2 co-founders, who are some of the most brilliant technologists I know, tried sharding to break down the dataset into smaller slices that are more manageable – but this proved to be a cumbersome approach that resulted in increased complexity. Metadata storage is overlooked, and most people think of it as small and insignificant, but as data volumes grow, metadata often becomes larger than the data volumes it describes. 

Furthermore, we could see how the scalability challenge was especially intractable in applications such as analytics, AI and ML – high-growth sectors where cloud-scale ramp ups were common. Yet not one vendor was addressing this very horizontal pain point. In Israel, a large unaddressed market for which there are no competitors is called a hot startup.

So far you have raised $4 million. What is the current financial situation, and do you plan to raise a new round to accelerate your market footprint? We understand that this first round was essentially to bootstrap the company and develop an MVP and a prototype, but in the end you delivered more than that.
We believe that to be successful today you need more than an MVP and a sizable TAM (total addressable market) – you need to demonstrate rapid market acknowledgement that you're onto something big. You have to show that the doggies are eating the dog food. We did that, and more. Within a year of our founding we had signed a strategic partnership with Redis, the company behind the popular open-source database, and 2 other industry leaders. We deployed Speedb with several other customers, such as XM Cyber, for whom we delivered impressive results, and we are deployed at several other accounts that we can't discuss publicly yet. That said, we do anticipate completing our Series A round in mid-2023.

You identified a special issue for databases that run on and rely on RocksDB. Could you elaborate on that? And how do users and partners address these limitations without Speedb?
As the amount of metadata increases, database performance issues become more frequent. Metadata is often managed using key-value stores such as RocksDB which use architectures that were not designed to support the scalability requirements of modern datasets. Due to its inherent weaknesses, RocksDB cannot effectively deal with ever-increasing volumes of metadata. This means no matter what, at some point RocksDB will hit a wall and users will experience slower access to the underlying media, and consequently application performance degradation.

The common solution is to add more memory and CPU resources, and limit the scale of a single database in specific environments. Another option is to split the dataset into smaller, more logical pieces (sharding), but as mentioned earlier, sharding inherently brings increased complexity and management overhead due to the need to manage multiple datasets. Either way, these practices require developers to trade off capacity, scale and performance, and to spend more time and effort on maintenance instead of innovating and delivering real business value.
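
For illustration, here is a minimal sketch of that sharding workaround in C++ against the RocksDB API: the application hashes each key to pick one of several independent database instances, and each instance then has to be opened, tuned, compacted and backed up separately (the class name and paths are hypothetical).

```cpp
// Minimal sketch of application-level sharding over several independent
// RocksDB instances (shard count, class name and paths are illustrative).
#include <rocksdb/db.h>

#include <cassert>
#include <functional>
#include <string>
#include <vector>

class ShardedStore {
 public:
  explicit ShardedStore(const std::vector<std::string>& paths) {
    rocksdb::Options options;
    options.create_if_missing = true;
    for (const auto& path : paths) {
      rocksdb::DB* db = nullptr;
      // Each shard is a separate database: separate tuning, separate
      // compactions, separate backups. This is the management overhead.
      rocksdb::Status s = rocksdb::DB::Open(options, path, &db);
      assert(s.ok());
      shards_.push_back(db);
    }
  }

  rocksdb::Status Put(const std::string& key, const std::string& value) {
    return Pick(key)->Put(rocksdb::WriteOptions(), key, value);
  }

  rocksdb::Status Get(const std::string& key, std::string* value) {
    return Pick(key)->Get(rocksdb::ReadOptions(), key, value);
  }

 private:
  rocksdb::DB* Pick(const std::string& key) {
    // Hash-based routing: cross-shard queries and rebalancing are left
    // entirely to the application.
    return shards_[std::hash<std::string>{}(key) % shards_.size()];
  }

  std::vector<rocksdb::DB*> shards_;
};
```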

How do you solve that and what do you bring to partners and users to address these limitations? Any technical details to really understand your value?
We completely redesigned the most basic components of RocksDB from the ground up to improve its performance, scalability, and resource utilization. For example, we developed a new compaction method that dramatically reduces the write amplification factor for large datasets, which makes it possible to run applications on a single database. This eliminates the need for sharding and frees developers to focus on what matters more.
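
As a rough, generic illustration of why write amplification matters at scale (textbook LSM-tree arithmetic, not Speedb's published figures): in classic leveled compaction each byte is rewritten roughly once per level, multiplied by the level fanout, so a large dataset can see every application write multiplied many times on its way to disk.

```cpp
// Back-of-envelope estimate of write amplification in a classic leveled
// LSM tree. Generic textbook numbers, not Speedb's measured results.
#include <cmath>
#include <cstdio>

int main() {
  const double fanout = 10.0;        // size ratio between adjacent levels
  const double db_size_gb = 1000.0;  // illustrative 1TB dataset
  const double l1_size_gb = 1.0;     // illustrative first-level size

  // Number of levels needed to hold the dataset.
  const double levels =
      std::ceil(std::log(db_size_gb / l1_size_gb) / std::log(fanout));

  // Rough write amplification: each byte is rewritten ~fanout times per level.
  const double write_amp = levels * fanout;

  std::printf("levels ~= %.0f, write amplification ~= %.0fx\n",
              levels, write_amp);
  return 0;
}
```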

Furthermore, we created our offering as a drop-in replacement for RocksDB. With Speedb, developers don't need to change a single line of code to support petabyte scaling of datasets with billions of objects – while maintaining high performance and low hardware requirements.
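
To make "drop-in" concrete, here is a minimal sketch of ordinary RocksDB usage in C++; since the claim is that no code changes are needed, a program like this would simply be relinked against the Speedb library instead of RocksDB (the exact build and link steps are an assumption here; consult Speedb's documentation).

```cpp
// Ordinary RocksDB key-value usage. With a drop-in replacement, the
// expectation is that this code stays unchanged and only the library it
// links against is swapped (build details are an assumption, not shown).
#include <rocksdb/db.h>

#include <cassert>
#include <iostream>
#include <string>

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/metadata_store", &db);
  assert(s.ok());

  // Store and read back a piece of metadata.
  s = db->Put(rocksdb::WriteOptions(), "object:42:size", "1048576");
  assert(s.ok());

  std::string value;
  s = db->Get(rocksdb::ReadOptions(), "object:42:size", &value);
  assert(s.ok());
  std::cout << "object:42:size = " << value << std::endl;

  delete db;  // close the database
  return 0;
}
```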

What are the use cases? Could you illustrate some gains with already-deployed configurations?
Our goal is to provide organizations with a data architecture that can address the performance and scalability required for modern, data-intensive workloads in a cost-effective manner. The Speedb data engine can support a variety of use cases. For example, in our partnership with Redis, we support metadata storage in large-scale Redis on Flash deployments. Redis on Flash is offered to customers that want to extend RAM capacity with SSD and persistent memory to store more data with fewer resources and thus reduce costs. The challenge was that when larger portions of the dataset were moved out of memory to slower flash drives, performance was compromised, resulting in latency and user-side stalls when running large datasets. By replacing RocksDB as the underlying data engine of Redis on Flash, we were able to significantly reduce the performance gap between RAM and flash, so Redis' customers can allocate more data to flash and save costs.
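
Conceptually, the tiering pattern described above looks something like the following sketch (an illustration of the general RAM-plus-flash idea, not Redis on Flash's actual implementation): hot values are served from memory, cold values fall back to a RocksDB-compatible engine on flash, and the engine's latency on that slow path is exactly what users feel once the dataset outgrows RAM.

```cpp
// Conceptual sketch of RAM-plus-flash tiering: hot keys in memory, cold
// keys in a RocksDB-compatible engine on flash. Illustration only.
#include <rocksdb/db.h>

#include <optional>
#include <string>
#include <unordered_map>

class TieredStore {
 public:
  explicit TieredStore(rocksdb::DB* flash_tier) : flash_tier_(flash_tier) {}

  std::optional<std::string> Get(const std::string& key) {
    // 1. Fast path: value is still resident in RAM.
    auto it = ram_tier_.find(key);
    if (it != ram_tier_.end()) return it->second;

    // 2. Slow path: value was evicted to flash. The data engine's latency
    //    here dominates what the user perceives once the dataset outgrows RAM.
    std::string value;
    rocksdb::Status s = flash_tier_->Get(rocksdb::ReadOptions(), key, &value);
    if (!s.ok()) return std::nullopt;
    ram_tier_[key] = value;  // naive promotion back to the RAM tier
    return value;
  }

 private:
  std::unordered_map<std::string, std::string> ram_tier_;
  rocksdb::DB* flash_tier_;  // owned by the caller in this sketch
};
```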

With another partner, XM Cyber, a provider of breach and attack simulation software, we were able to reduce memory consumption in Apache Flink streaming applications – so more metadata and more tasks can be handled on the same hardware while boosting read/write response times.

Key-value stores as storage engines are everywhere, so we are only addressing the tip of the iceberg today. Wherever data volumes – and metadata in particular – continue to explode, there's a data engine that just can't keep up, and that's where Speedb can dramatically improve the user experience. These use cases include boosting the performance of a wide range of databases, improving metadata access and utilization to unlock new metadata analytics opportunities, enhancing software capabilities so that developers can tap into recent hardware innovations, and much more.

What about your cloud strategy?
Certainly, we support cloud deployments. As an embedded library that resides in a parent software architecture, Speedb is like Switzerland. We're agnostic, so we can, and do, run in a variety of environments. For example, you can take data written on primary storage, and put it into the cloud in software that runs Speedb.

… And Kubernetes, which is pretty connected to it nowadays? How do you fit into this environment?
That’s the beauty of being software-agnostic – as an embedded library Speedb can work inside just about any application, including Kubernetes and cloud-native solutions – as well as more traditional solutions.

Let's speak about your go-to-market strategy: how do you sell your product, and to whom?
We are pursuing an OEM strategy where we look to partner with major database and application providers as the underlying data engine. We are planning to offer a SaaS universal data platform that supports multiple cloud databases.

At the same time, we are currently focused on building an open-source community where Speedb/RocksDB users and contributors can collaborate on the development of new data engine capabilities. We are doing this because until now, RocksDB developers and users haven’t really had a place where they can raise issues, offer improvements, share ideas, consult and collaborate with fellow developers.

RocksDB was developed by Facebook and was open sourced in 2013. Since then, RocksDB has been widely adopted. However, due to RocksDB's complexity and the lack of adequate support from Facebook, developers struggle to customize RocksDB to their needs. We believe that by sharing Speedb features and functionalities and working with the open-source community, RocksDB/Speedb developers can finally leverage the knowledge of subject matter experts, create better solutions that are tailored to their needs, and unlock new data engine use cases.

Your pricing model?
For the OEM strategy, we are using a per-node monthly subscription or a revenue-sharing model. For the cloud model we will be using pricing models such as per capacity stored or per data read/written, per query, as well as other models based on quality of service and performance.

To summarize, what are your unique differentiators?
We are building a new data architecture that will take data engines to the next stage. Every modern database, including MySQL, Mongo, Cockroach, Arango and the like, can work with a pluggable storage engine – so we know we're addressing an untapped and massive opportunity. We offer unmatched capabilities that allow our users to address the most demanding data processing and management needs and achieve high performance and cost efficiency at cloud scale. We're taking the lead in fostering data engine innovation and delivering new capabilities to markets that will struggle without them.

What are your key roadmap items for the near future?
We’ll be debuting an open source version of Speedb soon, along with an enterprise version. On top of that we are building the first global RocksDB/Speedb open source community to foster data engine innovation and deliver new capabilities to the growing audience of data engine developers. The yin and yang of developer communities with their multiple great ideas on features and functionality, and thoughtful supportive vendors ready to build solutions to address them, is a beautiful phenomenon where both sides benefit greatly. We’re happy to take the lead in cultivating a thriving developer community while offering solutions that save time and free developers to focus on their core offerings.
