
Exclusive Interview With Renen Hallak, CEO, Vast Data

Unique company trajectory in storage industry

Renen Hallak, 38, is the CEO and co-founder of Vast Data, a company he launched in 2016 with Jeff Denworth and Shachar Fienblit. The team revealed the product when it unveiled the company in February 2019. Before that, Hallak was VP of R&D at XtremIO, a company acquired by EMC in 2012, where he led a group of several hundred engineers. He joined XtremIO at its inception in 2009 as its first engineer. He is based in New York, where his company is headquartered.

StorageNewsletter: Founded in 2016, your firm is growing rapidly and shaking up the file storage space. Could you give us a company snapshot, with some financial figures, employees, offices, etc., as well as the founders' backgrounds?
Hallak:
We closed our fiscal year at the end of January with 175 people, most of whom are part of our R&D organization in Tel Aviv, Israel. HQ is in NYC, where our sales and marketing team is based, and our operations team is based in San Jose, CA. We have sales teams across the US, and now also in Europe and APAC. We recently announced hitting a $150 million run rate within less than 2 years of launching the product, further solidifying our position as the fastest growing infrastructure start-up in history. In addition to our fast growth in revenue, we are particularly proud of the efficiency of our company, which allows for sustainable growth. While driving this growth, we were able to achieve cash flow positivity. The combination of high revenue numbers, high growth multiples, and not burning cash is a unique achievement. Our backgrounds are in infrastructure, and storage in particular, ranging from parallel file systems to all-flash block devices to analytics and database infrastructure. We are lucky to have world-class leaders in all key positions.

You have already raised $180 million, and your last round made you a unicorn, one of the fastest companies to reach that status. Does it change how the company is regarded in the market by partners, prospects, or clients? Is the next step an IPO, and when?
Of the $180 million raised, we still have not touched any of the money from the last $100 million round or the previous $40 million round. The reason we raised it is to give our customers and partners the confidence required to make large investments in our technology. This is evidenced by several double-digit (in $ million) customers we have recently made public. While the strong balance sheet proves we will be here to serve and support our customers in the long term, the high valuation gives our customers the confidence that we will not be acquired by one of the companies whose products we are displacing. While an IPO may be inevitable down the road, we are currently focused on building the best product and providing the best service for our customers, and we are enjoying the liberty of long-term thinking that public companies, and companies in need of further capital, sometimes lack.

You develop software but you sell hardware. Did Covid-19 impact your business?
We have seen continued exponential growth throughout this pandemic year, and have exceeded our very aggressive plan every quarter in terms of bookings and revenue, while spending much less than our budget allowed for. Covid affected the mix of customers across verticals, but did not have a material impact beyond that. We enjoy the flexibility of having our software run in containerized environments, across any compute platform. This allows us, alongside our channel partners, to offer both hardware- and software-based solutions based on each customer's preference.

Your company addresses high performance and scalable file storage needs for highly demanding applications. How do you define what you do? And what use cases do you target?
Our system breaks age-old architectural tradeoffs and provides data-driven applications with fast access to very large data sets, in a cost-effective manner, and without needing to tier information off to slower archives. Naturally, we started by tackling the most demanding use cases, which require both high performance and high capacity, like analytics, AI, ML, etc. Over the past 2 years, we have added many enterprise features, like snapshots, encryption, replication, Windows and Mac support, etc. We now find our customers using our platform for many use cases, ranging from VMware to backup targets. When we inquire into the reason, the short answer is 'it just works'. The longer answer is that once you have a single storage system that is faster than anything else, more scalable than anything else, easier to use than anything else, more resilient than anything else, and extremely cost-effective, there is no reason to place data on anything else.

Many scalable NAS systems use a shared-nothing model, but you advocate a new architecture named DASE, for Disaggregated Shared Everything. Could you describe it?
This new architecture is what allows us to break the tradeoff between price and performance, between latency and scale, and between resilience and ease of use. It is based on new technologies that only became available in 2018, some of which are just now becoming available, and it allows storage to accelerate next generation applications rather than being the obstacle to their adoption. The idea behind the architecture is to disaggregate storage logic from state, making all data accessible from any CPU in the data center while eliminating all east-west communication between nodes. This opens the door to exabyte scale, unprecedented resilience, and most importantly, fast access to all your data in an affordable model.
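
To make the shared-everything idea concrete, here is a minimal Python sketch; the names (SharedEnclosure, StatelessServer) and the toy hashing scheme are illustrative assumptions, not Vast's implementation. The point it demonstrates is the one Hallak makes: every stateless server reaches every storage enclosure directly (in practice over NVMe-oF), so serving a request never requires talking to another server.

class SharedEnclosure:
    """A pool of persistent media reachable by every server, e.g. over NVMe-oF."""
    def __init__(self):
        self.blocks = {}  # address -> data; the only place state lives

    def write(self, addr, data):
        self.blocks[addr] = data

    def read(self, addr):
        return self.blocks[addr]


class StatelessServer:
    """Holds no persistent state, so any server can serve any request."""
    def __init__(self, enclosures):
        self.enclosures = enclosures  # every server sees every enclosure

    def _locate(self, key):
        return self.enclosures[hash(key) % len(self.enclosures)]

    def put(self, key, data):
        self._locate(key).write(key, data)  # straight to shared media, no peer chatter

    def get(self, key):
        return self._locate(key).read(key)


enclosures = [SharedEnclosure() for _ in range(4)]
servers = [StatelessServer(enclosures) for _ in range(8)]
servers[0].put("file-1", b"hello")
print(servers[7].get("file-1"))  # any server can serve data written by any other

In this model, scaling performance means adding servers and scaling capacity means adding enclosures; neither requires rebalancing state between nodes, which is exactly the east-west traffic that shared-nothing designs cannot avoid.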

We understand your firm wants to kill the HDD model with a better $/TB, with a solution that can be deployed for primary and/or secondary storage. Beyond the architecture described previously, Optane and QLC flash are two key components supporting that model. How do you use them?
Optane is key to many aspects of the system. It acts as a large, fast, persistent, distributed write buffer and metadata store. Having this write buffer buys us time between the point a write is performed by the application and the point at which it needs to be placed on flash. That allows us to place the data based on when we expect it to be overwritten, and to eliminate any type of write amplification, a key enabler in the use of the lowest cost flash. It also allows us to apply efficient global data protection codes and aggressive global data reduction algorithms that wouldn't be possible otherwise. As a metadata store, Optane allows us to have very granular, byte-addressable, self-describing metadata. This enables a new set of advanced functionality to be run within the storage system, while still maintaining high performance and resilience, and while eliminating historic hot spots and bottlenecks such as metadata and lock managers. QLC is the lowest cost flash available today, and allows us to meet the low $/TB price points. As new forms of PLC and ZNS flash devices become available, we will be the first to adopt them.
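
As a rough illustration of the write-amplification point, here is a hypothetical Python sketch of lifetime-aware placement; the lifetime classes, buffer sizes, and class names are invented for the example and are not Vast's algorithm. Writes accumulate in a persistent buffer (the Optane role), are grouped by how long they are expected to live, and each group fills its own flash erase block, so data that dies together is erased together and the drive never has to copy live pages around.

from collections import defaultdict

ERASE_BLOCK_PAGES = 4  # pages per erase block; tiny for illustration


class Flash:
    """Stand-in for a QLC SSD that is only ever written in full erase blocks."""
    def __init__(self):
        self.erase_blocks = []

    def program(self, lifetime_class, pages):
        # Every page in this block shares a lifetime class, so the whole block
        # can later be erased at once with zero live-page copying.
        self.erase_blocks.append((lifetime_class, list(pages)))


class WriteBuffer:
    """Stand-in for the Optane layer: fast, persistent, absorbs writes first."""
    def __init__(self, flash):
        self.flash = flash
        self.pending = defaultdict(list)  # lifetime class -> buffered pages

    def write(self, page, lifetime_class):
        """lifetime_class is a coarse prediction, e.g. 'hours', 'days', 'years'."""
        bucket = self.pending[lifetime_class]
        bucket.append(page)
        if len(bucket) == ERASE_BLOCK_PAGES:  # a full, same-lifetime block is ready
            self.flash.program(lifetime_class, bucket)
            bucket.clear()


flash = Flash()
buf = WriteBuffer(flash)
for i in range(8):
    buf.write(f"scratch-{i}", "hours")  # short-lived data fills its own blocks
print(len(flash.erase_blocks))  # -> 2 blocks, each erasable later without copying

The buffer is also what the "buys us time" remark refers to: because writes are already persistent in the buffer, the system can wait until a full, well-sorted block is ready before touching the low-endurance QLC media.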

Could you share key differentiators vs. competition?
When you have a single system that is faster, more scalable, more resilient, easier to use and less expensive, organizations very quickly understand that they can throw all their data at us. When you have a single system that is big enough and affordable enough to store all your data, and provides sub-millisecond access to all of it, you don’t need to think about storage any more. But more importantly, you can start transforming your business in ways that were not possible before. Faster access to data means faster business processes and decision making. Fast access to all your data means you can get much better insight from it by leveraging new analytics and deep learning capabilities.

Recent IO500 lists confirm that HPC and enterprise storage needs are converging, as some HPC sites adopt NFS and large corporations consider parallel file systems. Your appearance in this HPC ranking validates your approach and confirms that NFS can scale and deliver really high performance. Can you share more details about these configurations?
We are not big believers in benchmarks and synthetic hero numbers. We find that many systems can be tuned to show great numbers under these benchmarks while performing terribly in the real world. Caching and tiering are a good example of ways to achieve that. We advocate against these methods, which introduce inconsistent behaviors, and instead believe in a single tier that provides fast, deterministic access to all your data, while solving the cost challenges in smart software rather than pushing that problem up to the user. We found that breaking tradeoffs extends to the realm of parallel file systems. The combination of stateless containers that can run on the same physical servers as the application, and the parallel NVMf communications that we leverage between these containers and the storage media, provides the speed and parallelism of parallel file systems while maintaining the resilience and simplicity of a NAS appliance. Moreover, there is nothing inherently slow in the NFS protocol. When you write the protocol stacks yourself rather than leaning on open source implementations, you can achieve amazing results without requiring proprietary APIs.
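
For intuition, here is a hypothetical Python sketch of the parallelism being described, with invented names and stripe math that are not Vast's protocol stack: a single logical read, presented to the client as plain NFS, fans out concurrently across many storage devices, which is the effect the NVMf fabric gives the stateless containers.

from concurrent.futures import ThreadPoolExecutor

STRIPE = 1 << 20  # 1 MiB stripe units


class Device:
    """Stand-in for one NVMe-oF storage target."""
    def read(self, offset, length):
        return bytes(length)  # pretend the media returns data


def parallel_read(devices, file_offset, length):
    """Serve one logical read by fetching every stripe concurrently."""
    offsets = range(file_offset, file_offset + length, STRIPE)
    with ThreadPoolExecutor() as pool:
        futures = [
            pool.submit(
                devices[(off // STRIPE) % len(devices)].read,
                off,
                min(STRIPE, file_offset + length - off),
            )
            for off in offsets
        ]
        return b"".join(f.result() for f in futures)


devices = [Device() for _ in range(8)]
data = parallel_read(devices, 0, 4 * STRIPE)
print(len(data))  # 4 MiB assembled from 4 concurrent device reads

In this picture the client only ever speaks NFS to a single mount point; the fan-out happens behind the protocol, which is why the protocol itself need not be the bottleneck.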

AI is popping up in different places, and you introduced Lightspeed recently. Could you give us the pitch for it?
AI is a very interesting workload. You write something, keep it forever, and read it over and over again as you train your model and re-infer from your information using these improved models. It shifts the focus from traditionally write-intensive high performance workloads, which did not get a chance to read the data before it was relegated to a lower, more cost-effective tier, to very heavy random read workloads. These new workloads require random access media like Optane and flash, but are very large in capacity because data is now kept forever. They are also large in capacity because they are usually run on natural data such as pictures, genomes, sound, etc., which tend to be much larger than structured records and tables. This poses a huge challenge for legacy storage systems, which were not built with these use cases in mind, and a huge opportunity for Vast as a result.

You're one of the few players officially validated by Nvidia for GPUDirect Storage. Any details on this? How do you leverage it, and what benefits does it bring for applications and users?
GPUDirect allows direct connectivity between very fast GPUs from Nvidia and very fast storage from Vast, without going through a slow CPU in between. I often give the analogy of a multi-lane highway with a toll booth in the middle slowing everyone down. GPUDirect eliminates that toll booth and allows traffic to flow freely between storage and GPU. Surprisingly, we did not need to write a single line of code to support this new protocol. Nvidia made sure to support the standard NFS client, and because we don't require a proprietary protocol or non-standard client-side code, no further work was required on our end. The simplicity of using NFS while leveraging direct connectivity to its fullest is a combination that no other storage architecture can offer.

What about your cloud strategy?
Vast is a software solution that was built by leveraging cloud concepts: disaggregation; stateless containers acting as cattle, not pets; fail-in-place hardware that eliminates the need to go into a data center; infinite scale; and independent scaling of capacity and performance, of storage and compute. We can run our software on dedicated hardware, on shared on-prem hardware, as well as in private and public clouds, leveraging containerized orchestration frameworks. We find that most customers running multi-petabyte workloads are currently not doing so in the public cloud, and we have the flexibility to adapt to their requirements.

From a business perspective, could you share some figures?
We recently announced reaching a $150 million run rate less than 2 years after launching the product and coming out of stealth. We were also able to achieve cash flow positivity while driving this record-breaking growth rate. It is a testament to our customers and partners realizing the value of this new technology and making large investments in it. We have several double-digit (in $ million) customers, and several dozen customers who have purchased more than $1 million of our systems.

How do you sell your product? Is it channel-only, or a mix, especially for very large deals? And what about OEM? Is it a potential additional direction?
We lean heavily on our channel partners, and could not have achieved these numbers with only 8 salespeople if it weren't for these partners. We have not begun any OEM activity to date, because we want to have a direct relationship with every single customer. Their feedback is what allows us to continue improving the product to meet their needs, and having that direct relationship also allows us to provide the highest levels of customer satisfaction possible.

How do you price your solution?
The solution is priced per capacity. This allows our customers to get all the performance they need out of the system without paying a premium for it. We sell multi-petabyte stores and, as a result, we almost always displace legacy hard-drive-based systems.

How many customers do you have? What is the split between geos, and between different use cases? File and object access? Total capacity deployed?
We were focused on the US market to start, and have now expanded into both EMEA and APAC, with teams in the UK, France, Germany, Australia and South Korea. Customers include large enterprises and governments alongside smaller data intensive organizations. The use cases vary between new workloads such as analytics, AI, ML and DL, alongside more traditional workloads such as media and entertainment, VM storage and backup targets. The amount of capacity deployed is in the exabyte range, which is more than I originally expected considering it is all Optane and flash-based storage.

What can we expect for 2021?
As we take over more and more of the storage market, we are beginning to lay the foundation for our second act, which goes beyond what is considered a traditional storage system. We will continue to enable our customers to make the most of their data, developing the infrastructure required to run next generation data-intensive applications.
