
Twitter Acquired DriveScale

Covid-19 effect or anticipation of other difficulties?

Covid-19 has probably impacted a company trajectory once again and, since the beginning of the pandemic, we count more than 10 M&As that may have happened earlier than expected.

Nevertheless, effect or not, we can’t really attribute the root cause exclusively to Covid-19; it probably comes from a combination of symptoms, with Covid-19 finally serving as a catalyst.

Again, the reality is that a normal company became fragile and a fragile one disappeared, got acquired or had the opportunity, or the luck, to raise a new round.

At the same time, the situation presents real opportunities: VCs gain more and faster control of companies at reduced valuations, and of course some bargains appear for acquirers. Recently, in addition to DriveScale, Igneous and Caringo are 2 good examples, acquired respectively by Rubrik and DataCore. The game is not over for everyone; corporate development teams are more active than ever.

The first surprise is the absence of a press release. We didn’t find one on the DriveScale site, just the new image shown above and a series of 3 tweets from Nick Tornow, platform lead at Twitter. In these tweets, it’s a surprise to read about the development of a “persistent block-level storage” project and that the DriveScale team joins the compute group. Tweets are a truly structured model: messages have a limited size and well-defined metadata, and Twitter uses MySQL as its primary storage (yes, you read correctly, see this page). This is another example of the datastore term used by many people; it’s a question of abstraction and of where you stand when you speak. We understand that Twitter doesn’t have a storage group, or at least that the storage team belongs to the compute one. It’s a model…

We didn’t find any tweets from the CTO, CFO or CEO Jack Dorsey, confirming that DriveScale is a technology acquisition for internal usage. But without any noise around it, it gives the feeling that the deal is insignificant and that the company was acquired for a penny. It reminds us of what happened to SwiftStack in March 2020, before the real business impact of Covid-19, when Nvidia swallowed it. There was, and still is, no mention of the deal on the Nvidia web site, but of course SwiftStack promoted the deal on its side.

As of today, the DriveScale web site is down and returns a 404 error, meaning the requested resource no longer exists. On Crunchbase, DriveScale is declared closed.

Brian Pawlowski, CTO at DriveScale, left the company a few weeks ago and joined Quantum as chief development officer. Was it a sign of strategy divergence, or that the deal was already well underway? In hindsight, this departure invites us to think so.

For an agile company on the technology edge, another indicator is the absence of any press release: DriveScale hasn’t issued one since May 2020, almost 9 months ago.

For DriveScale, the landing is not the one the market could have expected. The company was founded almost 8 years ago, in March 2013, and raised just $26 million in total.

This demonstrated frugality, in the current pandemic crisis, finally weakened the company, and we invite the reader to look for other similar patterns, as those companies’ destiny could be implicitly linked to that behavior. DriveScale also received a debt line in April 2020, and last September we unveiled a list of companies helped by the Small Business Administration and Treasury Department: DriveScale was granted between $350,000 and $1 million to cover 13 jobs.

Round | Date | Amount ($ million) | Investors
A (A1, later stage) | May 2018 | 8 | -
A (early stage) | May 2016 | 15 | Ingrasys, Nautilus Venture Partners, Pelion Venture Partners
Seed | February 2015 | 2.5 | -
Pre-Seed | March 2013 | 0.5 | -

Individual investors are not listed in this table. 

In terms of market segment, this episode doesn’t inspire confidence in the disaggregated architecture initiated by Internet giants like Facebook, Google, AWS or Azure and actively promoted for the enterprise by Liqid, DriveScale, Silk, Dell, HPE, Lightbits Labs and Western Digital, to name a few. Every vendor develops its own approach centered on a different element.

In fact, when we dig a bit, it turns out that the architecture model is well received by the market, though of course not suited for everyone: it’s not a must-have for the vast majority of users. The independent vendor approach doesn’t seem to be the right market path. As a consequence, having the logic embedded in a more comprehensive product via OEM or other technology agreements could have a better footprint and future.

The interesting part is the composable architecture and what a company like Twitter can do with it. Historically, you have systems and you either grow them, the scale-up model, or group them, the scale-out model. This is classic: you start with a basic system and incrementally grow it in various directions. But it’s very rigid; good for some use cases, no doubt, but it suffers from a coarse granularity of resources.

The second approach has been to divide a system, create VMs and potentially cluster these VMs across chassis. We also saw the emergence of containers, which bring agility, flexibility and better resource consumption. But when you need a system bigger than the physical machine, you group machines; remember some development directions of the past around the single system image.

Now the story is different: you have various resources, CPU (Intel x86, AMD EPYC, Arm, etc.), GPU, memory, persistent memory (PM), PCIe, NVMe, NVMe-oF, disks, network switches… and you can group some of them to build a system, finally, without any boundaries. Of course boundaries still exist, but you get the idea: the chassis barrier is broken. In other terms, you disaggregate the infrastructure to compose an entire system, compute+storage+network, just compute+storage, or just storage or compute, pick what you prefer. Some people in the industry actively preach this model with the mantra compose, decompose and recompose, meaning that you dynamically build your system on demand and release its resources to build a new one for other needs, as in the sketch below.
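To make the compose/decompose/recompose idea concrete, here is a minimal Python sketch of such a controller. It is purely illustrative: the Resource, ComposedSystem and Composer names are ours, not DriveScale’s or Twitter’s API, and a real composer would bind resources over a fabric (PCIe, NVMe-oF, Ethernet) rather than flip a flag in memory.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Resource:
    """A disaggregated resource sitting in the shared pool (CPU, GPU, NVMe drive...)."""
    id: str
    kind: str          # e.g. "cpu", "gpu", "nvme"
    in_use: bool = False

@dataclass
class ComposedSystem:
    """A logical server assembled from pooled resources, not a physical chassis."""
    name: str
    resources: List[Resource] = field(default_factory=list)

class Composer:
    """Tracks the free pool and composes/decomposes logical systems on demand."""
    def __init__(self, pool: List[Resource]):
        self.pool = pool
        self.systems: Dict[str, ComposedSystem] = {}

    def compose(self, name: str, wants: Dict[str, int]) -> ComposedSystem:
        """Claim the requested count of each resource kind and bind them into one system."""
        picked: List[Resource] = []
        for kind, count in wants.items():
            free = [r for r in self.pool if r.kind == kind and not r.in_use]
            if len(free) < count:
                raise RuntimeError(f"not enough free {kind} in the pool")
            picked.extend(free[:count])
        for r in picked:
            r.in_use = True
        system = ComposedSystem(name, picked)
        self.systems[name] = system
        return system

    def decompose(self, name: str) -> None:
        """Release every resource of the system back to the shared pool."""
        for r in self.systems.pop(name).resources:
            r.in_use = False

# A tiny pool, then compose, decompose and recompose without touching a chassis.
pool = [Resource(f"cpu{i}", "cpu") for i in range(8)] + \
       [Resource(f"gpu{i}", "gpu") for i in range(2)] + \
       [Resource(f"nvme{i}", "nvme") for i in range(16)]
composer = Composer(pool)
analytics = composer.compose("analytics", {"cpu": 4, "gpu": 1, "nvme": 8})
composer.decompose("analytics")           # resources return to the pool
rendering = composer.compose("rendering", {"cpu": 8, "gpu": 2, "nvme": 4})
```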

Imagine the last few days of a quarter: your finance department requires more power to book deals, so you dynamically bring a new system to that service and release it when the quarter is closed. Now imagine a rendering farm for a movie, where producers need to generate all the special effects at all resolutions: same thing, you dynamically start new systems for that need and decompose them later. So you get the point: you anticipate a peak and build a system dynamically, and even if you don’t anticipate the sudden need, you’re able to build a new system from the catalog of resources you track. And you do this very rapidly, in a matter of minutes.
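Continuing the hypothetical sketch above (same composer object), this “compose for the peak, release at the end” pattern reads naturally as a scoped allocation; run_end_of_quarter_jobs is just a placeholder for whatever workload the finance team actually runs.

```python
from contextlib import contextmanager

@contextmanager
def temporary_system(composer, name, wants):
    """Compose a system for a known peak and guarantee its release afterwards."""
    system = composer.compose(name, wants)
    try:
        yield system
    finally:
        composer.decompose(name)   # resources go back to the pool, ready to be recomposed

def run_end_of_quarter_jobs(system):
    # Placeholder for the real workload running on the composed system.
    print(f"running on {len(system.resources)} pooled resources")

# Quarter close: give finance extra capacity for a few days, then take it back.
with temporary_system(composer, "quarter-close", {"cpu": 4, "nvme": 8}) as peak:
    run_end_of_quarter_jobs(peak)
```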

What is important here is the granularity of the managed resources, the nature of those resources (if some players don’t support GPUs, the model is less comprehensive), and the protocols and technologies used.

Check some technology directions that will play a key role in that space: CXL, Gen-Z, CCIX and OpenCAPI.

Twitter won’t communicate on how it will use DriveScale, but we understand it will be for block storage; we’ll see if some information leaks from that team so we can learn more.
