First to Adopt SMR: What Dropbox Learned in the First Year
Everything simply worked with an SSD cache.
This is a Press Release edited by StorageNewsletter.com on August 30, 2019, at 2:20 pm. This blog was posted by Dropbox, Inc.'s Refugio Fernandez, Preslav Le, Victor Li, and Alexander Sosa from the Magic Pocket and Hardware engineering teams on August 21, 2019.
A year ago, we became the first major tech company to adopt high-density SMR (Shingled Magnetic Recording) technology with 14TB HDDs from Western Digital Corp. for our storage drives.
At the time, we faced a challenge: while SMR offers major cost savings over conventional PMR (Perpendicular Magnetic Recording) drives, the technology is slower to write than conventional drives. We set out on a journey to reap the cost-saving benefit of SMR without giving up on performance. One year later, here’s the story of how we achieved just that.
The Best Surprise Is No Surprise
When the first production machines started arriving in September, it was only natural to have a little apprehension and think, “Does this actually work? What if it breaks?” It was raw, completely new technology, and it was terrifying. And then … nothing happened. That is, everything simply worked.
Not that this was much of a surprise. The lead-up to this event was the classic Dropbox story: sweat the details, build confidence by relentlessly bashing out the bugs, and then roll the project out on a large scale.
For the past three years, the Dropbox teams involved in the changeover to SMR have been doing a lot of innovation, and more testing than any of us care to remember. The hardware team worked very closely with the Magic Pocket software team, envisioning each possible edge case and running through every imaginable scenario. That diligent work helped ensure that the migration to SMR drives went off without a hitch.
Software Updates
To prepare for running SMR, we had to make substantial changes to the software layer. We optimized the software by improving the throughput of the networking stack to the disk. We also added an SSD cache: since you can only write to SMR disks in fixed-size zones, and only sequentially, we knew we needed an area to stage the writes. Adding support for this SSD staging layer was targeted specifically at our transition to SMR, but it has helped latency in other cases as well.
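To make that concrete, here is a minimal sketch of what such a staging layer can look like. The types and names are our illustration, not Magic Pocket's actual code: writes accumulate in an SSD-backed buffer, and once a full zone's worth of data is ready, it is flushed to the SMR drive in one sequential pass.

    const zoneSize = 256 << 20 // 256MB SMR zones

    // zoneWriter abstracts the SMR drive: writes must start at a zone's
    // current write pointer and proceed strictly sequentially.
    type zoneWriter interface {
        AppendToZone(zone int, data []byte) error
    }

    type stagingBuffer struct {
        ssd  []byte // staged data; persisted on the SSD cache in reality
        zone int    // next destination zone on the SMR drive
        dst  zoneWriter
    }

    // Write stages a block on the SSD and flushes whenever a zone's
    // worth of data has accumulated.
    func (b *stagingBuffer) Write(block []byte) error {
        b.ssd = append(b.ssd, block...)
        if len(b.ssd) >= zoneSize {
            if err := b.dst.AppendToZone(b.zone, b.ssd[:zoneSize]); err != nil {
                return err
            }
            b.ssd = b.ssd[zoneSize:] // keep any overflow for the next zone
            b.zone++
        }
        return nil
    }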
Fortunately, we had done a significant portion of the software work required for sequential writing ahead of time, testing it on our existing fleet of PMR disks.
Before we even began to build the new architecture, we made sure it would support both PMR and SMR. This meant the whole stack was thoroughly exercised across hundreds of thousands of already-installed disks before we even started bringing the SMR machines online, which removed a considerable amount of risk from the equation. All we had to do once we received the new machines was change the actual disk we were writing to.
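As an illustration of the idea (the interface and names here are ours, not the real Magic Pocket code), the storage engine is written against a single disk abstraction, and only the lowest write path differs between drive types:

    // diskBackend is a hypothetical abstraction: the same storage
    // engine runs against either drive technology.
    type diskBackend interface {
        WriteBlock(off int64, data []byte) error
    }

    type pmrDisk struct{} // conventional drive: writes may land at any offset
    type smrDisk struct{} // shingled drive: writes append at the zone write pointer

    func (pmrDisk) WriteBlock(off int64, data []byte) error { return nil } // random write
    func (smrDisk) WriteBlock(off int64, data []byte) error { return nil } // sequential append via SSD staging

    // newBackend picks the write path at startup; everything above this
    // point in the stack is identical for PMR and SMR, so the installed
    // PMR fleet exercised the same code before the first SMR machine
    // came online.
    func newBackend(smr bool) diskBackend {
        if smr {
            return smrDisk{}
        }
        return pmrDisk{}
    }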
In the end, one aspect of our original design helped smooth the transition to SMR. The generic Dropbox infrastructure handles data in immutable blocks of up to 4MB, which is convenient for SMR: because blocks are never overwritten in place, new data can always be written to the disk sequentially. And the 1GB extents we use to lay out data in Magic Pocket fit perfectly with the 256MB zones used to split up SMR drives.
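The arithmetic lines up neatly: a 1GB extent spans exactly four 256MB zones, and a zone holds 64 blocks of the maximum 4MB size. A small sketch of the mapping (our illustration, which assumes fixed 4MB blocks for simplicity; real blocks are merely capped at 4MB):

    const (
        blockSize  = 4 << 20   // immutable blocks of up to 4MB
        zoneSize   = 256 << 20 // one SMR zone: 64 maximum-size blocks
        extentSize = 1 << 30   // one Magic Pocket extent: 4 zones, 256 blocks
    )

    // blockLocation maps the n-th 4MB block of an extent to the zone it
    // lands in (0-3 within the extent) and its byte offset in that zone.
    func blockLocation(n int) (zone int, offset int64) {
        byteOff := int64(n) * blockSize
        return int(byteOff / zoneSize), byteOff % zoneSize
    }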
Hardware
Initially, SMR was a proof of concept: could we actually make it function the way we wanted? From a hardware point of view, turning to SMR would help us build storage density faster than with PMR. What we found was that the use case for SMR matched up very well with the way we had already architected Magic Pocket.
But compared to the control we have over our own software stack, the hardware team had a massive hill to climb in learning everything that went on behind the scenes. One of the biggest challenges in enabling SMR for Dropbox was that it was a new technology in the data-center context. It was the healthy working relationship between the hardware team and the Magic Pocket team that allowed the project to be as successful as it turned out to be.
Dropbox is a large tech company with the resources to calibrate and fine-tune its software for SMR, but the use case is also so natural for us that we've been motivated to move quickly.
Still, being first had its challenges, not least the sheer amount of data we already manage. Our hardware team had very limited support when it came to preparing for SMR: the vendors selling the drives didn't have the chassis configuration that we have. Our current test cluster is about six racks with 48 systems, or close to 5,000 drives. Because we iterated through our revisions at that scale, we obtained a far better signal, which led to a stronger test process. And that helped put us at the bleeding edge of the technology: few companies have really invested in SMR, so we often ended up doing a lot of the testing for our vendors, which kept us a step ahead.
Increasing Our Density
One of our goals when embarking on the SMR initiative last year was to have 25% of our storage capacity on SMR drives in 2019. Since September, all new drives for our storage servers have been SMR. Meanwhile, we've been able to increase the density of our disk capacity faster than the data itself is growing. At this rate, close to 40% of Dropbox's data will be on SMR by the end of 2019, surpassing our original goal.
Cost Savings
Much like our storage goals, the actual cost savings of switching to SMR have met our expectations. We're able to store roughly 10 to 20% more data on an SMR drive than on a comparable PMR drive at little to no cost difference. We also found that moving to the high-capacity SMR drives we're using now has resulted in more than 20% savings overall compared to the last-generation storage design, with further savings coming in part from other new, lower-cost hardware. Meanwhile, our efforts to work with multiple vendors for the SMR drives will benefit the entire Dropbox supply chain and the company's future storage cost structure.
Energy Savings
The transition to SMR has also made us a much more efficient energy consumer. SMR drives have a lower power footprint, so we're realizing savings by using the new 14TB drives compared to the previous 8TB units. In essence, we are running much denser racks, but our power draw has increased only marginally. We have also been able to increase the number of storage disks on a single machine from 60 to 100 while keeping the same CPU and memory. Thanks to these efficiencies, we expect to realize even further energy savings as we eventually move to 18, 21, and 24TB drives.
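To put rough numbers on the density gain: a 60-disk machine with 8TB drives holds 480TB, while a 100-disk machine with 14TB drives holds 1.4PB, nearly three times the raw capacity per machine for only a marginal increase in power draw.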
Open Source
The library we use to write to the SMR drives is the open source libzbc, and in the course of working with it and running into the occasional issue, we've made 13 contributions to the library. What's more, we developed our own testing tool, called SMRtest, which generates synthetic workloads to write, read back, and verify data, and to benchmark read/write throughput on SMR drives. We've already shared this tool with our ecosystem partners, suppliers, and vendors to ensure they have what they need to enable SMR. And we'll release SMRtest as open source software in the coming months to benefit the wider community.
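The core write/read/verify pattern behind such a tool is easy to sketch. The following is our illustration of that pattern, not the actual SMRtest code; it uses a temporary file as a stand-in for the zoned device so it is safe to run anywhere.

    package main

    import (
        "bytes"
        "crypto/sha256"
        "fmt"
        "math/rand"
        "os"
        "time"
    )

    const zoneSize = 256 << 20 // one 256MB SMR zone's worth of data

    func main() {
        // In a real tool this would be the zoned block device; a temp
        // file stands in here so the sketch runs without special hardware.
        f, err := os.CreateTemp("", "smrtest")
        if err != nil {
            panic(err)
        }
        defer os.Remove(f.Name())

        data := make([]byte, zoneSize)
        rand.Read(data) // synthetic workload: pseudorandom test data
        want := sha256.Sum256(data)

        start := time.Now() // sequential write pass
        if _, err := f.WriteAt(data, 0); err != nil {
            panic(err)
        }
        fmt.Printf("write: %.0f MB/s\n", float64(zoneSize)/(1<<20)/time.Since(start).Seconds())

        back := make([]byte, zoneSize)
        start = time.Now() // read pass
        if _, err := f.ReadAt(back, 0); err != nil {
            panic(err)
        }
        fmt.Printf("read: %.0f MB/s\n", float64(zoneSize)/(1<<20)/time.Since(start).Seconds())

        got := sha256.Sum256(back) // verify pass
        if !bytes.Equal(want[:], got[:]) {
            panic("verification failed: data read back does not match data written")
        }
    }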
Specific Challenges
On the software side of things, we opted for more capacity, performance, and flexibility by writing directly to the disks without a filesystem. Some operating systems are adding support for that, but when we were doing this work it wasn't an option. So in order to talk to the disk, we used libzbc, which is essentially a wrapper for sending commands directly to the drive without going through the Linux block device stack.
But during testing, we ran into the issue of disks simply failing, over and over. It turned out these weren't true hardware failures: since we weren't going through the Linux kernel, whose code includes retry logic, we had to implement our own retry logic around accessing the disk.
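The fix is the kind of retry wrapper the kernel would otherwise have provided. A minimal sketch (our own illustration; real code would also distinguish retryable from fatal errors):

    import (
        "fmt"
        "time"
    )

    // withRetries re-issues a disk operation that failed transiently,
    // backing off a little longer after each attempt, standing in for
    // the retry logic the Linux block layer would normally supply.
    func withRetries(attempts int, op func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            time.Sleep(time.Duration(i+1) * 100 * time.Millisecond)
        }
        return fmt.Errorf("disk operation failed after %d attempts: %w", attempts, err)
    }

A call site then wraps each raw zone access, for example withRetries(3, func() error { return writeZone(dev, zone, buf) }), where writeZone is a hypothetical stand-in for whatever function issues the actual write command to the drive.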
Firmware was another issue in getting the SMR drive technology to work on the existing platform, largely because the components came from various vendors. We work with multiple HDD vendors, as well as various kinds of intermediary technologies, such as the HBAs (host bus adapters) that connect multiple drives to a system. Each of these vendors, as well as the server chassis itself, operated with its own firmware.
There were a lot of moving pieces, so the first initiative on the hardware side was to get our various partners and vendors to talk to each other. We then worked with each individual vendor to identify and resolve any issues early on, and all of the vendors have come forward and engaged with us.
But we are convinced that this will pay dividends in the long term. Opting to be multi-source across all the components, for example, insulates us against any single points of failure or too much reliance on a single supplier from a supply chain perspective.
Cold Storage and SMR
One of the latest developments at Dropbox is a new cold storage tier, designed for less frequently accessed data. Depending on the replication scheme, we've managed to cut disk usage by 25 to 33% with no noticeable change to the end-user experience. The cold storage system runs on our current mix of SMR and PMR drives, which translates to additional cost savings without any difference in performance.
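For a sense of where numbers like those come from: keeping two full copies of a block costs 2x the raw space, while one common alternative, splitting the block into two fragments and storing them alongside a parity fragment, costs 1.5x, a 25% reduction; wider encodings of the same kind push the savings toward the 33% end of the range. (The specific scheme here is our illustrative example; the post mentioned below describes the actual design.)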
To learn more about how we set up the cold tier, read Preslav Le’s recent blog post.
What the Future Holds
The simplicity of our infrastructure has also set us up to take advantage of all future innovations in storage technology. The beauty of this approach, we believe, is that future technologies will likely use the same or similar API as SMR, where disks will be able to achieve greater densities by writing continuously to a limited number of zones of the disk at any given time.
Those may be microwave-assisted magnetic recording (MAMR) drives or heat-assisted magnetic recording (HAMR) drives, but they will have the same interface, and we'll be able to use the same software architecture.
We're already working on further improving densities with our 2020 storage designs. By jumping to SMR, we've opened the door for whatever emerging technologies come next. For Dropbox, the end result is more cost-efficient storage with a smaller energy footprint, without sacrificing reliability or performance. For the industry overall, our efforts will pay dividends as the underlying architecture is adopted for future enhancements in HDD technology.
Comments
One of our readers, Tom Gartner, emailed us:
"Your article, First to Adopt SMR, What Dropbox Learned in First Year is an interesting usage case but fundamentally wrong in its assertion that Dropbox was in any way first to use SMR drives. Seagate began shipping SMR drives in 2013 and well before Dropbox received its first drive from WD in September 2018, Seagate has shipped 35 million SMR drives, see: Seagate Ships 35th Millionth SMR HDD, May 3, 2017. Somebody used those 35 million drives."