R&D: CacheSack, Theory and Experience of Google's Admission Optimization for Datacenter Flash Caches
It outperforms prior static admission policies with a 7.7% improvement in TCO, as well as improvements in disk reads (9.5% reduction) and flash wearout (17.8% reduction).
This is a Press Release edited by StorageNewsletter.com on May 9, 2023, at 2:00 pm.

ACM Transactions on Storage has published an article written by Tzu-Wei Yang (Google, Mountain View, CA, USA), Seth Pollen (Google, Madison, WI, USA), Mustafa Uysal and Arif Merchant (Google, Mountain View, CA, USA), and Homer Wolfmeister and Junaid Khalid (Google, Madison, WI, USA).
Abstract: “This paper describes the algorithm, implementation, and deployment experience of CacheSack, the admission algorithm for Google datacenter flash caches. CacheSack minimizes the dominant costs of Google’s datacenter flash caches: disk IO and flash footprint. CacheSack partitions cache traffic into disjoint categories, analyzes the observed cache benefit of each subset, and formulates a knapsack problem to assign the optimal admission policy to each subset. Prior to this work, Google datacenter flash cache admission policies were optimized manually, with most caches using the Lazy Adaptive Replacement Cache (LARC) algorithm. Production experiments showed that CacheSack significantly outperforms the prior static admission policies for a 7.7% improvement of the total cost of ownership, as well as significant improvements in disk reads (9.5% reduction) and flash wearout (17.8% reduction).”
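To illustrate the knapsack formulation described in the abstract, the sketch below shows one simplified way such an assignment could work. It is not Google's implementation: the names (PolicyEstimate, flash_bytes, disk_reads_saved, flash_budget) and the greedy strategy are illustrative assumptions. Each traffic category has a handful of candidate admission policies, each with an estimated flash footprint and estimated disk-read savings; the sketch upgrades categories to more aggressive policies in order of marginal benefit per extra flash byte until a flash-capacity budget is exhausted.

```python
from dataclasses import dataclass

@dataclass
class PolicyEstimate:
    name: str                 # e.g. "never admit", "admit on second miss", "admit on miss"
    flash_bytes: float        # estimated flash footprint if this category uses this policy
    disk_reads_saved: float   # estimated disk reads avoided with this policy

def assign_policies(categories: dict[str, list[PolicyEstimate]], flash_budget: float) -> dict[str, str]:
    """Greedy knapsack-style assignment of one admission policy per category.

    Every category starts at its smallest-footprint policy; upgrades are taken
    in order of marginal disk reads saved per extra flash byte, while the total
    flash footprint stays within flash_budget.
    """
    # Sort each category's candidate policies by flash footprint.
    for policies in categories.values():
        policies.sort(key=lambda p: p.flash_bytes)

    chosen = {name: 0 for name in categories}  # index of the currently assigned policy
    budget = flash_budget - sum(categories[n][0].flash_bytes for n in categories)

    while True:
        best = None  # (ratio, category name) of the best affordable upgrade
        for name, policies in categories.items():
            i = chosen[name]
            if i + 1 >= len(policies):
                continue
            d_cost = policies[i + 1].flash_bytes - policies[i].flash_bytes
            d_gain = policies[i + 1].disk_reads_saved - policies[i].disk_reads_saved
            if d_cost <= budget and d_gain > 0:
                ratio = d_gain / max(d_cost, 1e-9)
                if best is None or ratio > best[0]:
                    best = (ratio, name)
        if best is None:
            break  # no further upgrade fits the remaining budget
        _, name = best
        i = chosen[name]
        budget -= categories[name][i + 1].flash_bytes - categories[name][i].flash_bytes
        chosen[name] = i + 1

    return {name: categories[name][i].name for name, i in chosen.items()}

# Hypothetical usage with made-up per-category estimates:
example = {
    "photos": [PolicyEstimate("never admit", 0, 0),
               PolicyEstimate("admit on second miss", 40e9, 1200),
               PolicyEstimate("admit on miss", 90e9, 1500)],
    "logs":   [PolicyEstimate("never admit", 0, 0),
               PolicyEstimate("admit on second miss", 60e9, 300)],
}
print(assign_policies(example, flash_budget=100e9))
```

The sketch optimizes only disk reads saved under a hard flash-capacity budget; per the abstract, the production system's objective also accounts for flash footprint and wearout costs, and the observed per-category benefits are measured from live cache traffic rather than supplied as fixed constants.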