R&D: Analog Architectures for Neural Network Acceleration Based on Non-Volatile Memory
Exploring and consolidating the various approaches proposed to address the critical challenges faced by analog accelerators, for both neural network inference and training, and highlighting the key design trade-offs underlying these techniques.
This is a Press Release edited by StorageNewsletter.com on September 21, 2020, at 2:16 pm.

Applied Physics Reviews has published an article written by T. Patrick Xiao, Christopher H. Bennett, Ben Feinberg, Sapan Agarwal, and Matthew J. Marinella of Sandia National Laboratories, Albuquerque, New Mexico 87185-1084, USA.
Abstract: “Analog hardware accelerators, which perform computation within a dense memory array, have the potential to overcome the major bottlenecks faced by digital hardware for data-heavy workloads such as deep learning. Exploiting the intrinsic computational advantages of memory arrays, however, has proven to be challenging principally due to the overhead imposed by the peripheral circuitry and due to the non-ideal properties of memory devices that play the role of the synapse. We review the existing implementations of these accelerators for deep supervised learning, organizing our discussion around the different levels of the accelerator design hierarchy, with an emphasis on circuits and architecture. We explore and consolidate the various approaches that have been proposed to address the critical challenges faced by analog accelerators, for both neural network inference and training, and highlight the key design trade-offs underlying these techniques.”
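To make the core idea concrete: in such accelerators, a weight matrix is stored as device conductances in a non-volatile memory crossbar, and a matrix-vector product is computed in a single analog step by applying input voltages to the rows and summing the resulting column currents. The short NumPy sketch below illustrates this principle and the effect of non-ideal synaptic devices; the crossbar size, conductance window, read-voltage range, and noise model are all illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed crossbar dimensions: 64 rows (inputs) x 32 columns (outputs).
n_rows, n_cols = 64, 32

# Ideal weights, linearly mapped to an assumed programmable conductance window.
w_ideal = rng.standard_normal((n_rows, n_cols))
g_min, g_max = 1e-6, 1e-4  # siemens (illustrative device range)
w_span = np.ptp(w_ideal)
g_ideal = g_min + (g_max - g_min) * (w_ideal - w_ideal.min()) / w_span

# Non-ideal synapse: programming error modeled here as multiplicative
# lognormal noise on each device conductance (a simplifying assumption).
g_actual = g_ideal * rng.lognormal(mean=0.0, sigma=0.05, size=g_ideal.shape)

# Input activations encoded as read voltages applied to the rows.
v_in = rng.uniform(0.0, 0.2, size=n_rows)  # volts (assumed read range)

# Analog matrix-vector multiply: by Ohm's law and Kirchhoff's current law,
# each column current is I_j = sum_i G_ij * V_i.
i_ideal = g_ideal.T @ v_in
i_noisy = g_actual.T @ v_in

rel_err = np.linalg.norm(i_noisy - i_ideal) / np.linalg.norm(i_ideal)
print(f"relative MVM error from conductance noise: {rel_err:.3%}")
```

The peripheral circuitry the abstract refers to (digital-to-analog converters for the row voltages and analog-to-digital converters for the column currents) sits outside this idealized sketch, which is why its overhead dominates many real designs.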