R&D: Read Reference Voltage Adaptation for NAND Flash Memories with Neural Networks Based on Sparse Histograms
A machine learning approach with neural networks to estimate read reference voltages, using sparse histogram data for the threshold voltage distributions
This is a Press Release edited by StorageNewsletter.com on October 26, 2023 at 2:00 pm.
IEEE Access has published an article written by Daniel Nicolas Bailon, Institute for System Dynamics (ISD), HTWG Konstanz, University of Applied Sciences, Konstanz, Germany; Sergo Shavgulidze, Faculty of Informatics and Control Systems, Georgian Technical University, Tbilisi, Georgia; and Jürgen Freudenberger, Agentur für Innovation in der Cybersicherheit GmbH (Cyberagentur), Halle (Saale), Germany.
Abstract: “Non-volatile NAND flash memories store information as an electrical charge. Different read reference voltages are applied to read the data. However, the threshold voltage distributions vary due to aging effects like program/erase cycling and data retention time. It is necessary to adapt the read reference voltages for different life-cycle conditions to minimize the error probability during readout. In the past, methods based on pilot data or high-resolution threshold voltage histograms were proposed to estimate the changes in voltage distributions. In this work, we propose a machine learning approach with neural networks to estimate the read reference voltages. The proposed method utilizes sparse histogram data for the threshold voltage distributions. For reading the information from triple-level cell (TLC) memories, several read reference voltages are applied in sequence. We consider two histogram resolutions. The simplest histogram consists of the zero-and-one ratios for the hard decision read operation, whereas a higher resolution is obtained by considering the quantization levels for soft-input decoding. This approach does not require pilot data for the voltage adaptation. Furthermore, only a few measurements of extreme points of the threshold voltage distributions are required as training data. Measurements under different conditions verify the proposed approach. The resulting neural networks perform well under other life-cycle conditions.”
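To make the idea concrete, the sketch below shows one plausible shape such an estimator could take: a small neural network that maps sparse histogram features, e.g. the ones-ratio observed at each of the seven TLC read references in the hard-decision case, to estimated read reference voltage offsets. This is a minimal illustration, not the authors' implementation; the class name RefVoltageEstimator, the layer sizes, the feature layout, and the synthetic training data are all assumptions made for demonstration.

```python
# Minimal sketch (not the authors' implementation) of a neural network that
# estimates read reference voltage offsets from sparse histogram features.
# Feature layout, layer sizes, and training data are illustrative assumptions.
import torch
import torch.nn as nn

N_REFS = 7            # TLC: 8 threshold states separated by 7 read references
N_FEATURES = N_REFS   # hard-decision case: one ones-ratio per reference read

class RefVoltageEstimator(nn.Module):
    """Maps measured ones-ratios to per-reference voltage offsets."""

    def __init__(self, n_features: int = N_FEATURES, n_refs: int = N_REFS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32),
            nn.ReLU(),
            nn.Linear(32, 32),
            nn.ReLU(),
            nn.Linear(32, n_refs),  # one voltage offset per read reference
        )

    def forward(self, ones_ratios: torch.Tensor) -> torch.Tensor:
        return self.net(ones_ratios)

# Toy training loop on synthetic data standing in for characterization
# measurements at extreme life-cycle points (P/E cycling, retention time).
model = RefVoltageEstimator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

features = torch.rand(256, N_FEATURES)    # measured ones-ratios (synthetic)
targets = torch.randn(256, N_REFS) * 0.1  # reference offsets (synthetic)

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(features), targets)
    loss.backward()
    optimizer.step()
```

In the soft-input variant described in the abstract, the input vector would instead hold counts for the quantization levels of the soft-decision read, giving the network a higher-resolution (but still sparse) view of the threshold voltage distributions; the output side is unchanged.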