
R&D: CMOS-Integrated Compute-in-Memory Macro Based on Resistive RAM for AI Edge Devices

Achieving latencies between 9.2 and 18.3ns, and energy efficiencies between 146.21 and 36.61 tera-ops/s/W, for binary and multibit input–weight–output configurations, respectively
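For context, the reported efficiency figures can be converted into energy per operation: 1 tera-op/s/W is 10^12 operations per joule. A minimal sketch of that arithmetic (the function name is illustrative, not from the paper):

```python
# Hedged arithmetic: convert energy efficiency in tera-operations
# per second per watt (TOPS/W) into joules per operation.
# 1 TOPS/W = 1e12 ops per joule, so energy/op = 1 / (eff * 1e12) J.

def joules_per_op(tops_per_watt):
    """Energy per operation in joules for a given TOPS/W figure."""
    return 1.0 / (tops_per_watt * 1e12)

# The macro's two reported operating points:
for eff in (146.21, 36.61):  # binary vs. multibit configurations
    fj = joules_per_op(eff) * 1e15  # femtojoules per operation
    print(f"{eff} TOPS/W -> {fj:.2f} fJ/op")
```

So the binary configuration works out to roughly 6.8 fJ per operation, and the multibit configuration to roughly 27 fJ per operation.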

Nature Electronics has published an article written by Cheng-Xin Xue, Yen-Cheng Chiu, and Meng-Fan Chang, National Tsing Hua University (NTHU), Hsinchu City, Taiwan, Republic of China.

Abstract: “The development of small, energy-efficient artificial intelligence edge devices is limited in conventional computing architectures by the need to transfer data between the processor and memory. Non-volatile compute-in-memory (nvCIM) architectures have the potential to overcome such issues, but the development of high-bit-precision configurations required for dot-product operations remains challenging. In particular, input–output parallelism and cell-area limitations, as well as signal margin degradation, computing latency in multibit analogue readout operations and manufacturing challenges, still need to be addressed. Here we report a 2Mb nvCIM macro (which combines memory cells and related peripheral circuitry) that is based on single-level cell resistive random-access memory devices and is fabricated in a 22nm complementary metal–oxide–semiconductor foundry process. Compared with previous nvCIM schemes, our macro can perform multibit dot-product operations with increased input–output parallelism, reduced cell-array area, improved accuracy, and reduced computing latency and energy consumption. The macro can, in particular, achieve latencies between 9.2 and 18.3ns, and energy efficiencies between 146.21 and 36.61 tera-operations per second per watt, for binary and multibit input–weight–output configurations, respectively.”
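The multibit dot products described in the abstract are commonly built from binary operations: inputs are applied bit-serially, weights are stored as bit-slices across cells, and the 1-bit partial sums are recombined with shift-and-add. A minimal sketch of that standard decomposition, assuming unsigned operands (the function and its parameters are illustrative, not the authors' circuit):

```python
# Hedged sketch: decompose a multibit dot product into binary
# partial sums, mimicking how a CIM array accumulates 1-bit x 1-bit
# products before shift-and-add recombination. Illustrative only.

def multibit_dot(inputs, weights, in_bits=4, w_bits=4):
    """Compute dot(inputs, weights) from binary partial sums."""
    total = 0
    for i in range(in_bits):       # bit-serial input cycles
        for j in range(w_bits):    # weight bit-slices (columns)
            # 1-bit x 1-bit products, accumulated along the array
            partial = sum(((x >> i) & 1) * ((w >> j) & 1)
                          for x, w in zip(inputs, weights))
            total += partial << (i + j)  # shift-and-add recombination
    return total

xs, ws = [3, 7, 2, 5], [1, 4, 6, 2]
assert multibit_dot(xs, ws) == sum(x * w for x, w in zip(xs, ws))
```

Each inner `partial` is what one analogue readout would deliver; the trade-off the paper addresses is that more bits per input, weight, or output multiply the number (and precision requirements) of these readouts, which is where the latency and energy differences between the binary and multibit configurations come from.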
