
French Team Led by CEA-Leti Develops Hybrid Memory Technology

Combines best traits of 2 previously incompatible technologies - ferroelectric capacitors and memristors - into single CMOS-compatible memory stack

Breaking through a technological roadblock that has long limited efficient edge-AI learning, a team of French scientists has developed the first hybrid memory technology to support adaptive local training and inference of artificial neural networks. In a paper titled A Ferroelectric-Memristor Memory for Both Training and Inference, published in Nature Electronics (see below), the team presents a new hybrid memory system that combines the best traits of 2 previously incompatible technologies – ferroelectric capacitors and memristors – into a single, CMOS-compatible memory stack. This novel architecture delivers a long-sought solution to one of edge AI’s most vexing challenges: how to perform both learning and inference on a chip without burning through energy budgets or exceeding hardware constraints.

Led by CEA-Leti, and including scientists from several French microelectronic research centers, the project demonstrated that it is possible to perform on-chip training with competitive accuracy, sidestepping the need for off-chip updates and complex external systems. The team’s innovation enables edge systems and devices like autonomous vehicles, medical sensors, and industrial monitors to learn from real-world data as it arrives – adapting models on the fly while keeping energy consumption and hardware wear under tight control.

Single memory, which functions as both memristor and FeCAP, for neural network inference and training
(Photo credit: © E.Vianello-M.Plousey Dupouy/CEA​)


Challenge: No-Win Tradeoff
Edge AI demands both inference (reading data to make decisions) and learning (updating models based on new data). But until now, memory technologies could only do one well:

  • Memristors (resistive random-access memories) excel at inference because they can store analog weights, are energy-efficient during read operations, and support in-memory computing.

  • Ferroelectric capacitors (FeCAPs) allow rapid, low-energy updates, but their read operations are destructive – making them unsuitable for inference.

As a result, hardware designers faced the choice of favoring inference and outsourcing training to the cloud, or attempting on-chip training at high cost and with limited endurance.

Training at Edge
The team’s guiding idea was that while the analog precision of memristors suffices for inference, it falls short for learning, which demands small, progressive weight adjustments.

“Inspired by quantized neural networks, we adopted a hybrid approach: Forward and backward passes use low-precision weights stored in analog form in memristors, while updates are achieved using higher-precision FeCAPs. Memristors are periodically reprogrammed based on the most-significant bits stored in FeCAPs, ensuring efficient and accurate learning,” said Michele Martemucci, lead author of the paper.
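In software terms, this mirrors the hidden-weight scheme used to train quantized neural networks. The snippet below is a minimal NumPy sketch of that idea, not the team’s on-chip implementation: g_memristor stands in for the low-precision analog conductances used in the forward and backward passes, w_hidden for the higher-precision signed integers held in the FeCAPs, and transfer() for the periodic reprogramming of memristors from the most-significant bits of the hidden weights. All names, sizes, and constants are illustrative assumptions.

```python
import numpy as np

# Illustrative constants (assumptions, not values from the paper).
HIDDEN_BITS = 8          # precision of the hidden weight held in FeCAPs
MSB_LEVELS = 4           # number of conductance levels a memristor expresses
TRANSFER_PERIOD = 32     # reprogram memristors every N samples

rng = np.random.default_rng(0)
n_in, n_out = 16, 4
w_hidden = rng.integers(-64, 64, size=(n_in, n_out))   # signed-integer "FeCAP" weights

def transfer(w_hidden):
    """Map the most-significant bits of the hidden weights to discrete analog levels."""
    max_mag = 2 ** (HIDDEN_BITS - 1)
    levels = np.round(w_hidden / max_mag * (MSB_LEVELS - 1))
    return levels / (MSB_LEVELS - 1)                    # low-precision weight in [-1, 1]

g_memristor = transfer(w_hidden)

for step in range(1, 257):
    x = rng.standard_normal(n_in)                 # one input sample (no batching)
    y_target = rng.standard_normal(n_out)

    y = x @ g_memristor                           # forward pass with low-precision weights
    err = y - y_target                            # backward pass / error signal
    grad = np.outer(x, err)

    # Accumulate the update into the high-precision hidden weights (the FeCAP role).
    w_hidden = np.clip(w_hidden - np.round(8 * grad), -127, 127).astype(int)

    # Periodically refresh the memristor conductances from the hidden weights' MSBs.
    if step % TRANSFER_PERIOD == 0:
        g_memristor = transfer(w_hidden)
```

Keeping gradient accumulation in the high-precision hidden weights is what lets many small updates add up, even though the analog weights actually used for computation resolve only a handful of levels.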

Breakthrough: One Memory, Two Personalities
The team engineered a unified memory stack made of silicon-doped hafnium oxide with a titanium scavenging layer. This dual-mode device can operate as a FeCAP or a memristor, depending on how it’s electrically ‘formed.’

  • The same memory unit can be used for precise digital weight storage (training) and analog weight expression (inference), depending on its state.
  • A digital-to-analog transfer method, requiring no formal DAC, converts hidden weights in FeCAPs into conductance levels in memristors.
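How such a transfer works without a conventional DAC is not detailed in the announcement. One common picture, offered here purely as an illustrative assumption rather than a description of the team’s circuit, is that the most-significant bits of the hidden integer set the number of identical programming pulses applied to the memristor, each pulse nudging the conductance by roughly one level:

```python
def transfer_without_dac(hidden_msb: int, g_min: float = 1e-6, g_step: float = 2e-6) -> float:
    """Illustrative DAC-free transfer: the MSB code selects a pulse count, and each
    identical programming pulse raises the device conductance by roughly one step.
    Values and behavior are assumptions for illustration, not measured device data.
    Sign handling (e.g., with a differential pair of devices) is omitted for brevity."""
    g = g_min                       # device starts from its low-conductance (reset) state
    for _ in range(hidden_msb):     # apply one pulse per unit of the MSB code
        g += g_step                 # each pulse increments conductance by ~one level
    return g

# Example: a 2-bit MSB code (0..3) maps to four conductance levels.
for code in range(4):
    print(code, transfer_without_dac(code))
```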

This hardware was fabricated and tested on an 18,432-device array using standard 130nm CMOS technology, integrating both memory types and their periphery circuits on a single chip.

In addition to CEA-Leti, the research team included scientists from Université Grenoble Alpes, CEA-List, the French National Centre for Scientific Research (CNRS), the University of Bordeaux, Bordeaux INP, IMS France, Université Paris-Saclay, and the Center for Nanosciences and Nanotechnologies (C2N).

The authors acknowledge funding support from the European Research Council (consolidator grant DIVERSE: 101043854) and through a France 2030 government grant (ANR-22-PEEL-0010).

Article: A ferroelectric–memristor memory for both training and inference (Open access)

Nature Electronics has published an article written by Michele Martemucci, François Rummens, Yannick Malot, Tifenn Hirtzlin, Olivier Guille, Simon Martin, and Catherine Carabasse (Université Grenoble Alpes, CEA-List, Grenoble, France); Adrien F. Vincent and Sylvain Saïghi (Université de Bordeaux, CNRS, Bordeaux INP, IMS, Talence, France); Laurent Grenouillet (Université Grenoble Alpes, CEA-Leti, Grenoble, France); Damien Querlioz (Université Paris-Saclay, CNRS, Centre de Nanosciences et de Nanotechnologies, Palaiseau, France); and Elisa Vianello (Université Grenoble Alpes, CEA-Leti, Grenoble, France).

Abstract: Developing artificial intelligence systems that are capable of learning at the edge of a network requires both energy-efficient inference and learning. However, current memory technology cannot provide the necessary combination of high endurance, low programming energy and non-destructive read processes. Here we report a unified memory stack that functions as a memristor as well as a ferroelectric capacitor. Memristors are ideal for inference but have limited endurance and high programming energy; ferroelectric capacitors are ideal for learning, but their destructive read process makes them unsuitable for inference. Our memory stack uses a silicon-doped hafnium oxide and titanium scavenging layer that are integrated into the back end of line of a complementary metal–oxide–semiconductor process. With this approach, we fabricate an 18,432-device hybrid array (consisting of 16,384 ferroelectric capacitors and 2,048 memristors) with on-chip complementary metal–oxide–semiconductor periphery circuits. Each weight is associated with an analogue value stored as conductance levels in the memristors and a high-precision hidden value stored as a signed integer in the ferroelectric capacitors. Weight transfers between the different memory technologies occur without a formal digital-to-analogue converter. We use the array to validate an on-chip learning solution that, without batching, performs competitively with floating-point-precision software models across several benchmarks.
