Hybrid system resolves edge AI’s on-chip memory conundrum

A hybrid memory system combines the best traits of ferroelectric capacitors and memristors in a single memory stack.

Edge AI enables autonomous vehicles, medical sensors, and industrial monitors to learn from real-world data as it arrives. It can now adapt its learning models on the fly while keeping energy consumption and hardware wear under tight control.

It's made possible by a hybrid memory system that combines the best traits of two previously incompatible technologies, ferroelectric capacitors and memristors, into a single, CMOS-compatible memory stack. The architecture was developed by scientists at CEA-Leti in collaboration with other French microelectronics research centers.

Their work, published in Nature Electronics as “A Ferroelectric-Memristor Memory for Both Training and Inference,” explains how on-chip training can be performed with competitive accuracy, sidestepping the need for off-chip updates and complex external systems.


The on-chip memory conundrum

Edge AI requires both inference, where incoming data is read to make decisions, and learning, a.k.a. training, where the model is updated based on new data, and both must run on the chip without burning through energy budgets or exceeding hardware constraints. For on-chip memory, however, the two tasks favor different devices: memristors are considered suitable for inference, while ferroelectric capacitors (FeCAPs) are better suited to learning.

Resistive random-access memories, or memristors, excel at inference because they can store analog weights; they are also energy-efficient during read operations and lend themselves to in-memory computing. However, while the analog precision of memristors suffices for inference, it falls short for learning, which demands small, progressive weight adjustments.
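To see why, consider a weight that can only occupy a handful of conductance levels: a typical gradient step is far smaller than the gap between adjacent levels, so it rounds away unless it is first accumulated at higher precision. The Python snippet below is a purely illustrative sketch; the level spacing and step size are assumed values, not device figures from the paper.

```python
# Illustrative only: coarse analog weights lose small training updates,
# while a higher-precision accumulator preserves them. The level spacing
# and gradient step are assumptions, not figures from the paper.

def quantize(w, step=0.125):
    """Snap a weight to the nearest representable conductance level."""
    return round(w / step) * step

w_analog = 0.0   # weight stored only as a coarse conductance level
w_hidden = 0.0   # higher-precision "hidden" copy of the same weight

for _ in range(1000):
    grad_step = 1e-3                           # a typical small update
    w_analog = quantize(w_analog + grad_step)  # rounds back to 0.0 every time
    w_hidden += grad_step                      # small steps accumulate

print(w_analog)            # 0.0 -- the updates were lost
print(quantize(w_hidden))  # 1.0 -- the accumulated weight survives quantization
```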

Ferroelectric capacitors, on the other hand, allow rapid, low-energy updates, but their read operations are destructive, making them unsuitable for inference. Consequently, design engineers have had to either favor inference and outsource training to the cloud, or carry out training on-chip at high energy cost and with limited endurance.

This led French scientists to adopt a hybrid approach in which forward and backward passes use low-precision weights stored in analog form in memristors, while updates are achieved using higher-precision FeCAPs. “Memristors are periodically reprogrammed based on the most-significant bits stored in FeCAPs, ensuring efficient and accurate learning,” said Michele Martemucci, lead author of the paper on this new hybrid memory system.
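A minimal sketch of that division of labor, assuming a simple linear model and illustrative choices for the hidden-weight precision, number of conductance levels, transfer period, and learning rate (none of these figures come from the paper), looks as follows: gradients computed with the low-precision analog weights are accumulated into higher-precision fixed-point hidden weights, and the analog weights are periodically refreshed from the hidden weights' most-significant bits.

```python
# Hedged sketch of the hybrid update scheme: forward/backward passes use the
# low-precision analog weights (memristor role), updates accumulate in
# higher-precision fixed-point hidden weights (FeCAP role), and the analog
# weights are periodically reprogrammed from the hidden weights' MSBs.
# HIDDEN_BITS, ANALOG_LEVELS, TRANSFER_PERIOD, and the learning rate are
# illustrative assumptions.
import numpy as np

HIDDEN_BITS = 16         # fixed-point precision of the hidden weights
ANALOG_LEVELS = 32       # conductance levels available per memristor
TRANSFER_PERIOD = 50     # training steps between MSB -> memristor transfers
SCALE = 2 ** (HIDDEN_BITS - 1)

rng = np.random.default_rng(0)
n = 8
w_true = rng.normal(size=n)              # target weights to learn
hidden = np.zeros(n, dtype=np.int64)     # high-precision accumulators (FeCAPs)
analog = np.zeros(n)                     # conductance-coded weights (memristors)

def transfer(hidden_weights):
    """Keep only the most-significant bits and express them as analog weights."""
    lsb = 1 << (HIDDEN_BITS - int(np.log2(ANALOG_LEVELS)))  # weight of lowest kept bit
    return (hidden_weights // lsb) * lsb / SCALE

for step in range(2000):
    x = rng.normal(size=n)
    y = float(w_true @ x)
    y_hat = float(analog @ x)                                  # forward pass: analog weights
    grad = (y_hat - y) * x                                     # backward pass: analog weights
    hidden -= np.round(0.01 * grad * SCALE).astype(np.int64)   # precise update in FeCAPs
    if step % TRANSFER_PERIOD == 0:
        analog = transfer(hidden)                              # periodic MSB-based transfer

print(np.max(np.abs(analog - w_true)))   # residual error on the order of the analog resolution
```

The point of the periodic transfer is that the memristors only ever see coarse, infrequent reprogramming, while every fine-grained update lands in the FeCAPs.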

How the hybrid approach works

The CEA-Leti team developed this hybrid system by engineering a unified memory stack made of silicon-doped hafnium oxide with a titanium scavenging layer. This dual-mode memory device can operate as a FeCAP or a memristor, depending on its electrical formation.

In other words, the same memory unit can be used for precise digital weight storage (training) and analog weight expression (inference), depending on its state. Here, a digital-to-analog transfer method, requiring no formal DAC, converts hidden weights in FeCAPs into conductance levels in memristors.
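One way to picture a DAC-free transfer, purely as a hypothetical illustration rather than the method used in the paper, is pulse counting: the digital MSB code is never converted into an analog voltage; it only determines how many identical programming pulses the memristor receives, and the device's own incremental conductance change serves as the analog output. The function name, baseline conductance, and per-pulse step below are assumptions.

```python
# Hypothetical DAC-free digital-to-analog transfer by pulse counting.
# G_MIN and G_STEP are assumed, idealized device parameters.

G_MIN = 1.0    # baseline conductance after RESET (arbitrary units)
G_STEP = 0.5   # assumed conductance gained per identical SET pulse

def transfer_msb_to_conductance(msb_code: int) -> float:
    """Program a memristor by issuing one SET pulse per unit of the MSB code."""
    conductance = G_MIN                # start from a freshly RESET device
    for _ in range(msb_code):          # the digital code only counts pulses
        conductance += G_STEP          # each pulse nudges the conductance upward
    return conductance

# Example: a 4-bit MSB code of 0b1010 (decimal 10) maps to G_MIN + 10 * G_STEP.
print(transfer_msb_to_conductance(0b1010))  # -> 6.0
```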

The hardware for this hybrid system was fabricated and tested on an 18,432-device array using standard 130-nm CMOS technology, integrating both memory types and their periphery circuits on a single chip.

CEA-Leti has acknowledged funding support for this design undertaking from the European Research Council and the French Government’s France 2030 grant.


