DOE PAGES · U.S. Department of Energy
Office of Scientific and Technical Information

Title: In situ Parallel Training of Analog Neural Network Using Electrochemical Random-Access Memory

Journal Article · Frontiers in Neuroscience (Online)
 [1];  [2];  [2];  [3];  [4];  [3];  [2];  [4];  [3];  [3]
  1. Sandia National Lab. (SNL-CA), Livermore, CA (United States); Univ. of Michigan, Ann Arbor, MI (United States)
  2. Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
  3. Sandia National Lab. (SNL-CA), Livermore, CA (United States)
  4. Stanford Univ., CA (United States)

In-memory computing based on non-volatile resistive memory can significantly improve the energy efficiency of artificial neural networks. However, accurate in situ training has been challenging due to the nonlinear and stochastic switching of resistive memory elements. One promising analog memory is the electrochemical random-access memory (ECRAM), also known as the redox transistor. Its low write currents and linear switching across hundreds of analog states enable accurate, massively parallel updates of a full crossbar array, which in turn yield rapid and energy-efficient training. While simulations predict that ECRAM-based neural networks achieve high training accuracy at significantly higher energy efficiency than digital implementations, these predictions had not been demonstrated experimentally. In this work, we train a 3 × 3 array of ECRAM devices to discriminate several elementary logic gates (AND, OR, NAND). We record the evolution of the network’s synaptic weights during parallel in situ (online) training with outer-product updates. Because the devices switch linearly and reproducibly, our crossbar simulations not only accurately predict the number of epochs to convergence but also quantitatively capture the evolution of the weights in individual devices. This first implementation of in situ parallel training, together with its strong agreement with simulation, is a significant step toward scaling ECRAM into larger crossbar arrays for artificial neural network accelerators, which could improve the energy efficiency of deep neural networks by orders of magnitude.
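The parallel outer-product update described in the abstract can be illustrated in software. Below is a minimal sketch, not the authors' actual experimental setup: it assumes idealized linear weight updates (real ECRAM cells have finite conductance range and write noise), and uses a delta rule in which each weight update is the rank-1 outer product of the error vector and the input vector, the same operation a crossbar applies to all devices simultaneously. The learning rate, target encoding, and activation are illustrative choices.

```python
import numpy as np

# Truth-table inputs with a bias column: [x1, x2, bias].
X = np.array([[0., 0., 1.],
              [0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 1.]])

# Targets encoded as +1/-1; columns correspond to AND, OR, NAND.
T = np.array([[-1., -1.,  1.],
              [-1.,  1.,  1.],
              [-1.,  1.,  1.],
              [ 1.,  1., -1.]])

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((3, 3))  # idealized 3x3 analog weight array

eta = 0.1  # learning rate (illustrative)
for epoch in range(200):
    for x, t in zip(X, T):
        y = np.tanh(W @ x)             # crossbar vector-matrix multiply
        W += eta * np.outer(t - y, x)  # rank-1 outer-product update
```

In hardware, the outer product is applied to all nine devices at once by driving the rows with the input signals and the columns with the error signals; `np.outer` plays that role in this sketch. After training, `np.sign(X @ W.T)` reproduces the target truth table for all three gates.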

Research Organization:
Sandia National Lab. (SNL-CA), Livermore, CA (United States); Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
Sponsoring Organization:
USDOE National Nuclear Security Administration (NNSA); National Science Foundation (NSF)
Grant/Contract Number:
AC04-94AL85000; NA0003525; ECCS-1739795; ECCS-1542152
OSTI ID:
1775154
Alternate ID(s):
OSTI ID: 1778038
Report Number(s):
SAND-2021-3754J; 695112
Journal Information:
Frontiers in Neuroscience (Online), Vol. 15; ISSN 1662-453X
Publisher:
Frontiers Research Foundation
Country of Publication:
United States
Language:
English
