OSTI.GOV · U.S. Department of Energy
Office of Scientific and Technical Information

Title: Analysis and mitigation of parasitic resistance effects for analog in-memory neural network acceleration

Journal Article · Semiconductor Science and Technology

To support the increasing demands for efficient deep neural network processing, accelerators based on analog in-memory computation of matrix multiplication have recently gained significant attention for reducing the energy cost of neural network inference. However, analog processing within memory arrays must contend with parasitic voltage drops across the metal interconnects, which distort the results of the computation and limit the array size. This work analyzes how parasitic resistance affects the end-to-end inference accuracy of state-of-the-art convolutional neural networks, and comprehensively studies how design decisions at the device, circuit, architecture, and algorithm levels affect the system's sensitivity to parasitic resistance effects. A set of guidelines is provided for designing analog accelerator hardware that is intrinsically robust to parasitic resistance, without any explicit compensation or re-training of the network parameters.
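The parasitic voltage-drop mechanism described in the abstract can be sketched numerically. The model below is a hypothetical first-order approximation only (row-wire resistance per cell, columns held at virtual ground, all function and parameter names our own), not the analysis from the paper; it contrasts an ideal crossbar matrix-vector multiply with one degraded by IR drops along the row wires:

```python
import numpy as np

def ideal_mvm(G, v):
    # Ideal analog MVM: column output currents y_j = sum_i G[i, j] * v[i]
    return G.T @ v

def mvm_with_row_parasitics(G, v, r_wire, n_iter=200):
    # First-order sketch: each row wire has resistance r_wire (ohms) per cell,
    # columns are held at virtual ground, so the device at (i, j) sees a
    # degraded node voltage u[i, j] <= v[i]. Solved by fixed-point iteration.
    n_rows, n_cols = G.shape
    u = np.tile(v[:, None], (1, n_cols)).astype(float)  # initial node voltages
    for _ in range(n_iter):
        I = G * u                                   # per-device currents
        # Current through wire segment j of a row = all downstream device currents
        seg = np.cumsum(I[:, ::-1], axis=1)[:, ::-1]
        drop = r_wire * np.cumsum(seg, axis=1)      # cumulative IR drop at column j
        u = v[:, None] - drop
    return (G * u).sum(axis=0)                      # column output currents
```

With positive conductances and inputs, every node voltage is pulled below its ideal value, so each column current undershoots the true dot product; the error grows with wire resistance and array size, which is why parasitic resistance limits array dimensions.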

Research Organization:
Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
Sponsoring Organization:
USDOE National Nuclear Security Administration (NNSA)
Grant/Contract Number:
NA0003525
OSTI ID:
1828786
Report Number(s):
SAND-2021-12073J; 700487
Journal Information:
Semiconductor Science and Technology, Vol. 36, Issue 11; ISSN 0268-1242
Publisher:
IOP Publishing
Country of Publication:
United States
Language:
English

References (25)

Neuro-Inspired Computing With Emerging Nonvolatile Memorys journal February 2018
ISAAC: a convolutional neural network accelerator with in-situ analog arithmetic in crossbars journal June 2016
Analogue signal and image processing with large memristor crossbars journal December 2017
Analog architectures for neural network acceleration based on non-volatile memory journal September 2020
Training and operation of an integrated neuromorphic network based on metal-oxide memristors journal May 2015
Access devices for 3D crosspoint memory journal July 2014 (https://doi.org/10.1116/1.4889999)
MLPerf Inference Benchmark conference May 2020
Mitigate Parasitic Resistance in Resistive Crossbar-based Convolutional Neural Networks journal July 2020
Recent progress in analog memory-based accelerators for deep learning journal June 2018
Memristive Boltzmann machine: A hardware accelerator for combinatorial optimization and deep learning conference March 2016
Memory devices and applications for in-memory computing journal March 2020
ImageNet: A large-scale hierarchical image database conference June 2009 (https://doi.org/10.1109/CVPR.2009.5206848)
Circuit-Level Benchmarking of Access Devices for Resistive Nonvolatile Memory Arrays journal September 2016
A Floating-Gate-Based Field-Programmable Analog Array journal September 2010
CxDNN: Hardware-software Compensation Methods for Deep Neural Networks on Resistive Crossbar Systems journal January 2020
Multiscale Co-Design Analysis of Energy, Latency, Area, and Accuracy of a ReRAM Analog Neural Training Accelerator journal March 2018
Accurate deep neural network inference using computational phase-change memory journal May 2020
Fully hardware-implemented memristor convolutional neural network journal January 2020
Parallel programming of an ionic floating-gate memory array for scalable neuromorphic computing journal April 2019
Efficient Mixed-Signal Neurocomputing Via Successive Integration and Rescaling journal March 2020
Efficient Processing of Deep Neural Networks: A Tutorial and Survey journal December 2017
Exploring the Design Space for Crossbar Arrays Built With Mixed-Ionic-Electronic-Conduction (MIEC) Access Devices journal September 2015
Parasitic Effect Analysis in Memristor-Array-Based Neuromorphic Systems journal January 2018
RxNN: A Framework for Evaluating Deep Neural Networks on Resistive Crossbars journal February 2021
Charge-mode parallel architecture for matrix-vector multiplication conference January 2000

Similar Records

An Accurate, Error-Tolerant, and Energy-Efficient Neural Network Inference Engine Based on SONOS Analog Memory
Journal Article · January 4, 2022 · IEEE Transactions on Circuits and Systems I: Regular Papers

Analog architectures for neural network acceleration based on non-volatile memory
Journal Article · July 9, 2020 · Applied Physics Reviews

In situ Parallel Training of Analog Neural Network Using Electrochemical Random-Access Memory
Journal Article · April 8, 2021 · Frontiers in Neuroscience (Online)
