PANTHER: A Programmable Architecture for Neural Network Training Harnessing Energy-Efficient ReRAM

Journal Article · IEEE Transactions on Computers
The wide adoption of deep neural networks has been accompanied by ever-increasing energy and performance demands, because training them is expensive. Numerous special-purpose architectures have been proposed to accelerate training, both digital and hybrid digital-analog designs based on resistive RAM (ReRAM) crossbars. ReRAM-based accelerators have demonstrated the effectiveness of ReRAM crossbars at performing the matrix-vector multiplication operations that are prevalent in training. However, they still suffer from inefficiency because they rely on serial reads and writes to perform the weight-gradient and update step. A few works have demonstrated the possibility of performing outer products in crossbars, which can realize the weight-gradient and update step without serial reads and writes. However, these works have been limited to low-precision operations that are insufficient for typical training workloads, and they have been confined to a limited set of training algorithms for fully-connected layers only. To address these limitations, we propose a bit-slicing technique for enhancing the precision of ReRAM-based outer products, which differs substantially from bit-slicing schemes designed for matrix-vector multiplication alone. We incorporate this technique into a crossbar architecture with three variants catered to different training algorithms. To evaluate our design on different types of layers in neural networks (fully-connected, convolutional, etc.) and training algorithms, we develop PANTHER, an ISA-programmable training accelerator with compiler support. Our design can also be integrated into other accelerators in the literature to enhance their efficiency. Our evaluation shows that PANTHER achieves up to 8.02×, 54.21×, and 103× energy reductions, as well as 7.16×, 4.02×, and 16× execution-time reductions, compared to digital accelerators, ReRAM-based accelerators, and GPUs, respectively.
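
As a rough intuition for the sliced outer-product update the abstract describes, the NumPy sketch below represents a 16-bit fixed-point weight matrix as eight 2-bit digit slices (one per crossbar) and applies W += round(lr · outer(delta, x)) directly in the sliced domain with explicit carry propagation between slices. The slice width, slice count, carry handling, and all function names here are illustrative assumptions chosen for exposition, not PANTHER's actual bit-slicing scheme.

```python
import numpy as np

# Illustrative parameters (assumptions, not PANTHER's actual configuration):
BITS_PER_SLICE = 2           # bits stored per ReRAM cell
NUM_SLICES = 8               # 8 slices x 2 bits = 16-bit fixed-point weights
BASE = 1 << BITS_PER_SLICE   # each slice holds one base-4 digit
MODULUS = 1 << (BITS_PER_SLICE * NUM_SLICES)

def to_slices(W_int):
    """Split an integer weight matrix into per-crossbar digit slices."""
    return [(W_int >> (BITS_PER_SLICE * s)) & (BASE - 1)
            for s in range(NUM_SLICES)]

def from_slices(slices):
    """Reassemble full-precision integer weights from the digit slices."""
    W = np.zeros_like(slices[0])
    for s, digit in enumerate(slices):
        W = W + (digit << (BITS_PER_SLICE * s))
    return W

def outer_product_update(slices, delta, x, lr):
    """Apply W += round(lr * outer(delta, x)) entirely in the sliced domain.

    Each slice absorbs its digit of the quantized update (mimicking one
    parallel crossbar write per slice); overflow is then handed to the
    next-higher slice, standing in for inter-slice carry handling.
    Two's-complement wraparound keeps negative updates representable.
    """
    update = np.rint(lr * np.outer(delta, x)).astype(np.int64) % MODULUS
    carry = np.zeros_like(update)
    for s in range(NUM_SLICES):
        digit = ((update >> (BITS_PER_SLICE * s)) & (BASE - 1)) + carry
        total = slices[s] + digit
        slices[s] = total % BASE   # value retained in this crossbar
        carry = total // BASE      # overflow carried to the next slice
    return slices                  # carry out of the top slice is dropped (mod 2^16)

# Quick self-check: the sliced update matches a direct integer update.
rng = np.random.default_rng(0)
W = rng.integers(0, MODULUS, size=(4, 3))
delta, x = rng.standard_normal(4), rng.standard_normal(3)
slices = outer_product_update(to_slices(W), delta, x, lr=64.0)
expected = (W + np.rint(64.0 * np.outer(delta, x)).astype(np.int64)) % MODULUS
assert np.array_equal(from_slices(slices), expected)
```

The key point the sketch captures is that the entire rank-1 update lands on all slices in parallel rather than through serial element-by-element reads and writes; only the carry pass couples the slices.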
Research Organization:
Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
Sponsoring Organization:
USDOE National Nuclear Security Administration (NNSA)
Grant/Contract Number:
AC04-94AL85000; NA0003525
OSTI ID:
1778028
Report Number(s):
SAND2021-3551J; 695050
Journal Information:
IEEE Transactions on Computers, Vol. 69, Issue 8; ISSN 0018-9340
Publisher:
IEEE
Country of Publication:
United States
Language:
English

Similar Records

ReSpike: A Co-Design Framework for Evaluating SNNs on ReRAM-Based Neuromorphic Processors
Conference · 2025 · OSTI ID: 3002193

Energy scaling advantages of resistive memory crossbar based computation and its application to sparse coding
Journal Article · 2016 · Frontiers in Neuroscience (Online) · OSTI ID: 1236485

Impact of Linearity and Write Noise of Analog Resistive Memory Devices in a Neural Algorithm Accelerator
Journal Article · 2017 · Conference Proceedings - IEEE International Conference on Rebooting Computing (ICRC) · OSTI ID: 1429781