Materials Science Division Argonne National Laboratory Lemont IL 60439 USA
Department of Electrical Engineering Purdue University West Lafayette IN 47907 USA
Department of Electrical Engineering Indian Institute of Technology Madras Chennai Tamil Nadu 600036 India
Materials Science Division Argonne National Laboratory Lemont IL 60439 USA, Pritzker School of Molecular Engineering University of Chicago Chicago IL 60637 USA
Deep learning has become ubiquitous, touching daily lives across the globe. Today, traditional computer architectures are stressed to their limits in efficiently executing the growing complexity of data and models. Compute‐in‐memory (CIM) can potentially play an important role in developing efficient hardware solutions that reduce the data movement between compute unit and memory known as the von Neumann bottleneck. At its heart is a cross‐bar architecture with nodal non‐volatile‐memory elements that performs an analog multiply‐and‐accumulate operation, enabling the matrix‐vector multiplications repeatedly used in all neural network workloads. The memory materials can significantly influence final system‐level characteristics and chip performance, including speed, power, and classification accuracy. With an over‐arching co‐design viewpoint, this review assesses the use of cross‐bar‐based CIM for neural networks, connecting the material properties and the associated design constraints and demands to application, architecture, and performance. Both digital and analog memory are considered, assessing the status for training and inference, and providing metrics for the collective set of properties non‐volatile memory materials will need to demonstrate for a successful CIM technology.
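As a minimal sketch of the analog multiply‐and‐accumulate the abstract describes: each non‐volatile memory cell in the cross‐bar stores a conductance, and applying input voltages produces output currents that realize a matrix‐vector multiplication via Ohm's and Kirchhoff's laws. The conductance and voltage values below are illustrative assumptions, not taken from the review.

```python
import numpy as np

# Illustrative crossbar: G[i][j] is the conductance (siemens) of the
# non-volatile memory cell at row i, column j; values are assumed.
G = np.array([[1.0e-6, 2.0e-6],
              [3.0e-6, 4.0e-6]])

# Input voltages (volts) applied to the crossbar columns; assumed values.
V = np.array([0.5, 1.0])

# Each row wire sums the per-cell currents G[i][j] * V[j] (Kirchhoff's
# current law), so the row currents are the matrix-vector product G @ V.
I = G @ V  # analog multiply-and-accumulate, one MVM in a single step
```

In a real CIM chip this product is computed in the analog domain in one step, rather than by the explicit loop a digital processor would execute; the NumPy expression only mirrors the mathematics.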
Haensch, Wilfried, et al. "Compute in‐Memory with Non‐Volatile Elements for Neural Networks: A Review from a Co‐Design Perspective." Advanced Materials, vol. 35, no. 37, Mar. 2023. https://doi.org/10.1002/adma.202204944
Haensch, Wilfried, Raghunathan, Anand, Roy, Kaushik, Chakrabarti, Bhaswar, Phatak, Charudatta M., Wang, Cheng, & Guha, Supratik (2023). Compute in‐Memory with Non‐Volatile Elements for Neural Networks: A Review from a Co‐Design Perspective. Advanced Materials, 35(37). https://doi.org/10.1002/adma.202204944
Haensch, Wilfried, Raghunathan, Anand, Roy, Kaushik, et al., "Compute in‐Memory with Non‐Volatile Elements for Neural Networks: A Review from a Co‐Design Perspective," Advanced Materials 35, no. 37 (2023), https://doi.org/10.1002/adma.202204944
@article{osti_1959387,
author = {Haensch, Wilfried and Raghunathan, Anand and Roy, Kaushik and Chakrabarti, Bhaswar and Phatak, Charudatta M. and Wang, Cheng and Guha, Supratik},
title = {Compute in‐Memory with Non‐Volatile Elements for Neural Networks: A Review from a Co‐Design Perspective},
annote = {Deep learning has become ubiquitous, touching daily lives across the globe. Today, traditional computer architectures are stressed to their limits in efficiently executing the growing complexity of data and models. Compute‐in‐memory (CIM) can potentially play an important role in developing efficient hardware solutions that reduce data movement from compute‐unit to memory, known as the von Neumann bottleneck. At its heart is a cross‐bar architecture with nodal non‐volatile‐memory elements that performs an analog multiply‐and‐accumulate operation, enabling the matrix‐vector‐multiplications repeatedly used in all neural network workloads. The memory materials can significantly influence final system‐level characteristics and chip performance, including speed, power, and classification accuracy. With an over‐arching co‐design viewpoint, this review assesses the use of cross‐bar based CIM for neural networks, connecting the material properties and the associated design constraints and demands to application, architecture, and performance. Both digital and analog memory are considered, assessing the status for training and inference, and providing metrics for the collective set of properties non‐volatile memory materials will need to demonstrate for a successful CIM technology.},
doi = {10.1002/adma.202204944},
url = {https://www.osti.gov/biblio/1959387},
journal = {Advanced Materials},
issn = {0935-9648},
number = {37},
volume = {35},
place = {Germany},
publisher = {Wiley Blackwell (John Wiley & Sons)},
year = {2023},
month = mar}