Optimised weight programming for analogue memory-based deep neural networks
Device-level complexity poses a major challenge to the hardware realisation of analogue memory-based deep neural networks. Mackin et al. report a generalised computational framework that translates software-trained weights into analogue hardware weights in order to minimise inference accuracy degradation...
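The snippet describes translating software-trained weights into analogue hardware weights. As a rough illustration only (not the authors' actual framework), the sketch below maps a weight matrix onto differential conductance pairs with an assumed maximum conductance and programming-noise level, then reads back the effective weights.

```python
# Illustrative sketch of weight-to-conductance programming; the scaling scheme,
# G_MAX, and noise model are assumptions, not the paper's method.
import numpy as np

G_MAX = 25e-6  # assumed maximum device conductance (siemens)

def weights_to_conductances(w, g_max=G_MAX):
    """Map a real-valued weight matrix to differential (G+, G-) pairs."""
    scale = g_max / np.max(np.abs(w))        # per-layer scaling (assumption)
    g_pos = np.clip(w, 0, None) * scale      # positive weights on G+
    g_neg = np.clip(-w, 0, None) * scale     # negative weights on G-
    return g_pos, g_neg, scale

def conductances_to_weights(g_pos, g_neg, scale, sigma=1e-6):
    """Read back effective weights with assumed Gaussian programming noise."""
    noise = np.random.normal(0.0, sigma, g_pos.shape)
    return (g_pos - g_neg + noise) / scale

w = np.random.randn(4, 4)
g_pos, g_neg, scale = weights_to_conductances(w)
w_eff = conductances_to_weights(g_pos, g_neg, scale)
print(np.abs(w - w_eff).max())  # magnitude of weight-programming error
```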
Main Authors: | Charles Mackin, Malte J. Rasch, An Chen, Jonathan Timcheck, Robert L. Bruce, Ning Li, Pritish Narayanan, Stefano Ambrogio, Manuel Le Gallo, S. R. Nandakumar, Andrea Fasoli, Jose Luquin, Alexander Friz, Abu Sebastian, Hsinyu Tsai, Geoffrey W. Burr |
Format: | Article |
Language: | English |
Published: | Nature Portfolio, 2022-06-01 |
Series: | Nature Communications |
Online Access: | https://doi.org/10.1038/s41467-022-31405-1 |
Similar Items
- Toward Software-Equivalent Accuracy on Transformer-Based Deep Neural Networks With Analog Memory Devices
  by: Katie Spoon, et al.
  Published: (2021-07-01)
- Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators
  by: Malte J. Rasch, et al.
  Published: (2023-08-01)
- Impact of analog memory device failure on in-memory computing inference accuracy
  by: Ning Li, et al.
  Published: (2023-03-01)
- Using the IBM analog in-memory hardware acceleration kit for neural network training and inference
  by: Manuel Le Gallo, et al.
  Published: (2023-12-01)
- Optimization of Projected Phase Change Memory for Analog In-Memory Computing Inference
  by: Ning Li, et al.
  Published: (2023-06-01)