Using the IBM analog in-memory hardware acceleration kit for neural network training and inference
Analog In-Memory Computing (AIMC) is a promising approach to reduce the latency and energy consumption of Deep Neural Network (DNN) inference and training. However, the noisy and non-linear device characteristics and the non-ideal peripheral circuitry in AIMC chips require adapting DNNs to be deploy...
| Main Authors: | Manuel Le Gallo, Corey Lammie, Julian Büchel, Fabio Carta, Omobayode Fagbohungbe, Charles Mackin, Hsinyu Tsai, Vijay Narayanan, Abu Sebastian, Kaoutar El Maghraoui, Malte J. Rasch |
| --- | --- |
| Format: | Article |
| Language: | English |
| Published: | AIP Publishing LLC, 2023-12-01 |
| Series: | APL Machine Learning |
| Online Access: | http://dx.doi.org/10.1063/5.0168089 |
Similar Items

- Impact of analog memory device failure on in-memory computing inference accuracy
  by: Ning Li, et al.
  Published: (2023-03-01)
- Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators
  by: Malte J. Rasch, et al.
  Published: (2023-08-01)
- The Effect of Batch Normalization on Noise Resistant Property of Deep Learning Models
  by: Omobayode Fagbohungbe, et al.
  Published: (2022-01-01)
- Optimization of Projected Phase Change Memory for Analog In-Memory Computing Inference
  by: Ning Li, et al.
  Published: (2023-06-01)
- Impact of Phase-Change Memory Flicker Noise and Weight Drift on Analog Hardware Inference for Large-Scale Deep Learning Networks
  by: Jin-Ping Han, et al.
  Published: (2022-05-01)