Using the IBM analog in-memory hardware acceleration kit for neural network training and inference

Analog In-Memory Computing (AIMC) is a promising approach to reduce the latency and energy consumption of Deep Neural Network (DNN) inference and training. However, the noisy and non-linear device characteristics and the non-ideal peripheral circuitry in AIMC chips require adapting DNNs to be deployed on such hardware to achieve equivalent accuracy to digital computing. In this Tutorial, we provide a deep dive into how such adaptations can be achieved and evaluated using the recently released IBM Analog Hardware Acceleration Kit (AIHWKit), freely available at https://github.com/IBM/aihwkit. AIHWKit is a Python library that simulates inference and training of DNNs using AIMC. We present an in-depth description of the AIHWKit design, functionality, and best practices to properly perform inference and training. We also present an overview of the Analog AI Cloud Composer, a platform that provides the benefits of using the AIHWKit simulation in a fully managed cloud setting along with physical AIMC hardware access, freely available at https://aihw-composer.draco.res.ibm.com. Finally, we show examples of how users can expand and customize AIHWKit for their own needs. This Tutorial is accompanied by comprehensive Jupyter Notebook code examples that can be run using AIHWKit, which can be downloaded from https://github.com/IBM/aihwkit/tree/master/notebooks/tutorial.
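As the abstract notes, AIHWKit is a Python library (built on PyTorch) that simulates analog in-memory training and inference. The snippet below is a minimal, hedged sketch of what basic usage can look like: an analog training loop on a single analog layer, followed by a hardware-aware inference evaluation of a small digital model mapped onto simulated PCM-based hardware. It is not taken from the article or its notebooks; the layer sizes, the noise setting (g_max=25.0), and the one-hour drift time are illustrative placeholders, and module paths follow the publicly documented aihwkit API, which may differ between releases. The authoritative examples are the Jupyter Notebooks linked in the abstract.

    # Hedged sketch only: basic AIHWKit usage for analog training and
    # hardware-aware inference. Requires torch and aihwkit; all sizes and
    # settings here are illustrative placeholders.
    import torch
    from torch import nn

    from aihwkit.nn import AnalogLinear
    from aihwkit.nn.conversion import convert_to_analog
    from aihwkit.optim import AnalogSGD
    from aihwkit.simulator.configs import SingleRPUConfig, InferenceRPUConfig
    from aihwkit.simulator.configs.devices import ConstantStepDevice
    from aihwkit.inference import PCMLikeNoiseModel

    # Analog training: the layer's weights live on a simulated resistive
    # crossbar, and updates follow the configured device switching model.
    model = AnalogLinear(4, 2, bias=True,
                         rpu_config=SingleRPUConfig(device=ConstantStepDevice()))
    optimizer = AnalogSGD(model.parameters(), lr=0.1)
    optimizer.regroup_param_groups(model)

    x, y = torch.rand(8, 4), torch.rand(8, 2)
    for _ in range(10):
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        optimizer.step()

    # Hardware-aware inference: convert a digital PyTorch model to its analog
    # counterpart, then simulate programming noise and conductance drift.
    digital = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
    rpu_config = InferenceRPUConfig()
    rpu_config.noise_model = PCMLikeNoiseModel(g_max=25.0)

    analog = convert_to_analog(digital, rpu_config)
    analog.eval()
    analog.program_analog_weights()      # apply programming (write) noise
    analog.drift_analog_weights(3600.0)  # weight state after 3600 s of drift
    with torch.no_grad():
        predictions = analog(torch.rand(8, 4))

The same pattern scales to full networks: convert_to_analog swaps supported digital layers for analog equivalents, and the rpu_config object is where the device, peripheral-circuit, and noise non-idealities discussed in the article are configured.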

Bibliographic Details
Main Authors: Manuel Le Gallo, Corey Lammie, Julian Büchel, Fabio Carta, Omobayode Fagbohungbe, Charles Mackin, Hsinyu Tsai, Vijay Narayanan, Abu Sebastian, Kaoutar El Maghraoui, Malte J. Rasch
Format: Article
Language: English
Published: AIP Publishing LLC, 2023-12-01
Series: APL Machine Learning
ISSN: 2770-9019
Online Access: http://dx.doi.org/10.1063/5.0168089
Author Affiliations:
Manuel Le Gallo, Corey Lammie, Julian Büchel, Abu Sebastian: IBM Research Europe, 8803 Rüschlikon, Switzerland
Fabio Carta, Omobayode Fagbohungbe, Vijay Narayanan, Kaoutar El Maghraoui, Malte J. Rasch: IBM Research - Yorktown Heights, Yorktown Heights, New York 10598, USA
Charles Mackin, Hsinyu Tsai: IBM Research - Almaden, San Jose, California 95120, USA
Citation: APL Machine Learning 1(4), 041102 (2023); DOI: 10.1063/5.0168089