Optimised weight programming for analogue memory-based deep neural networks
Device-level complexity is a major obstacle to the hardware realization of analogue memory-based deep neural networks. Mackin et al. report a generalized computational framework that translates software-trained weights into analogue hardware weights to minimise inference accuracy degradation.
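The core idea behind the paper, encoding software-trained weights in the imprecise conductances of analogue memory devices, can be illustrated with a minimal sketch. The code below is a toy model under stated assumptions, not the framework reported in the paper: it maps a weight matrix onto differential conductance pairs, perturbs them with Gaussian programming noise, and reports the resulting weight error that an optimised programming strategy would seek to minimise. All function names and parameter values are illustrative assumptions.

```python
# A minimal, hypothetical sketch (not the authors' framework): encode each
# software-trained weight as a pair of device conductances (G+, G-), add
# Gaussian programming noise, and measure the resulting weight error.
# All names and parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def program_weights(W, g_max=25.0, sigma=1.0):
    """Map a weight matrix onto noisy differential conductance pairs."""
    scale = g_max / np.max(np.abs(W))         # weight-to-conductance scaling
    g_plus = np.clip(W, 0.0, None) * scale    # positive weights on G+
    g_minus = np.clip(-W, 0.0, None) * scale  # negative weights on G-
    # Programming noise: each device lands near, not exactly at, its target.
    g_plus += rng.normal(0.0, sigma, W.shape)
    g_minus += rng.normal(0.0, sigma, W.shape)
    # Conductances are physically bounded to [0, g_max]; decode back to weights.
    return (np.clip(g_plus, 0.0, g_max) - np.clip(g_minus, 0.0, g_max)) / scale

W = rng.normal(0.0, 0.5, (64, 32))  # stand-in for software-trained weights
W_hw = program_weights(W)
print("mean absolute weight error:", np.mean(np.abs(W_hw - W)))
```

In a real analogue accelerator this programming error propagates through every matrix-vector multiply, which is why mapping choices like the one sketched above directly affect inference accuracy.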
Main Authors: | Charles Mackin, Malte J. Rasch, An Chen, Jonathan Timcheck, Robert L. Bruce, Ning Li, Pritish Narayanan, Stefano Ambrogio, Manuel Le Gallo, S. R. Nandakumar, Andrea Fasoli, Jose Luquin, Alexander Friz, Abu Sebastian, Hsinyu Tsai, Geoffrey W. Burr |
---|---|
Format: | Article |
Language: | English |
Published: | Nature Portfolio, 2022-06-01 |
Series: | Nature Communications |
Online Access: | https://doi.org/10.1038/s41467-022-31405-1 |
collection | DOAJ |
id | doaj.art-6ea0ca917cd64bd384362f7d3c473ddb |
institution | Directory Open Access Journal |
issn | 2041-1723 |
affiliations | Charles Mackin, An Chen, Pritish Narayanan, Stefano Ambrogio, Andrea Fasoli, Jose Luquin, Alexander Friz, Hsinyu Tsai and Geoffrey W. Burr (IBM Research–Almaden); Malte J. Rasch, Robert L. Bruce and Ning Li (IBM Research–Yorktown Heights); Jonathan Timcheck (Stanford University); Manuel Le Gallo, S. R. Nandakumar and Abu Sebastian (IBM Research–Zurich) |