Enabling Training of Neural Networks on Noisy Hardware
Deep neural networks (DNNs) are typically trained using the conventional stochastic gradient descent (SGD) algorithm. However, SGD performs poorly when used to train networks on non-ideal analog hardware composed of resistive device arrays with non-symmetric conductance modulation characteristics...
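The abstract's central claim, that plain SGD degrades when device conductance updates are non-symmetric, can be illustrated with a minimal sketch. The toy model below is an assumption for illustration, not the paper's method: it scales positive and negative weight updates by different gains to mimic asymmetric modulation, and shows the resulting bias on a noisy one-parameter regression.

```python
import numpy as np

# Hypothetical toy model (not the paper's method): plain SGD on a
# one-parameter linear regression, where positive ("potentiation") and
# negative ("depression") weight updates are scaled differently to
# mimic non-symmetric conductance modulation in a resistive device.

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 2.0 * x + 0.5 * rng.normal(size=1000)  # true weight is 2.0

def train(up_scale, down_scale, lr=0.05, steps=20000):
    """SGD with direction-dependent update gains."""
    w = 0.0
    for i in range(steps):
        xi, yi = x[i % len(x)], y[i % len(x)]
        grad = (w * xi - yi) * xi            # dL/dw for squared error
        delta = -lr * grad
        # Asymmetry: up and down moves apply different device gains.
        w += delta * (up_scale if delta > 0 else down_scale)
    return w

print("symmetric device :", train(1.0, 1.0))  # settles near 2.0
print("asymmetric device:", train(1.0, 0.3))  # biased away from 2.0
```

With symmetric gains the weight settles near its target; with asymmetric gains the noisy up and down moves no longer cancel, so the weight equilibrates at a biased value, which is the failure mode the abstract attributes to SGD on such hardware.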
| Main Author: | Tayfun Gokmen |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Frontiers Media S.A., 2021-09-01 |
| Series: | Frontiers in Artificial Intelligence |
| Online Access: | https://www.frontiersin.org/articles/10.3389/frai.2021.699148/full |
Similar Items

- Impact of Asymmetric Weight Update on Neural Network Training With Tiki-Taka Algorithm
  by: Chaeun Lee, et al. Published: (2022-01-01)
- Design of Power-Efficient Training Accelerator for Convolution Neural Networks
  by: JiUn Hong, et al. Published: (2021-03-01)
- Neural Network Training With Asymmetric Crosspoint Elements
  by: Murat Onen, et al. Published: (2022-05-01)
- Ex Situ Transfer of Bayesian Neural Networks to Resistive Memory‐Based Inference Hardware
  by: Thomas Dalgaty, et al. Published: (2021-08-01)
- Neural network learning using non-ideal resistive memory devices
  by: Youngseok Kim, et al. Published: (2022-10-01)