Framework for In-Memory Computing Based on Memristor and Memcapacitor for On-Chip Training
Memristive crossbar arrays have gained considerable attention from researchers for performing analog in-memory vector-matrix multiplications in machine learning accelerators with low power and constant computational time. This work introduces a comprehensive framework for co-designing the software and h...
Main Authors: | Ankur Singh; Byung-Geun Lee
Format: | Article
Language: | English
Published: | IEEE, 2023-01-01
Series: | IEEE Access
Subjects: | Compute-in-memory; memristor; TIOX; memcapacitor; deep neural network; neuromorphic system
Online Access: | https://ieeexplore.ieee.org/document/10285060/
_version_ | 1827398967188520960 |
author | Ankur Singh; Byung-Geun Lee
author_facet | Ankur Singh; Byung-Geun Lee
author_sort | Ankur Singh |
collection | DOAJ |
description | Memristive crossbar arrays have gained considerable attention from researchers for performing analog in-memory vector-matrix multiplications in machine learning accelerators with low power and constant computational time. This work introduces a comprehensive framework for co-designing the software and hardware for deep neural networks (DNNs) based on memristive and memcapacitive crossbars while considering various non-idealities. The model takes into account device-level factors, including conductance variation, cycle-to-cycle variation, device-to-device variation, peripheral circuits for error/weight-gradient computation, and high tolerance. The overall neural network performance is thoroughly assessed by integrating these elements into a unified DNN training process. The proposed framework is implemented using a hybrid approach with Python and PyTorch. Performance evaluation was conducted using a simplified 8-layer VGG network on a measured $128\times 128$ array with weight resolution. The memristive and memcapacitive crossbar arrays achieved training accuracies of 90.02% and 91.03%, respectively, on the CIFAR-10 dataset. Additionally, detailed hardware estimation for both mem-element devices is provided, enabling meaningful comparisons with prior works.
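The abstract states that the framework is implemented with Python and PyTorch and that device-level non-idealities (conductance quantization, device-to-device and cycle-to-cycle variation) are folded into DNN training on the crossbar. The full implementation is only available via the article link above; the following is a minimal, hypothetical PyTorch sketch of what such non-ideality injection in a crossbar-mapped layer could look like. The class name, the multiplicative-Gaussian variation model, and all parameter values are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyCrossbarLinear(nn.Linear):
    """Hypothetical crossbar-mapped linear layer with injected non-idealities.

    Weights are quantized to a limited number of conductance levels and
    perturbed by multiplicative Gaussian noise: device-to-device variation is
    fixed per cell, cycle-to-cycle variation is resampled on every forward
    pass. This mimics, in spirit, the non-ideality modelling described in the
    abstract; it is not the authors' implementation.
    """

    def __init__(self, in_features, out_features, levels=16,
                 d2d_sigma=0.02, c2c_sigma=0.01, bias=True):
        super().__init__(in_features, out_features, bias=bias)
        self.levels = levels
        self.c2c_sigma = c2c_sigma
        # Device-to-device variation: one fixed multiplicative factor per cell.
        self.register_buffer(
            "d2d", 1.0 + d2d_sigma * torch.randn(out_features, in_features))

    def forward(self, x):
        # Quantize weights to the available conductance levels (symmetric range).
        w_max = self.weight.abs().max().clamp(min=1e-8)
        step = 2 * w_max / (self.levels - 1)
        w_q = torch.round(self.weight / step) * step
        # Straight-through estimator: forward uses quantized weights,
        # backward sees the full-precision weights.
        w_q = self.weight + (w_q - self.weight).detach()
        # Cycle-to-cycle variation: resampled on every read (forward pass).
        c2c = 1.0 + self.c2c_sigma * torch.randn_like(w_q)
        w_eff = w_q * self.d2d * c2c
        return F.linear(x, w_eff, self.bias)

# Example: drop-in replacement for a fully connected layer in a VGG-style net.
layer = NoisyCrossbarLinear(512, 10, levels=16)
out = layer(torch.randn(4, 512))   # shape: [batch, 10]
```

Such a layer can replace the standard linear (and, analogously, convolutional) layers of the 8-layer VGG network so that the variation-aware forward pass is used throughout training, which is the kind of unified hardware-aware training process the abstract describes.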
first_indexed | 2024-03-08T19:37:30Z |
format | Article |
id | doaj.art-97206c9e7d5d460cbaf641348a44d48b |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-03-08T19:37:30Z |
publishDate | 2023-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-97206c9e7d5d460cbaf641348a44d48b; 2023-12-26T00:06:40Z; eng; IEEE; IEEE Access; 2169-3536; 2023-01-01; vol. 11, pp. 112590-112599; doi:10.1109/ACCESS.2023.3324375; article 10285060; Framework for In-Memory Computing Based on Memristor and Memcapacitor for On-Chip Training; Ankur Singh, Byung-Geun Lee (https://orcid.org/0000-0002-1599-690X); Gwangju Institute of Science and Technology, Gwangju, South Korea; https://ieeexplore.ieee.org/document/10285060/; Compute-in-memory; memristor; TIOX; memcapacitor; deep neural network; neuromorphic system
spellingShingle | Ankur Singh; Byung-Geun Lee; Framework for In-Memory Computing Based on Memristor and Memcapacitor for On-Chip Training; IEEE Access; Compute-in-memory; memristor; TIOX; memcapacitor; deep neural network; neuromorphic system
title | Framework for In-Memory Computing Based on Memristor and Memcapacitor for On-Chip Training |
title_full | Framework for In-Memory Computing Based on Memristor and Memcapacitor for On-Chip Training |
title_fullStr | Framework for In-Memory Computing Based on Memristor and Memcapacitor for On-Chip Training |
title_full_unstemmed | Framework for In-Memory Computing Based on Memristor and Memcapacitor for On-Chip Training |
title_short | Framework for In-Memory Computing Based on Memristor and Memcapacitor for On-Chip Training |
title_sort | framework for in memory computing based on memristor and memcapacitor for on chip training |
topic | Compute-in-memory; memristor; TIOX; memcapacitor; deep neural network; neuromorphic system
url | https://ieeexplore.ieee.org/document/10285060/ |
work_keys_str_mv | AT ankursingh frameworkforinmemorycomputingbasedonmemristorandmemcapacitorforonchiptraining AT byunggeunlee frameworkforinmemorycomputingbasedonmemristorandmemcapacitorforonchiptraining |