16-Bit Fixed-Point Number Multiplication With CNT Transistor Dot-Product Engine
Resistive crossbar arrays can carry out energy-efficient vector-matrix multiplication, which is a crucial operation in most machine learning applications. However, practical computing tasks that require high precision remain challenging to implement in such arrays because of intrinsic device variability. Herein, we experimentally demonstrate a precision-extension technique whereby high precision can be attained through the combined operation of multiple devices, each of which stores a portion of the required bit width. Additionally, the designed analog-to-digital converters are used to remove the unpredictable effects of noise sources. An 8 × 15 carbon nanotube transistor array can perform multiplication operations, where the operands have up to 16 valid bits, without any error, making in-memory computing approaches attractive for high-throughput, energy-efficient machine learning accelerators.
Main Authors: | Sungho Kim, Yongwoo Lee, Hee-Dong Kim, Sung-Jin Choi |
Format: | Article |
Language: | English |
Published: | IEEE, 2020-01-01 |
Series: | IEEE Access |
Subjects: | Crossbar array, dot product, matrix multiplication, precision extension |
Online Access: | https://ieeexplore.ieee.org/document/9142231/ |
_version_ | 1818853535706513408 |
author | Sungho Kim Yongwoo Lee Hee-Dong Kim Sung-Jin Choi |
author_facet | Sungho Kim Yongwoo Lee Hee-Dong Kim Sung-Jin Choi |
author_sort | Sungho Kim |
collection | DOAJ |
description | Resistive crossbar arrays can carry out energy-efficient vector-matrix multiplication, which is a crucial operation in most machine learning applications. However, practical computing tasks that require high precision remain challenging to implement in such arrays because of intrinsic device variability. Herein, we experimentally demonstrate a precision-extension technique whereby high precision can be attained through the combined operation of multiple devices, each of which stores a portion of the required bit width. Additionally, the designed analog-to-digital converters are used to remove the unpredictable effects of noise sources. An 8 × 15 carbon nanotube transistor array can perform multiplication operations, where the operands have up to 16 valid bits, without any error, making in-memory computing approaches attractive for high-throughput, energy-efficient machine learning accelerators. |
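The precision-extension scheme summarized in the abstract amounts to bit-slicing: each low-precision device stores only a small slice of an operand, the array produces per-slice partial products, and the full 16-bit result is reconstructed digitally by shift-and-add. The Python sketch below illustrates only that arithmetic; the 2-bit slice width, the `device_multiply` stand-in, and all function names are assumptions made for illustration, not the authors' hardware implementation.

```python
# Minimal sketch of the bit-slicing ("precision-extension") idea: a high-precision
# fixed-point multiplication is decomposed so that each low-precision device only
# stores a small slice of one operand. The slice width and the software stand-in
# for the analog device/ADC path are illustrative assumptions.

BITS_PER_DEVICE = 2          # assumed number of bits reliably stored per device
OPERAND_BITS = 16            # valid bits per operand, as in the 16-bit demonstration


def slice_operand(x: int, n_bits: int = OPERAND_BITS, w: int = BITS_PER_DEVICE):
    """Split an unsigned fixed-point integer into w-bit slices, LSB slice first."""
    mask = (1 << w) - 1
    return [(x >> shift) & mask for shift in range(0, n_bits, w)]


def device_multiply(weight_slice: int, input_slice: int) -> int:
    """Stand-in for one low-precision analog product that an ADC would digitize;
    here it is emulated exactly in software."""
    return weight_slice * input_slice


def precision_extended_multiply(a: int, b: int) -> int:
    """Rebuild the full-precision product by shift-and-add of per-slice partial
    products, mirroring the combined operation of multiple devices."""
    total = 0
    for i, a_slice in enumerate(slice_operand(a)):
        for j, b_slice in enumerate(slice_operand(b)):
            partial = device_multiply(a_slice, b_slice)
            total += partial << (BITS_PER_DEVICE * (i + j))
    return total


if __name__ == "__main__":
    a, b = 0xBEEF, 0x1234
    assert precision_extended_multiply(a, b) == a * b   # exact, no rounding error
    print(hex(precision_extended_multiply(a, b)))
```

Because the partial products are re-assembled digitally after digitization, device non-idealities affect only the low-precision slice products, which is why the combined result can remain exact for operands of up to 16 valid bits.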
first_indexed | 2024-12-19T07:38:22Z |
format | Article |
id | doaj.art-6e2deaab75c54125b2736b0da289af33 |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-12-19T07:38:22Z |
publishDate | 2020-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-6e2deaab75c54125b2736b0da289af33 | 2022-12-21T20:30:32Z | eng | IEEE | IEEE Access | 2169-3536 | 2020-01-01 | vol. 8, pp. 133597-133604 | 10.1109/ACCESS.2020.3009637 | 9142231 | 16-Bit Fixed-Point Number Multiplication With CNT Transistor Dot-Product Engine | Sungho Kim (https://orcid.org/0000-0002-7004-3482; Department of Electrical Engineering, Sejong University, Seoul, South Korea), Yongwoo Lee (https://orcid.org/0000-0003-3224-1960; School of Electrical Engineering, Kookmin University, Seoul, South Korea), Hee-Dong Kim (Department of Electrical Engineering, Sejong University, Seoul, South Korea), Sung-Jin Choi (https://orcid.org/0000-0003-1301-2847; School of Electrical Engineering, Kookmin University, Seoul, South Korea) | Resistive crossbar arrays can carry out energy-efficient vector-matrix multiplication, which is a crucial operation in most machine learning applications. However, practical computing tasks that require high precision remain challenging to implement in such arrays because of intrinsic device variability. Herein, we experimentally demonstrate a precision-extension technique whereby high precision can be attained through the combined operation of multiple devices, each of which stores a portion of the required bit width. Additionally, the designed analog-to-digital converters are used to remove the unpredictable effects of noise sources. An 8 × 15 carbon nanotube transistor array can perform multiplication operations, where the operands have up to 16 valid bits, without any error, making in-memory computing approaches attractive for high-throughput, energy-efficient machine learning accelerators. | https://ieeexplore.ieee.org/document/9142231/ | Crossbar array, dot product, matrix multiplication, precision extension |
spellingShingle | Sungho Kim Yongwoo Lee Hee-Dong Kim Sung-Jin Choi 16-Bit Fixed-Point Number Multiplication With CNT Transistor Dot-Product Engine IEEE Access Crossbar array dot product matrix multiplication precision extension |
title | 16-Bit Fixed-Point Number Multiplication With CNT Transistor Dot-Product Engine |
title_full | 16-Bit Fixed-Point Number Multiplication With CNT Transistor Dot-Product Engine |
title_fullStr | 16-Bit Fixed-Point Number Multiplication With CNT Transistor Dot-Product Engine |
title_full_unstemmed | 16-Bit Fixed-Point Number Multiplication With CNT Transistor Dot-Product Engine |
title_short | 16-Bit Fixed-Point Number Multiplication With CNT Transistor Dot-Product Engine |
title_sort | 16 bit fixed point number multiplication with cnt transistor dot product engine |
topic | Crossbar array, dot product, matrix multiplication, precision extension |
url | https://ieeexplore.ieee.org/document/9142231/ |
work_keys_str_mv | AT sunghokim 16bitfixedpointnumbermultiplicationwithcnttransistordotproductengine AT yongwoolee 16bitfixedpointnumbermultiplicationwithcnttransistordotproductengine AT heedongkim 16bitfixedpointnumbermultiplicationwithcnttransistordotproductengine AT sungjinchoi 16bitfixedpointnumbermultiplicationwithcnttransistordotproductengine |