iMAD: an in-memory accelerator for AdderNet with efficient 8-bit addition and subtraction operations

Adder Neural Network (AdderNet) is a new type of Convolutional Neural Network (CNN) that replaces the computation-intensive multiplications in convolution layers with lightweight additions and subtractions. As a result, AdderNet preserves high accuracy with adder convolution kernels and achieves...


Bibliographic Details
Main Authors: Zhu, Shien; Li, Shiqing; Liu, Weichen
Other Authors: School of Computer Science and Engineering
Format: Conference Paper
Language: English
Published: 2022
Subjects: In-Memory Computing; Convolutional Neural Network; AdderNet; Engineering::Computer science and engineering::Computer systems organization::Special-purpose and application-based systems; Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
Online Access:https://hdl.handle.net/10356/156404
author Zhu, Shien
Li, Shiqing
Liu, Weichen
author2 School of Computer Science and Engineering
collection NTU
description Adder Neural Network (AdderNet) is a new type of Convolutional Neural Network (CNN) that replaces the computation-intensive multiplications in convolution layers with lightweight additions and subtractions. As a result, AdderNet preserves high accuracy with adder convolution kernels and achieves high speed and power efficiency. In-Memory Computing (IMC) is a promising next-generation paradigm for artificial-intelligence computing that has been widely adopted for accelerating binary and ternary CNNs. As AdderNet has much higher accuracy than binary and ternary CNNs, accelerating AdderNet using IMC can obtain both performance and accuracy benefits. However, existing IMC devices have no dedicated subtraction function, and adding subtraction logic may incur larger area, higher power, and degraded addition performance. In this paper, we propose iMAD, an in-memory accelerator for AdderNet with efficient addition and subtraction operations. First, we propose an efficient in-memory subtraction operator at the circuit level and co-optimize the addition performance to reduce latency and power. Second, we propose an accelerator architecture for AdderNet with high parallelism based on the optimized operators. Third, we propose an IMC-friendly computation pipeline for AdderNet convolution at the algorithm level to further boost performance. Evaluation results show that our accelerator iMAD achieves a 3.25X speedup and 3.55X higher energy efficiency compared with a state-of-the-art in-memory accelerator.
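The adder-convolution idea the description refers to can be illustrated with a minimal sketch. The snippet below is not from the paper or its accelerator; it only contrasts one output activation of an ordinary convolution (multiply-accumulate) with the AdderNet-style formulation, where the output is the negative L1 distance between the input patch and the filter, computed using only subtractions, absolute values, and additions. The function names `conv_mac` and `adder_conv` are illustrative.

```python
import numpy as np

def conv_mac(patch: np.ndarray, weights: np.ndarray) -> int:
    """One output activation of a standard convolution: multiply-accumulate."""
    return int(np.sum(patch * weights))

def adder_conv(patch: np.ndarray, weights: np.ndarray) -> int:
    """One output activation of an adder convolution: the negative L1 distance
    between the input patch and the filter, using only additions/subtractions."""
    return int(-np.sum(np.abs(patch - weights)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy 3x3x16 patch and filter with small integer values, standing in for
    # quantized (e.g., 8-bit) activations and weights.
    patch = rng.integers(-8, 8, size=(3, 3, 16)).astype(np.int32)
    weights = rng.integers(-8, 8, size=(3, 3, 16)).astype(np.int32)
    print("multiply-accumulate:", conv_mac(patch, weights))
    print("adder convolution:  ", adder_conv(patch, weights))
```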
format Conference Paper
id ntu-10356/156404
institution Nanyang Technological University
language English
publishDate 2022
conference 32nd Great Lakes Symposium on VLSI 2022 (GLSVLSI '22), June 2022
affiliation Parallel and Distributed Computing Centre
citation Zhu, S., Li, S. & Liu, W. (2022). iMAD: an in-memory accelerator for AdderNet with efficient 8-bit addition and subtraction operations. 32nd Great Lakes Symposium on VLSI 2022 (GLSVLSI '22), June 2022, 65-70. https://dx.doi.org/10.1145/3526241.3530313
doi 10.1145/3526241.3530313
isbn 978-1-4503-9322-5
pages 65-70
version Submitted/Accepted version
funding Ministry of Education (MOE); Nanyang Technological University. This work is partially supported by the Ministry of Education, Singapore, under its Academic Research Fund Tier 2 (MOE2019-T2-1-071) and Tier 1 (MOE2019-T1-001-072), and partially supported by Nanyang Technological University, Singapore, under its NAP (M4082282) and SUG (M4082087).
research data 10.21979/N9/JNFW9P
rights © 2022 Association for Computing Machinery. All rights reserved. This paper was published in Proceedings of the 32nd Great Lakes Symposium on VLSI 2022 (GLSVLSI '22) and is made available with permission of the Association for Computing Machinery.
title iMAD: an in-memory accelerator for AdderNet with efficient 8-bit addition and subtraction operations
topic Engineering::Computer science and engineering::Computer systems organization::Special-purpose and application-based systems
Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence
In-Memory Computing
Convolutional Neural Network
AdderNet
url https://hdl.handle.net/10356/156404