Conv-RAM: An Energy-efficient SRAM with Embedded Convolution Computation for Low-power CNN based Machine Learning Applications

Convolutional neural networks (CNNs) provide state-of-the-art results in a wide variety of machine learning (ML) applications, ranging from image classification to speech recognition. However, they are very computationally intensive and require huge amounts of storage. Recent work has strived toward reducing the size of CNNs: [1] proposes a binary-weight network (BWN), where the filter weights (w_i's) are ±1 (with a common scaling factor per filter, α). This leads to a significant reduction in the amount of storage required for the w_i's, making it possible to store them entirely on-chip. However, in a conventional all-digital implementation [2, 3], reading the w_i's and the partial sums from the embedded SRAMs requires a lot of data movement per computation, which is energy-hungry. To reduce data movement, and the associated energy, we present an SRAM-embedded convolution architecture (Fig. 31.1.1) that does not require reading the w_i's explicitly from the memory. Prior work on embedded ML classifiers has focused on 1b outputs [4] or a small number of output classes [5], neither of which is sufficient for CNNs. This work uses 7b inputs/outputs, which is sufficient to maintain good accuracy for most of the popular CNNs [1]. The convolution operation is implemented as voltage averaging (Fig. 31.1.1), since the w_i's are binary, while the averaging factor (1/N) implements the weight coefficient α (with a new scaling factor, M, implemented off-chip).
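As a rough sketch of the scheme described above, the snippet below models the binary-weight convolution as averaging followed by off-chip scaling. It assumes, for illustration only, that the off-chip factor M absorbs the product α·N, so that M times the average of the signed inputs equals α times their sum; the function name and numeric values are hypothetical and not taken from the paper.

import numpy as np

def bwn_conv_avg(x, w_sign, M):
    """Model of the voltage-averaging convolution: M * (1/N) * sum(w_i * x_i)."""
    # Analog step (modeled): average the +/- inputs selected by the binary weights.
    avg = np.mean(w_sign * x)
    # Off-chip step: scale the average by M to recover alpha * sum(w_i * x_i).
    return M * avg

# One 3x3 filter window (N = 9) with hypothetical 7b inputs and +/-1 weights.
x = np.array([5, 12, 0, 7, 3, 9, 1, 4, 8], dtype=float)
w = np.array([1, -1, 1, 1, -1, -1, 1, 1, -1], dtype=float)
alpha = 0.37                # per-filter scaling factor (hypothetical)
M = alpha * len(x)          # assumption: M = alpha * N

print(bwn_conv_avg(x, w, M))     # equals alpha * np.sum(w * x)
print(alpha * np.sum(w * x))     # reference value for comparison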

Bibliographic Details
Main Authors: Biswas, Avishek; Chandrakasan, Anantha P
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Format: Article
Published: Institute of Electrical and Electronics Engineers (IEEE) 2019
Online Access: https://hdl.handle.net/1721.1/122467
Citation: Biswas, Avishek and Anantha P. Chandrakasan. "Conv-RAM: An Energy-efficient SRAM with Embedded Convolution Computation for Low-power CNN based Machine Learning Applications." 2018 IEEE International Solid-State Circuits Conference (ISSCC), February 2018, San Francisco, California, USA. Institute of Electrical and Electronics Engineers (IEEE), March 2018. © 2018 IEEE
DOI: http://dx.doi.org/10.1109/ISSCC.2018.8310397
ISBN: 978-1-5090-4940-0
ISSN: 2376-8606
License: Creative Commons Attribution-Noncommercial-Share Alike, http://creativecommons.org/licenses/by-nc-sa/4.0/