Efficient Algorithms, Hardware Architectures and Circuits for Deep Learning Accelerators
Deep learning has permeated many industries due to its state-of-the-art ability to process complex data and uncover intricate patterns. However, it is computationally expensive. Researchers have shown in theory and practice that the progress of deep learning in many applications is heavily reliant on increases in computing power, and thus leads to increasing energy demand. That may impede further advancement in the field. To tackle that challenge, this thesis presents several techniques to improve the energy efficiency of deep learning accelerators while adhering to the accuracy and throughput requirements of the desired application.
Main Author: | Wang, Miaorong |
---|---|
Other Authors: | Chandrakasan, Anantha P. |
Format: | Thesis |
Published: | Massachusetts Institute of Technology, 2023 |
Online Access: | https://hdl.handle.net/1721.1/152734 https://orcid.org/0009-0000-5896-5014 |
author | Wang, Miaorong |
author2 | Chandrakasan, Anantha P. |
author_facet | Chandrakasan, Anantha P. Wang, Miaorong |
author_sort | Wang, Miaorong |
collection | MIT |
description | Deep learning has permeated many industries due to its state-of-the-art ability to
process complex data and uncover intricate patterns. However, it is computationally
expensive. Researchers have shown in theory and practice that the progress of deep
learning in many applications is heavily reliant on increases in computing power, and
thus leads to increasing energy demand. That may impede further advancement in
the field. To tackle that challenge, this thesis presents several techniques to improve
the energy efficiency of deep learning accelerators while adhering to the accuracy and
throughput requirements of the desired application.
First, we develop hybrid dataflows and co-design the memory hierarchy. That
enables designers to trade off the reuse between different data types across different
storage elements provided by the technology for higher energy efficiency. Second, we
propose a weight tuning algorithm and accelerator co-design, which optimizes the
bit representation of weights for energy reduction. Last, we present VideoTime3, an
algorithm and accelerator co-design for efficient real-time video understanding with
temporal redundancy reduction and temporal modeling. Our proposed techniques
enrich accelerator designers’ toolkits, pushing the boundaries of energy efficiency for
sustainable advances in deep learning. |
first_indexed | 2024-09-23T13:55:01Z |
format | Thesis |
id | mit-1721.1/152734 |
institution | Massachusetts Institute of Technology |
last_indexed | 2024-09-23T13:55:01Z |
publishDate | 2023 |
publisher | Massachusetts Institute of Technology |
record_format | dspace |
spelling | mit-1721.1/152734 2023-11-03T03:27:48Z Wang, Miaorong Chandrakasan, Anantha P. Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science Ph.D. 2023-11-02T20:11:59Z 2023-09 2023-09-21T14:26:29.131Z Thesis https://hdl.handle.net/1721.1/152734 https://orcid.org/0009-0000-5896-5014 In Copyright - Educational Use Permitted Copyright retained by author(s) https://rightsstatements.org/page/InC-EDU/1.0/ application/pdf Massachusetts Institute of Technology |
title | Efficient Algorithms, Hardware Architectures and Circuits for Deep Learning Accelerators |
url | https://hdl.handle.net/1721.1/152734 https://orcid.org/0009-0000-5896-5014 |