Efficient Algorithms, Hardware Architectures and Circuits for Deep Learning Accelerators
Main Author:
Other Authors:
Format: Thesis
Published: Massachusetts Institute of Technology, 2023
Online Access: https://hdl.handle.net/1721.1/152734
               https://orcid.org/0009-0000-5896-5014
Summary: Deep learning has permeated many industries due to its state-of-the-art ability to process complex data and uncover intricate patterns. However, it is computationally expensive. Researchers have shown, in theory and in practice, that progress in deep learning across many applications relies heavily on increases in computing power, which in turn drive up energy demand and may impede further advancement in the field. To tackle that challenge, this thesis presents several techniques that improve the energy efficiency of deep learning accelerators while adhering to the accuracy and throughput requirements of the target application.
First, we develop hybrid dataflows and co-design the memory hierarchy. This enables designers to trade off reuse among different data types across the storage elements a given technology provides, for higher energy efficiency.
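To make that trade-off concrete, here is a minimal toy model of memory-access energy under two dataflows. The level names, per-access energies, and access counts are illustrative assumptions, not figures from the thesis.

```python
# Toy per-access energy costs (pJ); values are illustrative only.
ENERGY_PJ = {"reg": 0.1, "buffer": 2.0, "dram": 100.0}

def dataflow_energy(accesses):
    """accesses maps data type -> {memory level: access count};
    returns total access energy in pJ."""
    return sum(count * ENERGY_PJ[level]
               for per_level in accesses.values()
               for level, count in per_level.items())

# Dataflow X keeps weights in cheap registers (high weight reuse) but
# spills partial sums to the buffer; dataflow Y does the opposite.
dataflow_x = {"weights": {"dram": 1e3, "reg": 1e6},
              "psums":   {"buffer": 2e6}}
dataflow_y = {"weights": {"dram": 1e3, "buffer": 1e6},
              "psums":   {"reg": 2e6}}
print(dataflow_energy(dataflow_x))  # 4200000.0 pJ
print(dataflow_energy(dataflow_y))  # 2300000.0 pJ -> better fit here
```

For these (made-up) access counts, reusing partial sums in registers wins; a hybrid dataflow explores exactly this kind of choice per data type and per level.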
Second, we propose a weight tuning algorithm and accelerator co-design that optimizes the bit representation of weights for energy reduction.
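As a rough sketch of what tuning a weight's bit representation could mean, the hypothetical routine below nudges each quantized weight to a nearby value whose magnitude has fewer nonzero bits, since effectual (nonzero) bits typically dominate MAC energy in bit-serial designs. `tune_weight`, its parameters, and the bit budget are assumptions for illustration, not the thesis algorithm.

```python
def popcount(x: int) -> int:
    """Number of set bits in |x|."""
    return bin(abs(x)).count("1")

def tune_weight(q: int, max_bits: int = 3, radius: int = 8) -> int:
    """Round the quantized weight q to the nearest value within `radius`
    whose magnitude has at most `max_bits` nonzero bits; if no such
    candidate exists, keep q unchanged."""
    candidates = [c for c in range(q - radius, q + radius + 1)
                  if popcount(c) <= max_bits]
    return min(candidates, key=lambda c: abs(c - q)) if candidates else q

# Example on a few 8-bit quantized weights:
weights = [87, -23, 64, 119]
print([tune_weight(w) for w in weights])  # [88, -24, 64, 112]
```

A real co-design would also constrain the accuracy loss (e.g., by retraining or by tuning only low-sensitivity weights), which this sketch omits.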
Last, we present VideoTime3, an algorithm and accelerator co-design for efficient real-time video understanding with temporal redundancy reduction and temporal modeling.
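One way to picture temporal redundancy reduction is to skip the expensive backbone on frames that barely changed and reuse cached features. The threshold test and caching below are a hedged sketch under that assumption, not VideoTime3's actual mechanism.

```python
import numpy as np

def process_video(frames, backbone, threshold=0.02):
    """Run `backbone` only on frames that differ enough from the last
    processed frame; reuse the cached features otherwise."""
    cached_feat, ref_frame = None, None
    outputs = []
    for frame in frames:
        if ref_frame is None or np.mean(np.abs(frame - ref_frame)) > threshold:
            cached_feat = backbone(frame)  # expensive path: recompute features
            ref_frame = frame
        outputs.append(cached_feat)        # cheap path: reuse prior features
    return outputs

# Toy usage: a "backbone" that averages pixels, on near-duplicate frames.
frames = [np.full((4, 4), v) for v in (0.0, 0.001, 0.5, 0.501)]
feats = process_video(frames, backbone=lambda f: float(f.mean()))
print(feats)  # [0.0, 0.0, 0.5, 0.5]: backbone ran only on frames 0 and 2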
Our proposed techniques enrich accelerator designers’ toolkits, pushing the boundaries of energy efficiency for sustainable advances in deep learning.