Architecture design for highly flexible and energy-efficient deep neural network accelerators

Thesis: Ph.D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2018.

Bibliographic Details
Main Author: Chen, Yu-Hsin (Ph.D., Massachusetts Institute of Technology)
Other Authors: Vivienne Sze and Joel Emer
Format: Thesis
Language: English
Published: Massachusetts Institute of Technology, 2018
Subjects: Electrical Engineering and Computer Science
Online Access: http://hdl.handle.net/1721.1/117838
OCLC: 1052123991
Notes
This electronic version was submitted by the student author; the certified thesis is available in the Institute Archives and Special Collections. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 141-147). 147 pages, application/pdf.

Abstract
Deep neural networks (DNNs) are the backbone of modern artificial intelligence (AI). However, due to their high computational complexity and diverse shapes and sizes, dedicated accelerators that can achieve high performance and energy efficiency across a wide range of DNNs are critical for enabling AI in real-world applications. To address this, we present Eyeriss, a co-design of software and hardware architecture for DNN processing that is optimized for performance, energy efficiency, and flexibility.

Eyeriss features a novel Row-Stationary (RS) dataflow that minimizes data movement, the bottleneck of both performance and energy efficiency, when processing a DNN. The RS dataflow supports highly parallel processing while fully exploiting data reuse in a multi-level memory hierarchy to optimize overall system energy efficiency for any DNN shape and size. It achieves 1.4x to 2.5x higher energy efficiency than other existing dataflows.

To support the RS dataflow, we present two versions of the Eyeriss architecture. Eyeriss v1 targets large DNNs that have plenty of data reuse. It features a flexible mapping strategy for high performance and a multicast on-chip network (NoC) for high data reuse, and further exploits data sparsity to reduce processing element (PE) power by 45% and off-chip bandwidth by up to 1.9x. Fabricated in 65nm CMOS, Eyeriss v1 consumes 278 mW at 34.7 fps on the CONV layers of AlexNet, which is 10x more energy efficient than a mobile GPU.

Eyeriss v2 adds support for emerging compact DNNs, which introduce higher variation in data reuse. It features an RS+ dataflow that improves PE utilization, and a flexible and scalable NoC that adapts to the bandwidth requirement while also exploiting available data reuse. Together, these provide over 10x higher throughput than Eyeriss v1 at 256 PEs. Eyeriss v2 also exploits sparsity and SIMD for an additional 6x increase in throughput.

Rights
MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission. http://dspace.mit.edu/handle/1721.1/7582
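Since the abstract centers on the Row-Stationary dataflow, a minimal conceptual sketch may help illustrate the loop organization it describes: each processing element (PE) keeps one filter row stationary, input rows are reused diagonally across PEs, and 1-D partial sums accumulate vertically into output rows. The NumPy sketch below is an illustration only, assuming a single-channel 2-D convolution and a logical PE grid indexed by (filter row, output row); it is not the thesis's actual hardware mapping, and the function names are hypothetical.

import numpy as np

def conv1d_row(ifmap_row, filt_row):
    # 1-D convolution of one input-feature-map row with one filter row.
    # In the RS dataflow this is the primitive each PE computes, with the
    # filter row held stationary in the PE's local storage.
    S = len(filt_row)
    out_w = len(ifmap_row) - S + 1
    return np.array([np.dot(ifmap_row[x:x + S], filt_row)
                     for x in range(out_w)])

def row_stationary_conv2d(ifmap, filt):
    # 2-D convolution organized the row-stationary way: logical PE(i, j)
    # keeps filter row i stationary, reads ifmap row (i + j), and its 1-D
    # partial sums are accumulated vertically into output row j.
    R, S = filt.shape
    H, W = ifmap.shape
    E, out_w = H - R + 1, W - S + 1
    ofmap = np.zeros((E, out_w))
    for j in range(E):          # each output row maps to one column of PEs
        for i in range(R):      # each filter row maps to one PE in the column
            # Filter row i is reused across all output rows (stationary);
            # ifmap row (i + j) is reused diagonally across PEs.
            ofmap[j] += conv1d_row(ifmap[i + j], filt[i])
    return ofmap

# Sanity check against a direct 2-D (cross-correlation style) convolution.
ifmap = np.random.rand(6, 6)
filt = np.random.rand(3, 3)
ref = np.array([[np.sum(ifmap[y:y + 3, x:x + 3] * filt) for x in range(4)]
                for y in range(4)])
assert np.allclose(row_stationary_conv2d(ifmap, filt), ref)

The point of the sketch is only the loop organization: the filter row is the operand that never moves, which is what "row stationary" refers to. The energy savings reported in the thesis come from exploiting this reuse in a multi-level hardware memory hierarchy, not from anything visible in plain Python.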