Mixed-precision architecture for flexible neural network accelerators

This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.

Bibliographic Details
Main Author: Hafdi, Driss
Thesis Supervisor: Song Han
Format: Thesis (M.Eng.)
Language: English
Published: Massachusetts Institute of Technology, 2020
Subjects: Electrical Engineering and Computer Science
Online Access: https://hdl.handle.net/1721.1/124247
Thesis Record
Thesis: M.Eng., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2019. Cataloged from the student-submitted PDF version of the thesis. Includes bibliographical references (pages 89-91). 91 pages.

Abstract
Model quantization provides considerable latency and energy-consumption reductions while preserving accuracy. However, the optimal bitwidth reduction varies on a layer-by-layer basis. This thesis suggests a novel neural network accelerator architecture that handles multiple bit precisions for both weights and activations. The architecture is based on a fused spatial and temporal micro-architecture that maximizes both bandwidth efficiency and computational capability. Furthermore, this thesis presents an FPGA implementation of the mixed-precision architecture and discusses the ISA and its associated bitcode compiler. Finally, the performance of the system is evaluated on a Virtex UltraScale+ FPGA by running state-of-the-art neural networks.
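The thesis itself is only linked from this record, so nothing of its actual datapath, ISA, or compiler is reproduced here. As a conceptual sketch only, the Python below illustrates the two ideas the abstract pairs: per-layer uniform quantization (the best bitwidth differs from layer to layer) and a temporal, bit-serial dot product (one datapath serves any weight bitwidth, with step count proportional to the precision used). All function names and the specific quantization scheme are assumptions made for illustration, not the thesis's implementation.

import numpy as np

def quantize_symmetric(x: np.ndarray, bits: int) -> tuple[np.ndarray, float]:
    """Uniformly quantize x to signed integers of the given bitwidth.

    Illustrative only. Returns integer codes and the dequantization scale.
    Because value ranges and sensitivity differ per layer, the best `bits`
    differs per layer too, which is what a mixed-precision datapath exploits.
    """
    qmax = 2 ** (bits - 1) - 1                     # e.g. 127 for 8 bits
    scale = (np.max(np.abs(x)) / qmax) if np.any(x) else 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int64)
    return q, scale

def bit_serial_dot(w_q: np.ndarray, a_q: np.ndarray, w_bits: int) -> int:
    """Dot product computed one weight bit-plane per temporal step.

    A temporal micro-architecture evaluates an n-bit multiply as n shifted
    one-bit multiplies, so the same hardware handles any weight bitwidth and
    the cycle count scales with the precision actually needed.
    """
    # Two's complement: offset weights to unsigned, correct at the end.
    offset = 1 << (w_bits - 1)
    w_u = w_q + offset                             # now in [0, 2**w_bits)
    acc = 0
    for b in range(w_bits):                        # one "temporal" step per bit
        plane = (w_u >> b) & 1                     # 0/1 bit-plane of all weights
        acc += int(np.dot(plane, a_q)) << b        # shifted 1-bit partial product
    # Undo the offset: dot(w_u, a) = dot(w_q, a) + offset * sum(a).
    return acc - offset * int(np.sum(a_q))

# Self-check: the bit-serial result matches a plain integer dot product.
rng = np.random.default_rng(0)
w, _ = quantize_symmetric(rng.standard_normal(16), bits=4)   # 4-bit weights
a, _ = quantize_symmetric(rng.standard_normal(16), bits=8)   # 8-bit activations
assert bit_serial_dot(w, a, w_bits=4) == int(np.dot(w, a))

The self-check confirms that summing shifted one-bit partial products reproduces the ordinary integer dot product, which is why a temporal datapath can trade cycles for precision without changing the arithmetic result.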
Copyright Notice
MIT theses are protected by copyright. They may be viewed, downloaded, or printed from this source, but further reproduction or distribution in any format is prohibited without written permission. Permission information: http://dspace.mit.edu/handle/1721.1/7582