Efficient, Accurate, and Flexible PIM Inference through Adaptable Low-Resolution Arithmetic

Processing-In-Memory (PIM) accelerators have the potential to efficiently run Deep Neural Network (DNN) inference by reducing costly data movement and by using resistive RAM (ReRAM) for efficient analog compute. Unfortunately, overall PIM accelerator efficiency and throughput are limited by area/ene...

Bibliographic Details
Main Author: Andrulis, Tanner
Other Authors: Emer, Joel S.
Format: Thesis
Published: Massachusetts Institute of Technology, 2023
Online Access: https://hdl.handle.net/1721.1/151461