A 64-TOPS Energy-Efficient Tensor Accelerator in 14nm With Reconfigurable Fetch Network and Processing Fusion for Maximal Data Reuse
For energy-efficient accelerators in data centers that leverage advances in the performance and energy efficiency of recent algorithms, flexible architectures are critical to support state-of-the-art algorithms for various deep learning tasks. Due to the matrix multiplication units at the core of te...
Main Authors: Sang Min Lee, Hanjoon Kim, Jeseung Yeon, Juyun Lee, Younggeun Choi, Minho Kim, Changjae Park, Kiseok Jang, Youngsik Kim, Yongseung Kim, Changman Lee, Hyuck Han, Won Eung Kim, Rui Tang, Joon Ho Baek
Format: Article
Language: English
Published: IEEE, 2022-01-01
Series: IEEE Open Journal of the Solid-State Circuits Society
Online Access: https://ieeexplore.ieee.org/document/9927346/
Similar Items
- FPGA-Based Convolutional Neural Network Accelerator with Resource-Optimized Approximate Multiply-Accumulate Unit
  by: Mannhee Cho, et al. Published: (2021-11-01)
- AoCStream: All-on-Chip CNN Accelerator with Stream-Based Line-Buffer Architecture and Accelerator-Aware Pruning
  by: Hyeong-Ju Kang, et al. Published: (2023-09-01)
- Predicting beam transmission using 2-dimensional phase space projections of hadron accelerators
  by: Anthony Tran, et al. Published: (2022-10-01)
- A Parameterized Parallel Design Approach to Efficient Mapping of CNNs onto FPGA
  by: Ning Mao, et al. Published: (2023-02-01)
- A Lightweight Convolutional Neural Network Based on Hierarchical-Wise Convolution Fusion for Remote-Sensing Scene Image Classification
  by: Cuiping Shi, et al. Published: (2022-07-01)