Hardware-Aware Design for Edge Intelligence
With the rapid growth in the number of devices connected to the Internet, there is a trend toward moving the intelligent processing of the data they generate with deep neural networks (DNNs) from cloud servers to the network edge. Performing DNN inference and training on edge hardware is motivated by latency...
| Main Authors: | Warren J. Gross, Brett H. Meyer, Arash Ardakani |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2021-01-01 |
| Series: | IEEE Open Journal of Circuits and Systems |
| Online Access: | https://ieeexplore.ieee.org/document/9311412/ |
Similar Items
- A Survey on Efficient Convolutional Neural Networks and Hardware Acceleration
  by: Deepak Ghimire, et al.
  Published: (2022-03-01)
- Differentiable Neural Architecture, Mixed Precision and Accelerator Co-Search
  by: Krishna Teja Chitty-Venkata, et al.
  Published: (2023-01-01)
- Ps and Qs: Quantization-Aware Pruning for Efficient Low Latency Neural Network Inference
  by: Benjamin Hawks, et al.
  Published: (2021-07-01)
- SurgeNAS: A Comprehensive Surgery on Hardware-Aware Differentiable Neural Architecture Search
  by: Xiangzhong Luo, et al.
  Published: (2023)
- Neural Architecture Search and Hardware Accelerator Co-Search: A Survey
  by: Lukas Sekanina
  Published: (2021-01-01)