Distillation Sparsity Training Algorithm for Accelerating Convolutional Neural Networks in Embedded Systems
The rapid development of neural networks has come at the cost of increased computational complexity. Neural networks are both computationally intensive and memory intensive, and the limited energy and computing power of satellites pose a challenge for automatic target recognition (ATR). Knowled...
Main Authors: Penghao Xiao, Teng Xu, Xiayang Xiao, Weisong Li, Haipeng Wang
Format: Article
Language: English
Published: MDPI AG, 2023-05-01
Series: Remote Sensing
Online Access: https://www.mdpi.com/2072-4292/15/10/2609
Similar Items
- A Survey on Sparsity Exploration in Transformer-Based Accelerators
  by: Kazi Ahmed Asif Fuad, et al.
  Published: (2023-05-01)
- Building a Compact Convolutional Neural Network for Embedded Intelligent Sensor Systems Using Group Sparsity and Knowledge Distillation
  by: Jungchan Cho, et al.
  Published: (2019-10-01)
- Multi-Step Training Framework Using Sparsity Training for Efficient Utilization of Accumulated New Data in Convolutional Neural Networks
  by: Jeong Jun Lee, et al.
  Published: (2023-01-01)
- MEGS: A Penalty for Mutually Exclusive Group Sparsity
  by: Charles Saunders, et al.
  Published: (2023-01-01)
- Representative Selection with Structured Sparsity
  by: Wang, Hongxing, et al.
  Published: (2017)