HighLight: Efficient and Flexible DNN Acceleration with Hierarchical Structured Sparsity
Due to complex interactions among various deep neural network (DNN) optimization techniques, modern DNNs can have weights and activations that are dense or sparse with diverse sparsity degrees. To offer a good trade-off between accuracy and hardware performance, an ideal DNN accelerator should have...
Main Authors: |  |
---|---|
Other Authors: |  |
Format: | Article |
Language: | English |
Published: | ACM, 56th Annual IEEE/ACM International Symposium on Microarchitecture, 2024 |
Online Access: | https://hdl.handle.net/1721.1/153277 |