A compression strategy to accelerate LSTM meta-learning on FPGA
Driven by edge computing, efficiently deploying the meta-learner LSTM on resource-constrained FPGA terminal devices has become a significant challenge. This paper proposes a compression strategy for the LSTM meta-learning model that combines structured pruning of the weight matrix with mixed-precision quantization...
Main Authors: | NianYi Wang, Jing Nie, JingBin Li, Kang Wang, ShunKang Ling |
---|---|
Format: | Article |
Language: | English |
Published: | Elsevier, 2022-09-01 |
Series: | ICT Express |
Subjects: | Edge calculation; FPGA; LSTM Meta-Learning Accelerator; Structural pruning; Mixed precision quantization |
Online Access: | http://www.sciencedirect.com/science/article/pii/S2405959522000558 |
author | NianYi Wang, Jing Nie, JingBin Li, Kang Wang, ShunKang Ling
collection | DOAJ |
description | Driven by edge computing, efficiently deploying the meta-learner LSTM on resource-constrained FPGA terminal devices has become a significant challenge. This paper proposes a compression strategy for the LSTM meta-learning model that combines structured pruning of the weight matrix with mixed-precision quantization. The weight matrix is first pruned into a sparse matrix, and the remaining weights are then quantized to reduce resource consumption. Finally, an LSTM meta-learning accelerator is designed following a hardware–software co-design approach. Experiments show that, compared with mainstream hardware platforms, the proposed accelerator achieves at least a 50.14-fold increase in energy efficiency. |
format | Article |
id | doaj.art-86d33a93b74f4812848ac6e40d3ac4a2 |
institution | Directory Open Access Journal |
issn | 2405-9595 |
language | English |
publishDate | 2022-09-01 |
publisher | Elsevier |
record_format | Article |
series | ICT Express |
citation | ICT Express, vol. 8, no. 3, pp. 322–327, 2022-09-01 (Elsevier, ISSN 2405-9595)
affiliations | NianYi Wang: College of Mechanical and Electrical Engineering, Shihezi University, Shihezi, China. Jing Nie (corresponding author): College of Mechanical and Electrical Engineering, Shihezi University, Shihezi, China; Xinjiang Production and Construction Corps Key Laboratory of Modern Agricultural Machinery, Shihezi, China. JingBin Li: College of Mechanical and Electrical Engineering, Shihezi University, Shihezi, China; Xinjiang Production and Construction Corps Key Laboratory of Modern Agricultural Machinery, Shihezi, China. Kang Wang: College of Mechanical and Electrical Engineering, Shihezi University, Shihezi, China. ShunKang Ling: College of Mechanical and Electrical Engineering, Shihezi University, Shihezi, China.
title | A compression strategy to accelerate LSTM meta-learning on FPGA |
topic | Edge calculation; FPGA; LSTM Meta-Learning Accelerator; Structural pruning; Mixed precision quantization
url | http://www.sciencedirect.com/science/article/pii/S2405959522000558 |