Quantization-Aware NN Layers with High-throughput FPGA Implementation for Edge AI


Bibliographic Details
Main Authors: Mara Pistellato, Filippo Bergamasco, Gianluca Bigaglia, Andrea Gasparetto, Andrea Albarelli, Marco Boschetti, Roberto Passerone
Format: Article
Language: English
Published: MDPI AG, 2023-05-01
Series: Sensors
Subjects: quantized CNN; quantization-aware training; FPGA; edge AI; peak-detection
Online Access: https://www.mdpi.com/1424-8220/23/10/4667
author Mara Pistellato
Filippo Bergamasco
Gianluca Bigaglia
Andrea Gasparetto
Andrea Albarelli
Marco Boschetti
Roberto Passerone
collection DOAJ
description Over the past few years, many applications have exploited the advantages of deep learning, in particular convolutional neural networks (CNNs). The intrinsic flexibility of such models makes them widely adopted in a variety of practical applications, from medical to industrial. In the latter scenario, however, consumer Personal Computer (PC) hardware is not always suitable for the potentially harsh conditions of the working environment and the strict timing constraints that industrial applications typically impose. The design of custom FPGA (Field-Programmable Gate Array) solutions for network inference is therefore attracting great attention from both researchers and companies. In this paper, we propose a family of network architectures composed of three kinds of custom layers that work with integer arithmetic at a customizable precision (down to just two bits). Such layers are designed to be trained effectively on classical GPUs (Graphics Processing Units) and then synthesized to FPGA hardware for real-time inference. The idea is to provide a trainable quantization layer, called <i>Requantizer</i>, that acts both as a non-linear activation for neurons and as a value rescaler matching the desired bit precision. This way, training is not only <i>quantization-aware</i>, but also able to estimate the optimal scaling coefficients that accommodate both the non-linear nature of the activations and the constraints imposed by the limited precision. In the experimental section, we evaluate the performance of such models both on classical PC hardware and on a case-study implementation of a signal peak-detection device running on a real FPGA. We employ TensorFlow Lite for training and comparison, and Xilinx FPGAs and Vivado for synthesis and implementation.
The results show that the quantized networks achieve accuracy close to that of the floating-point version, without requiring representative calibration data as other approaches do, and outperform dedicated peak-detection algorithms. The FPGA implementation runs in real time at a rate of four gigapixels per second with moderate hardware resources, while achieving a sustained efficiency of 0.5 TOPS/W (tera operations per second per watt), in line with custom integrated hardware accelerators.
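The Requantizer described in the abstract can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's exact formulation: it assumes a uniform signed quantizer with a learnable scale, which both saturates values (acting as a non-linearity) and rounds them to the integer levels representable at the chosen bit width.

```python
import numpy as np

def requantize(x, scale, bits=2):
    """Illustrative forward pass of a Requantizer-style layer (an assumed
    formulation, for illustration only). Rescales inputs by a learnable
    `scale`, rounds to integers, and clamps to the signed range that `bits`
    bits can represent, so the layer acts both as a saturating
    non-linearity and as a precision matcher."""
    qmin = -(2 ** (bits - 1))       # e.g. -2 for 2-bit precision
    qmax = 2 ** (bits - 1) - 1      # e.g. +1 for 2-bit precision
    q = np.clip(np.round(x / scale), qmin, qmax)
    return q  # integer levels; multiply by `scale` to dequantize

# During quantization-aware training, the rounding step is typically
# backpropagated with a straight-through estimator (gradient of round()
# treated as identity), keeping `scale` and upstream weights trainable
# on a GPU before the integer pipeline is synthesized to the FPGA.
x = np.array([-3.0, -1.4, 0.1, 0.9, 5.0])
print(requantize(x, scale=1.0, bits=2))   # -> [-2. -1.  0.  1.  1.]
```

At 2-bit precision only four levels survive {-2, -1, 0, 1}, which is why learning a good per-layer `scale` during training, rather than calibrating it afterwards on representative data, matters so much at these widths.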
first_indexed 2024-03-11T03:21:16Z
format Article
id doaj.art-42a4e4b1a0c9408eb6eb3cdc2fc0cf1b
institution Directory Open Access Journal
issn 1424-8220
language English
last_indexed 2024-03-11T03:21:16Z
publishDate 2023-05-01
publisher MDPI AG
record_format Article
series Sensors
doi 10.3390/s23104667
affiliation Mara Pistellato: Dipartimento di Scienze Ambientali, Informatica e Statistica (DAIS), Università Ca' Foscari di Venezia, Via Torino 155, 30170 Venezia, Italy
affiliation Filippo Bergamasco: Dipartimento di Scienze Ambientali, Informatica e Statistica (DAIS), Università Ca' Foscari di Venezia, Via Torino 155, 30170 Venezia, Italy
affiliation Gianluca Bigaglia: Dipartimento di Management, Università Ca' Foscari di Venezia, Cannaregio 873, 30121 Venezia, Italy
affiliation Andrea Gasparetto: Dipartimento di Management, Università Ca' Foscari di Venezia, Cannaregio 873, 30121 Venezia, Italy
affiliation Andrea Albarelli: Dipartimento di Scienze Ambientali, Informatica e Statistica (DAIS), Università Ca' Foscari di Venezia, Via Torino 155, 30170 Venezia, Italy
affiliation Marco Boschetti: Covision Lab SCARL, Via Durst 4, 39042 Bressanone, Italy
affiliation Roberto Passerone: Dipartimento di Ingegneria e Scienza dell'Informazione (DISI), University of Trento, Via Sommarive 9, 38123 Trento, Italy
title Quantization-Aware NN Layers with High-throughput FPGA Implementation for Edge AI
topic quantized CNN
quantization-aware training
FPGA
edge AI
peak-detection
url https://www.mdpi.com/1424-8220/23/10/4667