Model compression and simplification pipelines for fast deep neural network inference in FPGAs in HEP
Abstract: Resource utilization plays a crucial role in the successful implementation of fast, real-time inference for deep neural networks (DNNs) and convolutional neural networks (CNNs) on the latest generation of hardware accelerators (FPGAs, SoCs, ACAPs, GPUs). To fulfil the needs of the triggers that are...
| Main Authors: | Simone Francescato, Stefano Giagu, Federica Riti, Graziella Russo, Luigi Sabetta, Federico Tortonesi |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | SpringerOpen, 2021-11-01 |
| Series: | European Physical Journal C: Particles and Fields |
| Online Access: | https://doi.org/10.1140/epjc/s10052-021-09770-w |
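The truncated abstract above points to a compression pipeline (e.g. pruning, quantization) aimed at fitting DNN inference into FPGA trigger latency and resource budgets. As a purely illustrative sketch, not the authors' actual pipeline, which this record does not detail, magnitude-based weight pruning with the tensorflow_model_optimization toolkit might look like the following; the model architecture, sparsity target, and library choice are all assumptions.

```python
# Illustrative sketch only: magnitude-based weight pruning of a toy
# trigger-level DNN. The library choice (tensorflow_model_optimization),
# layer sizes, and sparsity schedule are assumptions, not taken from the
# paper. Requires a tf.keras (Keras 2) environment, which tfmot targets.
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Small fully connected classifier standing in for a trigger DNN.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax"),
])

# Prune 75% of the weights during training: fewer nonzero weights means
# fewer multipliers after synthesis, which is the FPGA resource the
# abstract is concerned with.
pruned = tfmot.sparsity.keras.prune_low_magnitude(
    model,
    pruning_schedule=tfmot.sparsity.keras.PolynomialDecay(
        initial_sparsity=0.0,
        final_sparsity=0.75,
        begin_step=0,
        end_step=2000,
    ),
)
pruned.compile(optimizer="adam", loss="categorical_crossentropy",
               metrics=["accuracy"])

# Training would use the pruning-step callback, e.g.:
# pruned.fit(x_train, y_train, epochs=10,
#            callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])
```

After training, `tfmot.sparsity.keras.strip_pruning(pruned)` removes the pruning wrappers so the sparse model can be exported to an FPGA synthesis flow.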
Similar Items
- Erratum to: Model compression and simplification pipelines for fast deep neural network inference in FPGAs in HEP
  by: Simone Francescato, et al.
  Published: (2021-12-01)
- Fast neural network inference on FPGAs for triggering on long-lived particles at colliders
  by: Andrea Coccaro, et al.
  Published: (2023-01-01)
- Fast inference using FPGAs for DUNE data reconstruction
  by: Rodriguez, Manuel J.
  Published: (2020-01-01)
- Fast inference of deep neural networks in FPGAs for particle physics
  by: Han, Song, et al.
  Published: (2020)
- Fast inference of Boosted Decision Trees in FPGAs for particle physics
  by: Summers, S., et al.
  Published: (2021)