GPU coprocessors as a service for deep learning inference in high energy physics
Abstract: In the next decade, the demands for computing in large scientific experiments are expected to grow tremendously. During the same time period, CPU performance increases will be limited. At the CERN Large Hadron Collider (LHC)...
Main Authors: Krupa, Jeffrey; Lin, Kelvin; Acosta Flechas, Maria; Dinsmore, Jack; Duarte, Javier; Harris, Philip; Hauck, Scott; Holzman, Burt; Hsu, Shih-Chieh; Klijnsma, Thomas; Liu, Mia; Pedro, Kevin; Rankin, Dylan; Suaysom, Natchanon; Trahms, Matt; Tran, Nhan
Other Authors: Massachusetts Institute of Technology. Department of Physics
Format: Article
Language: English
Published: IOP Publishing, 2022
Online Access: https://hdl.handle.net/1721.1/142112
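The title and abstract describe offloading deep learning inference from experiment CPUs to remote GPU coprocessors accessed over the network ("as a service"). A minimal client-side sketch of such a remote inference request is given below, assuming an NVIDIA Triton-style gRPC endpoint; the server URL, model name, tensor names, and input shape are illustrative placeholders, not details taken from the record.

```python
# Illustrative sketch of inference as a service: the client sends input tensors
# to a remote GPU server and receives the model outputs. Names are placeholders.
import numpy as np
import tritonclient.grpc as grpcclient

# Connect to the (hypothetical) inference server endpoint.
client = grpcclient.InferenceServerClient(url="localhost:8001")

# Build a request for one event with 100 input features (shape is illustrative).
inputs = [grpcclient.InferInput("input__0", [1, 100], "FP32")]
inputs[0].set_data_from_numpy(np.random.rand(1, 100).astype(np.float32))
outputs = [grpcclient.InferRequestedOutput("output__0")]

# The model itself runs on the server's GPU; the client only pays for
# serialization and network transfer of the request and response.
result = client.infer(model_name="example_model", inputs=inputs, outputs=outputs)
scores = result.as_numpy("output__0")
print(scores.shape)
```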
Similar Items
- FPGAs-as-a-Service Toolkit (FaaST)
  by: Rankin, Dylan, et al.
  Published: (2022)
- GPU-Accelerated Machine Learning Inference as a Service for Computing in Neutrino Experiments
  by: Wang, Michael, et al.
  Published: (2022)
- Accelerating Machine Learning Inference with GPUs in ProtoDUNE Data Processing
  by: Cai, Tejin, et al.
  Published: (2023)
- FPGA-Accelerated Machine Learning Inference as a Service for Particle Physics Computing
  by: Duarte, Javier, et al.
  Published: (2021)
- Optimizing sparse matrix kernels on coprocessors
  by: Lim, Wee Siong
  Published: (2014)