Iteration Time Prediction for CNN in Multi-GPU Platform: Modeling and Analysis
Neural networks, as powerful models for many difficult learning tasks, impose an increasingly heavy computational burden. More and more researchers are focusing on optimizing training time, and one of the difficulties is establishing a general iteration-time prediction model. However, the ex...
| Main Authors: | Ziqian Pei, Chensheng Li, Xiaowei Qin, Xiaohui Chen, Guo Wei |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2019-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/8713989/ |
Similar Items
- Scalable multi-GPU implementation of the MAGFLOW simulator
  by: Giovanni Gallo, et al.
  Published: (2011-12-01)
- GPU-accelerated iterative reconstruction for limited-data tomography in CBCT systems
  by: Claudia de Molina, et al.
  Published: (2018-05-01)
- GPU-Based Embedded Intelligence Architectures and Applications
  by: Li Minn Ang, et al.
  Published: (2021-04-01)
- Optimization for Multi-Join Queries on the GPU
  by: Xue-Xuan Hu, et al.
  Published: (2020-01-01)
- Comparison of GPU and CPU efficiency while solving heat conduction problems
  by: Julija Semenenko, et al.
  Published: (2020-11-01)