Iteration Time Prediction for CNN in Multi-GPU Platform: Modeling and Analysis
Neural networks, as powerful models for many difficult learning tasks, have created an increasingly heavy computational burden. More and more researchers focus on how to optimize the training time, and one of the difficulties is to establish a general iteration time prediction model. However, the ex...
Main Authors: | Ziqian Pei, Chensheng Li, Xiaowei Qin, Xiaohui Chen, Guo Wei |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2019-01-01 |
Series: | IEEE Access |
Subjects: | Convolutional neural network; multi-GPU parallel; iteration time |
Online Access: | https://ieeexplore.ieee.org/document/8713989/ |
_version_ | 1818323578251116544 |
---|---|
author | Ziqian Pei Chensheng Li Xiaowei Qin Xiaohui Chen Guo Wei |
author_sort | Ziqian Pei |
collection | DOAJ |
description | Neural networks, as powerful models for many difficult learning tasks, have created an increasingly heavy computational burden. More and more researchers focus on how to optimize the training time, and one of the difficulties is to establish a general iteration time prediction model. However, the existing models are highly complex or tedious to build, and there is still room for improvement in prediction accuracy. Moreover, there is little systematic analysis of multi-GPU training, which is a special and widely used scenario. In this paper, we introduce a framework to analyze the training time for convolutional neural networks (CNNs) on multi-GPU platforms. Based on an analysis of GPU calculation principles and the special transmission mode of multi-GPU platforms, our framework decomposes the model and obtains accurate prediction results without long-term training or complex data collection. We start by extracting key feature parameters related to GPUs, CNNs, and networks. Then, we map CNN architectures to constraints, including software platforms, GPU platforms, parallel strategies, and communication strategies. Finally, we provide the prediction model and analyze the training time from multiple perspectives. The proposed model is verified on four types of NVIDIA GPU platforms and six different CNN architectures. The experimental results show that the average error across various scenarios is less than 15% and outperforms state-of-the-art results by 5%-30%, corroborating that our model is an effective tool for artificial intelligence (AI) researchers. |
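The abstract does not spell out the paper's prediction formulas, so the sketch below only illustrates the kind of decomposition it describes: per-iteration time split into single-GPU compute and inter-GPU gradient communication. The `LayerProfile` fields, the efficiency/overlap factors, and the ring all-reduce cost model are assumptions for illustration, not the authors' method.

```python
# Minimal sketch of an iteration-time decomposition for data-parallel CNN
# training on multiple GPUs. NOT the paper's model: the compute model
# (FLOPs / sustained throughput) and the communication model (ring all-reduce)
# are illustrative assumptions.

from dataclasses import dataclass
from typing import List


@dataclass
class LayerProfile:
    flops: float        # forward-pass FLOPs for this layer at the given batch size
    param_bytes: float  # size of this layer's gradients exchanged between GPUs


def compute_time_s(layers: List[LayerProfile],
                   peak_flops: float,
                   efficiency: float = 0.5,
                   backward_factor: float = 2.0) -> float:
    """Estimate forward + backward compute time on one GPU.

    Assumes the backward pass costs `backward_factor` times the forward pass
    and the GPU sustains `efficiency * peak_flops` on conv/FC workloads.
    """
    fwd = sum(l.flops for l in layers) / (peak_flops * efficiency)
    return fwd * (1.0 + backward_factor)


def allreduce_time_s(layers: List[LayerProfile],
                     num_gpus: int,
                     bandwidth_bytes_per_s: float,
                     latency_s: float = 50e-6) -> float:
    """Estimate gradient synchronization time with a ring all-reduce cost model."""
    if num_gpus <= 1:
        return 0.0
    grad_bytes = sum(l.param_bytes for l in layers)
    # Ring all-reduce moves 2*(N-1)/N of the gradient volume over each link.
    return 2.0 * (num_gpus - 1) / num_gpus * grad_bytes / bandwidth_bytes_per_s + latency_s


def iteration_time_s(layers: List[LayerProfile],
                     num_gpus: int,
                     peak_flops: float,
                     bandwidth_bytes_per_s: float,
                     overlap: float = 0.0) -> float:
    """Predicted iteration time: compute plus the non-overlapped share of communication."""
    t_comp = compute_time_s(layers, peak_flops)
    t_comm = allreduce_time_s(layers, num_gpus, bandwidth_bytes_per_s)
    return t_comp + (1.0 - overlap) * t_comm


if __name__ == "__main__":
    # Toy example: two layers, 4 GPUs, 10 TFLOP/s peak, 10 GB/s interconnect.
    layers = [LayerProfile(flops=2e9, param_bytes=4e6),
              LayerProfile(flops=5e8, param_bytes=16e6)]
    t = iteration_time_s(layers, num_gpus=4, peak_flops=10e12,
                         bandwidth_bytes_per_s=10e9)
    print(f"predicted iteration time: {t:.4f} s")
```

In this toy decomposition the layer list and the hardware parameters play the role of the "key feature parameters related to GPUs, CNNs, and networks" mentioned in the abstract; swapping the communication model (e.g., parameter-server vs. ring all-reduce) is how different parallel and communication strategies would be reflected.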
first_indexed | 2024-12-13T11:14:55Z |
format | Article |
id | doaj.art-47741b82a5a246eaa6b67b07fff9c3a1 |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-12-13T11:14:55Z |
publishDate | 2019-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-47741b82a5a246eaa6b67b07fff9c3a1; indexed 2022-12-21T23:48:38Z; eng; IEEE; IEEE Access; ISSN 2169-3536; 2019-01-01; vol. 7, pp. 64788-64797; DOI 10.1109/ACCESS.2019.2916550; article no. 8713989; "Iteration Time Prediction for CNN in Multi-GPU Platform: Modeling and Analysis"; Ziqian Pei (https://orcid.org/0000-0003-0443-9582), Chensheng Li, Xiaowei Qin, Xiaohui Chen, Guo Wei, all with the CAS Key Laboratory of Wireless-Optical Communications, University of Science and Technology of China, Hefei, China; https://ieeexplore.ieee.org/document/8713989/; keywords: Convolutional neural network, multi-GPU parallel, iteration time |
title | Iteration Time Prediction for CNN in Multi-GPU Platform: Modeling and Analysis |
topic | Convolutional neural network multi-GPU parallel iteration time |
url | https://ieeexplore.ieee.org/document/8713989/ |