Superneurons: dynamic GPU memory management for training deep neural networks

© 2018 ACM. Going deeper and wider in neural architectures improves their accuracy, while the limited GPU DRAM places an undesired restriction on the network design domain. Deep Learning (DL) practitioners either need to switch to less desirable network architectures or to nontrivially dissect a network across multiple GPUs. Both distract DL practitioners from concentrating on their original machine learning tasks. We present SuperNeurons: a dynamic GPU memory scheduling runtime that enables network training far beyond the GPU DRAM capacity. SuperNeurons features three memory optimizations: Liveness Analysis, Unified Tensor Pool, and Cost-Aware Recomputation. Together they effectively reduce the network-wide peak memory usage down to the maximal memory usage among layers. We also address the performance issues in these memory-saving techniques. Given the limited GPU DRAM, SuperNeurons not only provisions the necessary memory for training but also dynamically allocates memory for convolution workspaces to achieve high performance. Evaluations against Caffe, Torch, MXNet, and TensorFlow demonstrate that SuperNeurons trains networks at least 3.2432 deeper than current frameworks while delivering leading performance. In particular, SuperNeurons can train ResNet2500, which has 10^4 basic network layers, on a 12 GB K40c.
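
To make the abstract's three optimizations concrete, the following is a minimal, self-contained sketch of the general idea only; the names (Tensor, UnifiedTensorPool, schedule_forward) and the greedy policy are illustrative assumptions and do not come from the SuperNeurons code base. The sketch mimics a forward pass in which liveness analysis frees a layer's output once its last consumer has run, cheap layers are marked for recomputation instead of being stored, and remaining tensors spill to a host-side pool when the GPU budget is exceeded.

# Illustrative sketch only: names and policy are assumptions, not the SuperNeurons API.
from dataclasses import dataclass

@dataclass
class Tensor:
    layer: int           # index of the layer that produced this tensor
    size_mb: int         # GPU memory footprint in MB
    last_use: int        # last layer index that reads it (end of liveness)
    cheap: bool = False  # cheap to recompute (e.g. ReLU/pooling), so never stored

class UnifiedTensorPool:
    """Hypothetical host-side pool backing tensors evicted from GPU memory."""
    def __init__(self):
        self.offloaded = {}

    def offload(self, t):
        self.offloaded[t.layer] = t

    def prefetch(self, layer):
        return self.offloaded.pop(layer, None)

def schedule_forward(tensors, gpu_budget_mb):
    """Greedy forward pass: keep a tensor on the GPU only while it is live,
    skip storing cheap-to-recompute tensors, and offload the rest when the
    budget is exceeded. Returns the simulated peak GPU usage and the pool."""
    pool = UnifiedTensorPool()
    resident, used, peak = [], 0, 0
    for step, t in enumerate(tensors):
        # Liveness analysis: free tensors whose last use has already passed.
        for dead in [r for r in resident if r.last_use < step]:
            resident.remove(dead)
            used -= dead.size_mb
        peak = max(peak, used + t.size_mb)  # transient peak includes the new output
        if t.cheap:
            continue                        # cost-aware recomputation: rebuild it later
        if used + t.size_mb > gpu_budget_mb:
            pool.offload(t)                 # spill to the unified tensor pool
        else:
            resident.append(t)
            used += t.size_mb
    return peak, pool

if __name__ == "__main__":
    # Ten layers, each producing a 200 MB tensor read by the next layer;
    # odd layers are treated as cheap to recompute.
    layers = [Tensor(i, 200, last_use=i + 1, cheap=(i % 2 == 1)) for i in range(10)]
    peak, pool = schedule_forward(layers, gpu_budget_mb=1000)
    print(f"peak GPU usage: {peak} MB, offloaded tensors: {len(pool.offloaded)}")

Under a policy of this kind, the steady-state GPU footprint is dominated by the tensors that are live for the layer currently executing, which is the sense in which the abstract describes reducing the network-wide peak memory usage toward the maximal per-layer usage.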

Bibliographic Details
Main Authors: Wang, Linnan, Ye, Jinmian, Zhao, Yiyang, Wu, Wei, Li, Ang, Song, Shuaiwen Leon, Xu, Zenglin, Kraska, Tim
Format: Article
Language: English
Published: Association for Computing Machinery (ACM), 2021
Online Access: https://hdl.handle.net/1721.1/132270
Institution: Massachusetts Institute of Technology
Collection: MIT
Document Type: Conference Paper
Published in: ACM SIGPLAN Notices
DOI: 10.1145/3178487.3178491
License: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)