Accelerated CNN Training through Gradient Approximation
© 2019 IEEE. Training deep convolutional neural networks such as VGG and ResNet by gradient descent is an expensive exercise requiring specialized hardware such as GPUs. Recent works have examined the possibility of approximating the gradient computation while maintaining the same convergence properties. While promising, the approximations only work on relatively small datasets such as MNIST. They also fail to achieve real wall-clock speedups due to lack of efficient GPU implementations of the proposed approximation methods. In this work, we explore three alternative methods to approximate gradients, with an efficient GPU kernel implementation for one of them. We achieve wall-clock speedup with ResNet-20 and VGG-19 on the CIFAR-10 dataset upwards of 7 percent, with a minimal loss in validation accuracy.
Main Authors: | Harsha, NS; Wang, Z; Amarasinghe, S |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2021 |
Online Access: | https://hdl.handle.net/1721.1/132255 |
author | Harsha, NS; Wang, Z; Amarasinghe, S
collection | MIT |
format | Article |
id | mit-1721.1/132255 |
institution | Massachusetts Institute of Technology |
language | English |
publishDate | 2021 |
publisher | IEEE |
type | Conference Paper (http://purl.org/eprint/type/ConferencePaper)
doi | 10.1109/EMC249363.2019.00014
published in | Proceedings - 2019 2nd Workshop on Energy Efficient Machine Learning and Cognitive Computing for Embedded Applications, EMC2 2019
license | Creative Commons Attribution-NonCommercial-ShareAlike (http://creativecommons.org/licenses/by-nc-sa/4.0/)
file format | application/pdf
source | IEEE; arXiv
title | Accelerated CNN Training through Gradient Approximation |
url | https://hdl.handle.net/1721.1/132255 |
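The abstract above describes approximating the gradient computation during CNN training, but this record does not detail the three approximation methods the authors explore. Purely as an illustration of the general idea, not the paper's method, the following is a minimal PyTorch sketch of one common form of gradient approximation: computing a convolution layer's weight gradient from a random subsample of the batch while keeping the input gradient exact. The class name `ApproxConv2dFunction` and the `sample_frac` parameter are hypothetical.

```python
# Minimal sketch: exact forward convolution, approximate backward pass.
# NOT the paper's method; an illustrative stand-in for "gradient
# approximation" via batch subsampling. All names are hypothetical.
import torch
import torch.nn.functional as F

class ApproxConv2dFunction(torch.autograd.Function):
    """conv2d with an approximated weight gradient."""

    @staticmethod
    def forward(ctx, x, weight, sample_frac=0.25):
        ctx.save_for_backward(x, weight)
        ctx.sample_frac = sample_frac
        return F.conv2d(x, weight, padding=1)

    @staticmethod
    def backward(ctx, grad_out):
        x, weight = ctx.saved_tensors
        # Exact input gradient, so backpropagation through earlier
        # layers is unaffected by the approximation.
        grad_x = torch.nn.grad.conv2d_input(x.shape, weight, grad_out,
                                            padding=1)
        # Approximate weight gradient: use a random fraction of the
        # batch, rescaled by n/k so it stays unbiased in expectation.
        n = x.shape[0]
        k = max(1, int(n * ctx.sample_frac))
        idx = torch.randperm(n, device=x.device)[:k]
        grad_w = torch.nn.grad.conv2d_weight(
            x[idx], weight.shape, grad_out[idx], padding=1) * (n / k)
        return grad_x, grad_w, None  # None: no gradient for sample_frac

# Hypothetical usage in place of F.conv2d inside a training step:
x = torch.randn(128, 16, 32, 32, requires_grad=True)
w = torch.randn(16, 16, 3, 3, requires_grad=True)
out = ApproxConv2dFunction.apply(x, w, 0.25)
out.sum().backward()  # weight gradient computed from ~25% of the batch
```

Subsampling trades gradient variance for less backward-pass compute; as the abstract notes, turning such an approximation into an actual wall-clock speedup additionally depends on an efficient GPU kernel implementation.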