Efficient Deep Learning with Sparsity: Algorithms, Systems, and Applications
Deep learning has been used across a broad spectrum of applications, including computer vision, natural language processing, and scientific discovery. However, behind its remarkable performance lies an increasing gap between the demand for and supply of computation. On the demand side, the computational costs of deep neural networks have surged dramatically, driven by ever-larger input and model sizes. On the supply side, as Moore's Law slows down, hardware no longer delivers increasing performance within the same power budget.

In this dissertation, we present our solutions across the algorithm, system, and application stacks to address the demand-supply gap through the lens of sparsity. In Part I, we first develop algorithms, SparseViT and SparseRefine, which identify sparsity within dense input data. We then introduce new sparse primitives, PVCNN and FlatFormer, to efficiently process inputs with sparsity. In Part II, we introduce a system library, TorchSparse, to optimize existing sparse primitives and effectively translate theoretical savings from sparsity into practical speedups on hardware. In Part III, we apply sparsity to accelerate a wide range of computation-intensive AI applications, such as autonomous driving and language modeling. We conclude this dissertation with a vision towards building more efficient and accessible AI.
Main Author: | Liu, Zhijian |
---|---|
Other Authors: | Han, Song |
Format: | Thesis |
Published: | Massachusetts Institute of Technology, 2024 |
Degree: | Ph.D. |
Department: | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science |
Rights: | Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0); copyright retained by author(s) |
Online Access: | https://hdl.handle.net/1721.1/156615 |
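The core idea behind Parts I and II of the abstract is that inputs such as LiDAR point clouds occupy only a tiny fraction of a dense grid, so computation can be restricted to the occupied sites. The following minimal PyTorch sketch (written for this record, not code from the thesis) gathers the active voxels of a mostly empty grid, applies a shared pointwise MLP to them alone, and scatters the results back; compute then scales with the number of active voxels rather than with the grid volume.

```python
import torch

# Toy dense 3D grid in which only ~1% of voxels are occupied, as is
# typical for voxelized LiDAR scans. (Illustrative sketch only; the
# resolution, occupancy, and channel sizes are arbitrary choices.)
D = 64                                              # grid is D x D x D
grid = torch.zeros(D, D, D, 4)                      # 4 input channels
active = torch.nonzero(torch.rand(D, D, D) < 0.01)  # (N, 3) active indices
grid[active[:, 0], active[:, 1], active[:, 2]] = torch.randn(len(active), 4)

# A shared pointwise MLP, standing in for a sparse primitive.
mlp = torch.nn.Sequential(
    torch.nn.Linear(4, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 32),
)

# Gather -> compute -> scatter: FLOPs scale with N (about 0.01 * D**3
# active voxels) instead of with all D**3 grid positions.
feats = grid[active[:, 0], active[:, 1], active[:, 2]]  # (N, 4)
out_feats = mlp(feats)                                  # (N, 32)
output = torch.zeros(D, D, D, 32)
output[active[:, 0], active[:, 1], active[:, 2]] = out_feats
```

At 1% occupancy this is roughly a 100x reduction in MLP FLOPs compared with evaluating the same layer at every grid position; turning such theoretical savings into wall-clock speedups on real hardware is the focus of the systems work in Part II.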
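TorchSparse, the system library named in the abstract, is publicly available. The snippet below sketches typical usage of its sparse convolution primitives, following the pattern of the project's public examples; it assumes a CUDA device, and details such as the SparseTensor constructor and the coordinate layout (batch index first or last) differ between TorchSparse releases, so treat it as an assumed sketch rather than version-exact code.

```python
import torch
from torchsparse import SparseTensor
from torchsparse import nn as spnn

# Build a sparse input: N points with integer voxel coordinates plus a
# batch index, and a feature vector per point. NOTE: the coordinate
# layout ([batch, x, y, z] vs. [x, y, z, batch]) and the constructor
# keywords vary across TorchSparse versions; check your installed release.
coords = torch.randint(0, 64, (5000, 4), dtype=torch.int)
feats = torch.randn(5000, 4)
x = SparseTensor(coords=coords, feats=feats).cuda()

# Sparse convolutions run only at the active sites (and, for strided
# layers, at the coarsened set of active sites).
net = torch.nn.Sequential(
    spnn.Conv3d(4, 32, kernel_size=3),
    spnn.BatchNorm(32),
    spnn.ReLU(True),
    spnn.Conv3d(32, 64, kernel_size=2, stride=2),  # 2x downsampling
).cuda()

out = net(x)
print(out.feats.shape)  # features at the remaining active sites
```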