Regulating Orthogonality Of Feature Functions For Highly Compressed Deep Neural Networks
When designing deep neural networks (DNN), the number of nodes in hidden layers can have a profound impact on the performance of the model. The information carried by the nodes in each layer creates a subspace, whose dimensionality is determined by the number of nodes and their linear dependency. This paper focuses on highly compressed DNNs – networks with significantly fewer nodes in the last hidden layer than in the output layer. Each node in the last hidden layer is considered a feature function, and we study how the orthogonality of the feature functions changes throughout the training process. We first develop how information is learned, stored, and updated in the DNN throughout training, and propose an algorithm which regulates the orthogonality before and during training. Our experiment on a high-dimensional Gaussian mixture dataset reveals that the algorithm achieves higher orthogonality in the feature functions and accelerates network convergence. Orthogonalizing the feature functions enables us to approximate Newton's method via the gradient descent algorithm: we can take advantage of the superior convergence properties of second-order optimization without directly computing the Hessian matrix.
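The abstract's key quantity — pairwise orthogonality of the last-hidden-layer feature functions — can be measured empirically from a batch of activations. The record does not reproduce the thesis's actual regulation algorithm, so the sketch below is only an illustrative assumption of ours (the function name, penalty form, and data are all hypothetical): a penalty that vanishes exactly when the unit-normalized feature columns are pairwise orthogonal on the batch, and which could serve as an orthogonality regularizer in a training loss.

```python
import numpy as np

def orthogonality_penalty(F):
    """Soft orthogonality penalty for a batch of feature-function values.

    F has shape (n_samples, k): column j holds feature function j
    evaluated on the batch. The penalty is the squared Frobenius norm
    of the off-diagonal part of the Gram matrix of the unit-normalized
    columns, so it is zero exactly when the features are pairwise
    orthogonal on this batch.
    """
    Fn = F / np.linalg.norm(F, axis=0)   # unit-norm columns
    G = Fn.T @ Fn                        # normalized Gram matrix
    off_diag = G - np.diag(np.diag(G))   # zero out the diagonal
    return float(np.sum(off_diag ** 2))

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(100, 4)))   # orthonormal columns
x = rng.normal(size=(100, 1))
print(orthogonality_penalty(Q))                  # ~0: orthogonal features
print(orthogonality_penalty(np.hstack([x, x])))  # 2.0: fully correlated pair
```

When such a penalty is driven to zero the Gram matrix of the features approaches the identity, so in the feature coordinates the loss surface is better conditioned and plain gradient descent steps resemble Newton steps — consistent with the connection the abstract describes, without forming the Hessian.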
Main Author: | Wang, Wei-Chen |
---|---|
Other Authors: | Zheng, Lizhong |
Format: | Thesis |
Published: | Massachusetts Institute of Technology, 2022 |
Online Access: | https://hdl.handle.net/1721.1/144614 https://orcid.org/0000-0003-0824-5945 |
_version_ | 1811078511249588224 |
---|---|
author | Wang, Wei-Chen |
author2 | Zheng, Lizhong |
author_facet | Zheng, Lizhong Wang, Wei-Chen |
author_sort | Wang, Wei-Chen |
collection | MIT |
description | When designing deep neural networks (DNN), the number of nodes in hidden layers can have a profound impact on the performance of the model. The information carried by the nodes in each layer creates a subspace, whose dimensionality is determined by the number of nodes and their linear dependency. This paper focuses on highly compressed DNNs – networks with significantly fewer nodes in the last hidden layer than in the output layer. Each node in the last hidden layer is considered a feature function, and we study how the orthogonality of the feature functions changes throughout the training process. We first develop how information is learned, stored, and updated in the DNN throughout training, and propose an algorithm which regulates the orthogonality before and during training. Our experiment on a high-dimensional Gaussian mixture dataset reveals that the algorithm achieves higher orthogonality in the feature functions and accelerates network convergence. Orthogonalizing the feature functions enables us to approximate Newton's method via the gradient descent algorithm: we can take advantage of the superior convergence properties of second-order optimization without directly computing the Hessian matrix. |
first_indexed | 2024-09-23T11:01:20Z |
format | Thesis |
id | mit-1721.1/144614 |
institution | Massachusetts Institute of Technology |
last_indexed | 2024-09-23T11:01:20Z |
publishDate | 2022 |
publisher | Massachusetts Institute of Technology |
record_format | dspace |
spelling | mit-1721.1/1446142022-08-30T03:45:01Z Regulating Orthogonality Of Feature Functions For Highly Compressed Deep Neural Networks Wang, Wei-Chen Zheng, Lizhong Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science When designing deep neural networks (DNN), the number of nodes in hidden layers can have a profound impact on the performance of the model. The information carried by the nodes in each layer creates a subspace, whose dimensionality is determined by the number of nodes and their linear dependency. This paper focuses on highly compressed DNNs – networks with significantly fewer nodes in the last hidden layer than in the output layer. Each node in the last hidden layer is considered a feature function, and we study how the orthogonality of the feature functions changes throughout the training process. We first develop how information is learned, stored, and updated in the DNN throughout training, and propose an algorithm which regulates the orthogonality before and during training. Our experiment on a high-dimensional Gaussian mixture dataset reveals that the algorithm achieves higher orthogonality in the feature functions and accelerates network convergence. Orthogonalizing the feature functions enables us to approximate Newton's method via the gradient descent algorithm: we can take advantage of the superior convergence properties of second-order optimization without directly computing the Hessian matrix. S.M. 2022-08-29T15:59:42Z 2022-08-29T15:59:42Z 2022-05 2022-06-21T19:25:43.296Z Thesis https://hdl.handle.net/1721.1/144614 https://orcid.org/0000-0003-0824-5945 In Copyright - Educational Use Permitted Copyright MIT http://rightsstatements.org/page/InC-EDU/1.0/ application/pdf Massachusetts Institute of Technology |
spellingShingle | Wang, Wei-Chen Regulating Orthogonality Of Feature Functions For Highly Compressed Deep Neural Networks |
title | Regulating Orthogonality Of Feature Functions For Highly Compressed Deep Neural Networks |
title_full | Regulating Orthogonality Of Feature Functions For Highly Compressed Deep Neural Networks |
title_fullStr | Regulating Orthogonality Of Feature Functions For Highly Compressed Deep Neural Networks |
title_full_unstemmed | Regulating Orthogonality Of Feature Functions For Highly Compressed Deep Neural Networks |
title_short | Regulating Orthogonality Of Feature Functions For Highly Compressed Deep Neural Networks |
title_sort | regulating orthogonality of feature functions for highly compressed deep neural networks |
url | https://hdl.handle.net/1721.1/144614 https://orcid.org/0000-0003-0824-5945 |
work_keys_str_mv | AT weichenwang regulatingorthogonalityoffeaturefunctionsforhighlycompresseddeepneuralnetworks |