Towards a universal mechanism for successful deep learning
Abstract: Recently, the underlying mechanism for successful deep learning (DL) was presented based on a quantitative method that measures the quality of a single filter in each layer of a DL model, particularly VGG-16 trained on CIFAR-10. This method exemplifies that each filter identifies small clusters of possible output labels, with additional noise selected as labels outside the clusters. This feature is progressively sharpened with each layer, resulting in an enhanced signal-to-noise ratio (SNR), which leads to an increase in the accuracy of the DL network. In this study, this mechanism is verified for VGG-16 and EfficientNet-B0 trained on the CIFAR-100 and ImageNet datasets, and the main results are as follows. First, the accuracy and SNR progressively increase with the layers. Second, for a given deep architecture, the maximal error rate increases approximately linearly with the number of output labels. Third, similar trends were obtained for dataset labels in the range [3, 1000], thus supporting the universality of this mechanism. Understanding the performance of a single filter and its dominating features paves the way to highly dilute the deep architecture without affecting its overall accuracy, and this can be achieved by applying the filter’s cluster connections (AFCC).
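The abstract describes a per-filter quality measure: each filter responds strongly to a small cluster of output labels, and the ratio of in-cluster to out-of-cluster response (the SNR) sharpens with depth. The following minimal NumPy sketch illustrates one way such a per-filter SNR could be estimated from class-averaged activations; the function name, the fixed cluster size, and the simple top-k definition of a "cluster" are illustrative assumptions, not the procedure actually used in the paper.

```python
# Illustrative sketch only (not the authors' code): estimate a per-filter
# signal-to-noise ratio from class-averaged filter activations. A filter's
# "cluster" is taken here to be its top-k most activated labels; the paper
# defines the cluster and its quality quantitatively, which this does not reproduce.
import numpy as np

def filter_snr(class_avg_activations: np.ndarray, cluster_size: int = 5) -> np.ndarray:
    """class_avg_activations: shape (num_filters, num_labels), each entry the mean
    activation of a filter over validation images of one label.
    Returns one SNR estimate per filter: mean activation over the filter's
    top-`cluster_size` labels (signal) divided by the mean over the rest (noise)."""
    sorted_acts = np.sort(class_avg_activations, axis=1)[:, ::-1]  # descending per filter
    signal = sorted_acts[:, :cluster_size].mean(axis=1)
    noise = sorted_acts[:, cluster_size:].mean(axis=1) + 1e-12     # avoid division by zero
    return signal / noise

# Toy usage: 64 filters, 100 labels (e.g. CIFAR-100), random activations as stand-ins.
rng = np.random.default_rng(0)
acts = rng.random((64, 100))
acts[:, :5] += 1.0  # plant a synthetic 5-label "cluster" so the SNR is visibly > 1
print(filter_snr(acts).round(2))
```

Under this toy definition, a filter whose activation is concentrated on a few labels yields a high SNR, matching the abstract's claim that deeper layers show progressively sharper clusters.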
Main Authors: | Yuval Meir, Yarden Tzach, Shiri Hodassman, Ofek Tevet, Ido Kanter |
---|---|
Affiliation: | Department of Physics, Bar-Ilan University |
Format: | Article |
Language: | English |
Published: | Nature Portfolio, 2024-03-01 |
Series: | Scientific Reports |
ISSN: | 2045-2322 |
Online Access: | https://doi.org/10.1038/s41598-024-56609-x |