Efficient shallow learning as an alternative to deep learning

Abstract: The realization of complex classification tasks requires the training of deep learning (DL) architectures consisting of tens or even hundreds of convolutional and fully connected hidden layers, which is far from the reality of the human brain. According to the DL rationale, the first convolutional layer reveals localized patterns in the input, and the following layers reveal increasingly large-scale patterns, until a class of inputs is reliably characterized. Here, we demonstrate that with a fixed ratio between the depths of the first and second convolutional layers, the error rates of the generalized shallow LeNet architecture, consisting of only five layers, decay as a power law with the number of filters in the first convolutional layer. Extrapolation of this power law indicates that the generalized LeNet can achieve the small error rates previously obtained for the CIFAR-10 database using DL architectures. A power law with a similar exponent also characterizes the generalized VGG-16 architecture; however, it requires significantly more operations than LeNet to achieve a given error rate. This power-law behavior governs various generalized LeNet and VGG-16 architectures, hinting at universal behavior and suggesting a quantitative hierarchical time–space complexity among machine learning architectures. Additionally, a conservation law along the convolutional layers, in which the square root of each layer's size times its depth is held constant, is found to asymptotically minimize error rates. The efficient shallow learning demonstrated in this study calls for further quantitative examination using various databases and architectures, as well as its accelerated implementation on future dedicated hardware.
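The two quantitative claims in the abstract can be written schematically as follows; this is a sketch only, and the symbols (d_1, s_l, A, rho) are illustrative labels rather than notation taken from the paper. With d_1 filters in the first convolutional layer and a fixed depth ratio to the second, the test error is reported to decay as a power law in d_1, and error rates are asymptotically minimized when a conserved quantity is kept constant across the convolutional layers:

    \epsilon(d_1) \approx A \, d_1^{-\rho}, \qquad \sqrt{s_\ell} \, d_\ell \approx \mathrm{const.} \ \text{for every convolutional layer } \ell,

where s_l denotes the size of layer l, d_l its depth (number of filters), and A and rho are fitted constants; the precise definitions follow the paper.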
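Below is a minimal PyTorch sketch of a five-layer "generalized LeNet" of the kind the abstract describes. It is not the authors' implementation: the kernel sizes, activations, and fully connected widths are the classic LeNet-5 defaults, assumed here only for illustration. The parameter d1 sets the number of filters in the first convolutional layer, and the second layer's depth is tied to it by a fixed ratio, so the width can be scaled while the five-layer depth stays fixed.

# Sketch of a generalized LeNet for CIFAR-10 (not the authors' code).
# Two convolutional layers with a fixed depth ratio, then three fully
# connected layers; d1 is the number of filters in the first conv layer.
import torch
import torch.nn as nn

class GeneralizedLeNet(nn.Module):
    def __init__(self, d1: int = 6, ratio: float = 16 / 6, num_classes: int = 10):
        super().__init__()
        d2 = int(round(ratio * d1))           # fixed depth ratio between conv layers
        self.features = nn.Sequential(
            nn.Conv2d(3, d1, kernel_size=5),  # 32x32 -> 28x28
            nn.ReLU(),
            nn.MaxPool2d(2),                  # 28x28 -> 14x14
            nn.Conv2d(d1, d2, kernel_size=5), # 14x14 -> 10x10
            nn.ReLU(),
            nn.MaxPool2d(2),                  # 10x10 -> 5x5
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(d2 * 5 * 5, 120),
            nn.ReLU(),
            nn.Linear(120, 84),
            nn.ReLU(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: widen the first layer while keeping the depth ratio fixed.
model = GeneralizedLeNet(d1=64)
logits = model(torch.randn(1, 3, 32, 32))
print(logits.shape)  # torch.Size([1, 10])

Scaling d1 upward under this construction is the experiment whose error rates the abstract reports as following a power law.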


Bibliographic Details
Main Authors: Yuval Meir, Ofek Tevet, Yarden Tzach, Shiri Hodassman, Ronit D. Gross, Ido Kanter (Department of Physics, Bar-Ilan University)
Format: Article
Language: English
Published: Nature Portfolio, 2023-04-01
Series: Scientific Reports
ISSN: 2045-2322
Online Access: https://doi.org/10.1038/s41598-023-32559-8