Towards Full Forward On-Tiny-Device Learning: A Guided Search for a Randomly Initialized Neural Network
In the context of TinyML, many research efforts have been devoted to designing forward topologies to support On-Device Learning. Reaching this target would bring numerous advantages, including reductions in latency and computational complexity, stronger privacy, data safety and robustness to adversarial attacks, higher resilience against concept drift, etc.
Main Authors: | Danilo Pau, Andrea Pisani, Antonio Candelieri |
Format: | Article |
Language: | English |
Published: | MDPI AG, 2024-01-01 |
Series: | Algorithms |
Subjects: | Bayesian optimization; extreme learning machine; feature extraction; hyperparameter optimization; neural architecture search; on-tiny-device learning |
Online Access: | https://www.mdpi.com/1999-4893/17/1/22 |
_version_ | 1797340230510968832 |
author | Danilo Pau; Andrea Pisani; Antonio Candelieri |
author_facet | Danilo Pau; Andrea Pisani; Antonio Candelieri |
author_sort | Danilo Pau |
collection | DOAJ |
description | In the context of TinyML, many research efforts have been devoted to designing forward topologies to support On-Device Learning. Reaching this target would bring numerous advantages, including reductions in latency and computational complexity, stronger privacy, data safety and robustness to adversarial attacks, higher resilience against concept drift, etc. However, On-Device Learning on resource-constrained devices faces severe limitations in computational power and memory. Therefore, deploying Neural Networks on tiny devices appears to be prohibitive, since their backpropagation-based training is too memory-demanding for their embedded assets. Using Extreme Learning Machines based on Convolutional Neural Networks might be feasible and very convenient, especially for Feature Extraction tasks. However, it requires searching for a randomly initialized topology that achieves results as good as those achieved by the backpropagated model. This work proposes a novel approach, based on Neural Architecture Search and Bayesian Optimization, for automatically composing an Extreme Convolutional Feature Extractor. It was evaluated on the CIFAR-10 and MNIST datasets. Two search spaces were defined, along with a search strategy that was tested with two surrogate models: Gaussian Process and Random Forest. A performance estimation strategy was defined, keeping the feature set computed by the MLCommons-Tiny benchmark ResNet as the reference model. In as few as 1200 search iterations, the proposed strategy achieved a topology whose extracted features scored a mean square error of 0.64 with respect to the reference set. Further improvements are required, with a target of at least a one-order-of-magnitude decrease in mean square error for improved classification accuracy. The code is made available via GitHub to allow for the reproducibility of the results reported in this paper. |
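The core idea in the abstract, searching for a randomly initialized (never-trained) convolutional feature extractor whose features minimize the mean square error against a reference feature set, can be sketched in plain NumPy. This is an illustrative stand-in, not the authors' implementation: `extreme_conv_features`, `guided_search`, the candidate dictionaries, and the toy reference set are all assumptions made for this sketch. The paper drives the loop with Bayesian Optimization (Gaussian Process or Random Forest surrogates) and scores candidates against features computed by the MLCommons-Tiny benchmark ResNet; plain enumeration is used here only to keep the sketch dependency-free.

```python
import numpy as np

def extreme_conv_features(images, n_filters, ksize, seed):
    """Random, frozen conv layer + ReLU + global average pooling:
    a toy Extreme Convolutional Feature Extractor. Weights are drawn
    once from a fixed seed and never trained."""
    rng = np.random.default_rng(seed)
    kernels = rng.standard_normal((n_filters, ksize, ksize)) * 0.1
    # Sliding windows over each image: shape (n, oh, ow, ksize, ksize)
    windows = np.lib.stride_tricks.sliding_window_view(
        images, (ksize, ksize), axis=(1, 2))
    fmap = np.einsum('nxyij,fij->nxyf', windows, kernels)  # valid convolution
    fmap = np.maximum(fmap, 0.0)        # ReLU
    return fmap.mean(axis=(1, 2))       # global average pool -> (n, n_filters)

def guided_search(images, reference_features, candidates):
    """Return (mse, candidate) minimizing the MSE between candidate
    features and the reference feature set. The paper guides this loop
    with a surrogate model (GP or Random Forest); here it is exhaustive."""
    best = (np.inf, None)
    for cand in candidates:
        feats = extreme_conv_features(images, **cand)
        mse = float(np.mean((feats - reference_features) ** 2))
        if mse < best[0]:
            best = (mse, cand)
    return best

# Tiny demo: the reference features come from one known topology, and the
# search recovers it (MSE = 0) among a handful of candidates.
imgs = np.random.default_rng(0).standard_normal((4, 28, 28))
reference = extreme_conv_features(imgs, n_filters=8, ksize=3, seed=123)
candidates = [dict(n_filters=8, ksize=k, seed=s)
              for k in (3, 5) for s in (7, 42, 123)]
best_mse, best_cand = guided_search(imgs, reference, candidates)
```

In the paper's setting, the candidate space is a full NAS search space (layer counts, kernel sizes, filter widths) rather than a seed-and-kernel grid, and the surrogate model proposes the next candidate instead of enumerating them all.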
first_indexed | 2024-03-08T10:00:00Z |
format | Article |
id | doaj.art-95e96f972b4343518097cc97edae7b27 |
institution | Directory Open Access Journal |
issn | 1999-4893 |
language | English |
last_indexed | 2024-03-08T10:00:00Z |
publishDate | 2024-01-01 |
publisher | MDPI AG |
record_format | Article |
series | Algorithms |
spelling | doaj.art-95e96f972b4343518097cc97edae7b27 | 2024-01-29T13:41:23Z | eng | MDPI AG | Algorithms | 1999-4893 | 2024-01-01 | vol. 17, iss. 1, art. 22 | 10.3390/a17010022 | Towards Full Forward On-Tiny-Device Learning: A Guided Search for a Randomly Initialized Neural Network | Danilo Pau (System Research and Applications, STMicroelectronics, via C. Olivetti 2, 20864 Agrate Brianza, MB, Italy); Andrea Pisani (System Research and Applications, STMicroelectronics, via C. Olivetti 2, 20864 Agrate Brianza, MB, Italy); Antonio Candelieri (Department of Economics, Management and Statistics, University of Milan-Bicocca, Piazza dell’Ateneo Nuovo 1, 20126 Milano, MI, Italy) | https://www.mdpi.com/1999-4893/17/1/22 | Bayesian optimization; extreme learning machine; feature extraction; hyperparameter optimization; neural architecture search; on-tiny-device learning |
spellingShingle | Danilo Pau; Andrea Pisani; Antonio Candelieri | Towards Full Forward On-Tiny-Device Learning: A Guided Search for a Randomly Initialized Neural Network | Algorithms | Bayesian optimization; extreme learning machine; feature extraction; hyperparameter optimization; neural architecture search; on-tiny-device learning |
title | Towards Full Forward On-Tiny-Device Learning: A Guided Search for a Randomly Initialized Neural Network |
title_full | Towards Full Forward On-Tiny-Device Learning: A Guided Search for a Randomly Initialized Neural Network |
title_fullStr | Towards Full Forward On-Tiny-Device Learning: A Guided Search for a Randomly Initialized Neural Network |
title_full_unstemmed | Towards Full Forward On-Tiny-Device Learning: A Guided Search for a Randomly Initialized Neural Network |
title_short | Towards Full Forward On-Tiny-Device Learning: A Guided Search for a Randomly Initialized Neural Network |
title_sort | towards full forward on tiny device learning a guided search for a randomly initialized neural network |
topic | Bayesian optimization; extreme learning machine; feature extraction; hyperparameter optimization; neural architecture search; on-tiny-device learning |
url | https://www.mdpi.com/1999-4893/17/1/22 |
work_keys_str_mv | AT danilopau towardsfullforwardontinydevicelearningaguidedsearchforarandomlyinitializedneuralnetwork AT andreapisani towardsfullforwardontinydevicelearningaguidedsearchforarandomlyinitializedneuralnetwork AT antoniocandelieri towardsfullforwardontinydevicelearningaguidedsearchforarandomlyinitializedneuralnetwork |