Parsimonious Optimization of Multitask Neural Network Hyperparameters

Neural networks are rapidly gaining popularity in chemical modeling and Quantitative Structure–Activity Relationship (QSAR) studies thanks to their ability to handle multitask problems. However, the outcomes of neural networks depend on the tuning of several hyperparameters, whose small variations can often strongly affect performance. Hence, optimization is a fundamental step in training neural networks, although in many cases it is computationally very expensive. In this study, we compared four of the most widely used approaches for hyperparameter tuning, namely grid search, random search, tree-structured Parzen estimator, and genetic algorithms, on three multitask QSAR datasets. We mainly focused on parsimonious optimization and therefore took into account not only the performance of the neural networks but also the computational time required. Furthermore, since the optimization approaches do not directly provide information about the influence of hyperparameters, we applied experimental design strategies to determine their effects on neural network performance. We found that genetic algorithms, the tree-structured Parzen estimator, and random search require on average 0.08% of the hours required by grid search; in addition, the tree-structured Parzen estimator and genetic algorithms provide better results than random search.
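The record itself contains no code, but the comparison described in the abstract can be illustrated with a short, self-contained sketch. The example below is not from the paper: it assumes the third-party Optuna library, uses a synthetic objective as a stand-in for training and cross-validating a multitask QSAR neural network, and uses illustrative hyperparameter names and ranges (learning rate, hidden-layer size, dropout). It contrasts random search with the tree-structured Parzen estimator (TPE) under an identical trial budget, which is the kind of like-for-like, parsimony-oriented comparison the study performs.

```python
# Minimal sketch (not from the paper): random search vs. the tree-structured
# Parzen estimator (TPE) under the same trial budget, using Optuna.
# The objective below is a synthetic stand-in for training and
# cross-validating a multitask QSAR neural network.
import math

import optuna


def objective(trial: optuna.Trial) -> float:
    # Illustrative hyperparameters; names and ranges are assumptions.
    lr = trial.suggest_float("learning_rate", 1e-4, 1e-1, log=True)
    n_hidden = trial.suggest_int("n_hidden", 16, 512, log=True)
    dropout = trial.suggest_float("dropout", 0.0, 0.5)

    # Synthetic "validation score" that peaks near lr=1e-2, n_hidden=128,
    # dropout=0.2; higher is better.
    score = (
        -((math.log10(lr) + 2.0) ** 2)
        - ((math.log2(n_hidden) - 7.0) ** 2) / 10.0
        - ((dropout - 0.2) ** 2) * 5.0
    )
    return score


def run(sampler: optuna.samplers.BaseSampler, n_trials: int = 50) -> optuna.Study:
    # Same trial budget for every sampler so the comparison is fair.
    study = optuna.create_study(direction="maximize", sampler=sampler)
    study.optimize(objective, n_trials=n_trials)
    return study


if __name__ == "__main__":
    optuna.logging.set_verbosity(optuna.logging.WARNING)
    random_study = run(optuna.samplers.RandomSampler(seed=0))
    tpe_study = run(optuna.samplers.TPESampler(seed=0))
    print("Random search best:", round(random_study.best_value, 3), random_study.best_params)
    print("TPE best:          ", round(tpe_study.best_value, 3), tpe_study.best_params)
```

Under an equal budget, TPE typically concentrates later trials near promising regions of the search space, which is consistent with the abstract's finding that TPE and genetic algorithms outperform random search while using a small fraction of the time needed by grid search.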

Bibliographic Details
Main Authors: Cecile Valsecchi, Viviana Consonni, Roberto Todeschini, Marco Emilio Orlandi, Fabio Gosetti, Davide Ballabio
Format: Article
Language: English
Published: MDPI AG, 2021-11-01
Series: Molecules
Subjects: neural networks; optimization; genetic algorithms; grid search; random search; tree-structured Parzen estimator
Online Access: https://www.mdpi.com/1420-3049/26/23/7254
Author affiliations: Department of Earth and Environmental Sciences, University of Milano-Bicocca, Piazza della Scienza 1, 20126 Milano, Italy (all six authors)
Citation: Molecules, 2021, vol. 26, no. 23, article 7254
ISSN: 1420-3049
DOI: 10.3390/molecules26237254
DOAJ record: doaj.art-6efa002dff22447b824520ca9609aeec