Deep Neural Network Models for the Prediction of the Aggregate Base Course Compaction Parameters
Laboratory tests for the estimation of the compaction parameters, namely the maximum dry density (MDD) and the optimum moisture content (OMC), are time-consuming and costly. Thus, this paper employs the artificial neural network (ANN) technique to predict the OMC and MDD of the aggregate base course from relatively easier index property tests. The grain size distribution, plastic limit, and liquid limit are used as the inputs for the development of the ANNs. In this study, 240 ANNs are tested to choose the optimum ANN that produces the best predictions. The paper focuses on the impact of three different activation functions, the number of hidden layers, and the number of neurons per hidden layer on the predictions, and heatmaps are generated to compare the performance of every ANN setting. Results show that the optimum ANN hyperparameters change depending on the predicted parameter. The hyperbolic tangent is the most efficient activation function, as it outperforms the other two activation functions. Moreover, the simplest ANN architectures produce the best predictions, as the performance of the ANNs deteriorates as the number of hidden layers or the number of neurons per hidden layer increases.
Main Author: | Kareem Othman |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2021-12-01 |
Series: | Designs |
Subjects: | artificial neural networks; Atterberg limits; compaction parameters; machine learning; standard Proctor test |
Online Access: | https://www.mdpi.com/2411-9660/5/4/78 |
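The abstract describes a sweep over 240 ANN configurations that vary the activation function, the number of hidden layers, and the number of neurons per hidden layer, scored and compared via heatmaps. The sketch below illustrates that kind of search with scikit-learn's MLPRegressor; it is not the paper's code, and the file name, column names, and grid sizes are illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of a hyperparameter sweep over
# activation functions, hidden-layer counts, and neurons per layer for
# predicting OMC (or MDD) from index properties.
# The file name, column names, and grid sizes are assumptions; the assumed
# grid here has fewer than the paper's 240 combinations.
import pandas as pd
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical dataset: grain-size fractions and Atterberg limits as inputs.
df = pd.read_csv("base_course_index_properties.csv")  # assumed file
X = df[["gravel_pct", "sand_pct", "fines_pct", "liquid_limit", "plastic_limit"]]
y = df["OMC"]  # rerun with y = df["MDD"] for the second target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

scores = {}
for activation in ("tanh", "relu", "logistic"):      # three activation functions
    for n_layers in (1, 2, 3, 4):                    # hidden layers (assumed grid)
        for n_neurons in (2, 4, 8, 16, 32):          # neurons per layer (assumed grid)
            model = make_pipeline(
                StandardScaler(),
                MLPRegressor(
                    hidden_layer_sizes=(n_neurons,) * n_layers,
                    activation=activation,
                    max_iter=5000,
                    random_state=0,
                ),
            )
            model.fit(X_tr, y_tr)
            scores[(activation, n_layers, n_neurons)] = r2_score(
                y_te, model.predict(X_te)
            )

# The best-scoring (activation, layers, neurons) triple is the "optimum" ANN;
# pivoting `scores` into a layers-by-neurons table per activation function
# gives a heatmap-style comparison of the configurations.
best = max(scores, key=scores.get)
print("Best configuration:", best, "R^2 =", round(scores[best], 3))
```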
author | Kareem Othman |
collection | DOAJ |
description | Laboratory tests for the estimation of the compaction parameters, namely the maximum dry density (MDD) and the optimum moisture content (OMC), are time-consuming and costly. Thus, this paper employs the artificial neural network (ANN) technique to predict the OMC and MDD of the aggregate base course from relatively easier index property tests. The grain size distribution, plastic limit, and liquid limit are used as the inputs for the development of the ANNs. In this study, 240 ANNs are tested to choose the optimum ANN that produces the best predictions. The paper focuses on the impact of three different activation functions, the number of hidden layers, and the number of neurons per hidden layer on the predictions, and heatmaps are generated to compare the performance of every ANN setting. Results show that the optimum ANN hyperparameters change depending on the predicted parameter. The hyperbolic tangent is the most efficient activation function, as it outperforms the other two activation functions. Moreover, the simplest ANN architectures produce the best predictions, as the performance of the ANNs deteriorates as the number of hidden layers or the number of neurons per hidden layer increases. |
first_indexed | 2024-03-10T04:19:55Z |
format | Article |
id | doaj.art-58144b64bda645ada312560f05fbe1a0 |
institution | Directory Open Access Journal |
issn | 2411-9660 |
language | English |
last_indexed | 2024-03-10T04:19:55Z |
publishDate | 2021-12-01 |
publisher | MDPI AG |
series | Designs |
spelling | Designs, vol. 5, iss. 4, art. 78 (2021-12-01); doi: 10.3390/designs5040078; Kareem Othman, Civil Engineering Department, University of Toronto, 35 St. George, Toronto, ON M5S 1A4, Canada |
title | Deep Neural Network Models for the Prediction of the Aggregate Base Course Compaction Parameters |
topic | artificial neural networks; Atterberg limits; compaction parameters; machine learning; standard Proctor test |
url | https://www.mdpi.com/2411-9660/5/4/78 |