To what extent is tuned neural network pruning beneficial in software effort estimation?


Bibliographic Details
Main Author: Muhammed Maruf Ozturk
Format: Article
Language:English
Published: Vladimir Andrunachievici Institute of Mathematics and Computer Science 2021-12-01
Series:Computer Science Journal of Moldova
Online Access:http://www.math.md/files/csjm/v29-n3/v29-n3-(pp340-365).pdf
Description
Summary:Software effort estimation (SEE) is of great importance for planning the budgets of future projects. SEE models have evolved alongside advances in hardware technology, but building such models on neural networks considerably increases the computational burden. Neural network pruning may offer a suitable way to alleviate that burden: by detecting the elements that make insignificant contributions to the output of a trained neural network, a reliable yet smaller model can be obtained. Done carelessly, however, pruning may discard valuable information extracted from the trained network. In this work, the effects of pruning a multi-layer perceptron (MLP) are investigated for SEE. To evaluate those effects experimentally, eight SEE data sets are employed. To find the optimal configuration of the MLP, four optimization methods are utilized along with two pruning techniques. The results show that each optimization method has a distinctive threshold at which pruning should be suspended. For the established model to reach a low SEE error, the number of features with low standard deviations should exceed the number of features with high standard deviations. If the hyperparameters of the pruning algorithm are tuned, a genetic algorithm is recommended for obtaining high classification accuracy. This work provides a guideline for researchers to understand the effectiveness of neural network pruning in SEE.
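The pruning idea summarized above (zeroing out elements that contribute little to a trained network's output) can be illustrated with a minimal sketch. This is not the paper's algorithm; the function name, the single-layer weight matrix, and the relative-magnitude cutoff rule are all illustrative assumptions:

```python
# Minimal sketch of magnitude-based weight pruning for one MLP layer.
# Assumption: a weight is "insignificant" if its absolute value falls
# below a fraction of the layer's largest absolute weight. This cutoff
# rule is illustrative, not the criterion used in the article.

def prune_layer(weights, rel_threshold=0.1):
    """Zero out weights with |w| below rel_threshold * max|w|.

    Returns the pruned weight matrix and the count of surviving weights.
    """
    largest = max(abs(w) for row in weights for w in row)
    cutoff = rel_threshold * largest
    pruned = [[0.0 if abs(w) < cutoff else w for w in row]
              for row in weights]
    kept = sum(1 for row in pruned for w in row if w != 0.0)
    return pruned, kept

# Toy 2x3 weight matrix: small-magnitude weights are removed.
layer = [[0.8, -0.02, 0.3],
         [0.05, -0.9, 0.01]]
pruned, kept = prune_layer(layer)
print(kept)  # 3 weights survive the 10% cutoff
```

In practice the cutoff (here `rel_threshold`) is itself a hyperparameter of the pruning procedure, which is why the article tunes such hyperparameters, e.g. with a genetic algorithm, rather than fixing them a priori.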
ISSN:1561-4042