Robust incremental growing multi-experts network
Main Authors: | Loo, C.K.; Rajeswari, M.; Rao, M.V.C. |
---|---|
Format: | Article |
Published: | 2006 |
Subjects: | T Technology (General) |
description | Most supervised neural networks are trained by minimizing the mean square error (MSE) over the training set. In the presence of outliers, the resulting neural network model can differ significantly from the underlying model that generated the data. This paper outlines two robust learning methods for a dynamic-structure neural network called the incremental growing multi-experts network (IGMN). It is shown by simulation that using a scaled robust objective function instead of the least-squares function eliminates the influence of outliers in the training data, and the network produces a much better approximation in the neighborhood of outliers. The two proposed robust learning methods, robust least mean squares (RLMS) and least mean log squares (LMLS), are thus insensitive to outliers, unlike the least mean squares (LMS) cost function. Moreover, LMLS is parameter-free, so various types of supervised learning algorithms can adopt it easily. |
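To make the contrast in the abstract concrete, here is a minimal sketch (not taken from the paper) comparing an LMS cost with a least-mean-log-squares cost. It assumes the common parameter-free form ρ(e) = log(1 + e²/2) for LMLS; the exact objective used for IGMN training may differ, and the example data are hypothetical.

```python
import numpy as np

def lms_loss(errors):
    """Least mean squares (LMS) cost: the penalty grows quadratically,
    so a single large outlier residual dominates the objective."""
    return np.mean(errors ** 2)

def lmls_loss(errors):
    """Least mean log squares (LMLS) cost, assuming the common
    parameter-free form rho(e) = log(1 + e^2 / 2). The logarithm
    flattens the penalty for large residuals, so an outlier adds
    only a slowly growing term. Its influence function
    rho'(e) = e / (1 + e^2 / 2) is bounded, unlike the LMS case."""
    return np.mean(np.log1p(0.5 * errors ** 2))

# Effect of one outlier on each objective (hypothetical residuals):
errors = np.array([0.1, -0.2, 0.05, 10.0])  # last residual is an outlier
print(lms_loss(errors))   # ~25.0: dominated by the outlier
print(lmls_loss(errors))  # ~0.99: outlier contributes only ~log(51)/4
```

Because the LMLS influence function is bounded, a gradient-based update driven by an outlier stays bounded as well, which is the sense in which the learning rule is described as insensitive to outliers.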
institution | Universiti Malaya |
url | http://eprints.um.edu.my/5180/ |
citation | Loo, C.K., Rajeswari, M. and Rao, M.V.C. (2006) Robust incremental growing multi-experts network. Applied Soft Computing, 6 (2), pp. 139-153. ISSN 1568-4946. http://www.sciencedirect.com/science/article/pii/S1568494605000050 |