Extreme Learning Regression for nu Regularization
Extreme learning machine for regression (ELR), though efficient, is not preferred in time-limited applications because its model selection time is large. To overcome this problem, we reformulate ELR with a new regularization parameter nu (nu-ELR), inspired by Schölkopf et al. The regularization parameter nu is bounded between 0 and 1 and is easier to interpret than C. In this paper, we propose using the active set algorithm to solve the quadratic programming optimization problem of nu-ELR. Experimental results on real regression problems show that nu-ELR performs better than ELM, ELR, and nu-SVR, and is computationally efficient compared to other iterative learning models. Additionally, the model selection time of nu-ELR can be significantly shortened.
Main Authors: | Xiao-Jian Ding, Fan Yang, Jian Liu, Jie Cao |
---|---|
Format: | Article |
Language: | English |
Published: | Taylor & Francis Group, 2020-04-01 |
Series: | Applied Artificial Intelligence |
Online Access: | http://dx.doi.org/10.1080/08839514.2020.1723863 |
_version_ | 1797684872757641216 |
---|---|
author | Xiao-Jian Ding, Fan Yang, Jian Liu, Jie Cao |
author_facet | Xiao-Jian Ding, Fan Yang, Jian Liu, Jie Cao |
author_sort | Xiao-Jian Ding |
collection | DOAJ |
description | Extreme learning machine for regression (ELR), though efficient, is not preferred in time-limited applications because its model selection time is large. To overcome this problem, we reformulate ELR with a new regularization parameter nu (nu-ELR), inspired by Schölkopf et al. The regularization parameter nu is bounded between 0 and 1 and is easier to interpret than C. In this paper, we propose using the active set algorithm to solve the quadratic programming optimization problem of nu-ELR. Experimental results on real regression problems show that nu-ELR performs better than ELM, ELR, and nu-SVR, and is computationally efficient compared to other iterative learning models. Additionally, the model selection time of nu-ELR can be significantly shortened. |
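The description contrasts nu-ELR with the C-regularized ELM baseline (ELR) that it reformulates. The nu parameterization and the article's active set QP solver are not reproduced here; as an illustrative sketch only, the following shows the standard regularized ELM closed-form solution, beta = (H^T H + I/C)^(-1) H^T y, that the abstract's C parameter refers to. Function names and the toy data are hypothetical, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50, C=100.0):
    """Regularized ELM for regression (the ELR baseline).

    Hidden-layer weights are random and fixed; only the output
    weights beta are learned, in closed form:
        beta = (H^T H + I / C)^{-1} H^T y
    Larger C means weaker regularization.
    """
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                    # random nonlinear feature map
    A = H.T @ H + np.eye(n_hidden) / C        # ridge-regularized Gram matrix
    beta = np.linalg.solve(A, H.T @ y)        # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy 1-D regression problem: y = sin(x)
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
W, b, beta = elm_fit(X, y)
mse = float(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```

Model selection for ELR means searching over C (and the hidden-layer size), which is the cost the nu reformulation aims to reduce by replacing the unbounded C with a parameter confined to (0, 1].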
first_indexed | 2024-03-12T00:36:02Z |
format | Article |
id | doaj.art-446cb133301345f3b2565ca71b7c8553 |
institution | Directory Open Access Journal |
issn | 0883-9514 1087-6545 |
language | English |
last_indexed | 2024-03-12T00:36:02Z |
publishDate | 2020-04-01 |
publisher | Taylor & Francis Group |
record_format | Article |
series | Applied Artificial Intelligence |
spelling | doaj.art-446cb133301345f3b2565ca71b7c8553 | 2023-09-15T09:33:57Z | eng | Taylor & Francis Group | Applied Artificial Intelligence | 0883-9514, 1087-6545 | 2020-04-01 | vol. 34, no. 5, pp. 378-395 | 10.1080/08839514.2020.1723863 | Extreme Learning Regression for nu Regularization | Xiao-Jian Ding, Fan Yang, Jian Liu, Jie Cao (Nanjing University of Finance and Economics) | http://dx.doi.org/10.1080/08839514.2020.1723863 |
spellingShingle | Xiao-Jian Ding, Fan Yang, Jian Liu, Jie Cao | Extreme Learning Regression for nu Regularization | Applied Artificial Intelligence |
title | Extreme Learning Regression for nu Regularization |
title_full | Extreme Learning Regression for nu Regularization |
title_fullStr | Extreme Learning Regression for nu Regularization |
title_full_unstemmed | Extreme Learning Regression for nu Regularization |
title_short | Extreme Learning Regression for nu Regularization |
title_sort | extreme learning regression for nu regularization |
url | http://dx.doi.org/10.1080/08839514.2020.1723863 |
work_keys_str_mv | AT xiaojianding extremelearningregressionfornuregularization AT fanyang extremelearningregressionfornuregularization AT jianliu extremelearningregressionfornuregularization AT jiecao extremelearningregressionfornuregularization |