Extreme Learning Regression for nu Regularization

Bibliographic Details
Main Authors: Xiao-Jian Ding, Fan Yang, Jian Liu, Jie Cao
Format: Article
Language: English
Published: Taylor & Francis Group 2020-04-01
Series: Applied Artificial Intelligence
Online Access: http://dx.doi.org/10.1080/08839514.2020.1723863
Description
Summary: Extreme learning machine for regression (ELR), though efficient, is not preferred in time-limited applications because its model selection time is large. To overcome this problem, we reformulate ELR with a new regularization parameter nu (nu-ELR), inspired by Schölkopf et al. The regularization parameter nu is bounded between 0 and 1 and is easier to interpret than C. In this paper, we propose using the active set algorithm to solve the quadratic programming optimization problem of nu-ELR. Experimental results on real regression problems show that nu-ELR performs better than ELM, ELR, and nu-SVR, and is computationally efficient compared to other iterative learning models. Additionally, the model selection time of nu-ELR can be significantly shortened.
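The record does not reproduce the optimization problem itself. As orientation only, the following is a minimal sketch of a nu-style objective, assuming it parallels Schölkopf et al.'s nu-SVR with the ELM hidden-layer map h(x_i) and output weights beta standing in for the kernel expansion; the symbols N, C, epsilon, xi_i, and xi_i* are illustrative and the paper's actual formulation may differ.

\[
\min_{\beta,\;\xi,\;\xi^{*},\;\varepsilon \ge 0}\quad
\frac{1}{2}\lVert \beta \rVert^{2}
\;+\; C\!\left(\nu\,\varepsilon \;+\; \frac{1}{N}\sum_{i=1}^{N}\bigl(\xi_{i} + \xi_{i}^{*}\bigr)\right)
\]
\[
\text{subject to}\quad
\mathbf{h}(x_{i})\,\beta - y_{i} \le \varepsilon + \xi_{i},\qquad
y_{i} - \mathbf{h}(x_{i})\,\beta \le \varepsilon + \xi_{i}^{*},\qquad
\xi_{i},\,\xi_{i}^{*} \ge 0 .
\]

In nu-SVR, nu upper-bounds the fraction of samples lying outside the epsilon-tube and lower-bounds the fraction of support vectors, which is why a value between 0 and 1 is easier to set than the unbounded C; the dual of such a problem is a quadratic program, the setting in which an active set solver applies.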
ISSN: 0883-9514, 1087-6545