Summary: | As a fast training algorithm for single-hidden-layer feedforward networks, the extreme learning machine (ELM) randomly initializes the input-layer weights and hidden-layer biases and obtains the output-layer weights analytically. It thereby overcomes many shortcomings of gradient-based learning algorithms, such as local minima, inappropriate learning rates, and slow learning speed. However, ELM still suffers from overfitting and poor stability, especially on large-scale datasets. This paper proposes an ensemble method of diverse regularized extreme learning machines (DRELM) to address these problems. First, each ELM base learner draws its own randomly distributed weights to ensure diversity among the learners, and leave-one-out (LOO) cross-validation with the MSE-PRESS criterion is used to find the optimal number of hidden nodes for each base learner and to compute its optimal hidden-layer output weights, yielding base learners that are both accurate and mutually different. Then a new penalty term for diversity is explicitly added to the objective function, and the output weight matrix of each learner is updated iteratively. Finally, the output of the whole network model is obtained by averaging the outputs of all base learners. This method effectively realizes an ensemble of regularized extreme learning machines (RELM) with both accuracy and diversity. Experimental results on 10 UCI datasets demonstrate the effectiveness of DRELM.
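To make the pipeline concrete, below is a minimal sketch, not the authors' implementation, of the standard building blocks the summary names: one RELM base learner with random input weights and ridge-regularized analytic output weights, the PRESS shortcut for the LOO error used to select the hidden node number, and the final averaging of base-learner outputs. The sigmoid activation, the regularization constant `reg`, and all function names are illustrative assumptions; the iterative update with the paper's diversity penalty is not sketched, since its exact form is not given in the summary.

```python
import numpy as np

def train_relm(X, T, n_hidden, reg=1e-2, rng=None):
    """Train one RELM base learner (illustrative, not the paper's code).

    X: (n, d) inputs; T: (n, k) targets (e.g. one-hot labels).
    Input weights/biases are random -- the source of diversity between
    base learners; output weights are the ridge least-squares solution
    beta = (H^T H + reg*I)^{-1} H^T T.
    """
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden-layer output (sigmoid assumed)
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def mse_press(H, T, reg):
    """LOO error via the PRESS shortcut for ridge regression:
    e_loo_i = e_i / (1 - hat_ii), HAT = H (H^T H + reg*I)^{-1} H^T.
    Evaluating this over candidate hidden node counts gives an
    analytic stand-in for explicit LOO cross-validation."""
    A = np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T)
    HAT = H @ A
    residuals = T - HAT @ T
    loo = residuals / (1.0 - np.diag(HAT))[:, None]
    return np.mean(loo ** 2)

def predict_relm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

def ensemble_predict(X, learners):
    """Final DRELM step: average the outputs of all base learners."""
    return np.mean([predict_relm(X, *p) for p in learners], axis=0)
```

As a usage sketch, one would train several base learners with different random seeds, pick each learner's hidden node count by minimizing `mse_press` over a candidate grid, and classify a sample by the argmax of `ensemble_predict`.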