Semi-Supervised Minimum Error Entropy Principle with Distributed Method
The minimum error entropy principle (MEE) is an alternative to classical least squares, valued for its robustness to non-Gaussian noise. This paper studies a gradient descent algorithm for MEE that combines a semi-supervised approach with a distributed method, and shows that exploiting the additional information carried by unlabeled data can enhance the learning ability of the distributed MEE algorithm. Our result proves that the mean squared error of the distributed gradient descent MEE algorithm can be minimax optimal for regression even when the number of local machines grows polynomially with the total sample size.
Main Authors: | Baobin Wang, Ting Hu |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2018-12-01 |
Series: | Entropy |
Subjects: | information theoretical learning; distributed method; MEE algorithm; semi-supervised approach; gradient descent; reproducing kernel Hilbert spaces |
Online Access: | https://www.mdpi.com/1099-4300/20/12/968 |
_version_ | 1798025687554064384 |
---|---|
author | Baobin Wang; Ting Hu |
author_facet | Baobin Wang; Ting Hu |
author_sort | Baobin Wang |
collection | DOAJ |
description | The minimum error entropy principle (MEE) is an alternative to classical least squares, valued for its robustness to non-Gaussian noise. This paper studies a gradient descent algorithm for MEE that combines a semi-supervised approach with a distributed method, and shows that exploiting the additional information carried by unlabeled data can enhance the learning ability of the distributed MEE algorithm. Our result proves that the mean squared error of the distributed gradient descent MEE algorithm can be minimax optimal for regression even when the number of local machines grows polynomially with the total sample size. |
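The abstract describes the method only at a high level. As a rough illustration of the kind of estimator involved, the Python sketch below runs distributed gradient ascent on the empirical information potential (the sample form of the MEE criterion) over a Gaussian reproducing kernel Hilbert space, with a shared pool of unlabeled inputs serving as kernel expansion points and the global predictor obtained by averaging the local ones. This is a minimal sketch under our own assumptions, not the paper's algorithm: the function names (`mee_gradient_ascent`, `distributed_mee`), step size, kernel widths, and the re-centering step are illustrative choices.

```python
import numpy as np

def gaussian_kernel(A, B, width=1.0):
    """Gaussian kernel matrix K[i, j] = exp(-||A_i - B_j||^2 / (2 width^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def mee_gradient_ascent(X, y, X_pool, sigma=1.0, width=1.0, step=0.2, iters=500):
    """Maximize the empirical information potential
        V(f) = (1/m^2) * sum_{i,k} exp(-(e_i - e_k)^2 / (2 sigma^2)),
    with residuals e_i = y_i - f(x_i) and f = sum_j alpha_j K(., z_j),
    the expansion points z_j taken from X_pool (labeled plus unlabeled inputs).
    Maximizing V corresponds to minimizing Renyi's quadratic error entropy."""
    K = gaussian_kernel(X, X_pool, width)          # m x p design matrix
    alpha = np.zeros(X_pool.shape[0])
    m = len(y)
    for _ in range(iters):
        e = y - K @ alpha                          # residuals on labeled data
        diff = e[:, None] - e[None, :]             # pairwise residual gaps
        W = np.exp(-diff ** 2 / (2.0 * sigma ** 2)) * diff / sigma ** 2
        grad = (2.0 / m ** 2) * K.T @ W.sum(axis=1)  # dV / d alpha
        alpha += step * grad                       # gradient ascent on V
    # The MEE criterion is shift-invariant, so re-center with the mean residual.
    bias = float(np.mean(y - K @ alpha))
    return alpha, bias

def distributed_mee(X, y, X_unlabeled, n_machines=4, width=1.0, **kwargs):
    """Divide-and-conquer: split the labeled sample across machines, run the
    local MEE gradient ascent against the shared unlabeled pool, and average
    the local estimators to obtain the global predictor."""
    alphas, biases = [], []
    for idx in np.array_split(np.arange(len(y)), n_machines):
        a, b = mee_gradient_ascent(X[idx], y[idx], X_unlabeled, width=width, **kwargs)
        alphas.append(a)
        biases.append(b)
    alpha_bar = np.mean(alphas, axis=0)
    bias_bar = float(np.mean(biases))

    def predict(X_new):
        return gaussian_kernel(X_new, X_unlabeled, width) @ alpha_bar + bias_bar

    return predict

# Toy usage: sine regression with heavy-tailed (non-Gaussian) noise.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + 0.2 * rng.standard_t(df=2, size=200)
X_unlabeled = rng.uniform(-3, 3, (100, 1))         # unlabeled inputs, no responses
f_hat = distributed_mee(X, y, X_unlabeled, n_machines=4)
print(f_hat(np.array([[0.0], [1.5]])))
```

Averaging the local coefficient vectors is valid here only because every machine expands over the same unlabeled pool; this mirrors the semi-supervised idea that unlabeled inputs give each local machine a richer common representation, which is the mechanism the abstract credits for the improved learning ability.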
first_indexed | 2024-04-11T18:23:56Z |
format | Article |
id | doaj.art-20d6bfde0fae4081b20fad5d1f1c9247 |
institution | Directory Open Access Journal |
issn | 1099-4300 |
language | English |
last_indexed | 2024-04-11T18:23:56Z |
publishDate | 2018-12-01 |
publisher | MDPI AG |
record_format | Article |
series | Entropy |
spelling | doaj.art-20d6bfde0fae4081b20fad5d1f1c9247 (indexed 2022-12-22T04:09:42Z, eng). Baobin Wang (School of Mathematics and Statistics, South-Central University for Nationalities, Wuhan 430074, China); Ting Hu (School of Mathematics and Statistics, Wuhan University, Wuhan 430072, China). Semi-Supervised Minimum Error Entropy Principle with Distributed Method. Entropy (MDPI AG, ISSN 1099-4300), 2018-12-01, vol. 20, no. 12, art. 968 (e20120968), doi:10.3390/e20120968. https://www.mdpi.com/1099-4300/20/12/968. Keywords: information theoretical learning; distributed method; MEE algorithm; semi-supervised approach; gradient descent; reproducing kernel Hilbert spaces. |
title | Semi-Supervised Minimum Error Entropy Principle with Distributed Method |
title_full | Semi-Supervised Minimum Error Entropy Principle with Distributed Method |
title_fullStr | Semi-Supervised Minimum Error Entropy Principle with Distributed Method |
title_full_unstemmed | Semi-Supervised Minimum Error Entropy Principle with Distributed Method |
title_short | Semi-Supervised Minimum Error Entropy Principle with Distributed Method |
title_sort | semi supervised minimum error entropy principle with distributed method |
topic | information theoretical learning distributed method MEE algorithm semi-supervised approach gradient descent reproducing kernel Hilbert spaces |
url | https://www.mdpi.com/1099-4300/20/12/968 |
work_keys_str_mv | AT baobinwang semisupervisedminimumerrorentropyprinciplewithdistributedmethod AT tinghu semisupervisedminimumerrorentropyprinciplewithdistributedmethod |