Kernel Risk-Sensitive Mean <i>p</i>-Power Error Algorithms for Robust Learning
As a nonlinear similarity measure defined in the reproducing kernel Hilbert space (RKHS), the correntropic loss (C-Loss) has been widely applied in robust learning and signal processing. However, the highly non-convex nature of the C-Loss results in performance degradation. To address this issue, the convex kernel risk-sensitive loss (KRL) was proposed to measure similarity in the RKHS; it is a risk-sensitive loss defined as the expectation of an exponential function of the squared estimation error. In this paper, a novel nonlinear similarity measure, the kernel risk-sensitive mean <i>p</i>-power error (KRP), is proposed by incorporating the mean <i>p</i>-power error into the KRL, yielding a generalization of the KRL measure. The KRP with <i>p</i> = 2 reduces to the KRL, and can outperform the KRL when an appropriate <i>p</i> is chosen in robust learning. Some properties of the KRP are presented and discussed. To improve the robustness of the kernel recursive least squares (KRLS) algorithm and reduce its network size, two robust recursive kernel adaptive filters, the recursive minimum kernel risk-sensitive mean <i>p</i>-power error (RMKRP) algorithm and its quantized version (QRMKRP), are proposed in the RKHS under the minimum kernel risk-sensitive mean <i>p</i>-power error (MKRP) criterion. Monte Carlo simulations confirm the superiority of the proposed RMKRP and its quantized version.
Main Authors: | Tao Zhang, Shiyuan Wang, Haonan Zhang, Kui Xiong, Lin Wang |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2019-06-01 |
Series: | Entropy |
Subjects: | correntropic; quantized; kernel risk-sensitive mean p-power error; recursive; kernel adaptive filters |
Online Access: | https://www.mdpi.com/1099-4300/21/6/588 |
author | Tao Zhang; Shiyuan Wang; Haonan Zhang; Kui Xiong; Lin Wang |
collection | DOAJ |
description | As a nonlinear similarity measure defined in the reproducing kernel Hilbert space (RKHS), the correntropic loss (C-Loss) has been widely applied in robust learning and signal processing. However, the highly non-convex nature of the C-Loss results in performance degradation. To address this issue, the convex kernel risk-sensitive loss (KRL) was proposed to measure similarity in the RKHS; it is a risk-sensitive loss defined as the expectation of an exponential function of the squared estimation error. In this paper, a novel nonlinear similarity measure, the kernel risk-sensitive mean <i>p</i>-power error (KRP), is proposed by incorporating the mean <i>p</i>-power error into the KRL, yielding a generalization of the KRL measure. The KRP with <inline-formula> <math display="inline"> <semantics> <mrow> <mi>p</mi> <mo>=</mo> <mn>2</mn> </mrow> </semantics> </math> </inline-formula> reduces to the KRL, and can outperform the KRL when an appropriate <i>p</i> is chosen in robust learning. Some properties of the KRP are presented and discussed. To improve the robustness of the kernel recursive least squares (KRLS) algorithm and reduce its network size, two robust recursive kernel adaptive filters, the recursive minimum kernel risk-sensitive mean <i>p</i>-power error (RMKRP) algorithm and its quantized version (QRMKRP), are proposed in the RKHS under the minimum kernel risk-sensitive mean <i>p</i>-power error (MKRP) criterion. Monte Carlo simulations confirm the superiority of the proposed RMKRP and its quantized version. |
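The abstract's KRP idea can be illustrated numerically. The sketch below is a minimal, hypothetical Python rendering (the names `gaussian_kernel` and `krp_loss` are ours, not the paper's), assuming a Gaussian kernel and a sample-average estimate of the expectation; it uses the kernel trick identity ||φ(x) − φ(y)||² = 2(1 − κ_σ(x − y)), so the exact scaling constants in the paper's definition may differ:

```python
import numpy as np

def gaussian_kernel(e, sigma=1.0):
    """Gaussian kernel evaluated at the estimation error e."""
    return np.exp(-np.square(e) / (2.0 * sigma**2))

def krp_loss(errors, p=2.0, lam=1.0, sigma=1.0):
    """Empirical KRP-style loss over a sample of estimation errors.

    By the kernel trick, the squared RKHS distance between feature maps is
    2 * (1 - kappa_sigma(error)), so the p-th power of the RKHS distance is
    (2 * (1 - kappa))**(p / 2).  For p = 2 this reduces to a KRL-style
    risk-sensitive loss (the expectation of an exponential of the squared
    RKHS error).
    """
    errors = np.asarray(errors, dtype=float)
    rkhs_dist_sq = 2.0 * (1.0 - gaussian_kernel(errors, sigma))
    return np.mean(np.exp(lam * rkhs_dist_sq ** (p / 2.0))) / lam
```

Because the Gaussian kernel lies in (0, 1], the RKHS distance is bounded by √2, so this loss saturates for arbitrarily large outlier errors; that boundedness is the mechanism behind the robustness claimed for KRL/KRP-type criteria, in contrast to the unbounded growth of a plain mean p-power error.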
format | Article |
id | doaj.art-4c29b1fc19114afb94c0c2d5f93e3337 |
institution | Directory Open Access Journal |
issn | 1099-4300 |
language | English |
publishDate | 2019-06-01 |
publisher | MDPI AG |
record_format | Article |
series | Entropy |
spelling | Entropy, Vol. 21, No. 6, Art. 588; MDPI AG; published 2019-06-01; DOI: 10.3390/e21060588. Authors: Tao Zhang, Shiyuan Wang, Haonan Zhang, Kui Xiong, Lin Wang, all with the College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China. Online: https://www.mdpi.com/1099-4300/21/6/588 |
title | Kernel Risk-Sensitive Mean <i>p</i>-Power Error Algorithms for Robust Learning |
topic | correntropic quantized kernel risk-sensitive mean p-power error recursive kernel adaptive filters |
url | https://www.mdpi.com/1099-4300/21/6/588 |