Gradient Learning under Tilted Empirical Risk Minimization

Gradient Learning (GL), which aims to estimate the gradient of a target function, has attracted much attention in variable selection problems due to its mild structural requirements and wide applicability. Despite rapid progress, the majority of existing GL works are based on the empirical risk minimization (ERM) principle, which may suffer degraded performance in complex data environments, e.g., under non-Gaussian noise. To alleviate this sensitivity, we propose a new GL model based on the tilted ERM criterion and establish its theoretical support from the function approximation viewpoint. Specifically, the operator approximation technique plays a crucial role in our analysis. To solve the proposed learning objective, a gradient descent method is developed, and its convergence analysis is provided. Finally, simulated experimental results validate the effectiveness of our approach when the input variables are correlated.
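For context, the tilted ERM criterion mentioned in the abstract is usually stated as a log-sum-exp reweighting of per-sample losses; the general form below (with tilt parameter t and per-sample losses \ell(z_i;\theta)) is the standard TERM formulation from the literature, not necessarily the exact gradient-learning objective of this paper:

\tilde{R}_t(\theta) \;=\; \frac{1}{t}\,\log\!\Big(\frac{1}{n}\sum_{i=1}^{n} e^{\,t\,\ell(z_i;\theta)}\Big), \qquad \lim_{t\to 0}\tilde{R}_t(\theta) \;=\; \frac{1}{n}\sum_{i=1}^{n}\ell(z_i;\theta).

A negative tilt t downweights the largest losses (robustness to outliers and heavy-tailed noise), a positive tilt emphasizes them, and ordinary ERM is recovered in the limit t -> 0.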

Bibliographic Details
Main Authors: Liyuan Liu, Biqin Song, Zhibin Pan, Chuanwu Yang, Chi Xiao, Weifu Li
Format: Article
Language: English
Published: MDPI AG, 2022-07-01
Series: Entropy
Subjects: gradient learning; operator approximation; reproducing kernel Hilbert spaces; tilted empirical risk minimization
Online Access: https://www.mdpi.com/1099-4300/24/7/956
_version_ 1797439635553517568
author Liyuan Liu
Biqin Song
Zhibin Pan
Chuanwu Yang
Chi Xiao
Weifu Li
author_facet Liyuan Liu
Biqin Song
Zhibin Pan
Chuanwu Yang
Chi Xiao
Weifu Li
author_sort Liyuan Liu
collection DOAJ
description Gradient Learning (GL), which aims to estimate the gradient of a target function, has attracted much attention in variable selection problems due to its mild structural requirements and wide applicability. Despite rapid progress, the majority of existing GL works are based on the empirical risk minimization (ERM) principle, which may suffer degraded performance in complex data environments, e.g., under non-Gaussian noise. To alleviate this sensitivity, we propose a new GL model based on the tilted ERM criterion and establish its theoretical support from the function approximation viewpoint. Specifically, the operator approximation technique plays a crucial role in our analysis. To solve the proposed learning objective, a gradient descent method is developed, and its convergence analysis is provided. Finally, simulated experimental results validate the effectiveness of our approach when the input variables are correlated.
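As a minimal sketch only (the linear model, squared loss, tilt value, step size, and synthetic data below are illustrative assumptions, not the paper's gradient-learning model or its proposed algorithm), gradient descent on a tilted empirical risk can be written in a few lines of NumPy:

import numpy as np

def tilted_risk_grad(w, X, y, t):
    # Gradient of (1/t) * log(mean(exp(t * loss_i))) with squared loss
    # loss_i = 0.5 * (x_i^T w - y_i)^2; it equals a weighted ERM gradient
    # whose weights are a softmax over the tilted per-sample losses.
    r = X @ w - y                  # residuals
    a = t * 0.5 * r ** 2           # tilted per-sample losses
    p = np.exp(a - a.max())
    p /= p.sum()                   # tilt-induced sample weights
    return X.T @ (p * r)

def tilted_gd(X, y, t=-1.0, lr=0.05, steps=2000):
    # Plain gradient descent on the tilted empirical risk.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * tilted_risk_grad(w, X, y, t)
    return w

# Toy usage: linear regression under heavy-tailed (non-Gaussian) noise;
# the negative tilt downweights the largest losses.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.arange(1.0, 6.0) + 0.5 * rng.standard_t(df=2, size=200)
w_hat = tilted_gd(X, y)

With a negative tilt, each update concentrates less on outlying samples, which is the kind of robustness to non-Gaussian noise that the description refers to.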
first_indexed 2024-03-09T11:55:57Z
format Article
id doaj.art-cc41992bb9794a6aaadbe41d2c9b176d
institution Directory Open Access Journal
issn 1099-4300
language English
last_indexed 2024-03-09T11:55:57Z
publishDate 2022-07-01
publisher MDPI AG
record_format Article
series Entropy
spelling doaj.art-cc41992bb9794a6aaadbe41d2c9b176d (last updated 2023-11-30T23:09:30Z, English). Gradient Learning under Tilted Empirical Risk Minimization. Entropy, MDPI AG, ISSN 1099-4300, vol. 24, no. 7, article 956, published 2022-07-01, doi:10.3390/e24070956. Authors and affiliations: Liyuan Liu, Biqin Song, Zhibin Pan, and Weifu Li (College of Science, Huazhong Agricultural University, Wuhan 430062, China); Chuanwu Yang (School of Electronic Information and Communications, Huazhong University of Science and Technology, Wuhan 430074, China); Chi Xiao (Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou 570228, China). Abstract as given above. Keywords: gradient learning; operator approximation; reproducing kernel Hilbert spaces; tilted empirical risk minimization. Online access: https://www.mdpi.com/1099-4300/24/7/956
spellingShingle Liyuan Liu
Biqin Song
Zhibin Pan
Chuanwu Yang
Chi Xiao
Weifu Li
Gradient Learning under Tilted Empirical Risk Minimization
Entropy
gradient learning
operator approximation
reproducing kernel Hilbert spaces
tilted empirical risk minimization
title Gradient Learning under Tilted Empirical Risk Minimization
title_full Gradient Learning under Tilted Empirical Risk Minimization
title_fullStr Gradient Learning under Tilted Empirical Risk Minimization
title_full_unstemmed Gradient Learning under Tilted Empirical Risk Minimization
title_short Gradient Learning under Tilted Empirical Risk Minimization
title_sort gradient learning under tilted empirical risk minimization
topic gradient learning
operator approximation
reproducing kernel Hilbert spaces
tilted empirical risk minimization
url https://www.mdpi.com/1099-4300/24/7/956
work_keys_str_mv AT liyuanliu gradientlearningundertiltedempiricalriskminimization
AT biqinsong gradientlearningundertiltedempiricalriskminimization
AT zhibinpan gradientlearningundertiltedempiricalriskminimization
AT chuanwuyang gradientlearningundertiltedempiricalriskminimization
AT chixiao gradientlearningundertiltedempiricalriskminimization
AT weifuli gradientlearningundertiltedempiricalriskminimization