Difference of two norms-regularizations for Q-Lasso
The focus of this paper is on Q-Lasso, introduced in Alghamdi et al. (2013), which extended the Lasso of Tibshirani (1996). The closed convex subset Q, lying in a Euclidean m-space for m∈ℕ, is the set of errors when linear measurements are taken to recover a signal/image via the Lasso. Based on a...
Main Author: | Abdellatif Moudafi |
---|---|
Format: | Article |
Language: | English |
Published: | Emerald Publishing, 2021-01-01 |
Series: | Applied Computing and Informatics |
Subjects: | Q-Lasso; Split feasibility; Soft-thresholding; DC-regularization; Proximal gradient algorithm; Majorized penalty algorithm |
Online Access: | https://www.emerald.com/insight/content/doi/10.1016/j.aci.2018.07.002/full/pdf |
_version_ | 1797796846125449216 |
---|---|
author | Abdellatif Moudafi |
author_facet | Abdellatif Moudafi |
author_sort | Abdellatif Moudafi |
collection | DOAJ |
description | The focus of this paper is on Q-Lasso, introduced in Alghamdi et al. (2013), which extended the Lasso of Tibshirani (1996). The closed convex subset Q, lying in a Euclidean m-space for m∈ℕ, is the set of errors when linear measurements are taken to recover a signal/image via the Lasso. Building on recent work by Wang (2013), we are interested in two new penalty methods for Q-Lasso that rely on two types of difference-of-convex-functions (DC for short) programming, in which the DC objective functions are the difference of the l1 and lσq norms and the difference of the l1 and lr norms with r>1. By means of a generalized q-term shrinkage operator that exploits the special structure of the lσq norm, we design a proximal gradient algorithm for handling the DC l1−lσq model. Then, based on a majorization scheme, we develop a majorized penalty algorithm for the DC l1−lr model. Convergence results for both new algorithms are presented as well. We emphasize that extensive simulation results in the case Q={b} show that these two new algorithms offer improved signal recovery performance and require reduced computational effort relative to state-of-the-art l1 and lp (p∈(0,1)) models; see Wang (2013). We also devise two DC algorithms in the spirit of a paper in which an exact DC representation of the cardinality constraint is investigated; that work also used the largest-q norm lσq and presented numerical results showing the efficiency of the DC algorithm in comparison with methods using other penalty terms in the context of quadratic programming; see Jun-ya et al. (2017). |
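For context, a minimal formal sketch of the two DC-regularized objectives the abstract names. The definition of the largest-q norm, the penalty weight λ, and the use of the Q-Lasso data-fit term ½ dist²(Ax, Q) are notational assumptions inferred from the abstract and from Alghamdi et al. (2013), not reproduced from the paper:

```latex
% Largest-q norm: sum of the q largest entries in magnitude, where
% |x|_{[1]} >= |x|_{[2]} >= ... >= |x|_{[n]} sorts |x_i| decreasingly.
\[
  \|x\|_{\sigma_q} = \sum_{i=1}^{q} |x|_{[i]}
\]
% Assumed form of the two DC models (\lambda > 0 is a penalty weight):
\[
  \min_{x\in\mathbb{R}^n} \tfrac12\,\mathrm{dist}^2(Ax,\,Q)
    + \lambda\bigl(\|x\|_1 - \|x\|_{\sigma_q}\bigr),
  \qquad
  \min_{x\in\mathbb{R}^n} \tfrac12\,\mathrm{dist}^2(Ax,\,Q)
    + \lambda\bigl(\|x\|_1 - \|x\|_r\bigr),\; r > 1.
\]
```

Both penalties are genuinely DC: ||x||₁, the largest-q norm, and ||x||ᵣ are all convex, so each objective is a smooth convex data-fit term plus a difference of two convex functions.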
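Also, a hedged sketch of the proximal gradient iteration for the l1−lσq model in the simulated case Q={b}, where dist²(Ax, Q) reduces to ||Ax−b||². The shrinkage operator below follows the common "truncated l1" reading of a generalized q-term shrinkage (soft-threshold every entry except the q largest in magnitude); the function names, step-size handling, and tie-breaking are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def soft_threshold(x, lam):
    """Componentwise soft-thresholding: the prox of lam * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def q_term_shrinkage(x, lam, q):
    """Illustrative generalized q-term shrinkage: soft-threshold all
    entries except the q largest in magnitude, which pass through
    unchanged. This is one standard prox for lam*(||.||_1 - ||.||_{sigma_q});
    the paper's exact operator may differ (e.g. in tie-breaking)."""
    y = soft_threshold(x, lam)
    if q > 0:
        top = np.argsort(np.abs(x))[-q:]  # indices of the q largest |x_i|
        y[top] = x[top]                   # leave the top-q entries unshrunk
    return y

def prox_grad_q_lasso(A, b, lam, q, step, n_iter=500):
    """Hypothetical proximal-gradient loop for the DC l1 - l_{sigma_q}
    model in the special case Q = {b}, where the data-fit term is
    0.5 * ||Ax - b||^2."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)  # gradient of the smooth data-fit term
        x = q_term_shrinkage(x - step * grad, step * lam, q)
    return x
```

As usual for proximal gradient methods, a step size no larger than 1/||AᵀA|| keeps the gradient step well behaved; the paper's convergence conditions are stated in the article itself.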
first_indexed | 2024-03-13T03:39:13Z |
format | Article |
id | doaj.art-296ccd7c133748ac8e5f1d622233c29e |
institution | Directory Open Access Journal |
issn | 2634-1964 2210-8327 |
language | English |
last_indexed | 2024-03-13T03:39:13Z |
publishDate | 2021-01-01 |
publisher | Emerald Publishing |
record_format | Article |
series | Applied Computing and Informatics |
spelling | doaj.art-296ccd7c133748ac8e5f1d622233c29e 2023-06-23T09:38:01Z | eng | Emerald Publishing | Applied Computing and Informatics | ISSN 2634-1964; 2210-8327 | 2021-01-01 | vol. 17, no. 1, pp. 79–89 | doi:10.1016/j.aci.2018.07.002 | Difference of two norms-regularizations for Q-Lasso | Abdellatif Moudafi (Aix-Marseille Université, L.I.S UMR CNRS 7020, Domaine Universitaire de Saint-Jérome, Marseille, France) | abstract and subjects as in the description and topic fields | https://www.emerald.com/insight/content/doi/10.1016/j.aci.2018.07.002/full/pdf |
spellingShingle | Abdellatif Moudafi | Difference of two norms-regularizations for Q-Lasso | Applied Computing and Informatics | Q-Lasso; Split feasibility; Soft-thresholding; DC-regularization; Proximal gradient algorithm; Majorized penalty algorithm |
title | Difference of two norms-regularizations for Q-Lasso |
title_full | Difference of two norms-regularizations for Q-Lasso |
title_fullStr | Difference of two norms-regularizations for Q-Lasso |
title_full_unstemmed | Difference of two norms-regularizations for Q-Lasso |
title_short | Difference of two norms-regularizations for Q-Lasso |
title_sort | difference of two norms regularizations for q lasso |
topic | Q-Lasso; Split feasibility; Soft-thresholding; DC-regularization; Proximal gradient algorithm; Majorized penalty algorithm |
url | https://www.emerald.com/insight/content/doi/10.1016/j.aci.2018.07.002/full/pdf |
work_keys_str_mv | AT abdellatifmoudafi differenceoftwonormsregularizationsforqlasso |