A new class of self‐normalising LMS algorithms

Abstract: Many researchers and practitioners make heavy use of the least mean squares (LMS) algorithm as an efficient adaptive filter suitable for a multitude of problems. Despite being versatile and efficient, a drawback of this algorithm is that the adaptation rate, i.e. the step-size, has to be chosen very carefully to obtain the desired result: the optimum compromise between fast adaptation and a low steady-state error. This choice was simplified by the invention of the normalised LMS (NLMS), which bounds the step-size and guarantees convergence. However, the optimum choice of the normalisation becomes non-trivial if the system to be approximated is part of a larger, more complex model, e.g. cascaded filters or linear paths followed by nonlinearities. Such cases usually require approximations or worst-case estimates to yield a normalised update algorithm, which can result in sub-optimal performance. To counteract this problem, a new class of LMS algorithms that automatically choose their own normalisation terms, the so-called self-normalising LMS, is introduced. Simulations show that this new algorithm not only outperforms state-of-the-art solutions in terms of steady-state performance in a cascaded filter scenario, but also converges just as fast as all other considered algorithms.
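
The abstract contrasts the plain LMS with the normalised LMS (NLMS); the paper's own self-normalising update is not reproduced in this record. For orientation only, below is a minimal Python sketch of those two classical updates in a system-identification setting. All function names, parameter values and the toy system are illustrative assumptions, not taken from the paper.

import numpy as np

def lms_identify(x, d, num_taps, mu, normalised=True, eps=1e-8):
    # Estimate an FIR system from input x and desired output d.
    # Plain LMS:  w <- w + mu * e * u
    # NLMS:       w <- w + mu * e * u / (u'u + eps)
    # where u is the current input regressor and e = d[n] - w'u.
    w = np.zeros(num_taps)      # adaptive filter coefficients
    u = np.zeros(num_taps)      # regressor of the most recent input samples
    errors = np.empty(len(x))
    for n in range(len(x)):
        u = np.roll(u, 1)       # shift the regressor by one sample
        u[0] = x[n]             # insert the newest input sample
        e = d[n] - w @ u        # a-priori estimation error
        step = mu / (u @ u + eps) if normalised else mu
        w += step * e * u       # stochastic-gradient coefficient update
        errors[n] = e
    return w, errors

# Hypothetical usage: identify a known 4-tap FIR filter from noisy data.
rng = np.random.default_rng(0)
h = np.array([0.8, -0.4, 0.2, 0.1])                  # 'unknown' system
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)] + 1e-3 * rng.standard_normal(len(x))
w_hat, _ = lms_identify(x, d, num_taps=4, mu=0.5)    # 0 < mu < 2 for NLMS
print(np.round(w_hat, 3))                            # approaches h

Dividing the update by the regressor energy u'u bounds the effective step-size independently of the input power; this is the property the abstract says becomes hard to secure once the adapted filter sits inside a cascaded or nonlinear model.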


Bibliographic Details
Main Authors: Oliver Ploder, Oliver Lang, Thomas Paireder, Christian Motz, Mario Huemer
Format: Article
Language: English
Published: Wiley, 2022-06-01
Series: Electronics Letters
Online Access: https://doi.org/10.1049/ell2.12498
Collection: DOAJ (Directory of Open Access Journals)
ISSN: 0013-5194, 1350-911X
Volume/Issue: 58(12), pp. 492–494
Author Affiliations:
Oliver Ploder: Christian Doppler Laboratory for Digitally Assisted RF Transceivers for Future Mobile Communications, Institute of Signal Processing, Johannes Kepler University Linz, Austria
Oliver Lang: Institute of Signal Processing, Johannes Kepler University Linz, Austria
Thomas Paireder: Christian Doppler Laboratory for Digitally Assisted RF Transceivers for Future Mobile Communications, Institute of Signal Processing, Johannes Kepler University Linz, Austria
Christian Motz: Christian Doppler Laboratory for Digitally Assisted RF Transceivers for Future Mobile Communications, Institute of Signal Processing, Johannes Kepler University Linz, Austria
Mario Huemer: Institute of Signal Processing, Johannes Kepler University Linz, Austria