Low Computational Cost for Sample Entropy

Bibliographic Details
Main Authors: George Manis (Department of Computer Science and Engineering, University of Ioannina, Ioannina 45110, Greece), Md Aktaruzzaman (Department of Computer Science and Engineering, Islamic University Kushtia, Kushtia 7003, Bangladesh), Roberto Sassi (Dipartimento di Informatica, Università degli Studi di Milano, Crema 26013, Italy)
Format: Article
Language: English
Published: MDPI AG, 2018-01-01
Series: Entropy, Vol. 20, Iss. 1, Article 61
ISSN: 1099-4300
DOI: 10.3390/e20010061
Subjects: Sample Entropy; algorithm; fast computation; kd-trees; bucket-assisted algorithm
Online Access: http://www.mdpi.com/1099-4300/20/1/61

Description
Sample Entropy is the most popular definition of entropy and is widely used as a measure of the regularity/complexity of a time series. However, it is a computationally expensive method which may require a large amount of time when applied to long series or to a large number of signals. The computationally intensive part is the similarity check between points in m-dimensional space. In this paper, we propose new algorithms or extend already proposed ones, aiming to compute Sample Entropy quickly. All algorithms return exactly the same value for Sample Entropy, and no approximation techniques are used. We compare and evaluate them using cardiac inter-beat (RR) time series. We investigate three algorithms. The first is an extension of the kd-trees algorithm, customized for Sample Entropy. The second is an extension of an algorithm initially proposed for Approximate Entropy, again customized for Sample Entropy, but also improved to produce even faster results. The last is a completely new algorithm, presenting the fastest execution times for specific values of m, r, time series length, and signal characteristics. These algorithms are compared with the straightforward implementation, which follows directly from the definition of Sample Entropy, in order to give a clear picture of the speedups achieved. All algorithms assume the classical approach to the metric, in which the maximum norm is used. The key idea of the last two algorithms is to avoid unnecessary comparisons by detecting them early. We use the term unnecessary for those comparisons which we know a priori will fail the similarity check. The number of avoided comparisons proves to be very large, resulting in a correspondingly large reduction of execution time and making these the fastest algorithms available today for the computation of Sample Entropy.
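
To make the quantities in the abstract concrete, below is a minimal Python sketch of the straightforward implementation mentioned there, which follows directly from the definition of Sample Entropy with the maximum norm, together with a sorting-based variant illustrating the early rejection of comparisons known a priori to fail. Both are illustrations written for this record, not the authors' code: the function names and the sorting shortcut are our own, and the paper's kd-tree and bucket-assisted algorithms organize the search far more aggressively.

import math

def sample_entropy(x, m, r):
    """Straightforward Sample Entropy, directly from the definition.

    B counts pairs of length-m templates whose maximum-norm distance
    is at most r (self-matches excluded); A counts the same pairs
    extended to length m + 1.  SampEn = -ln(A / B).
    """
    n = len(x)
    a = b = 0
    # Every pair of templates is checked once (i < j): O(n^2)
    # similarity checks, which is the cost the paper's algorithms reduce.
    for i in range(n - m):
        for j in range(i + 1, n - m):
            if max(abs(x[i + k] - x[j + k]) for k in range(m)) <= r:
                b += 1
                if abs(x[i + m] - x[j + m]) <= r:  # extend to m + 1
                    a += 1
    return -math.log(a / b)  # undefined if a or b is zero

def sample_entropy_early_reject(x, m, r):
    """Returns the same value as sample_entropy(), but skips comparisons
    guaranteed to fail: under the maximum norm, two templates can only
    match if their first samples differ by at most r, so after sorting
    template indices by first sample each inner loop can stop early.
    (A sketch of the early-rejection idea only; not the paper's
    bucket-assisted algorithm.)
    """
    n = len(x)
    idx = sorted(range(n - m), key=lambda i: x[i])
    a = b = 0
    for p in range(len(idx)):
        i = idx[p]
        for q in range(p + 1, len(idx)):
            j = idx[q]
            if x[j] - x[i] > r:  # all later templates differ even more
                break            # in their first sample: stop early
            if max(abs(x[i + k] - x[j + k]) for k in range(m)) <= r:
                b += 1
                if abs(x[i + m] - x[j + m]) <= r:
                    a += 1
    return -math.log(a / b)

With typical parameters for RR series (e.g., m = 2 and r set to a fraction of the series' standard deviation), both functions return the same value; the second merely visits far fewer pairs when the signal's amplitude range is wide relative to r, which is the flavor of speedup the paper quantifies.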