Mutual Information Gain and Linear/Nonlinear Redundancy for Agent Learning, Sequence Analysis, and Modeling
In many applications, intelligent agents need to identify any structure or apparent randomness in an environment and respond appropriately. We use the relative entropy to separate and quantify the presence of both linear and nonlinear redundancy in a sequence, and we introduce the new quantities of total mutual information gain and incremental mutual information gain. We illustrate how these new quantities can be used to analyze and characterize the structure and apparent randomness of purely autoregressive sequences and of speech signals with long- and short-term linear redundancies. The mutual information gain is shown to be an important new tool for capturing and quantifying learning for sequence modeling and analysis.
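The quantities named in the abstract build on the standard identity that mutual information is the relative entropy between a joint distribution and the product of its marginals, I(X;Y) = D(p(x,y) ‖ p(x)p(y)). The sketch below is only an illustration of that idea for a purely autoregressive (Gaussian AR(1)) sequence: it estimates how much information the recent past carries about the next sample from linear prediction error variances, and reads "total" and "incremental" mutual information gain as the order-p gain and the increase from order 1 to order 2. Those readings, the function name `prediction_gain`, and all parameter values are assumptions made for illustration, not the paper's definitions or code.

```python
# Minimal illustration (not the paper's code): for a Gaussian sequence, the
# mutual information between x[t] and its p most recent past samples can be
# estimated from linear prediction error variances as
#   I_p = 0.5 * ln( var(x) / var(order-p prediction error) )   [nats].
import numpy as np

rng = np.random.default_rng(0)
a, n = 0.9, 100_000                  # assumed AR(1) coefficient and sample count
x = np.zeros(n)
for t in range(1, n):                # x[t] = a * x[t-1] + unit-variance white Gaussian noise
    x[t] = a * x[t - 1] + rng.standard_normal()

def prediction_gain(x, p):
    """Estimated mutual information (nats) between x[t] and (x[t-1], ..., x[t-p])."""
    # Stack the p most recent past samples as regressors for each target x[t].
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # order-p linear predictor
    err = y - X @ coef
    return 0.5 * np.log(np.var(y) / np.var(err))

I1, I2 = prediction_gain(x, 1), prediction_gain(x, 2)
print(f"order-1 gain: {I1:.3f} nats  (theory for AR(1): {-0.5 * np.log(1 - a**2):.3f})")
print(f"incremental gain, order 1 -> 2: {I2 - I1:.3f} nats (near 0 for an AR(1))")
```

Because an AR(1) sequence is first-order Markov, one past sample already captures all of its linear redundancy, so the incremental gain beyond order 1 should be close to zero in this sketch.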
Main Author: | Jerry D. Gibson
Format: | Article
Language: | English
Published: | MDPI AG, 2020-05-01
Series: | Entropy
Subjects: | agent learning; linear redundancy; nonlinear redundancy; mutual information gain
Online Access: | https://www.mdpi.com/1099-4300/22/6/608
author | Jerry D. Gibson |
collection | DOAJ |
format | Article |
id | doaj.art-5bd64d49927e4ed1a71df7c7f04d6fe7 |
institution | Directory of Open Access Journals
issn | 1099-4300 |
language | English |
publishDate | 2020-05-01 |
publisher | MDPI AG |
series | Entropy |
doi | 10.3390/e22060608
volume / issue / article | 22 / 6 / 608
author affiliation | Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106-9560, USA
title | Mutual Information Gain and Linear/Nonlinear Redundancy for Agent Learning, Sequence Analysis, and Modeling |
topic | agent learning; linear redundancy; nonlinear redundancy; mutual information gain
url | https://www.mdpi.com/1099-4300/22/6/608 |