Interpretability in neural networks towards universal consistency

In the challenge of Artificial Intelligence processing semantically evaluable information, the application of deep learning techniques depends not only on the algorithms, but also on the principles that explain how they work. A machine learning (ML) system can malfunction because of a lack of knowledge of the algorithm's intended behavior. The difficulty of debugging ML can be overcome with strategies based on the universal structure of language, which overlaps with the cognitive architecture of both biological and intelligent systems. The appropriate choice of an algorithm inspired by the functioning of human language gives the computational scientist methodological strategies to clarify its performance analysis, to optimize interpretative activity under good instrumentation of the system, and to reach the performance level of an application considered safe. Neurolinguistic principles that link interpretation to language and cognition; the semantic dimension that arises not only from the linguistic system but also from the context in which the information is produced; and the theoretical bases for understanding language as a 'form' (process) rather than a substance (set of signs) provide the groundwork for improving intelligent systems so that they have universal consistency and lessen the effects of the 'curse of dimensionality' or of bias in the system's interpretation. Semantics and statistics are considered in order to understand universal consistency, as opposed to ideal consistency, when evaluating a data set, since training alone is not sufficient to avoid data manipulation. We conclude that the 'key' for a good information classifier to achieve acceptable neural-network performance lies in the dynamic aspect of language (language as a form/process), which guides the apprehension of how neural networks access weights (values), replicates this in intelligent systems so that they are invariant to many input transformations, and guarantees an infinite amount of finite-sample information, avoiding semantic distortion.

Bibliographic Details
Main Authors: Dionéia Motta Monte-Serrat (University of Sao Paulo, Brazil; University of Ribeirao Preto, Brazil; IEL-Unicamp, Brazil), Carlo Cattani (University of Tuscia, Viterbo, Italy; corresponding author)
Format: Article
Language: English
Published: KeAi Communications Co., Ltd. 2021-06-01
Series: International Journal of Cognitive Computing in Engineering
ISSN: 2666-3074
Subjects: Intelligent systems; Interpretability; Language semantics; Universal consistency
Online Access: http://www.sciencedirect.com/science/article/pii/S266630742100005X
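
The abstract opposes universal consistency to ideal consistency when a data set is evaluated. For readers who want the statistical sense of that term, the sketch below is a minimal, self-contained illustration and not code or data from the article: it uses plain NumPy and a synthetic one-dimensional Gaussian problem chosen here for convenience. A k-nearest-neighbour rule whose k grows with the sample size (k → ∞ while k/n → 0) is universally consistent, so its test error approaches the Bayes error regardless of the underlying distribution (Stone's theorem).

```python
# Minimal illustration of universal consistency in the statistical-learning sense.
# The Gaussian toy problem, the sqrt(n) schedule for k, and all parameters are
# illustrative assumptions, not taken from Monte-Serrat and Cattani's article.
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def sample(n):
    """Two equiprobable classes: x | y=0 ~ N(-1, 1), x | y=1 ~ N(+1, 1)."""
    y = rng.integers(0, 2, size=n)
    x = rng.normal(loc=2.0 * y - 1.0, scale=1.0, size=n)
    return x, y

def knn_predict(x_train, y_train, x_query, k):
    """Plain k-NN majority vote on 1-D inputs (no library classifier needed)."""
    preds = np.empty(len(x_query), dtype=int)
    for i, q in enumerate(x_query):
        idx = np.argpartition(np.abs(x_train - q), k)[:k]  # k closest training points
        preds[i] = int(y_train[idx].mean() >= 0.5)          # majority vote
    return preds

# Bayes error for this problem: P(N(0,1) < -1) = Phi(-1) ~ 0.159
bayes_error = 0.5 * (1.0 + erf(-1.0 / sqrt(2.0)))
x_test, y_test = sample(2000)

for n in (100, 1000, 10000):
    x_tr, y_tr = sample(n)
    k = max(1, int(round(sqrt(n))))  # k grows with n while k/n shrinks
    err = np.mean(knn_predict(x_tr, y_tr, x_test, k) != y_test)
    print(f"n={n:6d}  k={k:3d}  test error={err:.3f}  (Bayes error={bayes_error:.3f})")
```

The sqrt(n) schedule is only one of many choices satisfying Stone's conditions; any k that grows unboundedly while k/n tends to zero gives the same limiting behaviour, which is what distinguishes a universally consistent rule from one tuned to an idealized, fixed data distribution.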