Graph-Powered Interpretable Machine Learning Models for Abnormality Detection in Ego-Things Network
In recent years, it has become essential to ensure that signal processing methods based on machine learning (ML) data-driven models can provide interpretable predictions. The interpretability of ML models can be defined as the capability to understand the reasons that contributed to...
Main Authors: Divya Thekke Kanapram, Lucio Marcenaro, David Martin Gomez, Carlo Regazzoni
Format: Article
Language: English
Published: MDPI AG, 2022-03-01
Series: Sensors
Online Access: https://www.mdpi.com/1424-8220/22/6/2260
Similar Items
- Adaptive Mobile Positioning in WCDMA Networks
  by: Dong B., et al.
  Published: (2005-01-01)
- Adaptive Mobile Positioning in WCDMA Networks
  by: Wang Xiaodong, et al.
  Published: (2005-01-01)
- Reconsidering the Nature of the Unconscious: A Question on Psychoanalysis in Literary Studies
  by: L. Suharjanto, SJ
  Published: (2017-01-01)
- Statistical Study of the Performance of Recursive Bayesian Filters with Abnormal Observations from Range Sensors
  by: Manuel Castellano-Quero, et al.
  Published: (2020-07-01)
- IoT and Interpretable Machine Learning Based Framework for Disease Prediction in Pearl Millet
  by: Nidhi Kundu, et al.
  Published: (2021-08-01)