Explainable artificial intelligence model to predict brain states from fNIRS signals
Objective: Most Deep Learning (DL) methods for the classification of functional Near-Infrared Spectroscopy (fNIRS) signals do so without explaining which features contribute to the classification of a task or imagery. An explainable artificial intelligence (xAI) system that can decompose the Deep Learning model’s output onto the input variables for fNIRS signals is described here.
Main Authors: | Caleb Jones Shibu, Sujesh Sreedharan, KM Arun, Chandrasekharan Kesavadas, Ranganatha Sitaram |
---|---|
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2023-01-01 |
Series: | Frontiers in Human Neuroscience |
Subjects: | brain state classification; functional near-infrared spectroscopy; brain-computer interface; deep learning; convolutional neural networks; long short-term memory |
Online Access: | https://www.frontiersin.org/articles/10.3389/fnhum.2022.1029784/full |
_version_ | 1797948169388032000 |
author | Caleb Jones Shibu; Sujesh Sreedharan; KM Arun; Chandrasekharan Kesavadas; Ranganatha Sitaram
author_facet | Caleb Jones Shibu; Sujesh Sreedharan; KM Arun; Chandrasekharan Kesavadas; Ranganatha Sitaram
author_sort | Caleb Jones Shibu |
collection | DOAJ |
description | Objective: Most Deep Learning (DL) methods for the classification of functional Near-Infrared Spectroscopy (fNIRS) signals do so without explaining which features contribute to the classification of a task or imagery. An explainable artificial intelligence (xAI) system that can decompose the Deep Learning model’s output onto the input variables for fNIRS signals is described here. Approach: We propose an xAI-fNIRS system that consists of a classification module and an explanation module. The classification module consists of two separately trained sliding window-based classifiers, namely, (i) a 1-D Convolutional Neural Network (CNN); and (ii) a Long Short-Term Memory (LSTM) network. The explanation module uses SHAP (SHapley Additive exPlanations) to explain the CNN model’s output in terms of the model’s input. Main results: We observed that the classification module was able to classify two types of datasets: (a) Motor task (MT), acquired from three subjects; and (b) Motor imagery (MI), acquired from 29 subjects, with an accuracy of over 96% for both the CNN and LSTM models. The explanation module was able to identify the channels contributing the most to the classification of MI or MT, and therefore to identify the channel locations and whether they correspond to oxy- or deoxy-hemoglobin levels at those locations. Significance: The xAI-fNIRS system can distinguish between the brain states related to overt and covert motor imagery from fNIRS signals with high classification accuracy and is able to explain the signal features that discriminate between the brain states of interest.
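To make the described pipeline concrete, below is a minimal sketch of the CNN branch of such a system: sliding-window segmentation, a 1-D CNN classifier, and a SHAP step that decomposes the model's output onto the input and aggregates attributions per channel. This is not the authors' implementation; all sizes and names (N_CHANNELS, WINDOW, FNIRSCNN, sliding_windows) are illustrative assumptions, the model is untrained, and the shap calls follow that package's documented DeepExplainer pattern, whose return shapes vary across versions.

```python
# Minimal sketch (not the authors' code) of the xAI-fNIRS idea from the
# abstract: a sliding window-based 1-D CNN classifier whose output is then
# decomposed onto the input channels with SHAP. All sizes and names below
# (N_CHANNELS, WINDOW, FNIRSCNN, ...) are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
import shap

N_CHANNELS = 40   # assumed number of fNIRS channels (HbO and HbR combined)
WINDOW = 64       # assumed samples per sliding window
N_CLASSES = 2     # e.g., task vs. rest


def sliding_windows(recording: np.ndarray, window: int = WINDOW, step: int = 8) -> np.ndarray:
    """Cut a (channels, time) recording into overlapping (channels, window) segments."""
    starts = range(0, recording.shape[1] - window + 1, step)
    return np.stack([recording[:, s:s + window] for s in starts])


class FNIRSCNN(nn.Module):
    """1-D CNN over time; fNIRS channels enter as input feature maps."""

    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Flatten(),                          # (batch, 64 * WINDOW // 4)
            nn.Linear(64 * (WINDOW // 4), N_CLASSES),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, channels, window)
        return self.net(x)


# A trained model would be loaded here; random weights suffice for the sketch.
model = FNIRSCNN().eval()
windows = torch.randn(32, N_CHANNELS, WINDOW)  # stand-in for real windowed data

# Explain the CNN's output in terms of its input, then aggregate the absolute
# SHAP values over windows and time to rank channels by contribution.
explainer = shap.DeepExplainer(model, windows[:16])   # background distribution
sv = explainer.shap_values(windows[16:20])
# Older shap versions return a list (one array per class); newer ones return
# a single array with a trailing class axis.
sv_class0 = sv[0] if isinstance(sv, list) else sv[..., 0]
channel_importance = np.abs(sv_class0).sum(axis=(0, 2))
print("Channels ranked by contribution to class 0:",
      np.argsort(channel_importance)[::-1][:5])
```

Feeding the channels in as Conv1d feature maps means each SHAP value lands on a specific (channel, time) input sample, so summing magnitudes per channel yields the kind of per-channel ranking the abstract describes; mapping ranked channels back to their optode locations and HbO/HbR chromophore is then a bookkeeping step over the montage.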
first_indexed | 2024-04-10T21:39:04Z |
format | Article |
id | doaj.art-46464bc0e6ab4e8d9f81d26e0e585455 |
institution | Directory Open Access Journal |
issn | 1662-5161 |
language | English |
last_indexed | 2024-04-10T21:39:04Z |
publishDate | 2023-01-01 |
publisher | Frontiers Media S.A. |
record_format | Article |
series | Frontiers in Human Neuroscience |
spelling | doaj.art-46464bc0e6ab4e8d9f81d26e0e585455 (2023-01-19T06:22:45Z, eng). Frontiers Media S.A., Frontiers in Human Neuroscience, ISSN 1662-5161, 2023-01-01, vol. 16, article 1029784, DOI 10.3389/fnhum.2022.1029784. Explainable artificial intelligence model to predict brain states from fNIRS signals. Authors: Caleb Jones Shibu (Department of Computer Science, University of Arizona, Tucson, AZ, United States); Sujesh Sreedharan (Division of Artificial Internal Organs, Department of Medical Devices Engineering, Biomedical Technology Wing, Sree Chitra Tirunal Institute for Medical Sciences and Technology, Trivandrum, India); KM Arun (Department of Imaging Sciences and Interventional Radiology, Sree Chitra Tirunal Institute for Medical Sciences and Technology, Trivandrum, India); Chandrasekharan Kesavadas (Department of Imaging Sciences and Interventional Radiology, Sree Chitra Tirunal Institute for Medical Sciences and Technology, Trivandrum, India); Ranganatha Sitaram (Department of Diagnostic Imaging, St. Jude Children’s Research Hospital, Memphis, TN, United States). Abstract as in the description field above. https://www.frontiersin.org/articles/10.3389/fnhum.2022.1029784/full Keywords: brain state classification; functional near-infrared spectroscopy; brain-computer interface; deep learning; convolutional neural networks; long short-term memory
spellingShingle | Caleb Jones Shibu; Sujesh Sreedharan; KM Arun; Chandrasekharan Kesavadas; Ranganatha Sitaram
Explainable artificial intelligence model to predict brain states from fNIRS signals
Frontiers in Human Neuroscience
brain state classification
functional near-infrared spectroscopy
brain-computer interface
deep learning
convolutional neural networks
long short-term memory
title | Explainable artificial intelligence model to predict brain states from fNIRS signals |
title_full | Explainable artificial intelligence model to predict brain states from fNIRS signals |
title_fullStr | Explainable artificial intelligence model to predict brain states from fNIRS signals |
title_full_unstemmed | Explainable artificial intelligence model to predict brain states from fNIRS signals |
title_short | Explainable artificial intelligence model to predict brain states from fNIRS signals |
title_sort | explainable artificial intelligence model to predict brain states from fnirs signals |
topic | brain state classification functional near-infrared spectroscopy brain-computer interface deep learning convolutional neural networks long short-term memory |
url | https://www.frontiersin.org/articles/10.3389/fnhum.2022.1029784/full |
work_keys_str_mv | AT calebjonesshibu explainableartificialintelligencemodeltopredictbrainstatesfromfnirssignals AT sujeshsreedharan explainableartificialintelligencemodeltopredictbrainstatesfromfnirssignals AT kmarun explainableartificialintelligencemodeltopredictbrainstatesfromfnirssignals AT chandrasekharankesavadas explainableartificialintelligencemodeltopredictbrainstatesfromfnirssignals AT ranganathasitaram explainableartificialintelligencemodeltopredictbrainstatesfromfnirssignals |