The goal of explaining black boxes in EEG seizure prediction is not to explain models' decisions
Main Authors: | Mauro F. Pinto; Joana Batista; Adriana Leal; Fábio Lopes; Ana Oliveira; António Dourado; Sulaiman I. Abuhaiba; Francisco Sales; Pedro Martins; César A. Teixeira |
Format: | Article |
Language: | English |
Published: | Wiley, 2023-06-01 |
Series: | Epilepsia Open |
Subjects: | drug‐resistant epilepsy; EEG; explainability; machine learning; seizure prediction |
Online Access: | https://doi.org/10.1002/epi4.12748 |
_version_ | 1827935672683462656 |
author | Mauro F. Pinto; Joana Batista; Adriana Leal; Fábio Lopes; Ana Oliveira; António Dourado; Sulaiman I. Abuhaiba; Francisco Sales; Pedro Martins; César A. Teixeira |
author_sort | Mauro F. Pinto |
collection | DOAJ |
description | Abstract Many state‐of‐the‐art methods for seizure prediction using the electroencephalogram are based on machine learning models that are black boxes, weakening clinicians' trust in them for high‐risk decisions. Seizure prediction is a multidimensional time‐series problem addressed by continuous sliding‐window analysis and classification. In this work, we critically review which explanations increase trust in models' decisions for predicting seizures. We developed three machine learning methodologies to explore their explainability potential. These have different levels of model transparency: a logistic regression, an ensemble of 15 support vector machines, and an ensemble of three convolutional neural networks. For each methodology, we evaluated performance quasi‐prospectively in 40 patients (testing data comprised 2055 hours and 104 seizures). We selected patients with good and poor performance to explain the models' decisions. Then, using grounded theory, we evaluated how these explanations helped specialists (data scientists and clinicians working in epilepsy) understand the obtained model dynamics. We derived four lessons for better communication between data scientists and clinicians. We found that the goal of explainability is not to explain the system's decisions but to improve the system itself. Model transparency is not the most significant factor in explaining a model's decision for seizure prediction. Even when using intuitive and state‐of‐the‐art features, it is hard to understand brain dynamics and their relationship with the developed models. We achieved an increase in understanding by developing, in parallel, several systems that explicitly deal with changes in signal dynamics, which helped develop a complete problem formulation. |
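The setup the abstract describes (continuous sliding‐window analysis of EEG, per‐window feature extraction, and a transparent classifier such as logistic regression) can be illustrated with a minimal sketch. This is not the authors' implementation: the sampling rate, window length, features, and synthetic data below are all illustrative assumptions.

```python
# Minimal sketch of sliding-window seizure-prediction classification.
# EEG is segmented into consecutive windows, simple features are computed
# per window, and a transparent model (logistic regression) labels each
# window as preictal (pre-seizure, 1) or interictal (0).
import numpy as np
from sklearn.linear_model import LogisticRegression

FS = 256        # assumed sampling rate (Hz)
WINDOW_S = 5    # assumed 5-second non-overlapping windows

def window_features(eeg, fs=FS, window_s=WINDOW_S):
    """Split a 1-D EEG channel into windows and compute two simple
    illustrative features per window: mean absolute amplitude and variance."""
    step = fs * window_s
    n_windows = len(eeg) // step
    feats = []
    for i in range(n_windows):
        w = eeg[i * step:(i + 1) * step]
        feats.append([np.mean(np.abs(w)), np.var(w)])
    return np.array(feats)

# Synthetic stand-in data: one hour of single-channel EEG, with the
# last 5 minutes of windows assumed to be preictal.
rng = np.random.default_rng(0)
eeg = rng.normal(size=FS * 3600)
X = window_features(eeg)
y = np.zeros(len(X), dtype=int)
y[-60:] = 1  # 60 windows * 5 s = 5 minutes labeled preictal

clf = LogisticRegression().fit(X, y)
# Continuous operation: each new window yields a preictal probability,
# which an alarm-generation layer would threshold.
print(clf.predict_proba(X[-3:])[:, 1])
```

A transparent model like this exposes its decision rule directly through its fitted coefficients, which is why the study uses it as the most interpretable end of the transparency spectrum, against SVM and CNN ensembles.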
first_indexed | 2024-03-13T07:58:09Z |
format | Article |
id | doaj.art-718ae0aaf7014c37b060707c0c33dba0 |
institution | Directory Open Access Journal |
issn | 2470-9239 |
language | English |
last_indexed | 2024-03-13T07:58:09Z |
publishDate | 2023-06-01 |
publisher | Wiley |
record_format | Article |
series | Epilepsia Open |
spelling | Epilepsia Open (Wiley), ISSN 2470-9239, 2023-06-01, vol. 8, no. 2, pp. 285-297, doi:10.1002/epi4.12748. Author affiliations: Mauro F. Pinto, Joana Batista, Adriana Leal, Fábio Lopes, Ana Oliveira, António Dourado, Pedro Martins, and César A. Teixeira (Department of Informatics Engineering, CISUC, Univ Coimbra, Coimbra, Portugal); Sulaiman I. Abuhaiba and Francisco Sales (Refractory Epilepsy Reference Centre, Centro Hospitalar e Universitário de Coimbra, EPE, Coimbra, Portugal). |
title | The goal of explaining black boxes in EEG seizure prediction is not to explain models' decisions |
topic | drug‐resistant epilepsy; EEG; explainability; machine learning; seizure prediction |
url | https://doi.org/10.1002/epi4.12748 |