Explaining machine-learning models for gamma-ray detection and identification

As more complex predictive models are used for gamma-ray spectral analysis, methods are needed to probe and understand their predictions and behavior. Recent work has begun to bring the latest techniques from the field of Explainable Artificial Intelligence (XAI) into the applications of gamma-ray spectroscopy, including the introduction of gradient-based methods like saliency mapping and Gradient-weighted Class Activation Mapping (Grad-CAM), and black box methods like Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). In addition, new sources of synthetic radiological data are becoming available, and these new data sets present opportunities to train models using more data than ever before. In this work, we use a neural network model trained on synthetic NaI(Tl) urban search data to compare some of these explanation methods and identify modifications that need to be applied to adapt the methods to gamma-ray spectral data. We find that the black box methods LIME and SHAP are especially accurate in their results, and recommend SHAP since it requires little hyperparameter tuning. We also propose and demonstrate a technique for generating counterfactual explanations using orthogonal projections of LIME and SHAP explanations.

Bibliographic Details
Main Authors: Mark S. Bandstra, Joseph C. Curtis, James M. Ghawaly, A. Chandler Jones, Tenzing H. Y. Joshi
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2023-01-01
Series: PLoS ONE
ISSN: 1932-6203
Online Access:https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10281578/?tool=EBI
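
The abstract highlights black-box explanation methods and recommends SHAP for its minimal hyperparameter tuning. As a rough illustration of that workflow (not the paper's model, data, or code), the sketch below runs KernelSHAP from the Python shap package on a toy classifier trained to flag a photopeak in simulated spectra; the classifier, the spectra, and every parameter value are invented for this example.

    # Hedged, self-contained sketch: KernelSHAP on a toy spectral classifier.
    # Nothing here reproduces the paper's pipeline.
    import numpy as np
    import shap
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    n_bins = 64  # energy bins of a toy spectrum

    def make_spectra(n, with_peak):
        """Poisson background spectra, optionally with a 'photopeak' near bin 40."""
        base = rng.poisson(20.0, size=(n, n_bins)).astype(float)
        if with_peak:
            base[:, 38:43] += rng.poisson(15.0, size=(n, 5))
        return base

    X = np.vstack([make_spectra(200, False), make_spectra(200, True)])
    y = np.array([0] * 200 + [1] * 200)

    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(X, y)

    # KernelSHAP treats the model as a black box: only predict_proba is exposed.
    background = shap.sample(X, 50)  # background set used to marginalize features
    explainer = shap.KernelExplainer(lambda x: clf.predict_proba(x)[:, 1], background)

    test_spectrum = make_spectra(1, True)
    shap_values = explainer.shap_values(test_spectrum, nsamples=500)

    # One attribution per energy bin; the peak bins (38-42) should dominate
    # if the model has learned to key on the photopeak.
    print(np.argsort(shap_values[0])[-5:])

If the model has learned the injected peak, the largest attributions should cluster in bins 38-42. A gradient-based method such as saliency mapping would instead differentiate the network output with respect to the input spectrum, which is the contrast the abstract draws between the two families of methods.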
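
The abstract also proposes counterfactual explanations built from orthogonal projections of LIME and SHAP explanations. The record does not spell out the construction, so the sketch below is only one plausible reading of the idea, under the assumption that the explanation vector defines a direction in spectrum space: decompose the spectrum into its component along that direction plus the orthogonal remainder, then shrink the aligned component until the model's prediction flips. The function name and all parameters are hypothetical.

    # Hedged sketch of a projection-based counterfactual; a guess at the
    # flavor of the method described in the abstract, not the authors'
    # exact algorithm.
    import numpy as np

    def projection_counterfactual(x, explanation, predict, n_steps=50):
        """Shrink the component of x aligned with the explanation until
        the predicted class changes.

        x           : 1-D spectrum (counts per energy bin)
        explanation : per-bin attribution vector (e.g., SHAP values)
        predict     : callable mapping a (1, n_bins) array to class labels
        """
        d = explanation / (np.linalg.norm(explanation) + 1e-12)
        along = np.dot(x, d) * d   # component aligned with the explanation
        orth = x - along           # orthogonal projection of the spectrum
        original = predict(x[None, :])[0]
        for alpha in np.linspace(1.0, -1.0, n_steps):
            x_cf = np.clip(orth + alpha * along, 0.0, None)  # keep counts nonnegative
            if predict(x_cf[None, :])[0] != original:
                return x_cf        # candidate counterfactual spectrum
        return None                # prediction never flipped in the search range

With the toy model above, a call like projection_counterfactual(test_spectrum[0], shap_values[0], clf.predict) asks what reduced-peak spectrum the classifier would no longer flag, which is the kind of "what would change the decision" question counterfactual explanations are meant to answer.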