Localization supervision of chest x-ray classifiers using label-specific eye-tracking annotation

Bibliographic Details
Main Authors: Ricardo Bigolin Lanfredi, Joyce D. Schroeder, Tolga Tasdizen
Format: Article
Language: English
Published: Frontiers Media S.A. 2023-06-01
Series: Frontiers in Radiology
Online Access: https://www.frontiersin.org/articles/10.3389/fradi.2023.1088068/full
Description
Summary: Convolutional neural networks (CNNs) have been successfully applied to chest x-ray (CXR) images. Moreover, annotated bounding boxes have been shown to improve the interpretability of a CNN in terms of localizing abnormalities. However, only a few relatively small CXR datasets containing bounding boxes are available, and collecting them is very costly. Opportunely, eye-tracking (ET) data can be collected during the clinical workflow of a radiologist. We use ET data recorded from radiologists while dictating CXR reports to train CNNs. We extract snippets from the ET data by associating them with the dictation of keywords and use them to supervise the localization of specific abnormalities. We show that this method can improve a model’s interpretability without impacting its image-level classification.
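The following is a minimal sketch, not the authors' released code, of the general idea the abstract describes: a CXR classifier whose per-label spatial maps receive extra supervision from eye-tracking heatmaps wherever a keyword-aligned ET snippet exists. The backbone choice, tensor shapes, loss form, and the weighting term `lam` are illustrative assumptions.

```python
# Sketch: classification loss plus ET-heatmap localization supervision.
# All design choices (DenseNet-121 backbone, sigmoid attention, MSE
# localization term) are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class CXRClassifier(nn.Module):
    def __init__(self, n_labels: int = 14):
        super().__init__()
        backbone = torchvision.models.densenet121(weights=None)
        self.features = backbone.features              # B x 1024 x H' x W'
        # A 1x1 conv yields one spatial map per label before pooling.
        self.label_maps = nn.Conv2d(1024, n_labels, kernel_size=1)

    def forward(self, x):
        feats = F.relu(self.features(x))
        maps = self.label_maps(feats)                   # B x L x H' x W'
        logits = maps.mean(dim=(2, 3))                  # global average pooling
        return logits, maps


def loss_fn(logits, maps, labels, et_heatmaps, et_mask, lam=1.0):
    """labels: B x L binary targets; et_heatmaps: B x L x H' x W' normalized
    ET heatmaps; et_mask: B x L, 1 where an ET snippet exists for that label."""
    cls_loss = F.binary_cross_entropy_with_logits(logits, labels)
    attn = torch.sigmoid(maps)
    # Localization term applied only to (image, label) pairs with ET data.
    loc_loss = ((attn - et_heatmaps) ** 2).mean(dim=(2, 3))
    loc_loss = (loc_loss * et_mask).sum() / et_mask.sum().clamp(min=1)
    return cls_loss + lam * loc_loss
```

Masking the localization term lets the classifier still train on every image at the image level while only the abnormality-specific maps with available ET snippets receive spatial supervision, which matches the abstract's claim that interpretability improves without affecting image-level classification.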
ISSN: 2673-8740