Reducing CNN Textural Bias With k-Space Artifacts Improves Robustness

Convolutional neural networks (CNNs) have become the de facto algorithms of choice for semantic segmentation tasks in biomedical image processing. Yet, models based on CNNs remain susceptible to the domain shift problem, where a mismatch between source and target distributions can lead to a drop in performance. CNNs were recently shown to exhibit a textural bias when processing natural images, and recent studies suggest that this bias also extends to the context of biomedical imaging. In this paper, we focus on Magnetic Resonance Images (MRI) and investigate textural bias in the context of $k$-space artifacts (Gibbs, spike, and wraparound artifacts), which naturally manifest in clinical MRI scans. We show that carefully introducing such artifacts at training time can help reduce textural bias, and consequently lead to CNN models that are more robust to acquisition noise and out-of-distribution inference, including scans from hospitals not seen during training. We also present Gibbs ResUnet: a novel, end-to-end framework that automatically finds an optimal combination of Gibbs $k$-space stylizations and segmentation model weights. We illustrate our findings on multimodal and multi-institutional clinical MRI datasets obtained retrospectively from the Medical Segmentation Decathlon $(n=750)$ and The Cancer Imaging Archive $(n=243)$.
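The abstract's core idea, injecting k-space artifacts as a training-time augmentation, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it is a hypothetical NumPy example that simulates a Gibbs (truncation) artifact by discarding high-frequency k-space coefficients of an image, mimicking a truncated MRI acquisition:

```python
import numpy as np

def add_gibbs_artifact(image: np.ndarray, keep_fraction: float = 0.6) -> np.ndarray:
    """Simulate a Gibbs (truncation) artifact by zeroing high-frequency
    k-space coefficients, which introduces ringing near sharp edges."""
    k = np.fft.fftshift(np.fft.fft2(image))  # image -> centred k-space
    h, w = k.shape
    kh, kw = int(h * keep_fraction / 2), int(w * keep_fraction / 2)
    mask = np.zeros_like(k, dtype=bool)
    mask[h // 2 - kh : h // 2 + kh, w // 2 - kw : w // 2 + kw] = True
    k[~mask] = 0  # truncate the periphery of k-space
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))  # back to image space

# Usage: apply with some probability inside a training-time augmentation pipeline.
phantom = np.zeros((64, 64))
phantom[16:48, 16:48] = 1.0  # sharp-edged square, prone to visible ringing
corrupted = add_gibbs_artifact(phantom, keep_fraction=0.5)
```

Spike and wraparound artifacts could be sketched in the same k-space framework (e.g., injecting a single high-magnitude point, or aliasing rows), though the exact stylizations used in the paper are selected automatically by the proposed Gibbs ResUnet framework.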

Bibliographic Details
Main Authors: Yaniel Cabrera, Ahmed E. Fetit
Format: Article
Language:English
Published: IEEE 2022-01-01
Series:IEEE Access
Subjects: Texture, bias, artifacts, robustness, MRI, CNNs
Online Access:https://ieeexplore.ieee.org/document/9786829/
collection DOAJ
description Convolutional neural networks (CNNs) have become the de facto algorithms of choice for semantic segmentation tasks in biomedical image processing. Yet, models based on CNNs remain susceptible to the domain shift problem, where a mismatch between source and target distributions could lead to a drop in performance. CNNs were recently shown to exhibit a textural bias when processing natural images, and recent studies suggest that this bias also extends to the context of biomedical imaging. In this paper, we focus on Magnetic Resonance Images (MRI) and investigate textural bias in the context of $k$-space artifacts (Gibbs, spike, and wraparound artifacts), which naturally manifest in clinical MRI scans. We show that carefully introducing such artifacts at training time can help reduce textural bias, and consequently lead to CNN models that are more robust to acquisition noise and out-of-distribution inference, including scans from hospitals not seen during training. We also present Gibbs ResUnet: a novel, end-to-end framework that automatically finds an optimal combination of Gibbs $k$-space stylizations and segmentation model weights. We illustrate our findings on multimodal and multi-institutional clinical MRI datasets obtained retrospectively from the Medical Segmentation Decathlon $(n=750)$ and The Cancer Imaging Archive $(n=243)$.
id doaj.art-b3a69ca29174488083294b9cd874ff53
issn 2169-3536
spelling Reducing CNN Textural Bias With k-Space Artifacts Improves Robustness. Yaniel Cabrera (ORCID: 0000-0001-8546-3239) and Ahmed E. Fetit (ORCID: 0000-0003-1199-1332), Department of Computing, Imperial College London, London, U.K. IEEE Access, vol. 10, pp. 58431-58446, 2022. DOI: 10.1109/ACCESS.2022.3179844 (IEEE document 9786829). Online access: https://ieeexplore.ieee.org/document/9786829/. Keywords: Texture, bias, artifacts, robustness, MRI, CNNs.
topic Texture
bias
artifacts
robustness
MRI
CNNs