COMFormer: classification of maternal-fetal and brain anatomy using a residual cross-covariance attention guided transformer in ultrasound
Monitoring the healthy development of a fetus requires accurate and timely identification of different maternal-fetal structures as they grow. To facilitate this objective in an automated fashion, we propose a deep-learning-based image classification architecture called COMFormer to classify maternal-fetal and brain anatomical structures present in two-dimensional fetal ultrasound images. The proposed architecture classifies the two subcategories separately: maternal-fetal (abdomen, brain, femur, thorax, mother's cervix, and others) and brain anatomical structures (trans-thalamic, trans-cerebellum, trans-ventricular, and non-brain). Our proposed architecture relies on a transformer-based approach that leverages spatial and global features by using a newly designed residual cross-covariance attention (R-XCA) block. This block introduces an advanced cross-covariance attention mechanism to capture a long-range representation from the input using spatial (e.g., shape, texture, intensity) and global features. To build COMFormer, we used a large publicly available dataset (BCNatal) consisting of 12,400 images from 1,792 subjects. Experimental results show that COMFormer outperforms recent CNN- and transformer-based models, achieving 95.64% and 96.33% classification accuracy on maternal-fetal and brain anatomy, respectively.
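The mechanism at the core of the abstract, cross-covariance attention, computes attention over the d × d channel cross-covariance matrix (K^T Q) instead of the usual N × N token-to-token matrix, which is what lets it capture long-range structure cheaply. A minimal NumPy sketch of plain cross-covariance attention, assuming single-head operation with illustrative projection matrices and temperature `tau` (this is not the paper's R-XCA block, which adds a residual path):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def xc_attention(x, wq, wk, wv, tau=1.0):
    """Cross-covariance attention over an (N, d) token matrix x.

    Attention is a (d, d) map over feature channels, built from the
    cross-covariance of L2-normalised keys and queries, rather than
    an (N, N) map over tokens.
    """
    q, k, v = x @ wq, x @ wk, x @ wv                    # each (N, d)
    # L2-normalise each channel across the token dimension
    qn = q / (np.linalg.norm(q, axis=0, keepdims=True) + 1e-8)
    kn = k / (np.linalg.norm(k, axis=0, keepdims=True) + 1e-8)
    attn = softmax((kn.T @ qn) / tau, axis=0)           # (d, d) channel map
    return v @ attn                                     # (N, d)
```

Because the attention map is d × d, the cost grows linearly with the number of tokens N instead of quadratically, which is the usual motivation for using this family of attention on high-resolution medical images.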
Main Authors: Sarker, MMK; Singh, VK; Alsharid, M; Hernandez-Cruz, N; Papageorghiou, AT; Noble, JA
Format: Journal article
Language: English
Published: IEEE, 2023
author | Sarker, MMK; Singh, VK; Alsharid, M; Hernandez-Cruz, N; Papageorghiou, AT; Noble, JA |
collection | OXFORD |
description | Monitoring the healthy development of a fetus requires accurate and timely identification of different maternal-fetal structures as they grow. To facilitate this objective in an automated fashion, we propose a deep-learning-based image classification architecture called COMFormer to classify maternal-fetal and brain anatomical structures present in two-dimensional fetal ultrasound images. The proposed architecture classifies the two subcategories separately: maternal-fetal (abdomen, brain, femur, thorax, mother's cervix, and others) and brain anatomical structures (trans-thalamic, trans-cerebellum, trans-ventricular, and non-brain). Our proposed architecture relies on a transformer-based approach that leverages spatial and global features by using a newly designed residual cross-covariance attention (R-XCA) block. This block introduces an advanced cross-covariance attention mechanism to capture a long-range representation from the input using spatial (e.g., shape, texture, intensity) and global features. To build COMFormer, we used a large publicly available dataset (BCNatal) consisting of 12,400 images from 1,792 subjects. Experimental results show that COMFormer outperforms recent CNN- and transformer-based models, achieving 95.64% and 96.33% classification accuracy on maternal-fetal and brain anatomy, respectively. |
format | Journal article |
id | oxford-uuid:761b8d8f-8792-4950-8c7e-cdb3c17fd1e6 |
institution | University of Oxford |
language | English |
publishDate | 2023 |
publisher | IEEE |
title | COMFormer: classification of maternal-fetal and brain anatomy using a residual cross-covariance attention guided transformer in ultrasound |