Model-Agnostic Method for Thoracic Wall Segmentation in Fetal Ultrasound Videos

The application of segmentation methods to medical imaging has the potential to create novel diagnostic support models. In fetal ultrasound, the thoracic wall is a key structure in the assessment of the chest region, allowing examiners to recognize the relative orientation and size of the structures inside the thorax, which are critical factors in neonatal prognosis. In this study, to improve the segmentation performance for the thoracic wall in fetal ultrasound videos, we propose a novel model-agnostic method using deep learning techniques: the Multi-Frame + Cylinder method (MFCY). The Multi-Frame method (MF) uses the time-series information of ultrasound videos, and the Cylinder method (CY) exploits the shape of the thoracic wall. To evaluate the improvement, we performed five-fold cross-validation on 538 ultrasound frames in the four-chamber view (4CV) of 256 normal cases using U-net and DeepLabv3+. MFCY increased the mean intersection over union (IoU) of thoracic wall segmentation from 0.448 to 0.493 for U-net and from 0.417 to 0.470 for DeepLabv3+. These results demonstrate that MFCY improves the segmentation performance of the thoracic wall in fetal ultrasound videos without altering the network architecture. MFCY is expected to facilitate the development of diagnostic support models for fetal ultrasound by providing more accurate segmentation of the thoracic wall.
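The abstract reports segmentation quality as intersection over union (IoU) and describes MFCY as a model-agnostic combination of per-frame predictions from ultrasound video. As a rough illustration only, the sketch below shows the standard IoU computation and one plausible way to pool predictions from neighbouring frames by pixel-wise averaging; the `multi_frame_vote` helper and its averaging rule are assumptions made for this sketch, not the paper's actual MF or CY procedure.

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over union between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(intersection / (union + eps))

def multi_frame_vote(frame_probs: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Combine per-frame probability maps of shape (T, H, W) from
    neighbouring video frames by averaging, then threshold to a single
    binary mask. This is only one simple way to use temporal information;
    it is not the published MF/CY definition."""
    return (frame_probs.mean(axis=0) >= threshold).astype(np.uint8)

# Toy usage with random "predictions" on a 128x128 frame.
rng = np.random.default_rng(0)
probs = rng.random((5, 128, 128))      # probability maps for 5 consecutive frames
mask = multi_frame_vote(probs)         # pooled binary mask
gt = rng.random((128, 128)) > 0.5      # dummy ground-truth mask
print(f"IoU: {iou(mask, gt):.3f}")
```

Averaging probabilities across adjacent frames is just one way to exploit temporal redundancy in ultrasound video; how the paper's MF and CY outputs are actually produced and merged is defined in the full text linked below.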

Bibliographic Details
Main Authors: Kanto Shozu, Masaaki Komatsu, Akira Sakai, Reina Komatsu, Ai Dozen, Hidenori Machino, Suguru Yasutomi, Tatsuya Arakaki, Ken Asada, Syuzo Kaneko, Ryu Matsuoka, Akitoshi Nakashima, Akihiko Sekizawa, Ryuji Hamamoto
Format: Article
Language: English
Published: MDPI AG, 2020-12-01
Series: Biomolecules, Vol. 10, Iss. 12, Article 1691
ISSN: 2218-273X
DOI: 10.3390/biom10121691
Subjects: deep learning; fetal ultrasound; prenatal diagnosis; thoracic wall segmentation; model-agnostic; ensemble learning
Online Access:https://www.mdpi.com/2218-273X/10/12/1691
Author Affiliations:
Kanto Shozu, Masaaki Komatsu, Ai Dozen, Hidenori Machino, Ken Asada, Syuzo Kaneko, Ryuji Hamamoto: Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan
Akira Sakai, Suguru Yasutomi: Artificial Intelligence Laboratory, Fujitsu Laboratories Ltd., 4-1-1 Kamikodanaka, Nakahara-Ku, Kawasaki, Kanagawa 211-8588, Japan
Reina Komatsu, Ryu Matsuoka: RIKEN AIP-Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
Tatsuya Arakaki, Akihiko Sekizawa: Department of Obstetrics and Gynecology, Showa University School of Medicine, 1-5-8 Hatanodai, Shinagawa-Ku, Tokyo 142-8666, Japan
Akitoshi Nakashima: Department of Obstetrics and Gynecology, University of Toyama, 2630 Sugitani, Toyama 930-0194, Japan