Multi-Modal Segmentation of 3D Brain Scans Using Neural Networks
Anatomical segmentation of brain scans is highly relevant for diagnostics and neuroradiology research. Conventionally, segmentation is performed on T1-weighted MRI scans due to the strong soft-tissue contrast. In this work, we report on a comparative study of automated, learning-based brain segmentation on various other MRI contrasts as well as computed tomography (CT) scans, and investigate the anatomical soft-tissue information contained in these imaging modalities. A large database of 853 MRI/CT brain scans in total enables us to train convolutional neural networks (CNNs) for segmentation. We benchmark the CNN performance on four different imaging modalities and 27 anatomical substructures. For each modality, we train a separate CNN based on a common architecture. We find average Dice scores of 86.7 ± 4.1% (T1-weighted MRI), 81.9 ± 6.7% (fluid-attenuated inversion recovery MRI), 80.8 ± 6.6% (diffusion-weighted MRI), and 80.7 ± 8.2% (CT). The performance is assessed relative to labels obtained using the widely adopted FreeSurfer software package. The segmentation pipeline uses dropout sampling to identify corrupted input scans or low-quality segmentations. Full segmentation of 3D volumes with more than 2 million voxels requires less than 1 s of processing time on a graphics processing unit.
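The Dice scores quoted in the abstract are the standard overlap metric between a predicted label map and a reference label map (here, FreeSurfer labels). The sketch below is a minimal illustration of how a per-structure Dice score can be computed, assuming integer-valued NumPy label maps; the function and variable names are illustrative and not taken from the paper's code.

```python
import numpy as np

def dice_per_structure(pred, ref, labels):
    """Dice overlap for each anatomical label in two integer label maps.

    pred, ref : np.ndarray of identical shape, integer-valued label maps
    labels    : iterable of label IDs to evaluate (e.g., the 27 substructures)
    """
    scores = {}
    for lab in labels:
        p = (pred == lab)
        r = (ref == lab)
        denom = p.sum() + r.sum()
        # Convention: a structure absent from both maps counts as perfect overlap.
        scores[lab] = 1.0 if denom == 0 else 2.0 * np.logical_and(p, r).sum() / denom
    return scores

# Toy example with two 3D label maps containing labels {0 (background), 1, 2}.
rng = np.random.default_rng(0)
ref = rng.integers(0, 3, size=(32, 32, 32))
pred = ref.copy()
pred[:4] = 0  # perturb the prediction slightly
mean_dice = np.mean(list(dice_per_structure(pred, ref, labels=[1, 2]).values()))
print(f"mean Dice: {mean_dice:.3f}")
```

Averaging the per-structure scores over all evaluated substructures and test scans yields the modality-level figures reported in the abstract.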
Main Authors: | Jonathan Zopes, Moritz Platscher, Silvio Paganucci, Christian Federau |
---|---|
Format: | Article |
Language: | English |
Published: | Frontiers Media S.A., 2021-07-01 |
Series: | Frontiers in Neurology |
ISSN: | 1664-2295 |
Subjects: | brain imaging (CT and MRI); anatomical segmentation; multi-modal; convolutional neural networks; dropout sampling |
Online Access: | https://www.frontiersin.org/articles/10.3389/fneur.2021.653375/full |
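The abstract also mentions dropout sampling to identify corrupted input scans or low-quality segmentations. Such quality checks are commonly realized as Monte Carlo dropout: keeping dropout layers active at inference time and measuring the disagreement between repeated forward passes. The PyTorch sketch below illustrates that idea under these assumptions; the model, tensor shapes, and flagging criterion are hypothetical and not taken from the paper.

```python
import torch

def mc_dropout_segment(model, volume, n_samples=10):
    """Monte Carlo dropout: run the network several times with dropout active
    and use the disagreement between samples as a per-voxel uncertainty map.

    model  : trained 3D segmentation CNN containing dropout layers (assumed)
    volume : input tensor of shape (N, C, D, H, W) (assumed)
    """
    model.eval()
    # Re-enable only the dropout layers (batch norm etc. stay in eval mode).
    for m in model.modules():
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d, torch.nn.Dropout3d)):
            m.train()
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(volume), dim=1) for _ in range(n_samples)]
        )                                       # shape: (S, N, C, D, H, W)
    mean_probs = probs.mean(dim=0)              # averaged class probabilities
    segmentation = mean_probs.argmax(dim=1)     # final label map
    uncertainty = probs.var(dim=0).mean(dim=1)  # per-voxel variance across samples
    return segmentation, uncertainty

# A scan or its segmentation could then be flagged when, for example,
# uncertainty.mean() exceeds a threshold calibrated on known-good cases.
```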