MRI Breast Tumor Segmentation Using Different Encoder and Decoder CNN Architectures
Breast tumor segmentation in medical images is a decisive step for diagnosis and treatment follow-up. Automating this challenging task helps radiologists reduce the heavy manual workload of breast cancer analysis. In this paper, we propose two deep learning approaches to automate breast tumor segmentation in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) by building two fully convolutional neural networks (CNNs) based on <i>SegNet</i> and <i>U-Net</i>. The obtained models can handle both detection and segmentation on each DCE-MRI slice. In this study, we used a dataset of 86 DCE-MRIs of 43 patients with locally advanced breast cancer, acquired before and after two cycles of chemotherapy; a total of 5452 slices were used to train and validate the proposed models. The data were annotated manually by an experienced radiologist. To reduce the training time, a high-performance computing architecture composed of graphics processing units was used. The model was trained and validated on 85% and 15% of the data, respectively. A mean intersection over union (IoU) of 68.88% was achieved using <i>SegNet</i> and 76.14% using the <i>U-Net</i> architecture.
Main Authors: | Mohammed El Adoui, Sidi Ahmed Mahmoudi, Mohamed Amine Larhmam, Mohammed Benjelloun |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2019-06-01 |
Series: | Computers |
Subjects: | breast tumor segmentation; MRI; encoder–decoder; deep learning; HPC; <i>SegNet</i>; <i>U-Net</i> |
Online Access: | https://www.mdpi.com/2073-431X/8/3/52 |
author | Mohammed El Adoui Sidi Ahmed Mahmoudi Mohamed Amine Larhmam Mohammed Benjelloun |
collection | DOAJ |
description | Breast tumor segmentation in medical images is a decisive step for diagnosis and treatment follow-up. Automating this challenging task helps radiologists reduce the heavy manual workload of breast cancer analysis. In this paper, we propose two deep learning approaches to automate breast tumor segmentation in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) by building two fully convolutional neural networks (CNNs) based on <i>SegNet</i> and <i>U-Net</i>. The obtained models can handle both detection and segmentation on each DCE-MRI slice. In this study, we used a dataset of 86 DCE-MRIs of 43 patients with locally advanced breast cancer, acquired before and after two cycles of chemotherapy; a total of 5452 slices were used to train and validate the proposed models. The data were annotated manually by an experienced radiologist. To reduce the training time, a high-performance computing architecture composed of graphics processing units was used. The model was trained and validated on 85% and 15% of the data, respectively. A mean intersection over union (IoU) of 68.88% was achieved using <i>SegNet</i> and 76.14% using the <i>U-Net</i> architecture. |
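The mean intersection over union (IoU) reported above can be computed per slice and averaged over the validation set. A minimal NumPy sketch of this metric for binary segmentation masks is shown below; the function name and the epsilon smoothing term are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def mean_iou(preds, masks, eps=1e-7):
    """Mean intersection over union across a batch of binary masks.

    preds, masks: iterables of same-shaped arrays where nonzero
    values mark tumor pixels. eps guards against empty unions.
    """
    ious = []
    for p, m in zip(preds, masks):
        p = p.astype(bool)
        m = m.astype(bool)
        inter = np.logical_and(p, m).sum()  # pixels both agree on
        union = np.logical_or(p, m).sum()   # pixels either marks
        ious.append((inter + eps) / (union + eps))
    return float(np.mean(ious))
```

A perfect prediction yields an IoU of 1.0; a prediction overlapping half of a twice-as-large union yields 0.5, so the paper's 68.88% (SegNet) and 76.14% (U-Net) figures correspond to mean per-slice scores of roughly 0.69 and 0.76.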
format | Article |
id | doaj.art-1eb6c019087349ec97f56e3b4ec76c07 |
institution | Directory Open Access Journal |
issn | 2073-431X |
language | English |
publishDate | 2019-06-01 |
publisher | MDPI AG |
record_format | Article |
series | Computers |
doi | 10.3390/computers8030052 |
affiliation | Computer Science Unit, Faculty of Engineering, University of Mons, Place du Parc, 20, 7000 Mons, Belgium (all four authors) |
title | MRI Breast Tumor Segmentation Using Different Encoder and Decoder CNN Architectures |
topic | breast tumor segmentation MRI encoder–decoder deep learning HPC <i>SegNet</i> <i>U-Net</i> |
url | https://www.mdpi.com/2073-431X/8/3/52 |