HEA-Net: Attention and MLP Hybrid Encoder Architecture for Medical Image Segmentation


Bibliographic Details
Main Authors: Lijing An, Liejun Wang, Yongming Li
Format: Article
Language: English
Published: MDPI AG, 2022-09-01
Series: Sensors
Subjects: attention; MLP; Transformer
Online Access: https://www.mdpi.com/1424-8220/22/18/7024
Description: The Transformer relies on a self-attention mechanism to model long-range dependencies, focusing on relationships among global elements; however, it is comparatively insensitive to the local details of foreground information. Local detail features help identify blurred boundaries in medical images more accurately. To compensate for this shortcoming of the Transformer and capture richer local information, this paper proposes HEA-Net, an attention and MLP hybrid-encoder architecture combining an Efficient Attention Module (EAM) with a Dual-channel Shift MLP module (DS-MLP). Specifically, EAM connects the convolution block with the Transformer to enhance the foreground and suppress invalid background information in medical images, while DS-MLP further enhances foreground information via channel and spatial shift operations. Extensive experiments on public datasets confirm the strong performance of HEA-Net: on the GlaS and MoNuSeg datasets, Dice reached 90.56% and 80.80%, and IoU reached 83.62% and 68.26%, respectively.
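The description mentions "channel and spatial shift operations" in the DS-MLP module. The paper's exact design is not reproduced in this record; as a rough, generic illustration of the kind of spatial-shift operation used in shift-MLP style architectures (the function name, four-way grouping, and zero padding are assumptions, not the authors' implementation):

```python
import numpy as np

def spatial_shift(x):
    """Shift four channel groups of an (H, W, C) feature map one pixel
    in four directions (left, right, up, down), zero-padding the edge.

    A generic sketch of a shift-MLP style operation; NOT the exact
    DS-MLP of HEA-Net.
    """
    h, w, c = x.shape
    g = c // 4  # channels per direction group
    out = np.zeros_like(x)
    out[:, :w - 1, 0:g]         = x[:, 1:, 0:g]          # shift left
    out[:, 1:, g:2 * g]         = x[:, :w - 1, g:2 * g]  # shift right
    out[:h - 1, :, 2 * g:3 * g] = x[1:, :, 2 * g:3 * g]  # shift up
    out[1:, :, 3 * g:4 * g]     = x[:h - 1, :, 3 * g:4 * g]  # shift down
    return out
```

After such a shift, a pointwise MLP over the channel dimension lets each position mix information from its shifted neighbors, giving an MLP a local receptive field without convolution — the general motivation behind shift-based MLP blocks.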
ISSN: 1424-8220
DOI: 10.3390/s22187024
Citation: Sensors 2022, 22(18), 7024
Collection: DOAJ (Directory of Open Access Journals)
Author Affiliations: Lijing An, Liejun Wang, and Yongming Li — College of Information Science and Engineering, Xinjiang University, Urumqi 830000, China