LightEyes: A Lightweight Fundus Segmentation Network for Mobile Edge Computing

The fundus is the only internal structure of the human body that can be observed without trauma. By analyzing color fundus images, diagnostic evidence for various diseases can be obtained. Recently, fundus image segmentation has seen rapid progress with the development of deep learning. However, the improvement in segmentation accuracy comes at the cost of model complexity; as a result, these models exhibit low inference speed and high memory usage when deployed to mobile edge devices. To promote the deployment of deep fundus segmentation models on mobile devices, we aim to design a lightweight fundus segmentation network. Our design rests on two observations: high-resolution representations boost the segmentation of tiny fundus structures, and the classification of small fundus structures depends more on local features. To this end, we propose a lightweight segmentation model called LightEyes. We first design a high-resolution backbone network that learns high-resolution representations, so that the spatial relationships between feature maps are retained throughout. Meanwhile, since high-resolution features imply high memory usage, each layer uses at most 16 convolutional filters to reduce memory consumption and ease training. LightEyes has been evaluated on three fundus segmentation tasks (hard exudate, microaneurysm, and vessel segmentation) across five publicly available datasets. Experimental results show that LightEyes achieves highly competitive segmentation accuracy and speed compared with state-of-the-art fundus segmentation models, running at 1.6 images/s on a Cambricon-1A and 51.3 images/s on a GPU with only 36k parameters.
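The abstract outlines two design ideas: keep feature maps at the input resolution throughout the backbone, and cap every convolutional layer at 16 filters. The snippet below is a minimal PyTorch sketch of those ideas only, not the published LightEyes architecture; the class name, layer count, kernel sizes, and use of batch normalization are illustrative assumptions, since this record does not specify the exact configuration.

```python
# Minimal sketch of the idea described in the abstract: a fully convolutional
# network that keeps the input resolution at every layer (no pooling or
# striding) and caps each layer at 16 filters. All architectural details here
# are assumptions for illustration, not the paper's exact LightEyes design.
import torch
import torch.nn as nn


class HighResLightNet(nn.Module):  # hypothetical name, not from the paper
    def __init__(self, in_channels: int = 3, num_layers: int = 8, width: int = 16):
        super().__init__()
        layers = []
        c_in = in_channels
        for _ in range(num_layers):
            # 3x3 conv with padding=1 keeps the spatial size unchanged,
            # so full-resolution feature maps are retained end to end.
            layers += [
                nn.Conv2d(c_in, width, kernel_size=3, padding=1),
                nn.BatchNorm2d(width),
                nn.ReLU(inplace=True),
            ]
            c_in = width
        self.features = nn.Sequential(*layers)
        # 1x1 conv produces a per-pixel score for the target structure
        # (e.g., vessel vs. background).
        self.classifier = nn.Conv2d(width, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.classifier(self.features(x)))


if __name__ == "__main__":
    model = HighResLightNet()
    n_params = sum(p.numel() for p in model.parameters())
    print(f"parameters: {n_params}")  # ~17k for this illustrative configuration
    out = model(torch.randn(1, 3, 256, 256))
    print(out.shape)  # torch.Size([1, 1, 256, 256]) -- same spatial size as input
```

Stacking a handful of such narrow full-resolution layers keeps the parameter count in the tens of thousands, the same order of magnitude as the 36k parameters reported in the abstract.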

Bibliographic Details
Main Author: Song Guo (School of Information and Control Engineering, Xi’an University of Architecture and Technology, Xi’an 710055, China)
Format: Article
Language: English
Published: MDPI AG, 2022-04-01
Series: Sensors, Vol. 22, Issue 9, Article 3112
ISSN: 1424-8220
DOI: 10.3390/s22093112
Subjects: lightweight network; fast semantic segmentation; mobile edge computing; fundus image
Online Access: https://www.mdpi.com/1424-8220/22/9/3112