LIDA‐YOLO: An unsupervised low‐illumination object detection based on domain adaptation


Bibliographic Details
Main Authors: Yun Xiao, Hai Liao
Format: Article
Language: English
Published: Wiley, 2024-04-01
Series: IET Image Processing
Subjects: computer vision; image enhancement; object detection; unsupervised learning
Online Access: https://doi.org/10.1049/ipr2.13017
Abstract: Low-light environments are integral to everyday activities but pose significant challenges for object detection: the low brightness, noise, and insufficient illumination of the acquired images degrade a model's detection performance. In contrast to recent studies, which mainly rely on supervised learning, this paper proposes LIDA-YOLO, an approach for unsupervised adaptation of low-illumination object detectors. The model extends YOLOv3 by treating normal-illumination images as the source domain and low-illumination images as the target domain, achieving detection in low-illumination images through an unsupervised learning strategy. Specifically, multi-scale local feature alignment and global feature alignment modules are proposed to align the overall attributes of the images, thereby reducing feature biases such as background, scene, and target layout. On the ExDark dataset, LIDA-YOLO achieves the highest mAP, 56.65%, among several current state-of-the-art unsupervised domain adaptation object detection methods: an improvement of 4.04% over I3Net and 6.5% over OSHOT. LIDA-YOLO also improves on the supervised baseline YOLOv3 by 2.7%. Overall, the proposed LIDA-YOLO model requires fewer samples and shows stronger generalization ability than previous works.
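The abstract describes aligning features between a normal-illumination source domain and a low-illumination target domain without target labels. The paper's exact alignment modules are not reproduced here, but adversarial feature alignment of this kind is commonly built on a gradient reversal layer (identity on the forward pass, negated and scaled gradients on the backward pass). The sketch below is a minimal, hypothetical illustration of that general mechanism, not the authors' implementation; all names and values are assumptions:

```python
import numpy as np

class GradientReversal:
    """Gradient reversal layer: identity on the forward pass, while the
    backward pass multiplies incoming gradients by -lam. Placed between a
    feature extractor and a domain classifier, it pushes the extractor to
    *maximize* the domain-classification loss, i.e. to learn features the
    classifier cannot tell apart across source and target domains."""

    def __init__(self, lam: float = 1.0):
        self.lam = lam  # trade-off between detection and alignment losses

    def forward(self, features: np.ndarray) -> np.ndarray:
        # Features pass through unchanged, so the detector sees them as-is.
        return features

    def backward(self, grad_from_domain_clf: np.ndarray) -> np.ndarray:
        # Reverse and scale the gradient flowing back into the extractor.
        return -self.lam * grad_from_domain_clf

# Toy usage: feature maps flow forward unchanged...
grl = GradientReversal(lam=0.1)
feats = np.ones((2, 4))                  # stand-in for backbone features
assert np.array_equal(grl.forward(feats), feats)
# ...while gradients from the domain classifier come back reversed.
grad = np.full((2, 4), 0.5)
assert np.allclose(grl.backward(grad), -0.05)
```

In such schemes, local alignment typically applies this adversarial loss per spatial location on shallow feature maps, while global alignment applies it to a pooled, image-level feature; the gradient reversal trick is the same in both cases.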
ISSN: 1751-9659, 1751-9667
Author Affiliations (both authors): School of Software, Sichuan Vocational College of Information Technology, Guangyuan, China
Citation: IET Image Processing, vol. 18, no. 5, pp. 1178-1188, 2024. https://doi.org/10.1049/ipr2.13017
Keywords: computer vision; image enhancement; object detection; unsupervised learning