Improved YOLOv3 Integrating SENet and Optimized GIoU Loss for Occluded Pedestrian Detection

Bibliographic Details
Main Authors: Qiangbo Zhang, Yunxiang Liu, Yu Zhang, Ming Zong, Jianlin Zhu
Author Affiliation: School of Computer Science and Information Engineering, Shanghai Institute of Technology, Shanghai 201418, China
Format: Article
Language: English
Published: MDPI AG, 2023-11-01
Series: Sensors
ISSN: 1424-8220
DOI: 10.3390/s23229089
Subjects: occluded pedestrian detection; false positives; false negatives; loss function
Online Access: https://www.mdpi.com/1424-8220/23/22/9089

Description
Occluded pedestrian detection faces significant challenges: false positives and false negatives in crowded, occluded scenes reduce detection accuracy. To address this problem, we propose an improved you-only-look-once version 3 (YOLOv3) that integrates squeeze-and-excitation networks (SENet) and an optimized generalized intersection over union (GIoU) loss for occluded pedestrian detection, named YOLOv3-Occlusion (YOLOv3-Occ). The model incorporates SENet into YOLOv3 so that larger weights are assigned to features from the unoccluded parts of pedestrians, improving feature extraction for the visible regions. For the loss function, a new generalized intersection over union with intersection over ground truth (GIoU_IoG) loss is developed on top of the GIoU loss to keep the areas of the predicted pedestrian boxes invariant, which addresses inaccurate pedestrian localization. The proposed method, YOLOv3-Occ, was validated on the CityPersons and COCO2014 datasets. Experimental results show that it obtains a 1.2% MR^-2 (log-average miss rate) gain on CityPersons and a 0.7% mAP@50 improvement on COCO2014.
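
As background on the channel-attention component: this record does not state where the SE modules are placed inside YOLOv3, but a standard squeeze-and-excitation block, as it is typically inserted after a convolutional stage, can be sketched as follows. This is a minimal sketch; the reduction ratio of 16 and the insertion points are assumptions, not details taken from the paper.

```python
# Minimal squeeze-and-excitation (SE) channel-attention block (sketch).
# Reduction ratio and placement inside the backbone are assumptions.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # "squeeze": global spatial average per channel
        self.fc = nn.Sequential(               # "excitation": learn per-channel weights
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)            # (B, C)
        w = self.fc(w).view(b, c, 1, 1)        # channel weights in (0, 1)
        return x * w                           # reweight feature maps channel-wise
```

In YOLOv3 such a block would typically be appended after selected residual stages of the Darknet-53 backbone or before the detection heads; the specific placement chosen in YOLOv3-Occ is not given in this record.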
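
Similarly, for the loss component: the exact GIoU_IoG formulation is not reproduced in this record, so the sketch below shows only the standard GIoU loss it builds on, plus a plain intersection-over-ground-truth (IoG) term. How the paper combines the two to keep predicted box areas invariant is not specified here, and the helper names are illustrative rather than the authors' implementation.

```python
# Standard GIoU loss and a plain IoG term for axis-aligned boxes
# given as (x1, y1, x2, y2). Illustrative helpers only.
import torch

def _inter(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Intersection area of paired boxes."""
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    return (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)

def giou_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Standard GIoU loss: 1 - GIoU, averaged over the batch."""
    inter = _inter(pred, target)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter
    iou = inter / union.clamp(min=1e-7)

    # Smallest axis-aligned box enclosing both prediction and ground truth.
    cx1 = torch.min(pred[:, 0], target[:, 0])
    cy1 = torch.min(pred[:, 1], target[:, 1])
    cx2 = torch.max(pred[:, 2], target[:, 2])
    cy2 = torch.max(pred[:, 3], target[:, 3])
    c_area = ((cx2 - cx1) * (cy2 - cy1)).clamp(min=1e-7)

    giou = iou - (c_area - union) / c_area
    return (1.0 - giou).mean()

def iog(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Intersection over ground truth: intersection / area(ground-truth box)."""
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    return _inter(pred, target) / area_t.clamp(min=1e-7)
```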