Improved YOLOv4-Tiny Target Detection Method Based on Adaptive Self-Order Piecewise Enhancement and Multiscale Feature Optimization

Bibliographic Details
Main Authors: Dengsheng Cai, Zhigang Lu, Xiangsuo Fan, Wentao Ding, Bing Li
Format: Article
Language: English
Published: MDPI AG 2023-07-01
Series: Applied Sciences
Subjects:
Online Access: https://www.mdpi.com/2076-3417/13/14/8177
Description
Summary: To improve the accuracy of material identification under low-contrast conditions, this paper proposes an improved YOLOv4-tiny target detection method based on adaptive self-order piecewise enhancement and multiscale feature optimization. The model first applies an adaptive self-order piecewise enhancement algorithm to enhance low-contrast images and then exploits the fast detection ability of the YOLOv4-tiny network. To raise the accuracy of this lightweight backbone, the paper adds an SE channel attention mechanism and an SPP module, which enlarge the model's receptive field and enrich the expressive power of the feature maps; the network can thus attend more to salient information, suppress edge information, and achieve higher training accuracy. At the same time, to better fuse features at different scales, the FPN multiscale feature fusion structure is redesigned to strengthen the fusion of semantic information across all levels of the network, enhance feature extraction, and improve the model's overall detection accuracy. The experimental results show that, compared with mainstream network frameworks, the improved YOLOv4-tiny network effectively improves both running speed and target detection accuracy, reaching an mAP of 98.85%.
ISSN: 2076-3417
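The SE channel attention mechanism cited in the summary follows the standard Squeeze-and-Excitation design: global average pooling squeezes each channel to one descriptor, a small bottleneck MLP produces a sigmoid gate per channel, and the feature map is rescaled channel-wise. The sketch below is a minimal NumPy illustration of that general idea, not the authors' implementation; the weight shapes, reduction ratio, and channel-first layout are assumptions for the example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feature_map, w1, w2):
    """Squeeze-and-Excitation channel attention (illustrative sketch).

    feature_map: (C, H, W) array.
    w1: (C // r, C) reduction weights; w2: (C, C // r) expansion weights,
    where r is the bottleneck reduction ratio.
    Returns the feature map rescaled per channel.
    """
    # Squeeze: global average pooling -> one descriptor per channel.
    z = feature_map.mean(axis=(1, 2))            # shape (C,)
    # Excitation: bottleneck MLP, ReLU then sigmoid gating in (0, 1).
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))    # shape (C,)
    # Scale: reweight each channel by its attention score.
    return feature_map * s[:, None, None]

# Toy usage with random weights (hypothetical sizes, not from the paper).
rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
y = se_block(x, w1, w2)
print(y.shape)  # same shape as the input: (8, 4, 4)
```

Because the sigmoid gate lies strictly in (0, 1), the block can only attenuate channels, never amplify them; in a trained network the learned weights push the gates toward 1 for informative channels and toward 0 for the rest.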