A Multi-Channel Parallel Keypoint Fusion Framework for Human Pose Estimation


Bibliographic Details
Main Authors: Xilong Wang, Nianfeng Shi, Guoqiang Wang, Jie Shao, Shuaibo Zhao
Format: Article
Language: English
Published: MDPI AG, 2023-09-01
Series: Electronics
Subjects: human pose estimation; deformable convolution; self-attention; keypoint fusion; lightweight networks
Online Access: https://www.mdpi.com/2079-9292/12/19/4019
Collection: DOAJ (Directory of Open Access Journals)
Description: Although self-attention modeling can significantly reduce computational complexity, human pose estimation performance is still affected by occlusion and background noise, and undifferentiated feature fusion leads to significant information loss. To address these issues, we propose a novel human pose estimation framework called DatPose (deformable convolution and attention for human pose estimation), which combines deformable convolution and self-attention. Because human-body keypoints are mostly distributed along the edges of the body, we adopt a deformable convolution strategy to extract low-level feature information from the image. The proposed method leverages visual cues to capture detailed keypoint information, which is embedded into the Transformer encoder to learn keypoint constraints. More importantly, we design a multi-channel two-way parallel module that fuses self-attention and convolution to increase the weight of keypoints in the visual cues. To strengthen the implicit relationship of the fusion, we generate keypoint tokens and feed them to the visual cues of the fusion module and the Transformer, respectively. Experimental results on the COCO and MPII datasets show that the keypoint fusion module enriches keypoint information. Extensive experiments and visual analysis demonstrate that the model is robust in complex scenes and that the framework outperforms popular lightweight networks in human pose estimation.
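The two-way parallel fusion described in the abstract — a local convolution branch and a global self-attention branch combined into one feature map — can be illustrated with a minimal NumPy sketch. This is a generic illustration of the pattern only, not the authors' DatPose implementation: the depthwise 3×3 kernel, the identity query/key/value projections, and the fixed blend weight `alpha` are all simplifying assumptions made here for brevity.

```python
import numpy as np

def conv3x3(x, w):
    """Depthwise 3x3 convolution with zero padding (local branch).
    x: feature map (C, H, W); w: per-channel kernel (C, 3, 3)."""
    C, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for i in range(3):
        for j in range(3):
            # shift the padded map and accumulate the weighted tap
            out += w[:, i, j][:, None, None] * xp[:, i:i + H, j:j + W]
    return out

def self_attention(x):
    """Single-head spatial self-attention (global branch).
    Identity projections keep the sketch short; real models learn Q/K/V."""
    C, H, W = x.shape
    t = x.reshape(C, -1).T                      # tokens: (H*W, C)
    scores = t @ t.T / np.sqrt(C)               # pairwise similarities
    scores -= scores.max(axis=1, keepdims=True) # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    out = attn @ t                              # attention-weighted tokens
    return out.T.reshape(C, H, W)

def parallel_fusion(x, w, alpha=0.5):
    """Blend the local (conv) and global (attention) branches.
    A learned gate would replace the fixed alpha in a trained model."""
    return alpha * conv3x3(x, w) + (1 - alpha) * self_attention(x)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))   # toy feature map: 4 channels, 8x8
w = rng.standard_normal((4, 3, 3))
y = parallel_fusion(x, w)
print(y.shape)  # (4, 8, 8)
```

The fused output keeps the input shape, so the block can be stacked or dropped into an encoder stage; at `alpha=1` it degenerates to the convolution branch and at `alpha=0` to pure attention.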
DOAJ record ID: doaj.art-9c2cd14a2f6e45089c473d2c0558f064
ISSN: 2079-9292
DOI: 10.3390/electronics12194019
Volume 12, Issue 19, Article 4019 (published 1 September 2023)

Author affiliations:
Xilong Wang: College of Electronics and Information Engineering, Shanghai University of Electric Power, Shanghai 201306, China
Nianfeng Shi: School of Computer and Information Engineering, Luoyang Institute of Science and Technology, Luoyang 471023, China
Guoqiang Wang: School of Computer and Information Engineering, Luoyang Institute of Science and Technology, Luoyang 471023, China
Jie Shao: College of Electronics and Information Engineering, Shanghai University of Electric Power, Shanghai 201306, China
Shuaibo Zhao: College of Information Engineering, Henan University of Science and Technology, Luoyang 471023, China