Convolutional non‐local spatial‐temporal learning for multi‐modality action recognition

Bibliographic Details
Main Authors: Ziliang Ren, Huaqiang Yuan, Wenhong Wei, Tiezhu Zhao, Qieshi Zhang
Format: Article
Language: English
Published: Wiley 2022-09-01
Series: Electronics Letters
Online Access: https://doi.org/10.1049/ell2.12597
Description
Summary: Traditional deep convolutional networks have shown that RGB and depth modalities are complementary for video action recognition. However, it is difficult to improve recognition accuracy further, because a single convolutional network is limited in its ability to extract the underlying relationships and complementary features between these two modalities. The authors propose a novel two‐stream convolutional network for multi‐modality action recognition that extracts global features from RGB and depth sequences through joint optimisation learning. Specifically, a non‐local multi‐modality compensation block is introduced to learn semantically fused features that improve recognition performance. Experimental results on two multi‐modality human action datasets, NTU RGB+D 120 and PKU‐MMD, verify the effectiveness of the proposed recognition framework and demonstrate that the non‐local multi‐modality compensation block can learn complementary features and enhance recognition accuracy.
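
The abstract describes the design only at a high level; the authors' exact layer configuration is not given in this record. As a rough illustration, the following is a minimal PyTorch sketch of one plausible cross‐modality non‐local compensation block in the spirit of Wang et al.'s non‐local operation, where features of one stream (e.g. RGB) attend over the other stream (e.g. depth). The class name, channel sizes, and residual fusion are assumptions, not the published implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalCompensation(nn.Module):
    """Hypothetical cross-modality non-local block: features of stream A
    (e.g. RGB) are refined by attending over stream B (e.g. depth)."""

    def __init__(self, channels, reduced=None):
        super().__init__()
        reduced = reduced or channels // 2
        self.theta = nn.Conv2d(channels, reduced, kernel_size=1)  # query from stream A
        self.phi = nn.Conv2d(channels, reduced, kernel_size=1)    # key from stream B
        self.g = nn.Conv2d(channels, reduced, kernel_size=1)      # value from stream B
        self.out = nn.Conv2d(reduced, channels, kernel_size=1)    # project back to C channels

    def forward(self, x_a, x_b):
        n, c, h, w = x_a.shape
        q = self.theta(x_a).flatten(2).transpose(1, 2)      # (N, HW, C')
        k = self.phi(x_b).flatten(2)                        # (N, C', HW)
        v = self.g(x_b).flatten(2).transpose(1, 2)          # (N, HW, C')
        attn = F.softmax(q @ k, dim=-1)                     # pairwise affinities across modalities
        y = (attn @ v).transpose(1, 2).reshape(n, -1, h, w) # aggregate depth values per RGB position
        return x_a + self.out(y)                            # residual "compensation" of stream A

# Usage sketch: fuse per-frame feature maps from the two backbone streams.
block = NonLocalCompensation(channels=256)
rgb_feat = torch.randn(2, 256, 14, 14)
depth_feat = torch.randn(2, 256, 14, 14)
fused = block(rgb_feat, depth_feat)  # (2, 256, 14, 14)

In such a design, a symmetric block (queries from depth, keys/values from RGB) could compensate the depth stream in the same way, with the two refined streams fused before classification; whether the paper does this is not stated in this record.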
ISSN: 0013-5194
1350-911X