Integrally Cooperative Spatio-Temporal Feature Representation of Motion Joints for Action Recognition

In contemporary research on human action recognition, most methods consider the movement features of each joint separately, ignoring that a human action is the result of the integrally cooperative movement of all joints. To address this problem, this paper proposes two action feature representations: the Motion Collaborative Spatio-Temporal Vector (MCSTV) and the Motion Spatio-Temporal Map (MSTM). MCSTV comprehensively captures the integral and cooperative relationships among the motion joints by accumulating the weighted motion vectors of the limbs into a new vector that accounts for the movement features of the whole action. To describe the action more comprehensively and accurately, key motion energy is extracted by key-information extraction based on inter-frame energy fluctuation; this energy is projected onto three orthogonal axes, and the projections are stitched in temporal order to construct the MSTM. To combine the advantages of MSTM and MCSTV, Multi-Target Subspace Learning (MTSL) is proposed, which projects MSTM and MCSTV into a common subspace where they complement each other. Results on the MSR-Action3D and UTD-MHAD datasets show that the proposed method achieves higher recognition accuracy than most existing human action recognition algorithms.
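
The abstract only sketches the pipeline, so the following minimal NumPy sketch shows one plausible reading of MCSTV (weighted accumulation of limb motion vectors) and MSTM (inter-frame motion energy, key-frame selection by energy fluctuation, and projection onto three orthogonal axes). Every function name, joint index, weight, and threshold below is an illustrative assumption, not the authors' implementation.

import numpy as np

def mcstv(skeleton, limb_joints, limb_weights):
    # One reading of MCSTV: weighted accumulation of per-frame limb motion
    # vectors into a single collaborative vector per frame.
    motion = np.diff(skeleton, axis=0)               # (T-1, J, 3) inter-frame joint displacement
    limbs = motion[:, limb_joints, :]                # keep only the limb joints
    weights = np.asarray(limb_weights)[None, :, None]
    return (limbs * weights).sum(axis=1)             # (T-1, 3) collaborative motion vector per frame

def mstm(skeleton, energy_quantile=0.5):
    # One reading of MSTM: keep frames with high motion energy (a crude
    # stand-in for key-information extraction based on inter-frame energy
    # fluctuation), project the kept motion energy onto the x, y, z axes,
    # and stitch the projections in temporal order.
    motion = np.diff(skeleton, axis=0)               # (T-1, J, 3)
    energy = (motion ** 2).sum(axis=2)               # (T-1, J) per-joint motion energy
    frame_energy = energy.sum(axis=1)                # (T-1,) total energy per frame
    key = frame_energy >= np.quantile(frame_energy, energy_quantile)
    axis_proj = np.abs(motion[key]).sum(axis=1)      # (K, 3) energy projected on the three axes
    return axis_proj.T                               # (3, K) map: axes x temporal order

# Toy usage on a random 40-frame, 20-joint skeleton sequence; the limb joint
# indices and uniform weights are made up for illustration only.
rng = np.random.default_rng(0)
seq = rng.normal(size=(40, 20, 3))
v = mcstv(seq, limb_joints=[4, 5, 8, 9, 12, 13, 16, 17], limb_weights=[1.0] * 8)
m = mstm(seq)
print(v.shape, m.shape)                              # (39, 3) (3, K)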

Bibliographic Details
Main Authors: Xin Chao, Zhenjie Hou, Jiuzhen Liang, Tianjin Yang (School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou 213164, China)
Format: Article
Language: English
Published: MDPI AG, 2020-09-01
Series: Sensors
ISSN: 1424-8220
DOI: 10.3390/s20185180
Subjects: human action recognition; Motion Collaborative Spatio-Temporal Vector; Motion Spatio-Temporal Map; Multi-Target Subspace Learning; key information extraction based on inter-frame energy fluctuation
Online Access: https://www.mdpi.com/1424-8220/20/18/5180