Spatial Temporal Variation Graph Convolutional Networks (STV-GCN) for Skeleton-Based Emotional Action Recognition


Bibliographic Details
Main Authors: Ming-Fong Tsai, Chiung-Hung Chen
Format: Article
Language: English
Published: IEEE 2021-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/9328124/
Description
Summary: The core purpose of artificial emotional intelligence is to recognize human emotions. Technologies such as facial, semantic, or brainwave recognition have been widely proposed, but these techniques require a large number of training samples to achieve high accuracy. Human behaviour patterns can be trained and recognized from continuous movement by the Spatial Temporal Graph Convolutional Network (ST-GCN); however, ST-GCN does not distinguish the speed of motion, so subtle, emotion-dependent changes in human behaviour cannot be effectively separated. This paper proposes Spatial Temporal Variation Graph Convolutional Network training for human emotion recognition: skeleton detection technology is used to calculate the degree of change in skeleton points between consecutive actions, a nearest-neighbour algorithm classifies the speed levels, and an ST-GCN recognition model is trained to obtain the emotional state. Applying the speed-change recognition ability of the Spatial Temporal Variation Graph Convolutional Network (STV-GCN) to artificial emotional intelligence makes it possible to efficiently recognize the subtle actions of happy, sad, fearful, and angry human behaviour. Compared with ST-GCN, the proposed STV-GCN improves recognition accuracy by more than 50%.
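The abstract's preprocessing step (measuring skeleton-point change between consecutive frames and classifying the result into speed levels with a nearest-neighbour rule) can be illustrated with a minimal sketch. This is not the authors' code: the function names, the use of mean per-joint displacement as the speed feature, and the 1-nearest-neighbour prototype classifier are all assumptions made for illustration.

```python
import numpy as np

def speed_feature(skeleton_seq):
    """Mean per-joint displacement between consecutive frames.

    skeleton_seq: array of shape (T, J, D) -- T frames, J joints,
    D coordinates per joint. Returns a scalar speed measure.
    (Assumed feature; the paper's exact formulation may differ.)
    """
    diffs = np.diff(skeleton_seq, axis=0)       # frame-to-frame change, (T-1, J, D)
    step = np.linalg.norm(diffs, axis=-1)       # per-joint displacement, (T-1, J)
    return float(step.mean())

def classify_speed(feature, prototypes):
    """1-nearest-neighbour over labelled speed-level prototypes."""
    labels = [label for label, _ in prototypes]
    dists = [abs(feature - value) for _, value in prototypes]
    return labels[int(np.argmin(dists))]

# Toy example: synthetic slow and fast skeleton sequences (30 frames, 18 joints, 2-D).
rng = np.random.default_rng(0)
slow = np.cumsum(rng.normal(0.0, 0.01, size=(30, 18, 2)), axis=0)
fast = np.cumsum(rng.normal(0.0, 0.10, size=(30, 18, 2)), axis=0)

prototypes = [("slow", speed_feature(slow)), ("fast", speed_feature(fast))]
print(classify_speed(speed_feature(slow), prototypes))  # -> slow
```

In the full pipeline described by the abstract, the speed level assigned here would accompany the skeleton sequence into ST-GCN training, letting the model separate emotions that share similar poses but differ in movement speed.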
ISSN:2169-3536