Summary: | In recent years, graph convolutional networks (GCNs) have been applied extensively across numerous fields, demonstrating strong performance. Although existing GCN-based models have powerful feature representation capabilities for spatial modeling and perform exceptionally well in skeleton-based action recognition, they perform poorly on fine-grained recognition, where the key difficulty is the subtle distinctions among classes. To address this issue, we propose a novel module named the topology-embedded temporal attention module (TE-TAM). By embedding the temporally varying topology, modeled from local skeleton points in the spatial and temporal dimensions, the TE-TAM learns dynamic attention over the temporal dimension for each data sample, capturing minor intra-frame and inter-frame differences, making the features more discriminative, and increasing the distances between classes. To verify the validity of the proposed module, we inserted it into GCN-based models and tested them on FSD-30. Experimental results show that GCN-based models equipped with TE-TAMs outperform the original GCN-based models.
|