Local Multi-Head Channel Self-Attention for Facial Expression Recognition
Since the Transformer architecture was introduced in 2017, there have been many attempts to bring the *self-attention* paradigm into the field of computer vision. In this paper, we propose *LHC*: Local multi-Head Channel *self-attention*, a novel *self-attention* module […]
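The abstract describes a self-attention module that operates across channels rather than spatial positions and can be plugged into an existing CNN. Below is a minimal sketch of that general idea, assuming a PyTorch setting; `ChannelSelfAttention`, the head count, and the residual wiring are illustrative assumptions, not the authors' LHC implementation (which additionally localizes the attention heads).

```python
# Minimal, hypothetical sketch of channel-wise multi-head self-attention.
# NOT the paper's LHC module; names and shapes are illustrative assumptions.
import torch
import torch.nn as nn

class ChannelSelfAttention(nn.Module):
    """Treats each feature-map channel as a token and attends across channels."""

    def __init__(self, spatial_dim: int, num_heads: int = 4):
        super().__init__()
        # Each token's embedding is its flattened H*W map (spatial_dim = H*W),
        # so the attention weights relate channels to one another.
        self.attn = nn.MultiheadAttention(
            embed_dim=spatial_dim, num_heads=num_heads, batch_first=True
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        tokens = x.flatten(2)                       # (B, C, H*W): channels as tokens
        out, _ = self.attn(tokens, tokens, tokens)  # attention over channels
        return x + out.view(b, c, h, w)             # residual connection

# Usage: drop the module after a convolutional stage of an existing CNN.
feats = torch.randn(2, 64, 14, 14)                  # (B, C, H, W)
module = ChannelSelfAttention(spatial_dim=14 * 14)
print(module(feats).shape)                          # torch.Size([2, 64, 14, 14])
```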
| Main Authors: | Roberto Pecoraro, Valerio Basile, Viviana Bono |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2022-09-01 |
| Series: | Information |
| Online Access: | https://www.mdpi.com/2078-2489/13/9/419 |
Similar Items
- Facial Expression Recognition Based on Vision Transformer with Hybrid Local Attention
  by: Yuan Tian, et al.
  Published: (2024-07-01)
- Facial Expression Recognition Based on Separable Convolution Network and Attention Mechanism
  by: Amir Khani Yengikand, et al.
  Published: (2023-10-01)
- Facial Expression Recognition Based on Fine-Tuned Channel–Spatial Attention Transformer
  by: Huang Yao, et al.
  Published: (2023-07-01)
- Facial Expression Recognition Using Convolutional Neural Network with Attention Module
  by: Habib Bahari Khoirullah, et al.
  Published: (2022-12-01)
- Recognition of Teachers’ Facial Expression Intensity Based on Convolutional Neural Network and Attention Mechanism
  by: Kun Zheng, et al.
  Published: (2020-01-01)