Local Multi-Head Channel Self-Attention for Facial Expression Recognition

Since the Transformer architecture was introduced in 2017, there have been many attempts to bring the self-attention paradigm to the field of computer vision. In this paper, we propose LHC: Local multi-Head Channel self-attention, a novel ...

Bibliographic Details
Main Authors: Roberto Pecoraro, Valerio Basile, Viviana Bono
Format: Article
Language: English
Published: MDPI AG 2022-09-01
Series: Information
Online Access: https://www.mdpi.com/2078-2489/13/9/419