Rethinking Attention Mechanisms in Vision Transformers with Graph Structures

In this paper, we propose a new type of vision transformer (ViT) based on graph head attention (GHA). Because the multi-head attention (MHA) of a pure ViT requires a large number of parameters and tends to lose the locality of an image, we replaced MHA with GHA by applying a graph to the attention head of th...
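The abstract is truncated here, and the paper's exact GHA formulation is not given in this record. As a rough illustration of the general idea it describes, the sketch below shows graph-restricted attention for a single head: a hypothetical adjacency over a 4x4 patch grid masks the attention scores so each patch attends only to its graph neighbours, preserving locality. All names and the grid graph are illustrative assumptions, not the authors' method.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def grid_adjacency(h, w):
    # 4-neighbour adjacency with self-loops over an h x w patch grid
    # (an assumed graph; the paper's actual graph construction may differ)
    n = h * w
    adj = np.eye(n, dtype=bool)
    for r in range(h):
        for c in range(w):
            i = r * w + c
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w:
                    adj[i, rr * w + cc] = True
    return adj

def graph_head_attention(x, wq, wk, wv, adj):
    # Scaled dot-product attention restricted to graph edges:
    # scores between non-adjacent patches are set to -inf before the
    # softmax, so attention weight flows only along the graph.
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = (q @ k.T) / np.sqrt(q.shape[-1])
    scores = np.where(adj, scores, -np.inf)
    return softmax(scores) @ v

rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=(16, d))                        # 16 tokens: a 4x4 patch grid
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
out = graph_head_attention(x, wq, wk, wv, grid_adjacency(4, 4))
print(out.shape)  # (16, 8)
```

Because non-adjacent pairs receive -inf scores, their post-softmax weights are exactly zero, which is one simple way a graph can reinstate the locality that global MHA lacks.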

Bibliographic Details
Main Authors: Hyeongjin Kim, Byoung Chul Ko
Format: Article
Language: English
Published: MDPI AG 2024-02-01
Series: Sensors
Online Access: https://www.mdpi.com/1424-8220/24/4/1111