Multi-Manifold Attention for Vision Transformers
Vision Transformers have become very popular due to their state-of-the-art performance in several computer vision tasks, such as image classification and action recognition. Although their performance has been greatly enhanced through highly descriptive patch embeddings and hierarchical structures,...
| Main Authors: | Dimitrios Konstantinidis, Ilias Papastratis, Kosmas Dimitropoulos, Petros Daras |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2023-01-01 |
| Series: | IEEE Access |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10305583/ |
Similar Items
- PLG-ViT: Vision Transformer with Parallel Local and Global Self-Attention
  by: Nikolas Ebert, et al.
  Published: (2023-03-01)
- Privacy-Preserving Semantic Segmentation Using Vision Transformer
  by: Hitoshi Kiya, et al.
  Published: (2022-08-01)
- CAGNet: A Multi-Scale Convolutional Attention Method for Glass Detection Based on Transformer
  by: Xiaohang Hu, et al.
  Published: (2023-09-01)
- Artificial Intelligence Technologies for Sign Language
  by: Ilias Papastratis, et al.
  Published: (2021-08-01)
- Continuous Sign Language Recognition through a Context-Aware Generative Adversarial Network
  by: Ilias Papastratis, et al.
  Published: (2021-04-01)