HAT: A Visual Transformer Model for Image Recognition Based on Hierarchical Attention Transformation
In the field of image recognition, the Vision Transformer (ViT) achieves excellent performance. However, because ViT relies on fixed self-attention layers, it tends to introduce computational redundancy and makes it difficult to maintain the integrity of the image convolutional feature sequence during the training proc...
Main Authors: | Xuanyu Zhao, Tao Hu, Chunxia Mao, Ye Yuan, Jun Li |
Format: | Article |
Language: | English |
Published: | IEEE, 2023-01-01 |
Series: | IEEE Access |
Online Access: | https://ieeexplore.ieee.org/document/10247525/ |
Similar Items
- Audio–Visual Speech Recognition Based on Dual Cross-Modality Attentions with the Transformer Model
  by: Yong-Hyeok Lee, et al.
  Published: (2020-10-01)
- Siamese hierarchical feature fusion transformer for efficient tracking
  by: Jiahai Dai, et al.
  Published: (2022-12-01)
- Risky-Driving-Image Recognition Based on Visual Attention Mechanism and Deep Learning
  by: Wei Song, et al.
  Published: (2022-08-01)
- UATNet: U-Shape Attention-Based Transformer Net for Meteorological Satellite Cloud Recognition
  by: Zhanjie Wang, et al.
  Published: (2021-12-01)
- Facial Expression Recognition Based on Fine-Tuned Channel–Spatial Attention Transformer
  by: Huang Yao, et al.
  Published: (2023-07-01)