Exploring Multi-Level Attention and Semantic Relationship for Remote Sensing Image Captioning
Remote sensing image captioning, which aims to understand high-level semantic information and the interactions of different ground objects, has emerged as a research topic in recent years. Though image captioning has developed rapidly with convolutional neural networks (CNNs) and recurrent neural netwo...
Main Authors: Zhenghang Yuan, Xuelong Li, Qi Wang
Format: Article
Language: English
Published: IEEE, 2020-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/8943170/
Similar Items
- Cross-Modal Retrieval and Semantic Refinement for Remote Sensing Image Captioning
  by: Zhengxin Li, et al. Published: (2024-01-01)
- VAA: Visual Aligning Attention Model for Remote Sensing Image Captioning
  by: Zhengyuan Zhang, et al. Published: (2019-01-01)
- A Novel Hybrid Attention-Driven Multistream Hierarchical Graph Embedding Network for Remote Sensing Object Detection
  by: Shu Tian, et al. Published: (2022-10-01)
- MC-Net: multi-scale contextual information aggregation network for image captioning on remote sensing images
  by: Haiyan Huang, et al. Published: (2023-12-01)
- Multiscale Multiinteraction Network for Remote Sensing Image Captioning
  by: Yong Wang, et al. Published: (2022-01-01)