Interactive Change-Aware Transformer Network for Remote Sensing Image Change Captioning

Remote sensing image change captioning (RSICC) aims to automatically generate sentences that describe the content differences between bitemporal remote sensing images. Recent works extract the changes between bitemporal features and employ a hierarchical approach to fuse multiple changes of interest, yielding change captions. However, these methods directly aggregate all features, potentially incorporating non-change-focused information from each encoder layer into the change caption decoder and adversely affecting change captioning performance. To address this problem, we propose an Interactive Change-Aware Transformer Network (ICT-Net). ICT-Net extracts and incorporates the most critical changes of interest in each encoder layer to improve change description generation. It first extracts bitemporal visual features from a CNN backbone and employs an Interactive Change-Aware Encoder (ICE) to capture the crucial differences between these features. Specifically, the ICE interactively captures the most change-aware, discriminative information between the paired bitemporal features through difference and content attention encoding. A Multi-Layer Adaptive Fusion (MAF) module is proposed to adaptively aggregate the relevant change-aware features across ICE layers while minimizing the impact of irrelevant visual features. Moreover, we extend the ICE to extract multi-scale changes and introduce a novel Cross Gated-Attention (CGA) module into the change caption decoder to select the essential discriminative multi-scale features and improve change captioning performance. We evaluate our method on two RSICC datasets (LEVIR-CC and LEVIRCCD), and the experimental results demonstrate that our method achieves state-of-the-art performance.
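As a rough illustration of the pipeline described above, the following is a minimal PyTorch sketch of two ideas the abstract names: attending bitemporal features to their difference (in the spirit of the ICE) and gating per-layer change features before decoding (in the spirit of the MAF). All class names, shapes, and design details here are assumptions made for illustration; this is not the authors' implementation.

import torch
import torch.nn as nn


class DifferenceAttention(nn.Module):
    # Hypothetical ICE-style block: attend a temporal feature map to the
    # explicit difference with its counterpart, so attention concentrates on
    # changed regions (an assumption about the design, not the paper's code).
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feat, other):
        diff = other - feat                                   # coarse change signal
        out, _ = self.attn(query=diff, key=feat, value=feat)  # difference-guided attention
        return self.norm(feat + out)                          # residual + layer norm


class AdaptiveLayerFusion(nn.Module):
    # Hypothetical MAF-style gate: learn a scalar weight per encoder layer and
    # take a weighted sum, so layers carrying little change information contribute less.
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(dim, 1)

    def forward(self, layer_feats):                           # list of (B, N, D) tensors
        stacked = torch.stack(layer_feats, dim=1)                     # (B, L, N, D)
        weights = torch.softmax(self.gate(stacked.mean(2)), dim=1)    # (B, L, 1)
        return (stacked * weights.unsqueeze(-1)).sum(dim=1)           # (B, N, D)


if __name__ == "__main__":
    B, N, D = 2, 49, 256                          # batch, 7x7 grid tokens, channels
    t1, t2 = torch.randn(B, N, D), torch.randn(B, N, D)
    ice = DifferenceAttention(D)
    change_feats = [ice(t1, t2), ice(t2, t1)]     # stand-in for per-layer change features
    fused = AdaptiveLayerFusion(D)(change_feats)
    print(fused.shape)                            # torch.Size([2, 49, 256])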


Bibliographic Details
Main Authors: Chen Cai, Yi Wang, Kim-Hui Yap
Format: Article
Language: English
Published: MDPI AG, 2023-12-01
Series: Remote Sensing
Subjects: image change captioning; remote sensing; multi-layer change awareness; transformer
Online Access: https://www.mdpi.com/2072-4292/15/23/5611
author Chen Cai
Yi Wang
Kim-Hui Yap
collection DOAJ
description Remote sensing image change captioning (RSICC) aims to automatically generate sentences that describe the content differences between bitemporal remote sensing images. Recent works extract the changes between bitemporal features and employ a hierarchical approach to fuse multiple changes of interest, yielding change captions. However, these methods directly aggregate all features, potentially incorporating non-change-focused information from each encoder layer into the change caption decoder and adversely affecting change captioning performance. To address this problem, we propose an Interactive Change-Aware Transformer Network (ICT-Net). ICT-Net extracts and incorporates the most critical changes of interest in each encoder layer to improve change description generation. It first extracts bitemporal visual features from a CNN backbone and employs an Interactive Change-Aware Encoder (ICE) to capture the crucial differences between these features. Specifically, the ICE interactively captures the most change-aware, discriminative information between the paired bitemporal features through difference and content attention encoding. A Multi-Layer Adaptive Fusion (MAF) module is proposed to adaptively aggregate the relevant change-aware features across ICE layers while minimizing the impact of irrelevant visual features. Moreover, we extend the ICE to extract multi-scale changes and introduce a novel Cross Gated-Attention (CGA) module into the change caption decoder to select the essential discriminative multi-scale features and improve change captioning performance. We evaluate our method on two RSICC datasets (LEVIR-CC and LEVIRCCD), and the experimental results demonstrate that our method achieves state-of-the-art performance.
format Article
id doaj.art-186d58f3e1d4498db28d4518ba0d45e2
institution Directory Open Access Journal
issn 2072-4292
language English
publishDate 2023-12-01
publisher MDPI AG
record_format Article
series Remote Sensing
citation Remote Sensing, vol. 15, no. 23, article 5611 (2023-12-01); doi: 10.3390/rs15235611
affiliations Chen Cai: School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore; Yi Wang: Department of Electrical and Electronic Engineering, The Hong Kong Polytechnic University, Hong Kong; Kim-Hui Yap: School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
title Interactive Change-Aware Transformer Network for Remote Sensing Image Change Captioning
topic image change captioning
remote sensing
multi-layer change awareness
transformer
url https://www.mdpi.com/2072-4292/15/23/5611