VAM-Net: Vegetation-Attentive deep network for Multi-modal fusion of visible-light and vegetation-sensitive images
Multi-modal fusion of remote sensing images is challenging because of the intricate imaging mechanisms and radiometric variations across modalities; the fusion of visible-light and vegetation-sensitive images faces these same difficulties. Traditional methods have seldom...
Main Authors: Yufu Zang, Shuye Wang, Haiyan Guan, Daifeng Peng, Jike Chen, Yanming Chen, Mahmoud R. Delavar
Format: Article
Language: English
Published: Elsevier, 2024-03-01
Series: International Journal of Applied Earth Observation and Geoinformation
Online Access: http://www.sciencedirect.com/science/article/pii/S1569843223004661
Similar Items
- Agricultural development driven by the digital economy: improved EfficientNet vegetable quality grading
  by: Jun Wen, et al.
  Published: (2024-01-01)
- Multi-Modality Emotion Recognition Model with GAT-Based Multi-Head Inter-Modality Attention
  by: Changzeng Fu, et al.
  Published: (2020-08-01)
- Multi-Temporal Unmanned Aerial Vehicle Remote Sensing for Vegetable Mapping Using an Attention-Based Recurrent Convolutional Neural Network
  by: Quanlong Feng, et al.
  Published: (2020-05-01)
- DA-GAN: Dual Attention Generative Adversarial Network for Cross-Modal Retrieval
  by: Liewu Cai, et al.
  Published: (2022-01-01)
- VegNet: Dataset of vegetable quality images for machine learning applications
  by: Yogesh Suryawanshi, et al.
  Published: (2022-12-01)