Transformer-based semantic segmentation for large-scale building footprint extraction from very-high resolution satellite images

Extracting building footprints from extensive very-high spatial resolution (VHSR) remote sensing data is crucial for diverse applications, including surveying, urban studies, population estimation, identification of informal settlements, and disaster management. Although convolutional neural networks (CNNs) are commonly utilized for this purpose, their effectiveness is constrained by limitations in capturing long-range relationships and contextual details due to the localized nature of convolution operations. This study introduces the masked-attention mask transformer (Mask2Former), based on the Swin Transformer, for building footprint extraction from large-scale satellite imagery. To enhance the capture of large-scale semantic information and extract multiscale features, a hierarchical vision transformer with shifted windows (Swin Transformer) serves as the backbone network. An extensive analysis compares the efficiency and generalizability of Mask2Former with four CNN models (PSPNet, DeepLabV3+, UpperNet-ConvNext, and SegNeXt) and two transformer-based models (UpperNet-Swin and SegFormer) featuring different complexities. Results reveal superior performance of transformer-based models over CNN-based counterparts, showcasing exceptional generalization across diverse testing areas with varying building structures, heights, and sizes. Specifically, Mask2Former with the Swin transformer backbone achieves a mean intersection over union between 88% and 93%, along with a mean F-score (mF-score) ranging from 91% to 96.35% across various urban landscapes. © 2024 COSPAR
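
This record carries no source code; the sketch below is only a minimal illustration of the kind of pipeline the abstract describes, running semantic segmentation with a Mask2Former model on a Swin Transformer backbone through the Hugging Face transformers library. The checkpoint name (a public ADE20K-trained model used as a stand-in), the two-step processor/model API usage, and the input tile path are assumptions for illustration; nothing here reproduces the authors' actual training configuration for building footprints.

    # Minimal sketch (assumptions noted above): Mask2Former with a Swin
    # backbone for semantic segmentation via Hugging Face transformers.
    from PIL import Image
    import torch
    from transformers import (
        Mask2FormerForUniversalSegmentation,
        Mask2FormerImageProcessor,
    )

    # Assumed public checkpoint; the paper's building-footprint weights
    # are not available through this record.
    ckpt = "facebook/mask2former-swin-tiny-ade-semantic"
    processor = Mask2FormerImageProcessor.from_pretrained(ckpt)
    model = Mask2FormerForUniversalSegmentation.from_pretrained(ckpt).eval()

    image = Image.open("satellite_tile.png").convert("RGB")  # hypothetical VHSR tile

    with torch.no_grad():
        outputs = model(**processor(images=image, return_tensors="pt"))

    # Collapse the predicted (mask, class) pairs into one per-pixel class map
    # at the tile's original resolution; image.size is (W, H), hence [::-1].
    semantic_map = processor.post_process_semantic_segmentation(
        outputs, target_sizes=[image.size[::-1]]
    )[0]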

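For reference, the two headline metrics quoted in the abstract, mean intersection over union (mIoU) and mean F-score (mF-score), are standard per-class scores averaged over classes. The following self-contained numpy sketch shows one conventional way to compute them from a per-pixel confusion matrix; the function names and the toy two-class example are illustrative, not taken from the paper.

    import numpy as np

    def confusion_matrix(pred, target, num_classes):
        # Per-pixel confusion matrix: rows are ground truth, columns are predictions.
        valid = (target >= 0) & (target < num_classes)
        idx = num_classes * target[valid].astype(int) + pred[valid].astype(int)
        return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

    def miou_and_mfscore(cm):
        # IoU = TP / (TP + FP + FN); F-score (Dice) = 2*TP / (2*TP + FP + FN),
        # each computed per class and then averaged over classes.
        tp = np.diag(cm).astype(float)
        fp = cm.sum(axis=0) - tp
        fn = cm.sum(axis=1) - tp
        iou = tp / np.maximum(tp + fp + fn, 1.0)
        f = 2.0 * tp / np.maximum(2.0 * tp + fp + fn, 1.0)
        return iou.mean(), f.mean()

    # Toy two-class example (0 = background, 1 = building), purely illustrative.
    target = np.array([0, 1, 1, 1])
    pred = np.array([0, 1, 0, 1])
    print(miou_and_mfscore(confusion_matrix(pred, target, num_classes=2)))
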
Bibliographic Details
Main Authors: Gibril, Mohamed Barakat A.; Al-Ruzouq, Rami; Shanableh, Abdallah; Jena, Ratiranjan; Bolcek, Jan; Mohd Shafri, Helmi Zulhaidi; Ghorbanzadeh, Omid
Format: Article (peer reviewed)
Language: English
Published: Elsevier, 2024
Published in: Advances in Space Research, 73 (10), pp. 1-18. ISSN 0273-1177; ESSN: 1879-1948
DOI: 10.1016/j.asr.2024.03.002
Institution: Universiti Putra Malaysia
Online Access: http://psasir.upm.edu.my/id/eprint/112078/1/1-s2.0-S0273117724002205-main.pdf
Publisher Link: https://www.sciencedirect.com/science/article/pii/S0273117724002205?via%3Dihub