Development of deep learning-based fusion method for building detection using LiDAR and very high resolution images


Bibliographic Details
Main Author: Nahhas, Faten Hamed
Format: Thesis
Language: English
Published: 2018
Subjects: Structural engineering; Optical radar
Online Access: http://psasir.upm.edu.my/id/eprint/92183/1/FK%202019%2055%20-%20IR.pdf
description Buildings play an essential role in urban construction, planning, and climate studies. Extracting detailed and accurate information about buildings, such as value, usage, height, and size, supports town planning, urban management, and three-dimensional (3D) visualization. Building extraction from remote sensing data, especially LiDAR (Light Detection And Ranging) and VHR (Very High Resolution) images, is a difficult task and an open research problem. For this purpose, scientists have been developing methods that utilize the standard pixel features and the additional height features of the data in various ways. Extracting buildings in urban areas is more complex than in rural areas because of nearby objects, such as trees, which frequently have elevations similar to those of buildings. Additional challenges come from different material combinations that create a variety of intensities in the spectral bands employed. Two methods are widely used in the literature: pixel-based and object-based methods (the latter also known as OBIA). The former produces salt-and-pepper noise in the detected buildings, while the latter requires proper feature selection and image segmentation. Both methods have poor generalization and transferability to other environments, are scale dependent, and require good-quality training examples. The main goal of this research is therefore to design and optimize deep learning-based fusion techniques using autoencoders (AE) and convolutional neural networks (CNN) for integrating LiDAR and WorldView-3 (WV3) data for building extraction. Optimization was carried out using grid and random search to improve model performance. Specifically, two fusion methods were developed: fusion of a LiDAR-based digital surface model (DSM) with an orthophoto (LO-Fusion), and fusion of the LiDAR DSM with the WV3 image (LW-Fusion). The results of this thesis are promising.
The method achieved the highest accuracies of 97.34%, 94.48%, and 94.37% in the three subset areas, outperforming traditional methods such as support vector machine (SVM), random forest (RF), and K-nearest neighbour (KNN). The highest validation accuracy among these traditional methods was 89.04%, achieved by SVM. Although KNN achieved better training accuracy (92.34%) than RF, RF achieved better validation accuracy (86.17%) than KNN. Furthermore, a CNN and a random-search-optimized CNN were used to detect buildings in the same areas using only LiDAR and orthophoto data. The experimental results show that fusing the additional features of the WV3 image with LiDAR data can increase validation accuracy by almost 11%. The validation accuracy of the optimized CNN with only LiDAR and orthophoto data was 86.19%, somewhat lower than those of SVM and RF. Overall, proper optimization can improve deep learning models such as CNNs and autoencoders to the extent of outperforming OBIA for building detection from LiDAR and VHR satellite data.
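Two techniques mentioned in the description can be sketched concretely: early fusion of a LiDAR DSM with optical bands by channel stacking, and random-search hyperparameter optimization. The sketch below is illustrative only; the function names, search-space values, and per-channel scaling are assumptions for this example, not the thesis's actual pipeline, and the CNN itself is abstracted behind an `evaluate` callback because its architecture is not described here.

```python
import random
import numpy as np

def early_fusion(dsm, rgb):
    """Stack a LiDAR-derived DSM as an extra channel on an orthophoto.

    dsm: (H, W) float array of surface heights
    rgb: (H, W, 3) float array of spectral bands
    returns: (H, W, 4) fused array, each channel min-max scaled to [0, 1]
    """
    fused = np.dstack([rgb, dsm[..., None]]).astype(np.float64)
    for c in range(fused.shape[-1]):
        band = fused[..., c]
        rng = band.max() - band.min()
        fused[..., c] = (band - band.min()) / rng if rng > 0 else 0.0
    return fused

# Hypothetical search space for a small CNN (values are illustrative).
SEARCH_SPACE = {
    "filters": [16, 32, 64],
    "kernel_size": [3, 5],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

def random_search(evaluate, n_trials=20, seed=0):
    """Sample n_trials configurations uniformly and keep the best one.

    evaluate: callback that trains/scores a model for one configuration
              and returns a validation score (higher is better).
    """
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

Random search, unlike an exhaustive grid, covers large search spaces with a fixed trial budget, which is why both are commonly combined as in this work: a coarse grid first, then random sampling around promising regions.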
Citation: Nahhas, Faten Hamed (2018) Development of deep learning-based fusion method for building detection using LiDAR and very high resolution images. Doctoral thesis, Universiti Putra Malaysia.