Integration of Convolutional Neural Networks and Object-Based Post-Classification Refinement for Land Use and Land Cover Mapping with Optical and SAR Data


Bibliographic Details
Main Authors: Shengjie Liu, Zhixin Qi, Xia Li, Anthony Gar-On Yeh
Format: Article
Language: English
Published: MDPI AG, 2019-03-01
Series: Remote Sensing, Vol. 11, No. 6, Article 690
ISSN: 2072-4292
DOI: 10.3390/rs11060690
Subjects: object-based post-classification refinement (OBPR); convolutional neural network (CNN); synthetic aperture radar (SAR); land use and land cover; object-based image analysis (OBIA)
Online Access: https://www.mdpi.com/2072-4292/11/6/690
Author Affiliations:
Shengjie Liu: Guangdong Provincial Key Laboratory of Urbanization and Geo-simulation, School of Geography and Planning, Sun Yat-sen University, Guangzhou 510275, China
Zhixin Qi: Guangdong Provincial Key Laboratory of Urbanization and Geo-simulation, School of Geography and Planning, Sun Yat-sen University, Guangzhou 510275, China
Xia Li: School of Geographic Sciences, Key Lab. of Geographic Information Science (Ministry of Education), East China Normal University, 500 Dongchuan Rd, Shanghai 200241, China
Anthony Gar-On Yeh: Department of Urban Planning and Design, The University of Hong Kong, Pokfulam Road, Hong Kong, China
Description: Object-based image analysis (OBIA) has been widely used for land use and land cover (LULC) mapping with optical and synthetic aperture radar (SAR) images because it can exploit spatial information, reduce salt-and-pepper effects, and delineate LULC boundaries. With recent advances in machine learning, convolutional neural networks (CNNs) have become state-of-the-art classification algorithms. However, CNNs cannot be easily integrated with OBIA because the processing unit of a CNN is a rectangular image patch, whereas that of OBIA is an irregular image object. To obtain object-based thematic maps, this study developed a new method that integrates object-based post-classification refinement (OBPR) and CNNs for LULC mapping using Sentinel optical and SAR data. After the CNN produced the classification map, each image object was labeled with the most frequent land cover category of its pixels. The proposed method was tested on the optical-SAR Sentinel Guangzhou dataset with 10 m spatial resolution, the optical-SAR Zhuhai-Macau local climate zones (LCZ) dataset with 100 m spatial resolution, and a hyperspectral benchmark, the University of Pavia dataset, with 1.3 m spatial resolution. It outperformed object-based support vector machine (SVM) and random forest (RF) classifiers. SVM and RF benefited more from the combined use of optical and SAR data than the CNN did, whereas the spatial information learned by the CNN was highly effective for classification. With the ability to extract spatial features and maintain object boundaries, the proposed method considerably improved the classification accuracy of urban ground targets. It achieved an overall accuracy (OA) of 95.33% on the Sentinel Guangzhou dataset, 77.64% on the Zhuhai-Macau LCZ dataset, and 95.70% on the University of Pavia dataset with only 10 labeled samples per class.
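The OBPR step summarized in the description (relabel each image object with the most frequent class among its pixels) can be illustrated with a short sketch. The following is a minimal NumPy illustration, not the authors' released code: it assumes a per-pixel class map produced by a CNN and a precomputed segmentation map of object ids, and the function name obpr_majority_refine is a hypothetical placeholder.

import numpy as np

def obpr_majority_refine(class_map: np.ndarray, segments: np.ndarray) -> np.ndarray:
    # class_map: 2-D array of integer class labels from a pixel-wise (e.g. CNN) classification.
    # segments:  2-D array of integer object ids from an image segmentation (same shape).
    # Returns a refined class map in which all pixels of an object share one label.
    if class_map.shape != segments.shape:
        raise ValueError("class map and segmentation must have the same shape")

    refined = np.empty_like(class_map)
    # Majority vote inside each segment (image object).
    for obj_id in np.unique(segments):
        mask = segments == obj_id
        labels, counts = np.unique(class_map[mask], return_counts=True)
        refined[mask] = labels[np.argmax(counts)]
    return refined

if __name__ == "__main__":
    # Toy 4x4 example: two objects, with a few isolated errors in the CNN map.
    cnn_map = np.array([[0, 0, 1, 1],
                        [0, 2, 1, 1],
                        [0, 0, 1, 0],
                        [0, 0, 1, 1]])
    objects = np.array([[1, 1, 2, 2],
                        [1, 1, 2, 2],
                        [1, 1, 2, 2],
                        [1, 1, 2, 2]])
    print(obpr_majority_refine(cnn_map, objects))
    # Object 1 -> class 0, object 2 -> class 1: the isolated errors are removed,
    # which is how the refinement suppresses salt-and-pepper noise while keeping object boundaries.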