Road Extraction from UAV Images Using a Deep ResDCLnet Architecture

Bibliographic Details
Main Authors: Wuttichai Boonpook, Yumin Tan, Bingxin Bai, Bo Xu
Format: Article
Language: English
Published: Taylor & Francis Group 2021-05-01
Series: Canadian Journal of Remote Sensing
Online Access: http://dx.doi.org/10.1080/07038992.2021.1913046
Description
Summary: Obtaining near real-time road features is critical in emergency situations such as floods and geological disasters. Remote sensing images with very high spatial resolution usually contain many details of land use and land cover, which complicates the detection and extraction of road features. In this paper, we propose a deep residual deconvolutional network (Deep ResDCLnet) to extract road features from unmanned aerial vehicle (UAV) images. The proposed network is based on the SegNet deep neural network architecture, rich skip connections within residual bottlenecks, and the direct relationships among intermediate feature maps established by the pixel deconvolution algorithm. It can improve the performance of a supervised learning model by differentiating and extracting complex road features in aerial photographs and UAV imagery. The proposed network is evaluated on the standard public Massachusetts road dataset and a UAV dataset collected along the Yangtze River, and is compared with four state-of-the-art network architectures. The results show that Deep ResDCLnet outperforms all four networks in extraction accuracy, demonstrating the effectiveness of the network for road extraction from very high spatial resolution imagery.
ISSN: 1712-7971
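
The summary names two architectural ideas: skip connections through residual bottlenecks, and pixel deconvolution, which generates the upsampled feature maps so that later maps depend directly on earlier ones instead of being produced independently. The PyTorch sketch below illustrates both ideas in a minimal, self-contained form; the module names, layer widths, and upsampling arrangement are illustrative assumptions, not the authors' published Deep ResDCLnet configuration.

```python
# Hedged sketch: a residual bottleneck block plus a simplified
# pixel-deconvolution-style upsampler, in the spirit of the components
# the abstract mentions. All hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn


class ResidualBottleneck(nn.Module):
    """1x1 -> 3x3 -> 1x1 bottleneck with an identity (or projected) skip."""

    def __init__(self, in_ch, mid_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, 1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(mid_ch), nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )
        # Project the skip connection only when channel counts differ.
        self.skip = (nn.Identity() if in_ch == out_ch
                     else nn.Conv2d(in_ch, out_ch, 1, bias=False))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))


class PixelDeconvUp(nn.Module):
    """Simplified pixel-deconvolution upsampling (2x): the second group of
    intermediate feature maps is generated from the first group, so the
    maps that are interleaved into the upsampled output are directly
    related rather than predicted independently."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.first = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.second = nn.Conv2d(out_ch, 3 * out_ch, 3, padding=1)
        self.shuffle = nn.PixelShuffle(2)  # interleave 4*out_ch maps into 2x resolution

    def forward(self, x):
        f1 = self.first(x)       # first intermediate feature maps
        f2 = self.second(f1)     # remaining maps conditioned on f1
        return self.shuffle(torch.cat([f1, f2], dim=1))


if __name__ == "__main__":
    x = torch.randn(1, 64, 32, 32)
    y = PixelDeconvUp(64, 32)(ResidualBottleneck(64, 16, 64)(x))
    print(y.shape)  # torch.Size([1, 32, 64, 64])
```

In a full SegNet-style encoder-decoder, blocks like these would be stacked per stage, with a final 1x1 convolution and sigmoid producing the binary road mask; that assembly is omitted here for brevity.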