Adaboost-like End-to-End multiple lightweight U-nets for road extraction from optical remote sensing images


Bibliographic Details
Main Authors: Ziyi Chen, Cheng Wang, Jonathan Li, Wentao Fan, Jixiang Du, Bineng Zhong
Format: Article
Language:English
Published: Elsevier 2021-08-01
Series:International Journal of Applied Earth Observations and Geoinformation
Subjects:
Online Access:http://www.sciencedirect.com/science/article/pii/S0303243421000489
Description
Summary:Road extraction from optical remote sensing images has many important application scenarios, such as navigation, autonomous driving, and road network planning. Current deep learning based models have achieved great success in road extraction. Most deep learning models improve their performance by using deeper layers, which bloats the trained model. Moreover, training a deep model is difficult and prone to overfitting. Thus, this paper studies improving performance by combining multiple lightweight models. However, multiple isolated lightweight models may in fact perform worse than a single deeper and larger model, because those models are trained in isolation. To solve this problem, we propose an AdaBoost-like End-to-End Multiple Lightweight U-Nets model (AEML U-Nets) for road extraction. Our model consists of multiple lightweight U-Net parts, where the output of each U-Net serves as the input to the next. We formulate training as a multiple-objective optimization problem so that all the U-Nets are trained jointly. The approach is tested on two open datasets (LRSNY and Massachusetts) and the Shaoshan dataset. Experimental results show that our model outperforms other state-of-the-art semantic segmentation methods.
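The cascade described in the summary, where each U-Net's output map feeds the next stage and a per-stage loss is summed into one joint objective, can be sketched in plain Python. This is a minimal conceptual sketch, not the authors' implementation: `toy_unet` is a hypothetical per-pixel stand-in for a real lightweight U-Net, and the summed binary cross-entropy losses are an assumed form of the paper's multiple-objective training.

```python
import math

def toy_unet(weight, image):
    # Hypothetical stand-in for one lightweight U-Net:
    # a per-pixel sigmoid of a scaled input (NOT a real segmentation network).
    return [[1.0 / (1.0 + math.exp(-weight * px)) for px in row]
            for row in image]

def bce_loss(pred, target):
    # Per-pixel binary cross-entropy, averaged over the road mask.
    eps = 1e-7
    total, n = 0.0, 0
    for prow, trow in zip(pred, target):
        for p, t in zip(prow, trow):
            p = min(max(p, eps), 1.0 - eps)
            total += -(t * math.log(p) + (1.0 - t) * math.log(1.0 - p))
            n += 1
    return total / n

def aeml_forward(stage_weights, image, target):
    """Cascade of stages: each stage's output is the next stage's input.
    The joint objective sums every stage's loss, so all stages are
    optimized together rather than trained in isolation."""
    x, losses = image, []
    for w in stage_weights:
        x = toy_unet(w, x)          # output of prior stage -> input of next
        losses.append(bce_loss(x, target))
    return x, sum(losses)           # final prediction map, joint loss
```

Usage: `aeml_forward([1.0, 2.0, 3.0], image, mask)` runs a three-stage cascade and returns the last stage's probability map together with the summed loss that a joint optimizer would minimize.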
ISSN:1569-8432