DEEP NEURAL NETWORKS FOR ABOVE-GROUND DETECTION IN VERY HIGH SPATIAL RESOLUTION DIGITAL ELEVATION MODELS
Main Authors:
Format: Article
Language: English
Published: Copernicus Publications, 2015-03-01
Series: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
Online Access: http://www.isprs-ann-photogramm-remote-sens-spatial-inf-sci.net/II-3-W4/103/2015/isprsannals-II-3-W4-103-2015.pdf
Summary: Deep Learning techniques have lately received increased attention for achieving state-of-the-art results in many classification problems, including various vision tasks. In this work, we implement a Deep Learning technique for classifying above-ground objects within urban environments using a Multilayer Perceptron model and VHSR DEM data. In this context, we propose a novel method called M-ramp, which significantly improves the classifier's estimations by neglecting artefacts, minimizing convergence time and improving overall accuracy. We support the importance of using the M-ramp model in DEM classification through a set of experiments with both quantitative and qualitative results. Specifically, we initially train our algorithm with random DEM tiles and their respective point labels, amounting to less than 0.1% of the test area, which depicts the city center of Munich (25 km²). Furthermore, with no additional training, we classify two much larger unseen extents of the greater Munich area (424 km²) and Dongying city, China (257 km²) and evaluate the respective results to demonstrate knowledge transferability. Through the use of M-ramp, we accelerate convergence by a factor of 8 and decrease the above-ground relative error by 24.8% and 5.5% over the two datasets, respectively.
ISSN: 2194-9042, 2194-9050
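The summary above outlines the core setup: a Multilayer Perceptron trained on small, point-labelled DEM tiles and then applied, without retraining, to much larger unseen extents. The sketch below is only a rough illustration of that kind of tile classifier on synthetic data; the tile size, network width, training schedule and data generator are assumptions, and the authors' M-ramp method is not reproduced here.

```python
# Minimal sketch (not the authors' code): a tiny Multilayer Perceptron that
# classifies small DEM tiles as "above-ground" vs "ground" at the tile centre,
# trained on synthetic point-labelled patches. Tile size, layer width and the
# data generator are illustrative assumptions; M-ramp is NOT implemented here.
import numpy as np

rng = np.random.default_rng(0)
TILE = 9          # assumed tile edge length in pixels
D = TILE * TILE   # flattened input dimension
H = 64            # hidden units (assumption)

def synthetic_tiles(n):
    """Flat-terrain tiles (label 0) and tiles with a raised central block
    mimicking a building footprint (label 1)."""
    X = rng.normal(0.0, 0.2, size=(n, TILE, TILE))       # terrain noise
    y = rng.integers(0, 2, size=n)
    for i in np.flatnonzero(y):
        X[i, 3:6, 3:6] += rng.uniform(3.0, 10.0)          # above-ground height
    X -= X.mean(axis=(1, 2), keepdims=True)               # per-tile height normalisation
    return X.reshape(n, D), y.astype(float)

# One-hidden-layer MLP with a sigmoid output, trained by plain SGD.
W1 = rng.normal(0, 0.1, (D, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return h, p.ravel()

lr = 0.05
for step in range(2000):
    X, y = synthetic_tiles(64)                 # random point-labelled tiles
    h, p = forward(X)
    grad_logit = (p - y)[:, None] / len(y)     # d(cross-entropy)/d(logit)
    gW2 = h.T @ grad_logit; gb2 = grad_logit.sum(0)
    grad_h = grad_logit @ W2.T * (1 - h ** 2)  # backprop through tanh
    gW1 = X.T @ grad_h; gb1 = grad_h.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Evaluate on unseen synthetic tiles (a stand-in for the unseen city extents).
Xt, yt = synthetic_tiles(1000)
_, pt = forward(Xt)
print("accuracy on unseen tiles:", ((pt > 0.5) == yt.astype(bool)).mean())
```

Running the script prints the accuracy on a held-out batch of synthetic tiles, loosely mirroring the evaluation on unseen extents described in the summary; per-tile height normalisation stands in for the kind of terrain-offset handling a real DEM pipeline would need.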