Repurposing a deep learning network to filter and classify volunteered photographs for land cover and land use characterization

This paper extends recent research into the usefulness of volunteered photos for land cover extraction, and investigates whether this usefulness can be automatically assessed by an easily accessible, off-the-shelf neural network pre-trained on a variety of scene characteristics. Geo-tagged photographs are sometimes presented to volunteers as part of a game which requires them to extract relevant facts about land use. The challenge is to select the most relevant photographs in order to most efficiently extract the useful information while maintaining the engagement and interests of volunteers. By repurposing an existing network which had been trained on an extensive library of potentially relevant features, we can quickly carry out initial assessments of the general value of this approach, pick out especially salient features, and identify focus areas for future neural network training and development. We compare two approaches to extract land cover information from the network: a simple post hoc weighting approach accessible to non-technical audiences and a more complex decision tree approach that involves training on domain-specific features of interest. Both approaches had reasonable success in characterizing human influence within a scene when identifying the land use types (as classified by Urban Atlas) present within a buffer around the photograph’s location. This work identifies important limitations and opportunities for using volunteered photographs as follows: (1) the false precision of a photograph’s location is less useful for identifying on-the-spot land cover than the information it can give on neighbouring combinations of land cover; (2) ground-acquired photographs, interpreted by a neural network, can supplement plan view imagery by identifying features which will never be discernible from above; (3) when dealing with contexts where there are very few exemplars of particular classes, an independent a posteriori weighting of existing scene attributes and categories can buffer against over-specificity.
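As a rough illustration of the simpler of the two approaches described above (post hoc weighting of a pre-trained network's outputs), the sketch below scores a photograph with an off-the-shelf CNN and maps a few of its generic categories onto coarse land-cover classes. The network choice (an ImageNet-trained ResNet-18 from torchvision), the CATEGORY_TO_LANDCOVER weight table and the landcover_scores helper are illustrative assumptions only; the paper does not publish this implementation, and its network was pre-trained on scene characteristics rather than ImageNet object classes.

```python
from collections import defaultdict

import torch
from PIL import Image
from torchvision import models

# Load an off-the-shelf pre-trained network and its matching preprocessing.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]  # human-readable category labels

# Hypothetical post hoc weights: network category -> (land-cover class, weight).
# A real mapping would cover far more categories and be tuned against
# reference data such as Urban Atlas polygons.
CATEGORY_TO_LANDCOVER = {
    "lakeside": ("water", 1.0),
    "seashore": ("water", 0.8),
    "alp": ("natural vegetation", 0.7),
    "valley": ("natural vegetation", 0.6),
    "streetcar": ("urban fabric", 0.9),
    "monastery": ("urban fabric", 0.8),
}

def landcover_scores(photo_path: str, top_k: int = 10) -> dict:
    """Score one geo-tagged photograph against coarse land-cover classes."""
    image = Image.open(photo_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1).squeeze(0)

    scores = defaultdict(float)
    for prob, idx in zip(*probs.topk(top_k)):
        label = categories[idx.item()]
        if label in CATEGORY_TO_LANDCOVER:
            landcover, weight = CATEGORY_TO_LANDCOVER[label]
            scores[landcover] += weight * prob.item()
    return dict(scores)

# Example usage:
# print(landcover_scores("volunteered_photo.jpg"))
```

In the paper's second, more complex approach, the same network outputs would instead feed a decision tree trained on domain-specific features of interest rather than a hand-chosen weight table.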


Bibliographic Details
Main Authors: Lukasz Tracewski (Aston University), Lucy Bastin (Aston University), Cidalia C. Fonte (INESC Coimbra, University of Coimbra)
Format: Article
Language: English
Published: Taylor & Francis Group, 2017-07-01
Series: Geo-spatial Information Science, Vol. 20, No. 3, pp. 252-268
ISSN: 1009-5020, 1993-5153
Subjects: Land cover; land use; volunteered geographic information (VGI); photograph; convolutional neural network; machine learning
Online Access: http://dx.doi.org/10.1080/10095020.2017.1373955