Automating Ground Control Point Detection in Drone Imagery: From Computer Vision to Deep Learning
Drone-based photogrammetry typically requires the task of georeferencing aerial images by detecting the center of Ground Control Points (GCPs) placed in the field. Since this is a very labor-intensive task, it could benefit greatly from automation. In this study, we explore the extent to which traditional computer vision approaches can be generalized to deal with variability in real-world drone data sets and focus on training different residual neural networks (ResNet) to improve generalization.
Main Authors: | Gonzalo Muradás Odriozola, Klaas Pauly, Samuel Oswald, Dries Raymaekers |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2024-02-01 |
Series: | Remote Sensing |
Subjects: | drones; photogrammetry; ground control points; GCPs; RGB; computer vision |
Online Access: | https://www.mdpi.com/2072-4292/16/5/794 |
_version_ | 1797263986095292416 |
---|---|
author | Gonzalo Muradás Odriozola; Klaas Pauly; Samuel Oswald; Dries Raymaekers |
author_facet | Gonzalo Muradás Odriozola; Klaas Pauly; Samuel Oswald; Dries Raymaekers |
author_sort | Gonzalo Muradás Odriozola |
collection | DOAJ |
description | Drone-based photogrammetry typically requires the task of georeferencing aerial images by detecting the center of Ground Control Points (GCPs) placed in the field. Since this is a very labor-intensive task, it could benefit greatly from automation. In this study, we explore the extent to which traditional computer vision approaches can be generalized to deal with variability in real-world drone data sets and focus on training different residual neural networks (ResNet) to improve generalization. The models were trained to detect single keypoints of fixed-sized image tiles with a historic collection of drone-based Red–Green–Blue (RGB) images with black and white GCP markers in which the center was manually labeled by experienced photogrammetry operators. Different depths of ResNets and various hyperparameters (learning rate, batch size) were tested. The best results reached sub-pixel accuracy with a mean absolute error of 0.586. The paper demonstrates that this approach to drone-based mapping is a promising and effective way to reduce the human workload required for georeferencing aerial images. |
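The description outlines the core approach: a ResNet regresses the pixel coordinates of a single GCP keypoint from a fixed-size RGB tile, and accuracy is reported as a mean absolute error. The sketch below illustrates such a setup; it is not the authors' implementation, and the ResNet depth (18), tile size (256×256), loss, optimizer, learning rate, and batch size are illustrative assumptions rather than the paper's reported configuration.

```python
# Minimal sketch (not the authors' code): a ResNet backbone regressing the
# (x, y) center of a GCP marker from a fixed-size RGB tile, trained with an
# L1 loss so the training objective matches the reported MAE metric.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class GCPKeypointRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        # Any ResNet depth could be swapped in here (resnet34, resnet50, ...).
        self.backbone = resnet18(weights=None)
        # Replace the classification head with a 2-unit regression head (x, y).
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 2)

    def forward(self, x):
        return self.backbone(x)

model = GCPKeypointRegressor()
criterion = nn.L1Loss()  # mean absolute error in pixel units
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed learning rate

# One illustrative training step on a dummy batch of 256x256 tiles.
tiles = torch.rand(8, 3, 256, 256)          # assumed tile size and batch size
targets = torch.rand(8, 2) * 256            # labeled GCP centers in pixel coordinates
optimizer.zero_grad()
loss = criterion(model(tiles), targets)
loss.backward()
optimizer.step()
```

Swapping in deeper ResNet variants and varying the learning rate and batch size mirrors the kind of hyperparameter sweep the abstract describes.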
first_indexed | 2024-04-25T00:21:43Z |
format | Article |
id | doaj.art-fc0e4933e0b645a29f860a06b253c53b |
institution | Directory Open Access Journal |
issn | 2072-4292 |
language | English |
last_indexed | 2024-04-25T00:21:43Z |
publishDate | 2024-02-01 |
publisher | MDPI AG |
record_format | Article |
series | Remote Sensing |
spelling | doaj.art-fc0e4933e0b645a29f860a06b253c53b; 2024-03-12T16:54:02Z; eng; MDPI AG; Remote Sensing; 2072-4292; 2024-02-01; Vol. 16, Iss. 5, 794; 10.3390/rs16050794; Automating Ground Control Point Detection in Drone Imagery: From Computer Vision to Deep Learning; Gonzalo Muradás Odriozola (Image and Speech Processing (PSI), Department of Electrical Engineering (ESAT), KU Leuven, B-3000 Leuven, Belgium); Klaas Pauly (Remote Sensing Unit, Flemish Institute for Technological Research (VITO), Boeretang 200, B-2400 Mol, Belgium); Samuel Oswald (Remote Sensing Unit, Flemish Institute for Technological Research (VITO), Boeretang 200, B-2400 Mol, Belgium); Dries Raymaekers (Remote Sensing Unit, Flemish Institute for Technological Research (VITO), Boeretang 200, B-2400 Mol, Belgium); Drone-based photogrammetry typically requires the task of georeferencing aerial images by detecting the center of Ground Control Points (GCPs) placed in the field. Since this is a very labor-intensive task, it could benefit greatly from automation. In this study, we explore the extent to which traditional computer vision approaches can be generalized to deal with variability in real-world drone data sets and focus on training different residual neural networks (ResNet) to improve generalization. The models were trained to detect single keypoints of fixed-sized image tiles with a historic collection of drone-based Red–Green–Blue (RGB) images with black and white GCP markers in which the center was manually labeled by experienced photogrammetry operators. Different depths of ResNets and various hyperparameters (learning rate, batch size) were tested. The best results reached sub-pixel accuracy with a mean absolute error of 0.586. The paper demonstrates that this approach to drone-based mapping is a promising and effective way to reduce the human workload required for georeferencing aerial images. https://www.mdpi.com/2072-4292/16/5/794; drones; photogrammetry; ground control points; GCPs; RGB; computer vision |
spellingShingle | Gonzalo Muradás Odriozola; Klaas Pauly; Samuel Oswald; Dries Raymaekers; Automating Ground Control Point Detection in Drone Imagery: From Computer Vision to Deep Learning; Remote Sensing; drones; photogrammetry; ground control points; GCPs; RGB; computer vision |
title | Automating Ground Control Point Detection in Drone Imagery: From Computer Vision to Deep Learning |
title_full | Automating Ground Control Point Detection in Drone Imagery: From Computer Vision to Deep Learning |
title_fullStr | Automating Ground Control Point Detection in Drone Imagery: From Computer Vision to Deep Learning |
title_full_unstemmed | Automating Ground Control Point Detection in Drone Imagery: From Computer Vision to Deep Learning |
title_short | Automating Ground Control Point Detection in Drone Imagery: From Computer Vision to Deep Learning |
title_sort | automating ground control point detection in drone imagery from computer vision to deep learning |
topic | drones; photogrammetry; ground control points; GCPs; RGB; computer vision |
url | https://www.mdpi.com/2072-4292/16/5/794 |
work_keys_str_mv | AT gonzalomuradasodriozola automatinggroundcontrolpointdetectionindroneimageryfromcomputervisiontodeeplearning AT klaaspauly automatinggroundcontrolpointdetectionindroneimageryfromcomputervisiontodeeplearning AT samueloswald automatinggroundcontrolpointdetectionindroneimageryfromcomputervisiontodeeplearning AT driesraymaekers automatinggroundcontrolpointdetectionindroneimageryfromcomputervisiontodeeplearning |