Beyond Measurement: Extracting Vegetation Height from High Resolution Imagery with Deep Learning


Bibliographic Details
Main Authors: David Radke, Daniel Radke, John Radke
Format: Article
Language: English
Published: MDPI AG, 2020-11-01
Series: Remote Sensing
Subjects:
Online Access: https://www.mdpi.com/2072-4292/12/22/3797
Description
Summary: Measuring and monitoring the height of vegetation provides important insights into forest age and habitat quality, which are essential for applications that rely on up-to-date and accurate vegetation data. Current vegetation sensing practices involve ground surveys, photogrammetry, synthetic aperture radar (SAR), and airborne light detection and ranging (LiDAR) sensors. While these methods provide high resolution and accuracy, their hardware and collection effort prohibit highly recurrent and widespread collection. In response to the limitations of current methods, we designed Y-NET, a novel deep learning model that generates high resolution models of vegetation from highly recurrent multispectral aerial imagery and elevation data. Y-NET's architecture uses convolutional layers to learn correlations between different input features and vegetation height, generating an accurate vegetation surface model (VSM) at 1 × 1 m resolution. We evaluated Y-NET on 235 km² of the East San Francisco Bay Area and found that it achieves low error relative to LiDAR when tested on new locations. Y-NET also achieves an R² of 0.83 and, in side-by-side visual comparisons, effectively models complex vegetation. Furthermore, we show that Y-NET can identify instances of vegetation growth and mitigation by comparing aerial imagery and LiDAR collected at different times.
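The R² (coefficient of determination) figure quoted in the abstract measures how well the model's predicted heights track the LiDAR reference. As a minimal sketch of that metric, the following computes R² over paired per-pixel heights; the sample values are invented for illustration and are not from the paper's data.

```python
def r_squared(predicted, observed):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for p, o in zip(predicted, observed))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Hypothetical per-pixel vegetation heights in metres (not the paper's data):
lidar_heights = [2.0, 5.5, 10.0, 3.2, 7.8]   # LiDAR reference
ynet_heights  = [2.3, 5.0, 9.5, 3.0, 8.1]    # model predictions
print(round(r_squared(ynet_heights, lidar_heights), 3))
```

An R² of 1.0 would mean the predictions match the LiDAR reference exactly; the paper's reported 0.83 indicates the model explains most, but not all, of the variance in observed vegetation height.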
ISSN:2072-4292