End-to-End Learning for Visual Navigation of Forest Environments
Off-road navigation in forest environments is a challenging problem in field robotics. Rovers are required to infer their traversability over a priori unknown and dynamically changing forest terrain using noisy onboard navigation sensors. The problem is compounded for small-sized rovers, such as those of a swarm. Their size-proportional low viewpoint affords them a restricted view for navigation, which may be partially occluded by forest vegetation. Hand-crafted features, typically employed for terrain traversability analysis, are often brittle and may fail to discriminate obstacles in varying lighting and weather conditions. We design a low-cost navigation system tailored for small-sized forest rovers using self-learned features. The MobileNet-V1 and MobileNet-V2 models, trained following an end-to-end learning approach, are deployed to steer a mobile platform, with a human-in-the-loop, towards traversable paths while avoiding obstacles. Receiving a 128 × 96 pixel RGB image from a monocular camera as input, the algorithm, running on a Raspberry Pi 4, exhibited robustness to motion blur, low lighting, shadows, and high-contrast lighting conditions. It successfully navigated a total of over 3 km of real-world forest terrain comprising shrubs, dense bushes, tall grass, fallen branches, fallen tree trunks, and standing trees, in over five different weather conditions and at four different times of day.
Main Authors: | Chaoyue Niu, Klaus-Peter Zauner, Danesh Tarapore |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2023-01-01 |
Series: | Forests |
Subjects: | off-road visual navigation; end-to-end learning; multiclass classification; low-viewpoint forest navigation; low-cost sensors; small-sized rovers |
Online Access: | https://www.mdpi.com/1999-4907/14/2/268 |
_version_ | 1797620948526956544 |
author | Chaoyue Niu; Klaus-Peter Zauner; Danesh Tarapore
author_facet | Chaoyue Niu; Klaus-Peter Zauner; Danesh Tarapore
author_sort | Chaoyue Niu |
collection | DOAJ |
description | Off-road navigation in forest environments is a challenging problem in field robotics. Rovers are required to infer their traversability over a priori unknown and dynamically changing forest terrain using noisy onboard navigation sensors. The problem is compounded for small-sized rovers, such as those of a swarm. Their size-proportional low viewpoint affords them a restricted view for navigation, which may be partially occluded by forest vegetation. Hand-crafted features, typically employed for terrain traversability analysis, are often brittle and may fail to discriminate obstacles in varying lighting and weather conditions. We design a low-cost navigation system tailored for small-sized forest rovers using self-learned features. The MobileNet-V1 and MobileNet-V2 models, trained following an end-to-end learning approach, are deployed to steer a mobile platform, with a human-in-the-loop, towards traversable paths while avoiding obstacles. Receiving a 128 × 96 pixel RGB image from a monocular camera as input, the algorithm, running on a Raspberry Pi 4, exhibited robustness to motion blur, low lighting, shadows, and high-contrast lighting conditions. It successfully navigated a total of over 3 km of real-world forest terrain comprising shrubs, dense bushes, tall grass, fallen branches, fallen tree trunks, and standing trees, in over five different weather conditions and at four different times of day. |
first_indexed | 2024-03-11T08:48:47Z |
format | Article |
id | doaj.art-59c70e385e4b4bef9e3a29e3ce12d8ad |
institution | Directory Open Access Journal |
issn | 1999-4907 |
language | English |
last_indexed | 2024-03-11T08:48:47Z |
publishDate | 2023-01-01 |
publisher | MDPI AG |
record_format | Article |
series | Forests |
spelling | doaj.art-59c70e385e4b4bef9e3a29e3ce12d8ad | 2023-11-16T20:33:37Z | eng | MDPI AG | Forests | 1999-4907 | 2023-01-01 | vol. 14, iss. 2, art. 268 | 10.3390/f14020268 | End-to-End Learning for Visual Navigation of Forest Environments | Chaoyue Niu; Klaus-Peter Zauner; Danesh Tarapore (School of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, UK) | [abstract as in description field] | https://www.mdpi.com/1999-4907/14/2/268 | off-road visual navigation; end-to-end learning; multiclass classification; low-viewpoint forest navigation; low-cost sensors; small-sized rovers |
spellingShingle | Chaoyue Niu; Klaus-Peter Zauner; Danesh Tarapore | End-to-End Learning for Visual Navigation of Forest Environments | Forests | off-road visual navigation; end-to-end learning; multiclass classification; low-viewpoint forest navigation; low-cost sensors; small-sized rovers
title | End-to-End Learning for Visual Navigation of Forest Environments |
title_full | End-to-End Learning for Visual Navigation of Forest Environments |
title_fullStr | End-to-End Learning for Visual Navigation of Forest Environments |
title_full_unstemmed | End-to-End Learning for Visual Navigation of Forest Environments |
title_short | End-to-End Learning for Visual Navigation of Forest Environments |
title_sort | end to end learning for visual navigation of forest environments |
topic | off-road visual navigation; end-to-end learning; multiclass classification; low-viewpoint forest navigation; low-cost sensors; small-sized rovers
url | https://www.mdpi.com/1999-4907/14/2/268 |
work_keys_str_mv | AT chaoyueniu endtoendlearningforvisualnavigationofforestenvironments AT klauspeterzauner endtoendlearningforvisualnavigationofforestenvironments AT daneshtarapore endtoendlearningforvisualnavigationofforestenvironments |