Learn to steer through deep reinforcement learning

It is crucial for robots to steer autonomously and safely through complex environments without colliding with obstacles. Compared to conventional methods, deep reinforcement learning-based methods can learn from past experience automatically and generalize better to unseen circumstances. We therefore propose an end-to-end deep reinforcement learning algorithm to improve the performance of autonomous steering in complex environments. By embedding a branching noisy dueling architecture, the proposed model derives steering commands directly from raw depth images with high efficiency. Specifically, the approach extracts a feature representation from depth inputs through convolutional neural networks and maps it simultaneously to linear and angular velocity commands through separate streams of the network. The training framework is also designed to improve learning efficiency and effectiveness. Notably, because it relies on depth images, the system transfers from virtual training scenarios to real-world deployment without any fine-tuning. The proposed method is evaluated against a series of baseline methods in various virtual environments, and the results demonstrate its superiority in terms of average reward, learning efficiency, success rate, and computational time. Real-world experiments further reveal the model's high adaptability to both static and dynamic obstacle-cluttered environments.
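The branching dueling idea described in the abstract — one shared state value combined with a separate advantage stream per action branch (linear velocity, angular velocity) — can be sketched in a few lines. This is an illustrative simplification, not the paper's implementation: the actual model uses convolutional feature extraction from depth images and noisy linear layers, both omitted here, and all dimensions and weight names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def dueling_branch_q(features, w_v, w_adv_list):
    """Combine a shared state-value stream with per-branch advantage
    streams: Q_b = V + A_b - mean(A_b) for each action branch b.

    features   : (d,) shared feature vector (stand-in for a CNN output)
    w_v        : (d,) weights of the state-value stream -> scalar V
    w_adv_list : list of (d, n_b) weight matrices, one per branch
    """
    v = features @ w_v                   # scalar state value, shared
    qs = []
    for w_adv in w_adv_list:
        adv = features @ w_adv           # advantages for this branch
        qs.append(v + adv - adv.mean())  # dueling aggregation
    return qs

# Hypothetical sizes: 32-d features, 5 linear- and 7 angular-velocity bins.
d = 32
feat = rng.standard_normal(d)
w_v = rng.standard_normal(d)
w_lin = rng.standard_normal((d, 5))
w_ang = rng.standard_normal((d, 7))

q_lin, q_ang = dueling_branch_q(feat, w_v, [w_lin, w_ang])
# Greedy action: one discrete choice per branch, issued simultaneously.
action = (int(q_lin.argmax()), int(q_ang.argmax()))
```

Subtracting each branch's mean advantage makes the value/advantage split identifiable and lets both command branches share one state-value estimate, which is what allows the network to output linear and angular velocities at the same time from a single forward pass.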

Bibliographic Details
Main Authors: Wu, Keyu, Esfahani, Mahdi Abolfazli, Yuan, Shenghai, Wang, Han
Other Authors: School of Electrical and Electronic Engineering
Format: Journal Article
Language: English
Published: 2019
Subjects: Autonomous Steering; Deep Reinforcement Learning; DRNTU::Engineering::Electrical and electronic engineering
Online Access:https://hdl.handle.net/10356/103342
http://hdl.handle.net/10220/47293
Citation: Wu, K., Esfahani, M. A., Yuan, S., & Wang, H. (2018). Learn to Steer through Deep Reinforcement Learning. Sensors, 18(11), 3650. doi:10.3390/s18113650
ISSN: 1424-8220
License: © 2018 by the authors. Licensee MDPI, Basel, Switzerland. Open access under the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).