Utilizing Human Feedback in Autonomous Driving: Discrete vs. Continuous

Deep reinforcement learning (Deep RL) algorithms are defined over either fully continuous or fully discrete action spaces. Among Deep RL algorithms, soft actor–critic (SAC) is a powerful method capable of handling complex, continuous state–action spaces. However, long training times and poor data efficiency are the main drawbacks of SAC, even though it is robust in complex and dynamic environments. One proposed solution to this issue is to utilize human feedback. In this paper, we investigate different forms of human feedback: head direction vs. steering, and discrete vs. continuous feedback. To this end, real-time human demonstrations from the steering wheel and from head direction, with either discrete or continuous actions, were employed as human feedback in an autonomous driving task in the CARLA simulator. Control alternated between the human expert and SAC to obtain real-time demonstrations. Furthermore, to test the method without potential individual differences in human performance, we compared discrete and continuous feedback in an inverted pendulum task, with an ideal controller standing in for the human expert. In both the CARLA and inverted pendulum tasks, discrete feedback produced a significant reduction in training time and a significant increase in gained rewards compared with continuous feedback, while the action space remained continuous. Head direction feedback was also shown to be nearly as good as steering feedback. We expect our findings to provide a simple yet efficient training method for Deep RL in autonomous driving that utilizes multiple sources of human feedback.
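
The method described in the abstract lends itself to a short illustration. The sketch below shows, in a generic gym-style loop, the two ideas highlighted above: control alternating between a human expert and the SAC agent, and the expert's continuous steering feedback being snapped to discrete levels while the agent's own action space stays continuous. This is a minimal sketch, not the authors' implementation; the environment, the `sac_agent` object, the `expert_policy` callable, and the steering bins are assumed placeholders.

```python
import numpy as np

# Hypothetical stand-ins for illustration only: any gym-style env whose action is a
# continuous steering value in [-1, 1], and any SAC implementation exposing
# select_action() and a replay buffer. Neither object is from the paper.

STEER_BINS = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # assumed discrete feedback levels


def discretize(steer: float) -> float:
    """Snap a continuous expert steering value to the nearest discrete bin.

    The agent's own action space stays continuous; only the feedback recorded
    from the expert is discretized (the discrete-feedback condition).
    """
    return float(STEER_BINS[np.argmin(np.abs(STEER_BINS - steer))])


def run_episode(env, sac_agent, expert_policy, discrete_feedback=True):
    """One episode in which control alternates between the expert and SAC.

    Even steps are taken by the expert (a demonstration), odd steps by SAC.
    The exact alternation schedule used in the paper may differ; this simple
    step-by-step switch is only meant to illustrate the idea.
    """
    obs = env.reset()
    done, step, total_reward = False, 0, 0.0
    while not done:
        if step % 2 == 0:                          # expert's turn
            action = expert_policy(obs)            # e.g., steering wheel or head direction
            if discrete_feedback:
                action = discretize(action)
        else:                                      # SAC's turn
            action = sac_agent.select_action(obs)

        next_obs, reward, done, _ = env.step(action)
        # Both expert and agent transitions go into the replay buffer, so SAC
        # learns from demonstrations as well as from its own experience.
        sac_agent.replay_buffer.add(obs, action, reward, next_obs, done)
        obs, total_reward, step = next_obs, total_reward + reward, step + 1
    return total_reward
```

In the head-direction condition described in the abstract, `expert_policy` would read the driver's head direction instead of the steering wheel; the rest of the loop is unchanged.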

Bibliographic Details
Main Authors: Maryam Savari, Yoonsuck Choe (Department of Computer Science & Engineering, Texas A&M University, College Station, TX, USA)
Format: Article
Language: English
Published: MDPI AG, 2022-07-01
Series: Machines
Citation: Machines 2022, 10(8), 609
ISSN: 2075-1702
DOI: 10.3390/machines10080609
Subjects: deep reinforcement learning; soft actor–critic; continuous actions; discrete action feedback; learning from demonstrations; learning from interventions
Online Access: https://www.mdpi.com/2075-1702/10/8/609
Collection: Directory of Open Access Journals (DOAJ)