Acquisition of Inducing Policy in Collaborative Robot Navigation Based on Multiagent Deep Reinforcement Learning
To avoid inefficient movement or the freezing problem in crowded environments, we previously proposed a human-aware interactive navigation method that uses inducement, i.e., voice reminders or physical touch. However, the use of inducement largely depends on many factors, including human attributes,...
Main Authors: | Mitsuhiro Kamezaki, Ryan Ong, Shigeki Sugano |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2023-01-01 |
Series: | IEEE Access |
Subjects: | Autonomous mobile robot; multiagent deep reinforcement learning; inducing policy acquisition; collaborative robot navigation |
Online Access: | https://ieeexplore.ieee.org/document/10061379/ |
_version_ | 1797870695431012352 |
---|---|
author | Mitsuhiro Kamezaki Ryan Ong Shigeki Sugano |
author_facet | Mitsuhiro Kamezaki Ryan Ong Shigeki Sugano |
author_sort | Mitsuhiro Kamezaki |
collection | DOAJ |
description | To avoid inefficient movement or the freezing problem in crowded environments, we previously proposed a human-aware interactive navigation method that uses inducement, i.e., voice reminders or physical touch. However, the use of inducement depends on many factors, including human attributes, task contents, and environmental contexts. Thus, it is unrealistic to pre-design a set of parameters, such as the coefficients in the cost function, personal space, and velocity, for every situation. To understand and evaluate whether inducement (a voice reminder in this study) is effective, and how and when it should be used, we propose to learn these through multiagent deep reinforcement learning, in which the robot voluntarily acquires an inducing policy suited to the situation. Specifically, we evaluate whether a voice reminder can shorten the time to reach the goal when the robot learns when to use it. Results of simulation experiments with four different situations show that the robot learned inducing policies suited to each situation, and that the effectiveness of inducement increases greatly in more congested and narrow situations. |
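The core idea in the description — treating "issue a voice reminder or not" as an action whose value is learned from a time-to-goal objective — can be illustrated with a toy sketch. This is not the authors' implementation (the paper uses multiagent deep reinforcement learning in a crowd simulation); here a single tabular Q-learner on a hypothetical one-dimensional corridor, with a pedestrian who yields only after a reminder, shows how an agent can learn *when* inducement pays off. The environment, state encoding, and all constants are illustrative assumptions.

```python
import random

N = 8          # corridor length; goal at cell N-1 (toy assumption)
PED = 4        # pedestrian blocks this cell until reminded

def step(pos, yielded, action):
    """One environment step. action 0 = move forward, 1 = voice reminder."""
    if action == 1:                      # reminder: pedestrian yields for the episode
        yielded = True
    else:
        nxt = pos + 1
        if nxt != PED or yielded:        # blocked only by a non-yielding pedestrian
            pos = min(nxt, N - 1)
    done = pos == N - 1
    return pos, yielded, -1.0, done     # -1 per step, so shorter time-to-goal is better

random.seed(0)
Q = {(p, y): [0.0, 0.0] for p in range(N) for y in (False, True)}
alpha, gamma, eps = 0.5, 0.95, 0.2

for _ in range(2000):                   # tabular Q-learning with epsilon-greedy exploration
    pos, yielded, done, t = 0, False, False, 0
    while not done and t < 50:
        s = (pos, yielded)
        a = random.randrange(2) if random.random() < eps else Q[s].index(max(Q[s]))
        pos, yielded, r, done = step(pos, yielded, a)
        target = r + (0.0 if done else gamma * max(Q[(pos, yielded)]))
        Q[s][a] += alpha * (target - Q[s][a])
        t += 1

# Greedy rollout: the learned policy should issue the reminder once (to clear
# the blocked cell) and otherwise move forward, minimizing time to the goal.
pos, yielded, done, steps, reminders = 0, False, False, 0, 0
while not done and steps < 20:
    a = Q[(pos, yielded)].index(max(Q[(pos, yielded)]))
    reminders += a == 1
    pos, yielded, _, done = step(pos, yielded, a)
    steps += 1
```

In this toy setting the learned policy reaches the goal with a single reminder; the paper's contribution is analogous behavior emerging in far richer multiagent crowd situations.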
first_indexed | 2024-04-10T00:32:35Z |
format | Article |
id | doaj.art-e7048566e2584b4baa33125f68118b32 |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-04-10T00:32:35Z |
publishDate | 2023-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-e7048566e2584b4baa33125f68118b32; indexed 2023-03-14T23:00:23Z; eng; IEEE; IEEE Access; ISSN 2169-3536; published 2023-01-01; vol. 11, pp. 23946–23955; DOI 10.1109/ACCESS.2023.3253513; IEEE document 10061379; "Acquisition of Inducing Policy in Collaborative Robot Navigation Based on Multiagent Deep Reinforcement Learning"; Mitsuhiro Kamezaki (ORCID 0000-0002-4377-8993), Waseda Research Institute for Science and Engineering, Waseda University, Tokyo, Shinjuku-ku, Japan; Ryan Ong, Department of Modern Mechanical Engineering, Waseda University, Tokyo, Shinjuku-ku, Japan; Shigeki Sugano (ORCID 0000-0002-9331-2446), Waseda Research Institute for Science and Engineering, Waseda University, Tokyo, Shinjuku-ku, Japan; abstract as in the description field; https://ieeexplore.ieee.org/document/10061379/; keywords: Autonomous mobile robot; multiagent deep reinforcement learning; inducing policy acquisition; collaborative robot navigation |
spellingShingle | Mitsuhiro Kamezaki Ryan Ong Shigeki Sugano Acquisition of Inducing Policy in Collaborative Robot Navigation Based on Multiagent Deep Reinforcement Learning IEEE Access Autonomous mobile robot multiagent deep reinforcement learning inducing policy acquisition collaborative robot navigation |
title | Acquisition of Inducing Policy in Collaborative Robot Navigation Based on Multiagent Deep Reinforcement Learning |
title_full | Acquisition of Inducing Policy in Collaborative Robot Navigation Based on Multiagent Deep Reinforcement Learning |
title_fullStr | Acquisition of Inducing Policy in Collaborative Robot Navigation Based on Multiagent Deep Reinforcement Learning |
title_full_unstemmed | Acquisition of Inducing Policy in Collaborative Robot Navigation Based on Multiagent Deep Reinforcement Learning |
title_short | Acquisition of Inducing Policy in Collaborative Robot Navigation Based on Multiagent Deep Reinforcement Learning |
title_sort | acquisition of inducing policy in collaborative robot navigation based on multiagent deep reinforcement learning |
topic | Autonomous mobile robot multiagent deep reinforcement learning inducing policy acquisition collaborative robot navigation |
url | https://ieeexplore.ieee.org/document/10061379/ |
work_keys_str_mv | AT mitsuhirokamezaki acquisitionofinducingpolicyincollaborativerobotnavigationbasedonmultiagentdeepreinforcementlearning AT ryanong acquisitionofinducingpolicyincollaborativerobotnavigationbasedonmultiagentdeepreinforcementlearning AT shigekisugano acquisitionofinducingpolicyincollaborativerobotnavigationbasedonmultiagentdeepreinforcementlearning |