Decentralized non-communicating multiagent collision avoidance with deep reinforcement learning
Finding feasible, collision-free paths for multiagent systems can be challenging, particularly in non-communicating scenarios where each agent's intent (e.g., goal) is unobservable to the others. In particular, finding time-efficient paths often requires anticipating interaction with neighboring agents, which can be computationally prohibitive.
Main Authors: Chen, Yu Fan; Liu, Miao; Everett, Michael F; How, Jonathan P
Other Authors: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
Format: Article
Published: Institute of Electrical and Electronics Engineers (IEEE), 2018
Online Access: http://hdl.handle.net/1721.1/114720 https://orcid.org/0000-0003-3756-3256 https://orcid.org/0000-0002-1648-8325 https://orcid.org/0000-0001-9377-6745 https://orcid.org/0000-0001-8576-1930
_version_ | 1826211882518708224 |
author | Chen, Yu Fan Liu, Miao Everett, Michael F How, Jonathan P |
author2 | Massachusetts Institute of Technology. Department of Aeronautics and Astronautics |
author_facet | Massachusetts Institute of Technology. Department of Aeronautics and Astronautics Chen, Yu Fan Liu, Miao Everett, Michael F How, Jonathan P |
author_sort | Chen, Yu Fan |
collection | MIT |
description | Finding feasible, collision-free paths for multiagent systems can be challenging, particularly in non-communicating scenarios where each agent's intent (e.g., goal) is unobservable to the others. In particular, finding time-efficient paths often requires anticipating interaction with neighboring agents, the process of which can be computationally prohibitive. This work presents a decentralized multiagent collision avoidance algorithm based on a novel application of deep reinforcement learning, which effectively offloads the online computation (for predicting interaction patterns) to an offline learning procedure. Specifically, the proposed approach develops a value network that encodes the estimated time to the goal given an agent's joint configuration (positions and velocities) with its neighbors. Use of the value network not only admits efficient (i.e., real-time implementable) queries for finding a collision-free velocity vector, but also considers the uncertainty in the other agents' motion. Simulation results show more than 26% improvement in path quality (i.e., time to reach the goal) when compared with optimal reciprocal collision avoidance (ORCA), a state-of-the-art collision avoidance strategy. |
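The value-network query described in the abstract can be sketched as a one-step lookahead: propagate each candidate velocity, discard those that would collide with a neighbor's predicted position, and keep the state the learned value function scores highest. The snippet below is a minimal illustration only; `toy_value` stands in for the trained network, and the parameters (`radius`, `dt`, the candidate velocity set) are assumptions, not the paper's values.

```python
import numpy as np

def choose_velocity(pos, goal, neighbors, value_net, candidates,
                    radius=0.1, dt=0.2):
    """One-step lookahead query of a value network (CADRL-style sketch):
    propagate each candidate velocity, filter out velocities that would
    collide with a neighbor's predicted next position, and return the
    candidate whose resulting state scores highest under value_net."""
    best_v, best_score = None, -np.inf
    for v in candidates:
        next_pos = pos + v * dt
        # Skip velocities that bring the agent within two radii of any
        # neighbor's predicted position (constant-velocity prediction).
        if any(np.linalg.norm(next_pos - (n_pos + n_vel * dt)) < 2 * radius
               for n_pos, n_vel in neighbors):
            continue
        score = value_net(next_pos, goal)
        if score > best_score:
            best_v, best_score = v, score
    return best_v

# Stand-in for the trained network: value grows as distance to the goal
# shrinks, mimicking "estimated time to goal" (hypothetical, not learned).
def toy_value(pos, goal):
    return -np.linalg.norm(goal - pos)

pos = np.array([0.0, 0.0])
goal = np.array([5.0, 0.0])
neighbors = [(np.array([0.3, 0.0]), np.array([-1.0, 0.0]))]  # oncoming agent
candidates = [np.array([1.0, 0.0]),   # straight at the goal (blocked)
              np.array([0.7, 0.7]),   # diagonal (still too close)
              np.array([0.0, 1.0])]   # sidestep
v = choose_velocity(pos, goal, neighbors, toy_value, candidates)
```

With the oncoming agent directly ahead, the two forward candidates are filtered out and the sidestep is selected; the actual algorithm draws the candidate set and collision model from the learned policy rather than a fixed list.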
first_indexed | 2024-09-23T15:12:56Z |
format | Article |
id | mit-1721.1/114720 |
institution | Massachusetts Institute of Technology |
last_indexed | 2024-09-23T15:12:56Z |
publishDate | 2018 |
publisher | Institute of Electrical and Electronics Engineers (IEEE) |
record_format | dspace |
spelling | mit-1721.1/1147202022-09-29T13:26:58Z Decentralized non-communicating multiagent collision avoidance with deep reinforcement learning Chen, Yu Fan Liu, Miao Everett, Michael F How, Jonathan P Massachusetts Institute of Technology. Department of Aeronautics and Astronautics Massachusetts Institute of Technology. Department of Mechanical Engineering Massachusetts Institute of Technology. Laboratory for Information and Decision Systems Chen, Yu Fan Liu, Miao Everett, Michael F How, Jonathan P Finding feasible, collision-free paths for multiagent systems can be challenging, particularly in non-communicating scenarios where each agent's intent (e.g. goal) is unobservable to the others. In particular, finding time efficient paths often requires anticipating interaction with neighboring agents, the process of which can be computationally prohibitive. This work presents a decentralized multiagent collision avoidance algorithm based on a novel application of deep reinforcement learning, which effectively offloads the online computation (for predicting interaction patterns) to an offline learning procedure. Specifically, the proposed approach develops a value network that encodes the estimated time to the goal given an agent's joint configuration (positions and velocities) with its neighbors. Use of the value network not only admits efficient (i.e., real-time implementable) queries for finding a collision-free velocity vector, but also considers the uncertainty in the other agents' motion. Simulation results show more than 26% improvement in paths quality (i.e., time to reach the goal) when compared with optimal reciprocal collision avoidance (ORCA), a state-of-the-art collision avoidance strategy. Ford Motor Company 2018-04-13T18:42:36Z 2018-04-13T18:42:36Z 2017-07 2018-03-21T16:59:10Z Article http://purl.org/eprint/type/ConferencePaper 978-1-5090-4633-1 978-1-5090-4634-8 http://hdl.handle.net/1721.1/114720 Chen, Yu Fan, Miao Liu, Michael Everett, and Jonathan P. 
How. “Decentralized Non-Communicating Multiagent Collision Avoidance with Deep Reinforcement Learning.” 2017 IEEE International Conference on Robotics and Automation (ICRA), May 2017, Singapore, Singapore, 2017. https://orcid.org/0000-0003-3756-3256 https://orcid.org/0000-0002-1648-8325 https://orcid.org/0000-0001-9377-6745 https://orcid.org/0000-0001-8576-1930 http://dx.doi.org/10.1109/ICRA.2017.7989037 2017 IEEE International Conference on Robotics and Automation (ICRA) Creative Commons Attribution-Noncommercial-Share Alike http://creativecommons.org/licenses/by-nc-sa/4.0/ application/pdf Institute of Electrical and Electronics Engineers (IEEE) arXiv |
spellingShingle | Chen, Yu Fan Liu, Miao Everett, Michael F How, Jonathan P Decentralized non-communicating multiagent collision avoidance with deep reinforcement learning |
title | Decentralized non-communicating multiagent collision avoidance with deep reinforcement learning |
title_full | Decentralized non-communicating multiagent collision avoidance with deep reinforcement learning |
title_fullStr | Decentralized non-communicating multiagent collision avoidance with deep reinforcement learning |
title_full_unstemmed | Decentralized non-communicating multiagent collision avoidance with deep reinforcement learning |
title_short | Decentralized non-communicating multiagent collision avoidance with deep reinforcement learning |
title_sort | decentralized non communicating multiagent collision avoidance with deep reinforcement learning |
url | http://hdl.handle.net/1721.1/114720 https://orcid.org/0000-0003-3756-3256 https://orcid.org/0000-0002-1648-8325 https://orcid.org/0000-0001-9377-6745 https://orcid.org/0000-0001-8576-1930 |
work_keys_str_mv | AT chenyufan decentralizednoncommunicatingmultiagentcollisionavoidancewithdeepreinforcementlearning AT liumiao decentralizednoncommunicatingmultiagentcollisionavoidancewithdeepreinforcementlearning AT everettmichaelf decentralizednoncommunicatingmultiagentcollisionavoidancewithdeepreinforcementlearning AT howjonathanp decentralizednoncommunicatingmultiagentcollisionavoidancewithdeepreinforcementlearning |