Space Manipulator Collision Avoidance Using a Deep Reinforcement Learning Control
Recent efforts in on-orbit servicing, manufacturing, and debris removal have accentuated some of the challenges related to close-proximity space manipulation. Orbital debris threatens future space endeavors, driving active removal missions. Additionally, refueling missions have become increasingly viable to prolong satellite life and mitigate future debris generation.
Main Authors: | James Blaise, Michael C. F. Bazzocchi |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2023-08-01 |
Series: | Aerospace |
Subjects: | space manipulator; deep reinforcement learning; trajectory planning; collision avoidance; free-floating manipulator; free-flying manipulator |
Online Access: | https://www.mdpi.com/2226-4310/10/9/778 |
_version_ | 1797581813279883264 |
---|---|
author | James Blaise; Michael C. F. Bazzocchi |
author_facet | James Blaise; Michael C. F. Bazzocchi |
author_sort | James Blaise |
collection | DOAJ |
description | Recent efforts in on-orbit servicing, manufacturing, and debris removal have accentuated some of the challenges related to close-proximity space manipulation. Orbital debris threatens future space endeavors, driving active removal missions. Additionally, refueling missions have become increasingly viable to prolong satellite life and mitigate future debris generation. The ability to capture cooperative and non-cooperative spacecraft is an essential step for refueling or removal missions. In close-proximity capture, collision avoidance remains a challenge during trajectory planning for space manipulators. In this research, a deep reinforcement learning control approach is applied to a three-degrees-of-freedom manipulator to capture space objects and avoid collisions. This approach is investigated in both free-flying and free-floating scenarios, where the target object is either cooperative or non-cooperative. A deep reinforcement learning controller is trained for each scenario to effectively reach a target capture location on a simulated spacecraft model while avoiding collisions. Collisions between the base spacecraft and the target spacecraft are avoided in the planned manipulator trajectories. The trained model is tested for each scenario, and the results for the manipulator and base motion are detailed and discussed. |
first_indexed | 2024-03-10T23:09:58Z |
format | Article |
id | doaj.art-e3319716a36b41f29e95932ef456ef6f |
institution | Directory Open Access Journal |
issn | 2226-4310 |
language | English |
last_indexed | 2024-03-10T23:09:58Z |
publishDate | 2023-08-01 |
publisher | MDPI AG |
record_format | Article |
series | Aerospace |
spelling | doaj.art-e3319716a36b41f29e95932ef456ef6f2023-11-19T09:04:43ZengMDPI AGAerospace2226-43102023-08-0110977810.3390/aerospace10090778Space Manipulator Collision Avoidance Using a Deep Reinforcement Learning ControlJames Blaise0Michael C. F. Bazzocchi1Department of Mechanical and Aerospace Engineering, Clarkson University, Potsdam, NY 13699, USADepartment of Mechanical and Aerospace Engineering, Clarkson University, Potsdam, NY 13699, USARecent efforts in on-orbit servicing, manufacturing, and debris removal have accentuated some of the challenges related to close-proximity space manipulation. Orbital debris threatens future space endeavors driving active removal missions. Additionally, refueling missions have become increasingly viable to prolong satellite life and mitigate future debris generation. The ability to capture cooperative and non-cooperative spacecraft is an essential step for refueling or removal missions. In close-proximity capture, collision avoidance remains a challenge during trajectory planning for space manipulators. In this research, a deep reinforcement learning control approach is applied to a three-degrees-of-freedom manipulator to capture space objects and avoid collisions. This approach is investigated in both free-flying and free-floating scenarios, where the target object is either cooperative or non-cooperative. A deep reinforcement learning controller is trained for each scenario to effectively reach a target capture location on a simulated spacecraft model while avoiding collisions. Collisions between the base spacecraft and the target spacecraft are avoided in the planned manipulator trajectories. The trained model is tested for each scenario and the results for the manipulator and base motion are detailed and discussed.https://www.mdpi.com/2226-4310/10/9/778space manipulatordeep reinforcement learningtrajectory planningcollision avoidancefree-floating manipulatorfree-flying manipulator |
spellingShingle | James Blaise Michael C. F. Bazzocchi Space Manipulator Collision Avoidance Using a Deep Reinforcement Learning Control Aerospace space manipulator deep reinforcement learning trajectory planning collision avoidance free-floating manipulator free-flying manipulator |
title | Space Manipulator Collision Avoidance Using a Deep Reinforcement Learning Control |
title_full | Space Manipulator Collision Avoidance Using a Deep Reinforcement Learning Control |
title_fullStr | Space Manipulator Collision Avoidance Using a Deep Reinforcement Learning Control |
title_full_unstemmed | Space Manipulator Collision Avoidance Using a Deep Reinforcement Learning Control |
title_short | Space Manipulator Collision Avoidance Using a Deep Reinforcement Learning Control |
title_sort | space manipulator collision avoidance using a deep reinforcement learning control |
topic | space manipulator deep reinforcement learning trajectory planning collision avoidance free-floating manipulator free-flying manipulator |
url | https://www.mdpi.com/2226-4310/10/9/778 |
work_keys_str_mv | AT jamesblaise spacemanipulatorcollisionavoidanceusingadeepreinforcementlearningcontrol AT michaelcfbazzocchi spacemanipulatorcollisionavoidanceusingadeepreinforcementlearningcontrol |