Robust Control and Learning for Autonomous Spacecraft Proximity Operations with Uncertainty


Bibliographic Details
Main Author: Oestreich, Charles E.
Other Authors: Linares, Richard
Format: Thesis
Published: Massachusetts Institute of Technology 2022
Online Access: https://hdl.handle.net/1721.1/139042
_version_ 1826193592234803200
author Oestreich, Charles E.
author2 Linares, Richard
author_facet Linares, Richard
Oestreich, Charles E.
author_sort Oestreich, Charles E.
collection MIT
description As the number of spacecraft and debris objects in orbit rapidly increases, active debris removal and satellite servicing efforts are becoming critical to maintaining a safe and usable orbital environment. At the same time, future unmanned solar system exploration missions are targeting challenging destinations for scientific data collection. For practical realization of these technologies, the involved spacecraft must be highly autonomous and able to perform complex proximity operations maneuvers in a safe manner. This requires that the guidance and control system reliably address inevitable sources of uncertainty while performing the maneuvers. This thesis seeks to improve the flexibility and performance of autonomous spacecraft in uncertain scenarios by leveraging robust control theory and reinforcement learning. A novel algorithm, termed online tube-based model predictive control, is proposed and applied to a simulated mission involving the intercept of a tumbling target with unknown inertial properties. This algorithm demonstrates superior performance and exhibits less reliance on initial knowledge of the uncertainty when compared to standard robust control methods. Separately, reinforcement learning is utilized to develop a policy (to be employed as a feedback control law) for six-degree-of-freedom docking with a rotating target. The policy provides near-optimal performance in a simulated Apollo transposition and docking maneuver with uncertainty in the initial conditions. Both of these methods enhance the level of autonomy in their respective scenarios while also maintaining practical computational run-times. As such, this thesis represents an incremental step towards making missions based on highly autonomous proximity operations a reality.
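The core idea behind the tube-based control named in the abstract can be illustrated with a minimal sketch: a nominal controller steers a disturbance-free model, while a separate ancillary feedback law keeps the true, disturbed state inside a bounded "tube" around the nominal trajectory. Everything below is an illustrative assumption for a 1-D double integrator (the dynamics, the gains `K_nom` and `K_anc`, and the disturbance bound), not the thesis's actual formulation, which uses model predictive control and spacecraft relative dynamics.

```python
import random

dt = 0.1
K_nom = (0.8, 1.2)   # hypothetical gains driving the nominal model to the origin
K_anc = (4.0, 3.0)   # hypothetical ancillary gains bounding the tracking error

def step(state, u, w=0.0):
    """Euler step of a double integrator: control u, additive disturbance w."""
    x, v = state
    return (x + v * dt, v + (u + w) * dt)

random.seed(0)
nominal = (5.0, 0.0)   # nominal (disturbance-free) model state
true = (5.0, 0.0)      # true state, subject to a bounded disturbance
max_err = 0.0
for _ in range(400):
    # Nominal control acts on the disturbance-free model only.
    u_nom = -K_nom[0] * nominal[0] - K_nom[1] * nominal[1]
    # Ancillary feedback acts on the deviation from the nominal trajectory.
    e = (true[0] - nominal[0], true[1] - nominal[1])
    u = u_nom - K_anc[0] * e[0] - K_anc[1] * e[1]
    nominal = step(nominal, u_nom)
    true = step(true, u, w=random.uniform(-0.5, 0.5))
    max_err = max(max_err, abs(true[0] - nominal[0]))

# The deviation stays bounded despite the persistent disturbance: that bound
# is the tube width, inside which the true trajectory is guaranteed to remain.
print(f"max deviation from nominal: {max_err:.3f}")
```

The separation of nominal planning from ancillary error feedback is what lets tube-based schemes give robustness guarantees; the "online" variant proposed in the thesis additionally adapts its uncertainty description during the maneuver rather than fixing it a priori.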
first_indexed 2024-09-23T09:41:33Z
format Thesis
id mit-1721.1/139042
institution Massachusetts Institute of Technology
last_indexed 2024-09-23T09:41:33Z
publishDate 2022
publisher Massachusetts Institute of Technology
record_format dspace
spelling mit-1721.1/139042 2022-01-15T03:58:43Z Robust Control and Learning for Autonomous Spacecraft Proximity Operations with Uncertainty Oestreich, Charles E. Linares, Richard Gondhalekar, Ravi Massachusetts Institute of Technology. Department of Aeronautics and Astronautics As the number of spacecraft and debris objects in orbit rapidly increases, active debris removal and satellite servicing efforts are becoming critical to maintaining a safe and usable orbital environment. At the same time, future unmanned solar system exploration missions are targeting challenging destinations for scientific data collection. For practical realization of these technologies, the involved spacecraft must be highly autonomous and able to perform complex proximity operations maneuvers in a safe manner. This requires that the guidance and control system reliably address inevitable sources of uncertainty while performing the maneuvers. This thesis seeks to improve the flexibility and performance of autonomous spacecraft in uncertain scenarios by leveraging robust control theory and reinforcement learning. A novel algorithm, termed online tube-based model predictive control, is proposed and applied to a simulated mission involving the intercept of a tumbling target with unknown inertial properties. This algorithm demonstrates superior performance and exhibits less reliance on initial knowledge of the uncertainty when compared to standard robust control methods. Separately, reinforcement learning is utilized to develop a policy (to be employed as a feedback control law) for six-degree-of-freedom docking with a rotating target. The policy provides near-optimal performance in a simulated Apollo transposition and docking maneuver with uncertainty in the initial conditions. Both of these methods enhance the level of autonomy in their respective scenarios while also maintaining practical computational run-times. 
As such, this thesis represents an incremental step towards making missions based on highly autonomous proximity operations a reality. S.M. 2022-01-14T14:46:23Z 2022-01-14T14:46:23Z 2021-06 2021-06-16T13:26:57.083Z Thesis https://hdl.handle.net/1721.1/139042 0000-0003-2896-3100 In Copyright - Educational Use Permitted Copyright MIT http://rightsstatements.org/page/InC-EDU/1.0/ application/pdf Massachusetts Institute of Technology
spellingShingle Oestreich, Charles E.
Robust Control and Learning for Autonomous Spacecraft Proximity Operations with Uncertainty
title Robust Control and Learning for Autonomous Spacecraft Proximity Operations with Uncertainty
title_full Robust Control and Learning for Autonomous Spacecraft Proximity Operations with Uncertainty
title_fullStr Robust Control and Learning for Autonomous Spacecraft Proximity Operations with Uncertainty
title_full_unstemmed Robust Control and Learning for Autonomous Spacecraft Proximity Operations with Uncertainty
title_short Robust Control and Learning for Autonomous Spacecraft Proximity Operations with Uncertainty
title_sort robust control and learning for autonomous spacecraft proximity operations with uncertainty
url https://hdl.handle.net/1721.1/139042
work_keys_str_mv AT oestreichcharlese robustcontrolandlearningforautonomousspacecraftproximityoperationswithuncertainty