Optimal Tasking of Ground-Based Sensors for Space Situational Awareness Using Deep Reinforcement Learning

Bibliographic Details
Main Authors: Siew, Peng Mun, Linares, Richard
Other Authors: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
Format: Article
Published: Multidisciplinary Digital Publishing Institute 2022
Online Access: https://hdl.handle.net/1721.1/145993
author Siew, Peng Mun
Linares, Richard
author2 Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
collection MIT
description Space situational awareness (SSA) is becoming increasingly challenging with the proliferation of resident space objects (RSOs), ranging from CubeSats to mega-constellations. Sensors within the United States Space Surveillance Network are tasked to repeatedly detect, characterize, and track these RSOs to retain custody and estimate their attitude. Most of these sensors are ground-based, have a narrow field of view, and must be slewed at a finite rate from one RSO to another during observations. This results in a complex combinatorial optimization problem that poses a major obstacle to SSA sensor tasking. In this work, we successfully applied deep reinforcement learning (DRL) to overcome the curse of dimensionality and optimally task a ground-based sensor. We trained several DRL agents using proximal policy optimization and population-based training in a simulated SSA environment. The DRL agents outperformed myopic policies on both objective metrics: the RSOs’ state uncertainties and the number of unique RSOs observed over a 90-min observation window. The agents’ robustness to changes in RSO orbital regime, observation window length, observer location, and sensor properties is also examined. This robustness allows the DRL agents to be applied to arbitrary locations and scenarios.
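The description above frames sensor tasking as a sequential decision problem: at each step the agent picks which RSO to observe next and is rewarded for reducing the catalog's state uncertainty and for observing RSOs it has not yet visited within the 90-min window. The sketch below is a minimal, hypothetical rendering of such an environment using the Gymnasium interface; the class name SensorTaskingEnv, the placeholder covariance dynamics, and the reward weights are illustrative assumptions, not the simulation, reward function, or code used in the article.

# Hypothetical sketch of a single-sensor SSA tasking environment (not the article's code).
# Assumptions: a fixed catalog of RSOs, each summarized by the trace of its state
# covariance; observing an RSO shrinks its uncertainty, while unobserved RSOs drift
# and their uncertainty grows between tasking decisions.
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class SensorTaskingEnv(gym.Env):
    """Toy ground-based sensor tasking over a 90-minute observation window."""

    def __init__(self, n_rsos: int = 100, horizon: int = 180):
        self.n_rsos = n_rsos
        self.horizon = horizon  # tasking decisions per window (e.g., one every 30 s)
        # Action: index of the RSO to slew to and observe next.
        self.action_space = spaces.Discrete(n_rsos)
        # Observation: per-RSO covariance trace plus an "already observed" flag.
        self.observation_space = spaces.Box(
            low=0.0, high=np.inf, shape=(2 * n_rsos,), dtype=np.float32
        )

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.cov_trace = self.np_random.uniform(1.0, 10.0, size=self.n_rsos)
        self.visited = np.zeros(self.n_rsos, dtype=np.float32)
        return self._obs(), {}

    def step(self, action):
        # Placeholder dynamics: every RSO's uncertainty grows between observations,
        # and a measurement update halves the tasked RSO's covariance trace.
        self.cov_trace *= 1.02
        first_visit = self.visited[action] == 0.0
        self.cov_trace[action] *= 0.5
        self.visited[action] = 1.0
        # Reward mixes the two objectives named in the abstract: total state
        # uncertainty and the number of unique RSOs observed (weights are assumed).
        reward = -1e-3 * float(self.cov_trace.sum()) + (1.0 if first_visit else 0.0)
        self.t += 1
        terminated = self.t >= self.horizon
        return self._obs(), reward, terminated, False, {}

    def _obs(self):
        return np.concatenate([self.cov_trace, self.visited]).astype(np.float32)

For comparison, a myopic baseline would simply select np.argmax(env.cov_trace) at each step; a DRL agent trained on an environment of this kind (e.g., with proximal policy optimization) can instead learn to trade off immediate uncertainty reduction against covering more unique RSOs over the full window, which is the behavior the abstract reports.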
format Article
id mit-1721.1/145993
institution Massachusetts Institute of Technology
publishDate 2022
publisher Multidisciplinary Digital Publishing Institute
record_format dspace
spelling mit-1721.1/145993 2023-06-30T15:49:41Z 2022-10-26T17:26:23Z 2022-10-26T11:07:53Z
date_issued 2022-10-16
type Article (http://purl.org/eprint/type/JournalArticle)
citation Sensors 22 (20): 7847 (2022)
doi http://dx.doi.org/10.3390/s22207847
rights Creative Commons Attribution (PUBLISHER_CC), https://creativecommons.org/licenses/by/4.0/
mimetype application/pdf
title Optimal Tasking of Ground-Based Sensors for Space Situational Awareness Using Deep Reinforcement Learning
url https://hdl.handle.net/1721.1/145993