Space-Based Sensor Tasking Using Deep Reinforcement Learning

Abstract: To maintain a robust catalog of resident space objects (RSOs), space situational awareness (SSA) mission operators depend on ground- and space-based sensors to repeatedly detect, characterize, and track objects in orbit. Although some space sensors are capable of monitoring large swaths of the sky with wide fields of view (FOVs), others, such as maneuverable optical telescopes, narrow-band imaging radars, or satellite laser-ranging systems, are restricted to relatively narrow FOVs and must slew at a finite rate from object to object during observation. Since there are many objects that a narrow-FOV sensor could choose to observe within its field of regard (FOR), it must schedule its pointing direction and duration using some algorithm. This combinatorial optimization problem is known as the sensor-tasking problem. In this paper, we develop a deep reinforcement learning agent that tasks a space-based narrow-FOV sensor in low Earth orbit (LEO) using the proximal policy optimization algorithm. The sensor's performance, both as a single sensor acting alone and as a complement to a network of taskable, narrow-FOV ground-based sensors, is compared to that of a greedy scheduler across several figures of merit, including the cumulative number of RSOs observed and the mean trace of the covariance matrices of all observable objects in the scenario. The results of several simulations are presented and discussed, including results for the LEO SSA sensor in different orbits and for various combinations of space-based sensors.
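The greedy scheduler that the abstract uses as a baseline can be sketched in a few lines: at each decision step, observe the visible RSO whose state covariance has the largest trace, and score the schedule with the mean covariance trace across all objects. This is only an illustrative toy, not the paper's implementation; the scenario size, the visibility mask, and the halving "measurement update" are all invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical miniature scenario: 8 RSOs, each with a 6x6 state covariance
# (position + velocity), initialized with random uncertainty levels.
n_rso, steps = 8, 5
covs = [np.eye(6) * rng.uniform(1.0, 10.0) for _ in range(n_rso)]

def mean_trace(covs):
    """Figure of merit from the abstract: mean trace of all covariances."""
    return float(np.mean([np.trace(P) for P in covs]))

def greedy_step(covs, visible):
    """Greedy baseline: observe the visible RSO with the largest covariance
    trace, then shrink its uncertainty (a toy stand-in for a Kalman update)."""
    traces = [np.trace(P) if v else -np.inf for P, v in zip(covs, visible)]
    target = int(np.argmax(traces))
    covs[target] = covs[target] * 0.5  # illustrative measurement update
    return target

before = mean_trace(covs)
for _ in range(steps):
    visible = rng.random(n_rso) > 0.3  # random field-of-regard mask
    greedy_step(covs, visible)
after = mean_trace(covs)
assert after < before  # observations reduce overall catalog uncertainty
```

The paper's contribution is replacing `greedy_step` with a learned policy (proximal policy optimization) that can trade immediate uncertainty reduction against slew constraints and future visibility.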

Bibliographic Details
Main Authors: Siew, Peng M., Jang, Daniel, Roberts, Thomas G., Linares, Richard
Other Authors: Massachusetts Institute of Technology. Department of Aeronautics and Astronautics
Format: Article
Language: English
Published: Springer US, 2022
Online Access: https://hdl.handle.net/1721.1/146368
Citation: Siew, Peng M., Jang, Daniel, Roberts, Thomas G., and Linares, Richard. 2022. "Space-Based Sensor Tasking Using Deep Reinforcement Learning."
DOI: https://doi.org/10.1007/s40295-022-00354-8
License: Creative Commons Attribution (https://creativecommons.org/licenses/by/4.0/)