Autonomous Flight Arcade: Reinforcement Learning for End-to-End Control of Fixed-Wing Aircraft
In this paper, we present the Autonomous Flight Arcade (AFA), a suite of robust environments for end-to-end control of fixed-wing aircraft and quadcopter drones. These environments are playable by both humans and artificial agents, making them useful for varied tasks including reinforcement learning, imitation learning, and human experiments. Additionally, we show that interpretable policies can be learned through the Neural Circuit Policy architecture on these environments. Finally, we present baselines of both human and AI performance on the Autonomous Flight Arcade environments.
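The abstract describes environments driven through the standard agent loop (observe, act, receive reward) common to reinforcement-learning suites. The AFA code itself is not reproduced in this record, so the sketch below is purely illustrative: a toy "pitch hold" task with invented names (`ToyPitchHoldEnv`, `run_episode`) that mimics the usual `reset()`/`step()` interface such environments typically expose, not the actual AFA API.

```python
import random

# Illustrative only: a toy stand-in for a flight-control environment.
# All names and dynamics here are hypothetical, not taken from AFA.
class ToyPitchHoldEnv:
    """Keep the aircraft's pitch angle near zero (level flight)."""

    def reset(self):
        self.pitch = random.uniform(-0.3, 0.3)  # radians
        return self.pitch

    def step(self, elevator):
        # Simplified dynamics: elevator input nudges pitch each step.
        self.pitch += 0.1 * elevator
        reward = -abs(self.pitch)        # reward staying level
        done = abs(self.pitch) > 1.0     # episode ends if pitch diverges
        return self.pitch, reward, done

def run_episode(env, policy, max_steps=50):
    """Roll out one episode and return the total reward."""
    obs = env.reset()
    total = 0.0
    for _ in range(max_steps):
        obs, reward, done = env.step(policy(obs))
        total += reward
        if done:
            break
    return total

# A trivial proportional "agent": push the nose back toward level.
env = ToyPitchHoldEnv()
print(run_episode(env, policy=lambda pitch: -pitch))
```

The same loop works whether `policy` is a human input handler, a trained neural network, or a scripted baseline, which is the human-and-agent playability property the abstract emphasizes.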
Main Author: | Wrafter, Daniel |
---|---|
Other Authors: | Rus, Daniela L. |
Format: | Thesis |
Published: | Massachusetts Institute of Technology, 2022 |
Online Access: | https://hdl.handle.net/1721.1/139297 |
Department: | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science |
Degree: | M.Eng. |
Thesis Date: | 2021-06 |
Rights: | In Copyright - Educational Use Permitted. Copyright MIT. http://rightsstatements.org/page/InC-EDU/1.0/ |