Efficient Imitation Learning for Robust, Adaptive, Vision-based Agile Flight Under Uncertainty

Existing robust model predictive control (MPC) and vision-based state estimation algorithms for agile flight, while achieving impressive performance, still demand significant onboard computation, preventing deployment on robots with tight Cost, Size, Weight, and Power (CSWaP) constraints.

Full description

Bibliographic Details
Main Author: Tagliabue, Andrea
Other Authors: How, Jonathan P.
Format: Thesis
Published: Massachusetts Institute of Technology 2024
Online Access: https://hdl.handle.net/1721.1/155345
_version_ 1826205320877178880
author Tagliabue, Andrea
author2 How, Jonathan P.
author_facet How, Jonathan P.
Tagliabue, Andrea
author_sort Tagliabue, Andrea
collection MIT
description Existing robust model predictive control (MPC) and vision-based state estimation algorithms for agile flight, while achieving impressive performance, still demand significant onboard computation, preventing deployment on robots with tight Cost, Size, Weight, and Power (CSWaP) constraints. Existing imitation learning strategies that can train computationally efficient deep neural network policies from those algorithms have limited robustness and/or are impractical (large number of demonstrations, long training time), limiting rapid policy learning once new mission specifications or flight data become available. This thesis details efficient imitation learning strategies that make policy learning from MPC more practical while preserving robustness to uncertainties. First, this thesis contributes a method for efficiently learning trajectory tracking policies from robust MPC, enabling learning of a policy that achieves real-world robustness from a single real-world or simulated mission. Second, it presents a strategy for learning from MPCs with time-varying operating points, exploiting nonlinear models, and enabling acrobatic flights. The obtained policy has an onboard inference time of only 15 μs and can perform a flip on a UAV subject to uncertainties. Third, it extends the previous approaches to vision-based policies, enabling onboard sensing-to-action with millisecond-level latency, reducing the computational cost of vision-based state estimation, while using data from a single real-world mission. Fourth, it presents a method to reduce control errors under uncertainties, demonstrating rapid adaptation to unexpected failures and uncertainties while avoiding the challenging reward tuning/design of existing methods. Finally, this thesis evaluates the proposed contributions in simulation and hardware, including flights on an insect-scale (sub-gram), soft-actuated, flapping-wing UAV.
The methods developed in this thesis achieve the world’s first deployment of policies learned from MPC on sub-gram soft-actuated aerial robots.
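As background, the abstract's core idea of distilling an expensive expert controller (such as MPC) into a computationally cheap policy can be illustrated with a minimal behavior-cloning sketch. Everything below (the double-integrator dynamics, the LQR-like stand-in expert, the linear least-squares policy) is an illustrative assumption, not the thesis's actual method:

```python
import numpy as np

# Toy double-integrator dynamics: state x = [position, velocity], control u = acceleration.
DT = 0.1
A = np.array([[1.0, DT], [0.0, 1.0]])
B = np.array([[0.0], [DT]])

def expert_controller(x):
    """Stand-in for an MPC expert: an LQR-like state-feedback law u = -K x."""
    K = np.array([[1.0, 1.8]])  # hand-picked stabilizing gains
    return -K @ x

def collect_demonstrations(n_rollouts=20, horizon=50, rng=None):
    """Roll out the expert from random initial states, recording (state, action) pairs."""
    if rng is None:
        rng = np.random.default_rng(0)
    states, actions = [], []
    for _ in range(n_rollouts):
        x = rng.uniform(-1.0, 1.0, size=(2, 1))
        for _ in range(horizon):
            u = expert_controller(x)
            states.append(x.ravel())
            actions.append(u.ravel())
            x = A @ x + B @ u  # step the dynamics under the expert's action
    return np.array(states), np.array(actions)

# "Policy learning": fit a cheap linear policy u = W^T x by least squares on expert data.
S, U = collect_demonstrations()
W, *_ = np.linalg.lstsq(S, U, rcond=None)

# Since the demonstrations are exactly linear in the state, the learned policy
# recovers the expert's feedback gains.
print(W.T)  # ~ [[-1.0, -1.8]]
```

At deployment, evaluating `W.T @ x` costs a handful of multiply-adds, whereas the expert it imitates could be an arbitrarily expensive optimization; that cost asymmetry is what makes imitation from MPC attractive on CSWaP-constrained robots.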
first_indexed 2024-09-23T13:10:50Z
format Thesis
id mit-1721.1/155345
institution Massachusetts Institute of Technology
last_indexed 2024-09-23T13:10:50Z
publishDate 2024
publisher Massachusetts Institute of Technology
record_format dspace
spelling mit-1721.1/155345 2024-06-28T03:29:08Z Efficient Imitation Learning for Robust, Adaptive, Vision-based Agile Flight Under Uncertainty Tagliabue, Andrea How, Jonathan P. Karaman, Sertac Gilitschenski, Igor Massachusetts Institute of Technology. Department of Aeronautics and Astronautics Ph.D. 2024-06-27T19:46:23Z 2024-06-27T19:46:23Z 2024-05 2024-05-28T19:36:31.450Z Thesis https://hdl.handle.net/1721.1/155345 In Copyright - Educational Use Permitted Copyright retained by author(s) https://rightsstatements.org/page/InC-EDU/1.0/ application/pdf Massachusetts Institute of Technology
spellingShingle Tagliabue, Andrea
Efficient Imitation Learning for Robust, Adaptive, Vision-based Agile Flight Under Uncertainty
title Efficient Imitation Learning for Robust, Adaptive, Vision-based Agile Flight Under Uncertainty
title_full Efficient Imitation Learning for Robust, Adaptive, Vision-based Agile Flight Under Uncertainty
title_fullStr Efficient Imitation Learning for Robust, Adaptive, Vision-based Agile Flight Under Uncertainty
title_full_unstemmed Efficient Imitation Learning for Robust, Adaptive, Vision-based Agile Flight Under Uncertainty
title_short Efficient Imitation Learning for Robust, Adaptive, Vision-based Agile Flight Under Uncertainty
title_sort efficient imitation learning for robust adaptive vision based agile flight under uncertainty
url https://hdl.handle.net/1721.1/155345
work_keys_str_mv AT tagliabueandrea efficientimitationlearningforrobustadaptivevisionbasedagileflightunderuncertainty