Measurement Maximizing Adaptive Sampling with Risk Bounding Functions
© 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. In autonomous exploration, a mobile agent must adapt to new measurements to seek high reward, but disturbances cause a probability of collision that must be traded off against expected reward. This paper considers an autonomous agent tasked with maximizing measurements from a Gaussian Process while subject to unbounded disturbances. We seek an adaptive policy in which the maximum allowed probability of failure is constrained as a function of the expected reward. The policy is found using an extension to Monte Carlo Tree Search (MCTS) which bounds the probability of failure. We apply MCTS to a sequence of approximating problems, which allows constraint-satisfying actions to be found in an anytime manner. Our innovation lies in defining the approximating problems and replanning strategy such that the probability of failure constraint is guaranteed to be satisfied over the true policy. The approach does not need to plan for all measurements explicitly, nor to constrain planning based only on the measurements that were observed. To the best of our knowledge, our approach is the first to enforce probability of failure constraints in adaptive sampling. Through experiments on real bathymetric data and simulated measurements, we show that our algorithm allows an agent to take dangerous actions only when the reward justifies the risk. We then verify through Monte Carlo simulations that the failure bounds are satisfied.
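The core requirement stated in the abstract can be read as a chance-constrained optimization. The formalization below is our own rendering of that sentence, not notation taken from the paper: π denotes the sampling policy, R(π) its cumulative measurement reward, and Δ(·) the risk bounding function.

```latex
% The policy maximizes expected Gaussian Process measurement reward,
% while its probability of failure (collision) is capped by a risk
% bounding function \Delta evaluated at that same expected reward.
\begin{aligned}
  \pi^{*} = \; & \arg\max_{\pi} \; \mathbb{E}\!\left[ R(\pi) \right] \\
  \text{subject to} \; & \Pr\!\left( \text{failure under } \pi \right) \;\le\; \Delta\!\left( \mathbb{E}\!\left[ R(\pi) \right] \right)
\end{aligned}
```

A constant Δ recovers an ordinary fixed chance constraint; making Δ increase with expected reward is what lets the agent accept more risk only when the reward justifies it.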
Main Authors: | Ayton, Benjamin James; Williams, Brian C; Camilli, Richard |
---|---|
Other Authors: | Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory; Woods Hole Oceanographic Institution |
Format: | Article |
Language: | English |
Published: | Association for the Advancement of Artificial Intelligence (AAAI), 2021 |
Online Access: | https://hdl.handle.net/1721.1/137367 |
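To make the MCTS idea in the abstract concrete, here is a minimal Python sketch of a tree-search selection step pruned by a risk bounding function. This is not the authors' algorithm (the paper guarantees the failure bound over a sequence of approximating problems and a replanning strategy, not via empirical rollout counts); `risk_bound`, `Node`, and `select_child` are hypothetical names.

```python
# Illustrative sketch only: MCTS action selection that discards children
# whose estimated failure probability exceeds a risk bounding function of
# their estimated reward. All names and numbers here are hypothetical.
import math


def risk_bound(expected_reward: float) -> float:
    """Hypothetical risk bounding function: allow more risk as expected
    reward grows, capped at a 5% failure probability."""
    return min(0.05, 0.01 * max(expected_reward, 0.0))


class Node:
    """One search-tree node; statistics come from Monte Carlo rollouts."""

    def __init__(self, action=None, parent=None):
        self.action = action
        self.parent = parent
        self.children = []
        self.visits = 0
        self.total_reward = 0.0
        self.failures = 0  # rollouts through this node ending in collision

    def mean_reward(self) -> float:
        return self.total_reward / self.visits if self.visits else 0.0

    def failure_rate(self) -> float:
        return self.failures / self.visits if self.visits else 0.0

    def select_child(self, c: float = 1.4) -> "Node":
        """UCB1 over the children that respect the risk bounding function;
        if none qualify, fall back to the least risky child."""
        safe = [ch for ch in self.children
                if ch.failure_rate() <= risk_bound(ch.mean_reward())]
        candidates = safe or sorted(self.children, key=Node.failure_rate)[:1]
        return max(candidates,
                   key=lambda ch: ch.mean_reward()
                   + c * math.sqrt(math.log(self.visits + 1) / (ch.visits + 1)))
```

The only point of contact with the paper is where Δ enters: an action's admissibility is judged against a failure budget that grows with its expected reward, rather than against a fixed threshold.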
id | mit-1721.1/137367
---|---|
collection | MIT
institution | Massachusetts Institute of Technology
type | Conference Paper (http://purl.org/eprint/type/ConferencePaper)
date issued | 2019-07
citation | Ayton, Benjamin James, Williams, Brian C and Camilli, Richard. 2019. "Measurement Maximizing Adaptive Sampling with Risk Bounding Functions." Proceedings of the AAAI Conference on Artificial Intelligence, 33.
doi | http://dx.doi.org/10.1609/AAAI.V33I01.33017511
journal | Proceedings of the AAAI Conference on Artificial Intelligence
license | Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)
source | MIT web domain
url | https://hdl.handle.net/1721.1/137367 |