Learning models of sequential decision-making with partial specification of agent behavior

Bibliographic Details
Main Authors: Unhelkar, Vaibhav Vasant, Shah, Julie A
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
Format: Article
Language: English
Published: 2020
Online Access: https://hdl.handle.net/1721.1/125889
Description
Summary: Artificial agents that interact with other (human or artificial) agents require models in order to reason about those other agents' behavior. Beyond their predictive utility, maintaining a model that is aligned with an agent's true generative model of behavior is critical for effective human-agent interaction. In applications where observations and a partial specification of the agent's behavior are both available, achieving model alignment is challenging for several reasons: the agent's decision factors are often not completely known, and prior approaches that rely on observations of behavior alone can fail to recover the true model, since multiple models can explain observed behavior equally well. To achieve better model alignment, we provide a novel approach capable of learning aligned models that conform to partial knowledge of the agent's behavior. Central to our approach are a factored model of behavior (the Agent Markov Model, AMM), Bayesian nonparametric priors, and an inference procedure capable of incorporating partial specifications as constraints during model learning. We evaluate our approach experimentally and demonstrate improvements on metrics of model alignment.
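
To make the summary's ingredients concrete, below is a minimal Python sketch of a factored behavior model in the spirit of the AMM, where the agent's action depends on the observed state and a latent decision factor, and a partial specification of behavior is enforced as hard constraints on the policy. All names, the toy dynamics, and the finite Dirichlet priors (a simple stand-in for the Bayesian nonparametric priors described above) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_modes, n_actions = 4, 2, 3

# Factored behavior model: a latent decision factor ("mode") x evolves
# via P(x' | x, s), and the agent acts via the policy P(a | s, x).
# Dirichlet draws serve as a finite proxy for nonparametric priors.
mode_dynamics = rng.dirichlet(np.ones(n_modes), size=(n_modes, n_states))
policy = rng.dirichlet(np.ones(n_actions), size=(n_states, n_modes))

# Partial specification of behavior, assumed known a priori, e.g.
# "in state 0 under mode 1, the agent never takes action 2".
# Constrained entries are zeroed and rows renormalized, so every model
# considered during learning conforms to the specification.
forbidden = [(0, 1, 2)]
for s, x, a in forbidden:
    policy[s, x, a] = 0.0
policy /= policy.sum(axis=-1, keepdims=True)

def sample_trajectory(T=10, s0=0, x0=0):
    """Generate (state, action) observations; the mode x stays latent,
    which is why multiple models can explain the same data."""
    s, x, traj = s0, x0, []
    for _ in range(T):
        a = rng.choice(n_actions, p=policy[s, x])
        traj.append((s, a))
        x = rng.choice(n_modes, p=mode_dynamics[x, s])
        s = (s + a) % n_states  # toy, known world dynamics
    return traj

print(sample_trajectory())
```

In this sketch, inference would fit `mode_dynamics` and `policy` from sampled trajectories while keeping the constrained policy entries fixed at zero; restricting the hypothesis space this way is what lets partial specifications rule out observationally equivalent but misaligned models.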