Semi-Parametric Efficient Policy Learning with Continuous Actions

Bibliographic Details
Main Authors: Chernozhukov, Victor; Lewis, Greg; Syrgkanis, Vasilis; Demirer, Mert
Other Authors: Sloan School of Management
Format: Article
Language: English
Published: The IFS 2021
Online Access: https://hdl.handle.net/1721.1/137326
Description
Summary: © 2019 Neural Information Processing Systems Foundation. All rights reserved. We consider off-policy evaluation and optimization with continuous action spaces. We focus on observational data where the data collection policy is unknown and needs to be estimated. We take a semi-parametric approach in which the value function takes a known parametric form in the treatment, but we are agnostic about how it depends on the observed contexts. We propose a doubly robust off-policy estimate for this setting and show that off-policy optimization based on this estimate is robust to estimation errors in the policy function or the regression model. Our results also apply when the model does not satisfy our semi-parametric form; in that case we measure regret in terms of the best projection of the true value function onto this functional space. Our work extends prior approaches to policy optimization from observational data, which considered only discrete actions. We provide an experimental evaluation of our method on a synthetic data example motivated by optimal personalized pricing and costly resource allocation.
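
The abstract describes the estimator only at a high level. As a purely illustrative sketch (not the authors' implementation), the snippet below builds a doubly robust score for a value function that is linear in a known treatment feature map phi(t), with crude global plug-ins standing in for the flexible, cross-fitted nuisance estimates a real application would use. All names (phi, theta_hat, sigma_inv, dr_scores, pi) and the simulated data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                                  # observed context
t = 1.0 + 0.5 * x + rng.normal(size=n)                  # continuous treatment from an unknown logging policy
phi = lambda t: np.column_stack([np.ones_like(t), t])   # known parametric treatment features
theta0 = np.column_stack([2.0 + x, -1.0 + 0.3 * x])     # true context-dependent coefficients theta0(x)
y = np.sum(theta0 * phi(t), axis=1) + rng.normal(size=n)

# Crude global nuisance estimates, standing in for flexible ML fits with cross-fitting.
f = phi(t)
theta_hat = np.linalg.lstsq(f, y, rcond=None)[0]        # plug-in regression coefficients, shape (2,)
sigma_inv = np.linalg.pinv(f.T @ f / n)                 # inverse of an estimate of E[phi(T) phi(T)']

# Doubly robust scores: plug-in estimate plus an inverse-covariance-weighted residual correction.
residual = y - f @ theta_hat
dr_scores = theta_hat + (f * residual[:, None]) @ sigma_inv.T

# Off-policy value of a candidate deterministic policy pi(x): mean of <dr_score_i, phi(pi(x_i))>.
pi = lambda x: 1.5 - 0.2 * x                            # hypothetical target policy
value_estimate = np.mean(np.sum(dr_scores * phi(pi(x)), axis=1))
print(f"doubly robust policy value estimate: {value_estimate:.3f}")
```

Intuitively, the two-term structure is what the abstract's robustness claim refers to: if the regression plug-in is accurate, the residual correction is approximately mean zero, and if the treatment-feature second-moment nuisance is accurate, the correction removes the first-order bias of an inaccurate regression.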