Addressing misspecification in contextual optimization
We study the predict-then-optimize framework, which combines machine learning with a downstream optimization task. This approach entails forecasting unknown parameters of an optimization problem and then solving the optimization task based on these predictions. For example, consider an energy allocation problem in which the energy cost in different areas is uncertain. Although precise energy costs are not available at the time the problem is solved, machine learning models are used to predict them, and the resulting optimization problem, for example minimizing total energy cost while meeting minimum requirements, is solved with state-of-the-art optimization algorithms. When the chosen hypothesis set is well-specified (i.e., it contains the ground-truth predictor), the SLO (Sequential Learning and Optimization) approach performs best among state-of-the-art methods and has provable performance guarantees. In the misspecified setting (i.e., the hypothesis set does not contain the ground-truth predictor), the ILO (Integrated Learning and Optimization) approach tends to behave better in practice but does not enjoy theoretical optimality guarantees. We focus on the misspecified setting, for which no known algorithm rigorously solves this prediction problem. We provide a tractable ILO algorithm that finds an optimal solution in this setting. Our approach minimizes a surrogate loss that enjoys theoretical optimality guarantees as well as good behavior in practice. In particular, we show experimentally that our approach outperforms SLO and previous ILO methods in the misspecified setting.
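To make the pipeline described in the abstract concrete, the sketch below is a minimal, illustrative example (not code from the thesis): a linear model predicts uncertain per-area energy costs from contextual features, and a linear program then allocates energy at the predicted costs subject to a minimum total requirement. The synthetic data, the linear hypothesis class, and the use of scipy's linprog are assumptions made for illustration only; this corresponds to the sequential (SLO) pipeline, not the surrogate-loss ILO method proposed in the thesis.

```python
# Minimal predict-then-optimize sketch (sequential SLO pipeline) on a toy
# energy-allocation problem.  All names and data below are illustrative.
import numpy as np
from scipy.optimize import linprog
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_areas, n_features, n_samples = 5, 3, 200

# Synthetic contexts and per-area energy costs.  The true cost map is
# nonlinear, so the linear hypothesis class below is misspecified,
# mirroring the setting the thesis focuses on.
X = rng.normal(size=(n_samples, n_features))
W = rng.normal(size=(n_features, n_areas))
true_costs = np.exp(X @ W) + 1.0

# Step 1 (learning): fit a cost predictor by ordinary least squares.
model = LinearRegression().fit(X, true_costs)

# Step 2 (optimization): for a new context, allocate energy using the
# predicted costs:  min c_hat @ z  s.t.  sum(z) >= demand, 0 <= z_i <= cap.
x_new = rng.normal(size=(1, n_features))
c_hat = model.predict(x_new).ravel()
demand, cap = 3.0, 1.0
result = linprog(
    c=c_hat,
    A_ub=-np.ones((1, n_areas)),  # sum(z) >= demand written as -sum(z) <= -demand
    b_ub=np.array([-demand]),
    bounds=[(0.0, cap)] * n_areas,
    method="highs",
)
print("allocation:", result.x)
print("cost under predicted prices:", result.fun)
```

An integrated (ILO) approach would instead train the predictor against the downstream decision cost, for example by minimizing a surrogate of the regret incurred by the induced allocation; that is the direction the thesis pursues for the misspecified case.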
Main Author: | Bennouna, Omar |
---|---|
Other Authors: | Ozdaglar, Asuman |
Department: | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science |
Degree: | S.M. |
Format: | Thesis |
Published: | Massachusetts Institute of Technology, 2024 |
Rights: | In Copyright - Educational Use Permitted; copyright retained by author(s) (https://rightsstatements.org/page/InC-EDU/1.0/) |
Online Access: | https://hdl.handle.net/1721.1/156138 |