Learning to Solve Optimization Problems With Hard Linear Constraints

Bibliographic Details
Main Authors: Meiyi Li, Soheil Kolouri, Javad Mohammadi
Format: Article
Language: English
Published: IEEE 2023-01-01
Series: IEEE Access
Subjects: Learning to optimize; hard constraints; machine learning; optimal power flow; image registration
Online Access: https://ieeexplore.ieee.org/document/10147810/
_version_ 1797798136568086528
author Meiyi Li
Soheil Kolouri
Javad Mohammadi
author_facet Meiyi Li
Soheil Kolouri
Javad Mohammadi
author_sort Meiyi Li
collection DOAJ
description Constrained optimization problems appear in a wide variety of challenging real-world applications, where the constraints often capture the physics of the underlying system. Classic methods for solving these problems rely on iterative algorithms that explore the feasible domain in search of the best solution; these iterations often become the computational bottleneck in decision-making and adversely impact time-sensitive applications. Recently, neural approximators have shown promise as a replacement for iterative solvers, since they can output a near-optimal solution in a single feed-forward pass, providing rapid solutions to optimization problems. However, enforcing constraints through neural networks remains an open challenge. In this paper, we develop a neural approximator that maps the inputs of an optimization problem with hard linear constraints to a feasible solution that is nearly optimal. Our proposed approach consists of five main steps: 1) reducing the original problem to an optimization over a set of independent variables, 2) finding a gauge function that maps the $\ell_{\infty}$-norm unit ball to the feasible set of the reduced problem, 3) learning a neural approximator that maps the optimization's inputs to a virtual optimal point in the $\ell_{\infty}$-norm unit ball, 4) applying the gauge mapping to project the virtual optimal point from the $\ell_{\infty}$-norm unit ball onto the feasible set, and 5) recovering the values of the dependent variables from the independent variables to obtain the solution to the original problem. This sequence of steps guarantees hard feasibility. Unlike current learning-assisted solutions, our method is free of parameter tuning (in contrast to penalty-based methods) and removes iterations altogether. We demonstrate the performance of our proposed method on quadratic programming in the context of optimal power dispatch (critical to the resiliency of the electric grid) and on constrained non-convex optimization in the context of image registration. Our results support our theoretical findings and demonstrate superior performance in terms of computational time, optimality, and feasibility of the solution compared to existing approaches.
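To make the feasibility guarantee concrete, here is a minimal Python/NumPy sketch of steps 2) and 4) as described in the abstract: a gauge (Minkowski) mapping that sends any point of the $\ell_{\infty}$-norm unit ball into a bounded polytope {z : Cz <= d}. The function name, the interior point z0, and the toy data below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gauge_map(v, C, d, z0):
    """Illustrative gauge mapping (hypothetical helper, not the paper's code):
    send a point v of the l_inf unit ball into the polytope {z : C @ z <= d},
    assumed bounded with z0 strictly inside it, so that d - C @ z0 > 0."""
    slack = d - C @ z0                 # positive slack of the shifted polytope
    v_inf = np.max(np.abs(v))          # gauge of the l_inf unit ball at v
    if v_inf == 0.0:
        return z0.copy()               # the ball's center maps to the interior point
    phi = np.max((C @ v) / slack)      # Minkowski gauge of the shifted polytope
    return z0 + (v_inf / phi) * v      # rescale v so its image stays feasible

# Toy usage: squash a raw network output into the l_inf ball with tanh (step 3),
# then gauge-map it onto the box 0 <= z1 <= 2, 1 <= z2 <= 3 (step 4).
C = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
d = np.array([2.0, 0.0, 3.0, -1.0])
z0 = np.array([1.0, 2.0])              # an interior point of the box
raw = np.array([5.0, -0.3])            # stand-in for a neural-network output
z = gauge_map(np.tanh(raw), C, d, z0)
assert np.all(C @ z <= d + 1e-9)       # hard feasibility holds by construction
```

Because the map only rescales the network's output, feasibility holds for every possible output, with no penalty weights to tune and no projection iterations; recovering the dependent variables (step 5) then amounts to solving the equality constraints for them given the feasible independent variables z.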
first_indexed 2024-03-13T03:58:59Z
format Article
id doaj.art-5d2a49418a7c453191f23142f49b8f5d
institution Directory Open Access Journal
issn 2169-3536
language English
last_indexed 2024-03-13T03:58:59Z
publishDate 2023-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling doaj.art-5d2a49418a7c453191f23142f49b8f5d
Timestamp: 2023-06-21T23:00:13Z
Language: eng
Publisher: IEEE
Series: IEEE Access
ISSN: 2169-3536
Published: 2023-01-01
Volume: 11, Pages: 59995-60004
DOI: 10.1109/ACCESS.2023.3285199
IEEE Document: 10147810
Title: Learning to Solve Optimization Problems With Hard Linear Constraints
Authors: Meiyi Li (https://orcid.org/0000-0002-0178-7883), Department of Civil, Architectural and Environmental Engineering, The University of Texas at Austin, Austin, TX, USA; Soheil Kolouri (https://orcid.org/0000-0001-8495-5362), Department of Computer Science, Vanderbilt University, Nashville, TN, USA; Javad Mohammadi (https://orcid.org/0000-0003-0425-5302), Department of Civil, Architectural and Environmental Engineering, The University of Texas at Austin, Austin, TX, USA
Abstract: as in the description field above
Online Access: https://ieeexplore.ieee.org/document/10147810/
Keywords: Learning to optimize; hard constraints; machine learning; optimal power flow; image registration
spellingShingle Meiyi Li
Soheil Kolouri
Javad Mohammadi
Learning to Solve Optimization Problems With Hard Linear Constraints
IEEE Access
Learning to optimize
hard constraints
machine learning
optimal power flow
image registration
title Learning to Solve Optimization Problems With Hard Linear Constraints
title_full Learning to Solve Optimization Problems With Hard Linear Constraints
title_fullStr Learning to Solve Optimization Problems With Hard Linear Constraints
title_full_unstemmed Learning to Solve Optimization Problems With Hard Linear Constraints
title_short Learning to Solve Optimization Problems With Hard Linear Constraints
title_sort learning to solve optimization problems with hard linear constraints
topic Learning to optimize
hard constraints
machine learning
optimal power flow
image registration
url https://ieeexplore.ieee.org/document/10147810/
work_keys_str_mv AT meiyili learningtosolveoptimizationproblemswithhardlinearconstraints
AT soheilkolouri learningtosolveoptimizationproblemswithhardlinearconstraints
AT javadmohammadi learningtosolveoptimizationproblemswithhardlinearconstraints