Accelerated algorithms for constrained optimization and control
Nonlinear optimization with equality and inequality constraints is ubiquitous in optimization and control of large-scale systems. Ensuring feasibility alongside reasonable convergence to the optimal solution remains an open and pressing problem in this area.

A class of high-order tuners was recently proposed in the adaptive control literature to achieve accelerated convergence in the unconstrained case. In this thesis, we propose a new algorithm based on high-order tuners that accommodates both equality and inequality constraints. We leverage the linear dependence in the solution space to guarantee that equality constraints are always satisfied. For the specific case of box constraints, we further ensure feasibility by introducing time-varying gains in the high-order tuner while retaining its attractive accelerated-convergence properties. Theoretical guarantees pertaining to stability are also provided for time-varying regressors. These propositions are validated on several categories of optimization problems: academic examples, power flow optimization, and neural network training.

We devote particular attention to one case of neural network optimization, the linear neural network (LNN) training problem, to understand the dynamics of nonconvex optimization governed by gradient flow, and we provide Lyapunov stability guarantees for LNNs.
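The equality-constraint idea in the abstract can be made concrete with a small sketch. The Python below is a hypothetical illustration, not the thesis's exact update law: it uses a generic two-state (momentum-style) tuner template with assumed gains `beta` and `gamma`, and enforces Aθ = b exactly at every iterate by parameterizing θ as a particular solution plus a null-space component of A.

```python
import numpy as np

# Hypothetical illustration (not the thesis's exact update law): a
# momentum-style "high-order tuner" that optimizes over the null space
# of the equality constraints, so A @ theta == b holds at every iterate.

def null_space_param(A, b):
    """Particular solution theta_p of A theta = b and basis N of null(A)."""
    theta_p = np.linalg.lstsq(A, b, rcond=None)[0]
    # SVD: rows of Vt beyond rank(A) span the null space of A.
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-10))
    N = Vt[rank:].T                      # columns: orthonormal null-space basis
    return theta_p, N

def high_order_tuner(grad, theta_p, N, z0, beta=0.1, gamma=0.5, iters=500):
    """Two-state (high-order) update in the reduced variable z; theta = theta_p + N z."""
    z, nu = z0.copy(), z0.copy()         # nu is the auxiliary (momentum) state
    for _ in range(iters):
        theta = theta_p + N @ z
        g = N.T @ grad(theta)            # reduced gradient with respect to z
        nu = nu - gamma * beta * g       # auxiliary-state update
        z = z - beta * (z - nu)          # filtered parameter update
    return theta_p + N @ z

# Example: minimize ||P theta - q||^2 subject to A theta = b.
rng = np.random.default_rng(0)
P, q = rng.standard_normal((8, 5)), rng.standard_normal(8)
A, b = rng.standard_normal((2, 5)), rng.standard_normal(2)
theta_p, N = null_space_param(A, b)
grad = lambda th: 2 * P.T @ (P @ th - q)
theta = high_order_tuner(grad, theta_p, N, np.zeros(N.shape[1]))
print(np.allclose(A @ theta, b))         # equality constraints hold exactly: True
```

Because A N = 0 by construction, feasibility with respect to the equality constraints is independent of the gains or the number of iterations, which is the property the abstract describes.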
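For the LNN analysis mentioned in the last paragraph, a standard gradient-flow Lyapunov argument gives the flavor of the result. The setup below (a two-layer linear network with the loss itself as Lyapunov candidate) is an assumption for illustration, not the thesis's exact construction.

```latex
% Assumed setup: a two-layer linear network f(x) = W_2 W_1 x trained
% on data (X, Y) by gradient flow.
\[
  L(W_1, W_2) = \tfrac{1}{2}\,\lVert W_2 W_1 X - Y \rVert_F^2,
  \qquad
  \dot{W}_i = -\frac{\partial L}{\partial W_i}, \quad i = 1, 2.
\]
% Taking V = L as a Lyapunov candidate, along trajectories of the flow:
\[
  \dot{V}
  = \Big\langle \frac{\partial L}{\partial W_1}, \dot{W}_1 \Big\rangle
  + \Big\langle \frac{\partial L}{\partial W_2}, \dot{W}_2 \Big\rangle
  = -\Big\lVert \frac{\partial L}{\partial W_1} \Big\rVert_F^2
  - \Big\lVert \frac{\partial L}{\partial W_2} \Big\rVert_F^2 \le 0,
\]
% so the loss is nonincreasing and trajectories approach the set of
% critical points. The nonconvexity of L in (W_1, W_2) is what makes
% stronger stability guarantees for LNNs nontrivial.
```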
Main Author: | Parashar, Anjali |
---|---|
Other Authors: | Annaswamy, Anuradha M. |
Department: | Massachusetts Institute of Technology. Department of Mechanical Engineering |
Degree: | S.M. |
Format: | Thesis |
Published: | Massachusetts Institute of Technology, 2023 |
Online Access: | https://hdl.handle.net/1721.1/152459 |