Polyhedral DC Decomposition and DCA Optimization of Piecewise Linear Functions

Bibliographic Details
Main Authors: Andreas Griewank, Andrea Walther
Format: Article
Language: English
Published: MDPI AG 2020-07-01
Series: Algorithms
Subjects:
Online Access: https://www.mdpi.com/1999-4893/13/7/166
Description
Summary: For piecewise linear functions $f:\mathbb{R}^n \mapsto \mathbb{R}$ we show how their abs-linear representation can be extended to yield simultaneously their decomposition into a convex part $\check{f}$ and a concave part $\hat{f}$, including a pair of generalized gradients $\check{g} \in \mathbb{R}^n \ni \hat{g}$. The latter satisfy strict chain rules and can be computed in the reverse mode of algorithmic differentiation, at a small multiple of the cost of evaluating $f$ itself. It is shown how $\check{f}$ and $\hat{f}$ can be expressed as a single maximum and a single minimum of affine functions, respectively. The two subgradients $\check{g}$ and $-\hat{g}$ are then used to drive DCA algorithms, where the (convex) inner problem can be solved in finitely many steps, e.g., by a Simplex variant or the true steepest descent method. Using a reflection technique to update the gradients of the concave part, one can ensure finite convergence to a local minimizer of $f$, provided the Linear Independence Kink Qualification holds. For piecewise smooth objectives the approach can be used as an inner method for successive piecewise linearization.
ISSN: 1999-4893
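
The DCA iteration described in the summary can be illustrated on a small polyhedral example. The Python sketch below writes a piecewise linear function as the sum of a convex part (a single maximum of affine functions) and a concave part (a single minimum of affine functions), takes the gradient of the active affine piece of the concave part as the supergradient, and solves the convex inner problem as a linear program. The concrete test function, the box bounds that keep the LP bounded, and the use of scipy.optimize.linprog are illustrative assumptions and are not taken from the article, which instead proposes a Simplex variant or the true steepest descent method for the inner problem and a reflection technique for updating the concave-part gradients.

# Minimal DCA sketch for a polyhedral DC function (illustrative only, not the authors' code).
# f(x) = check_f(x) + hat_f(x), where
#   check_f(x) = max_i (A[i] @ x + alpha[i])   convex part, a single maximum of affine functions
#   hat_f(x)   = min_j (B[j] @ x + beta[j])    concave part, a single minimum of affine functions
# The test function, box bounds, and the use of scipy.optimize.linprog are assumptions.
import numpy as np
from scipy.optimize import linprog

# Example on R^2: f(x) = |x1 - 1| + |x2| - |x1 + x2|.
# Convex part |x1 - 1| + |x2| as a single max of 4 affine pieces s1*(x1 - 1) + s2*x2:
A = np.array([[s1, s2] for s1 in (-1, 1) for s2 in (-1, 1)], dtype=float)
alpha = np.array([-s1 for s1 in (-1, 1) for s2 in (-1, 1)], dtype=float)
# Concave part -|x1 + x2| as a single min of 2 affine pieces:
B = np.array([[1.0, 1.0], [-1.0, -1.0]])
beta = np.zeros(2)

def check_f(x):          # convex part
    return np.max(A @ x + alpha)

def hat_f(x):            # concave part
    return np.min(B @ x + beta)

def dca(x, max_iter=50, tol=1e-10, box=10.0):
    n = x.size
    for _ in range(max_iter):
        # Supergradient of the concave part: gradient of the currently active affine piece.
        j = int(np.argmin(B @ x + beta))
        hat_g = B[j]
        # Inner convex problem: minimize check_f(y) + hat_g @ y.
        # Epigraph LP in (y, t): min t + hat_g @ y  s.t.  A[i] @ y - t <= -alpha[i] for all i.
        c = np.append(hat_g, 1.0)
        A_ub = np.hstack([A, -np.ones((A.shape[0], 1))])
        bounds = [(-box, box)] * n + [(None, None)]
        res = linprog(c, A_ub=A_ub, b_ub=-alpha, bounds=bounds)
        y = res.x[:n]
        if np.linalg.norm(y - x) <= tol:
            break
        x = y
    return x

x_star = dca(np.array([3.0, -2.0]))
print("candidate minimizer:", x_star, " f(x*) =", check_f(x_star) + hat_f(x_star))

Because both parts are polyhedral, each inner problem here is a finite linear program, which mirrors the finitely terminating inner step emphasized in the summary.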