TY - JOUR
T1 - A smooth inexact penalty reformulation of convex problems with linear constraints
AU - Tatarenko, Tatiana
AU - Nedich, Angelia
N1 - Publisher Copyright: © 2021 Society for Industrial and Applied Mathematics Publications. All rights reserved.
PY - 2021
Y1 - 2021
N2 - In this work, we consider a constrained convex problem with linear inequalities and provide an inexact penalty reformulation of the problem. The novelty is in the choice of the penalty functions, which are smooth and can induce a nonzero penalty at some points in the feasible region of the original constrained problem. The resulting unconstrained penalized problem is parametrized by two penalty parameters that control the slope and the curvature of the penalty function. With a suitable selection of these penalty parameters, we show that, under some assumptions, the solutions of the resulting penalized unconstrained problem are feasible for the original constrained problem. We also establish that, with suitable choices of penalty parameters, the solutions of the penalized unconstrained problem can achieve a suboptimal value arbitrarily close to the optimal value of the original constrained problem. For problems with a large number of linear inequality constraints, a particular advantage of such a smooth penalty-based reformulation is that it renders the penalized problem suitable for fast incremental gradient methods, which require only one sample from the inequality constraints at each iteration. We consider applying SAGA, proposed in [A. Defazio, F. Bach, and S. Lacoste-Julien, SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives, in Proceedings of NIPS, 2014, pp. 1646-1654], to solve the resulting penalized unconstrained problem. Moreover, we propose an alternative approach using a sequence of penalized problems. This approach is based on time-varying penalty parameters and, thus, does not require knowledge of problem-specific constants that may be difficult to estimate. For a strongly convex objective function, under some conditions on the penalty parameters, we prove that a single-loop full gradient-based algorithm applied to the corresponding time-varying penalized problems converges to the solution of the original constrained problem.
AB - In this work, we consider a constrained convex problem with linear inequalities and provide an inexact penalty reformulation of the problem. The novelty is in the choice of the penalty functions, which are smooth and can induce a nonzero penalty at some points in the feasible region of the original constrained problem. The resulting unconstrained penalized problem is parametrized by two penalty parameters that control the slope and the curvature of the penalty function. With a suitable selection of these penalty parameters, we show that, under some assumptions, the solutions of the resulting penalized unconstrained problem are feasible for the original constrained problem. We also establish that, with suitable choices of penalty parameters, the solutions of the penalized unconstrained problem can achieve a suboptimal value arbitrarily close to the optimal value of the original constrained problem. For problems with a large number of linear inequality constraints, a particular advantage of such a smooth penalty-based reformulation is that it renders the penalized problem suitable for fast incremental gradient methods, which require only one sample from the inequality constraints at each iteration. We consider applying SAGA, proposed in [A. Defazio, F. Bach, and S. Lacoste-Julien, SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives, in Proceedings of NIPS, 2014, pp. 1646-1654], to solve the resulting penalized unconstrained problem. Moreover, we propose an alternative approach using a sequence of penalized problems. This approach is based on time-varying penalty parameters and, thus, does not require knowledge of problem-specific constants that may be difficult to estimate. For a strongly convex objective function, under some conditions on the penalty parameters, we prove that a single-loop full gradient-based algorithm applied to the corresponding time-varying penalized problems converges to the solution of the original constrained problem.
KW - Convex minimization
KW - Incremental methods
KW - Inexact penalty
KW - Linear constraints
UR - http://www.scopus.com/inward/record.url?scp=85114386517&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85114386517&partnerID=8YFLogxK
U2 - 10.1137/18M1209180
DO - 10.1137/18M1209180
M3 - Article
SN - 1052-6234
VL - 31
SP - 2141
EP - 2170
JO - SIAM Journal on Optimization
JF - SIAM Journal on Optimization
IS - 3
ER -