
The **quadratic programming (QP) problem** involves minimizing a quadratic function subject to linear constraints. A general formulation is

\[
\begin{array}{lll}
\mbox{minimize} & \frac{1}{2} x^T Q x + c^T x & \\
\mbox{subject to} & a_i^T x = b_i, & \forall i \in \mathcal{E}, \\
 & a_i^T x \geq b_i, & \forall i \in \mathcal{I},
\end{array}
\]

where \(Q \in \mathbb{R}^{n\times n}\) is symmetric, and the index sets \(\mathcal{E}\) and \(\mathcal{I}\) specify the equality and inequality constraints, respectively. Quadratic programs are an important class of problems in their own right and as subproblems in methods for general constrained optimization, such as sequential quadratic programming (SQP) and augmented Lagrangian methods.
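For the equality-constrained convex case (\(\mathcal{I} = \emptyset\) and \(Q\) positive definite), the optimality conditions reduce to a single linear "saddle-point" system in \(x\) and the multipliers \(y\), which can be solved directly. The sketch below illustrates this with NumPy; the problem data are made up for illustration and are not from the article.

```python
import numpy as np

# Minimize 1/2 x^T Q x + c^T x  subject to  A x = b (equalities only).
# Stationarity (Q x + c = A^T y) plus feasibility (A x = b) gives:
#   [ Q  -A^T ] [ x ]   [ -c ]
#   [ A   0   ] [ y ] = [  b ]
Q = np.array([[4.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0]])               # one constraint: x1 + x2 = 1
b = np.array([1.0])

n, m = Q.shape[0], A.shape[0]
K = np.block([[Q, -A.T], [A, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([-c, b]))
x, y = sol[:n], sol[n:]   # primal solution and Lagrange multipliers
```

For inequality-constrained QPs this single solve no longer suffices; active-set and interior-point methods solve a sequence of such systems instead.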

The difficulty of solving a QP depends largely on the nature of the matrix \(Q\). For *convex* QPs, the matrix \(Q\) is positive semidefinite, and polynomial-time algorithms exist to solve these relatively easy problems. For *non-convex* QPs, in which \(Q\) has negative eigenvalues, the objective function may have more than one local minimizer; in this case, the problem is NP-hard (its decision version is NP-complete). An extreme example is the problem

\[
\begin{array}{ll}
\mbox{minimize} & - x^T x \\
\mbox{subject to} & -1 \leq x_i \leq 1, \quad \forall i = 1, \ldots, n,
\end{array}
\]

which has a minimizer at any \(x\) with \(|x_i| = 1\) for \(i = 1, \ldots, n\). In this extreme case, there are \(2^n\) local minimizers.
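The exponential count is easy to verify numerically for small \(n\): every sign vector is a vertex of the box, and each one attains the same objective value \(-n\). A minimal sketch:

```python
import itertools
import numpy as np

# For min -x^T x over the box -1 <= x_i <= 1, every x with |x_i| = 1
# is a local (here also global) minimizer with objective value -n.
# Enumerating the sign vectors for small n exhibits the 2^n count.
n = 4
vertices = [np.array(s) for s in itertools.product([-1.0, 1.0], repeat=n)]
values = [-v @ v for v in vertices]   # objective at each vertex
```

Here `len(vertices)` is \(2^4 = 16\) and every entry of `values` equals \(-4\).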

## Optimality conditions

The necessary optimality conditions for a vector \(x^*\) to be a local minimizer are

(1) that it should be *primal feasible*:

\(a_i^T x^* = b_i\) for \(i \in \mathcal{E}\) and \(a_i^T x^* \geq b_i\) for \(i \in \mathcal{I}\),

(2) that it should be *dual feasible*:

\(Q x^* + c = \sum_{i \in \mathcal{E} \cup \mathcal{I}} a_i y_i^*\) and \(y_i^* \geq 0\) for \(i \in \mathcal{I}\),

for some vector of Lagrange multipliers \(y^*\), and

(3) that the *complementary slackness* condition holds:

\(( a_i^T x^* - b_i ) y_i^* = 0\) for all \(i \in \mathcal{I}\).

These requirements are commonly known as the *Karush-Kuhn-Tucker (KKT) conditions*.
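The three conditions can be verified numerically for a candidate pair \((x^*, y^*)\). The helper below is an illustrative sketch (not any particular solver's API): `A` stacks the constraint rows \(a_i^T\), and a boolean mask marks which rows are inequalities.

```python
import numpy as np

def check_kkt(Q, c, A, b, ineq, x, y, tol=1e-8):
    """Check the KKT conditions for a QP at (x, y).

    A stacks the rows a_i^T; `ineq` is a boolean mask marking the
    inequality rows, the remaining rows being equalities.
    """
    r = A @ x - b
    # (1) primal feasibility
    primal = np.all(np.abs(r[~ineq]) <= tol) and np.all(r[ineq] >= -tol)
    # (2) dual feasibility: Q x + c = A^T y, with y_i >= 0 on inequalities
    dual = (np.allclose(Q @ x + c, A.T @ y, atol=tol)
            and np.all(y[ineq] >= -tol))
    # (3) complementary slackness
    slack = np.all(np.abs(r[ineq] * y[ineq]) <= tol)
    return bool(primal and dual and slack)

# Example: min 1/2 x^2 - x subject to x >= 0 has minimizer x = 1,
# where the constraint is inactive, so its multiplier is y = 0.
Q = np.array([[1.0]]); c = np.array([-1.0])
A = np.array([[1.0]]); b = np.array([0.0]); ineq = np.array([True])
ok = check_kkt(Q, c, A, b, ineq, np.array([1.0]), np.array([0.0]))
```

A point that fails any of the three tests, such as \(x = 0\) with \(y = 0\) in the example above, is rejected.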

A KKT point is a local minimizer if and only if \(s^T Q s \geq 0\) for all vectors \(s \in \mathcal{S}\), where

\[
\mathcal{S} = \left\{ s :
\begin{array}{l}
a_i^T s = 0 \ \mbox{for} \ i \in \mathcal{E} \ \mbox{and for} \ i \in \mathcal{I} \ \mbox{such that} \ a_i^T x^* = b_i \ \mbox{and} \ y^*_i > 0, \ \mbox{and} \\
a_i^T s \geq 0 \ \mbox{for} \ i \in \mathcal{I} \ \mbox{such that} \ a_i^T x^* = b_i \ \mbox{and} \ y^*_i = 0
\end{array}
\right\}.
\]

This second-order condition is trivially satisfied for convex problems but may be hard (NP-complete) to check for non-convex ones if there are many \(i \in \mathcal{I}\) for which \(a_i^T x^* = b_i\) and \(y^*_i = 0\).
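In the benign case where every active multiplier is strictly positive, \(\mathcal{S}\) is just the null space of the active-constraint rows, and the condition amounts to the reduced Hessian \(Z^T Q Z\) being positive semidefinite, where the columns of \(Z\) span that null space. A small sketch with illustrative data:

```python
import numpy as np
from scipy.linalg import null_space

# Indefinite Q, but the active constraint removes the negative direction.
Q = np.array([[1.0, 0.0], [0.0, -3.0]])   # eigenvalues 1 and -3
A_active = np.array([[0.0, 1.0]])          # active constraint rows a_i^T
Z = null_space(A_active)                   # basis for {s : A_active s = 0}
reduced = Z.T @ Q @ Z                      # reduced Hessian Z^T Q Z
second_order_ok = np.all(np.linalg.eigvalsh(reduced) >= 0)
```

Here the reduced Hessian is the \(1 \times 1\) matrix \([1]\), so the condition holds even though \(Q\) itself is indefinite. With weakly active constraints (\(y^*_i = 0\)), \(\mathcal{S}\) is a cone rather than a subspace, and no such eigenvalue test suffices.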

## Algorithms

The Quadratic Programming Algorithms page provides information on algorithms for quadratic programming problems.

Linear programming is a special case of quadratic programming in which the matrix \(Q = 0\). Linear least-squares problems are convex QPs and can be solved directly, for example by QR factorization or via the normal equations; Gauss-Newton and Levenberg-Marquardt are related methods for *nonlinear* least squares. If the QP has no constraints, minimizing the objective reduces to solving the linear system \(Q x = -c\), for which there are many techniques (e.g. the Conjugate Gradient Method or direct factorizations such as LU or Cholesky Decompositions).
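The unconstrained case can be sketched with both a direct factorization and an iterative method, which should agree on a well-conditioned positive-definite system; the data below are randomly generated for illustration.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.sparse.linalg import cg

# An unconstrained convex QP, min 1/2 x^T Q x + c^T x with Q positive
# definite, is minimized where the gradient vanishes: Q x = -c.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
Q = M @ M.T + 5 * np.eye(5)        # symmetric positive definite
c = rng.standard_normal(5)

x_chol = cho_solve(cho_factor(Q), -c)   # direct: Cholesky factorization
x_cg, info = cg(Q, -c)                  # iterative: conjugate gradients
```

Cholesky is the usual choice for small dense systems; conjugate gradients only needs matrix-vector products with \(Q\) and so scales to large sparse problems.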

Quadratic programming (QP) problems are special cases of more general problem classes, so they can also be solved by software packages for those more general classes. Quadratically constrained quadratic programming (QCQP) problems generalize QPs in that the constraints are *quadratic* instead of linear. Second-order cone programming (SOCP) problems generalize QCQPs, and nonlinear programming (NLP) problems generalize SOCPs.

## Online and Software Resources

- Optimization Online Nonlinear Programming area
- N. Gould and P. Toint's QP page
- H. Mittelman's list of QP solvers
- NIST's Guide to Available Mathematical Software (GAMS) has lists of codes for convex QP and non-convex QP. Most are in Fortran or C.

## Test Problems

- Maros and Meszaros's collection of QP data files in QPS format
- Maros and Meszaros's data files as part of CUTEr
- Online QP benchmark collection
- Testcases listed on Hans Mittelmann's website

## References

- Altman, A. and Gondzio, J. 1999. Regularized symmetric indefinite systems in interior point methods for linear and quadratic optimization. *Optimization Methods and Software* **11**: 275-302.
- Fletcher, R. 1971. A general quadratic programming algorithm. *Journal of the Institute of Mathematics and its Applications* **7**: 76-91.
- Friedlander, M. P. and Orban, D. 2012. A primal-dual regularized interior-point method for convex quadratic programs. *Mathematical Programming Computation* **4**: 71-107.
- Gill, P. E., Murray, W., Saunders, M. A. and Wright, M. H. 1991. Inertia-controlling methods for general quadratic programming. *SIAM Review* **33**: 1-36.
- Goldfarb, D. and Idnani, A. U. 1983. A numerically stable dual method for solving strictly convex quadratic programs. *Mathematical Programming* **27**: 1-33.
- Gould, N. I. M. and Toint, P. L. 2002. An iterative working-set method for large-scale non-convex quadratic programming. *Applied Numerical Mathematics* **43**: 109-128.
- Gould, N. I. M., Orban, D., Sartenaer, A. and Toint, P. L. 2001. Superlinear convergence of primal-dual interior point algorithms for nonlinear programming. *SIAM Journal on Optimization* **11**: 974-1002.
- Kozlov, M. K., Tarasov, S. P. and Khachiyan, L. G. 1979. Polynomial solvability of convex quadratic programming. *Soviet Mathematics Doklady* **20**: 1108-1111.
- Murty, K. G. and Kabadi, S. N. 1987. Some NP-complete problems in quadratic and nonlinear programming. *Mathematical Programming* **39**: 117-129.
- Nocedal, J. and Wright, S. J. 2006. *Numerical Optimization*, Springer.
- Potra, F. A. and Stoer, J. 2009. On a class of superlinearly convergent polynomial time interior point methods for sufficient LCP. *SIAM Journal on Optimization* **20**: 1333-1363.
- Stoer, J. and Wechs, M. 1998. Infeasible-interior-point paths for sufficient linear complementarity problems and their analyticity. *Mathematical Programming* **83**: 407-423.
- Stoer, J. and Wechs, M. 1999. On the analyticity properties of infeasible-interior-point paths for monotone linear complementarity problems. *Numerische Mathematik* **81**: 631-645.
- Vanderbei, R. J. 1999. LOQO: An interior point code for quadratic programming. *Optimization Methods and Software* **12**: 451-484.
- Vavasis, S. A. 1991. *Nonlinear Optimization: Complexity Issues*, Oxford University Press, Oxford, England.
- Zhang, Y. 1994. On the convergence of a class of infeasible interior-point methods for the horizontal linear complementarity problem. *SIAM Journal on Optimization* **4**: 208-227.
- Zhao, G. and Sun, J. 1999. On the rate of local convergence of high-order-infeasible-path-following algorithms for \(P_*\)-linear complementarity problems. *Computational Optimization and Applications* **14**: 293-307.