
## Basic Concepts

The (linear) **semidefinite programming (SDP)** problem is essentially an ordinary linear program in which the nonnegativity constraint on the vector variable is replaced by a semidefinite constraint on a matrix variable. The standard form of the primal problem is:

\[ \begin{array}{ll}

\mbox{minimize} & C \bullet X \\

\mbox{subject to} & A_k \bullet X = b_k, \; k=1,\ldots,m \\

& X \geq 0

\end{array} \]

where \(C\), \(A_k\), and \(X\) are symmetric \(n \times n\) matrices, each \(b_k\) is a scalar, and the constraint \(X \geq 0\) means that the unknown matrix \(X\) must lie in the closed, convex cone of positive semidefinite matrices. Here, \(\bullet\) denotes the standard inner product on the space of symmetric matrices: for symmetric matrices \(A\) and \(B\), \(A \bullet B = \mbox{trace}(AB)\). The dual version of the problem is
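
As a quick numerical sanity check of these definitions, the inner product \(A \bullet B = \mbox{trace}(AB)\) and the semidefinite constraint \(X \geq 0\) can be verified with a few lines of numpy; the matrices below are illustrative examples, not data from the text.

```python
import numpy as np

# The inner product on symmetric matrices: A • B = trace(AB).
A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, 0.0], [0.0, 4.0]])
inner = np.trace(A @ B)
# For symmetric A, B this equals the sum of elementwise products.
assert np.isclose(inner, np.sum(A * B))

# X >= 0 (positive semidefinite) holds iff all eigenvalues are nonnegative.
X = np.array([[2.0, -1.0], [-1.0, 2.0]])
is_psd = bool(np.all(np.linalg.eigvalsh(X) >= -1e-12))
print(inner, is_psd)   # 14.0 True
```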

\[ \begin{array}{ll}

\mbox{maximize} & b^T y \\

\mbox{subject to} & \sum_{k=1}^m y_k A_k + Z = C \\

& Z \geq 0

\end{array} \]

where the dual variable \(y\) is an \(m\)-vector of Lagrange multipliers corresponding to the equality constraints in the primal, and \(Z\) is a symmetric \(n\times n\) dual slack matrix.
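
The primal and dual are linked by weak duality: for any primal-feasible \(X\) and dual-feasible \((y, Z)\), the duality gap satisfies \(C \bullet X - b^T y = X \bullet Z \geq 0\). A minimal sketch on a tiny made-up instance (\(n = 2\), \(m = 1\); all data below are illustrative assumptions):

```python
import numpy as np

# Tiny SDP instance: minimize C•X s.t. A1•X = b1 (i.e. trace(X) = 1), X >= 0.
C = np.diag([1.0, 2.0])
A1 = np.eye(2)
b = np.array([1.0])

X = np.diag([0.5, 0.5])        # primal feasible: trace(X) = 1, X >= 0
y = np.array([0.5])
Z = C - y[0] * A1              # dual feasible: sum_k y_k A_k + Z = C, Z >= 0
assert np.all(np.linalg.eigvalsh(Z) >= 0)

primal = np.trace(C @ X)
dual = b @ y
gap = primal - dual
# Weak duality: C•X - b^T y = X•Z >= 0 for any feasible pair.
assert np.isclose(gap, np.trace(X @ Z)) and gap >= 0
print(primal, dual, gap)   # 1.5 0.5 1.0
```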

Note that semidefinite programming subsumes second-order cone programming (SOCP), since SOCP constraints can be written as linear matrix inequalities.
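
For instance, by the Schur complement, the second-order cone constraint \(\|x\|_2 \leq t\) is equivalent to requiring that the "arrow" matrix \(\left[\begin{smallmatrix} tI & x \\ x^T & t \end{smallmatrix}\right]\) be positive semidefinite. A small numerical check of this equivalence (the test points are arbitrary):

```python
import numpy as np

def arrow(t, x):
    """Build the arrow matrix [[t*I, x], [x^T, t]]; it is PSD iff ||x|| <= t."""
    n = len(x)
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = t * np.eye(n)
    M[:n, n] = x
    M[n, :n] = x
    M[n, n] = t
    return M

for t, x in [(5.0, np.array([3.0, 4.0])),    # on the cone boundary: 5 = ||(3,4)||
             (1.0, np.array([1.0, 1.0]))]:   # outside: 1 < sqrt(2)
    psd = bool(np.all(np.linalg.eigvalsh(arrow(t, x)) >= -1e-9))
    print(psd, t >= np.linalg.norm(x))       # the two checks always agree
```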

SDP reduces to LP when all the matrices are diagonal. SDP (like LP) is a special instance of a more general problem class called *conic linear programs*, in which one seeks to minimize a linear objective function subject to linear constraints and a *cone constraint*. Both the semidefinite cone (for SDP) and the nonnegative orthant (for LP) are homogeneous, self-dual cones; up to isomorphism, there are only five such families of cones. Another example of a homogeneous, self-dual cone is the quadratic (second-order) cone, \(Q\), in \(n+1\) dimensions, defined by

\[Q = \left\{ (x_0,x_1,\ldots,x_n) \in R^{n+1} \;\middle|\; x_0 \geq \left( \sum_{i=1}^n x_i^2 \right)^{1/2} \right\},\]

which gives rise to quadratically constrained problems.
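
Membership in \(Q\) is a one-line test: the first coordinate must dominate the Euclidean norm of the rest. A minimal sketch (the sample points are arbitrary):

```python
import numpy as np

def in_second_order_cone(v, tol=1e-12):
    """Check (x0, x1, ..., xn) in Q, i.e. x0 >= ||(x1, ..., xn)||_2."""
    return bool(v[0] >= np.linalg.norm(v[1:]) - tol)

print(in_second_order_cone(np.array([5.0, 3.0, 4.0])))   # True: 5 >= ||(3,4)|| = 5
print(in_second_order_cone(np.array([1.0, 1.0, 1.0])))   # False: 1 < sqrt(2)
```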

One of the main ways in which SDP differs from LP is that the nonnegative orthant is a polyhedral cone, whereas the semidefinite cone is not. Thus, developing simplex-type algorithms for SDP is a topic of current research. However, it is fairly straightforward to design polynomial-time primal-dual interior-point algorithms for these problems. For a tutorial on SDP, see Vandenberghe and Boyd (1996). For a survey, see Wolkowicz, Saigal, and Vandenberghe (2000).

## Algorithms and Applications

SDP is a generalization of linear programming to the space of block diagonal, symmetric, positive semidefinite matrices. Research and development in SDP has been stimulated by the extension of interior-point methods from LP to the semidefinite case.

### Interior-Point Algorithms

Interior-point algorithms come in two flavors: (i) potential reduction and (ii) path-following. Further, an interior-point method can be primal only, dual only, or primal-dual. Primal-dual path-following algorithms are popular and are implemented in many currently available codes.

As with LP, primal-dual algorithms solve SDP by solving the Karush-Kuhn-Tucker (KKT) system:

\[ \begin{array}{lll}

A_k \bullet X & = & b_k, \; k=1, \ldots,m \\

\sum_{k=1}^m y_k A_k + Z & = & C \\

X Z & = & 0

\end{array} \]

while simultaneously ensuring that the iterates \(X\) and \(Z\) are symmetric and strictly positive definite. The last equation is the *complementarity condition* for SDP. Primal-dual algorithms use Newton's method to solve a relaxed version of this system: they replace the right-hand side of the complementarity condition by \(\mu I\), where \(\mu > 0\) is a parameter that is driven to zero, and generate a sequence that exhibits global convergence to a solution, if one exists. Again, a difference arises with LP in that applying Newton's method to the KKT system usually results in asymmetric corrections for \(X\). A symmetrization of the complementarity condition is thus warranted. Different symmetrizations lead to different Newton corrections (search directions). Once a search direction is computed, a step is taken along this direction so that the new iterate remains strictly positive definite, and the value of \(\mu\) is decreased. This process is repeated until \(\mu\) is sufficiently small. For more details on primal-dual interior-point methods, see Wright (1996) and Wolkowicz, Saigal and Vandenberghe (2000).
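
The need for symmetrization can be seen directly: even when \(X\) and \(Z\) are both symmetric positive definite, the product \(XZ\) is generally not symmetric. A minimal numerical sketch (random test matrices, one common symmetrization; this is illustrative, not any particular solver's scheme):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
X = M @ M.T + np.eye(3)          # strictly positive definite primal iterate
N = rng.standard_normal((3, 3))
Z = N @ N.T + np.eye(3)          # strictly positive definite dual slack

XZ = X @ Z
print(np.allclose(XZ, XZ.T))     # False for generic data: XZ is asymmetric

mu = np.trace(XZ) / 3            # duality measure mu = (X • Z) / n > 0
S = 0.5 * (XZ + XZ.T)            # one simple symmetrization of X Z
residual = np.linalg.norm(S - mu * np.eye(3))
print(mu > 0, residual >= 0)     # Newton steps drive this residual, and mu, to 0
```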

### Applications

SDP has many engineering applications, ranging from control theory to structural design. In particular, many hard optimization problems (with integer constraints) can be relaxed to problems with convex quadratic constraints which, in turn, can be formulated as SDPs. Such an SDP provides a polynomial-time approximation to the original hard problem, and approximations from SDP relaxations are usually better than those from linear programming. We illustrate this point with one application from graph theory: the *max-cut* problem, which has been instrumental in the recent surge in SDP research. This application also shows how an SDP arises as the relaxation of a problem with quadratic constraints.

Suppose that \(G = (V, E, W)\) is a weighted, undirected graph with vertices \(V = \{ v_i \}_{i=1}^n\) and edges \((v_i,v_j) \in E\) with weights \(w_{ij}\). The max-cut problem consists of finding an index set \(I \subset \{1,\ldots,n\}\) such that the total weight of the edges with one end point in \(I\) and the other end point in \(V \setminus I\) is maximized. This can be formulated as the following optimization problem:

\[

(MC) \quad t := \max_x \; \frac{1}{2} \sum_{i<j} w_{ij} (1 - x_i x_j), \quad x \in \{-1,+1\}^n

\]
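
For intuition, (MC) can be evaluated by brute force on a small graph: a term contributes \(w_{ij}\) exactly when \(x_i \neq x_j\), i.e. when the edge is cut. The 4-cycle with unit weights below is purely illustrative:

```python
import itertools
import numpy as np

# A 4-cycle with unit weights (x_i = +1 encodes vertex i on one side of the cut).
n = 4
W = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    W[i, j] = W[j, i] = 1.0

def cut_value(x):
    # (1/2) * sum_{i<j} w_ij (1 - x_i x_j)
    return 0.5 * sum(W[i, j] * (1 - x[i] * x[j])
                     for i in range(n) for j in range(i + 1, n))

best = max(cut_value(x) for x in itertools.product([-1, 1], repeat=n))
print(best)   # 4.0: x = (+1, -1, +1, -1) cuts all four edges
```

Enumerating all \(2^n\) sign vectors is only viable for tiny \(n\), which is exactly why the relaxation below matters.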

We associate \(x_i=1\) with \(i \in I\) and \(x_i=-1\) otherwise. The max-cut problem is known to be NP-hard. We replace the integer constraints with quadratic constraints of the form \(x_i^2 = 1\) and move these constraints to the objective to obtain a Lagrangian relaxation of the form

\[

(MC-LR) \quad t(\lambda) := \max_x x^T Q x + \sum_i \lambda_i(1 - x_i^2),

\]

where \(Q\) is a symmetric \(n \times n\) matrix determined by the edge weights, and the \(\lambda_i\) are Lagrange multipliers. The optimal value of (MC-LR) provides an upper bound on the optimal value of (MC), i.e., \(t \leq t(\lambda)\) for every \(\lambda\). In particular, the bound holds for the \(\lambda\) that minimizes \(t(\lambda)\), so we have

\[\begin{array}{lll}

t & \leq & \min_{\lambda} \; \max_x x^T Q x + \sum_i \lambda_i(1 - x_i^2) \\

& = & \min_{\lambda} \; \max_x x^T(Q - \mbox{Diag}(\lambda)) x + \lambda^T e

\end{array}\]

where \(e\) is the vector of all ones and \(\mbox{Diag}(\lambda)\) is a diagonal matrix whose \(i\)-th diagonal entry is \(\lambda_i\).

We now note that the inner maximization problem is unbounded unless the Hessian of the Lagrangian, \(Q - \mbox{Diag}(\lambda)\), is negative semidefinite; that is, we have the hidden semidefinite constraint \(Q - \mbox{Diag}(\lambda) \leq 0\). Moreover, once we add this constraint to the outer minimization problem, the inner maximization is attained at \(x = 0\). We have thus eliminated the variable \(x\) and the maximization part of the problem.
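
A minimal numerical sketch of this hidden constraint: when \(Q - \mbox{Diag}(\lambda) \leq 0\), the objective \(x^T Q x + \sum_i \lambda_i(1 - x_i^2) = x^T(Q - \mbox{Diag}(\lambda))x + \lambda^T e\) can never exceed \(\lambda^T e\), its value at \(x = 0\). The matrix \(Q\) below is arbitrary symmetric test data, not prescribed by the text:

```python
import numpy as np

Q = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 1.0],
              [0.5, 1.0, 0.0]])
# One feasible choice: lam_i = lambda_max(Q) forces Q - Diag(lam) <= 0.
lam = np.full(3, np.linalg.eigvalsh(Q).max())
H = Q - np.diag(lam)
assert np.all(np.linalg.eigvalsh(H) <= 1e-12)   # hidden constraint holds

# Sample many x: none beats the value lam^T e attained at x = 0.
rng = np.random.default_rng(1)
vals = [x @ Q @ x + lam @ (1 - x**2) for x in rng.standard_normal((1000, 3))]
print(max(vals) <= lam.sum() + 1e-9)   # True
```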

Therefore, the upper bound for (MC) can be obtained by solving the following SDP

\[\begin{array}{ll}

\mbox{minimize} & \lambda^T e \\

\mbox{subject to} & Q - \mbox{Diag}(\lambda) \leq 0

\end{array} \]

whose dual is

\[\begin{array}{ll}

\mbox{maximize} & Q \bullet X \\

\mbox{subject to} & \mbox{diag}(X) = e, \\

& X \geq 0

\end{array}\]

Here, \(\mbox{diag}(X)\) is a vector whose \(i\)-th entry is the \(i\)-th diagonal entry of the matrix \(X\). This pair of SDPs can now be solved with a primal-dual interior-point algorithm.
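
Even without an interior-point solver, any feasible \(\lambda\) certifies an upper bound on the max-cut value. A minimal sketch using the crude choice \(\lambda = \lambda_{\max}(Q)\, e\) on the illustrative 4-cycle, with the standard encoding \(Q = \frac{1}{4}(\mbox{Diag}(We) - W)\) (both the graph and the encoding are assumptions for this example):

```python
import itertools
import numpy as np

# 4-cycle with unit weights; Q = (1/4)(Diag(We) - W) so that x^T Q x equals
# the cut value (1/2) sum_{i<j} w_ij (1 - x_i x_j) whenever x is a +/-1 vector.
n = 4
W = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    W[i, j] = W[j, i] = 1.0
Q = 0.25 * (np.diag(W @ np.ones(n)) - W)

# Exact max-cut value by brute force over all +/-1 vectors.
brute = max(x @ Q @ x for x in itertools.product([-1, 1], repeat=n))

# lam = lambda_max(Q) * e makes Q - Diag(lam) <= 0, so lam^T e is feasible
# for the minimization SDP and hence an upper bound on the max-cut value.
bound = n * np.linalg.eigvalsh(Q).max()
print(brute, bound, brute <= bound + 1e-9)
```

Solving the SDP pair exactly (rather than using this one feasible \(\lambda\)) can only tighten the bound.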

## Online Resources

- Christoph Helmberg's Semidefinite Programming page
- Bibliography on Semidefinite Programming
- SDPLIB, a collection of semidefinite programming problems in SDPA sparse format
- DIMACS library of mixed semidefinite-quadratic programs
- Michal Kocvara's collection of large sparse SDPs

## Software Resources

The NEOS Server offers a number of Semidefinite Programming Solvers, which are listed below with a brief description. Christoph Helmberg's SDP Page also contains software resources.

### Semidefinite Programming Software on the NEOS Server

- CSDP
  - Availability: Public domain
  - Source available: Yes
  - Synopsis: Callable C code; primal-dual path-following method; implements the XZ search direction; Mehrotra predictor-corrector; supports sparse problems.
- DSDP
  - Availability: Public domain
  - Source available: Yes
  - Synopsis: Implemented in C and callable as a subroutine library or Matlab MEX function. Implements a dual-scaling interior-point method and carefully exploits sparsity and low-rank structure in the data.
- penbmi
  - Availability: Free developer license available for research
  - Source available: No
  - Synopsis: Callable from C/C++, Fortran, or Matlab. Implements an algorithm combining ideas of penalty and barrier methods with the augmented Lagrangian method.
- pensdp
  - Availability: Free developer license available for research
  - Source available: No
  - Synopsis: Implemented in C and callable as a subroutine library or Matlab MEX function. Implements a generalized augmented Lagrangian method.
- SDPA
  - Availability: Public domain
  - Source available: No
  - Synopsis: C code; primal-dual path-following method; implements several standard search directions; Mehrotra predictor-corrector; supports sparse problems; linear algebra based on the Meschach library.
- sdplr
  - Availability: Public domain
  - Source available: Yes
  - Synopsis: C code; implements a nonlinear, first-order algorithm based on low-rank factorization.
- SDPT3
  - Availability: Public domain
  - Source available: Yes
  - Synopsis: Matlab code; primal-dual path-following method; implements several standard search directions; Mehrotra predictor-corrector.
- SeDuMi
  - Availability: Public domain
  - Source available: Yes
  - Synopsis: Matlab and C code. Implements a primal-dual predictor-corrector scheme and the self-dual embedding technique.

## References

- Alizadeh, F. 1995. Interior-point methods in semidefinite programming with applications to combinatorial optimization. *SIAM Journal on Optimization*, **5**(1):13-51.
- Alizadeh, F., Haeberly, J.-P. A., and Overton, M. L. 1998. Primal-dual interior-point methods for semidefinite programming: convergence rates, stability and numerical results. *SIAM Journal on Optimization*, **8**(3):746-768.
- Lewis, A. S. and Overton, M. L. 1996. Eigenvalue optimization. *Acta Numerica*, **5**:149-190.
- Optimization Online Linear, Cone and Semidefinite Programming area
- Vandenberghe, L. and Boyd, S. 1996. Semidefinite programming. *SIAM Review*, **38**:49-95.
- Wolkowicz, H., Saigal, R., and Vandenberghe, L. 2000. *Handbook of Semidefinite Programming*. Kluwer Academic Publishers, Boston, MA.
- Wright, S. J. 1996. *Primal-Dual Interior-Point Methods*. SIAM, Philadelphia.