The announcement by Karmarkar in 1984 that he had developed a fast algorithm that generated iterates that lie in the interior of the feasible set (rather than on the boundary, as simplex methods do) opened up exciting new avenues for research in both the computational complexity and mathematical programming communities. Since then, there has been intense research into a variety of methods that maintain strict feasibility of all iterates, at least with respect to the inequality constraints. Although dwarfed in volume by simplex-based packages, interior-point products have emerged and have proven to be competitive with, and often superior to, the best simplex packages, especially on large problems.
The NEOS Server offers several solvers that implement interior-point methods, including bpmpd, MOSEK, and OOQP.
Here, we discuss only primal-dual interior-point algorithms, which are both effective in practice and the most amenable to theoretical analysis. To begin, recall that the standard form linear programming problem is \(\min\left\{c^T x \; : \; A x = b, \; x \geq 0\right\}\); its dual can be written as
\[\max\left\{b^T y \; : \; s = c - A^T y \geq 0 \right\}.\]
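As a concrete check of this primal-dual pair, the short script below solves a small standard-form problem and its dual with SciPy's linprog and confirms that the two optimal values coincide; the data A, b, c here are illustrative and not taken from the text.
<pre>
import numpy as np
from scipy.optimize import linprog

# A small standard-form LP (data chosen only for illustration):
#   min  c^T x   subject to   A x = b,  x >= 0
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 2.0, 1.0]])
b = np.array([4.0, 5.0])
c = np.array([2.0, 3.0, 1.0, 5.0])

# Solve the primal directly (variables nonnegative by the bounds).
primal = linprog(c, A_eq=A, b_eq=b, bounds=(0, None))

# Solve the dual  max b^T y  s.t.  A^T y <= c  (y free); linprog minimizes,
# so the objective is negated.
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=(None, None))

print("primal optimal value:", primal.fun)   # c^T x*
print("dual optimal value:  ", -dual.fun)    # b^T y*, equal at optimality
</pre>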
Then, the optimality conditions for \((x,y,s)\) to be a primal-dual solution triplet are that
\[A x = b, \quad A^T y + s = c, \quad S X e = 0, \quad (x,s) \geq 0,\]
where
\[X = \operatorname{diag}(x_1,\ldots,x_n), \quad S = \operatorname{diag}(s_1,\ldots,s_n),\]
and \(e\) is the vector of all ones.
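A minimal sketch of how these conditions can be monitored numerically: the hypothetical helper below (not part of the text) evaluates the residuals of the three equality conditions, all of which should approach zero as a method converges.
<pre>
import numpy as np

def kkt_residuals(A, b, c, x, y, s):
    # Residuals of the three equality conditions; all vanish at a solution.
    r_primal = A @ x - b          # A x = b
    r_dual   = A.T @ y + s - c    # A^T y + s = c
    r_comp   = x * s              # S X e = 0, i.e. x_i * s_i = 0 for each i
    return r_primal, r_dual, r_comp
</pre>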
Interior-point algorithms generate iterates \((x_k,y_k,s_k)\) such that \(x_k > 0\) and \(s_k > 0\). As \(k \to \infty\), the equality-constraint violations \(\|A x_k - b \|\) and \(\| A^T y_k + s_k - c\|\) and the duality gap \(x_k^T s_k\) are driven to zero, yielding a limiting point that solves the primal and dual linear programs.
Primal-dual methods can be thought of as a variant of Newton's method applied to the system of equations formed by the first three optimality conditions. Given the current iterate \((x_k,y_k,s_k)\) and the damping parameter \(\sigma_k \in [0,1]\), the search direction \((w_k,z_k,t_k)\) is generated by solving the linear system
\[\begin{bmatrix} A & 0 & 0 \\ 0 & A^T & I \\ S_k & 0 & X_k \end{bmatrix} \begin{bmatrix} w_k \\ z_k \\ t_k \end{bmatrix} = - \begin{bmatrix} A x_k - b \\ A^T y_k + s_k - c \\ S_k X_k e - \sigma_k \mu_k e \end{bmatrix},\]
where
\[\mu_k = x_k^T s_k / n,\]
\(X_k = \operatorname{diag}(x_k)\), and \(S_k = \operatorname{diag}(s_k)\).
The new point is then obtained by setting
\[x_{k+1} = x_k + \alpha_k^P w_k, \quad (y_{k+1}, s_{k+1}) = (y_k, s_k) + \alpha_k^D (z_k, t_k),\]
where \(\alpha_k^D\) and \(\alpha_k^P\) are chosen to ensure that \(x_{k+1} > 0\) and \(s_{k+1} > 0\).
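A minimal NumPy sketch of one such damped Newton step is given below, assuming dense data and a simple fraction-to-boundary rule (factor 0.995, an assumption, not from the text) for the step lengths. The full block system is solved directly so that the correspondence with the equations above stays explicit; practical codes instead eliminate variables and factor a reduced system.
<pre>
import numpy as np

def newton_step(A, b, c, x, y, s, sigma):
    # One damped primal-dual Newton step (illustrative sketch, dense algebra).
    m, n = A.shape
    mu = x @ s / n                                   # duality measure mu_k
    X, S, I = np.diag(x), np.diag(s), np.eye(n)

    # Block Newton system for (w, z, t) = (dx, dy, ds).
    K = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), A.T,              I               ],
        [S,                np.zeros((n, m)), X               ],
    ])
    rhs = -np.concatenate([A @ x - b,
                           A.T @ y + s - c,
                           x * s - sigma * mu * np.ones(n)])
    d = np.linalg.solve(K, rhs)
    w, z, t = d[:n], d[n:n + m], d[n + m:]

    # Step lengths keeping x_{k+1} > 0 and s_{k+1} > 0 (fraction to boundary).
    def max_step(v, dv):
        neg = dv < 0
        if not neg.any():
            return 1.0
        return min(1.0, 0.995 * np.min(-v[neg] / dv[neg]))

    alpha_p, alpha_d = max_step(x, w), max_step(s, t)
    return x + alpha_p * w, y + alpha_d * z, s + alpha_d * t
</pre>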
When \(\sigma_k=0\), the search direction is the pure Newton search direction for the nonlinear system
\[A x = b, \quad A^T y + s = c, \quad S X e = 0,\]
and the resulting method is an ''affine scaling'' algorithm. The effect of choosing positive values of \(\sigma_k\) is to orient the step away from the boundary of the nonnegative orthant defined by \(x \geq 0, \; s \geq 0\), thus allowing longer step lengths \(\alpha_k^P, \; \alpha_k^D\) to be taken. ''Path-following'' methods require \(\alpha_k^P, \; \alpha_k^D\) to be chosen so that \(x_k\) and \(s_k\) are not merely positive but also satisfy the centrality condition
\[(x_k,s_k) \in C_k,\]
where
\[C_k = \left\{ (x,s): x_i s_i \geq \gamma \mu_k, \; i=1,\ldots,n \right\}\]
for some \(\gamma \in (0,1)\). The other requirement on \(\alpha_k^P\) and \(\alpha_k^D\) is that the decrease in \(\mu_k\) should not outpace the improvement in feasibility (that is, the decrease in \(\| A x_k - b \|\) and \(\| A^T y_k + s_k - c \|\)). Greater priority is placed on attaining feasibility than on closing the duality gap. It is possible to design path-following algorithms satisfying these requirements for which the sequence \(\{ \mu_k \}\) converges to zero at a linear rate. Further, the number of iterates required for convergence is a polynomial function of the size of the problem (typically, order \(n\) or order \(n^{3/2}\)). By allowing the damping parameter \(\sigma_k\) to become small as the solution is approached, the method behaves more and more like a pure Newton method, and superlinear convergence can be achieved.
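As a rough illustration of the centrality requirement, the backtracking routine below shrinks a trial step until the new point is positive and lies in the neighborhood \(C_k\); the names, the value of \(\gamma\), and the use of a single common step length (rather than separate \(\alpha_k^P\), \(\alpha_k^D\)) are simplifying assumptions, and practical path-following codes use ratio tests rather than backtracking.
<pre>
import numpy as np

def centered_step(x, s, w, t, gamma=1e-3, shrink=0.9):
    # Largest backtracked step alpha for which (x + alpha*w, s + alpha*t)
    # stays positive and satisfies x_i * s_i >= gamma * mu for every i.
    alpha = 1.0
    for _ in range(100):
        xn, sn = x + alpha * w, s + alpha * t
        mu = xn @ sn / x.size
        if (xn > 0).all() and (sn > 0).all() and (xn * sn >= gamma * mu).all():
            return alpha
        alpha *= shrink
    return alpha
</pre>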