Linear Programming

Basic Concepts

The general form of a linear programming (LP) problem is to minimize a linear objective function of continuous real variables subject to linear constraints. For the purposes of describing and analyzing algorithms, the problem is often stated in standard form as
\[ \begin{array}{llll}
\min & c^T x & & \\
\mbox{s.t.} & A x & = & b \\
& x & \geq & 0
\end{array}
\] where \(x\) is the vector of unknown variables, \(c\) is the cost vector, and \(A\) is the constraint matrix. The matrix \(A\) is generally not square; therefore, solving the LP is not as simple as just inverting \(A\). Usually \(A\) has more columns than rows, so the system \(Ax = b\) is under-determined, leaving great latitude in the choice of \(x\) that minimizes \(c^T x\) over the feasible region.

The feasible region is a polyhedron determined by the set
\[\{x \in \mathbb{R}^n \, | \, Ax = b, x \geq 0\}.\]
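
To make the standard form concrete, here is a minimal sketch that solves a small standard-form LP with SciPy's linprog routine; the problem data is an invented example, not taken from the text above.

    import numpy as np
    from scipy.optimize import linprog

    # A small standard-form LP: minimize c^T x subject to A x = b, x >= 0.
    # The data encodes: min -x1 - 2*x2
    #   s.t. x1 + x2 + s1      = 4
    #        x1           + s2 = 2
    # where s1 and s2 are slack variables that put the problem in standard form.
    c = np.array([-1.0, -2.0, 0.0, 0.0])
    A = np.array([[1.0, 1.0, 1.0, 0.0],
                  [1.0, 0.0, 0.0, 1.0]])
    b = np.array([4.0, 2.0])

    # bounds=(0, None) enforces x >= 0 for every variable.
    res = linprog(c, A_eq=A, b_eq=b, bounds=(0, None))
    print(res.x)    # optimal solution: [0, 4, 0, 2]
    print(res.fun)  # optimal objective value: -8.0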

Any specification of values for the decision variables is a solution; a feasible solution is one that satisfies all of the constraints. An optimal solution is a feasible solution that attains the smallest value of the objective function (for a minimization problem). An LP may have one, more than one, or no optimal solutions. An LP has no optimal solution if it has no feasible solutions or if the objective function is unbounded over the feasible region.
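
As a sketch of how these outcomes surface in practice, SciPy's linprog reports them through the status field of its result; the tiny problems below are invented purely for illustration.

    from scipy.optimize import linprog

    # An unbounded LP: minimize -x subject only to x >= 0;
    # the objective decreases without limit as x grows.
    res = linprog(c=[-1.0], bounds=[(0, None)])
    print(res.status)  # 3: the problem is unbounded

    # An infeasible LP: the constraint x <= -1 contradicts x >= 0.
    res = linprog(c=[1.0], A_ub=[[1.0]], b_ub=[-1.0], bounds=[(0, None)])
    print(res.status)  # 2: no feasible solution exists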

In a linear program, a variable can take any continuous (fractional) value between its lower and upper bounds. For many applications, however, fractional values do not make sense. Integer programming (IP) problems are optimization problems in which the objective function and all of the constraint functions are linear, but some or all of the variables are constrained to take integer values. Integer programming models are often more realistic than linear programming models, but they are much more difficult to solve. Although it may not be obvious, integer programming is a far harder problem than linear programming, both in theory and in practice. The most widely used general-purpose techniques for solving IPs use the solutions to a series of LPs to manage the search for integer solutions and to prove optimality.
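
The interplay between an IP and its LP relaxation can be seen in a small sketch; assuming SciPy 1.9 or later (whose default HiGHS backend accepts an integrality argument), the same invented model is solved with and without the integrality restriction.

    import numpy as np
    from scipy.optimize import linprog

    # Invented knapsack-style model: maximize 5*x1 + 4*x2
    # (minimize the negation) subject to 6*x1 + 4*x2 <= 9, x >= 0.
    c = np.array([-5.0, -4.0])
    A_ub = np.array([[6.0, 4.0]])
    b_ub = np.array([9.0])

    # LP relaxation: the variables may take fractional values.
    lp = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    print(lp.x)  # fractional optimum: [0, 2.25]

    # Same model with both variables restricted to integers.
    ip = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None),
                 integrality=np.ones(2))
    print(ip.x)  # integer optimum: [0, 2]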

Solution Techniques

The importance of linear programming derives both from its many applications and from the existence of effective general purpose techniques for finding optimal solutions. These techniques are general purpose in that they take as input an LP and determine a solution without reference to any information concerning the origin of the LP or any special structure of the LP. They are fast and reliable over a substantial range of problem sizes and applications.

Although all linear programs can be converted to standard form, it is not usually necessary to do so to solve them. Most LP solvers can handle other forms such as

  • general bounds: \(l \leq x \leq u\), where \(l\) and \(u\) are vectors of known lower and upper bounds
  • two-sided constraints: \(b_1 \leq Ax \leq b_2\) for arbitrary \(b_1\) and \(b_2\)
  • maximization problems: equivalent to minimizing with the cost vector \(c\) multiplied by \(-1\) (see the sketch after this list)
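
As one illustration, SciPy's milp routine (SciPy 1.9 or later) accepts general bounds and two-sided constraints directly; the data below is invented, and setting integrality to zero keeps every variable continuous, so this is an ordinary LP.

    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    # Minimize c^T x subject to two-sided constraints b1 <= A x <= b2
    # and general bounds l <= x <= u. (A maximization would be posed
    # by negating c, as noted in the list above.)
    c = np.array([1.0, -2.0])
    A = np.array([[1.0, 1.0],
                  [1.0, -1.0]])
    b1 = np.array([0.0, -2.0])  # lower limits on A x
    b2 = np.array([3.0, 2.0])   # upper limits on A x
    l = np.array([0.0, 0.0])    # lower bounds on x
    u = np.array([5.0, 5.0])    # upper bounds on x

    res = milp(c=c,
               constraints=LinearConstraint(A, b1, b2),
               bounds=Bounds(l, u),
               integrality=np.zeros(2))
    print(res.x, res.fun)  # optimum: x = [0.5, 2.5], objective -4.5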

There are two families of techniques in wide use today: simplex methods and barrier or interior-point methods. Both generate an improving sequence of trial solutions until a solution is reached that satisfies the conditions for optimality. Simplex methods, introduced by George Dantzig in the 1940s, visit basic solutions computed by fixing enough of the variables at their bounds to reduce the constraints \(Ax = b\) to a square system, which can be solved for unique values of the remaining variables. Basic solutions represent extreme boundary points of the feasible region defined by \(Ax = b\), \(x \geq 0\), and the simplex method can be viewed as moving from one such point to another along the edges of the boundary.
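
To make the notion of a basic solution concrete, the sketch below reuses the invented standard-form data from the earlier example: it fixes two variables at zero and solves the remaining square system with NumPy.

    import numpy as np

    # Standard-form data: A is 2x4, so a basic solution fixes
    # 4 - 2 = 2 variables (the nonbasic variables) at their bound of zero.
    A = np.array([[1.0, 1.0, 1.0, 0.0],
                  [1.0, 0.0, 0.0, 1.0]])
    b = np.array([4.0, 2.0])

    basic = [1, 3]                    # choose x2 and x4 as basic variables
    B = A[:, basic]                   # square basis matrix
    x = np.zeros(4)
    x[basic] = np.linalg.solve(B, b)  # solve B x_B = b
    print(x)                          # [0, 4, 0, 2], a basic solution

    # All components are nonnegative, so this basic solution is feasible:
    # it is an extreme point (vertex) of the feasible region.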

Barrier or interior-point methods, by contrast, visit points within the interior of the feasible region. These methods derive from techniques for nonlinear programming that were developed and popularized in the 1960s by Anthony Fiacco and Garth McCormick. Their application to linear programming dates back to Narendra Karmarkar's innovative analysis in 1984.
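
As an illustration of the barrier idea, the following sketch uses an invented one-dimensional example: the nonnegativity constraints are replaced by logarithmic penalty terms, and the penalized minimizers trace a path through the interior toward an optimal vertex as the barrier parameter shrinks.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Tiny invented LP: min 2*x1 + x2  s.t.  x1 + x2 = 1, x >= 0.
    # Eliminating x2 = 1 - x1 leaves a one-dimensional problem on (0, 1).
    # The -mu*log terms blow up at the boundary, keeping iterates interior.
    def barrier(x1, mu):
        return 2*x1 + (1 - x1) - mu * (np.log(x1) + np.log(1 - x1))

    for mu in [1.0, 0.1, 0.01, 0.001]:
        res = minimize_scalar(barrier, args=(mu,),
                              bounds=(1e-9, 1 - 1e-9), method='bounded')
        print(mu, res.x)  # minimizers approach x1 = 0 as mu shrinks

    # As mu -> 0, the barrier minimizers converge to the LP optimum
    # x = (0, 1), the vertex with objective value 1.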