# Topics in Linear Programming


1. How can I determine whether or not an LP has a feasible solution?
From the standpoint of computational complexity, determining whether or not an LP has a feasible solution is essentially as hard as finding an optimal solution. From a practical point of view, you can determine whether an LP has a feasible solution by submitting the model to an LP solver with an arbitrary objective function (e.g., $c = 0$). Most simplex-based solvers can detect efficiently when no feasible solution exists.
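
As a minimal sketch of this check (using SciPy's HiGHS-based `linprog` rather than a commercial solver; the two-constraint model is invented for illustration), submit a zero objective and inspect the status code — SciPy reports status 2 for an infeasible problem:

```python
import numpy as np
from scipy.optimize import linprog

# Feasibility check: minimize a zero objective over the constraints.
# Here x1 + x2 <= 1 and x1 + x2 >= 3 cannot both hold for x >= 0.
A_ub = np.array([[1.0, 1.0],     # x1 + x2 <= 1
                 [-1.0, -1.0]])  # -(x1 + x2) <= -3, i.e., x1 + x2 >= 3
b_ub = np.array([1.0, -3.0])
c = np.zeros(2)                  # any objective works; c = 0 is simplest

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
print(res.status)  # 2 -> the solver reports the problem as infeasible
```
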
2. How do I diagnose an infeasible LP model?
A linear program (LP) is infeasible if no solution satisfies all of the constraints. While the infeasibility may be a legitimate feature of the system being modeled, more often than not it stems from a modeling error (in the specification of the constraints) or a data error. Most simplex-based LP solvers can detect efficiently when no feasible solution exists; however, the source of the infeasibility is often difficult to track down.

Many LP solvers provide diagnostic reports and tools to aid in the process. A common tool is a method to compute an irreducible infeasible subsystem (IIS). An IIS is a subset of the constraints and variable bounds of the original model. If all of the constraints in the model except those in the IIS are removed, the model is still infeasible. However, removing any one member of the IIS makes it feasible. The extent to which the IIS helps in identifying the source of the infeasibility depends in part on the size of the set.

As an alternative to diagnosing the infeasibility after solving, you can explicitly build potential infeasibilities into your model. A simple approach is to add a slack variable with a high penalty cost to each constraint that might contribute to infeasibility. Then, an optimal solution that includes one or more of the slack variables with positive value indicates infeasibility in the corresponding constraint(s). Explicitly modeling constraint violations is often recommended in practice since, in reality, many constraints can be violated for a sufficiently high price.
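
A minimal sketch of the penalty-slack approach, again with SciPy's `linprog` (the two-constraint model and the penalty weight `M` are illustrative): each constraint of an infeasible pair receives an elastic slack variable with cost `M`, and any slack that is positive at the optimum flags a constraint that had to be violated.

```python
import numpy as np
from scipy.optimize import linprog

M = 1000.0  # penalty per unit of constraint violation (illustrative choice)

# Original (infeasible) model: x1 + x2 <= 1 and x1 + x2 >= 3, with x >= 0.
# Elastic version: x1 + x2 - s1 <= 1 and x1 + x2 + s2 >= 3, with s >= 0.
# Variable order: [x1, x2, s1, s2].
c = np.array([0.0, 0.0, M, M])
A_ub = np.array([[1.0, 1.0, -1.0, 0.0],
                 [-1.0, -1.0, 0.0, -1.0]])
b_ub = np.array([1.0, -3.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4, method="highs")
slacks = res.x[2:]
print(res.status, slacks.sum())  # now feasible; total violation is 2.0
```

Note that which slack absorbs the violation can depend on the solver's choice among alternate optima; the total violation (here 2.0) is what the model pins down.
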

3. How do I repair an infeasible model?
As mentioned above, many LP solvers offer tools for identifying an IIS, which may help in resolving infeasibility. Methods also exist for (1) finding an IIS cover, a set containing at least one constraint from every IIS, and (2) finding a maximum-cardinality feasible subsystem. The latter can equivalently be viewed as finding the minimum number of linear constraints to remove so that the remaining constraints constitute a feasible subsystem. The following papers and the references therein provide details:
• Chinneck, J. W. 1996. An Effective Polynomial-Time Heuristic for the Minimum-Cardinality IIS Set-Covering Problem. Annals of Mathematics and Artificial Intelligence 17: 127-144.
• Chinneck, J. W. 2001. Fast Heuristics for the Maximum Feasible Subsystem Problem. INFORMS Journal on Computing 13(3): 210-223.

Some LP solvers go beyond just identifying the infeasibility and offer automated approaches for repairing the model. For example:

• CPLEX has the ConflictRefiner, a generalization of an IIS finder that detects minimal sets of mutually contradictory bounds and constraints, and FeasOpt, which accepts an infeasible model and selectively relaxes bounds and constraints in a way that minimizes a user-defined weighted penalty function.
• Gurobi has a method feasRelax that creates a relaxation that, when solved, minimizes the amount by which the solution violates the bounds and linear constraints of the model.
• Xpress has an Infeasibility Repair Utility that relaxes the constraints and bounds in the problem by introducing penalized deviation variables associated with selected rows and columns. A weighted sum of these variables is then minimized, yielding a solution that violates the constraints and bounds minimally with respect to the user's specified preferences.

4. How do I handle an LP with multiple objective functions?
If you have several objectives, then you may find that they cannot all be optimized by any one solution. Instead, you will need to look for a solution or solutions that achieve an acceptable tradeoff between objectives. Deciding what tradeoffs are acceptable is a topic of investigation in its own right. You might want to consult references on multiple criteria decision making or multiobjective optimization.
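
A common first approach is weighted-sum scalarization: optimize `lam * c1 + (1 - lam) * c2` for a sweep of weights to trace out candidate tradeoffs. A small sketch with SciPy's `linprog` (the two objectives and the single constraint are made up for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# Two competing objectives over the same feasible region x1 + x2 <= 4, x >= 0.
c1 = np.array([-3.0, -1.0])  # maximize 3*x1 + x2, written as a minimization
c2 = np.array([1.0, 2.0])    # minimize x1 + 2*x2
A_ub = np.array([[1.0, 1.0]])
b_ub = np.array([4.0])
bounds = [(0, None)] * 2

solutions = {}
for lam in (0.0, 0.5, 1.0):  # sweep the weight between the two objectives
    c = lam * c1 + (1 - lam) * c2
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    solutions[lam] = res.x
    print(lam, res.x, c1 @ res.x, c2 @ res.x)
```

An alternative design is lexicographic (preemptive) optimization: optimize the most important objective first, then optimize the next objective subject to a constraint holding the first at its optimal value.
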
5. I have an LP that has large almost-independent matrix blocks that are linked by a few constraints. Are there solution methods that can exploit this structure?
There are decomposition schemes to take advantage of special structure in LPs, and there are several ways to implement the approaches. However, before you invest time and effort into programming, we recommend that you solve the whole model as is with a state-of-the-art LP solver to determine if decomposition is necessary. In many cases, an LP solver will be able to solve it efficiently. In other cases, though, memory requirements may necessitate decomposition.

Dantzig-Wolfe Decomposition, developed by George Dantzig and Philip Wolfe in 1960, is a delayed column generation method for solving large-scale LPs with special structure, specifically a set of coupling constraints and a set of independent submatrices. To implement a Dantzig-Wolfe Decomposition approach, there are several options:

• use a modeling language (e.g., AIMMS, AMPL, GAMS) script to control the solution process and make calls to an LP optimizer. There is an AMPL example for Multi-Commodity Transportation. Dantzig-Wolfe Decomposition with GAMS describes how to implement the approach using GAMS. A disadvantage of this approach is that the overhead of exchanging data between the modeling language and the solver can be significant and lead to long solution times, especially if many iterations of the master problem are required.
• write a computer program to control the solution process and make calls to an LP optimizer using the solver's API. The commercial solvers tend to support interfaces for a variety of programming languages. The advantage of this approach is that the interface to the solver tends to be more efficient and use less memory than a modeling language. The disadvantage is that writing the program may take considerable effort, depending on your programming skills.
• use an open source tool to provide the framework for the solution process and specify customizations for your model as necessary. DIP: Decomposition for Integer Programming is an open source tool that is part of the COIN-OR initiative. Dantzig-Wolfe Solver is a solver built as an add-on to GLPK.
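
The delayed column generation mechanics behind Dantzig-Wolfe can be illustrated on the classic cutting-stock problem, which has exactly this block structure. The sketch below uses SciPy's `linprog` for the restricted master problem and a small unbounded-knapsack dynamic program for the pricing subproblem; the instance data are invented, and the dual values are read off as the negated `res.ineqlin.marginals` (SciPy reports marginals for the `<=` form of the constraints).

```python
import numpy as np
from scipy.optimize import linprog

W = 10                                 # roll width (illustrative instance)
widths = np.array([3, 5, 7])           # piece widths
demand = np.array([25.0, 20.0, 18.0])  # pieces required of each width

# Restricted master starts with one trivial pattern per piece width.
patterns = [np.eye(3)[i] * (W // widths[i]) for i in range(3)]

def solve_master(patterns):
    A = np.column_stack(patterns)      # column j = pieces cut by pattern j
    c = np.ones(A.shape[1])            # each roll cut with any pattern costs 1
    # Cover demand: A x >= demand, written as -A x <= -demand for linprog.
    res = linprog(c, A_ub=-A, b_ub=-demand,
                  bounds=[(0, None)] * A.shape[1], method="highs")
    return res, -res.ineqlin.marginals  # duals of the >= constraints

def price(duals):
    # Pricing: unbounded knapsack packing the most dual value into one roll.
    best = np.zeros(W + 1)
    take = -np.ones(W + 1, dtype=int)
    for cap in range(1, W + 1):
        for i, w in enumerate(widths):
            if w <= cap and best[cap - w] + duals[i] > best[cap]:
                best[cap] = best[cap - w] + duals[i]
                take[cap] = i
    pattern, cap = np.zeros(3), W      # reconstruct the best pattern
    while cap > 0 and take[cap] >= 0:
        pattern[take[cap]] += 1
        cap -= widths[take[cap]]
    return best[W], pattern

for _ in range(50):                    # iteration cap as a safeguard
    res, duals = solve_master(patterns)
    value, pattern = price(duals)
    if value <= 1.0 + 1e-9:            # no column with negative reduced cost
        break
    patterns.append(pattern)

print(res.fun)                         # LP lower bound on rolls needed
```

In a true Dantzig-Wolfe implementation the subproblems are themselves LPs over the independent blocks; the cutting-stock pricing step above plays that role.
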
6. Are there algorithms available to compute the convex hull of a finite number of points in $n$-dimensional space?
Yes. There are several software directories with links to available algorithms and implementations for computing convex hulls.
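
One widely used implementation is the Qhull library, which SciPy wraps as `scipy.spatial.ConvexHull`. A minimal sketch in two dimensions (the same call works in higher dimensions):

```python
import numpy as np
from scipy.spatial import ConvexHull  # wraps the Qhull library

# Unit-square corners plus one interior point; only the corners are extreme.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
hull = ConvexHull(pts)
print(sorted(hull.vertices))  # [0, 1, 2, 3] -> the interior point is excluded
```
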
7. Are there parallel algorithms for solving LPs?
As multi-core processor machines have become standard, options for parallel computation of various kinds have become a common feature of software for linear and mixed-integer programming. CPLEX, Gurobi, MOSEK, and Xpress all offer parallel barrier solvers and concurrent optimizers for LPs as well as branch-and-bound solvers for MIP that exploit multiple processors. Concurrent optimization starts multiple, independent solves on a model using different strategies for each. Optimization terminates when the first solve completes.
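
The concurrent-optimization idea can be sketched in a few lines: here two HiGHS variants available through SciPy (dual simplex, `highs-ds`, and interior point, `highs-ipm`) race on the same toy model, and the first to finish wins. A real concurrent optimizer would also abort the remaining solves, which this sketch omits; the model data are invented.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor, as_completed
from scipy.optimize import linprog

c = np.array([-2.0, -3.0])                 # maximize 2*x1 + 3*x2
A_ub = np.array([[1.0, 1.0], [2.0, 1.0]])
b_ub = np.array([10.0, 15.0])

def solve(method):
    return method, linprog(c, A_ub=A_ub, b_ub=b_ub,
                           bounds=[(0, None)] * 2, method=method)

# Race dual simplex against the interior-point (barrier) method.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(solve, m) for m in ("highs-ds", "highs-ipm")]
    winner, res = next(as_completed(futures)).result()

print(winner, res.fun)  # either strategy reaches the optimum of -30
```
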

For mixed integer programming, there are several parallel branch-cut-price frameworks available through COIN-OR.

• BCP is a parallel framework for implementing branch, cut, and price algorithms for solving mixed-integer linear programs.
• SYMPHONY is an open-source generic mixed-integer linear programming solver, callable library, and extensible framework for implementing customized solvers. SYMPHONY can be built in various sequential and parallel configurations for either distributed or shared memory architectures.
• CHiPPS is the COIN-OR Open Parallel Search Framework, a framework for implementing parallel algorithms based on tree search. The current CHiPPS architecture consists of three layers: (1) the Abstract Library for Parallel Search (ALPS) is the base layer, (2) the Branch, Constrain, and Price Software (BiCePS) is a data management layer, and (3) the BiCePS Linear Integer Solver (BLIS) is a concretization of the BiCePS layer for solving mixed-integer linear programs.

Performance evaluations of parallel solvers must be interpreted with care. One common measurement is the "speedup" defined as the time for solution using a single processor divided by the time using multiple processors. A speedup close to the number of processors is ideal in some sense, but it is only a relative measure. The greatest speedups tend to be achieved by the least efficient codes, and especially by those that fail to take advantage of the sparsity (predominance of zero coefficients) in the constraints. For problems having thousands of constraints, a sparse single-processor code will tend to be faster than a non-sparse multiprocessor code running on current-day hardware.

8. How do I do sensitivity analysis/post-optimality analysis?
Investigating the impact on the optimal solution of changes in the data inputs is important in practice because of the inherent uncertainty in the problem data. Sensitivity analysis considers the range within which the coefficients of the objective function and the right-hand side values of the constraints can vary without affecting the optimal solution. Post-optimality analysis deals with making changes to the problem data that affect feasibility and/or optimality and how to determine a new solution in an efficient way. Some references use the terms sensitivity analysis and post-optimality analysis interchangeably whereas others distinguish between the two.

Most LP textbooks describe how to do sensitivity analysis within the context of the simplex method tableau. The question of interest is: over what range of values of the objective function coefficients or right-hand side values does the current solution remain optimal? Post-optimality analysis covers the cases when feasibility is affected (by changes in the right-hand side values or by adding a new constraint) or when optimality is affected (by changes in the objective function coefficients or by adding a new variable); the choice of algorithm to obtain a new solution efficiently depends on the modifications made.
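
For LPs, the dual values (shadow prices) are the core sensitivity quantities: each gives the marginal change in the optimal objective per unit change in a right-hand side, as long as the basis stays optimal. A sketch with SciPy's HiGHS interface, which reports these as `res.ineqlin.marginals` (the model is illustrative):

```python
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -2.0])                 # maximize x1 + 2*x2
A_ub = np.array([[1.0, 1.0], [1.0, 3.0]])
b_ub = np.array([4.0, 6.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
print(res.fun, res.ineqlin.marginals)      # -5.0 at x=(3,1); duals (-0.5, -0.5)

# Verify numerically: relaxing the first right-hand side by 0.1
# should improve the objective by about 0.5 * 0.1 = 0.05.
res2 = linprog(c, A_ub=A_ub, b_ub=b_ub + np.array([0.1, 0.0]),
               bounds=[(0, None)] * 2, method="highs")
print(res2.fun - res.fun)                  # about -0.05
```
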

Most LP solvers have features to calculate sensitivity information for the objective function and the constraints, and most LP solvers have built-in procedures for determining the best algorithm (primal simplex, dual simplex, generalized simplex) for re-solving after modifying the problem. Note that this theory applies only to linear programming. The development of duality theory and sensitivity analysis for mixed integer programming has not received much attention since the 1970s and the 1980s. For a starting point in learning more about integer programming duality, see the following references:

• Wolsey, L. A. 1981. Integer Programming Duality: Price Functions and Sensitivity Analysis. Mathematical Programming 20: 173-195.
• Dawande, M. W. and Hooker, J. N. 2000. Inference-Based Sensitivity Analysis for Mixed Integer/Linear Programming. Operations Research 48(4): 623-634.
• Guzelsoy, M. and Ralphs, T. K. 2011. Integer Programming Duality. In Encyclopedia of Operations Research and Management Science, J. Cochran, ed. John Wiley and Sons, Inc.
9. Do I need to provide a starting point to the LP solver?
No. Simplex-based LP solvers construct an initial basic feasible solution for an LP based on the constraints and the objective function. Most solvers perform a two-phase simplex method, in which an auxiliary problem is solved in phase I to find an initial feasible solution or to identify the problem as infeasible. The built-in routines tend to be efficient enough that you do not need to provide a starting basis unless you have one from a previously solved, similar problem. The optimal basis from a similar problem can give a simplex-based solver a good start that may reduce the number of iterations to optimality. Commercial solvers generally provide options for saving an optimal basis and reusing it.

Interior-point-based LP solvers have their own strategies for selecting a starting point that is within the interior of the feasible region. Some codes provide an option for loading a user-supplied starting point.

10. How can I combat cycling in the Simplex Algorithms?
In a linear program with $n$ variables, a basic solution $x$ is said to be degenerate if more than $n$ of the constraints are active at $x$. As a result, a simplex pivot may produce a change of basis with no change in the basic feasible solution. A sequence of such basis changes may eventually uncover a cost-reducing feasible direction leading to a different basic feasible solution, but the sequence may instead return to a basis already visited. Cycling is the name given to such a sequence of basis changes that repeats indefinitely without leaving the current vertex.

Cycling was originally thought to be rare in practice; however, it has been observed particularly in solving highly structured LPs and LPs that arise as relaxations of integer programming problems. From a theoretical point of view, a number of pivot rules provably prevent cycling, for example, the lexicographic pivot rule and Bland's (smallest-subscript) pivot rule. These rules are not always compatible with efficient practical implementations of the simplex method; instead, most implementations use a variety of pricing strategies and other safeguards against cycling.

Should you find that your model is affected by cycling, you can try an alternate algorithm as a first step. Many solvers offer a choice of primal simplex, dual simplex, and a barrier solver for linear programming. If one algorithm gets stuck, you can try another algorithm from scratch or even with the most recent basis as a starting point. If you are not able to resolve the problem with a change in solver, you can consider a perturbation strategy in which a small change is made to the right-hand-side vector $b$. For details, consult one of the LP references, such as pages 389-390 of Numerical Optimization by Nocedal and Wright.
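
A sketch of the perturbation idea (the degenerate model and the perturbation size are illustrative): the vertex $(1, 1)$ below has three active constraints in two variables, and a tiny random perturbation of $b$ breaks the tie without materially moving the optimum.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
c = np.array([-1.0, -1.0])            # maximize x1 + x2
A_ub = np.array([[1.0, 0.0],
                 [0.0, 1.0],
                 [1.0, 1.0]])         # third row is redundant at the optimum
b_ub = np.array([1.0, 1.0, 2.0])      # vertex (1, 1) is degenerate: 3 active rows

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")

eps = 1e-7 * rng.random(3)            # small positive perturbation of b
res_p = linprog(c, A_ub=A_ub, b_ub=b_ub + eps,
                bounds=[(0, None)] * 2, method="highs")
print(res.x, res_p.x)                 # both optima lie (essentially) at (1, 1)
```
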