Bound-constrained optimization problems require minimizing an objective function subject to bound constraints on the values of the variables. In mathematical terms,
\[ \begin{array}{ll}
\mbox{minimize} & f(x) \\
\mbox{subject to} & l \leq x \leq u
\end{array}
\]
Bound-constrained optimization problems play an important role in the development of algorithms and software for the general constrained problem because many algorithms reduce the solution of the general problem to the solution of a sequence of bound-constrained problems. Bound-constrained problems also arise on their own in applications where the parameters that describe physical quantities are constrained to lie in a given range.
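As a concrete illustration, the sketch below solves a small instance of this problem with SciPy's L-BFGS-B solver. The quadratic objective, the bounds, and the starting point are made-up example data rather than anything from the text, and other bound-constrained solvers would serve equally well.

```python
import numpy as np
from scipy.optimize import minimize

# Example objective: a convex quadratic f(x) = 0.5*||x - c||^2 whose
# unconstrained minimizer c lies outside the box, so the solution is
# on the boundary (c and the bounds are illustrative choices).
c = np.array([2.0, -1.0, 0.5])

def f(x):
    return 0.5 * np.sum((x - c) ** 2)

def grad_f(x):
    return x - c

l = np.array([0.0, 0.0, 0.0])   # lower bounds l
u = np.array([1.0, 1.0, 1.0])   # upper bounds u

res = minimize(f, x0=np.zeros(3), jac=grad_f, method="L-BFGS-B",
               bounds=list(zip(l, u)))
print(res.x)  # approximately [1.0, 0.0, 0.5], the projection of c onto the box
```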
Optimality Conditions
Algorithms for the solution of bound-constrained problems seek a local minimizer \(x^* \,\) of \(f(x) \,\). The standard first-order necessary condition for a local minimizer \(x^* \,\) can be expressed in terms of the binding set \[B(x^*) = \{ i : x_i^* = l_i, \partial_i f(x^*) \geq 0 \} \cup \{ i : x_i^* = u_i, \partial_i f(x^*) \leq 0 \} \] at \(x^* \,\) by requiring that \[\partial_i f(x^*) = 0, \quad \forall i \not \in B(x^*).\]
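A minimal numerical sketch of this test, assuming the gradient \(g = \nabla f(x)\) is available and that equality with a bound is checked up to a small tolerance; the function names and the tolerance are illustrative, not part of the original text.

```python
import numpy as np

def binding_set(x, g, l, u, tol=1e-8):
    """B(x) = {i : x_i = l_i, g_i >= 0} union {i : x_i = u_i, g_i <= 0},
    where g is the gradient of f at x, tested up to a small tolerance."""
    at_lower = (np.abs(x - l) <= tol) & (g >= -tol)
    at_upper = (np.abs(x - u) <= tol) & (g <= tol)
    return np.where(at_lower | at_upper)[0]

def first_order_holds(x, g, l, u, tol=1e-8):
    """First-order condition: every partial derivative outside B(x) vanishes."""
    free = np.setdiff1d(np.arange(x.size), binding_set(x, g, l, u, tol))
    return bool(np.all(np.abs(g[free]) <= tol))
```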
A second-order sufficient condition for \(x^* \,\) to be a local minimizer of the bound-constrained problem is that the first-order condition holds and that
\[s^T \nabla^2 f(x^*) s > 0\] for all vectors \(s \not = 0\) with \(s_i = 0\) for \(i \in B_s(x^*)\), where \[B_s(x^*) = B(x^*) \cap \{i : \partial_i f(x^*) \not = 0 \}\]
is the strictly binding set at \(x^* \,\).
Given any set of free variables \(F \,\), we can define the reduced gradient and the reduced Hessian matrix, respectively, as the gradient and the Hessian matrix of \(f(x) \,\) with respect to the free variables. In this terminology, the second-order condition requires that the reduced gradient be zero and that the reduced Hessian matrix be positive definite when the set \(F \,\) of free variables consists of all the variables that are not strictly binding at \(x^* \,\). Many algorithms for the solution of bound-constrained problems use unconstrained minimization techniques to explore the reduced problem defined by a set \(F_k \,\) of free variables. Once this exploration is complete, a new set of free variables is chosen with the aim of driving the reduced gradient to zero.
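Continuing in the same sketch style, the strictly binding set, the reduced gradient, and the reduced Hessian can be checked as below; using a Cholesky factorization to test positive definiteness is one common numerical choice and is an assumption here, not something prescribed by the text.

```python
import numpy as np

def strictly_binding_set(x, g, l, u, tol=1e-8):
    """B_s(x): components at a bound whose partial derivative is strictly
    positive (at a lower bound) or strictly negative (at an upper bound)."""
    at_lower = (np.abs(x - l) <= tol) & (g > tol)
    at_upper = (np.abs(x - u) <= tol) & (g < -tol)
    return np.where(at_lower | at_upper)[0]

def second_order_check(x, g, H, l, u, tol=1e-8):
    """Second-order sufficient condition: the reduced gradient vanishes and the
    reduced Hessian is positive definite, with the free set F taken as the
    complement of the strictly binding set."""
    free = np.setdiff1d(np.arange(x.size), strictly_binding_set(x, g, l, u, tol))
    reduced_grad = g[free]
    reduced_hess = H[np.ix_(free, free)]
    if np.any(np.abs(reduced_grad) > tol):
        return False
    try:
        np.linalg.cholesky(reduced_hess)  # succeeds exactly for symmetric positive definite matrices
        return True
    except np.linalg.LinAlgError:
        return False
```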
Algorithms
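One algorithm of the kind mentioned in the references below is the gradient-projection method. As a hedged illustration only, the following sketch takes a steepest-descent step and projects it back onto the box; the fixed step size and the simple stopping rule are simplifying assumptions, and practical codes use line searches or trust regions instead.

```python
import numpy as np

def projected_gradient(grad_f, x0, l, u, step=1e-2, tol=1e-6, max_iter=10_000):
    """Minimal projected-gradient iteration: gradient step, then projection
    onto the box [l, u] via componentwise clipping."""
    x = np.clip(x0, l, u)
    for _ in range(max_iter):
        x_new = np.clip(x - step * grad_f(x), l, u)
        if np.linalg.norm(x_new - x) <= tol:  # stop when the iterates stall
            return x_new
        x = x_new
    return x
```

Applied to the quadratic example in the first sketch, `projected_gradient(grad_f, np.zeros(3), l, u)` converges to the same point returned by L-BFGS-B.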
References
The books of Fletcher (1987) and Gill, Murray, and Wright (1981) contain chapters on the solution of linearly constrained problems with specific details on the solution of bound-constrained problems. Both Newton and quasi-Newton methods are discussed, but neither book discusses the gradient-projection method, since its use in codes for the solution of large-scale problems is recent. Bertsekas (1996) has a section on the solution of bound-constrained problems with gradient-projection techniques, while Conn, Gould, and Toint (1992) discuss the approach used by the LANCELOT code.
- Bertsekas, D. P. 1996. Constrained Optimization and Lagrange Multiplier Methods, Athena Scientific, Belmont, MA.
- Conn, A. R., Gould, N. I. M., and Toint, P. L. 1992. LANCELOT, Springer Series in Computational Mathematics, Springer-Verlag, Berlin.
- Fletcher, R. 1987. Practical Methods of Optimization, 2nd ed., John Wiley & Sons, Inc., New York.
- Gill, P. E., Murray, W., and Wright, M. H. 1981. Practical Optimization, Academic Press, New York.