Nondifferentiable Optimization


Nondifferentiable optimization deals with problems in which the usual smoothness assumptions on the functions are relaxed, so that gradients need not exist everywhere. The functions may have kinks or corner points, and therefore cannot be approximated locally by a tangent hyperplane or by a quadratic model. Nondifferentiable optimization problems arise in a variety of contexts, such as rectilinear data fitting, problems involving \(\ell_1\) (least absolute deviations) or \(\ell_{\infty}\) (Chebyshev) norms, and algorithms such as exact penalty methods that convert constrained problems into unconstrained ones. Because the nonsmoothness manifests itself in many different ways, there is no "black box" solution technique; instead, methods are developed to exploit the particular structure of the problem.
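
As a concrete illustration of one such structure-exploiting method, the sketch below applies the classical subgradient method to an \(\ell_1\) (least absolute deviations) fitting problem, minimizing \(f(x) = \|Ax - b\|_1\). This is a minimal sketch under illustrative assumptions: the data, step-size rule, and iteration count are chosen for the example and are not part of the original text.

```python
import numpy as np

def l1_fit_subgradient(A, b, iters=5000):
    """Minimize f(x) = ||A x - b||_1 by the subgradient method.

    f is convex but nondifferentiable wherever a residual A x - b
    is exactly zero, so we use a subgradient in place of a gradient.
    """
    m, n = A.shape
    x = np.zeros(n)
    best_x, best_f = x.copy(), np.inf
    for k in range(1, iters + 1):
        r = A @ x - b
        f = np.abs(r).sum()
        # f need not decrease monotonically, so track the best iterate.
        if f < best_f:
            best_f, best_x = f, x.copy()
        # A^T sign(r) is a subgradient of ||A x - b||_1;
        # np.sign gives 0 at r_i = 0, a valid subgradient choice there.
        g = A.T @ np.sign(r)
        # Diminishing step size 1/k is a standard convergence safeguard.
        x = x - (1.0 / k) * g
    return best_x, best_f

# Illustrative usage: fit a line to data with one outlier, which an
# L1 (least absolute deviations) fit resists better than least squares.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
A = np.column_stack([t, np.ones_like(t)])
b = 2 * t + 1 + 0.01 * rng.standard_normal(50)
b[10] += 5.0  # outlier
x_hat, f_hat = l1_fit_subgradient(A, b)
print("slope, intercept:", x_hat)
```

Unlike gradient descent on a smooth function, the objective value here can rise from one iteration to the next, which is why the sketch keeps the best iterate found so far rather than returning the last one.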


