See Also: Unconstrained Optimization, Nonlinear Least-Squares Problems
The Gauss-Newton Method and Levenberg-Marquardt Method usually exhibit quadratic convergence for zero-residual (\(r(x^*)=0\)) problems; otherwise, the convergence is only linear. It would therefore seem that something could be gained by treating a nonlinear least-squares problem as a general unconstrained minimization problem and applying a quasi-Newton algorithm, since quasi-Newton algorithms are superlinearly convergent. A simple hybrid strategy, implemented in the PROC NLP algorithms, combines the Gauss-Newton and BFGS quasi-Newton algorithms. In this approach, the Gauss-Newton step is used if it decreases the value of the objective by a factor of 5; if not, the BFGS step is taken. The initial approximate Hessian for BFGS is taken to be the initial Gauss-Newton matrix \(f^\prime(x_0)^Tf^\prime(x_0)\). For zero-residual problems, Gauss-Newton steps are eventually taken at every iteration, and the iterates converge quadratically. For other problems, BFGS steps are eventually used and superlinear convergence is obtained.
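The following is a minimal sketch of such a hybrid strategy, not the PROC NLP implementation: the function names (hybrid_gn_bfgs, residual, jacobian), the Armijo backtracking line search for the BFGS step, the tolerances, and the small diagonal shift in the Gauss-Newton solve are assumptions made for illustration.

```python
import numpy as np

def hybrid_gn_bfgs(residual, jacobian, x0, max_iter=100, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    r, J = residual(x), jacobian(x)
    phi = 0.5 * r @ r                 # objective: half the sum of squared residuals
    B = J.T @ J                       # initial BFGS matrix = initial Gauss-Newton matrix
    for _ in range(max_iter):
        g = J.T @ r                   # gradient of the objective
        if np.linalg.norm(g) < tol:
            break
        # Candidate Gauss-Newton step (small shift guards against a singular J^T J).
        p_gn = np.linalg.solve(J.T @ J + 1e-12 * np.eye(x.size), -g)
        r_try = residual(x + p_gn)
        if 0.5 * r_try @ r_try <= phi / 5.0:   # GN step cuts the objective by a factor of 5
            x_new = x + p_gn
        else:                                  # otherwise take a BFGS step with backtracking
            p = np.linalg.solve(B, -g)
            t = 1.0
            while t > 1e-12:
                r_try = residual(x + t * p)
                if 0.5 * r_try @ r_try <= phi + 1e-4 * t * (g @ p):
                    break
                t *= 0.5
            x_new = x + t * p
        # Update the BFGS matrix so it remains available as a fallback model.
        s = x_new - x
        r_new, J_new = residual(x_new), jacobian(x_new)
        y = J_new.T @ r_new - g
        if y @ s > 1e-12:             # skip the update if curvature is not positive
            Bs = B @ s
            B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
        x, r, J, phi = x_new, r_new, J_new, 0.5 * r_new @ r_new
    return x
```

On a zero-residual problem, the factor-of-5 test eventually succeeds at every iteration, so the Gauss-Newton branch takes over and produces the quadratic convergence noted above; on large-residual problems the BFGS branch dominates.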
Another combination of the Gauss-Newton and quasi-Newton approaches appears in the algorithm implemented in the PORT 3 library. A trust-region approach is used with an approximate Hessian matrix of the form \(f^\prime(x_k)^Tf^\prime(x_k) + S_k\), where \(S_k\) is a quasi-Newton approximation to the second term in the true Hessian. Low-rank corrections are applied to \(S_k\) at each iteration, together with a scaling strategy that keeps this matrix small when the residuals are small. At each iteration, a decision is made whether to take the Gauss-Newton step or the step computed with the term \(S_k\) included.
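A rough sketch of the two ingredients just described is given below; it is illustrative only and is not the PORT 3 code. The damping parameter mu stands in for the trust-region machinery, the comparison of predicted reductions is a simplified stand-in for the actual rule for choosing between the two steps, and the symmetric rank-one correction, the sizing factor tau, and the vector y_sharp (the part of the gradient change that the Gauss-Newton matrix does not account for) are assumptions of the sketch.

```python
import numpy as np

def structured_step(J, r, S, mu):
    """Return either the Gauss-Newton step or the step from the augmented model
    J^T J + S, whichever quadratic model predicts the larger decrease in
    0.5*||r||^2 (a simplified stand-in for the actual selection rule)."""
    g = J.T @ r
    I = np.eye(J.shape[1])
    p_gn = np.linalg.solve(J.T @ J + mu * I, -g)        # Gauss-Newton step
    p_aug = np.linalg.solve(J.T @ J + S + mu * I, -g)   # step including S_k

    def pred(p, H):                                     # predicted reduction of the model
        return -(g @ p + 0.5 * p @ H @ p)

    return p_gn if pred(p_gn, J.T @ J) >= pred(p_aug, J.T @ J + S) else p_aug

def update_S(S, s, y_sharp):
    """Scale S, then apply a symmetric rank-one secant correction so that the
    updated S maps the step s to y_sharp."""
    # Sizing factor: shrinks S when y_sharp (and hence the residuals) is small.
    tau = min(1.0, abs(s @ y_sharp) / max(abs(s @ (S @ s)), 1e-30))
    S = tau * S
    v = y_sharp - S @ s
    denom = v @ s
    if abs(denom) > 1e-12 * np.linalg.norm(v) * np.linalg.norm(s):
        S = S + np.outer(v, v) / denom                  # rank-one secant correction
    return S
```

The sizing step above is what keeps the correction matrix small for small-residual problems, so that the augmented model collapses toward the plain Gauss-Newton model exactly when Gauss-Newton is already adequate.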