Adapting Craig's Method for Least-Squares Problems

  • Author: Eric Hallman
  • Description: A conjugate-gradient-type algorithm for solving sparse least-squares problems: \begin{align*} \min_x & \|Ax-b\|^2 \\ \text{or } \min_x & \|Ax-b\|^2 + \lambda^2 \|x\|^2, \end{align*} where the matrix \(A\) may have any dimension or rank. If \(A\) is rank-deficient, the solution is not unique and LS-Craig returns the minimum-norm solution.

    LS-Craig is based on the Golub-Kahan bidiagonalization process (a sketch of the recurrence appears below), and its iterates are convex combinations of the iterates produced by Craig's method and by LSQR. It aims to minimize the 2-norm of the error at every iteration.

    Although Craig's method minimizes the 2-norm error for consistent problems, it does not converge when run on inconsistent systems. LS-Craig patches this shortcoming by guaranteeing convergence whenever \(\lambda > 0\) or a nontrivial lower bound on the smallest singular value of \(A\) is known (see the damped example below).
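
    For reference, the sketch below illustrates the Golub-Kahan bidiagonalization recurrence that Craig's method, LSQR, and LS-Craig all build on. It is a simplified illustration (no reorthogonalization or breakdown handling), not the distributed LS-Craig code.

```matlab
% gk_bidiag.m -- minimal sketch of Golub-Kahan lower bidiagonalization.
% Produces U, V and the bidiagonal entries alpha_j, beta_j satisfying
%   beta_1 u_1       = b,
%   alpha_j v_j      = A'*u_j - beta_j*v_{j-1},
%   beta_{j+1} u_{j+1} = A*v_j - alpha_j*u_j.
% Illustration only; not the distributed LS-Craig code.
function [U, V, alphas, betas] = gk_bidiag(A, b, k)
    [m, n] = size(A);
    U = zeros(m, k+1);  V = zeros(n, k);
    alphas = zeros(k, 1);  betas = zeros(k+1, 1);

    betas(1) = norm(b);
    U(:, 1) = b / betas(1);
    for j = 1:k
        v = A' * U(:, j);
        if j > 1
            v = v - betas(j) * V(:, j-1);
        end
        alphas(j) = norm(v);
        V(:, j) = v / alphas(j);

        u = A * V(:, j) - alphas(j) * U(:, j);
        betas(j+1) = norm(u);
        U(:, j+1) = u / betas(j+1);
    end
end
```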

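    For the damped formulation with \(\lambda > 0\), the following sketch sets up the equivalent augmented least-squares problem with coefficient matrix \([A;\, \lambda I]\) and right-hand side \([b;\, 0]\), and solves it with MATLAB's built-in lsqr. It is only a baseline for comparison, not the LS-Craig implementation.

```matlab
% Sketch: solve min ||A*x - b||^2 + lambda^2*||x||^2 by applying MATLAB's
% built-in lsqr to the augmented system [A; lambda*I] x ~ [b; 0].
% Baseline for comparison only; not the LS-Craig implementation.
rng(0);
m = 200;  n = 120;
A = sprandn(m, n, 0.05);      % sparse test matrix (may be rank-deficient)
b = randn(m, 1);
lambda = 1e-2;                % damping parameter, lambda > 0

Aaug = [A; lambda * speye(n)];
baug = [b; zeros(n, 1)];
[x, flag, relres, iter] = lsqr(Aaug, baug, 1e-10, 500);
fprintf('iterations: %d, residual norm: %.3e\n', iter, norm(A*x - b));
```
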
    Downloads:
  • Slides from Fall 2018.
  • Matlab file for LS-Craig.

  • [Last modified: 8 Apr 2019]