Numerical Methods. Note 6: Matrix Diagonalization. Eigensystems.

The standard routines for all sorts of matrices can be found in EISPACK and in its newer replacement LAPACK.

Diagonalization. A vector v is called an eigenvector of a matrix A with an eigenvalue λ if Av = λv. Matrix diagonalization means finding all eigenvalues λi and (optionally) eigenvectors vi of the matrix A. If A is real symmetric (or hermitian) its eigenvalues are real and its eigenvectors V={v1,...,vn} form a full orthonormal basis in which the matrix A is diagonal: VTAV = diag{λ1,...,λn}.
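
As a quick numerical check (a minimal numpy sketch, not part of the original note, with an arbitrarily chosen matrix), the diagonalization property can be verified directly:

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])                     # a small real symmetric matrix
    eigenvalues, V = np.linalg.eigh(A)             # columns of V are the eigenvectors
    print(np.allclose(V.T @ A @ V,
                      np.diag(eigenvalues)))       # True: VTAV is diagonal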

Orthogonal transformations, A→QTAQ, where QTQ=1, and more generally similarity transformations, A→S-1AS, preserve the eigenvalues (the eigenvectors are transformed as v→S-1v). Therefore one strategy is to apply a sequence of orthogonal/similarity transformations which (iteratively) turn the matrix into diagonal or triangular form.
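
A minimal numpy sketch (illustrative only, with arbitrarily chosen matrices) showing that a similarity transformation leaves the eigenvalues unchanged:

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    S = np.array([[1.0, 2.0],
                  [0.0, 1.0]])                     # any invertible matrix
    B = np.linalg.inv(S) @ A @ S                   # similarity transformation A -> S^-1 A S
    print(np.sort(np.linalg.eigvals(A)))           # approximately [2. 5.]
    print(np.sort(np.linalg.eigvals(B)))           # the same eigenvalues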

QR/QL algorithm. If we know the QR (or QL) decomposition of a real symmetric matrix A, then the orthogonal transformation A → QTAQ = RQ partly turns the matrix A into diagonal form. Successive iterations eventually make it diagonal. If there are degenerate eigenvalues there will be a corresponding block-diagonal sub-matrix. For better convergence one uses shifts: instead of QR[A] we do QR[A-s·1] and then A → RQ+s·1. The shift s can be chosen as Ann. As soon as an eigenvalue is found the matrix is deflated, that is, the corresponding row and column are crossed out. If one accumulates the successive transformation matrices Qi into the total matrix Q=Q1···QN, such that QTAQ is diagonal, then the eigenvectors are the columns of the Q matrix. If only one (or a few) eigenvector vk is needed, one can instead solve the (singular) system (A-λk·1)x=0.
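
A minimal sketch of the shifted QR iteration described above (a hypothetical illustration, not the official routine), using numpy's built-in QR decomposition; deflation is omitted for brevity and A is assumed real symmetric:

    import numpy as np

    def qr_diagonalize(A, tol=1e-12, max_iter=10000):
        A = A.copy()                               # work on a copy of the real symmetric matrix
        n = A.shape[0]
        Q_total = np.eye(n)                        # accumulated transformations -> eigenvectors
        for _ in range(max_iter):
            off_diagonal = A - np.diag(np.diag(A))
            if np.max(np.abs(off_diagonal)) < tol: # stop when the matrix is (nearly) diagonal
                break
            s = A[n - 1, n - 1]                    # shift chosen as the last diagonal element
            Q, R = np.linalg.qr(A - s * np.eye(n)) # QR decomposition of the shifted matrix
            A = R @ Q + s * np.eye(n)              # A -> QT A Q with the shift restored
            Q_total = Q_total @ Q
        return np.diag(A), Q_total                 # eigenvalues and (columns of) eigenvectors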

Tridiagonalization. Each iteration of the QR/QL algorithm is an O(n3) operation. On a tridiagonal matrix it is only O(n). Therefore the effective strategy is first to make the matrix tridiagonal and then apply the QR/QL algorithm. Tridiagonalization of a matrix (typically done with a sequence of Householder reflections) is a non-iterative operation with a fixed number of steps.
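
A sketch of Householder tridiagonalization under the same assumptions (real symmetric A; all names illustrative); the returned matrix is tridiagonal and equals QT times the original A times Q, so it can be fed to the QR/QL iteration:

    import numpy as np

    def tridiagonalize(A):
        A = A.copy()                               # real symmetric matrix
        n = A.shape[0]
        Q = np.eye(n)
        for k in range(n - 2):
            x = A[k + 1:, k].copy()                # part of column k below the diagonal
            alpha = -np.linalg.norm(x) if x[0] >= 0 else np.linalg.norm(x)
            v = x
            v[0] -= alpha                          # v = x - alpha*e1 defines the reflection
            if np.linalg.norm(v) < 1e-15:
                continue                           # column is already in the desired form
            v /= np.linalg.norm(v)
            H = np.eye(n)
            H[k + 1:, k + 1:] -= 2.0 * np.outer(v, v)  # Householder reflection (orthogonal, H = HT)
            A = H @ A @ H                          # similarity transformation
            Q = Q @ H
        return A, Q                                # A is now tridiagonal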

The power method is an iterative method to calculate an eigenvalue and the corresponding eigenvector of a matrix A. One starts with a random vector x0 and then computes

xi = Axi-1, i = 1,2,... .
The iteration converges to the eigenvector of the largest (in absolute value) eigenvalue. The inverse power method, xi = A-1xi-1, converges to the smallest eigenvalue. Alternatively, the iteration xi = (A-λ·1)-1xi-1 converges to the eigenvalue closest to the given number λ. In practice one normalizes the vector at each step. The eigenvalue can then be estimated as xiTxi-1 (with xi taken before normalization).
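
A minimal sketch of the power method with normalization at each step (names illustrative, assumes real eigenvalues so that the estimate xiTxi-1 converges to the dominant eigenvalue):

    import numpy as np

    def power_method(A, tol=1e-12, max_iter=10000):
        x = np.random.rand(A.shape[0])             # random starting vector x0
        x /= np.linalg.norm(x)
        estimate = 0.0
        for _ in range(max_iter):
            y = A @ x                              # xi = A xi-1
            new_estimate = x @ y                   # eigenvalue estimate xiT xi-1 (before normalization)
            x = y / np.linalg.norm(y)              # normalize at each step
            if abs(new_estimate - estimate) < tol:
                break
            estimate = new_estimate
        return estimate, x                         # dominant eigenvalue and its eigenvector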

The inverse iteration method can be used when one has a good initial guess of the eigenvalue λ (and possibly of the eigenvector x0). The trick is not to invert the matrix, as in the inverse power method, but rather to solve the linear system (A-λ·1)xi = xi-1 using e.g. QR decomposition.
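
A minimal sketch of inverse iteration (names illustrative). Here each shifted system is solved with numpy's general solver; in practice one would factorize A - λ·1 once, e.g. by QR decomposition, and reuse the factorization in every step:

    import numpy as np

    def inverse_iteration(A, lam, tol=1e-12, max_iter=1000):
        # lam should be a guess, not an exact eigenvalue, so that A - lam*1 stays non-singular
        n = A.shape[0]
        x = np.random.rand(n)                      # initial guess x0 (could also be supplied)
        x /= np.linalg.norm(x)
        for _ in range(max_iter):
            y = np.linalg.solve(A - lam * np.eye(n), x)  # solve (A - lam*1) xi = xi-1
            y /= np.linalg.norm(y)
            if np.linalg.norm(y - np.sign(y @ x) * x) < tol:  # converged (up to an overall sign)
                x = y
                break
            x = y
        eigenvalue = x @ (A @ x)                   # Rayleigh quotient refines the eigenvalue estimate
        return eigenvalue, x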

Problems

  1. The spring constant (fjederkonstant) is to be determined from an experiment where the periods of harmonic oscillations of different masses hung on the spring were measured. The masses were m={0.0698, 0.0743, 0.1194}±0.0001 kg; the corresponding periods were T={0.78, 0.80, 1.00}±0.01 s. Estimate the spring constant (with its uncertainty, of course).
  2. (Non-obligatory) Implement the QR/QL algorithm for matrix diagonalization.
  3. Implement the inverse iteration algorithm for finding eigenvectors/eigenvalues.