QR algorithm

The QR algorithm is a numerical method for computing all eigenvalues, and possibly the eigenvectors, of a square matrix. The procedure, also called the QR method or QR iteration, is based on the QR decomposition and was introduced independently by John G. F. Francis and Vera Nikolaevna Kublanovskaya in 1961–1962. A precursor was the LR algorithm of Heinz Rutishauser (1958), which is less stable and is based on the LU decomposition. Often the iterates of the QR algorithm converge to the Schur form of the matrix. The original procedure is quite expensive and thus, even on today's computers, not practicable for matrices with hundreds of thousands of rows and columns.

Derived variants such as the multi-shift method of Z. Bai and James Demmel (1989) and the numerically more stable variant of K. Braman, R. Byers and R. Mathias (2002) achieve practical running times that are cubic in the size of the matrix. The latter method is implemented in the numerical software library LAPACK, which in turn is used in many computer algebra systems (CAS) for the numerical matrix algorithms.

Description of the QR algorithm

The aim of the calculation method

Starting from a real or complex matrix $A$, a sequence of orthogonal or unitary matrices $Q_k$ is determined such that the recursively defined matrix sequence $A_{k+1} = Q_k^* A_k Q_k$ converges, as far as possible, to an upper triangular matrix. Since all transformations in the recursion are similarity transformations, all matrices of the sequence have the same eigenvalues with the same multiplicities.

If an upper triangular matrix is reached in the limit, a Schur form of $A$, the eigenvalues can be read off its diagonal elements. In the case of a real matrix with orthogonal transformations there may, however, also be complex eigenvalues. Then the result of the process is an upper block triangular matrix whose diagonal blocks have size 1 for the real eigenvalues and size 2 for pairs of complex eigenvalues. Each eigenvalue together with its complex conjugate corresponds to such a $2\times 2$ diagonal block, which describes the associated rotation and scaling.

General scheme of the process

Starting from a matrix $A_1 := A$ (or $A_0 := A$, depending on the indexing), a sequence of matrices is determined according to the following scheme: in step $k$, a QR decomposition

$p_k(A_k) = Q_k R_k$

of a matrix function $p_k$ of the current iterate is computed, and the next iterate is obtained by the similarity transformation

$A_{k+1} = Q_k^* A_k Q_k.$

In this general form, the variants with (implicit) shifts and with multiple shifts can be presented and analyzed.

The matrix function $p_k$ is usually a polynomial (that is, a linear combination of matrix powers) with real (or complex) coefficients. In the simplest case a linear polynomial is chosen, which then has the form $p_k(x) = x - \mu_k$. In this case the algorithm simplifies to the classic version (with simple shift):

$A_k - \mu_k I = Q_k R_k, \qquad A_{k+1} = R_k Q_k + \mu_k I.$

If $\mu_k = 0$ is chosen in every iteration, the procedure coincides with the power method extended to subspaces (here the full vector space).

The polynomial controlling the QR decomposition can be fixed from the beginning or adapted dynamically in each iteration to the current matrix $A_k$. There have also been attempts to use rational functions for $p_k$, i.e. functions of the form $p_k(x) = g(x)\,h(x)^{-1}$ with polynomials $g$ and $h$.
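
As an illustration, one explicit step of this general scheme can be sketched in a few lines of Python/NumPy; the function name general_qr_step, the representation of $p_k$ by its roots (the shifts) and the choice of a single shift in the example are assumptions made purely for this sketch, not part of the method as described above.

    import numpy as np

    def general_qr_step(A, shifts):
        """One explicit step of the general scheme: evaluate p(A) as a product
        of linear factors, compute the QR decomposition p(A) = Q R and return
        the similarity transform Q* A Q (expensive, for illustration only)."""
        n = A.shape[0]
        pA = np.eye(n, dtype=complex)
        for mu in shifts:
            pA = pA @ (A - mu * np.eye(n))
        Q, _ = np.linalg.qr(pA)
        return Q.conj().T @ A @ Q

    # example: one step with a single shift taken from the last diagonal entry
    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 4.0]])
    A1 = general_qr_step(A, shifts=[A[-1, -1]])
    print(np.allclose(np.sort_complex(np.linalg.eigvals(A1)),
                      np.sort_complex(np.linalg.eigvals(A))))  # eigenvalues are preserved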

Deflation

If in some iteration all entries of a lower left block, consisting of the leading columns and the rows below them, fall in absolute value below a prescribed threshold, the process is continued separately on the two square diagonal sub-blocks formed by the corresponding rows and columns. If the matrices resulting from this splitting are small enough, the determination of the eigenvalues can be completed, for example by computing the characteristic polynomial and its zeros.
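
A minimal sketch of such a test, assuming the matrix is kept in Hessenberg form and using a common (here simply assumed) criterion that compares each subdiagonal entry with its two diagonal neighbours:

    import numpy as np

    def deflation_indices(H, tol=1e-12):
        """Indices r at which the Hessenberg matrix H may be split into the
        diagonal blocks H[:r, :r] and H[r:, r:], because the coupling entry
        H[r, r-1] has become negligibly small."""
        n = H.shape[0]
        return [r for r in range(1, n)
                if abs(H[r, r - 1]) <= tol * (abs(H[r - 1, r - 1]) + abs(H[r, r]))]

After such a split, the iteration is continued independently on the two diagonal blocks; blocks of size 1 or 2 directly yield eigenvalues.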

Transformation to Hessenberg form

The objective of the QR algorithm, to produce an upper triangular form or a block triangular form in which blocks of size 2 correspond to pairs of complex eigenvalues, can almost be achieved in a simple way by a similarity transformation to Hessenberg form, i.e. to a matrix with only one (not identically vanishing) lower subdiagonal.

  • Set $B := A$.
  • For $k = 1$ to $n-2$, determine a Householder reflection $P_k$ that eliminates the entries below the subdiagonal in the $k$-th column of $B$, and replace $B$ by $P_k B P_k$ (a sketch in code follows below).
  • Record the total transformation matrix $Q := P_1 P_2 \cdots P_{n-2}$.
  • $B = Q^* A Q$ is now in Hessenberg form.
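
A compact sketch of this reduction with explicit, unblocked Householder reflections (production codes such as LAPACK organize the same computation far more efficiently; the function name to_hessenberg is only illustrative):

    import numpy as np

    def to_hessenberg(A):
        """Reduce A to upper Hessenberg form B = Q^T A Q via Householder
        reflections; returns (B, Q)."""
        B = np.array(A, dtype=float)
        n = B.shape[0]
        Q = np.eye(n)
        for k in range(n - 2):
            x = B[k + 1:, k]
            sigma = np.linalg.norm(x)
            if sigma == 0.0:
                continue                      # column already has the desired zeros
            v = x.copy()
            v[0] += np.copysign(sigma, x[0])  # sign chosen to avoid cancellation
            v /= np.linalg.norm(v)
            P = np.eye(n)
            P[k + 1:, k + 1:] -= 2.0 * np.outer(v, v)   # Householder reflection P_k
            B = P @ B @ P                                # similarity transformation
            Q = Q @ P
        return B, Q

    A = np.random.default_rng(0).standard_normal((6, 6))
    B, Q = to_hessenberg(A)
    print(np.allclose(Q.T @ A @ Q, B), np.allclose(np.tril(B, -2), 0.0))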

The Hessenberg form also makes the determination of the characteristic polynomials of sub-matrices easier. The Hessenberg form of a symmetric matrix is a tridiagonal matrix, which speeds up the subsequent computations considerably.

Furthermore, the Hessenberg form is preserved in every step of the QR algorithm. This rests on the arithmetic of generalized triangular matrices, in which all entries below the $m$-th lower subdiagonal vanish. A Hessenberg matrix is thus a generalized triangular matrix with one subdiagonal. Under multiplication the numbers of nonzero subdiagonals add up; under addition, usually the larger of the two numbers is retained.

If $A_k$ has Hessenberg form, then $p_k(A_k)$, and with it the orthogonal matrix $Q_k$, has $m$ lower subdiagonals, where $m$ is the degree of $p_k$. Now, because of $A_{k+1} = Q_k^* A_k Q_k$, also

$Q_k A_{k+1} = A_k Q_k$

holds, and the latter product has at most $m+1$ lower subdiagonals. In general this is only possible if $A_{k+1}$ has exactly one lower subdiagonal, i.e. is again in Hessenberg form. From the decomposition of $p_k$ into linear factors it follows (see below) that this is indeed always the case.

Building on this, it can be shown (Francis 1962, presented following Bai/Demmel) that the whole step is already completely determined by the first column $x$ of $p_k(A_k)$, which is also proportional to the first column of $Q_k$. More precisely: if $U$ is an orthogonal matrix whose first column is proportional to $x$, the result of the step is obtained by transforming the matrix $U^* A_k U$ back to Hessenberg form. Since only the components of $x$ in rows $1, \dots, m+1$ are different from zero, $U$ can be taken as a modification of the identity matrix in an upper left $s \times s$ block, with $s > m$.

Variants of the QR algorithm

Simple QR iteration

Here $p_k(A_k) = A_k$ is chosen. The method can then be stated briefly as a QR decomposition $A_k = Q_k R_k$ followed by multiplying the factors together in reverse order, $A_{k+1} = R_k Q_k$. This method is a direct generalization of the simultaneous power method for determining the leading eigenvalues of a matrix. Indirectly, a simultaneous inverse power method is carried out as well.
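
A minimal sketch of this basic iteration (fixed number of steps, no deflation, purely illustrative):

    import numpy as np

    def simple_qr_iteration(A, steps=200):
        """Unshifted QR iteration: factor A_k = Q_k R_k, then set A_{k+1} = R_k Q_k."""
        Ak = np.array(A, dtype=float)
        for _ in range(steps):
            Q, R = np.linalg.qr(Ak)
            Ak = R @ Q
        return Ak

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    print(np.round(np.diag(simple_qr_iteration(A)), 6))       # approximate eigenvalues
    print(np.round(np.sort(np.linalg.eigvalsh(A))[::-1], 6))  # reference values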

QR algorithm with simple shifts

Here $p_k(x) = x - \mu_k$ is chosen. The method can thus alternatively be presented as a QR decomposition of $A_k - \mu_k I = Q_k R_k$ followed by multiplying the factors together in reverse order, corrected by the shift: $A_{k+1} = R_k Q_k + \mu_k I$. Usually one tries to approximate the eigenvalue of smallest modulus with the shift $\mu_k$. For this, the last diagonal element $(A_k)_{n,n}$ can be chosen. The simple QR iteration is obtained by setting all shifts to zero.

If $A_k$ has Hessenberg form, then $Q_k = (A_k - \mu_k I) R_k^{-1}$, as the product of a Hessenberg matrix and a matrix with no subdiagonals, must also have Hessenberg form. Therefore the same holds for $A_{k+1} = R_k Q_k + \mu_k I$. If $A$ is thus brought to Hessenberg form in the preparation of the QR algorithm, this form is maintained throughout the entire algorithm.
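
In explicit form, the iteration with the simple shift $\mu_k = (A_k)_{n,n}$ can be sketched as follows; the fixed step count and the absence of deflation are simplifications made only for this illustration:

    import numpy as np

    def qr_iteration_simple_shift(A, steps=30):
        """QR iteration with the shift mu_k = (A_k)_{nn}:
        A_k - mu_k*I = Q_k R_k,  A_{k+1} = R_k Q_k + mu_k*I."""
        Ak = np.array(A, dtype=float)
        I = np.eye(Ak.shape[0])
        for _ in range(steps):
            mu = Ak[-1, -1]                  # last diagonal element as shift
            Q, R = np.linalg.qr(Ak - mu * I)
            Ak = R @ Q + mu * I
        return Ak                            # the last row typically converges first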

Simple shifts for symmetric matrices

A symmetric real matrix has only real eigenvalues. The symmetry is preserved in all iterates of the QR algorithm. For symmetric matrices, Wilkinson (1965) proposed choosing as shift that eigenvalue of the trailing sub-matrix

$\begin{pmatrix} (A_k)_{n-1,n-1} & (A_k)_{n-1,n} \\ (A_k)_{n,n-1} & (A_k)_{n,n} \end{pmatrix}$

which lies closer to the last diagonal element $(A_k)_{n,n}$. Wilkinson showed that the matrix sequence determined in this way converges to a diagonal matrix whose diagonal elements are the eigenvalues of $A$. The rate of convergence is quadratic.
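
A standard closed form for this shift, written here as an assumed helper for the entries $a = (A_k)_{n-1,n-1}$, $b = (A_k)_{n,n-1}$ and $c = (A_k)_{n,n}$ of the trailing block (the cancellation-avoiding formulation follows common textbook practice, not the references above):

    import numpy as np

    def wilkinson_shift(a, b, c):
        """Eigenvalue of the symmetric 2x2 block [[a, b], [b, c]] that lies
        closer to c; used as the shift mu_k in the symmetric QR iteration."""
        if b == 0.0:
            return c
        delta = (a - c) / 2.0
        sign = 1.0 if delta >= 0.0 else -1.0
        return c - sign * b * b / (abs(delta) + np.hypot(delta, b))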

QR algorithm with double shifts

A pair of simple shifts can be combined into a single iteration. As a consequence, complex shifts can be avoided in the arithmetic for real matrices. In the notation above, a QR decomposition is determined for the quadratic polynomial

$p_k(x) = (x - \mu_k)(x - \nu_k)$

evaluated at $A_k$. The coefficients of this polynomial are real even for a conjugate pair of complex shifts $\nu_k = \bar{\mu}_k$. Thus complex eigenvalues of real matrices can be approximated without complex numbers appearing in the computation.

A common choice for this double shift consists of the two eigenvalues of the lower right $2\times 2$ block of $A_k$, i.e. the quadratic polynomial is the characteristic polynomial of this block,

$p_k(x) = x^2 - \big((A_k)_{n-1,n-1} + (A_k)_{n,n}\big)\,x + \big((A_k)_{n-1,n-1}(A_k)_{n,n} - (A_k)_{n-1,n}(A_k)_{n,n-1}\big).$
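
In explicit (and therefore inefficient) form, one such double-shift step could be sketched as follows; the trace $s$ and determinant $t$ of the trailing block encode the two shifts, so the computation stays real:

    import numpy as np

    def explicit_double_shift_step(A):
        """One explicit double-shift step: p(A) = A^2 - s*A + t*I, where s and t
        are the trace and determinant of the trailing 2x2 block. The practical
        implicit ("bulge chasing") formulation avoids forming p(A) altogether."""
        n = A.shape[0]
        s = A[n - 2, n - 2] + A[n - 1, n - 1]
        t = A[n - 2, n - 2] * A[n - 1, n - 1] - A[n - 2, n - 1] * A[n - 1, n - 2]
        pA = A @ A - s * A + t * np.eye(n)
        Q, _ = np.linalg.qr(pA)
        return Q.T @ A @ Q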

QR algorithm with multiple shifts

A number $m$ is chosen that is greater than 2 but much smaller than the size $n$ of the matrix $A$. The polynomial $p_k$ can be chosen as the characteristic polynomial of the lower right $m\times m$ sub-matrix of the current matrix $A_k$. Another strategy is to determine the $s > m$ eigenvalues of a lower right $s\times s$ block and to select among them the $m$ eigenvalues of smallest modulus as shifts $\mu_1, \dots, \mu_m$. With these, a QR decomposition of

$p_k(A_k) = (A_k - \mu_1 I) \cdots (A_k - \mu_m I)$

is then determined.
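
An explicit multi-shift step along these lines might be sketched as follows; the block size $s = 2m$ and the use of complex arithmetic when forming $p_k(A_k)$ are simplifying assumptions made only for this illustration:

    import numpy as np

    def explicit_multishift_step(A, m):
        """Take as shifts the m eigenvalues of smallest modulus of a trailing
        s x s block (here s = 2m), form p(A) as a product of linear factors and
        apply the QR-based similarity transformation."""
        n = A.shape[0]
        s = min(2 * m, n)
        ev = np.linalg.eigvals(A[n - s:, n - s:])
        shifts = ev[np.argsort(np.abs(ev))[:m]]
        pA = np.eye(n, dtype=complex)
        for mu in shifts:
            pA = pA @ (A - mu * np.eye(n))
        Q, _ = np.linalg.qr(pA)
        return Q.conj().T @ A @ Q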

With a multi-shift method it is often achieved that the components of a lower left block become very small very quickly in the sequence of iterated matrices, so that a splitting of the eigenvalue problem (deflation) is obtained.

Implicit multi-shift iteration

Carrying out multiple shifts in the general form above is very expensive. As mentioned earlier, the cost can be reduced by bringing the matrix into Hessenberg form in a preliminary step. Since each multi-shift step can be assembled from individual (possibly complex) shifts, the Hessenberg form is preserved throughout the algorithm.

The QR step can be converted into a "bulge chasing" algorithm, which creates a bulge in the Hessenberg form at the upper end of the diagonal and then "chases" it down the diagonal and out of the matrix at the lower end.
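
The start of such a step can be sketched for a double shift: only the first column $x$ of $p_k(A_k)$ is needed, and for a Hessenberg matrix only its first three entries are nonzero, so a single small Householder reflection creates the bulge. The subsequent chase, which restores the Hessenberg form column by column with further small reflections, is omitted here; by the implicit Q theorem it reproduces (essentially) the explicit double-shift step.

    import numpy as np

    def start_double_shift_bulge(H):
        """Create the initial bulge of an implicit double-shift step on the
        Hessenberg matrix H; the shifts are the eigenvalues of the trailing
        2x2 block, encoded by its trace s and determinant t."""
        n = H.shape[0]
        s = H[-2, -2] + H[-1, -1]
        t = H[-2, -2] * H[-1, -1] - H[-2, -1] * H[-1, -2]
        x = H @ H[:, 0] - s * H[:, 0] + t * np.eye(n)[:, 0]  # first column of p(H)
        v = x[:3].copy()                                     # only these entries are nonzero
        norm_v = np.linalg.norm(v)
        if norm_v == 0.0:
            return H.copy()                                  # degenerate case: nothing to do
        v[0] += np.copysign(norm_v, v[0])
        v /= np.linalg.norm(v)
        U = np.eye(n)
        U[:3, :3] -= 2.0 * np.outer(v, v)                    # Householder reflection
        return U @ H @ U     # Hessenberg form is disturbed only in a small "bulge"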

If $Q$ is assembled from Householder reflections, each of these has block diagonal form, i.e. it coincides with the identity matrix except for a small orthogonal block on the diagonal.

Notes on Operation

Similarity transformations

The matrices computed in the QR algorithm are unitarily similar to one another, since

$A_{k+1} = Q_k^* A_k Q_k = (Q_1 Q_2 \cdots Q_k)^* A \,(Q_1 Q_2 \cdots Q_k)$

holds. Therefore all matrices $A_k$ have the same eigenvalues (counted with their algebraic and geometric multiplicities).

Choice of shifts, convergence

A simple but not very effective choice is to take all shifts identically zero. The iterates of the resulting algorithm, the QR algorithm in its basic form, converge essentially, provided all eigenvalues differ in magnitude, to an upper triangular matrix with the eigenvalues on the diagonal. "Essentially" means here that the elements of the lower triangle tend to zero and the diagonal elements tend to the eigenvalues; nothing is said about the elements in the upper triangle.

If the shift is chosen as the matrix element at the bottom right of the current iterate, i.e. $\mu_k = (A_k)_{n,n}$, the algorithm converges quadratically, or under suitable circumstances even cubically. This shift is known as the Rayleigh quotient shift, since it is related to a Rayleigh quotient via the inverse iteration.

To keep the computation in the real numbers, one would like to compute the real Schur form and still be able to handle complex conjugate eigenvalues. For this there are different shift strategies.

An extension of the simple shifts is the Wilkinson shift, named after James Hardy Wilkinson. For the Wilkinson shift, that eigenvalue of the trailing $2\times 2$ principal sub-matrix

$\begin{pmatrix} (A_k)_{n-1,n-1} & (A_k)_{n-1,n} \\ (A_k)_{n,n-1} & (A_k)_{n,n} \end{pmatrix}$

which lies closer to the last matrix element $(A_k)_{n,n}$ is used.
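
As a short sketch, this shift can be computed directly from the trailing $2\times 2$ block, for example as follows (the dense eigenvalue call is used purely for brevity):

    import numpy as np

    def wilkinson_shift_2x2(H):
        """Eigenvalue of the trailing 2x2 principal sub-matrix of H that lies
        closer to the last diagonal element H[-1, -1]."""
        ev = np.linalg.eigvals(H[-2:, -2:])
        return ev[np.argmin(np.abs(ev - H[-1, -1]))]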

The QR algorithm as an extension of the power method

To analyze the algorithm, define in addition the accumulated products $\bar{Q}_k = Q_1 Q_2 \cdots Q_k$ and $\bar{R}_k = R_k \cdots R_2 R_1$. Products of orthogonal or unitary matrices are again orthogonal or unitary, just as products of right upper triangular matrices are again right upper triangular. The matrices of the QR iteration are obtained by similarity transformations of $A$, because

$A_{k+1} = \bar{Q}_k^* A \bar{Q}_k.$

From this one concludes QR decompositions of the powers of $A$ (for the simple iteration, $p_k(x) = x$):

$A^k = \bar{Q}_k \bar{R}_k.$

In the course of the algorithm, QR decompositions of the powers of the matrix $A$ are thus determined implicitly. Because of the shape of these decompositions, for each $j = 1, 2, \dots, n$ the first $j$ columns of $\bar{Q}_k$ span the same subspace as the first $j$ columns of $A^k$; in addition, the columns of $\bar{Q}_k$ are orthonormal to each other. This, however, is exactly the situation after the $k$-th step of a simultaneous power iteration. The diagonal elements of $A_{k+1} = \bar{Q}_k^* A \bar{Q}_k$ are therefore approximations to the eigenvalues of $A$. Hence the convergence properties of the power iteration carry over:
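
The relation between the accumulated factors and the powers of $A$ (here for the simple iteration without shifts) can be checked numerically in a few lines:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4))

    Ak = A.copy()
    Qbar = np.eye(4)     # accumulated product Q_1 Q_2 ... Q_k
    Rbar = np.eye(4)     # accumulated product R_k ... R_2 R_1
    for k in range(1, 6):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
        Qbar = Qbar @ Q
        Rbar = R @ Rbar
        # Qbar @ Rbar is (up to rounding) a QR decomposition of A^k
        print(k, np.allclose(Qbar @ Rbar, np.linalg.matrix_power(A, k)))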

The simple QR algorithm converges only if the moduli of all eigenvalues differ from one another. If the eigenvalues are sorted according to

$|\lambda_1| > |\lambda_2| > \dots > |\lambda_n|,$

the speed of convergence is linear, with a contraction factor determined by the quotients of consecutive eigenvalue moduli $|\lambda_{j+1}| / |\lambda_j|$; the quotient closest to 1 governs the overall speed.

In particular, for real matrices this algorithm can only converge if all eigenvalues are real (otherwise complex conjugate pairs of equal modulus would exist). This situation is given for all symmetric matrices.

The QR algorithm as a simultaneous inverse power iteration

If $A$ is invertible, then for the transpose (for complex matrices, the Hermitian adjoint) of the inverse of $A$ and all of its powers one has

$\big((A^{-1})^*\big)^k = \big((A^k)^*\big)^{-1} = \bar{Q}_k \big(\bar{R}_k^*\big)^{-1}.$

The inverse of a right upper triangular matrix is again a right upper triangular matrix, and its transpose is a left lower triangular matrix. Thus the QR algorithm also determines a QL factorization of the powers of $(A^{-1})^*$. From the form of this decomposition it is evident that for each $j = 1, 2, \dots, n$ the last $j$ columns of $\bar{Q}_k$ span the same subspace as the last $j$ columns of $\big((A^{-1})^*\big)^k$. In the last column of $\bar{Q}_k$ an inverse power iteration is thus carried out, so this column converges to the dual eigenvector associated with the eigenvalue of $A$ of smallest modulus. The corresponding product, the lower right component of $A_{k+1} = \bar{Q}_k^* A \bar{Q}_k$, is therefore the so-called Rayleigh quotient of the inverse power iteration. The convergence properties follow analogously to the above.
