Modified Richardson iteration

In numerical mathematics, the Richardson method is an algorithm for the approximate solution of systems of linear equations. Like the Gauss-Seidel method, it belongs to the class of splitting methods. As an iterative method, it approaches a solution of the linear system step by step.

Richardson method

Choosing M = I (the identity matrix) in the general fixed-point equation of splitting methods,

x^{(k+1)} = x^{(k)} + M^{-1} (b - A x^{(k)}),

yields the Richardson method

x^{(k+1)} = x^{(k)} + (b - A x^{(k)}), \quad k = 0, 1, 2, \dots

Like every fixed-point method of this kind, it converges if the spectral radius of the iteration matrix I - A is strictly smaller than one.
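
A minimal sketch of this iteration in Python with NumPy; the function name `richardson`, the test matrix, and the stopping tolerance are illustrative choices, not part of the method itself:

```python
import numpy as np

def richardson(A, b, x0=None, max_iter=1000, tol=1e-10):
    """Basic Richardson iteration: x_{k+1} = x_k + (b - A x_k)."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    for _ in range(max_iter):
        r = b - A @ x              # residual
        if np.linalg.norm(r) < tol:
            break
        x = x + r                  # unweighted update (M = I)
    return x

# Example: the spectral radius of I - A is 0.5 < 1, so the iteration converges.
A = np.array([[0.7, 0.2], [0.2, 0.7]])
b = np.array([1.0, 2.0])
x = richardson(A, b)
print(np.allclose(A @ x, b))       # True
```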

Relaxed Richardson method

The iteration rule of the relaxed Richardson method is

x^{(k+1)} = x^{(k)} + \theta (b - A x^{(k)}), \quad k = 0, 1, 2, \dots,

so in each step the residual b - A x^{(k)} is weighted by a factor \theta. If a symmetric positive definite matrix A is considered, the optimal relaxation parameter is

\theta_{\mathrm{opt}} = \frac{2}{\lambda_{\min} + \lambda_{\max}},

where \lambda_{\min} and \lambda_{\max} denote the minimum and maximum eigenvalues of A. For the convergence radius (equal to the spectral radius) one then obtains

\rho = \frac{\kappa(A) - 1}{\kappa(A) + 1},

where \kappa(A) denotes the condition number of the matrix A. The relaxed Richardson method thus converges just as "fast" as the gradient method for symmetric matrices, but a relaxation parameter has to be computed. In return, the Richardson method can still be forced to converge for unsymmetric matrices with complex eigenvalues, as long as their real parts are all positive.
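
A sketch of the relaxed iteration with the optimal parameter for a symmetric positive definite matrix; computing the exact eigenvalues with `np.linalg.eigvalsh` is an illustrative shortcut, since in practice one would only estimate the extreme eigenvalues:

```python
import numpy as np

def relaxed_richardson(A, b, theta, x0=None, max_iter=10000, tol=1e-10):
    """Relaxed Richardson iteration: x_{k+1} = x_k + theta * (b - A x_k)."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            break
        x = x + theta * r          # residual weighted by theta
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])
lam = np.linalg.eigvalsh(A)              # eigenvalues in ascending order
theta_opt = 2.0 / (lam[0] + lam[-1])     # 2 / (lambda_min + lambda_max)
x = relaxed_richardson(A, b, theta_opt)
print(np.allclose(A @ x, b))             # True
```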

The method is also suitable as a smoother in multigrid methods.

Cyclic Richardson method

The convergence can be improved significantly by combining several steps of the iteration with different parameters. One cyclically performs m steps

x^{(k+1)} = x^{(k)} + \theta_k (b - A x^{(k)}), \quad k = 0, \dots, m-1.

The method converges if the spectral radius of the matrix polynomial

p(A) = \prod_{k=0}^{m-1} (I - \theta_k A)

is less than one, and the smaller it is, the faster the convergence. For a real matrix A with positive eigenvalues in the interval [\lambda_{\min}, \lambda_{\max}], the spectral radius can be estimated by the maximum of the polynomial p(\lambda) = \prod_{k=0}^{m-1} (1 - \theta_k \lambda) on that real interval. This maximum becomes particularly small if the relaxation parameters are chosen such that their reciprocals are exactly the zeros of the Chebyshev polynomial T_m shifted to [\lambda_{\min}, \lambda_{\max}]:

\frac{1}{\theta_k} = \frac{\lambda_{\max} + \lambda_{\min}}{2} + \frac{\lambda_{\max} - \lambda_{\min}}{2} \cos\left(\frac{(2k+1)\pi}{2m}\right), \quad k = 0, \dots, m-1.

Then the convergence statement for symmetric matrices and a cycle of length m improves to

\rho(p(A)) \le \frac{2c^m}{1 + c^{2m}}, \quad c = \frac{\sqrt{\kappa(A)} - 1}{\sqrt{\kappa(A)} + 1}.

For realistic problems this represents a great improvement over the simple relaxed method, since only the square root of the condition number enters the bound.
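
A sketch of one possible implementation of the cyclic scheme, assuming known bounds \lambda_{\min}, \lambda_{\max} on the spectrum; the cycle length m = 8 and the fixed number of cycles are illustrative choices:

```python
import numpy as np

def cyclic_richardson(A, b, lam_min, lam_max, m=8, cycles=20):
    """Cyclic Richardson iteration with Chebyshev-based parameters.

    The reciprocals 1/theta_k are the zeros of the Chebyshev polynomial
    T_m shifted to the eigenvalue interval [lam_min, lam_max].
    """
    k = np.arange(m)
    zeros = (lam_max + lam_min) / 2 + \
            (lam_max - lam_min) / 2 * np.cos((2 * k + 1) * np.pi / (2 * m))
    thetas = 1.0 / zeros
    x = np.zeros_like(b)
    for _ in range(cycles):
        for theta in thetas:        # one cycle of m weighted steps
            x = x + theta * (b - A @ x)
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
b = np.array([1.0, 2.0])
lam = np.linalg.eigvalsh(A)              # exact bounds, for illustration
x = cyclic_richardson(A, b, lam[0], lam[-1])
print(np.allclose(A @ x, b))             # True
```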

For symmetric definite matrices this method offers little advantage over the method of conjugate gradients, since it requires an estimate of the eigenvalues. In the unsymmetric case, however, the parameters can be adapted well to complex eigenvalues, see ref. In most cases, though, the Chebyshev iteration is preferable because it achieves the same error bound at every iteration step, and not only at multiples of the cycle length.
