Successive over-relaxation

The " Successive Over- Relaxation" method ( overrelaxation ) or SOR method is an algorithm of numerical analysis for the approximate solution of linear equation systems. It is, as the Gauss- Seidel method and the Jacobi method, a special splitting procedure (A = B ( A - B ) with B = (1 / ) D L).

Description of the procedure

Given is a square linear system $A x = b$ with $n$ equations and the unknowns $x_1, \dots, x_n$, where

  $A = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix}, \quad x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}, \quad b = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}.$

The SOR method solves this system, starting from a starting vector $x^{(0)}$, by the iteration

  $x_i^{(k+1)} = (1 - \omega)\,x_i^{(k)} + \frac{\omega}{a_{ii}} \Big( b_i - \sum_{j<i} a_{ij}\,x_j^{(k+1)} - \sum_{j>i} a_{ij}\,x_j^{(k)} \Big), \qquad i = 1, \dots, n, \quad k = 0, 1, 2, \dots$

The real overrelaxation parameter $\omega$ ensures that the method converges more quickly than the Gauss-Seidel method, which is obtained as the special case $\omega = 1$ of this formula.

Algorithm

An algorithm sketch with a termination condition is given below.
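A minimal sketch in Python, assuming a dense NumPy coefficient matrix with nonzero diagonal; the function name and signature are illustrative, and the maximum norm of the last update serves as the error measure:

import numpy as np

def sor(A, b, omega, x0, eps, max_iter=10_000):
    # successive over-relaxation sweeps for A x = b
    x = np.array(x0, dtype=float)
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # components j < i are already updated, components j > i are old
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        # terminate once the last change of the iterate is below the error bound
        if np.max(np.abs(x - x_old)) <= eps:
            break
    return x

For omega = 1 the sweep reduces to the Gauss-Seidel method.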

Here an error bound $\varepsilon$ is assumed as an input of the algorithm; the approximate solution $x$ is the vector-valued return value. The error bound measures how large the last change of the iterate was.

For sparse matrices, the computational effort of the method per iteration is significantly reduced, since the sums only have to run over the nonzero entries of each row.
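A sketch of the same sweep for a sparse matrix in SciPy's CSR format, touching only the stored nonzeros of each row; the function name and the small test system are illustrative assumptions:

import numpy as np
from scipy.sparse import csr_matrix

def sor_sparse(A_csr, b, omega, x0, eps, max_iter=10_000):
    # SOR sweep that visits only the stored nonzeros of each CSR row
    x = np.array(x0, dtype=float)
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            start, end = A_csr.indptr[i], A_csr.indptr[i + 1]
            sigma, diag = 0.0, 0.0
            for j, a_ij in zip(A_csr.indices[start:end], A_csr.data[start:end]):
                if j == i:
                    diag = a_ij            # diagonal entry a_ii
                else:
                    sigma += a_ij * x[j]   # x[j] is new for j < i, old for j > i
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / diag
        if np.max(np.abs(x - x_old)) <= eps:
            break
    return x

A = csr_matrix(np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]))
x = sor_sparse(A, np.array([1.0, 2.0, 3.0]), 1.2, np.zeros(3), 1e-10)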

Derivation

The SOR method is obtained by relaxation of the Gauss-Seidel method:

  $x_i^{(k+1)} = (1 - \omega)\,x_i^{(k)} + \omega\,x_i^{(k+1),\mathrm{GS}},$

where $x_i^{(k+1),\mathrm{GS}}$ denotes the Gauss-Seidel update of the $i$-th component.

For $\omega = 1$ one recovers the Gauss-Seidel method. For the analysis, the formulation as a splitting method is useful, since it makes the properties of the method accessible. The matrix $A$ is written as the sum of a diagonal matrix $D$, a strictly lower triangular matrix $L$ and a strictly upper triangular matrix $R$:

  $A = D + L + R.$

The linear system of equations is then equivalent to

  $(D + \omega L)\,x = \omega b - \big(\omega R + (\omega - 1) D\big)\,x.$

With $\omega > 0$, the iteration rule then reads

  $x^{(k+1)} = (D + \omega L)^{-1}\big(\omega b - (\omega R + (\omega - 1) D)\,x^{(k)}\big),$

and the method is consistent and converges if and only if the spectral radius of the iteration matrix $C_\omega = (D + \omega L)^{-1}\big((1 - \omega) D - \omega R\big)$ satisfies $\rho(C_\omega) < 1$.

The above formula for the component-wise computation of $x^{(k+1)}$ is obtained from this matrix representation by exploiting the triangular form of the matrix $D + \omega L$, which can be inverted directly by forward substitution.
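A short sketch of this matrix formulation, using a small symmetric positive definite test matrix as an illustrative assumption; it builds the splitting with NumPy, iterates in matrix form and checks the spectral radius of the iteration matrix:

import numpy as np

# small symmetric positive definite test matrix (illustrative assumption)
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])
omega = 1.2

# splitting A = D + L + R into diagonal and strictly triangular parts
D = np.diag(np.diag(A))
L = np.tril(A, k=-1)
R = np.triu(A, k=1)

# iteration matrix C_omega and constant part of the fixed-point iteration
C = np.linalg.solve(D + omega * L, (1 - omega) * D - omega * R)
c = np.linalg.solve(D + omega * L, omega * b)

# the iteration converges if and only if the spectral radius of C_omega is below 1
print("spectral radius:", np.max(np.abs(np.linalg.eigvals(C))))

x = np.zeros(3)
for _ in range(50):
    x = C @ x + c
print("residual norm:", np.linalg.norm(A @ x - b))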

Convergence

It can be shown that for symmetric positive definite matrices $A$ the method converges for every $\omega \in (0, 2)$. To accelerate convergence compared with the Gauss-Seidel method, heuristic values of $\omega$ between 1.5 and 2.0 are used. The best choice depends on the coefficient matrix $A$. Values $\omega < 1$ can be chosen to stabilize a solution that would otherwise easily diverge.
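A small sketch that scans $\omega$ over a grid in $(0, 2)$ and reports the value with the smallest spectral radius of the SOR iteration matrix; the test matrix and the grid are illustrative assumptions:

import numpy as np

def sor_spectral_radius(A, omega):
    # spectral radius of the SOR iteration matrix C_omega for a given omega
    D = np.diag(np.diag(A))
    L = np.tril(A, k=-1)
    R = np.triu(A, k=1)
    C = np.linalg.solve(D + omega * L, (1 - omega) * D - omega * R)
    return np.max(np.abs(np.linalg.eigvals(C)))

# symmetric positive definite test matrix (illustrative assumption)
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])

omegas = np.linspace(0.05, 1.95, 39)
radii = [sor_spectral_radius(A, w) for w in omegas]
best = omegas[int(np.argmin(radii))]
print("omega with smallest spectral radius on the grid:", best)
print("corresponding spectral radius:", min(radii))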

The theorem of Kahan (1958) shows that the spectral radius of the SOR iteration matrix satisfies $\rho(C_\omega) \geq |\omega - 1|$, so for $\omega$ outside the interval $(0, 2)$ there is no convergence.
