Gauss–Seidel method

In numerical analysis, the Gauss–Seidel method or single-step method (after Carl Friedrich Gauss and Ludwig Seidel) is an algorithm for the approximate solution of systems of linear equations. Like the Jacobi method and the SOR method, it is a specific splitting method. The method was first developed by Gauss, who did not publish it but only mentioned it in a letter to Gerling in 1823; it was not published until 1874, by Seidel, before its use by Gauss became known.

The method was developed because Gaussian elimination, an exact solver, is very susceptible to computational errors. An iterative approach does not have this disadvantage.

Description of the procedure

Given is a linear system of $n$ equations in $n$ variables $x_1, \dots, x_n$:

$$\sum_{j=1}^{n} a_{ij} x_j = b_i, \qquad i = 1, \dots, n.$$

To solve it, an iteration is performed: given an approximate vector $x^{(m)}$ for the exact solution, the $i$-th equation is solved for the $i$-th variable, where, in contrast to the Jacobi method, in which only the values of the last iteration may be used, the previously computed values of the current iteration are used as well. For the $(m+1)$-th iterate this reads:

$$x_i^{(m+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j=1}^{i-1} a_{ij} x_j^{(m+1)} - \sum_{j=i+1}^{n} a_{ij} x_j^{(m)} \right), \qquad i = 1, \dots, n.$$

The result of the computation is a new approximate vector $x^{(m+1)}$ for the desired solution vector. Repeating this procedure yields a sequence of vectors that, in the case of convergence, approximates the solution vector ever more closely:

$$x^{(m)} \longrightarrow x \quad \text{with} \quad A x = b, \quad \text{as } m \to \infty.$$

The Gauss–Seidel method is inherently sequential: before an equation can be solved, the results of the previous equations must be available. It is therefore not suitable for use on parallel computers.

As a sketch, the algorithm with a termination condition is given below.

The input parameters of the algorithm are the initial value of the variable vector, which may be chosen arbitrarily, and an error bound; the output is the approximate solution vector. The error bound here measures how large the last change of the variable vector was. As a condition for the feasibility of the algorithm, the diagonal elements $a_{ii}$ must be nonzero.
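
Since the original listing is not reproduced here, the following Python sketch illustrates one possible implementation under exactly these assumptions; the function name gauss_seidel and its parameter names are illustrative, not taken from the source.

```python
import numpy as np

def gauss_seidel(A, b, x0, eps=1e-10, max_iter=1000):
    """Solve A x = b with the Gauss-Seidel iteration.

    A   : (n, n) coefficient matrix; its diagonal entries must be nonzero.
    b   : right-hand side vector of length n.
    x0  : start vector, which may be chosen arbitrarily.
    eps : error bound on the last change of the variable vector.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.asarray(x0, dtype=float).copy()
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # x[:i] already holds values of the current sweep, while
            # x_old[i+1:] holds the values of the previous sweep.
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        # Terminate once the last change of the variable vector is small.
        if np.linalg.norm(x - x_old, ord=np.inf) < eps:
            return x
    return x
```

For example, with A = [[4, 1], [2, 3]], b = [1, 2], and the zero vector as start vector, the iteration converges to the exact solution (0.1, 0.6), since this A is strictly diagonally dominant (see the convergence criteria below).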

For sparse matrices, the effort of the method per iteration is significantly reduced.
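
As an illustration, the following sketch (assuming SciPy's CSR storage format; the helper name is hypothetical) performs one in-place sweep while visiting only the stored nonzeros of each row:

```python
import numpy as np
from scipy.sparse import csr_matrix

def gauss_seidel_sweep_csr(A: csr_matrix, b, x):
    """One in-place Gauss-Seidel sweep over a sparse matrix in CSR format.

    Only the stored nonzeros of each row are visited, so a sweep costs
    O(nnz) operations instead of O(n^2); the diagonal entries must be
    stored and nonzero.
    """
    for i in range(A.shape[0]):
        row = slice(A.indptr[i], A.indptr[i + 1])
        cols = A.indices[row]
        vals = A.data[row]
        diag = vals[cols == i][0]            # diagonal entry a_ii
        off = cols != i
        # Entries left of the diagonal were already updated in x;
        # entries right of it still hold the previous sweep's values.
        x[i] = (b[i] - vals[off] @ x[cols[off]]) / diag
    return x
```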

Description of the method in matrix notation

For this purpose, the matrix $A$ of the linear system $Ax = b$ is decomposed into a diagonal matrix $D$, a strictly upper triangular matrix $U$, and a strictly lower triangular matrix $L$, such that:

$$A = L + D + U.$$

In each iteration step, then,

$$(D + L)\, x^{(m+1)} = b - U x^{(m)}$$

holds. Formal rearrangement yields

$$x^{(m+1)} = -(D + L)^{-1} U\, x^{(m)} + (D + L)^{-1} b.$$

One then defines a start vector $x^{(0)}$ (for example the zero vector, if no better approximation is known) and inserts it into the iteration rule; its index is zero because it precedes the first computed iterate:

$$x^{(1)} = -(D + L)^{-1} U\, x^{(0)} + (D + L)^{-1} b.$$
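
A minimal NumPy sketch of this matrix form might look as follows (the names are illustrative; each step is realized as one forward substitution rather than via an explicit inverse):

```python
import numpy as np
from scipy.linalg import solve_triangular

def gauss_seidel_matrix_form(A, b, x0, steps):
    """Iterate x_{m+1} = (D + L)^{-1} (b - U x_m).

    D + L is the lower triangle of A including the diagonal, so each
    step amounts to one forward substitution; no explicit inverse is
    ever formed.
    """
    DL = np.tril(A)                # D + L
    U = np.triu(A, k=1)            # strictly upper triangular part
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        x = solve_triangular(DL, b - U @ x, lower=True)
    return x
```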

Diagonal dominance and convergence

The method converges linearly if the spectral radius of the iteration matrix $M = -(D + L)^{-1} U$ is less than 1. In this case Banach's fixed point theorem applies, as does the convergence of the Neumann series (for a sufficiently large power of $M$), and the method converges for every start vector. Conversely, the method diverges if the initial error contains a component of an eigenvector belonging to an eigenvalue of magnitude greater than 1. The smaller the spectral radius, the faster the method converges.

Determining the spectral radius is generally impractical for applications. More convenient criteria are therefore derived from the sufficient condition that a matrix norm of the iteration matrix be less than 1; this matrix norm is then also the contraction constant in the sense of Banach's fixed point theorem.
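
For small examples, the spectral radius of the iteration matrix can nevertheless be computed directly, e.g. with the following sketch (illustrative only; not feasible for large systems):

```python
import numpy as np

def gs_spectral_radius(A):
    """Spectral radius of the Gauss-Seidel iteration matrix -(D+L)^{-1} U.

    The iteration converges for every start vector exactly if the
    returned value is less than 1.
    """
    DL = np.tril(A)                          # D + L
    U = np.triu(A, k=1)                      # strictly upper part
    M = -np.linalg.solve(DL, U)              # iteration matrix
    return max(abs(np.linalg.eigvals(M)))
```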

In the event that both $\|D^{-1} L\|$ and $\|D^{-1} U\|$ are "small" with respect to the chosen matrix norm, the following estimate of the norm of the iteration matrix holds (see the Neumann series for the first factor):

$$\left\| (D + L)^{-1} U \right\| = \left\| \left( I + D^{-1} L \right)^{-1} D^{-1} U \right\| \le \frac{\left\| D^{-1} U \right\|}{1 - \left\| D^{-1} L \right\|}.$$

If the convergence condition of the Jacobi method, $\|D^{-1} L\| + \|D^{-1} U\| < 1$, is satisfied, the last term is also smaller than 1, and the resulting estimate of the contraction constant of the Gauss–Seidel method is always less than or equal to the corresponding estimate of the contraction constant of the Jacobi method: writing $\alpha = \|D^{-1} L\|$ and $\beta = \|D^{-1} U\|$, the inequality $\beta / (1 - \alpha) \le \alpha + \beta$ is equivalent to $\alpha + \beta \le 1$.

The simplest and most common sufficient convergence criterion results from diagonal dominance, using the supremum norm for vectors and the row sum norm as the associated induced matrix norm. It requires the fulfillment of the row sum criterion, i.e. the inequality

$$\sum_{j \ne i} |a_{ij}| < |a_{ii}| \qquad \text{for all } i = 1, \dots, n.$$

The larger the smallest difference between the right- and left-hand sides of this inequality, the faster the method converges. One can try to enlarge this difference by row and column permutations, i.e. by renumbering the rows and columns, striving, for example, to bring the elements of largest magnitude onto the diagonal. The row permutations must of course also be performed on the right-hand side of the system of equations.
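
The row sum criterion is straightforward to check programmatically; a minimal sketch (the helper name is hypothetical):

```python
import numpy as np

def satisfies_row_sum_criterion(A):
    """Check strict diagonal dominance: |a_ii| > sum_{j != i} |a_ij|."""
    B = np.abs(np.asarray(A, dtype=float))
    diag = B.diagonal()
    off = B.sum(axis=1) - diag               # off-diagonal row sums
    return bool(np.all(diag > off))
```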

Applications

For modern applications such as the solution of large sparse linear systems that arise from the discretization of partial differential equations, the method is unsuitable as a solver in its own right. However, it is successfully used as a preconditioner in Krylov subspace methods or as a smoother in multigrid methods.
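
As a sketch of the preconditioner use (assuming SciPy's GMRES and LinearOperator interfaces), one forward Gauss–Seidel sweep can serve as an approximate inverse of $A$:

```python
import numpy as np
from scipy.linalg import solve_triangular
from scipy.sparse.linalg import LinearOperator, gmres

def gs_preconditioner(A):
    """Apply one forward Gauss-Seidel sweep, r -> (D + L)^{-1} r,
    as an approximate inverse of A."""
    DL = np.tril(A)                          # D + L
    def apply(r):
        return solve_triangular(DL, r, lower=True)
    return LinearOperator(A.shape, matvec=apply)

# Hypothetical usage, with A and b as before:
# x, info = gmres(A, b, M=gs_preconditioner(A))
```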

Extension to nonlinear systems of equations

The idea of the Gauss–Seidel method can be extended to systems of nonlinear equations $F(x) = 0$ with a multidimensional nonlinear function $F = (f_1, \dots, f_n)$. As in the linear case, in the $i$-th step the $i$-th equation is solved with respect to the $i$-th variable, using the newly computed values for the preceding variables and the values of the previous approximation for the remaining ones:

$$f_i\!\left( x_1^{(m+1)}, \dots, x_{i-1}^{(m+1)},\, x_i^{(m+1)},\, x_{i+1}^{(m)}, \dots, x_n^{(m)} \right) = 0, \qquad i = 1, \dots, n.$$

Here, "solving" is generally to be understood as the application of a further iterative method for solving nonlinear equations in one variable. In order to distinguish this procedure from the Gauss–Seidel method for linear systems of equations, it is often referred to as the Gauss–Seidel process. Its convergence again follows from Banach's fixed point theorem and is thus linear.
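
A minimal sketch of the nonlinear variant, assuming each one-dimensional equation changes sign inside a known bracket (the bracket and the bracketing root finder are illustrative choices; a one-dimensional Newton iteration would serve equally well):

```python
import numpy as np
from scipy.optimize import brentq

def nonlinear_gauss_seidel(F, x0, eps=1e-10, max_iter=100,
                           bracket=(-10.0, 10.0)):
    """Nonlinear Gauss-Seidel: in the i-th step, solve the scalar
    equation f_i(x_1^new, ..., x_{i-1}^new, t, x_{i+1}^old, ...) = 0
    for t, keeping all other variables fixed."""
    x = np.asarray(x0, dtype=float).copy()
    n = len(x)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            def g(t, i=i):
                y = x.copy()                 # preceding entries are new,
                y[i] = t                     # trailing entries still old
                return F(y)[i]
            # The i-th equation is assumed to change sign on the bracket.
            x[i] = brentq(g, *bracket)
        if np.linalg.norm(x - x_old, ord=np.inf) < eps:
            return x
    return x
```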
