Generalized minimal residual method

The GMRES method (for Generalized Minimal RESidual method) is an iterative numerical method for the solution of large sparse systems of linear equations. The method belongs to the class of Krylov subspace methods and is particularly suitable for non-symmetric matrices. In exact arithmetic, that is, if one computes without rounding errors, the method returns the exact solution after finitely many steps. Its practical significance, however, is as an approximation method, since with suitable preconditioning it can solve systems of equations with millions of unknowns to satisfactory accuracy within a few iterations. It thus represents a kind of black-box solver for sparse systems of linear equations. It was designed in 1986 by Yousef Saad and Martin H. Schultz.
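In practice, such black-box use might look as follows; this is a minimal sketch with SciPy's gmres routine, where the tridiagonal test matrix and right-hand side are arbitrary placeholders.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres

# Arbitrary sparse, non-symmetric placeholder problem.
n = 10_000
A = sp.diags([-1.0, 2.5, -1.2], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# restart=30 selects the restarted variant GMRES(30); info == 0 signals convergence.
x, info = gmres(A, b, restart=30)
print(info, np.linalg.norm(b - A @ x))
```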

The procedure

Consider the system of linear equations $Ax = b$ with a real $n \times n$ matrix $A$. The system of equations is assumed to be uniquely solvable, i.e. $A$ has full rank. Further, let an initial guess $x_0$ be given, for instance the right-hand side $b$. The GMRES method is then defined by minimizing in the $m$-th step the Euclidean norm of the residual $b - Ax_m$ over the affine Krylov space $x_0 + K_m$, where $K_m = \operatorname{span}\{r_0, Ar_0, \dots, A^{m-1}r_0\}$ and $r_0 = b - Ax_0$.

For this purpose, an orthonormal basis $v_1, \dots, v_m$ of the space $K_m$ is calculated iteratively using the Arnoldi process. This yields a representation

$$AV_m = V_{m+1}\bar{H}_m$$

with the matrix $V_m = (v_1, \dots, v_m)$ formed by the basis vectors and a matrix $\bar{H}_m \in \mathbb{R}^{(m+1)\times m}$, which is an upper Hessenberg matrix to which a row has been appended in which only the last entry is non-zero.
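The Arnoldi relation can be sketched as follows; this is a straightforward modified Gram-Schmidt variant written for illustration, not a line-by-line rendering of the original article.

```python
import numpy as np

def arnoldi(A, r0, m):
    """Arnoldi process: orthonormal basis V of the Krylov space K_m(A, r0)
    and the extended (m+1) x m upper Hessenberg matrix H with A V_m = V_{m+1} H."""
    n = r0.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = r0 / np.linalg.norm(r0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] == 0.0:              # breakdown: the Krylov space is A-invariant
            return V[:, : j + 1], H[: j + 2, : j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```

In the non-breakdown case the relation can be verified with np.allclose(A @ V[:, :-1], V @ H).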

With the ansatz $x_m = x_0 + V_m y$ this results in an efficiently computable form of the norm of the residual, since the Euclidean norm is invariant under multiplication by the matrix $V_{m+1}$ with orthonormal columns:

$$\|b - Ax_m\|_2 = \|V_{m+1}(\beta e_1 - \bar{H}_m y)\|_2 = \|\beta e_1 - \bar{H}_m y\|_2, \qquad \beta = \|r_0\|_2.$$

Here $e_1$ denotes the first unit vector. The Hessenberg matrix $\bar{H}_m$ is updated in each step and then brought, by a composition of orthogonal transformations, usually Givens rotations as in the pseudocode below, to a right upper triangular matrix $\bar{R}_m$ with zeros in the last row. Per step only one new rotation is necessary, because each rotation can set one entry on the lower subdiagonal to zero. In some cases the computed basis vectors lose their orthogonality due to rounding errors; this can usually be remedied by using the more expensive Householder reflections instead of the rotations. In both cases one obtains the QR decomposition

$$\bar{H}_m = Q_m^T \bar{R}_m,$$

where $Q_m$ is the product of the applied rotations or reflections.

Here $R_m$ and $g_m$ are obtained from $\bar{R}_m$ and $\bar{g}_m = Q_m(\beta e_1)$ by deleting the last row. Now it is apparent at which point the residual is minimal, namely for the uniquely determined vector $y$ that satisfies $R_m y = g_m$. The norm of the residual at the $m$-th step is thus precisely the absolute value of the last component of $\bar{g}_m$, i.e. $|\gamma_{m+1}|$.

A special feature of the method is that the current approximation $x_m$ is initially not calculated at all in the course of the iteration, but only the auxiliary vector $y$. Instead, the method provides at each step the norm of the residual as a by-product. If this is less than the desired accuracy, the procedure is terminated. Only then is the current approximation computed as a linear combination of the basis vectors, $x_m = x_0 + V_m y$; the components of $y$ are simply the coefficients of the basis representation.

Alternatively, the solution of the above minimization problem is given by the vector $x_m$ of the affine Krylov space $x_0 + K_m$ whose residual is perpendicular to the space $AK_m$. Thus GMRES is an oblique projection method.

Pseudocode

Given an initial guess $x_0$ and a tolerance $TOL$ for the norm of the residual, compute $r_0 = b - Ax_0$ and $\beta = \|r_0\|_2$.

If $\beta \le TOL$, then END.

$v_1 = r_0/\beta$.

$\gamma_1 = \beta$.

For $m = 1, 2, 3, \dots$

  $w = Av_m$; for $j = 1, \dots, m$: $h_{j,m} = v_j^T w$, $w = w - h_{j,m}v_j$ (Arnoldi step); $h_{m+1,m} = \|w\|_2$.

  For $j = 1, \dots, m-1$: replace $(h_{j,m}, h_{j+1,m})$ by $(c_j h_{j,m} + s_j h_{j+1,m},\ -s_j h_{j,m} + c_j h_{j+1,m})$ (apply the previous rotations to the new column).

  $\rho = \sqrt{h_{m,m}^2 + h_{m+1,m}^2}$, $c_m = h_{m,m}/\rho$, $s_m = h_{m+1,m}/\rho$, $h_{m,m} = \rho$, $h_{m+1,m} = 0$ (new Givens rotation).

  $\gamma_{m+1} = -s_m\gamma_m$, $\gamma_m = c_m\gamma_m$.

  If $|\gamma_{m+1}| \le TOL$: solve $R_m y = (\gamma_1, \dots, \gamma_m)^T$ by back substitution, set $x_m = x_0 + V_m y$, END.

  $v_{m+1} = w/\|w\|_2$.
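As an illustration, the following self-contained Python sketch implements the steps above (Arnoldi step, Givens update of the Hessenberg matrix, back substitution); the variable names and the dense NumPy storage are choices made here, not part of the original algorithm description.

```python
import numpy as np

def gmres_givens(A, b, x0=None, tol=1e-8, max_iter=200):
    """GMRES with Givens rotations, following the pseudocode above.
    Returns the approximation and the list of residual norms per step."""
    n = b.shape[0]
    x0 = np.zeros(n) if x0 is None else x0
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta <= tol:
        return x0, [beta]
    m_max = min(max_iter, n)
    V = np.zeros((n, m_max + 1))               # orthonormal Krylov basis vectors
    H = np.zeros((m_max + 1, m_max))           # extended Hessenberg matrix
    cs, sn = np.zeros(m_max), np.zeros(m_max)  # Givens cosines and sines
    g = np.zeros(m_max + 1)
    g[0] = beta                                # transformed right-hand side beta * e_1
    V[:, 0] = r0 / beta
    res = [beta]
    for m in range(m_max):
        # Arnoldi step with modified Gram-Schmidt
        w = A @ V[:, m]
        for i in range(m + 1):
            H[i, m] = V[:, i] @ w
            w -= H[i, m] * V[:, i]
        h = np.linalg.norm(w)
        H[m + 1, m] = h
        # apply all previous rotations to the new column of H
        for i in range(m):
            H[i, m], H[i + 1, m] = (cs[i] * H[i, m] + sn[i] * H[i + 1, m],
                                    -sn[i] * H[i, m] + cs[i] * H[i + 1, m])
        # new rotation eliminates the subdiagonal entry H[m+1, m]
        rho = np.hypot(H[m, m], H[m + 1, m])
        cs[m], sn[m] = H[m, m] / rho, H[m + 1, m] / rho
        H[m, m], H[m + 1, m] = rho, 0.0
        g[m + 1], g[m] = -sn[m] * g[m], cs[m] * g[m]
        res.append(abs(g[m + 1]))              # residual norm, available for free
        if res[-1] <= tol or h == 0.0:         # converged or happy breakdown
            break
        V[:, m + 1] = w / h
    k = m + 1
    y = np.linalg.solve(H[:k, :k], g[:k])      # triangular system R_m y = g_m
    return x0 + V[:, :k] @ y, res
```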

Convergence results

Due to the definition of the procedure via the minimization problem, the Euclidean norm of the residuals decreases monotonically. In exact arithmetic, GMRES is even a direct method that supplies the exact solution after at most $n$ steps. If the dimension of the Krylov space increases by one in each step, this statement is clear, since in the last step the minimization is over the whole space $\mathbb{R}^n$. If this is not the case, the algorithm terminates earlier, but then already with the exact solution.
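The monotone decrease can be checked numerically, for instance with the gmres_givens sketch from above on a small random test problem; the data here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # well-conditioned, non-symmetric
b = rng.standard_normal(n)

x, res = gmres_givens(A, b, tol=1e-12, max_iter=n)
# Residual norms never increase; in exact arithmetic the n-th one would be zero.
assert all(r2 <= r1 * (1 + 1e-12) for r1, r2 in zip(res, res[1:]))
print(res[0], res[-1])
```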

For general matrices this is the strongest result possible, because by a theorem of Greenbaum, Pták and Strakoš, for every monotonically decreasing sequence there exists a matrix such that the sequence of residual norms produced by GMRES corresponds to the given sequence. In particular, it is possible that the residual norms remain constant and fall to zero only in the very last step.

For special matrices there are sharper convergence results. If the matrix $A$ is diagonalizable, then there exist a regular matrix $V$ and a diagonal matrix $D$ with $A = VDV^{-1}$. Then, for every polynomial $p$ of degree $m$ with $p(0) = 1$:

$$\|r_m\|_2 \le \kappa_2(V)\, \max_{\lambda \in \sigma(A)} |p(\lambda)| \, \|r_0\|_2.$$

Here $\kappa_2(V)$ denotes the condition number of the matrix $V$ in the Euclidean norm and $\sigma(A)$ the spectrum, i.e. the set of eigenvalues of $A$. For a normal matrix, $\kappa_2(V) = 1$. From the inequality it follows in particular that preconditioning should merge the eigenvalues into clusters.

If the matrix $A$ is positive definite, i.e. all eigenvalues of its symmetric part $M = \tfrac{1}{2}(A + A^T)$ are greater than zero, then:

$$\|r_m\|_2 \le \left(1 - \frac{\lambda_{\min}(M)^2}{\lambda_{\max}(A^T A)}\right)^{m/2} \|r_0\|_2.$$

Here $\lambda_{\max}$ and $\lambda_{\min}$ denote the largest and smallest eigenvalue of the respective matrix.

If the matrix $A$ is not only positive definite but also symmetric, then even:

$$\|r_m\|_2 \le \left(\frac{\kappa_2(A)^2 - 1}{\kappa_2(A)^2}\right)^{m/2} \|r_0\|_2.$$
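This bound can be observed numerically, again reusing the gmres_givens sketch from above; the synthetic spectrum in $[1, 10]$ is an arbitrary example choice.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.linspace(1.0, 10.0, n)           # spectrum in [1, 10], so kappa = 10
A = Q @ np.diag(eigs) @ Q.T                # symmetric positive definite test matrix
b = rng.standard_normal(n)

x, res = gmres_givens(A, b, tol=1e-10, max_iter=n)
kappa = eigs[-1] / eigs[0]
factor = ((kappa**2 - 1) / kappa**2) ** 0.5   # per-step factor from the bound
for m, r in enumerate(res):
    assert r <= res[0] * factor**m * (1 + 1e-8)
```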

All these statements are valid only for the residuals and therefore give no information about the actual error, i.e. the distance of the current approximation from the exact solution. About this, no comparable statements are known.

Cost and restarted GMRES

GMRES requires one matrix-vector multiplication per iteration and a number of inner products that grows by one with each iteration step, as well as storage for all of the (dense!) basis vectors computed so far. The reason is that the method is not given by a short recursion but must access all previous basis vectors in each step.

Since effort and storage space increase linearly with the number of iterations, it is common to discard the computed basis after $k$ steps and to restart the iteration with the current approximate solution as new initial guess (restart). This method is called GMRES($k$); usual restart lengths are $k = 20$ to $40$. However, convergence can be proven here only for special cases, and one can specify matrices for which a restarted iteration does not converge at all.
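A restarted variant can be sketched as a thin wrapper around any GMRES core, here the gmres_givens function from the sketch above; the restart length $k = 30$ is an arbitrary example value.

```python
import numpy as np

def gmres_restarted(A, b, k=30, tol=1e-8, max_restarts=100):
    """GMRES(k): run k inner steps, then restart from the current approximation."""
    x = np.zeros(b.shape[0])
    for _ in range(max_restarts):
        x, res = gmres_givens(A, b, x0=x, tol=tol, max_iter=k)
        if res[-1] <= tol:      # the inner iteration already met the tolerance
            return x
    return x                    # no convergence within max_restarts cycles
```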

The total cost of GMRES, as with all Krylov subspace methods for sparse matrices, is $O(n)$ with a high constant, provided that fewer iterations are performed than there are unknowns.

Comparison with other solvers

For symmetric matrices, the Arnoldi process for computing the orthogonal basis collapses to the Lanczos method. The corresponding Krylov subspace method is the MINRES method (for MINimal RESidual method) of Paige and Saunders. In contrast to the generalized variant, it gets by with a three-term recursion. It can be shown that for general matrices there is no Krylov subspace method that works with short recursions and at the same time, like the GMRES method, satisfies an optimality condition with respect to the norm of the residual.

Another class of methods is based on the unsymmetric Lanczos method, in particular the BiCG method. Such methods are characterized by a three-term recursion, but due to the lack of an optimality property they no longer have a monotone convergence history. In addition, although they deliver the exact solution in the case of convergence, convergence itself is no longer guaranteed.

The third variant comprises methods such as CGS and BiCGStab. These also work with three-term recursions without optimality and can likewise terminate prematurely without convergence. The idea behind these methods is a clever choice of the generating polynomials of the iteration sequence.

None of the three groups is better for all matrices; for each class there are examples where it beats the others. In practice, several solvers are therefore tried in order to gather experience for the given problem.

Preconditioning

No less important than the selection of the actual solver is the choice of the preconditioner, through which the decisive speed improvements are achieved. For sequential codes an ILU decomposition is a natural choice, but depending on the problem other preconditioners can prove to be very efficient. Since ILU is not parallelizable, in the parallel case other preconditioners, for example domain decomposition methods, are used instead.
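With SciPy, such an ILU preconditioner can be attached to GMRES roughly as follows; the tridiagonal test matrix and the drop tolerance are arbitrary example values.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, spilu, LinearOperator

# Placeholder sparse system (spilu expects CSC format).
n = 10_000
A = sp.diags([-1.0, 2.5, -1.2], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU factorization, applied as a preconditioning operator M ~ A^{-1}.
ilu = spilu(A, drop_tol=1e-4)
M = LinearOperator((n, n), matvec=ilu.solve)

x, info = gmres(A, b, M=M, restart=30)
print(info, np.linalg.norm(b - A @ x))
```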
