Schur complement

In linear algebra, the Schur complement is a matrix that is computed from the individual blocks of a larger block matrix. The Schur complement is named after Issai Schur.

Definition

Let M be an (n + m) × (n + m) matrix composed of four sub-blocks:

M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}

Here A is an n × n matrix, B an n × m matrix, C an m × n matrix, and D an m × m matrix. Furthermore, A is assumed to be invertible.

The matrix

S = D - C A^{-1} B

is called the Schur complement of A in M.
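
A minimal sketch in NumPy, assuming arbitrary block sizes n and m and a random test matrix:

```python
import numpy as np

# Arbitrary block sizes for illustration.
n, m = 3, 2

rng = np.random.default_rng(0)
M = rng.standard_normal((n + m, n + m))

# Partition M into A (n x n), B (n x m), C (m x n), D (m x m).
A, B = M[:n, :n], M[:n, n:]
C, D = M[n:, :n], M[n:, n:]

# Schur complement of A in M: S = D - C A^{-1} B.
S = D - C @ np.linalg.solve(A, B)
print(S.shape)  # (2, 2), i.e. m x m
```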

Interpretation as a result of Gaussian elimination

The Schur complement can be interpreted as the result of one step of Gaussian elimination carried out at the level of matrix blocks: the formal application of Gaussian elimination to the block matrix M corresponds to multiplication from the left by the matrix

\begin{pmatrix} I_n & 0 \\ -C A^{-1} & I_m \end{pmatrix}

where I_n and I_m are the n × n and m × m identity matrices, respectively. The Schur complement appears in the lower right block of the matrix product:

\begin{pmatrix} I_n & 0 \\ -C A^{-1} & I_m \end{pmatrix} \begin{pmatrix} A & B \\ C & D \end{pmatrix} = \begin{pmatrix} A & B \\ 0 & D - C A^{-1} B \end{pmatrix} = \begin{pmatrix} A & B \\ 0 & S \end{pmatrix}

Therefore, the inverse of M can be computed by inverting A and its Schur complement S:

M^{-1} = \begin{pmatrix} A^{-1} + A^{-1} B S^{-1} C A^{-1} & -A^{-1} B S^{-1} \\ -S^{-1} C A^{-1} & S^{-1} \end{pmatrix}

or, in factored form,

M^{-1} = \begin{pmatrix} I_n & -A^{-1} B \\ 0 & I_m \end{pmatrix} \begin{pmatrix} A^{-1} & 0 \\ 0 & S^{-1} \end{pmatrix} \begin{pmatrix} I_n & 0 \\ -C A^{-1} & I_m \end{pmatrix}
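
The block elimination step and the factored form of the inverse can be checked numerically; a minimal sketch, again with arbitrarily chosen random blocks:

```python
import numpy as np

n, m = 3, 2
rng = np.random.default_rng(0)
M = rng.standard_normal((n + m, n + m))
A, B = M[:n, :n], M[:n, n:]
C, D = M[n:, :n], M[n:, n:]

Ainv = np.linalg.inv(A)
S = D - C @ Ainv @ B
Sinv = np.linalg.inv(S)

I_n, I_m = np.eye(n), np.eye(m)
Z_nm, Z_mn = np.zeros((n, m)), np.zeros((m, n))

# One block elimination step: the lower-left block vanishes, S appears at lower right.
L = np.block([[I_n, Z_nm], [-C @ Ainv, I_m]])
LM = L @ M
print(np.allclose(LM[n:, :n], 0), np.allclose(LM[n:, n:], S))  # True True

# Factored form of the inverse of M.
Minv = (np.block([[I_n, -Ainv @ B], [Z_mn, I_m]])
        @ np.block([[Ainv, Z_nm], [Z_mn, Sinv]])
        @ np.block([[I_n, Z_nm], [-C @ Ainv, I_m]]))
print(np.allclose(Minv, np.linalg.inv(M)))  # True
```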

Properties

Under the assumption that M is symmetric, M is positive definite if and only if A and the Schur complement S are positive definite.
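
A minimal numerical sketch of this equivalence, using eigenvalues as a positive definiteness test on an arbitrarily constructed symmetric positive definite M:

```python
import numpy as np

n, m = 3, 2
rng = np.random.default_rng(1)

# Construct a symmetric positive definite test matrix M.
G = rng.standard_normal((n + m, n + m))
M = G @ G.T + (n + m) * np.eye(n + m)

A, B = M[:n, :n], M[:n, n:]
C, D = M[n:, :n], M[n:, n:]
S = D - C @ np.linalg.solve(A, B)

def is_pd(X):
    # A symmetric matrix X is positive definite iff all its eigenvalues are positive.
    return bool(np.all(np.linalg.eigvalsh(X) > 0))

print(is_pd(M), is_pd(A), is_pd(S))  # True True True
```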

Analogously to the definition above, the Schur complement of the block D can also be formed: if D is invertible, it is given by A - B D^{-1} C.

For two invertible matrices of the same size with sub-blocks A_1 and A_2, let S_1 and S_2 denote the corresponding Schur complements of A_1 and A_2 in the respective matrices; a corresponding relation can then be stated for the matrix product of the two matrices.

Application in solving linear systems of equations

The Schur complement can be used to solve linear systems of equations of the form

\begin{pmatrix} A & B \\ C & D \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix}

Here x and f denote vectors of length n, and y and g vectors of length m. Written out, this system of equations reads:

A x + B y = f
C x + D y = g

Multiplying the first equation from the left by -C A^{-1} and adding it to the second equation yields

(D - C A^{-1} B) y = S y = g - C A^{-1} f

If A and S are invertible, this equation can therefore be solved for y, and then

x = A^{-1} (f - B y)

can be computed to obtain the solution of the original problem.

The solution of an (n + m) × (n + m) system is thus reduced to the solution of an n × n and an m × m system.
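
A minimal NumPy sketch of this two-stage solve; the block sizes and the random test data are placeholders, with diagonal shifts added only to keep the example matrices safely invertible:

```python
import numpy as np

n, m = 4, 2
rng = np.random.default_rng(2)

# Random test blocks; the diagonal shifts keep A and S comfortably invertible here.
A = rng.standard_normal((n, n)) + n * np.eye(n)
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, m)) + m * np.eye(m)
f = rng.standard_normal(n)
g = rng.standard_normal(m)

Ainv_B = np.linalg.solve(A, B)   # A^{-1} B
Ainv_f = np.linalg.solve(A, f)   # A^{-1} f
S = D - C @ Ainv_B               # Schur complement of A

# Step 1: solve the m x m system S y = g - C A^{-1} f.
y = np.linalg.solve(S, g - C @ Ainv_f)
# Step 2: recover x from the n x n system A x = f - B y.
x = np.linalg.solve(A, f - B @ y)

# Compare with a direct solve of the full (n + m) x (n + m) system.
M = np.block([[A, B], [C, D]])
print(np.allclose(np.concatenate([x, y]), np.linalg.solve(M, np.concatenate([f, g]))))  # True
```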

An important observation in this context is that in some iterative numerical algorithms, such as Krylov subspace methods, the inverse A^{-1} need not be formed explicitly. A closer look at the equations to be solved shows that, in the course of the iterative solution, only the action of A^{-1} on a vector is required, so that forming the inverse can be replaced by solving a linear system of equations. Especially for sparse matrices, this allows a very efficient solution.
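
A sketch of this idea with SciPy, under the assumption that A can be factorized once and that GMRES is an acceptable Krylov solver for the reduced system: the Schur complement is supplied to the solver only through its action on a vector via a LinearOperator, so neither A^{-1} nor S is formed explicitly. All matrix names, sizes, and densities below are placeholders.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, m = 200, 50
rng = np.random.default_rng(3)

# Sparse random test blocks; the diagonal shifts keep A and the full system nonsingular here.
A = (sp.random(n, n, density=0.05, random_state=3) + 10 * sp.eye(n)).tocsc()
B = sp.random(n, m, density=0.05, random_state=4).tocsr()
C = sp.random(m, n, density=0.05, random_state=5).tocsr()
D = (sp.random(m, m, density=0.05, random_state=6) + 10 * sp.eye(m)).tocsr()
f = rng.standard_normal(n)
g = rng.standard_normal(m)

# Factorize A once; solve_A(v) applies A^{-1} to a vector without forming the inverse.
solve_A = spla.factorized(A)

# The Schur complement S = D - C A^{-1} B is used only through its action on a vector.
def schur_matvec(y):
    return D @ y - C @ solve_A(B @ y)

S_op = spla.LinearOperator((m, m), matvec=schur_matvec)

# Solve S y = g - C A^{-1} f with GMRES, then recover x from A x = f - B y.
y, info = spla.gmres(S_op, g - C @ solve_A(f))
x = solve_A(f - B @ y)

# info == 0 signals convergence; the residual of the full block system should be small.
M = sp.bmat([[A, B], [C, D]]).tocsc()
residual = np.linalg.norm(M @ np.concatenate([x, y]) - np.concatenate([f, g]))
print(info, residual)
```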
