Gram–Schmidt process

The Gram–Schmidt orthogonalization method is an algorithm from the mathematical subfield of linear algebra. For every system of linearly independent vectors from a pre-Hilbert space (a vector space with a scalar product), it produces an orthogonal system that generates the same subspace. An extension is the Gram–Schmidt orthonormalization procedure: instead of an orthogonal system, it computes an orthonormal system. If a system of basis vectors is used as input to these algorithms, they compute an orthogonal or orthonormal basis.

The two methods are named after Jørgen Pedersen Gram and Erhard Schmidt. However, they were already in use earlier in the works of Pierre-Simon Laplace and Augustin-Louis Cauchy.

For numerical calculation by a computer using floating-point arithmetic, the Gram–Schmidt process is ill-suited: due to accumulated rounding errors, the computed vectors are no longer orthogonal. However, there are modifications of the method that do not suffer from this error. Further possibilities for orthogonalization are based on Householder transformations or Givens rotations.
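To make this concrete, here is a minimal sketch in Python with NumPy (the function names and the test matrix are illustrative, not from the original article) of the classical procedure next to the modified variant, which subtracts the projections one at a time from a running remainder and thereby accumulates far less rounding error:

    import numpy as np

    def classical_gram_schmidt(A):
        # Classical variant: column k is orthogonalized against the
        # previous columns using coefficients computed from the input column.
        Q = np.zeros_like(A, dtype=float)
        for k in range(A.shape[1]):
            v = A[:, k] - Q[:, :k] @ (Q[:, :k].T @ A[:, k])
            Q[:, k] = v / np.linalg.norm(v)
        return Q

    def modified_gram_schmidt(A):
        # Modified variant: projections are removed one by one from a
        # running remainder, which is numerically much more stable.
        Q = A.astype(float).copy()
        for k in range(Q.shape[1]):
            Q[:, k] /= np.linalg.norm(Q[:, k])
            for j in range(k + 1, Q.shape[1]):
                Q[:, j] -= (Q[:, k] @ Q[:, j]) * Q[:, k]
        return Q

    # An ill-conditioned input makes the loss of orthogonality visible:
    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 10)) @ np.diag(10.0 ** -np.arange(10))
    for f in (classical_gram_schmidt, modified_gram_schmidt):
        Q = f(A)
        print(f.__name__, np.linalg.norm(Q.T @ Q - np.eye(10)))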

Preliminaries

In the following, elements of the vector space under consideration (vectors) are denoted by lowercase Latin letters such as $v$ and $w$, optionally with indices such as $v_i$ and $w_k$. Both overset arrows and bold type are dispensed with. The scalar product is written with angle brackets: $\langle v, w \rangle$. In the complex case, the convention is used that the scalar product is semilinear in the first argument and linear in the second argument, i.e.

$$\langle \lambda v, w \rangle = \overline{\lambda} \, \langle v, w \rangle \qquad \text{and} \qquad \langle v, \lambda w \rangle = \lambda \, \langle v, w \rangle$$

for all vectors $v$, $w$ and all $\lambda \in \mathbb{C}$. In the complex case, the order of the factors in the scalar products of the formulas below therefore matters; in the real case it does not.

Algorithm of the orthogonalization procedure

The following algorithm calculates, for the linearly independent vectors $w_1, \dots, w_n$, an orthogonal system of $n$ pairwise orthogonal vectors $v_1, \dots, v_n$ that generates the same subspace.

The individual vectors of the orthogonal system are calculated as follows:

$$v_1 = w_1$$

$$v_2 = w_2 - \frac{\langle v_1, w_2 \rangle}{\langle v_1, v_1 \rangle} \, v_1$$

$$v_3 = w_3 - \frac{\langle v_1, w_3 \rangle}{\langle v_1, v_1 \rangle} \, v_1 - \frac{\langle v_2, w_3 \rangle}{\langle v_2, v_2 \rangle} \, v_2$$

$$\vdots$$

$$v_n = w_n - \sum_{i=1}^{n-1} \frac{\langle v_i, w_n \rangle}{\langle v_i, v_i \rangle} \, v_i$$
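These formulas translate directly into code. A minimal sketch in Python with NumPy for the real case (the helper name orthogonalize is illustrative; for the complex case the plain dot products would be replaced by np.vdot, which conjugates its first argument, matching the convention fixed above):

    import numpy as np

    def orthogonalize(ws):
        # v_k = w_k minus the projections of w_k onto v_1, ..., v_{k-1}
        vs = []
        for w in ws:
            v = w - sum(((vi @ w) / (vi @ vi)) * vi for vi in vs)
            vs.append(v)
        return vs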

Example

In $\mathbb{R}^2$, equipped with the standard scalar product, two linearly independent vectors are given, which generate a subspace:

$$w_1 = \begin{pmatrix} 3 \\ 1 \end{pmatrix}, \qquad w_2 = \begin{pmatrix} 2 \\ 2 \end{pmatrix}$$

Two orthogonal vectors $v_1$ and $v_2$ that generate the same subspace are now calculated:

$$v_1 = w_1 = \begin{pmatrix} 3 \\ 1 \end{pmatrix}$$

$$v_2 = w_2 - \frac{\langle v_1, w_2 \rangle}{\langle v_1, v_1 \rangle} \, v_1 = \begin{pmatrix} 2 \\ 2 \end{pmatrix} - \frac{8}{10} \begin{pmatrix} 3 \\ 1 \end{pmatrix} = \begin{pmatrix} -2/5 \\ 6/5 \end{pmatrix}$$
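The same computation, using the orthogonalize sketch from above:

    import numpy as np

    v1, v2 = orthogonalize([np.array([3.0, 1.0]), np.array([2.0, 2.0])])
    print(v1)       # [3. 1.]
    print(v2)       # approximately [-0.4  1.2], i.e. (-2/5, 6/5)
    print(v1 @ v2)  # ~0, orthogonal up to rounding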

Algorithm of the orthonormalization procedure

The following algorithm calculates, for the linearly independent vectors $w_1, \dots, w_n$, an orthonormal system of $n$ normalized, pairwise orthogonal vectors $q_1, \dots, q_n$ that generates the same subspace.

The individual vectors of the orthonormal system are obtained by first calculating an orthogonal vector in each case and then normalizing it:

$$q_1 = \frac{w_1}{\| w_1 \|}$$

$$q_2' = w_2 - \langle q_1, w_2 \rangle \, q_1, \qquad q_2 = \frac{q_2'}{\| q_2' \|}$$

$$q_3' = w_3 - \langle q_1, w_3 \rangle \, q_1 - \langle q_2, w_3 \rangle \, q_2, \qquad q_3 = \frac{q_3'}{\| q_3' \|}$$

$$\vdots$$

$$q_n' = w_n - \sum_{i=1}^{n-1} \langle q_i, w_n \rangle \, q_i, \qquad q_n = \frac{q_n'}{\| q_n' \|}$$

Here the coefficients simplify to $\langle q_i, w_k \rangle$ because the $q_i$ are already normalized, i.e. $\langle q_i, q_i \rangle = 1$.
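A corresponding sketch of the orthonormalization in Python with NumPy (again with an illustrative helper name):

    import numpy as np

    def orthonormalize(ws):
        # Subtract the projections onto the already-normalized q_i,
        # then normalize the remainder; <q_i, q_i> = 1 simplifies the
        # coefficients to <q_i, w>.
        qs = []
        for w in ws:
            v = w - sum((qi @ w) * qi for qi in qs)
            qs.append(v / np.linalg.norm(v))
        return qs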

In general, this procedure does not yield a particularly distinguished system. In $\mathbb{R}^3$, for example, a reordering step must still follow in order to obtain a right-handed or left-handed system.

Example

In $\mathbb{R}^2$, equipped with the standard scalar product, two basis vectors are given:

$$w_1 = \begin{pmatrix} 3 \\ 1 \end{pmatrix}, \qquad w_2 = \begin{pmatrix} 2 \\ 2 \end{pmatrix}$$

Two vectors $q_1$ and $q_2$ forming an orthonormal basis of $\mathbb{R}^2$ are now calculated:

$$q_1 = \frac{w_1}{\| w_1 \|} = \frac{1}{\sqrt{10}} \begin{pmatrix} 3 \\ 1 \end{pmatrix}$$

$$q_2' = w_2 - \langle q_1, w_2 \rangle \, q_1 = \begin{pmatrix} 2 \\ 2 \end{pmatrix} - \frac{8}{10} \begin{pmatrix} 3 \\ 1 \end{pmatrix} = \begin{pmatrix} -2/5 \\ 6/5 \end{pmatrix}$$

$$q_2 = \frac{q_2'}{\| q_2' \|} = \frac{1}{\sqrt{10}} \begin{pmatrix} -1 \\ 3 \end{pmatrix}$$
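Numerically, with the orthonormalize sketch from above:

    import numpy as np

    q1, q2 = orthonormalize([np.array([3.0, 1.0]), np.array([2.0, 2.0])])
    print(q1)       # approximately [0.9487 0.3162], i.e. (3, 1)/sqrt(10)
    print(q2)       # approximately [-0.3162 0.9487], i.e. (-1, 3)/sqrt(10)
    print(q1 @ q2)  # ~0, orthogonal up to rounding
    print(np.linalg.norm(q1), np.linalg.norm(q2))  # 1.0 1.0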

Comments

A special feature of both methods is that after each intermediate step, the vectors $v_1, \dots, v_k$ (respectively $q_1, \dots, q_k$) calculated so far generate the same vector space as the vectors $w_1, \dots, w_k$. They thus form an orthogonal or orthonormal basis of the corresponding subspaces. In other words, the transformation matrix that expresses the coordinates of one system in terms of the other is an upper right triangular matrix. If the orthonormal vectors $q_1, \dots, q_n$ are combined as the columns of a matrix $Q$ and, likewise, the vectors $w_1, \dots, w_n$ of the initial system as the columns of a matrix $A$, then there is an upper triangular matrix $R$ with $A = QR$, so that a QR decomposition is determined. Accordingly, the result of the Gram–Schmidt orthonormalization can also be obtained with other methods for QR decomposition that work with Givens rotations or Householder reflections.
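This relationship can be checked directly. A sketch that builds $Q$ with the orthonormalize function from above and reads off $R$; NumPy's built-in numpy.linalg.qr computes the same factorization, up to the signs of the columns of $Q$ and the rows of $R$:

    import numpy as np

    A = np.array([[3.0, 2.0],
                  [1.0, 2.0]])    # columns are w_1 and w_2
    Q = np.column_stack(orthonormalize(list(A.T)))
    R = Q.T @ A                   # upper triangular (up to rounding),
                                  # because span(q_1..q_k) = span(w_1..w_k)
    print(np.allclose(A, Q @ R))  # True
    print(R)

    Q2, R2 = np.linalg.qr(A)      # the library routine, for comparison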

If one calculates an orthonormal system by hand, it is often easier to first calculate an orthogonal system and then normalize the individual vectors. This saves normalizing twice, and one can often calculate with simpler values. It may also be worthwhile to carry out Gaussian elimination before constructing the orthogonal or orthonormal system.

Functional principle of the method

If the orthogonal vectors $v_1, \dots, v_{k-1}$ are already determined, we try to subtract from $w_k$ a suitable linear combination of the vectors $v_1, \dots, v_{k-1}$, so that the difference vector

$$v_k = w_k - \sum_{i=1}^{k-1} \lambda_i v_i$$

is orthogonal to all the vectors $v_1, \dots, v_{k-1}$. This is equivalent to saying that the scalar product $\langle v_j, v_k \rangle$ yields the value 0 for all $j = 1, \dots, k-1$. Such a linear combination is obtained if for each $i$ the expression

$$\lambda_i = \frac{\langle v_i, w_k \rangle}{\langle v_i, v_i \rangle}$$

is chosen. A control calculation shows that then every scalar product $\langle v_j, v_k \rangle$ with $j < k$ takes the value 0:

$$\langle v_j, v_k \rangle = \left\langle v_j, \, w_k - \sum_{i=1}^{k-1} \lambda_i v_i \right\rangle = \langle v_j, w_k \rangle - \sum_{i=1}^{k-1} \lambda_i \langle v_j, v_i \rangle = \langle v_j, w_k \rangle - \lambda_j \langle v_j, v_j \rangle = 0,$$

where the second-to-last step uses that $\langle v_j, v_i \rangle = 0$ for $i \neq j$ by the orthogonality already established.

Orthonormalization of infinite systems of vectors

In any Hilbert space $H$, the method can also be applied to infinite systems of independent vectors, where independence is to be understood in the sense that no element lies in the closure of the linear span of the remaining vectors. The case of a countable system (i.e., $H$ is a separable Hilbert space) can be reduced directly to the finite case presented above: given an independent sequence $(w_k)_{k \in \mathbb{N}}$, a corresponding orthonormal sequence $(q_k)_{k \in \mathbb{N}}$ is obtained by applying the above procedure for each $k$. More generally, by the well-ordering theorem, every independent system can be viewed as a sequence $(w_\alpha)_{\alpha < \kappa}$ for a cardinal number $\kappa$ and ordinal numbers $\alpha$ (if the independent system has dense linear span, $\kappa$ is precisely the dimension of $H$). Now let $P_V$ denote the orthogonal projection onto a closed subspace $V$, which always exists due to the completeness of the space, and let $N$ denote the normalization $N(v) = v / \| v \|$. An orthonormal system $(q_\alpha)_{\alpha < \kappa}$ is then given by

$$q_\alpha = N\!\left( \left( \mathrm{id} - P_{\overline{\operatorname{span}}\{ w_\beta \,:\, \beta < \alpha \}} \right) w_\alpha \right).$$

By transfinite induction one can then show that $\overline{\operatorname{span}}\{ q_\beta : \beta < \alpha \} = \overline{\operatorname{span}}\{ w_\beta : \beta < \alpha \}$, even for $\alpha = \kappa$. More explicitly, the procedure can be written by transfinite recursion as follows:

$$q_\alpha = N\!\left( w_\alpha - \sum_{\beta < \alpha} \langle q_\beta, w_\alpha \rangle \, q_\beta \right).$$

Here the sum is well defined due to the Bessel inequality (in particular, only countably many summands are ever nonzero).
