Matrix norm

In mathematics, a matrix norm is a norm on the vector space of real or complex matrices. In addition to the three standard norm axioms of definiteness, absolute homogeneity, and subadditivity, submultiplicativity is sometimes required as a fourth defining property of matrix norms. Submultiplicative matrix norms have useful properties; for instance, the spectral radius of a square matrix, that is, the absolute value of its largest eigenvalue in magnitude, is never greater than any such norm of the matrix. There are several ways to define matrix norms: directly via a vector norm, as an operator norm, or via the singular values of the matrix. Matrix norms are used especially in linear algebra and numerical mathematics.

Basic concepts

Definition

Let 𝕂 be the field of real or complex numbers, and let 𝕂^{m×n} denote the set of real or complex (m × n) matrices, which forms a vector space under matrix addition and scalar multiplication. A matrix norm is a norm on this matrix space, that is, a map

  ‖·‖ : 𝕂^{m×n} → ℝ

that assigns a non-negative real number ‖A‖ to each matrix A and satisfies the following three properties for all matrices A, B ∈ 𝕂^{m×n} and all scalars α ∈ 𝕂:

  • ‖A‖ = 0 ⇔ A = 0 (definiteness)
  • ‖αA‖ = |α| · ‖A‖ (absolute homogeneity)
  • ‖A + B‖ ≤ ‖A‖ + ‖B‖ (subadditivity or triangle inequality)

Together with a matrix norm, the space of matrices is a normed vector space. Since the space of matrices has finite dimension, this normed space is also complete and thus a Banach space.

Submultiplicativity

Sometimes a fourth defining property is required, namely that a matrix norm be submultiplicative, which means that for any two matrices A ∈ 𝕂^{m×n} and B ∈ 𝕂^{n×l}

  ‖A · B‖ ≤ ‖A‖ · ‖B‖

holds. For non-square matrices this inequality actually involves three different norms. The space of square matrices, together with matrix addition, matrix multiplication, and a submultiplicative matrix norm, is a normed algebra, in particular a Banach algebra. There are also matrix norms that are not submultiplicative.
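
The submultiplicativity inequality is easy to check numerically; a minimal sketch using the Frobenius norm, assuming NumPy is available:

```python
import numpy as np

# Verify submultiplicativity ||A @ B|| <= ||A|| * ||B||
# for the Frobenius norm on a random matrix pair.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 2))

lhs = np.linalg.norm(A @ B, "fro")
rhs = np.linalg.norm(A, "fro") * np.linalg.norm(B, "fro")
assert lhs <= rhs + 1e-12
```

Note that A, B, and A @ B here have different shapes, so the three norms in the inequality live on three different matrix spaces, as described above.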

Compatibility with a vector norm

A matrix norm ‖·‖_M is called compatible with a vector norm ‖·‖_V if for every matrix A and every vector x the inequality

  ‖A · x‖_V ≤ ‖A‖_M · ‖x‖_V

holds. For non-square matrices this inequality, strictly speaking, again involves three different norms. Compatibility is important whenever vectors and matrices occur together in estimates. Every submultiplicative matrix norm is at least compatible with itself as a vector norm, since every matrix norm, applied to matrices with only one column, is also a vector norm.
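
As a numerical illustration (a sketch, assuming NumPy), the spectral norm is compatible with the Euclidean vector norm:

```python
import numpy as np

# Check compatibility ||A x||_2 <= ||A||_2 * ||x||_2,
# where ord=2 gives the spectral norm of a matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
x = rng.standard_normal(3)

lhs = np.linalg.norm(A @ x)
rhs = np.linalg.norm(A, 2) * np.linalg.norm(x)
assert lhs <= rhs + 1e-12
```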

Properties

Equivalence

All matrix norms are equivalent to each other; that is, for any two matrix norms ‖·‖_a and ‖·‖_b there exist two positive constants c₁ and c₂ such that for all matrices A

  c₁ · ‖A‖_a ≤ ‖A‖_b ≤ c₂ · ‖A‖_a

holds. This equivalence is a consequence of the fact that norm balls in finite-dimensional vector spaces are always compact. A matrix norm can thus be bounded from above and below by any other matrix norm. Nothing is initially said about the size of the constants, but for many pairs of norms the constants can be given explicitly.
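
For example, the spectral and Frobenius norms bound each other with explicit constants; a minimal numerical sketch, assuming NumPy:

```python
import numpy as np

# Explicit equivalence constants between two matrix norms:
# ||A||_2 <= ||A||_F <= sqrt(min(m, n)) * ||A||_2.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))

fro = np.linalg.norm(A, "fro")
spec = np.linalg.norm(A, 2)
r = min(A.shape)
assert spec <= fro + 1e-12
assert fro <= np.sqrt(r) * spec + 1e-12
```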

Estimating the eigenvalues

If a matrix norm is compatible with some vector norm (as is the case, for example, for every submultiplicative matrix norm), then for every eigenvalue λ of a square matrix A

  |λ| ≤ ‖A‖

holds. For if x ≠ 0 is an eigenvector corresponding to this eigenvalue, with A x = λ x, then

  |λ| · ‖x‖ = ‖λ x‖ = ‖A x‖ ≤ ‖A‖ · ‖x‖

holds, and the estimate follows after division by ‖x‖ > 0. In particular, for every submultiplicative matrix norm the spectral radius (the absolute value of the largest eigenvalue in magnitude) of a square matrix is never larger than its norm.
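
This bound on the spectral radius can be observed directly; a sketch, assuming NumPy:

```python
import numpy as np

# The spectral radius never exceeds a submultiplicative matrix norm;
# checked here for the Frobenius and the row sum (max) norm.
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))

spectral_radius = max(abs(np.linalg.eigvals(A)))
assert spectral_radius <= np.linalg.norm(A, "fro") + 1e-12
assert spectral_radius <= np.linalg.norm(A, np.inf) + 1e-12
```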

Unitary invariance

A matrix norm is called unitarily invariant if it is invariant under unitary transformations (in the real case, orthogonal transformations), that is, if for all matrices A and all unitary matrices U and V

  ‖U A V‖ = ‖A‖

holds. A matrix norm is unitarily invariant if and only if it can be represented as

  ‖A‖ = g(σ₁(A), …, σ_r(A)),

where g is an absolute and permutation-invariant vector norm (a symmetric gauge function) of the singular values of the matrix.

Self-adjoint

The adjoint norm of a matrix norm is defined for square matrices as the norm of the adjoint (in the real case, transposed) matrix, that is,

  ‖A‖* = ‖Aᴴ‖.

A matrix norm is called self-adjoint if it is invariant under adjunction, that is, if

  ‖Aᴴ‖ = ‖A‖

holds. All unitarily invariant matrix norms are also self-adjoint.

Important matrix norms

Matrix norms defined via vector norms

By writing all entries of a matrix one below the other, a matrix can be viewed as a vector of corresponding length. Matrix norms can thus be defined directly via vector norms, in particular via the p-norms

  ‖A‖_p = ( Σ_{i=1}^m Σ_{j=1}^n |a_{ij}|^p )^{1/p}.

Since the sum of two matrices and the multiplication of a matrix by a scalar are defined component-wise, the norm properties of such a matrix norm follow directly from the corresponding properties of the underlying vector norm. Two of these matrix norms have a special meaning and name.

Total norm

The total norm of a matrix is based on the maximum norm in (m × n)-dimensional space and is defined as

  ‖A‖_G = √(m · n) · max_{i,j} |a_{ij}|,

where, in contrast to the maximum norm of vectors, the largest absolute entry of the matrix is multiplied by the geometric mean of the number of rows and columns. With this scaling, the total norm is submultiplicative and, for square matrices, compatible with all p-norms, including the maximum norm. The norm defined only via the largest absolute entry,

  ‖A‖ = max_{i,j} |a_{ij}|,

is an example of a matrix norm that is not submultiplicative.
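
The failure of submultiplicativity for the unscaled maximum-entry norm, and the repair provided by the √(m·n) factor, can be seen on the all-ones matrix; a sketch, assuming NumPy:

```python
import numpy as np

def total_norm(A):
    """Total norm: sqrt(m*n) times the largest absolute entry."""
    m, n = A.shape
    return np.sqrt(m * n) * np.max(np.abs(A))

# For the all-ones 2x2 matrix A, the unscaled max-entry "norm" fails
# submultiplicativity: max|A @ A| = 2 > 1 = max|A| * max|A|.
A = np.ones((2, 2))
assert np.max(np.abs(A @ A)) > np.max(np.abs(A)) * np.max(np.abs(A))
# The scaled total norm satisfies the inequality.
assert total_norm(A @ A) <= total_norm(A) * total_norm(A)
```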

Frobenius norm

The Frobenius norm of a matrix corresponds to the Euclidean norm in (m × n)-dimensional space and is defined as

  ‖A‖_F = ( Σ_{i=1}^m Σ_{j=1}^n |a_{ij}|² )^{1/2}.

The Frobenius norm is submultiplicative, compatible with the Euclidean norm, unitarily invariant, and self-adjoint.
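
The entrywise definition matches NumPy's built-in Frobenius norm; a minimal sketch:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])

# Entrywise definition: square root of the sum of squared entries.
fro = np.sqrt(np.sum(np.abs(A) ** 2))
assert np.isclose(fro, np.linalg.norm(A, "fro"))
assert np.isclose(fro, np.sqrt(30.0))  # 1 + 4 + 9 + 16 = 30
```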

Matrix norms defined via operator norms

A matrix norm is called induced by a vector norm, or a natural matrix norm, if it is derived from the vector norm as an operator norm, that is, if

  ‖A‖ = max_{x≠0} ‖A x‖ / ‖x‖ = max_{‖x‖=1} ‖A x‖

holds. Intuitively, such a matrix norm corresponds to the largest possible stretching factor produced by applying the matrix to a vector. As operator norms, these matrix norms are always submultiplicative and compatible with the vector norm from which they are derived. Indeed, an operator norm is the smallest among all matrix norms that are compatible with a given vector norm.

Row sum norm

The row sum norm is the matrix norm induced by the maximum norm and is defined as

  ‖A‖_∞ = max_{i=1,…,m} Σ_{j=1}^n |a_{ij}|.

The row sum norm is thus computed by determining the sum of the absolute values of each row and then taking the maximum of these values.
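
This computation can be written in one line; a sketch, assuming NumPy:

```python
import numpy as np

A = np.array([[1.0, -2.0], [3.0, 4.0]])

# Maximum over the rows of the sum of absolute values per row.
row_sum = np.max(np.sum(np.abs(A), axis=1))  # rows: 1+2=3, 3+4=7
assert row_sum == np.linalg.norm(A, np.inf) == 7.0
```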

Spectral norm

The spectral norm is the matrix norm induced by the Euclidean norm and is defined as

  ‖A‖₂ = √( λ_max(Aᴴ A) ),

where Aᴴ is the adjoint of A (in the real case, the transposed matrix) and λ_max(Aᴴ A) is the largest eigenvalue of the matrix product Aᴴ A. The spectral norm is unitarily invariant and self-adjoint.
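
The definition via the largest eigenvalue of Aᴴ A can be verified directly; a sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 2))

# Square root of the largest eigenvalue of A^H A (real case: A^T A)
# equals the spectral norm (ord=2).
lam_max = np.max(np.linalg.eigvalsh(A.T @ A))
assert np.isclose(np.sqrt(lam_max), np.linalg.norm(A, 2))
```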

Column sum norm

The column sum norm is the matrix norm induced by the sum norm and is defined as

  ‖A‖₁ = max_{j=1,…,n} Σ_{i=1}^m |a_{ij}|.

The column sum norm is thus computed by determining the sum of the absolute values of each column and then taking the maximum of these values.
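
Analogous to the row sum norm, with the sum taken down each column; a sketch, assuming NumPy:

```python
import numpy as np

A = np.array([[1.0, -2.0], [3.0, 4.0]])

# Maximum over the columns of the sum of absolute values per column.
col_sum = np.max(np.sum(np.abs(A), axis=0))  # columns: 1+3=4, 2+4=6
assert col_sum == np.linalg.norm(A, 1) == 6.0
```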

Matrix norms defined via singular values

Another way to derive matrix norms from vector norms is to consider a singular value decomposition of a matrix,

  A = U Σ Vᴴ,

into a unitary matrix U, a diagonal matrix Σ, and the adjoint Vᴴ of a unitary matrix V. The non-negative real entries σ₁, …, σ_r of the diagonal matrix Σ are then the singular values of A and equal to the square roots of the eigenvalues of Aᴴ A. The singular values are collected in a vector whose vector norm is then considered.

Schatten norms

The Schatten norms, more precisely Schatten p-norms, of a matrix are the p-norms of the vector of singular values of the matrix and are defined as

  ‖A‖_p = ( Σ_{i=1}^r σ_i^p )^{1/p}.

The Schatten-∞ norm corresponds to the spectral norm, the Schatten-2 norm to the Frobenius norm, and the Schatten-1 norm is also called the trace norm. All Schatten norms are submultiplicative, unitarily invariant, and self-adjoint. The norm dual to a Schatten p-norm is the Schatten q-norm with 1/p + 1/q = 1.
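
These correspondences can be checked from the singular values directly; a sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 3))
sigma = np.linalg.svd(A, compute_uv=False)  # singular values

def schatten(sigma, p):
    """Schatten p-norm: the p-norm of the singular value vector."""
    return np.sum(sigma ** p) ** (1.0 / p)

assert np.isclose(schatten(sigma, 2), np.linalg.norm(A, "fro"))  # Schatten-2
assert np.isclose(sigma.max(), np.linalg.norm(A, 2))             # Schatten-inf
assert np.isclose(np.sum(sigma), np.linalg.norm(A, "nuc"))       # Schatten-1
```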

Ky Fan norms

The Ky Fan norm of order k of a matrix is the sum of its first k singular values and is defined as

  ‖A‖ = Σ_{i=1}^k σ_i,

where the singular values are ordered by decreasing size. The first Ky Fan norm thus corresponds to the spectral norm, and the r-th Ky Fan norm, where r is the number of singular values, to the Schatten-1 norm (trace norm). All Ky Fan norms are unitarily invariant and self-adjoint.
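
Both limiting cases of the Ky Fan norms can be verified numerically; a sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((3, 3))
# np.linalg.svd returns singular values in decreasing order.
sigma = np.linalg.svd(A, compute_uv=False)

def ky_fan(k):
    """Ky Fan norm of order k: sum of the k largest singular values."""
    return np.sum(sigma[:k])

assert np.isclose(ky_fan(1), np.linalg.norm(A, 2))      # spectral norm
assert np.isclose(ky_fan(3), np.linalg.norm(A, "nuc"))  # trace norm
```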

Applications

Power series of matrices

Matrix norms are used, among other things, to investigate the convergence of power series of matrices. For example, the power series for the inverse of I − A, where A is a square matrix and I the identity matrix,

  (I − A)^{-1} = Σ_{k=0}^∞ A^k,

converges if ‖A‖ < 1 for some submultiplicative matrix norm. This statement even holds for bounded linear operators on Banach spaces and is known as the Neumann series. With the aid of matrix norms it can also be shown that the matrix exponential

  e^A = Σ_{k=0}^∞ A^k / k!

is always convergent and therefore well defined as a function on the space of square matrices. Furthermore, matrix norms are useful for determining the number of terms of a power series that is necessary to compute a matrix function up to a given accuracy.
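
The convergence of the Neumann series under the condition ‖A‖ < 1 can be demonstrated with partial sums; a sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((3, 3))
A /= 2 * np.linalg.norm(A, 2)  # rescale so that ||A||_2 = 0.5 < 1

# Partial sums of the Neumann series sum_k A^k converge to (I - A)^{-1}.
S = np.zeros_like(A)
term = np.eye(3)
for _ in range(60):
    S += term
    term = term @ A  # next power of A

inv = np.linalg.inv(np.eye(3) - A)
assert np.allclose(S, inv)
```

With ‖A‖₂ = 1/2, the remainder after 60 terms is bounded by (1/2)^60 / (1 − 1/2), far below the comparison tolerance.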

Perturbation theory and conditioning

Another important field of application of matrix norms is numerical error analysis. Here one investigates the sensitivity of a numerical computation, for example the solution of a linear system of equations, with respect to small changes in the input data, such as the entries of the matrix. The perturbation lemma provides an estimate for the inverse of a perturbed matrix,

  ‖(A + B)^{-1}‖ ≤ ‖A^{-1}‖ / ( 1 − ‖A^{-1} B‖ )   for ‖A^{-1} B‖ < 1,

in a submultiplicative matrix norm. Using the Bauer–Fike theorem, a submultiplicative matrix norm also yields an estimate of the change in the eigenvalues of a matrix caused by perturbations of its entries. These estimates lead to the central concept of the condition number of a (regular) matrix,

  κ(A) = ‖A‖ · ‖A^{-1}‖,

which describes the factor by which errors in the input data are amplified in the numerical solution.
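
The condition number with respect to the spectral norm is available directly in NumPy; a sketch on a nearly singular, hence ill-conditioned, matrix:

```python
import numpy as np

# A nearly singular matrix has a large condition number.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])

# kappa(A) = ||A|| * ||A^{-1}||, here in the spectral norm.
kappa = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
assert np.isclose(kappa, np.linalg.cond(A, 2))
assert kappa > 1e3  # relative input errors may be amplified thousandsfold
```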
