Determinant

In linear algebra, the determinant is a function that assigns a scalar to a square matrix or, more generally, to an endomorphism of a finite-dimensional vector space. For example, the $2 \times 2$ matrix

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$

has the determinant

$$\det A = ad - bc.$$

Formulas for larger matrices are discussed below.

Determinants can be used to decide whether a system of linear equations has a unique solution, and Cramer's rule expresses that solution explicitly in terms of determinants. The system of equations is uniquely solvable if and only if the determinant of its coefficient matrix is nonzero. Accordingly, a square matrix with entries from a field is invertible if and only if its determinant is nonzero.

To $n$ vectors in $\mathbb{R}^n$ one can assign the determinant of the square matrix whose columns are those vectors. With this definition, the sign of the determinant associated with a basis can be used to define the notion of orientation in Euclidean space. The absolute value of the determinant equals the volume of the parallelepiped (also called a parallelotope or Spat) spanned by these vectors. More generally: if the linear map $f$ is represented by the matrix $A$ and $S$ is an arbitrary measurable subset, then the $n$-dimensional volume of $f(S)$ is given by $|\det A| \cdot \operatorname{vol}(S)$.


History

Historically, determinants (from Latin determinare, "to determine") were considered before matrices. Originally a determinant was defined as a property of a system of linear equations: the determinant "determines" whether the system has a unique solution, which is the case exactly when the determinant is nonzero. Determinants of $2 \times 2$ matrices were treated by Cardano at the end of the 16th century, and those of larger matrices by Leibniz about 100 years later. The axiomatic treatment of the determinant as a function of $n \cdot n$ independent variables was first given by Karl Weierstrass in his Berlin lectures (no later than 1864, possibly earlier). Ferdinand Georg Frobenius built on this in his Berlin lectures of the summer term of 1874, where he was probably the first to systematically derive the Laplace expansion theorem from these axioms.

Definition

Determinant of a square matrix ( axiomatic description)

A mapping $\det$ from the space of square $n \times n$ matrices to the underlying field maps each matrix to its determinant if it satisfies the following three properties (axioms of Karl Weierstrass), where a square matrix is written column-wise as $A = (v_1, \ldots, v_n)$:

  • It is multilinear, i.e., linear in each column: $\det(v_1, \ldots, b + c, \ldots, v_n) = \det(v_1, \ldots, b, \ldots, v_n) + \det(v_1, \ldots, c, \ldots, v_n)$ and $\det(v_1, \ldots, r \cdot b, \ldots, v_n) = r \cdot \det(v_1, \ldots, b, \ldots, v_n)$.
  • It is alternating, i.e., when the same argument appears in two columns, the determinant equals 0: $\det(v_1, \ldots, w, \ldots, w, \ldots, v_n) = 0$.
  • It is normalized, i.e., the identity matrix has determinant 1: $\det(E_n) = 1$.

It can be proved, and Karl Weierstrass did so in 1864 (or even earlier), that there is one and only one such normalized alternating multilinear form on the algebra of $n \times n$ matrices over the underlying field, namely the determinant function (Weierstrass's characterization of the determinant). The geometric interpretation mentioned above (volume and orientation) also follows from these axioms.

Leibniz formula

For an $n \times n$ matrix $A = (a_{ij})$, the determinant was defined by Gottfried Wilhelm Leibniz through what is now known as the Leibniz formula:

$$\det A = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i, \sigma(i)}$$

The sum is computed over all permutations $\sigma$ of the symmetric group $S_n$ of degree $n$; $\operatorname{sgn}(\sigma)$ denotes the sign of the permutation ($+1$ if $\sigma$ is an even permutation, $-1$ if it is odd), and $\sigma(i)$ is the value of the permutation at position $i$.

Whether a permutation is even or odd can be read off from the number of transpositions needed to generate it: an even number of transpositions means the permutation is even, an odd number means it is odd.

Example

  • A permutation generated by two transpositions is therefore even.
  • A permutation generated by a single transposition is therefore odd.

The Leibniz formula contains $n!$ summands and therefore becomes unwieldy for $n$ larger than 3. It is, however, well suited for proving statements about determinants.
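To make the formula concrete, here is a minimal sketch in plain Python (function names are illustrative) that evaluates the Leibniz formula directly, computing the sign of each permutation by counting inversions; it is practical only for small matrices because of the factorial number of summands:

```python
from itertools import permutations

def sign(perm):
    """Sign of a permutation: +1 if even, -1 if odd (counted via inversions)."""
    inversions = sum(
        1
        for i in range(len(perm))
        for j in range(i + 1, len(perm))
        if perm[i] > perm[j]
    )
    return -1 if inversions % 2 else 1

def det_leibniz(a):
    """Determinant via the Leibniz formula:
    sum over all permutations sigma of sgn(sigma) * prod_i a[i][sigma(i)]."""
    n = len(a)
    total = 0
    for perm in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= a[i][perm[i]]
        total += sign(perm) * prod
    return total

# det of [[1,2],[3,4]] is 1*4 - 2*3 = -2
```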

An alternative way of writing the Leibniz formula uses the Levi-Civita symbol and Einstein's summation convention:

$$\det A = \varepsilon_{i_1 \ldots i_n} \, a_{1 i_1} \cdots a_{n i_n}$$

Generalization

In the same way, one can define the determinant for matrices of a matrix ring whose entries lie in a commutative ring with unity. This is done using a certain antisymmetric multilinear mapping: if $R$ is a commutative ring and $M = R^n$ the $n$-dimensional free module over $R$, then let

$$\det: (R^n)^n \to R$$

be the uniquely determined mapping with the following properties:

  • $\det$ is $R$-linear in each of the $n$ arguments.
  • $\det$ is antisymmetric, i.e., if two of the $n$ arguments are equal, it yields zero.
  • $\det(e_1, \ldots, e_n) = 1$, where $e_i$ is the element of $R^n$ having a 1 in the $i$-th coordinate and zeros elsewhere.

A mapping with the first two properties is also called a determinant function, volume, or alternating multilinear form. One obtains the determinant by identifying $(R^n)^n$ with the space of square matrices in the natural way:

$$\det: R^{n \times n} \cong (R^n)^n \to R$$

Determinant of an endomorphism

Let $V$ be an $n$-dimensional vector space over a field $K$. (More generally, one can also consider a commutative ring $R$ with unit element and a free module of rank $n$ over it.)

The determinant of a linear map $f: V \to V$ is the determinant of a matrix representation of $f$ with respect to a basis of $V$. It is independent of the choice of basis.

The definition can be formulated without using matrices as follows: let $\Delta$ be a determinant function, i.e., a nonzero alternating multilinear form on $V^n$. Then $\det f$ is determined by $f^* \Delta = (\det f) \cdot \Delta$, where $f^*$ denotes the pullback of multilinear forms by $f$. Let $(v_1, \ldots, v_n)$ be a basis of $V$. Then

$$\det f = \frac{\Delta(f(v_1), \ldots, f(v_n))}{\Delta(v_1, \ldots, v_n)}.$$

This value is independent of the choice of $\Delta$ and of the basis. Interpreted geometrically, the volume of the parallelotope spanned by $f(v_1), \ldots, f(v_n)$ is obtained from the volume of the parallelotope spanned by $v_1, \ldots, v_n$ by multiplication with the factor $|\det f|$.

An alternative definition is the following: let $\Lambda^n V$ be the $n$-th exterior power of $V$ and $\Lambda^n f$ the $n$-th exterior power of $f$ (which results from the universal property of the exterior algebra as the continuation of $f$, restricted to the component of degree $n$). Since $\Lambda^n V$ is a one-dimensional vector space (or a free module of rank 1), the linear map $\Lambda^n f$ can be identified with an element of $K$; this element is the determinant of $f$.

Calculation

Matrices up to size 3 × 3

For a $1 \times 1$ matrix, consisting of a single coefficient, $\det(a_{11}) = a_{11}$.

If $A$ is a $2 \times 2$ matrix, then

$$\det A = a_{11} a_{22} - a_{12} a_{21}.$$

For a $3 \times 3$ matrix $A$, the formula is

$$\det A = a_{11} a_{22} a_{33} + a_{12} a_{23} a_{31} + a_{13} a_{21} a_{32} - a_{13} a_{22} a_{31} - a_{11} a_{23} a_{32} - a_{12} a_{21} a_{33}.$$

If one wants to calculate this determinant by hand, the rule of Sarrus provides a simple scheme.
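As an illustration, the rule of Sarrus can be written out as a short Python sketch (the function name is chosen for illustration): the three "descending" diagonal products are added and the three "ascending" ones subtracted:

```python
def det3_sarrus(a):
    """3x3 determinant via the rule of Sarrus: sum of the three 'descending'
    diagonal products minus the three 'ascending' ones."""
    return (
        a[0][0] * a[1][1] * a[2][2]
        + a[0][1] * a[1][2] * a[2][0]
        + a[0][2] * a[1][0] * a[2][1]
        - a[0][2] * a[1][1] * a[2][0]
        - a[0][0] * a[1][2] * a[2][1]
        - a[0][1] * a[1][0] * a[2][2]
    )

# a diagonal matrix has the product of its diagonal as determinant
```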

Triple product

If $A$ is a $3 \times 3$ matrix whose columns are $a_1, a_2, a_3$, its determinant can also be calculated using the scalar triple product: $\det A = a_1 \cdot (a_2 \times a_3)$.

Gaussian elimination method for computing determinants

In general, determinants can be computed with the Gaussian elimination method using the following rules:

  • If $A$ is a triangular matrix, then $\det A$ is the product of the main diagonal elements.
  • If $B$ results from $A$ by interchanging two rows or columns, then $\det B = -\det A$.
  • If $B$ results from $A$ by adding a multiple of one row or column to another row or column, then $\det B = \det A$.
  • If $B$ results from $A$ by multiplying a row or column by the factor $r$, then $\det B = r \cdot \det A$.

Starting from an arbitrary square matrix, one uses the last three of these four rules to convert the matrix into an upper triangular matrix and then computes the determinant as the product of the diagonal elements, taking the accumulated sign changes and scalar factors into account.
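The procedure just described can be sketched in Python (a minimal illustration with names chosen for clarity; pivots are selected only to avoid zeros, not for numerical stability): the matrix is reduced to upper triangular form while a sign records the row swaps, and the determinant is the signed product of the diagonal:

```python
def det_gauss(a):
    """Determinant via Gaussian elimination: reduce to an upper triangular
    matrix using row swaps (each flips the sign) and row additions (which
    leave the determinant unchanged), then multiply the diagonal."""
    m = [row[:] for row in a]          # work on a copy
    n = len(m)
    det_sign = 1
    for col in range(n):
        # find a pivot; if none exists the determinant is zero
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return 0
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            det_sign = -det_sign       # a row swap changes the sign
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= factor * m[col][c]
    result = det_sign
    for i in range(n):
        result *= m[i][i]
    return result
```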

The computation of determinants via the LU decomposition $A = L \cdot U$ rests on the same principle. Since both $L$ and $U$ are triangular matrices, their determinants are the products of their diagonal elements, which for $L$ are all normalized to 1. By the determinant product theorem, the determinant is therefore

$$\det A = \det L \cdot \det U = \det U.$$

Laplace expansion theorem

Using the Laplace expansion theorem, one can "expand" the determinant of an $n \times n$ matrix along a row or column. The two formulas are

$$\det A = \sum_{i=1}^{n} (-1)^{i+j} a_{ij} \det A_{ij} \qquad \text{(expansion along the } j\text{-th column)},$$

$$\det A = \sum_{j=1}^{n} (-1)^{i+j} a_{ij} \det A_{ij} \qquad \text{(expansion along the } i\text{-th row)},$$

where $A_{ij}$ is the $(n-1) \times (n-1)$ submatrix of $A$ obtained by deleting the $i$-th row and the $j$-th column. The product $(-1)^{i+j} \det A_{ij}$ is called a cofactor.

Strictly speaking, the expansion theorem only provides a method to compute the summands of the Leibniz formula in a specific order. The matrix dimension is reduced by one with each application. If desired, the method can be applied repeatedly until a scalar is obtained. An example is the expansion of a $3 \times 3$ determinant along the first row:

$$\det A = a_{11} \det \begin{pmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{pmatrix} - a_{12} \det \begin{pmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{pmatrix} + a_{13} \det \begin{pmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{pmatrix};$$

the general case works analogously.
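A recursive Python sketch of repeated expansion along the first row (illustrative names; like the Leibniz formula this has factorial cost, so it is only suitable for small matrices):

```python
def det_laplace(a):
    """Determinant by Laplace (cofactor) expansion along the first row:
    det A = sum_j (-1)^j * a[0][j] * det(minor(0, j))."""
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in a[1:]]
        total += (-1) ** j * a[0][j] * det_laplace(minor)
    return total
```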

The Laplace expansion theorem can be generalized in the following way: instead of expanding along only one row or column, one can expand along several rows or columns simultaneously. The formula for this is

$$\det A = \sum_{J} (-1)^{\sum I + \sum J} \det A_{IJ} \det A_{I'J'}$$

with the following notation: $I$ and $J$ are subsets of $\{1, \ldots, n\}$ of equal size, and $A_{IJ}$ is the submatrix of $A$ consisting of the rows with indices from $I$ and the columns with indices from $J$. $I'$ and $J'$ denote the complements of $I$ and $J$, and $\sum I$ is the sum of the indices in $I$. For the expansion along the rows with indices from $I$, the sum runs over all sets $J$ of column indices whose number of elements equals the number of rows along which one expands. For the expansion along the columns with indices from $J$, the sum runs over $I$. The number of summands is the binomial coefficient $\binom{n}{k}$, where $k$ is the number of rows or columns along which one expands.

Efficiency: for a matrix of dimension $n$, the cost of the calculation via the Laplace expansion theorem is of order $\mathcal{O}(n!)$, whereas conventional methods require only $\mathcal{O}(n^3)$ and can be designed even more efficiently (see, for example, the Strassen algorithm). Nevertheless, the Laplace expansion theorem works well for small matrices and for matrices with many zeros.

Properties

Determinants product set

The determinant is a multiplicative mapping in the sense that

$$\det(A \cdot B) = \det A \cdot \det B$$

for all square matrices $A$ and $B$ of the same size. This means that the map $\det$ is a group homomorphism from the general linear group into the unit group of the field. The kernel of this map is the special linear group. More generally, the Cauchy-Binet theorem applies to the determinant of a square matrix that is the product of two (not necessarily square) matrices.

More generally, a formula for the computation of minors of order $k$ of a product of two matrices arises as a direct consequence of the Cauchy-Binet theorem. If $A$ is an $m \times n$ matrix and $B$ an $n \times m$ matrix, and if $I$ and $J$ are $k$-element subsets of $\{1, \ldots, m\}$, then with the notation as in the generalized expansion theorem

$$\det\bigl((AB)_{IJ}\bigr) = \sum_{K} \det A_{IK} \det B_{KJ},$$

where the sum runs over all $k$-element subsets $K$ of $\{1, \ldots, n\}$. The case $m = n = k$ yields the Cauchy-Binet theorem (of which the usual determinant product theorem is a special case), and the case $k = 1$ yields the formula for ordinary matrix multiplication.
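The product theorem $\det(AB) = \det A \cdot \det B$ can be checked numerically for the $2 \times 2$ case with a few lines of Python (a toy check, not a proof; names are illustrative):

```python
def det2(a):
    """Determinant of a 2x2 matrix."""
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

def matmul2(a, b):
    """Product of two 2x2 matrices."""
    return [
        [sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)
    ]

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
# det(A*B) equals det(A) * det(B)
```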

Multiplication by scalars

It is easy to see that $\det(r \cdot A) = r^n \det A$ for an $n \times n$ matrix $A$ and a scalar $r$, and thus in particular $\det(-A) = (-1)^n \det A$.

Existence of the inverse matrix

A matrix $A$ is invertible (that is, regular) if and only if $\det A$ is a unit of the underlying ring (for a field: if and only if $\det A$ is nonzero). If $A$ is invertible, then $\det(A^{-1}) = (\det A)^{-1}$.
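For the $2 \times 2$ case, the connection between invertibility and the determinant can be made explicit in a small Python sketch using exact rational arithmetic (names are illustrative): the inverse is the adjugate divided by the determinant, and the division fails exactly when the determinant is zero:

```python
from fractions import Fraction

def inverse2(a):
    """Inverse of a 2x2 matrix via the adjugate: A^{-1} = adj(A) / det(A).
    Raises ZeroDivisionError when det(A) = 0, i.e. when A is not invertible."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    d = Fraction(1, 1) / det  # fails exactly when A is singular
    return [
        [a[1][1] * d, -a[0][1] * d],
        [-a[1][0] * d, a[0][0] * d],
    ]
```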

Transposed matrix

A matrix and its transpose have the same determinant:

$$\det A = \det A^{T}.$$

Similar matrices

If $A$ and $B$ are similar, i.e., if there exists an invertible matrix $P$ such that $B = P^{-1} A P$, then their determinants agree, because

$$\det B = \det(P^{-1} A P) = \det(P^{-1}) \cdot \det A \cdot \det P = \det A.$$

Therefore, one can define the determinant of a linear self-map $f: V \to V$ (with $V$ a finite-dimensional vector space) independently of a coordinate representation, by choosing a basis of $V$, describing $f$ by a matrix relative to this basis, and taking the determinant of this matrix. The result is independent of the chosen basis.

There are matrices that have the same determinant but are not similar.

Triangular matrices

In a triangular matrix, in which all entries below or above the main diagonal are zero, the determinant is the product of all diagonal elements:

$$\det A = a_{11} \cdot a_{22} \cdots a_{nn} = \prod_{i=1}^{n} a_{ii}.$$

Block matrices

For the determinant of a $2 \times 2$ block matrix

$$\begin{pmatrix} A & B \\ C & D \end{pmatrix}$$

with square blocks $A$ and $D$, one can state formulas under certain conditions that exploit the block structure. For $B = 0$ or $C = 0$, the generalized expansion theorem yields:

$$\det \begin{pmatrix} A & 0 \\ C & D \end{pmatrix} = \det \begin{pmatrix} A & B \\ 0 & D \end{pmatrix} = \det A \cdot \det D.$$

This formula is also known as the box rule.
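The box rule can be checked numerically for $2 \times 2$ blocks with a short Python sketch (a toy check with illustrative names; the full determinant is computed by cofactor expansion):

```python
def det(a):
    """Determinant by cofactor expansion along the first row."""
    if len(a) == 1:
        return a[0][0]
    return sum(
        (-1) ** j * a[0][j] * det([row[:j] + row[j + 1:] for row in a[1:]])
        for j in range(len(a))
    )

def block_upper(a, b, d):
    """Assemble the block matrix [[A, B], [0, D]] from 2x2 blocks."""
    zero = [[0, 0], [0, 0]]
    top = [ra + rb for ra, rb in zip(a, b)]
    bottom = [rz + rd for rz, rd in zip(zero, d)]
    return top + bottom

# det([[A, B], [0, D]]) equals det(A) * det(D), regardless of B
```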

If $A$ is invertible, it follows from the decomposition

$$\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \begin{pmatrix} A & 0 \\ C & E \end{pmatrix} \begin{pmatrix} E & A^{-1} B \\ 0 & D - C A^{-1} B \end{pmatrix}$$

(with $E$ the identity matrix) that

$$\det \begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det A \cdot \det(D - C A^{-1} B).$$

If $D$ is invertible, the analogous formula is

$$\det \begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det D \cdot \det(A - B D^{-1} C).$$

In the special case where all four blocks have the same size and commute pairwise, the determinant product theorem yields

$$\det \begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(AD - BC).$$

Here one first views the block matrix as a $2 \times 2$ matrix over a commutative subring $R$ of the ring of all matrices with entries from the field (for example, the subring generated by the four blocks); $AD - BC$ is its determinant over $R$, and applying the ordinary determinant to this matrix gives the result. This formula also applies if $A$ is not invertible, and it generalizes to matrices composed of $k^2$ pairwise commuting blocks.

Eigenvalues ​​and characteristic polynomial

If

$$\chi_A(x) = \det(x E - A) = x^n + c_{n-1} x^{n-1} + \cdots + c_1 x + c_0$$

is the characteristic polynomial of the $n \times n$ matrix $A$, then $\det A = (-1)^n c_0 = (-1)^n \chi_A(0)$.

If the characteristic polynomial decomposes into linear factors (with the $\lambda_i$ not necessarily distinct),

$$\chi_A(x) = (x - \lambda_1)(x - \lambda_2) \cdots (x - \lambda_n),$$

then in particular

$$\det A = \lambda_1 \cdot \lambda_2 \cdots \lambda_n.$$

If $\lambda_1, \ldots, \lambda_r$ are the distinct eigenvalues of the matrix $A$ with $d_i$-dimensional generalized eigenspaces, then

$$\det A = \lambda_1^{d_1} \cdot \lambda_2^{d_2} \cdots \lambda_r^{d_r}.$$
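For a concrete $2 \times 2$ check in Python (illustrative names): the characteristic polynomial of a triangular matrix has the diagonal entries as roots, and the product of these eigenvalues equals the determinant:

```python
def det2(a):
    """Determinant of a 2x2 matrix."""
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

def charpoly2(a, x):
    """Characteristic polynomial of a 2x2 matrix A evaluated at x:
    x^2 - trace(A)*x + det(A)."""
    return x * x - (a[0][0] + a[1][1]) * x + det2(a)

# triangular matrix: eigenvalues 2 and 3 sit on the diagonal,
# and their product 2*3 = 6 equals the determinant
a = [[2, 1], [0, 3]]
```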

Derivation

The determinant of real square matrices of fixed dimension $n$ is a polynomial function $\det: \mathbb{R}^{n \times n} \to \mathbb{R}$ and as such is differentiable everywhere. Its derivative can be represented with the help of Jacobi's formula:

$$\frac{\mathrm{d}}{\mathrm{d}t} \det A(t) = \operatorname{tr}\Bigl(\operatorname{adj}(A(t)) \, \frac{\mathrm{d}A(t)}{\mathrm{d}t}\Bigr),$$

where $\operatorname{adj}(A)$ denotes the adjugate (complementary matrix) of $A$. In particular, for invertible $A$ it follows that

$$\frac{\mathrm{d}}{\mathrm{d}t} \det A(t) = \det A(t) \cdot \operatorname{tr}\Bigl(A(t)^{-1} \, \frac{\mathrm{d}A(t)}{\mathrm{d}t}\Bigr),$$

or, in simplified form,

$$\det(A + X) \approx \det A + \det A \cdot \operatorname{tr}(A^{-1} X),$$

provided the entries of the matrix $X$ are sufficiently small. The special case where $A$ is equal to the identity matrix $E$ yields

$$\det(E + X) \approx 1 + \operatorname{tr}(X).$$
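Jacobi's formula can be verified numerically in the $2 \times 2$ case with a small Python sketch (a toy check with illustrative names): the trace expression is compared against a finite-difference approximation of $\frac{\mathrm{d}}{\mathrm{d}t}\det(A + tB)$ at $t = 0$:

```python
def det2(a):
    """Determinant of a 2x2 matrix."""
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

def adj2(a):
    """Adjugate (complementary matrix) of a 2x2 matrix."""
    return [[a[1][1], -a[0][1]], [-a[1][0], a[0][0]]]

def trace_prod2(x, y):
    """tr(X * Y) for 2x2 matrices."""
    return sum(x[i][k] * y[k][i] for i in range(2) for k in range(2))

def jacobi_derivative(a, b):
    """Right-hand side of Jacobi's formula: d/dt det(A + tB) at t = 0
    equals tr(adj(A) * B)."""
    return trace_prod2(adj2(a), b)

def numeric_derivative(a, b, h=1e-6):
    """Finite-difference approximation of the left-hand side."""
    shifted = [[a[i][j] + h * b[i][j] for j in range(2)] for i in range(2)]
    return (det2(shifted) - det2(a)) / h
```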

Similar terms

The permanent is an "unsigned" analogue of the determinant, but it is used much less frequently.
