Matrix (mathematics)

In mathematics, a matrix (plural: matrices) is a rectangular array (table) of elements, usually mathematical objects such as numbers. These objects can then be combined in certain ways, for instance by adding matrices or multiplying them together. Matrices can have any dimensions.

Matrices are a key concept in linear algebra and appear in almost all areas of mathematics. They represent relationships in which linear combinations play a role in a clear way and thus facilitate computation and reasoning. In particular, they are used to represent linear transformations and to describe and solve systems of linear equations.

The term matrix was introduced in 1850 by James Joseph Sylvester.

In such an arrangement, the elements are displayed in rows and columns.


Terms and first properties

Notation

As notation, the arrangement of the elements in rows and columns between a large opening and a closing bracket has become standard. As a rule, round parentheses are used, but square brackets are also in use. For example,

$$\begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{pmatrix} \quad\text{and}\quad \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \end{bmatrix}$$

denote a matrix with two rows and three columns (in round-bracket and square-bracket notation, respectively). The matrix itself is denoted by capital letters (sometimes bold, or in handwriting occasionally single or double underlined), preferably $A$. A general matrix with $m$ rows and $n$ columns might look like this:

$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}$$

Elements of the matrix

The elements of the matrix are also called entries or components of the matrix. They come from a set $K$, which is generally a field or a ring. One speaks of a matrix over $K$.

If one chooses the set of real numbers for $K$, one speaks of a real matrix; with complex numbers, of a complex matrix.

A particular element is described by two indices: the element in the first row and the first column is usually denoted by $a_{11}$. In general, $a_{ij}$ denotes the element in the $i$-th row and the $j$-th column. The matrix as a whole is therefore sometimes written $(a_{ij})$. If there is a risk of confusion, the two indices are separated by a comma; for example, the matrix element in the first row and the eleventh column is denoted $a_{1,11}$.

Individual rows and columns are often referred to as row and column vectors. For example, in the matrix

$$A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix},$$

$\begin{pmatrix} 1 \\ 3 \end{pmatrix}$ and $\begin{pmatrix} 2 \\ 4 \end{pmatrix}$ are the columns (column vectors), and $\begin{pmatrix} 1 & 2 \end{pmatrix}$ and $\begin{pmatrix} 3 & 4 \end{pmatrix}$ are the rows (row vectors).

Type

The type of a matrix is given by the number of its rows and columns. A matrix with $m$ rows and $n$ columns is called an $m \times n$ matrix (read "m by n" or "m cross n" matrix). If the numbers of rows and columns agree, the matrix is called a square matrix.

A matrix consisting of only one column or one row is usually regarded as a vector. A vector with $n$ elements can, depending on the context, be represented as a single-column $n \times 1$ matrix or a single-row $1 \times n$ matrix. Besides the terms column vector and row vector, the terms column matrix and row matrix are also in use. A $1 \times 1$ matrix is both a column and a row matrix and is regarded as a scalar.

Formal representation

A matrix is a doubly indexed family. Formally, it is a function

$$A \colon \{1, \dots, m\} \times \{1, \dots, n\} \to K, \quad (i,j) \mapsto a_{ij},$$

which assigns to each index pair $(i,j)$ the entry $a_{ij}$ as its function value. For example, the index pair $(1,2)$ is assigned the entry $a_{12}$. The function value $a_{ij}$ is thus the entry in the $i$-th row and the $j$-th column, and the variables $m$ and $n$ correspond to the number of rows and columns, respectively. This formal definition of a matrix as a function should not be confused with the fact that matrices themselves describe linear maps.

The set of all $m \times n$ matrices over the set $K$ is also written in the usual mathematical notation as $K^{\{1,\dots,m\} \times \{1,\dots,n\}}$; for this, the short notation $K^{m \times n}$ has become standard. The notations $K^{m,n}$ or $M(m \times n, K)$ are also used, though more rarely.

Addition and multiplication

Elementary arithmetic operations are defined on the space of matrices.

Matrix addition

Two matrices can be added if they are of the same type, that is, if they have the same number of rows and the same number of columns. The sum of two matrices is calculated by adding the corresponding entries of the two matrices:

$$A + B = (a_{ij}) + (b_{ij}) := (a_{ij} + b_{ij})$$

Calculation example:

$$\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} + \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} = \begin{pmatrix} 6 & 8 \\ 10 & 12 \end{pmatrix}$$

In linear algebra, the entries of the matrices are usually elements of a field, such as the real or complex numbers. In this case, matrix addition is associative and commutative and has the zero matrix as its neutral element. In general, however, matrix addition possesses these properties only if the entries are elements of an algebraic structure having them.
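
The entrywise rule above is easy to check numerically. The following is a minimal sketch using NumPy (an assumption of this illustration, not part of the article); adding arrays of matching shape performs exactly the entrywise sum.

```python
import numpy as np

# Two matrices of the same type (2 rows, 3 columns)
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[6, 5, 4],
              [3, 2, 1]])

# Matrix addition is entrywise: (A + B)[i, j] == A[i, j] + B[i, j]
S = A + B
print(S)   # [[7 7 7]
           #  [7 7 7]]
```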

Scalar multiplication

A matrix is multiplied by a scalar by multiplying each entry of the matrix by the scalar:

$$\lambda \cdot A = \lambda \cdot (a_{ij}) := (\lambda \cdot a_{ij})$$

Calculation example:

$$5 \cdot \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} = \begin{pmatrix} 5 & 10 \\ 15 & 20 \end{pmatrix}$$

Scalar multiplication should not be confused with the scalar product. In order to perform scalar multiplication, the scalar $\lambda$ and the entries of the matrix must come from the same ring $K$. The set of $m \times n$ matrices is in this case a (left) module over $K$.
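
As a numerical sketch (again using NumPy purely for illustration), multiplying an array by a scalar multiplies every entry:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
lam = 5

# Scalar multiplication: every entry of A is multiplied by lam
print(lam * A)   # [[ 5 10]
                 #  [15 20]]
```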

Matrix multiplication

Two matrices can be multiplied when the number of columns of the left matrix matches the number of rows of the right matrix. The product of an $l \times m$ matrix $A$ and an $m \times n$ matrix $B$ is an $l \times n$ matrix $C$ whose entries are computed by applying the product-sum formula, similar to the dot product, to pairs consisting of a row vector of the first matrix and a column vector of the second matrix:

$$c_{ij} = \sum_{k=1}^{m} a_{ik} \, b_{kj}$$

Matrix multiplication is not commutative; that is, in general $A \cdot B \neq B \cdot A$. Matrix multiplication is, however, associative:

$$(A \cdot B) \cdot C = A \cdot (B \cdot C)$$

A chain of matrix multiplications can therefore be bracketed in different ways. The problem of finding a bracketing that leads to a computation with the minimum number of elementary arithmetic operations is an optimization problem. Matrix addition and matrix multiplication also satisfy the two distributive laws:

$$(A + B) \cdot C = A \cdot C + B \cdot C$$

for all $l \times m$ matrices $A, B$ and $m \times n$ matrices $C$, and

$$A \cdot (B + C) = A \cdot B + A \cdot C$$

for all $l \times m$ matrices $A$ and $m \times n$ matrices $B, C$.
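
These algebraic laws can be checked numerically on small examples; the sketch below (NumPy, assumed here for illustration) verifies associativity and distributivity and shows that commutativity fails in general.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(2, 3))
B = rng.integers(-5, 5, size=(3, 4))
C = rng.integers(-5, 5, size=(4, 2))

# Associativity: (A B) C == A (B C)
assert np.array_equal((A @ B) @ C, A @ (B @ C))

# Distributivity: A (B + B') == A B + A B'
B2 = rng.integers(-5, 5, size=(3, 4))
assert np.array_equal(A @ (B + B2), A @ B + A @ B2)

# Non-commutativity: even for square matrices, X Y != Y X in general
X = np.array([[0, 1], [0, 0]])
Y = np.array([[1, 0], [0, 0]])
print(X @ Y)   # [[0 0], [0 0]]
print(Y @ X)   # [[0 1], [0 0]]
```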

A square matrix can be multiplied by itself; analogously to powers of real numbers, one introduces the abbreviated matrix powers $A^2 = A \cdot A$, $A^3 = A \cdot A \cdot A$, and so on. It is therefore also meaningful to substitute square matrices into polynomials. For further remarks on this, see the characteristic polynomial. For easier calculation, the Jordan normal form can be used here. Square matrices over $\mathbb{R}$ or $\mathbb{C}$ can furthermore be substituted even into power series; see the matrix exponential. A special role with respect to matrix multiplication is played by the square matrices over a ring $R$, i.e. $A \in R^{n \times n}$. Together with matrix addition and multiplication, these themselves again form a ring, called the matrix ring.
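
A short sketch (NumPy assumed, for illustration only) of matrix powers and of substituting a square matrix into a polynomial:

```python
import numpy as np

A = np.array([[2, 1],
              [0, 3]])

# Matrix powers: A^2 = A @ A, A^3 = A @ A @ A, ...
A2 = np.linalg.matrix_power(A, 2)
assert np.array_equal(A2, A @ A)

# Substituting A into the polynomial p(t) = t^2 - 5 t + 6:
# this is the characteristic polynomial of A, so p(A) is the zero matrix
I = np.eye(2, dtype=int)
p_of_A = A @ A - 5 * A + 6 * I
print(p_of_A)   # [[0 0]
                #  [0 0]]
```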

Other arithmetic operations

The transposed matrix

The transpose of an $m \times n$ matrix $A$ is the $n \times m$ matrix $A^T$; that is, for

$$A = (a_{ij})$$

the transpose is

$$A^T = (a_{ji}).$$

One writes the first row as the first column, the second row as the second column, and so on. The matrix is, so to speak, "mirrored" across its main diagonal.

Example:

$$\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}^T = \begin{pmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{pmatrix}$$

The following calculation rules apply:

$$(A + B)^T = A^T + B^T, \qquad (\lambda A)^T = \lambda A^T, \qquad (A \cdot B)^T = B^T \cdot A^T, \qquad \left(A^T\right)^T = A$$

The transposed matrix is sometimes also called the toppled matrix.

For matrices over $\mathbb{R}$, the adjoint matrix is precisely the transposed matrix.
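
In NumPy (used here only as an illustrative sketch), the transpose is available as the attribute `.T`, and the rules above can be checked directly:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[1, 0],
              [2, 1],
              [0, 3]])

print(A.T)                                   # rows become columns
assert np.array_equal((A @ B).T, B.T @ A.T)  # (A B)^T = B^T A^T
assert np.array_equal((A.T).T, A)            # (A^T)^T = A
```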

Inverse Matrix

If the determinant of a square $n \times n$ matrix $A$ over a field is not equal to zero, i.e. $\det A \neq 0$, then there exists the inverse matrix $A^{-1}$, for which

$$A \cdot A^{-1} = A^{-1} \cdot A = E_n$$

holds, where $E_n$ is the identity matrix. Matrices that possess an inverse matrix are referred to as invertible or regular matrices. Conversely, non-invertible matrices are called singular matrices.
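
As a sketch (NumPy assumed), an inverse exists exactly when the determinant is nonzero, and it satisfies $A A^{-1} = A^{-1} A = E$:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

print(np.linalg.det(A))    # -2.0, nonzero, so A is invertible
A_inv = np.linalg.inv(A)

E = np.eye(2)
assert np.allclose(A @ A_inv, E)
assert np.allclose(A_inv @ A, E)

# For a singular matrix (determinant 0), np.linalg.inv raises a LinAlgError
```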

Vector-vector products

The matrix product of two column vectors $v$ and $w$ is not defined, since the number of columns of $v$ is generally not equal to the number of rows of $w$. However, the two products $v^T w$ and $v w^T$ do exist.

The first product, $v^T w$, is a $1 \times 1$ matrix that is interpreted as a number; it is called the standard scalar product of $v$ and $w$ and is denoted by $\langle v, w \rangle$ or $v \cdot w$. Geometrically, in a Cartesian coordinate system this scalar product corresponds to the product

$$\langle v, w \rangle = |v| \cdot |w| \cdot \cos \angle(v, w)$$

of the magnitudes of the two vectors and the cosine of the angle enclosed by them. For example,

$$\begin{pmatrix} 1 & 2 & 3 \end{pmatrix} \begin{pmatrix} -7 \\ 8 \\ 9 \end{pmatrix} = 1 \cdot (-7) + 2 \cdot 8 + 3 \cdot 9 = 36.$$

The second product, $v w^T$, is an $n \times n$ matrix and is called the dyadic or tensor product of $v$ and $w$ (written $v \otimes w$). Its columns are scalar multiples of $v$, its rows scalar multiples of $w^T$. For example,

$$\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} \begin{pmatrix} -7 & 8 & 9 \end{pmatrix} = \begin{pmatrix} -7 & 8 & 9 \\ -14 & 16 & 18 \\ -21 & 24 & 27 \end{pmatrix}.$$
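
Both vector-vector products can be formed in NumPy (an illustrative sketch): treating $v$ and $w$ as column matrices, $v^T w$ gives the scalar product and $v w^T$ the dyadic product.

```python
import numpy as np

v = np.array([[1], [2], [3]])     # 3x1 column matrices
w = np.array([[-7], [8], [9]])

# v^T w: a 1x1 matrix, interpreted as the scalar product <v, w>
print(v.T @ w)    # [[36]]

# v w^T: a 3x3 matrix, the dyadic (tensor) product of v and w
print(v @ w.T)    # rows are multiples of w^T, columns are multiples of v
```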

Vector spaces of matrices

The set of $m \times n$ matrices over a commutative ring with 1 forms a module with matrix addition and scalar multiplication. The trace of the matrix product

$$\langle A, B \rangle = \operatorname{tr}(A^T B)$$

is then, in the special case $K = \mathbb{R}$, a real scalar product. With it, the matrix space becomes a Euclidean vector space. In this space, the symmetric matrices and the skew-symmetric matrices are orthogonal to each other: if $A$ is a symmetric and $B$ a skew-symmetric matrix, then $\operatorname{tr}(A^T B) = 0$.

In the special case $K = \mathbb{C}$, the trace of the matrix product

$$\langle A, B \rangle = \operatorname{tr}(A^H B)$$

is a complex scalar product, and the matrix space becomes a unitary vector space. This scalar product is also called the Frobenius scalar product. The norm induced by the Frobenius scalar product is the Frobenius norm, and with it the matrix space becomes a Banach space.
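
A numerical sketch (NumPy assumed, real case) of the Frobenius scalar product and the norm it induces:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Frobenius scalar product <A, B> = tr(A^T B) = sum of entrywise products
inner = np.trace(A.T @ B)
assert np.isclose(inner, np.sum(A * B))

# The induced Frobenius norm
assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(np.trace(A.T @ A)))
```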

Applications

Connection with linear maps

The special feature of matrices over a ring $R$ is their connection to linear maps. For each matrix $A \in R^{m \times n}$, a linear map with domain $R^n$ (the set of column vectors) and codomain $R^m$ can be defined by mapping each column vector $x$ to $A \cdot x$. Conversely, to each linear map $f \colon R^n \to R^m$ there corresponds in this way exactly one matrix $A$; here the columns of $A$ are the images of the standard basis vectors of $R^n$ under $f$. This connection between linear maps and matrices is also called the (canonical) isomorphism

$$\operatorname{Hom}_R(R^n, R^m) \cong R^{m \times n}.$$

For given $n$ and $m$, it represents a bijection between the set of matrices and the set of linear maps. Under this correspondence, the matrix product corresponds to the composition (sequential execution) of linear maps. Because bracketing plays no role in the composition of three linear maps, the same holds for matrix multiplication, which is therefore associative.
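
The correspondence can be illustrated numerically (NumPy, as an assumption of this sketch): the columns of a matrix are the images of the standard basis vectors, and composing the maps multiplies the matrices.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])       # represents a linear map R^2 -> R^3

# Columns of A are the images of the standard basis vectors e_1, e_2
e1 = np.array([1, 0])
e2 = np.array([0, 1])
assert np.array_equal(A @ e1, A[:, 0])
assert np.array_equal(A @ e2, A[:, 1])

# Composition of maps corresponds to the matrix product
B = np.array([[1, 0, 2],
              [0, 1, 3]])    # represents a linear map R^3 -> R^2
x = np.array([7, -1])
assert np.array_equal(B @ (A @ x), (B @ A) @ x)
```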

If $K$ is even a field, one can consider arbitrary finite-dimensional vector spaces $V$ and $W$ (of dimension $n$ and $m$, respectively) instead of the column vector spaces. (If $K$ is a commutative ring with 1, one can analogously consider free $K$-modules.) After choosing bases $v_1, \dots, v_n$ of $V$ and $w_1, \dots, w_m$ of $W$, these are isomorphic to the coordinate spaces $K^n$ and $K^m$, because any vector $u \in V$ has a unique decomposition into basis vectors

$$u = \sum_{j=1}^{n} \alpha_j v_j,$$

and the field elements $\alpha_j$ occurring in it form the coordinate vector

$$u_v = (\alpha_1, \dots, \alpha_n)^T \in K^n.$$

However, the coordinate vector depends on the basis $v$ used, which therefore also appears in the notation $u_v$.

The situation in the vector space $W$ is analogous. If a linear map $f \colon V \to W$ is given, the images of the basis vectors of $V$ can be decomposed uniquely into the basis vectors of $W$ in the form

$$f(v_j) = \sum_{i=1}^{m} a_{ij} w_i$$

with coordinate vector

$$f(v_j)_w = (a_{1j}, \dots, a_{mj})^T.$$

The map is then completely determined by the so-called transformation matrix

$$A_{f,v,w} = (a_{ij}) \in K^{m \times n},$$

because for the image of the vector $u$ mentioned above,

$$f(u)_w = A_{f,v,w} \cdot u_v$$

holds ("coordinate vector = matrix times coordinate vector"). (The matrix $A_{f,v,w}$ depends on the bases used; in the multiplication $A_{f,v,w} \cdot u_v$, the basis $v$ standing to the left and right of the multiplication dot is "cancelled", and the "outer" basis $w$ remains.)

The composition of two linear maps $f \colon V \to W$ and $g \colon W \to X$ (with bases $v$, $w$ and $x$, respectively) corresponds to matrix multiplication, i.e.

$$A_{g \circ f, v, x} = A_{g, w, x} \cdot A_{f, v, w}$$

(here, too, the inner basis $w$ is "cancelled").

Thus the set of linear maps from $V$ to $W$ is again isomorphic to $K^{m \times n}$. The isomorphism, however, depends on the chosen bases $v$ and $w$ and is therefore not canonical: when a different basis $v'$ for $V$ or $w'$ for $W$ is chosen, the same linear map is in fact associated with a different matrix, which arises from the old one by multiplication from the right or left with an invertible $n \times n$ or $m \times m$ matrix that depends only on the bases involved (a so-called change of basis matrix). This follows by applying the multiplication rule from the previous paragraph twice, namely

$$A_{f,v',w'} = A_{\mathrm{id},w,w'} \cdot A_{f,v,w} \cdot A_{\mathrm{id},v',v}$$

("matrix = change of basis matrix times matrix times change of basis matrix"). Here the identity maps $\mathrm{id}_V$ and $\mathrm{id}_W$ map each vector of $V$ and $W$, respectively, to itself.

If a property of matrices is unaffected by such a change of basis, it makes sense to ascribe this property, independently of the basis, to the corresponding linear map.

Terms frequently encountered in connection with matrices are the rank and the determinant of a matrix. The rank is (if $K$ is a field) basis independent in the sense described, and one can thus speak of the rank of a linear map as well. The determinant is defined only for square matrices, corresponding to the case $V = W$; it remains unchanged if the same change of basis is performed in the domain and codomain, where the two change of basis matrices are inverse to each other:

$$\det(T^{-1} A T) = \det(T^{-1}) \det(A) \det(T) = \det A.$$

In this sense, therefore, the determinant is basis independent.
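
A quick numerical check (NumPy assumed, purely illustrative) that the determinant and the rank are unchanged under such a change of basis:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
T = np.array([[1.0, 1.0],
              [1.0, 2.0]])    # an invertible change of basis matrix

A_new = np.linalg.inv(T) @ A @ T
assert np.isclose(np.linalg.det(A_new), np.linalg.det(A))

# The rank is likewise invariant under change of basis
assert np.linalg.matrix_rank(A_new) == np.linalg.matrix_rank(A)
```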

Transforming matrix equations

Especially in multivariate analysis, proofs and derivations are often carried out in matrix calculus.

Equations are in principle manipulated like algebraic equations, but the non-commutativity of matrix multiplication and the existence of zero divisors must be taken into account.

Example: systems of linear equations as a simple transformation

We seek the solution vector $x$ of a system of linear equations

$$A \cdot x = b$$

with coefficient matrix $A$. If the inverse matrix $A^{-1}$ exists, one can multiply by it from the left:

$$A^{-1} \cdot A \cdot x = A^{-1} \cdot b$$

and one obtains the solution

$$x = A^{-1} \cdot b.$$
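
As a sketch in NumPy (an assumption of this illustration): in practice one rarely forms the inverse explicitly; `np.linalg.solve` follows the same reasoning but solves the system directly.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])    # coefficient matrix
b = np.array([3.0, 5.0])

# x = A^{-1} b, via the explicit inverse ...
x_inv = np.linalg.inv(A) @ b

# ... or, numerically preferable, by solving A x = b directly
x = np.linalg.solve(A, b)

assert np.allclose(x, x_inv)
assert np.allclose(A @ x, b)
```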

Special matrices

Properties of endomorphisms

The following properties of square matrices correspond to properties of the endomorphisms they represent.

Properties of bilinear forms

The following lists properties of matrices that correspond to properties of the associated bilinear form

$$(x, y) \mapsto x^T A y.$$

Nevertheless, these properties can also have an independent meaning for the linear maps represented.

Other constructions

If a matrix contains complex numbers, the conjugate matrix is obtained by replacing its components with their complex conjugates. The adjoint matrix (also Hermitian conjugate matrix) of a matrix $A$ is denoted by $A^H$ and is the transposed matrix in which, in addition, all elements are complex conjugated.

The complementary matrix of a square matrix $A$ is composed of its subdeterminants, where a subdeterminant is called a minor. To determine the subdeterminant $\det A_{ij}$, the $i$-th row and $j$-th column of $A$ are deleted, and the determinant of the resulting matrix is then calculated. The complementary matrix then has the entries $(-1)^{i+j} \det A_{ji}$; this matrix is sometimes also referred to as the matrix of cofactors.
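
A minimal sketch (NumPy assumed; the helper name `adjugate` is chosen here purely for illustration) that builds the complementary matrix from signed subdeterminants and checks the classical identity $A \cdot \operatorname{adj}(A) = \det(A) \cdot E$:

```python
import numpy as np

def adjugate(A):
    """Complementary (adjugate) matrix built from signed subdeterminants (minors)."""
    n = A.shape[0]
    adj = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # delete the j-th row and i-th column, then take the determinant (minor)
            minor = np.delete(np.delete(A, j, axis=0), i, axis=1)
            adj[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

A = np.array([[1.0, 2.0, 0.0],
              [3.0, 4.0, 1.0],
              [0.0, 1.0, 2.0]])
assert np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(3))
```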

A transition or stochastic matrix is a matrix whose entries all lie between 0 and 1 and whose row sums (or column sums) equal 1. In probability theory, they serve to characterize discrete-time Markov chains with finite state space. A special case of these are the doubly stochastic matrices.
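
A brief sketch (NumPy assumed) of a row-stochastic transition matrix and one step of the Markov chain it describes:

```python
import numpy as np

# Row-stochastic transition matrix: entries in [0, 1], each row sums to 1
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
assert np.allclose(P.sum(axis=1), 1.0)

# One step of the Markov chain: the distribution row vector is multiplied by P
pi0 = np.array([1.0, 0.0])    # start in state 0
pi1 = pi0 @ P
print(pi1)                     # [0.9 0.1]
```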

Infinite-dimensional spaces

For infinite-dimensional vector spaces (even over skew fields), every linear map is uniquely determined by the images of the elements of a basis, and these images can be chosen arbitrarily and extended to a linear map on the whole space. If $B$ is now a basis of $V$, then the image of each basis vector can be written uniquely as a (finite) linear combination of basis vectors of the target space; that is, there exist unique coefficients, of which only finitely many are nonzero. Correspondingly, any linear map can be regarded as a possibly infinite matrix in which, however, each column (the columns being indexed by the elements of $B$, with the column belonging to a basis vector consisting of the coordinates of its image, indexed by the basis elements of the target space) has only finitely many nonzero entries, and vice versa. The correspondingly defined matrix multiplication again corresponds to the composition of linear maps.

In functional analysis one considers topological vector spaces, i.e. vector spaces on which one can speak of convergence and accordingly also form infinite sums. On such spaces, matrices with infinitely many nonzero entries in a column can under certain circumstances likewise be understood as linear maps, although other notions of basis underlie this.

Hilbert spaces form a special case. Let $H$ and $H'$ be Hilbert spaces with orthonormal bases $(e_i)$ and $(f_j)$, respectively. Then one obtains a matrix representation of a linear operator $T \colon H \to H'$ (for only densely defined operators this also works, provided the domain has an orthonormal basis, which is always the case in the countably infinite-dimensional setting) by defining the matrix elements $T_{ij} := \langle T e_i, f_j \rangle$, where $\langle \cdot, \cdot \rangle$ is the scalar product of the Hilbert space under consideration (in the complex case, semilinear in the first argument).

This so-called Hilbert-Schmidt scalar product can, in the infinite-dimensional case, be defined only for a certain subclass of linear operators, the so-called Hilbert-Schmidt operators, for which the series defining this scalar product always converges.
