Matrix multiplication

Matrix multiplication is, in mathematics, a multiplicative combination of matrices. In order to multiply two matrices with each other, the number of columns of the first matrix must match the number of rows of the second matrix. The result of a matrix multiplication is called the matrix product or product matrix. The matrix product is again a matrix, whose entries are determined by componentwise multiplication and summation of the entries of the corresponding row of the first matrix with the corresponding column of the second matrix.

Matrix multiplication is associative and, together with matrix addition, distributive. However, it is not commutative, i.e., the order of the matrices may not be interchanged when forming the product. The set of square matrices with elements from a ring forms, together with matrix addition and matrix multiplication, itself a ring, the matrix ring. Further, the set of regular matrices over a unitary ring forms, with matrix multiplication, the general linear group. Matrices that can be converted into one another by certain multiplications with regular matrices form equivalence classes.

The standard algorithm for multiplying two square matrices has a cubic running time. Although the asymptotic effort can be reduced with the help of special algorithms, the determination of optimal upper and lower complexity bounds for matrix multiplication is still the subject of current research.

Matrix multiplication is ubiquitous in linear algebra. For example, the factorization of a matrix as a product of matrices with special properties is used in the numerical solution of linear systems or eigenvalue problems. Furthermore, the transformation matrix of the composition of two linear maps is just the matrix product of the transformation matrices of these maps. Applications of matrix multiplication can be found, among others, in computer science, physics, and economics.

Matrix multiplication was first described by the French mathematician Jacques Philippe Marie Binet in 1812.


Definition

Matrix multiplication is a binary operation on the set of matrices over a ring R (often the field of real numbers), that is, a mapping

· : R^(l×m) × R^(m×n) → R^(l×n), (A, B) ↦ A · B = C,

which assigns to two matrices A and B a further matrix C. Matrix multiplication is defined only in the case that the number of columns of the matrix A equals the number of rows of the matrix B. The number of rows of the result matrix then equals that of the matrix A, and its number of columns that of the matrix B. Each entry of the matrix product is calculated via

c_ij = a_i1 · b_1j + a_i2 · b_2j + … + a_im · b_mj,

that is, by componentwise multiplication of the entries of the i-th row of A with the j-th column of B and summation over all these products. In the notation of a matrix multiplication, the multiplication dot is frequently omitted and one briefly writes AB instead of A · B. If the order of the factors is to be stressed, one says "A is multiplied from the left by B" for the product B · A and "A is multiplied from the right by B" for the product A · B.
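The defining formula can be sketched directly in code. The following is a minimal, purely illustrative pure-Python implementation (the function name `matmul` and the list-of-rows representation are assumptions, not part of the article):

```python
def matmul(A, B):
    """Multiply an l x m matrix A by an m x n matrix B, given as lists of rows."""
    l, m, n = len(A), len(B), len(B[0])
    assert all(len(row) == m for row in A), "columns of A must match rows of B"
    # c_ij = sum over k of a_ik * b_kj, per the defining formula
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(n)]
            for i in range(l)]

assert matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```

Note that a 1×3 matrix times a 3×1 matrix yields a 1×1 matrix, e.g. `matmul([[1, 0, 2]], [[1], [2], [3]])` gives `[[7]]`, matching the row-times-column rule.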

Example

Consider the two real matrices

Since the matrix A has as many columns as the matrix B has rows, the matrix multiplication A · B can be performed. Since A has two rows and B has two columns, the matrix product will also have two rows and two columns. To calculate the first entry of the result matrix, the products of the corresponding entries of the first row of A and the first column of B are summed (the asterisks represent entries not yet calculated):

For the next entry of the result matrix, in the first row and second column, the first row of A and the second column of B are used correspondingly:

This computational scheme is now continued for the entry in the second row and first column:

It is repeated for the last entry, in the second row and second column:

The result is the matrix product A · B. A visual aid for calculating the matrix product is offered by the Falk scheme.
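The entrywise scheme described above can be carried out, for instance, for two 2×2 matrices (the concrete matrices here are chosen purely for illustration):

```python
A = [[3, 2],
     [1, 0]]
B = [[1, 2],
     [0, 1]]
# entry (1,1): first row of A times first column of B
c11 = A[0][0]*B[0][0] + A[0][1]*B[1][0]   # 3*1 + 2*0 = 3
# entry (1,2): first row of A times second column of B
c12 = A[0][0]*B[0][1] + A[0][1]*B[1][1]   # 3*2 + 2*1 = 8
# entry (2,1): second row of A times first column of B
c21 = A[1][0]*B[0][0] + A[1][1]*B[1][0]   # 1*1 + 0*0 = 1
# entry (2,2): second row of A times second column of B
c22 = A[1][0]*B[0][1] + A[1][1]*B[1][1]   # 1*2 + 0*1 = 2
C = [[c11, c12], [c21, c22]]              # the matrix product A · B
```

Each entry is thus a row of A "dotted" with a column of B, exactly as in the Falk scheme.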

Special cases

Row vector times a column vector

If the first matrix consists of only a single row and the second matrix of only a single column, the matrix product results in a 1×1 matrix. If one interprets a single-row matrix as a row vector and a single-column matrix as a column vector, one obtains, in the case of real vectors, the standard scalar product

x^T · y = x_1 · y_1 + … + x_n · y_n

of two vectors, where x^T represents the transposed vector; both vectors must have the same length, and the result is then a real number. Each entry of a matrix product can thus be regarded as the scalar product of a row vector of the first matrix with a column vector of the second matrix.

Column vector times a row vector

If, conversely, the first matrix consists of only one column of length m and the second matrix of only one row of length n, the matrix product results in an m×n matrix. If again a single-column matrix is interpreted as a column vector and a single-row matrix as a row vector, the resulting product

x · y^T

of the vectors is referred to as the dyadic product. Each entry of the resulting matrix is the product of an element of the first vector with an element of the second vector. A matrix product A · B can be written as the sum of the dyadic products of the column vectors of A with the respective row vectors of B.

Matrix times vector

An important special case of matrix multiplication arises when the second matrix consists of only one column. The result of the matrix multiplication is then also a single-column matrix. If again a single-column matrix is interpreted as a column vector, one obtains the matrix-vector product

A · x = b,

where A ∈ R^(m×n), x ∈ R^n, and b ∈ R^m. The matrix-vector product is used, for example, in the matrix notation of systems of linear equations.
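As a small illustrative sketch (function and variable names are assumptions), the matrix-vector product expresses a linear system in matrix notation:

```python
def matvec(A, x):
    # b_i = sum over j of a_ij * x_j
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

# The system 2x + y = 4, x + 3y = 7 written as A · x = b:
A = [[2, 1],
     [1, 3]]
b = matvec(A, [1, 2])   # checking the candidate solution x = (1, 2)
```

Since `b` comes out as `[4, 7]`, the vector (1, 2) indeed solves the system.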

Vector times matrix

If, conversely, the first matrix consists of only one row, the vector-matrix product

x^T · A

of a row vector x^T and a matrix A again gives a row vector.

Square of a matrix

Multiplying a square matrix by itself again yields a matrix of the same size, which is referred to as the square of the matrix, that is,

A · A = A².

Correspondingly, A^p denotes the p-th matrix power, that is, the p-fold product of a matrix with itself. Matrix powers are used, for example, to define the matrix exponential and the matrix logarithm. Conversely, a square matrix B for which

B · B = A

holds is called a square root of the matrix A. A matrix can possess several, even infinitely many, square roots. Similarly, a matrix whose p-th power yields the matrix A is referred to as a p-th root of this matrix.
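Matrix powers and square roots can be sketched with repeated multiplication (pure Python, illustrative names; the example matrices are assumptions):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matpow(A, p):
    """p-fold product of the square matrix A with itself (p >= 1)."""
    C = A
    for _ in range(p - 1):
        C = matmul(C, A)
    return C

A = [[1, 1],
     [0, 1]]
cube = matpow(A, 3)          # shear matrices: the offsets add up, giving [[1,3],[0,1]]

B = [[0, 1],
     [1, 0]]
square = matmul(B, B)        # B · B is the identity, so B is a square root of I
```

The second example shows that even the identity matrix has non-trivial square roots.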

Block matrices

If the two matrices A and B have a block structure, where the block widths of the first matrix must match the block heights of the second matrix, then the matrix product can also be noted blockwise. The result matrix then has the block heights of the first and the block widths of the second matrix. In the case of two matrices, each with two times two blocks, one obtains for example

[A11 A12; A21 A22] · [B11 B12; B21 B22] = [A11·B11 + A12·B21, A11·B12 + A12·B22; A21·B11 + A22·B21, A21·B12 + A22·B22],

where the result matrix also has two times two blocks.

Properties

Associativity

Matrix multiplication is associative, that is, for matrices A ∈ R^(k×l), B ∈ R^(l×m), and C ∈ R^(m×n),

(A · B) · C = A · (B · C).

When multiplying several matrices it therefore does not matter in which order the partial products are formed, as long as the overall order is not changed. For the entry at position (i, j) of the resulting matrix product, namely,

Σ_v ( Σ_u a_iu · b_uv ) · c_vj = Σ_u a_iu · ( Σ_v b_uv · c_vj ).

Matrix multiplication is also compatible with multiplication by scalars λ, that is,

λ · (A · B) = (λ · A) · B = A · (λ · B).

Distributivity

If, in addition to matrix multiplication, one considers the componentwise matrix addition of two matrices, then the distributive laws are satisfied. That is, for all matrices A ∈ R^(l×m) and B, C ∈ R^(m×n),

A · (B + C) = A · B + A · C,

and similarly for all matrices A, B ∈ R^(l×m) and C ∈ R^(m×n),

(A + B) · C = A · C + B · C.

Since the commutative law does not hold for matrix multiplication, the second law does not follow from the first; these are two different laws here. The distributive laws follow directly from the distributivity of addition over multiplication in the ring R via

Σ_j a_ij · (b_jk + c_jk) = Σ_j a_ij · b_jk + Σ_j a_ij · c_jk

for the first distributive law, and through an analogous transformation for the second distributive law.

Noncommutativity

Matrix multiplication, however, is not commutative, that is, for A ∈ R^(l×m) and B ∈ R^(m×l), in general

A · B ≠ B · A.

For the two matrix products, namely, A · B ∈ R^(l×l) and B · A ∈ R^(m×m), so for l ≠ m they cannot match even in their dimensions. But even if A and B are square, the two matrix products need not be equal, as a counterexample with 2×2 matrices shows. The noncommutativity of matrix multiplication therefore holds even if the multiplication in the ring R is commutative, as is the case, for example, with numbers. For special matrices, matrix multiplication can nevertheless be commutative; see the following sections.
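A standard counterexample with two square matrices can be checked directly (the concrete matrices are chosen for illustration; `matmul` is an assumed helper):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[0, 1],
     [0, 0]]
B = [[0, 0],
     [1, 0]]
AB = matmul(A, B)   # [[1, 0], [0, 0]]
BA = matmul(B, A)   # [[0, 0], [0, 1]]
# Both products are defined and square, yet A · B ≠ B · A
```

Note that here even A · B · A · B = 0 while neither factor is zero, previewing the zero-divisor phenomenon discussed below.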

Further calculation rules

For the transpose of a matrix product,

(A · B)^T = B^T · A^T.

The order of multiplication is thus reversed by transposition. For the adjoint of the product of complex matrices, correspondingly,

(A · B)^H = B^H · A^H.

The trace of the product of two matrices A ∈ R^(l×m) and B ∈ R^(m×l), on the other hand, is independent of the order:

tr(A · B) = tr(B · A).

By the determinant product theorem, for the determinant of the product of two square matrices over a commutative ring,

det(A · B) = det(A) · det(B).

The determinant of the product of two not necessarily square matrices can be calculated with the Binet–Cauchy theorem.
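The transposition and trace rules can be verified on a small example (illustrative pure-Python helpers, not part of the article):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 1]]
# (A · B)^T = B^T · A^T: transposition reverses the order of the factors
lhs_t = transpose(matmul(A, B))
rhs_t = matmul(transpose(B), transpose(A))
# tr(A · B) = tr(B · A): the trace is independent of the order
t1 = trace(matmul(A, B))
t2 = trace(matmul(B, A))
```

Here `lhs_t == rhs_t` and `t1 == t2`, even though A · B and B · A themselves differ.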

Algebraic Structures

Ring of square matrices

The set of square matrices of fixed size, together with matrix addition and matrix multiplication, forms a noncommutative ring, the matrix ring. The zero element of this ring is the zero matrix. If R is a unitary ring, then the corresponding matrix ring is also unitary, with the identity matrix I as identity element, for which, for all matrices A,

A · I = I · A = A

applies. In the matrix ring, the zero matrix then acts as an absorbing element, that is, for all matrices A,

A · 0 = 0 · A = 0.

However, the ring of square matrices is not free of zero divisors: from A · B = 0 it does not necessarily follow that A = 0 or B = 0. Correspondingly, matrix equations may not be cancelled: from A · B = A · C it does not necessarily follow that B = C. The set of square matrices over a field forms, with matrix addition, scalar multiplication, and matrix multiplication, an associative algebra.

Group of regular matrices

The set of regular matrices over a unitary ring forms, with matrix multiplication, the general linear group. The inverse matrix A^(-1) of a matrix A is then uniquely defined by

A · A^(-1) = A^(-1) · A = I.

For the inverse of the product of two regular matrices,

(A · B)^(-1) = B^(-1) · A^(-1).

By inversion the order of multiplication is thus also reversed.
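The reversal rule for inverses can be checked with two integer matrices whose inverses are known (the matrices are illustrative; `matmul` is an assumed helper):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

I = [[1, 0], [0, 1]]
A, A_inv = [[1, 1], [0, 1]], [[1, -1], [0, 1]]   # shear and its inverse
B, B_inv = [[1, 0], [1, 1]], [[1, 0], [-1, 1]]   # shear and its inverse
# Both pairs are indeed mutually inverse:
ok_A = matmul(A, A_inv) == I
ok_B = matmul(B, B_inv) == I
# (A · B)^(-1) = B^(-1) · A^(-1): multiplying A·B by B_inv·A_inv gives I
product_rule = matmul(matmul(A, B), matmul(B_inv, A_inv)) == I
```

Multiplying A · B by A_inv · B_inv in the unreversed order would not give the identity, which is the point of the rule.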

Groups of the orthogonal and unitary matrices

A real square matrix A is called orthogonal if

A^T · A = I

applies. The orthogonal matrices form, with matrix multiplication, the orthogonal group, a subgroup of the general linear group. Correspondingly, a complex square matrix A is called unitary if

A^H · A = I

applies. The unitary matrices form, with matrix multiplication, the unitary group, a subgroup of the general linear group.

Equivalence classes of matrices

With the help of matrix multiplication, equivalence relations between matrices over a field are defined. Important equivalence relations are:

  • Equivalence: Two matrices A and B are called equivalent if there are two regular matrices S and T such that B = S^(-1) · A · T.
  • Similarity: Two square matrices A and B are called similar if there is a regular matrix S such that B = S^(-1) · A · S.
  • Congruence: Two square matrices A and B are called congruent if there is a regular matrix S such that B = S^T · A · S.

Matrices that can be converted into one another by such multiplications with regular matrices thus form equivalence classes.

Algorithms

Standard algorithm

In pseudocode, matrix multiplication can be implemented as follows:

function matmult(A, B, l, m, n)
    C = zeroes(l, n)                      // initialize result matrix C with zeros
    for i = 1 to l                        // loop over the rows of C
        for k = 1 to n                    // loop over the columns of C
            for j = 1 to m                // loop over the columns of A / rows of B
                C(i,k) = C(i,k) + A(i,j) * B(j,k)   // sum of products
            end
        end
    end
    return C

The order of the three for-loops can be interchanged arbitrarily. Since the three loops are independent of one another, the number of operations required is of the order l · m · n. The running time of the algorithm is therefore cubic for square matrices, that is, of the order O(n³).

In matrix chain multiplication, that is, the multiplication of three or more non-square matrices, the total number of arithmetic operations can be minimized by a judicious choice of the order of multiplication.
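The standard algorithm above translates directly to Python (a one-to-one illustrative sketch of the pseudocode, with 0-based indices):

```python
def matmult(A, B, l, m, n):
    C = [[0] * n for _ in range(l)]   # initialize result matrix C with zeros
    for i in range(l):                # loop over the rows of C
        for k in range(n):            # loop over the columns of C
            for j in range(m):        # loop over the columns of A / rows of B
                C[i][k] += A[i][j] * B[j][k]   # sum of products
    return C

C = matmult([[1, 2], [3, 4]], [[5, 6], [7, 8]], 2, 2, 2)
```

The three nested loops make the l · m · n operation count immediately visible.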

Algorithms with better complexity

Asymptotically, two square matrices can be multiplied more efficiently with the Strassen algorithm. Here, the number of multiplications needed to multiply two 2×2 matrices is reduced from eight to seven by skillful combination, at the expense of additional additions. Applying this method recursively results in a complexity of the order

O(n^2.807).

However, due to the constants hidden in the Landau notation, the Strassen algorithm is worthwhile only for very large matrices. The algorithm with the currently best complexity is an improvement of the Coppersmith–Winograd algorithm, with a running time of the approximate order

O(n^2.37).

For practical use, however, this algorithm is not suitable. A lower bound for the complexity of matrix multiplication is

Ω(n²),

since each of the n² elements of the output matrix must be generated. The determination of optimal upper and lower complexity bounds for matrix multiplication is the subject of current research.

Programming

The matrix product is integrated into programming systems in different ways:

  • In the numerical software package MATLAB, the matrix product is represented by the simple multiplication operator *.
  • In the Fortran programming language, a dedicated routine matmul is available for matrix multiplication, while * instead realizes the componentwise Hadamard product.
  • In the statistical software R, matrix multiplication is realized by %*%, while the Hadamard product is likewise calculated by *.

Use

Factorizations

In a certain sense the inverse of matrix multiplication is the factorization of a given matrix C as a product of two matrices A and B, that is, the determination of a representation of the form

C = A · B.

Such a factorization is not unique, so additional requirements are placed on the matrices A and B, such as orthogonality, symmetry, or a particular sparsity structure. Important decompositions of real or complex matrices of this type are:

  • The LU decomposition of a square matrix into a lower and an upper triangular matrix
  • The Cholesky decomposition, a special LU decomposition of a symmetric positive definite matrix
  • The ILU decomposition, a kind of incomplete LU decomposition especially for sparse matrices
  • The QR decomposition of a matrix into an orthogonal matrix and an upper triangular matrix
  • The Schur decomposition of a square matrix into three matrices: a unitary matrix, an upper triangular matrix, and the inverse of the first matrix
  • The singular value decomposition of a matrix into three matrices: a unitary matrix, a diagonal matrix comprising the singular values, and the adjoint of another unitary matrix

Such decompositions of matrices are frequently used in numerical linear algebra, for instance to solve systems of linear equations or eigenvalue problems. Thus, for example, the row and column operations in Gaussian elimination can be specified as a product of elementary matrices.

Composition of linear maps

If, more generally, U and V are two finite-dimensional vector spaces over the same field, then every linear map f: U → V can be represented, after a choice of basis in each of the two vector spaces, by its transformation matrix A. The image of a vector x under the map f in the respective bases can then be determined by the matrix-vector product

y = A · x.

In geometry, for example, every rotation about the origin and every reflection in a plane through the origin can be carried out in this way by such a matrix-vector product. If W is now a further vector space and g: V → W a further linear map with transformation matrix B, then for the transformation matrix of the composition of these two maps,

C = B · A.

The transformation matrix of a composition of two linear maps is therefore the matrix product of the two associated transformation matrices. In this way, for example, every rotation-reflection can be represented as the product of a rotation matrix and a reflection matrix. Alternatively, a linear map can be carried out by vector-matrix multiplication of a row vector with the transposed transformation matrix. The composition of maps then corresponds to a matrix multiplication from the right instead of from the left.
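For plane rotations this composition rule is easy to check numerically: rotating by angle a after rotating by angle b equals a single rotation by a + b (pure-Python sketch; the helper names are assumptions):

```python
import math

def rotation(theta):
    """Transformation matrix of a rotation about the origin by angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s],
            [s,  c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

a, b = 0.3, 0.5
composed = matmul(rotation(a), rotation(b))   # matrix product of the two maps
direct = rotation(a + b)                      # transformation matrix of the composition
close = all(math.isclose(composed[i][j], direct[i][j])
            for i in range(2) for j in range(2))
```

Rotations are also an example of matrices whose products do commute, in contrast to the general case.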

Applications

Applications of matrix multiplication can be found, among other things:

  • In calculus, in the composition of differentiable functions of several variables by means of the multidimensional chain rule
  • In computer graphics, in carrying out coordinate transformations in a graphics pipeline
  • In optics, in the calculation of the path of light rays through optical components by means of matrix optics (ray transfer matrix analysis)
  • In economics, in the input-output analysis of production as well as in intra-company material flow analysis
  • In robotics, in the description of kinematic chains using the Denavit–Hartenberg transformation
  • In electrical engineering, in the two-port network theory of electrical networks
  • In quantum mechanics, in the context of matrix mechanics, also for "infinitely large" matrices

Related Products

In addition to the matrix product still exist a number of other products of matrices:

  • The Hadamard product of two matrices results in a matrix of the same size whose entries are determined simply by componentwise multiplication of the entries of the input matrices. Compared to the matrix product, however, it is far less significant.
  • The Kronecker product of two matrices results in a large matrix that is obtained by considering all possible products of entries of the two input matrices.
  • The Frobenius inner product of two real or complex matrices yields a number that is calculated by componentwise multiplication of the entries of the input matrices and subsequent summation over all these products. In the complex case, one of the two entries is complex-conjugated in each product.
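The three related products can be sketched side by side for two real 2×2 matrices (illustrative pure Python; the variable names are assumptions):

```python
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# Hadamard product: componentwise multiplication, same size as A and B
had = [[a * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

# Frobenius inner product: componentwise multiplication, then summation
frob = sum(a * b for ra, rb in zip(A, B) for a, b in zip(ra, rb))

# Kronecker product: all products of entries, arranged as a 4x4 block matrix
kron = [[A[i][j] * B[p][q] for j in range(2) for q in range(2)]
        for i in range(2) for p in range(2)]
```

Here `had` is [[5, 12], [21, 32]], `frob` is their sum 70, and the first row of `kron` lists a11 and a12 each times the first row of B.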

Generalizations

More generally, matrices can be considered over a semiring, whereby the most important properties of matrix multiplication, such as associativity and distributivity, are retained. Correspondingly, the square matrices over a semiring then form a matrix semiring. The zero matrix is again the zero element in the matrix semiring, and it is also absorbing if the zero element in the underlying semiring is absorbing. If the underlying semiring is unitary, then the identity matrix is again the identity element in the matrix semiring.

Important examples of semirings are distributive lattices, such as Boolean algebras. If the elements of such a lattice are taken as truth values, then matrices over the lattice describe binary relations. Matrix multiplication in this case corresponds to the composition of relations.
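Over the Boolean semiring, "addition" becomes logical or and "multiplication" becomes logical and; matrix multiplication then composes relations. A minimal illustrative sketch (the relation and helper names are assumptions):

```python
def bool_matmul(R, S):
    """Matrix product over the Boolean semiring: or as sum, and as product."""
    n = len(R)
    return [[any(R[i][j] and S[j][k] for j in range(n)) for k in range(n)]
            for i in range(n)]

# A relation on {0, 1, 2} with 0 R 1 and 1 R 2, as a Boolean matrix:
R = [[False, True,  False],
     [False, False, True ],
     [False, False, False]]

# The composition R∘R relates x to z iff some y exists with x R y and y R z,
# so here it relates exactly 0 to 2:
comp = bool_matmul(R, R)
```

This is the same triple-loop computation as the standard algorithm, just over a different semiring.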
