Linear algebra

Linear algebra (also called vector algebra) is a branch of mathematics that deals with vector spaces and linear maps between them. In particular, this includes the study of systems of linear equations and matrices.

Vector spaces and their linear maps are an important tool in many areas of mathematics. Outside of pure mathematics they find applications, among others, in the natural sciences and in economics (for example, in optimization).

Linear algebra arose from two specific requirements: on the one hand, the solving of systems of linear equations, and on the other hand, the mathematical description of geometric objects, the so-called analytic geometry (which is why some authors refer to linear algebra as linear geometry).


History

In 1750, Gabriel Cramer published the rule named after him, Cramer's rule. With it, one was for the first time in possession of a solution formula for many systems of linear equations. Cramer's rule also gave a decisive impetus to the development of the theory of determinants over the next fifty years.

The history of modern linear algebra dates back to the years 1843 and 1844. In 1843, William Rowan Hamilton (from whom the term vector originates) conceived the quaternions, an extension of the complex numbers. In 1844, Hermann Grassmann published his book Die lineale Ausdehnungslehre (the theory of linear extension). In 1857, Arthur Cayley then introduced one of the most fundamental algebraic ideas with the 2 × 2 matrices.

Systems of linear equations

A system of linear equations is a collection of equations of the type

$$a_{i1} x_1 + a_{i2} x_2 + \dots + a_{in} x_n = b_i \qquad (i = 1, \dots, m)$$

in the unknowns $x_1, \dots, x_n$.

Such systems of equations arise from many everyday questions.

The essential abstraction step of linear algebra is to regard the left-hand side as a function $f$ of the vector $x = (x_1, \dots, x_n)$ of unknowns:

$$f(x) = \begin{pmatrix} a_{11} x_1 + \dots + a_{1n} x_n \\ \vdots \\ a_{m1} x_1 + \dots + a_{mn} x_n \end{pmatrix}.$$

Then solving the system of equations becomes the task: find an $x$ such that

$$f(x) = b$$

holds, where $b = (b_1, \dots, b_m)$ collects the right-hand sides. Writing the numbers one above the other as a column is merely a formalism for dealing with more than one number at the same time.

Instead of writing out $f$, one simply writes the relevant numbers in the form of a rectangle and calls this object a matrix:

$$A = \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & & \vdots \\ a_{m1} & \dots & a_{mn} \end{pmatrix}.$$

One finds that the function $f$ has specific properties: it is a linear map. If $x$ is a solution of the system $f(x) = b$ and $y$ is a solution of the system $f(y) = c$, then $x + y$ is a solution of $f(x + y) = b + c$. This can also be written in the form $f(x + y) = f(x) + f(y)$. If, furthermore, $\lambda$ is any real number, then $f(\lambda x) = \lambda f(x)$.
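
To make this concrete, here is a minimal Python sketch (using NumPy; the coefficients are arbitrary illustrative values) that solves a small system $Ax = b$ and checks the two linearity properties stated above.

```python
import numpy as np

# Arbitrary 3x3 coefficient matrix A and right-hand side b (illustrative values)
A = np.array([[ 3.0,  2.0, -1.0],
              [ 2.0, -2.0,  4.0],
              [-1.0,  0.5, -1.0]])
b = np.array([1.0, -2.0, 0.0])

# Solve the system A x = b
x = np.linalg.solve(A, b)
print("solution x =", x)

# The left-hand side regarded as a function of the unknowns: f(x) = A x
def f(v):
    return A @ v

# Linearity: f(x + y) = f(x) + f(y) and f(lam * x) = lam * f(x)
y = np.array([1.0, 1.0, 1.0])
lam = 2.5
print(np.allclose(f(x + y), f(x) + f(y)))   # True
print(np.allclose(f(lam * x), lam * f(x)))  # True
```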

Analytic Geometry

The other origin of linear algebra lies in the mathematical description of 2- and 3-dimensional (Euclidean) space, also called "the space of intuition". With the help of a coordinate system, points in space can be described by triples of numbers. The mapping type of translation leads to the concept of the vector, which indicates the direction and magnitude of the displacement. Many physical quantities, for example forces, always have this directional aspect.

Since vectors, too, can be described by triples of numbers, the distinction between vectors and points blurs: a point $P$ corresponds to its position vector, which points from the coordinate origin to $P$.

Many of the mapping types considered in classical geometry, for example rotations about axes through the origin or reflections in planes through the origin, belong to the class of linear maps already mentioned above.

Vector spaces and linear algebra

The concept of the vector space arises as an abstraction of the above examples: a vector space is a set whose elements are called vectors, together with

  • An addition of vectors
  • A multiplication of vectors by elements of a fixed field, called scalar multiplication (outer multiplication).

This addition and scalar multiplication must satisfy a few simple properties that also hold for the vectors in the space of intuition.

One could say that vector spaces are defined precisely so that one can speak of linear maps between them.

In a way, the concept of the vector space is already too general for linear algebra. One can assign a dimension to every vector space; for example, the plane has dimension 2 and the space of intuition has dimension 3. But there are vector spaces whose dimension is not finite, and many of the familiar properties are then lost. It has, however, proven very successful to equip infinite-dimensional vector spaces with an additional topological structure; the study of topological vector spaces is the subject of functional analysis.

The remainder of this article deals with the case of finite dimensions.

Important theorems and results

Every vector space has at least one basis. Any two bases of a vector space have the same number of elements; only for this reason is it meaningful to speak of the dimension of a vector space. For sums and intersections of subspaces the dimension formula

$$\dim(U + W) = \dim U + \dim W - \dim(U \cap W)$$

holds, and for the dimensions of quotient (factor) spaces the formula $\dim(V/U) = \dim V - \dim U$.

Every linear map is uniquely determined by specifying the images of a basis. For linear maps, the homomorphism theorem and the rank-nullity theorem hold. Linear maps can be represented by matrices with respect to fixed chosen bases. The composition of linear maps then corresponds to the multiplication of their representing matrices.

A linear system of equations $Ax = b$ is solvable if and only if the rank of the coefficient matrix $A$ equals the rank of the augmented coefficient matrix $(A \mid b)$. In this case, the solution set of the system is an affine subspace of dimension $n - \operatorname{rank}(A)$. For systems of equations that are not too large, determining the rank and computing the solution space can be carried out with Gaussian elimination.
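
A minimal sketch of this rank criterion in Python with NumPy (the matrices and right-hand sides are arbitrary illustrative values):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # second row is a multiple of the first
              [1.0, 0.0, 1.0]])
b_solvable   = np.array([1.0, 2.0, 1.0])   # consistent right-hand side
b_unsolvable = np.array([1.0, 3.0, 1.0])   # inconsistent right-hand side

def is_solvable(A, b):
    # Solvable iff rank(A) equals the rank of the augmented matrix (A | b)
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
    return rank_A == rank_Ab

for b in (b_solvable, b_unsolvable):
    if is_solvable(A, b):
        dim = A.shape[1] - np.linalg.matrix_rank(A)
        print("solvable, solution set is an affine subspace of dimension", dim)
    else:
        print("not solvable: rank(A) < rank(A|b)")
```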

A linear map of a finite-dimensional vector space to itself (a so-called endomorphism) is invertible as soon as it is injective or surjective. This is in turn the case precisely when its determinant is nonzero. It follows that the eigenvalues of an endomorphism are exactly the zeros of its characteristic polynomial. Another important statement about the characteristic polynomial is the Cayley-Hamilton theorem.

An endomorphism (or a square matrix) is diagonalizable if and only if its characteristic polynomial splits into linear factors and, for each eigenvalue, the algebraic multiplicity equals the geometric multiplicity, i.e. the order of the eigenvalue as a zero of the characteristic polynomial equals the dimension of the corresponding eigenspace. Equivalent to this is the existence of a basis of the vector space consisting of eigenvectors of the linear map. Endomorphisms whose characteristic polynomial splits into linear factors are at least still triangularizable, i.e. they can be represented by a triangular matrix. A somewhat deeper result is that the representing matrix can even be brought into Jordan normal form.

In vector spaces on which an inner product (scalar product) is additionally given, a norm is defined by $\lVert v \rVert = \sqrt{\langle v, v \rangle}$. In these inner product spaces there always exist orthonormal bases, which can be constructed, for instance, by the Gram-Schmidt orthonormalization procedure. By the projection theorem, the best approximation from a subspace can be determined in these spaces by orthogonal projection.
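
A short sketch of the Gram-Schmidt procedure in Python (NumPy; the input vectors are arbitrary illustrative values and assumed to be linearly independent, with no handling of degenerate cases):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        # Subtract the projections onto the already constructed basis vectors
        for u in basis:
            w -= np.dot(w, u) * u
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

vectors = [np.array([1.0, 1.0, 0.0]),
           np.array([1.0, 0.0, 1.0]),
           np.array([0.0, 1.0, 1.0])]
Q = gram_schmidt(vectors)
# The rows of Q are orthonormal, so Q @ Q.T is the identity matrix
print(np.allclose(Q @ Q.T, np.eye(3)))  # True
```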

Regarding the diagonalizability of endomorphisms of inner product spaces, the question arises whether an orthonormal basis of eigenvectors exists. The central result on this is the spectral theorem. In the real case it states in particular: for every symmetric matrix $A$ there exists an orthogonal matrix $Q$ such that $Q^{\mathsf T} A Q$ is a diagonal matrix. Applying this result to quadratic forms yields the principal axis theorem.
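
A numerical illustration of the real spectral theorem, assuming an arbitrary example of a symmetric matrix and using NumPy's `eigh` routine for symmetric matrices:

```python
import numpy as np

# An arbitrary real symmetric matrix (illustrative values)
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh is tailored to symmetric/Hermitian matrices and returns
# real eigenvalues and an orthogonal matrix of eigenvectors
eigenvalues, Q = np.linalg.eigh(A)

print(np.allclose(Q.T @ Q, np.eye(3)))                  # Q is orthogonal
print(np.allclose(Q.T @ A @ Q, np.diag(eigenvalues)))   # Q^T A Q is diagonal
```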

Bilinear forms and sesquilinear forms can also be represented by matrices with respect to fixed chosen bases. A bilinear form is symmetric and positive definite, i.e. an inner product, if and only if its representing matrix is symmetric and positive definite. A symmetric matrix is positive definite if and only if all its eigenvalues are positive. In general, Sylvester's law of inertia holds for symmetric bilinear forms and Hermitian sesquilinear forms; it states that the numbers of positive and negative eigenvalues of the representing matrices do not depend on the choice of basis.

Vectors and Matrices

Vectors can be described by their components, which (depending on the application) are written as a column vector (here 3-dimensional)

$$a = \begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix}$$

or as a row vector (here 4-dimensional)

$$b = \begin{pmatrix} b^1 & b^2 & b^3 & b^4 \end{pmatrix}.$$

In the literature, vectors are distinguished from other quantities in various ways: lowercase letters, boldface lowercase letters, underlined lowercase letters, lowercase letters with an arrow above them, or small Fraktur letters are used. This article uses plain lowercase letters.

A matrix is indicated by a "grid" of numbers. Here is a matrix with 4 rows and 3 columns:

$$M = \begin{pmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \\ m_{41} & m_{42} & m_{43} \end{pmatrix}$$

Matrices are usually denoted by capital letters.

Individual elements of a column vector are usually given by a subscript index: the second element of the vector $a$ above is $a_2$. In row vectors, a superscript index is sometimes used, where one has to pay attention to whether a vector index or an exponent is meant: in the example $b$ above, $b^4$ denotes the fourth component.

Matrix elements are specified by two indices and are denoted by lowercase letters: $m_{2,3}$ is the element in the second row and the third column (rather than "in the third column and the second row", because this way $m_{2,3}$ is easier to read).

The generalization of these structures is the tensor: scalars are tensors of order 0, vectors are tensors of order 1, and matrices are tensors of order 2. A tensor of order $n$ can be represented by an $n$-dimensional array of numbers.
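
In array terms (here illustrated with NumPy and arbitrary shapes), this hierarchy corresponds directly to the number of array dimensions:

```python
import numpy as np

scalar  = np.float64(3.5)            # tensor of order 0: a single number
vector  = np.array([2.0, 7.0, 5.0])  # tensor of order 1: one index
matrix  = np.zeros((4, 3))           # tensor of order 2: two indices (rows, columns)
tensor3 = np.zeros((2, 3, 4))        # tensor of order 3: a "cube" of numbers

for t in (scalar, vector, matrix, tensor3):
    print(np.ndim(t))  # 0, 1, 2, 3
```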

Matrices of special form

In linear algebra it is often necessary to bring matrices into a special form by means of elementary row operations or changes of basis. The following forms are important:

  • Triangular matrix,
  • Diagonal matrix and
  • Matrices in Jordan form.

Endomorphisms and square matrices

In the representation of a linear map by a matrix there is the special case of a linear map $f$ of a finite-dimensional vector space to itself (a so-called endomorphism). One can then use the same basis for the domain and the image coordinates and obtains a square matrix $M$, so that applying the linear map corresponds to left multiplication by $M$. To express the dependence on $f$ and $M$, one uses notations such as $f(x) = Mx$. Applying this map twice in succession then corresponds to multiplication by $M^2$, and so on, and all polynomial expressions in $M$ (sums of multiples of powers of $M$) can be understood as linear maps of the vector space.

Invertibility

Analogous to the rule $x^0 = 1$ for numbers, the zeroth power of a square matrix is the diagonal matrix (identity matrix) with ones on the diagonal and zeros in all remaining entries; it corresponds to the identity map, which sends each vector to itself. Negative powers of a square matrix $M$ can only be computed if the linear map given by $M$ is invertible, i.e. if no two different vectors $v$ and $w$ are mapped to the same vector $Mv = Mw$. In other words, for an invertible matrix, $Mv = Mw$ must always imply $v = w$; the linear system $Mx = 0$ may therefore have only the solution $x = 0$. For an invertible matrix $M$ there exists an inverse matrix $M^{-1}$ with $M M^{-1} = M^{-1} M = E$.
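
A small NumPy sketch of these rules with an arbitrary example matrix: the zeroth power is the identity, an invertible matrix has a two-sided inverse, and a singular matrix has none.

```python
import numpy as np

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # invertible: its determinant is -2

print(np.allclose(np.linalg.matrix_power(M, 0), np.eye(2)))  # M^0 is the identity

M_inv = np.linalg.inv(M)
print(np.allclose(M @ M_inv, np.eye(2)))   # M  M^-1 = E
print(np.allclose(M_inv @ M, np.eye(2)))   # M^-1 M = E

# A singular matrix has no inverse: here Mx = 0 has nonzero solutions
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])
try:
    np.linalg.inv(S)
except np.linalg.LinAlgError as err:
    print("not invertible:", err)
```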

Determinants

A determinant is a special function that assigns a number to a square matrix. This number provides information about some properties of the matrix. For example, one can tell from it whether a matrix is invertible. Another important application is the computation of the characteristic polynomial and thus of the eigenvalues of the matrix.

There are closed formulas for computing determinants, such as the Laplace expansion theorem or the Leibniz formula. These formulas, however, are mainly of theoretical value, since their cost grows dramatically for larger matrices. In practice, determinants are computed most easily by bringing the matrix into upper or lower triangular form with the Gauss algorithm; the determinant is then simply the product of the main diagonal elements.
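
A minimal sketch of this practical approach (Python/NumPy, arbitrary example matrix): reduce the matrix to upper triangular form by Gaussian elimination with row swaps and take the product of the diagonal, flipping the sign once per row swap.

```python
import numpy as np

def det_by_elimination(A):
    """Determinant via reduction to upper triangular form (partial pivoting)."""
    U = A.astype(float).copy()
    n = U.shape[0]
    sign = 1.0
    for k in range(n):
        # Swap in the row with the largest pivot to avoid division by zero
        p = k + np.argmax(np.abs(U[k:, k]))
        if U[p, k] == 0.0:
            return 0.0               # no usable pivot in this column: matrix is singular
        if p != k:
            U[[k, p]] = U[[p, k]]
            sign = -sign             # each row swap flips the sign of the determinant
        # Eliminate the entries below the pivot
        U[k+1:, k:] -= np.outer(U[k+1:, k] / U[k, k], U[k, k:])
    return sign * np.prod(np.diag(U))

A = np.array([[2.0, 1.0, 3.0],
              [4.0, 1.0, 7.0],
              [2.0, 5.0, 9.0]])
print(det_by_elimination(A), np.linalg.det(A))  # the two values agree
```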

Example

The above concepts will be illustrated by an example motivated by the Fibonacci sequence.

Calculation of powers by diagonalization

The Fibonacci sequence is defined by the recursion $f_0 = 0$, $f_1 = 1$ and $f_{n+2} = f_{n+1} + f_n$, which is equivalent to

$$\begin{pmatrix} f_{n+1} \\ f_{n+2} \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} f_n \\ f_{n+1} \end{pmatrix} = A \begin{pmatrix} f_n \\ f_{n+1} \end{pmatrix}$$

and

$$\begin{pmatrix} f_0 \\ f_1 \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix},$$

and thus to the non-recursive formula

$$\begin{pmatrix} f_n \\ f_{n+1} \end{pmatrix} = A^n \begin{pmatrix} 0 \\ 1 \end{pmatrix},$$

in which the $n$-th power of the matrix $A$ occurs.
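
A quick numerical check of this non-recursive formula in Python with NumPy: the $n$-th power of $A$ applied to the start vector $(0, 1)^{\mathsf T}$ yields $(f_n, f_{n+1})^{\mathsf T}$.

```python
import numpy as np

A = np.array([[0, 1],
              [1, 1]])          # the Fibonacci matrix: (f_n, f_{n+1}) -> (f_{n+1}, f_{n+2})
start = np.array([0, 1])        # (f_0, f_1)

for n in range(8):
    f_n = (np.linalg.matrix_power(A, n) @ start)[0]
    print(f_n, end=" ")         # 0 1 1 2 3 5 8 13
```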

The behavior of such a matrix under exponentiation is not easy to see directly; on the other hand, the $n$-th power of a diagonal matrix is computed simply by raising each diagonal entry to the $n$-th power. If there exists an invertible matrix $T$ such that $T^{-1} A T$ has diagonal form, the exponentiation of $A$ can be reduced to the exponentiation of a diagonal matrix via the equation $(T^{-1} A T)^n = T^{-1} A^n T$ (the left-hand side of this equation is then the $n$-th power of a diagonal matrix). In general, diagonalizing a matrix makes its behavior (under exponentiation, but also under other operations) easier to recognize.

If one regards $A$ as the matrix of a linear map with respect to a given basis, then the transformation matrix $T$ is the change-of-basis matrix to another basis, and $T^{-1} A T$ is the matrix of the same linear map with respect to this new basis.

In the above example a transformation matrix $T$ can be found such that

$$T^{-1} A T = \begin{pmatrix} \Phi & 0 \\ 0 & 1 - \Phi \end{pmatrix}$$

is a diagonal matrix in which the golden ratio $\Phi = \tfrac{1 + \sqrt 5}{2}$ occurs. From this one finally obtains Binet's formula $f_n = \frac{\Phi^n - (1 - \Phi)^n}{\sqrt 5}$.
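
A numerical sketch of this diagonalization with NumPy: `np.linalg.eig` returns the eigenvalues $\Phi$ and $1 - \Phi$ together with a matrix $T$ of eigenvectors, and Binet's formula reproduces the Fibonacci numbers.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 1.0]])

# Columns of T are eigenvectors; the eigenvalues are (1 +- sqrt(5)) / 2
eigenvalues, T = np.linalg.eig(A)
D = np.diag(eigenvalues)
print(np.allclose(np.linalg.inv(T) @ A @ T, D))   # T^-1 A T is diagonal

phi = (1 + np.sqrt(5)) / 2                        # the golden ratio
for n in range(8):
    # Binet's formula, obtained from A^n = T D^n T^-1
    f_n = (phi**n - (1 - phi)**n) / np.sqrt(5)
    print(round(f_n), end=" ")                    # 0 1 1 2 3 5 8 13
```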

Eigenvalues

How does one get from the matrix $A$ to the number $\Phi$? From the diagonal matrix one sees immediately that

$$\begin{pmatrix} \Phi & 0 \\ 0 & 1 - \Phi \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \Phi \begin{pmatrix} 1 \\ 0 \end{pmatrix},$$

so there is a nonzero vector that is multiplied component-wise (more precisely: scaled by the factor $\Phi$) under multiplication with the diagonal matrix. Because of this property, the number $\Phi$ is called an eigenvalue of the matrix (with eigenvector $(1, 0)^{\mathsf T}$). In the case of diagonal matrices, the eigenvalues are equal to the diagonal entries.

But $\Phi$ is at the same time also an eigenvalue of the original matrix $A$ (with eigenvector $T(1, 0)^{\mathsf T}$, because $A\,T(1,0)^{\mathsf T} = T\,(T^{-1} A T)\,(1,0)^{\mathsf T} = \Phi\,T(1,0)^{\mathsf T}$); the eigenvalues thus remain unchanged under transformation of the matrix. The diagonal form of $A$ therefore results from its eigenvalues, and to find the eigenvalues of $A$ one must investigate for which numbers $\lambda$ the linear system $(A - \lambda E)x = 0$ has a nonzero solution (or, in other words, for which $\lambda$ the matrix $A - \lambda E$ is not invertible).

The numbers $\lambda$ sought are precisely those that make the determinant of the matrix $A - \lambda E$ zero. This determinant is a polynomial expression in $\lambda$ (the so-called characteristic polynomial of $A$); in the case of the 2 × 2 matrix $A$ above, this yields the quadratic equation $\lambda^2 - \lambda - 1 = 0$ with the two solutions $\Phi$ and $1 - \Phi$. The corresponding eigenvectors are solutions of the linear systems $(A - \Phi E)x = 0$ and $(A - (1 - \Phi) E)x = 0$; they then form the columns of the transformation matrix $T$.
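
A short numerical check (NumPy) that the roots of the characteristic polynomial $\lambda^2 - \lambda - 1$ coincide with the eigenvalues computed directly from $A$:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 1.0]])

# Roots of the characteristic polynomial det(A - lambda*E) = lambda^2 - lambda - 1
roots = np.roots([1.0, -1.0, -1.0])
eigenvalues = np.linalg.eigvals(A)

print(np.sort(roots))         # (1 - sqrt(5))/2 and (1 + sqrt(5))/2
print(np.sort(eigenvalues))   # the same two numbers
```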

Diagonalizability

Whether a matrix is diagonalizable depends on the number domain used. $A$, for example, is not diagonalizable over the rational numbers, because its eigenvalues $\Phi$ and $1 - \Phi$ are irrational numbers. Diagonalizability can also fail independently of the number domain if not "enough" eigenvalues are present; for instance, the Jordan-form matrix

$$\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$$

has only the eigenvalue 1 (as a double root of the quadratic equation $(\lambda - 1)^2 = 0$) and is not diagonalizable. With a sufficiently large number domain (for example, over the complex numbers), however, every matrix can be diagonalized or transformed into Jordan normal form.

Since transforming a matrix corresponds to a change of basis of a linear map, this last statement says that, over a sufficiently large number domain, one can always choose a basis for a linear map that is mapped "in a simple way": in the case of diagonalizability, every basis vector is mapped to a multiple of itself (i.e. it is an eigenvector); in the case of the Jordan form, to a multiple of itself plus possibly the preceding basis vector. This theory of linear maps can be generalized to fields that are not "sufficiently large"; in them, other normal forms besides the Jordan form must be considered (for example, the Frobenius normal form).
