Cramer's rule

Cramer's rule, also called the determinant method, is a mathematical formula for the solution of a system of linear equations. It is useful in the theoretical study of linear systems of equations. For actually computing a solution, however, its cost is as a rule too high, since a relatively large number of determinants must be evaluated; other methods of numerical mathematics are therefore used instead. Cramer's rule is named after Gabriel Cramer, who published it in 1750, although Leibniz had already found it earlier.

Rule

Given is a system of linear equations with as many equations as unknowns, in the form

  a11·x1 + a12·x2 + … + a1n·xn = b1
  a21·x1 + a22·x2 + … + a2n·xn = b2
    ⋮
  an1·x1 + an2·x2 + … + ann·xn = bn

or, in matrix notation,

  A·x = b.
It is further assumed that the square coefficient matrix A is regular (invertible). This is exactly the case when det(A) ≠ 0.

Then the system of equations has a unique solution, and the components x1, …, xn of the uniquely determined solution vector are given by

  xi = det(Ai) / det(A)   for i = 1, …, n.

Here Ai denotes the matrix formed by replacing the i-th column of the coefficient matrix A with the right-hand side b of the system of equations:

  Ai = (a1, …, a(i−1), b, a(i+1), …, an),

where aj denotes the j-th column of A.
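The rule can be sketched in a few lines of Python. This is an illustrative implementation, not from the source; NumPy's `linalg.det` is used for the determinants, and the singularity tolerance is an arbitrary choice:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b via Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with its i-th column replaced by b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    det_A = np.linalg.det(A)
    if abs(det_A) < 1e-12:
        raise ValueError("coefficient matrix is singular; Cramer's rule does not apply")
    n = A.shape[0]
    x = np.empty(n)
    for i in range(n):
        A_i = A.copy()
        A_i[:, i] = b              # replace the i-th column by the right-hand side
        x[i] = np.linalg.det(A_i) / det_A
    return x
```

For example, `cramer_solve([[2, 1], [1, 3]], [3, 5])` returns the same vector as `np.linalg.solve` for this system.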

Examples

System of two linear equations

Consider a system of two linear equations in two unknowns:

  a11·x1 + a12·x2 = b1
  a21·x1 + a22·x2 = b2

The extended coefficient matrix of the system of equations is then

  ( a11  a12 | b1 )
  ( a21  a22 | b2 )

According to Cramer's rule, the solution is calculated as follows:

  x1 = |b1 a12; b2 a22| / |a11 a12; a21 a22| = (b1·a22 − a12·b2) / (a11·a22 − a12·a21)

  x2 = |a11 b1; a21 b2| / |a11 a12; a21 a22| = (a11·b2 − b1·a21) / (a11·a22 − a12·a21)

The vertical lines are a common notation for the determinant; semicolons separate the rows.
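As a numeric sketch of the two-equation case (the coefficient values below are invented for illustration, not taken from the source):

```python
# Hypothetical system:  1*x1 + 2*x2 = 3
#                       4*x1 + 5*x2 = 6
a11, a12, b1 = 1.0, 2.0, 3.0
a21, a22, b2 = 4.0, 5.0, 6.0

det_A  = a11 * a22 - a12 * a21   # determinant of the coefficient matrix
det_A1 = b1  * a22 - a12 * b2    # column 1 replaced by the right-hand side
det_A2 = a11 * b2  - b1  * a21   # column 2 replaced by the right-hand side

x1 = det_A1 / det_A
x2 = det_A2 / det_A
print(x1, x2)                    # prints: -1.0 2.0
```

Substituting back confirms the solution: 1·(−1) + 2·2 = 3 and 4·(−1) + 5·2 = 6.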

System of three linear equations

Consider a system of three linear equations in three unknowns:

  a11·x1 + a12·x2 + a13·x3 = b1
  a21·x1 + a22·x2 + a23·x3 = b2
  a31·x1 + a32·x2 + a33·x3 = b3

The extended coefficient matrix of the system of equations is then

  ( a11  a12  a13 | b1 )
  ( a21  a22  a23 | b2 )
  ( a31  a32  a33 | b3 )

According to Cramer's rule, the solution of the system is calculated as follows:

  x1 = |b1 a12 a13; b2 a22 a23; b3 a32 a33| / |a11 a12 a13; a21 a22 a23; a31 a32 a33|

  x2 = |a11 b1 a13; a21 b2 a23; a31 b3 a33| / |a11 a12 a13; a21 a22 a23; a31 a32 a33|

  x3 = |a11 a12 b1; a21 a22 b2; a31 a32 b3| / |a11 a12 a13; a21 a22 a23; a31 a32 a33|

The vertical lines are a common notation for the determinant; semicolons separate the rows.

History

Cramer's rule was published in 1750 by Gabriel Cramer in Appendix 1 of his book "Introduction à l'analyse des lignes courbes algébriques". He gave explicit solution formulas for systems of up to three linear equations and described how to construct the corresponding formulas for systems with more equations. Since the determinant had not yet been introduced, he expressed the numerators and denominators as polynomials; these are identical to the polynomials of the Leibniz formula. Cramer did not, however, use today's notation for systems of linear equations.

Cramer himself was aware that systems of linear equations are not always uniquely solvable; Étienne Bézout showed in 1764 that the denominator is zero when the system of equations is not uniquely solvable. Cramer gave no proof of his formula. The first proof was supplied by Augustin-Louis Cauchy in 1815, who in doing so also introduced the notation for Cramer's rule that is used today.

Gottfried Wilhelm Leibniz had already written down Cramer's rule in a manuscript in 1678. This, however, was discovered only later and thus had no influence on the development of methods for solving systems of linear equations. Colin Maclaurin described the special cases of Cramer's rule for systems of two or three linear equations in his "Treatise of Algebra", published in 1748. Although he had the idea of extending these formulas to systems with more equations, he did not, in contrast to Cramer, find a rule for setting the correct signs in the polynomials involved. In the 20th century, Carl Benjamin Boyer ignited a controversy among historians of mathematics over whether Maclaurin or Cramer was the discoverer of the formula, and suggested renaming it the Maclaurin–Cramer rule.

Computational effort

To solve a system of linear equations with n unknowns using Cramer's rule, n + 1 determinants must be calculated. The number of arithmetic operations to be performed therefore depends only on the algorithm used for computing the determinants.

If the determinants are computed using the Leibniz formula, then n!·(n − 1) multiplications and n! − 1 additions are needed for each determinant. Even for a system of four equations, 360 multiplications, 115 additions and four divisions are required. Compared with other procedures, this is a large number of operations. Even when more efficient algorithms for computing determinants are used, the computational effort for solving a system of linear equations with Cramer's rule is far higher than, for example, with Gaussian elimination.

On a computer performing 100 million floating-point operations per second (100 MFLOPS), this leads to computation times that grow factorially with the size of the system.
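The operation counts above can be reproduced with a short script (a sketch; the counts assume that each of the n + 1 determinants is expanded by the Leibniz formula, i.e. n! terms with n factors each):

```python
from math import factorial

def cramer_leibniz_cost(n):
    """Operation counts for solving an n-by-n system with Cramer's rule
    when every determinant is evaluated via the Leibniz formula."""
    mults_per_det = factorial(n) * (n - 1)   # n! products of n factors each
    adds_per_det = factorial(n) - 1          # summing n! terms
    return {
        "multiplications": (n + 1) * mults_per_det,
        "additions": (n + 1) * adds_per_det,
        "divisions": n,                      # one division per unknown
    }

print(cramer_leibniz_cost(4))
# {'multiplications': 360, 'additions': 115, 'divisions': 4}
```

For n = 4 this reproduces the 360 multiplications, 115 additions and four divisions stated above.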

Proof

For the proof one uses the matrix Xi, obtained by replacing the i-th column of the identity matrix with the solution vector x of the system of equations. For a system of four equations and, for example, i = 2, X2 looks as follows:

  ( 1  x1  0  0 )
  ( 0  x2  0  0 )
  ( 0  x3  1  0 )
  ( 0  x4  0  1 )

From this example one can also see that det(Xi) = xi holds: expanding the determinant along the i-th row, only the entry xi has a non-vanishing cofactor.

In addition, since A·Xi = Ai, it follows from the product rule for determinants that

  det(A) · det(Xi) = det(Ai).

Since the determinant det(A) is, by assumption, not the zero element, the quotient xi = det(Ai) / det(A) exists.
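The identities used in this proof can be checked numerically. The matrix below is a random, illustrative example (not from the source); the diagonal shift merely keeps it well away from singularity:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((4, 4)) + 4 * np.eye(4)   # illustrative well-conditioned 4x4 matrix
b = rng.random(4)
x = np.linalg.solve(A, b)                # the exact solution of A x = b

for i in range(4):
    X_i = np.eye(4)
    X_i[:, i] = x                        # identity with i-th column = solution vector
    A_i = A.copy()
    A_i[:, i] = b                        # A with i-th column = right-hand side
    assert np.allclose(A @ X_i, A_i)                          # A·X_i = A_i
    assert np.isclose(np.linalg.det(X_i), x[i])               # det(X_i) = x_i
    assert np.isclose(np.linalg.det(A) * np.linalg.det(X_i),
                      np.linalg.det(A_i))                     # product rule
```

All three assertions together yield x_i = det(A_i)/det(A), which is exactly Cramer's rule.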

Generalization

A generalization of Cramer's rule, and at the same time a step in its proof, is the following theorem: given a square linear system of equations of the form

  A·x = b.

If x is a solution of this linear system of equations, then

  xi · det(A) = det(Ai)   for i = 1, …, n.

Here the matrix Ai is formed from the coefficient matrix A by replacing its i-th column with the right-hand side b of the system of equations.

This removes the restriction to uniquely solvable systems of equations. Moreover, since no division occurs, the theorem holds for all systems of equations with coefficients in a commutative ring. This generalization is, however, no longer called Cramer's rule.

Consequences of Cramer's rule

The inverse of a matrix

Main article: Regular Matrix

The individual columns of the inverse of a matrix A are each the solution of the system A·x = ei with the i-th unit vector ei on the right-hand side. Computing these with Cramer's rule yields the formula in terms of the adjugate adj(A):

  A^(−1) = adj(A) / det(A).

This formula also establishes a property of matrices with entries from a commutative ring instead of a field: such a matrix is invertible precisely when its determinant is invertible in the ring.

Solution of a homogeneous linear system of equations

With the aid of Cramer's rule it can be shown easily that the trivial solution x = 0 is the only solution of any homogeneous system of linear equations A·x = 0 with det(A) ≠ 0. Since each matrix Ai contains only zeros in its i-th column, its columns are not linearly independent, and therefore det(Ai) = 0. Thus all components xi = det(Ai)/det(A) are zero.

From this property it follows directly that the kernel of a matrix A with det(A) ≠ 0 consists only of the zero vector, and the corresponding system of linear equations is therefore uniquely solvable.
