Polynomial interpolation

In numerical analysis, polynomial interpolation means the search for a polynomial that passes exactly through specified points (e.g. from a series of measurements). This polynomial is called the interpolation polynomial and is said to interpolate the given points.


Applications

Polynomials are very easy to integrate and differentiate. Interpolating polynomials therefore appear in many places in numerical mathematics, for example in numerical integration and, building on that, in methods for the numerical solution of ordinary differential equations.

Problem

Given n + 1 value pairs (x_i, f_i), i = 0, …, n, with pairwise distinct nodes x_i, a polynomial p of degree at most n is sought that satisfies all the equations

  p(x_i) = f_i,   i = 0, …, n.

Such a polynomial always exists and is uniquely determined, as shown below.

The interpolation problem is thus posed in the vector space Π_n of polynomials of degree at most n. If φ_0, …, φ_n is a basis of Π_n, the conditions p(x_i) = f_i yield a linear system of equations for the coefficients of the basis representation p = a_0·φ_0 + … + a_n·φ_n. Since one and the same polynomial can be represented quite differently depending on which basis is chosen for the vector space, one can obtain very different systems of equations. Choosing the standard basis, i.e. the representation p(x) = a_0 + a_1·x + … + a_n·xⁿ, yields a system of equations with the Vandermonde matrix:

  | 1  x_0  x_0²  …  x_0ⁿ |   | a_0 |   | f_0 |
  | 1  x_1  x_1²  …  x_1ⁿ | · | a_1 | = | f_1 |
  | ⋮   ⋮    ⋮        ⋮   |   |  ⋮  |   |  ⋮  |
  | 1  x_n  x_n²  …  x_nⁿ |   | a_n |   | f_n |

This matrix is regular if and only if the nodes are pairwise distinct; the system of equations can then be solved uniquely, which ensures the existence and uniqueness of the interpolation polynomial. Despite this theoretical feasibility, the approach is not used in practice: solving the Vandermonde system takes O(n³) operations, and the matrix is ill-conditioned for an unsuitable choice of nodes.
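As a minimal illustration, the Vandermonde approach can be carried out directly with a generic linear solver. The nodes and values below are made-up sample data, not part of the original text:

```python
import numpy as np

# Made-up sample data: four pairwise distinct nodes with arbitrary values.
x = np.array([0.0, 1.0, 2.0, 3.0])
f = np.array([1.0, 2.0, 0.0, 5.0])

# Vandermonde matrix V[i, j] = x_i**j, and the linear system V a = f.
V = np.vander(x, increasing=True)
a = np.linalg.solve(V, f)            # monomial coefficients a_0, ..., a_n

# The resulting polynomial reproduces the data at the nodes.
p_at_nodes = np.polynomial.polynomial.polyval(x, a)
assert np.allclose(p_at_nodes, f)
```

Note that `np.linalg.solve` performs a general O(n³) factorization, which matches the cost objection above; the methods below avoid both the cost and the conditioning problem.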

Solution method

The above system of equations could be solved, for example, with Gaussian elimination. At O(n³) operations, however, the cost of doing so is relatively high. Choosing a basis other than the standard basis to describe the polynomial reduces this effort.

Lagrange interpolation formula

For theoretical considerations, a representation in the Lagrange basis is more convenient. The basis functions are the Lagrange polynomials

  ℓ_i(x) = ∏_{j=0, j≠i}^{n} (x − x_j) / (x_i − x_j),

which are defined so that

  ℓ_i(x_j) = δ_ij

holds, where δ_ij is the Kronecker delta. The matrix (ℓ_i(x_j)) thus corresponds exactly to the identity matrix, and the solution of the interpolation problem can be written down directly as

  p(x) = Σ_{i=0}^{n} f_i · ℓ_i(x)

with the sample values f_i. This is often used to prove the existence of a solution of the interpolation problem. An advantage of the Lagrange basis is that the basis functions are independent of the sample values: different sets of values at the same nodes can be interpolated quickly once the basis functions have been determined. A disadvantage of this representation is that all basis polynomials must be recomputed from scratch when a single node is added, which makes this method too costly for most practical purposes.
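The formula p(x) = Σ f_i ℓ_i(x) can be sketched directly; `lagrange_eval` is a hypothetical helper name and the nodes and values are made-up sample data:

```python
def lagrange_eval(xs, fs, x):
    """Evaluate the interpolating polynomial in the Lagrange basis at x."""
    total = 0.0
    for i, xi in enumerate(xs):
        li = 1.0                     # l_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += fs[i] * li          # p(x) = sum_i f_i * l_i(x)
    return total

# Made-up data: the interpolant reproduces the values at the nodes ...
xs, fs = [0.0, 1.0, 3.0], [1.0, 3.0, 2.0]
assert all(abs(lagrange_eval(xs, fs, xi) - fi) < 1e-12 for xi, fi in zip(xs, fs))
# ... and a line through (0, 0) and (1, 2) gives p(0.5) = 1.
assert abs(lagrange_eval([0.0, 1.0], [0.0, 2.0], 0.5) - 1.0) < 1e-12
```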

Newton algorithm

In this method, the polynomial is represented in the Newton basis, so that the coefficients can be determined efficiently via the scheme of divided differences. The polynomial itself is then evaluated efficiently using the Horner scheme.

Approach: Newton basis

As ansatz for the desired interpolation polynomial, choose the Newton basis functions

  N_0(x) = 1,   N_i(x) = ∏_{j=0}^{i−1} (x − x_j),   i = 1, …, n,

so that p is represented by Newton's interpolation formula

  p(x) = c_0·N_0(x) + c_1·N_1(x) + … + c_n·N_n(x).

The system of equations p(x_i) = f_i then has the form

  | 1                                          |   | c_0 |   | f_0 |
  | 1  (x_1 − x_0)                             | · | c_1 | = | f_1 |
  | ⋮       ⋮            ⋱                     |   |  ⋮  |   |  ⋮  |
  | 1  (x_n − x_0)  …  ∏_{j=0}^{n−1}(x_n − x_j)|   | c_n |   | f_n |

In contrast to the awkward Vandermonde matrix obtained with the standard basis, choosing the Newton basis yields a simply structured lower triangular matrix, and the system of equations can be solved with little effort.

Determination of the coefficients: Scheme of the divided differences

The coefficients c_i are not determined directly from the above system of equations, but more efficiently via divided differences. By induction one proves that the coefficients satisfy

  c_i = f[x_0, …, x_i],

where the divided differences are defined recursively by

  f[x_i] = f_i,
  f[x_i, …, x_{i+k}] = ( f[x_{i+1}, …, x_{i+k}] − f[x_i, …, x_{i+k−1}] ) / ( x_{i+k} − x_i ).

The notation with f[…] is explained by the fact that one often assumes an unknown function f that is to be interpolated from its known function values f_i = f(x_i).

The recursive calculation of the divided differences can be illustrated by the following triangular scheme; the desired coefficients are exactly the entries along its top edge:

  f[x_0]
            f[x_0, x_1]
  f[x_1]                  f[x_0, x_1, x_2]
            f[x_1, x_2]                      f[x_0, x_1, x_2, x_3]
  f[x_2]                  f[x_1, x_2, x_3]
            f[x_2, x_3]
  f[x_3]

Evidently, when a further point is added to the value pairs, only one more diagonal has to be appended to the scheme in order to compute the additional coefficient; the previously determined coefficients do not need to be recalculated.

Once the coefficients c_i of the interpolating polynomial are known, it can be evaluated efficiently with the Horner scheme. To do so, write p in the form (a simple rearrangement of Newton's interpolation formula)

  p(x) = c_0 + (x − x_0)·( c_1 + (x − x_1)·( c_2 + … + (x − x_{n−1})·c_n … ) ),

so that p(x) can be computed recursively by

  b_n = c_n,   b_i = c_i + (x − x_i)·b_{i+1},   p(x) = b_0.

This requires O(n) operations.
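The two steps can be sketched as follows; the function names are hypothetical and the node/value data are made-up samples of x² + 1:

```python
def divided_differences(xs, fs):
    """Return the Newton coefficients c[i] = f[x_0, ..., x_i], in place column by column."""
    c = list(fs)                          # column 0: c[i] = f[x_i]
    n = len(xs)
    for k in range(1, n):                 # column k of the triangular scheme
        for i in range(n - 1, k - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - k])
    return c

def newton_eval(xs, c, x):
    """Evaluate the Newton-form polynomial at x with the Horner-like nested scheme."""
    p = c[-1]                             # b_n = c_n
    for i in range(len(c) - 2, -1, -1):
        p = p * (x - xs[i]) + c[i]        # b_i = c_i + (x - x_i) * b_{i+1}
    return p

xs, fs = [0.0, 1.0, 2.0], [1.0, 2.0, 5.0]   # made-up samples of x**2 + 1
c = divided_differences(xs, fs)
assert abs(newton_eval(xs, c, 3.0) - 10.0) < 1e-12   # the quadratic is reproduced exactly
```

Building the coefficients costs O(n²), each subsequent evaluation O(n), matching the comparison below.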

Algorithm by Neville - Aitken

Similarly to the Newton algorithm, the Neville–Aitken algorithm computes the solution recursively. Here p_{i,k} denotes the uniquely determined interpolation polynomial of degree at most k through the nodes x_i, …, x_{i+k}, with p_{i,0} = f_i. One then has the Aitken recurrence formula

  p_{i,k}(x) = ( (x − x_i)·p_{i+1,k−1}(x) − (x − x_{i+k})·p_{i,k−1}(x) ) / ( x_{i+k} − x_i ).

It can be proved by inserting the nodes and verifying that the right-hand side satisfies the interpolation conditions; the uniqueness of the interpolation polynomial then yields the claim.

With Neville's scheme, the evaluation of p = p_{0,n} at a point x can then proceed recursively from the values p_{i,0}(x) = f_i up to p_{0,n}(x), analogously to the triangular scheme of the divided differences.
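The recurrence can be sketched as an in-place evaluation at a single point; `neville` is a hypothetical helper name and the data are made-up samples of x² + 1:

```python
def neville(xs, fs, x):
    """Evaluate the interpolating polynomial at x via the Neville-Aitken scheme."""
    p = list(fs)                          # p[i] = p_{i,0}(x) = f_i
    n = len(xs)
    for k in range(1, n):
        for i in range(n - k):
            # p[i] becomes p_{i,k}(x), interpolating x_i, ..., x_{i+k}
            p[i] = ((x - xs[i]) * p[i + 1]
                    - (x - xs[i + k]) * p[i]) / (xs[i + k] - xs[i])
    return p[0]                           # p_{0,n}(x)

xs, fs = [0.0, 1.0, 2.0], [1.0, 2.0, 5.0]   # made-up samples of x**2 + 1
assert abs(neville(xs, fs, 3.0) - 10.0) < 1e-12
```

No coefficients are formed: each evaluation costs O(n²), but appending a node only extends the scheme.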

Comparison of solution methods

If all coefficients of the interpolating polynomial are to be determined, the Newton algorithm requires the least effort, O(n²). The polynomial so determined can then be evaluated at a point with O(n) operations, so the Newton algorithm is well suited when the interpolation polynomial is to be evaluated at many points. Further nodes can also be added efficiently. If the nodes or the sample values lie too close together, however, there is a risk of cancellation when computing the divided differences.

The Neville–Aitken algorithm, by contrast, is well suited when the interpolation polynomial is to be evaluated at only a few points; it is also less susceptible to cancellation. New nodes can likewise be added efficiently in the Neville–Aitken algorithm: a desired accuracy of the interpolation at a point can thus be reached by adding more and more nodes.

Example: interpolation of the tangent function

Interpolate the tangent function f(x) = tan(x) at the given nodes x_i.

Solution with Lagrange

The Lagrange basis functions are

The interpolation polynomial is therefore

Solution with Newton

The divided differences are here

and the interpolation polynomial is

With exact input values, the first and the third coefficient vanish.
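As a concrete sketch, the following interpolates tan at five symmetric nodes (x_i = −1.5, −0.75, 0, 0.75, 1.5 — an assumed choice, not necessarily the example's original data) and checks that the interpolant inherits the odd symmetry of the tangent, which is what makes coefficients vanish:

```python
import math

# Assumed symmetric nodes; values are exact samples of the odd function tan.
xs = [-1.5, -0.75, 0.0, 0.75, 1.5]
fs = [math.tan(x) for x in xs]

def lagrange_eval(xs, fs, x):
    """Evaluate the interpolating polynomial in the Lagrange basis at x."""
    total = 0.0
    for i, xi in enumerate(xs):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += fs[i] * li
    return total

# An odd function on symmetric nodes has an odd interpolant: p(-x) = -p(x).
assert abs(lagrange_eval(xs, fs, 0.5) + lagrange_eval(xs, fs, -0.5)) < 1e-9
```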

Interpolation quality

Error estimate

Let a function f be given whose values f_i = f(x_i) at the nodes x_0, …, x_n are interpolated by the polynomial p. Let I denote the smallest interval containing the nodes x_0, …, x_n and a point x. Assume further that f is (n + 1) times continuously differentiable on I. Then there exists a ξ ∈ I for which

  f(x) − p(x) = f^(n+1)(ξ) / (n + 1)! · ∏_{i=0}^{n} (x − x_i).

In particular, with respect to the supremum norm on I:

  ‖f − p‖_∞ ≤ ‖f^(n+1)‖_∞ / (n + 1)! · ‖∏_{i=0}^{n} (x − x_i)‖_∞.

Error optimization by Chebyshev

The error thus depends on a derivative of f and on the product ∏ (x − x_i), that is, on the nodes x_i. Sometimes one is in the position to choose the nodes oneself, for instance when performing a physical experiment, or in some methods for the numerical solution of differential equations. In this case the question of interest is for which nodes the maximum norm of the product ∏ (x − x_i) is minimal.

Chebyshev answered this question completely. Consider the Chebyshev polynomials

  T_n(x) = cos( n · arccos x ),   x ∈ [−1, 1],

with zeros (the "Chebyshev points")

  x̄_k = cos( (2k + 1)·π / (2n) ),   k = 0, …, n − 1.

The first Chebyshev polynomials are:

  T_0(x) = 1,   T_1(x) = x,   T_2(x) = 2x² − 1,   T_3(x) = 4x³ − 3x.

One can then prove that on the interval [−1, 1], every monic polynomial of degree n is bounded from below in the maximum norm by the normalized Chebyshev polynomial 2^(1−n)·T_n; the zeros of T_{n+1} are therefore the optimal nodes. Using the transformation

  x = (a + b)/2 + (b − a)/2 · t,   t ∈ [−1, 1],

this statement carries over to the case of a general interval [a, b]. The proof also yields the estimate

  ‖∏_{i=0}^{n} (x − x_i)‖_∞ ≤ 2 · ( (b − a)/4 )^(n+1).
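This optimality can be observed numerically. The sketch below (with an assumed degree and a brute-force grid maximum) compares the node product for Chebyshev versus equidistant nodes on [−1, 1]:

```python
import math

n = 5                                   # assumed degree: n + 1 = 6 nodes
# Chebyshev points: zeros of T_{n+1} on [-1, 1].
cheb = [math.cos((2 * k + 1) * math.pi / (2 * (n + 1))) for k in range(n + 1)]
equi = [-1.0 + 2.0 * k / n for k in range(n + 1)]

def max_w(nodes):
    """Approximate max |prod (x - x_i)| on [-1, 1] over a fine grid."""
    grid = [-1.0 + 2.0 * t / 2000 for t in range(2001)]
    return max(abs(math.prod(x - xi for xi in nodes)) for x in grid)

assert max_w(cheb) < max_w(equi)                  # Chebyshev nodes win
assert abs(max_w(cheb) - 2.0 ** (-n)) < 1e-6      # theoretical minimum 2^{-n}
```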

Runge's phenomenon

Does the interpolation quality improve as more and more nodes are added? In general, no: with a high polynomial degree it can happen that the interpolation polynomial barely resembles the function to be interpolated, a behaviour known as Runge's phenomenon. Polynomials tend to ±∞ in the limit x → ±∞; if the function to be interpolated behaves differently, e.g. periodically or asymptotically constant, strong oscillations occur near the interval boundaries. For such functions, polynomial interpolation over the whole interval is relatively unsuitable.

Chebyshev nodes, which lie more densely near the interval boundaries, can reduce the total interpolation error, but a change of interpolation method is advisable, for instance to spline interpolation. Runge gave an example of this phenomenon, the Runge function named after him:

  f(x) = 1 / (1 + x²),   x ∈ [−5, 5].
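The phenomenon can be demonstrated directly: interpolating the Runge function at equidistant nodes, the maximum error grows as the degree increases. A minimal sketch (grid maximum and degrees are assumed choices):

```python
import math

def f(x):
    return 1.0 / (1.0 + x * x)          # the Runge function on [-5, 5]

def neville(xs, fs, x):
    """Evaluate the interpolating polynomial at x via the Neville-Aitken scheme."""
    p = list(fs)
    for k in range(1, len(xs)):
        for i in range(len(xs) - k):
            p[i] = ((x - xs[i]) * p[i + 1]
                    - (x - xs[i + k]) * p[i]) / (xs[i + k] - xs[i])
    return p[0]

def max_error(n):
    """Max |f - p| on a grid for n + 1 equidistant nodes on [-5, 5]."""
    xs = [-5.0 + 10.0 * i / n for i in range(n + 1)]
    fs = [f(x) for x in xs]
    grid = [-5.0 + 10.0 * t / 1000 for t in range(1001)]
    return max(abs(f(x) - neville(xs, fs, x)) for x in grid)

assert max_error(10) > max_error(5)     # raising the degree makes things worse
```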

Convergence behavior

There are, however, conditions under which the interpolation quality improves with a growing number of nodes: when the node grid becomes ever "finer" and an analytic function is interpolated. More precisely: let f be an analytic function on the interval [a, b]. For an interval partition

  Δ = { a = x_0 < x_1 < … < x_n = b },

its norm is defined by

  ‖Δ‖ = max_i ( x_{i+1} − x_i ).

For each interval partition Δ there is a uniquely determined polynomial p_Δ that interpolates f at the nodes of Δ. If ‖Δ_n‖ → 0 holds for a sequence of interval partitions, it follows that p_{Δ_n} → f uniformly.

However, for every sequence of interval partitions one can also find a continuous function f such that p_{Δ_n} does not converge uniformly to f (theorem of Faber).

Generalization

So far, the nodes of the interpolation were assumed to be pairwise distinct. In Hermite interpolation this is not the case: nodes that occur multiple times are interpreted as prescribing derivatives of the function to be interpolated.
