False position method

The regula falsi method (Latin: regula falsi = "rule of the false"), also called regula duarum falsarum positionum (Latin: "rule of the two false positions"), double false position, or linear bracketing, is a numerical method for computing zeros of real-valued functions. It combines ideas from the secant method and from bisection.


The method (primitive form)

The regula falsi method starts from two points a_0 and b_0 (near the zero) whose function values f(a_0) and f(b_0) have different signs. By the intermediate value theorem (for continuous f), the interval [a_0, b_0] therefore contains a zero. The interval is now reduced over several iterations, yielding an ever more accurate approximation of the zero.

Iteration

In step k, one calculates

    c_k = a_k - f(a_k) · (b_k - a_k) / (f(b_k) - f(a_k)) = (a_k · f(b_k) - b_k · f(a_k)) / (f(b_k) - f(a_k))

Now a_{k+1} and b_{k+1} are chosen as follows:

  • a_{k+1} = c_k and b_{k+1} = b_k, if f(c_k) and f(a_k) have the same sign;
  • a_{k+1} = a_k and b_{k+1} = c_k, if f(c_k) and f(b_k) have the same sign.

In this way the sign change, and with it the zero, always stays inside the interval.
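A minimal Python sketch of this primitive form (the function name, tolerances, and example function are illustrative choices, not part of the original article):

    def regula_falsi(f, a, b, tol=1e-12, max_iter=100):
        """Primitive regula falsi; f(a) and f(b) must have opposite signs."""
        fa, fb = f(a), f(b)
        if fa * fb > 0:
            raise ValueError("f(a) and f(b) must differ in sign")
        c = a
        for _ in range(max_iter):
            c = a - fa * (b - a) / (fb - fa)  # zero of the secant
            fc = f(c)
            if abs(fc) < tol:
                break
            if fc * fa > 0:   # f(c_k) and f(a_k) have the same sign
                a, fa = c, fc
            else:             # f(c_k) and f(b_k) have the same sign
                b, fb = c, fc
        return c

    # Example: the zero of x^2 - 2 in [1, 2], i.e. sqrt(2)
    print(regula_falsi(lambda x: x * x - 2, 1.0, 2.0))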

Comments

  • The calculation of c_k amounts to applying one step of the secant method on the k-th interval. In contrast to the secant method, however, this interval always contains a zero.
  • Geometrically, c_k can be interpreted as the zero of the secant through (a_k, f(a_k)) and (b_k, f(b_k)).
  • c_k of course always lies in the interval [a_k, b_k].
  • Convergence: for continuous f with a sign change the method always converges, but in general only linearly (see the next section).

Improvement of the method

If f is convex or concave on the interval, so that the second derivative has the same sign everywhere in the interval, then one of the interval boundaries remains fixed for all further iterations. The other boundary then converges only linearly to the solution.
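As a numerical illustration (an example added here, not from the original text): take the convex function f(x) = x^2 - 2 on [1, 2], with f(1) = -1 and f(2) = 2. The first secant zero is c_0 = 1 - (-1) · (2 - 1) / (2 - (-1)) = 4/3, and f(4/3) = -2/9 < 0, so the new interval is [4/3, 2]. Because the secant of a convex function lies above its graph, every further f(c_k) is again negative; the right boundary b = 2 never moves, and the left boundary approaches sqrt(2) only linearly.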

The following procedures provide a remedy.

Illinois, Pegasus and Anderson/Björck methods

Idea of the method

The improved methods are based on the following idea. If the "left" interval boundary x_1 does not change in the current step, that is, if the actual zero lies between the "left" boundary and the approximate zero z, then the stored function value f_1 at that boundary is simply multiplied by a factor 0 < m < 1 and thus moved closer to zero.

This either shortens the distance between the approximation and the zero in the next step, or in the next step the zero comes to lie between the approximation and the "right" interval boundary.

In the second case, "left" and "right" are then simply swapped for the next step. Since the second case eventually always occurs, even on convex intervals, it is guaranteed that neither of the two interval boundaries remains fixed until termination. The convergence is thus guaranteed to be superlinear.

Algorithm

The following algorithm is common to these methods (a sketch is given below):

Here x_1 and x_2 are the interval boundaries in the k-th step, and f_1 and f_2 are the function values at the points x_1 and x_2; ε_x and ε_f are the stopping tolerances, and m is the velocity factor. ⊙ stands for an unspecified binary Boolean function in the stopping condition. Useful choices here would be the disjunction, the conjunction, the identity of the first operand, and the identity of the second operand. For the condition to be met and the procedure to terminate, in the first case one of the two stopping bounds must be undershot, in the second case both, in the third case ε_x, and in the fourth case ε_f.

The various methods differ only in the velocity factor m.
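A Python sketch of this common skeleton, reconstructed from the description above and the recursive implementation below. The Pegasus factor matches that implementation; the Illinois and Anderson/Björck factors are the ones commonly given in the literature and are supplied here as assumptions. The disjunction is used for ⊙:

    def m_illinois(f2, fz):
        return 0.5

    def m_pegasus(f2, fz):
        return f2 / (f2 + fz)

    def m_anderson_bjoerck(f2, fz):
        m = 1.0 - fz / f2
        return m if m > 0 else 0.5  # fall back to the Illinois factor

    def false_position(f, x1, x2, m, epsx=1e-12, epsf=1e-12, max_iter=100):
        f1, f2 = f(x1), f(x2)  # must have opposite signs
        z = x1
        for _ in range(max_iter):
            z = x1 - f1 * (x2 - x1) / (f2 - f1)  # zero of the secant
            fz = f(z)
            # stopping condition with the disjunction as ⊙
            if abs(x2 - x1) < epsx or abs(fz) < epsf:
                break
            if fz * f2 < 0:
                # zero in [z, x2]: swap "left" and "right"
                x1, f1, x2, f2 = x2, f2, z, fz
            else:
                # zero in [x1, z]: keep x1, scale the stored f1 by m
                f1 = m(f2, fz) * f1
                x2, f2 = z, fz
        return z

    # Example: Pegasus method for x^2 - 2 on [1, 2]
    print(false_position(lambda x: x * x - 2, 1.0, 2.0, m_pegasus))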

Recursive implementation of the Pegasus method

The test function and the termination criteria, here in Python with example values:

    epsx = 1e-12           # stopping tolerance for the interval width
    epsf = 1e-12           # stopping tolerance for |f(z)|

    def f(x):
        return x * x - 2   # example test function

The velocity factor for the Pegasus method:

    def m(f2, fz):
        return f2 / (f2 + fz)

The actual recursive algorithm:

    def better_false_position(x1, x2, f1, f2):
        z = x1 - f1 * (x2 - x1) / (f2 - f1)  # approximation for the zero
        fz = f(z)
        # below a stopping tolerance: return z as the approximation
        if abs(x2 - x1) < epsx or abs(fz) < epsf:
            return z
        # zero in [z, x2]? then swap "left" and "right" for the next step
        if fz * f2 < 0:
            return better_false_position(x2, z, f2, fz)
        # otherwise: "shorten" and look for the zero in [x1, z]
        return better_false_position(x1, z, m(f2, fz) * f1, fz)

The function with which the procedure is started on the interval to be examined:

    def start_better_false_position(x1, x2):
        return better_false_position(x1, x2, f(x1), f(x2))

Comments

  • The convergence of the method is superlinear and comparable to that of the secant method.
  • Owing to its guaranteed superlinear convergence and its relatively low computational cost per iteration, this method is usually preferable to other methods (such as Newton's method) for one-dimensional problems.

History

The oldest surviving document attesting knowledge of the method of double false position is the Indian mathematical text "Vaishali Ganit" (3rd century BC). The ancient Chinese mathematical text known as The Nine Chapters on the Mathematical Art (200 BC to 100 AD) also mentions the algorithm. There, however, the procedure was applied in the examples only to linear equations, so the solutions were reached in a single iteration. Leonardo da Pisa (Fibonacci) described the method of double false position in his book "Liber Abaci" (1202 AD), inspired by a method he had learned from Arabic sources.

Adam Ries also knew the regula falsi and described the method as follows:

" Is expected to take two wrong numbers that are to be checked according to the task thoroughly in the extent that it requires the desired number. Do they lead to higher earnings, as it is right in truth, they denote by the sign plus, for too small a result, but they describe the characters -, minus called. Then subtract a loss of the other. What it remains as a residual, keep to your splitter. Then multiply crosswise both a wrong number with the loss of the other. Pull one from the other, and what is a remainder, divide by the previously calculated divisor. Thus, the solution of the problem comes out. But leads to a wrong number is too large, and the other to too small a result, so will add the two deficits. What comes out is your divider. Then multiply crosswise, add and divide. Thus, the solution of the problem comes out. "
