List of numerical analysis topics

This list collects methods of numerical analysis, organized by application area.

Systems of linear equations

  • Gaussian elimination (LU decomposition): A classic direct method; for large matrices, however, too expensive.
  • Cholesky decomposition: For symmetric positive definite matrices, a symmetric factorization analogous to the LU decomposition can be computed with half the effort.
  • QR decomposition: Also a direct method, with at least twice the cost of Gaussian elimination but better stability properties. Computed via Householder transformations, it is particularly well suited to linear least-squares problems.
  • Splitting methods: Classical iterative methods.
  • Gauss-Seidel method: Also known as the single-step method.
  • Jacobi method: Also known as the total-step method.
  • Richardson method
  • Chebyshev iteration: A splitting method with additional acceleration.
  • SOR method
  • SSOR method
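As an illustration of the splitting methods above, here is a minimal sketch of the Gauss-Seidel (single-step) iteration, assuming the matrix is strictly diagonally dominant so that the iteration converges; the function name and the small example system are illustrative only.

```python
# Gauss-Seidel iteration for A x = b (assumes convergence, e.g. strict
# diagonal dominance of A). Each sweep immediately reuses the updated
# entries x[0..i-1] -- the "single-step" character of the method.
def gauss_seidel(A, b, iterations=100):
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Small diagonally dominant example; exact solution is (29/18, 23/9).
A = [[4.0, 1.0], [2.0, 5.0]]
b = [9.0, 16.0]
print(gauss_seidel(A, b))
```

The Jacobi (total-step) method differs only in that each sweep uses the values of the previous sweep throughout instead of the already-updated entries.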

Nonlinear systems of equations

  • Bisection: A very simple root-finding method based on repeatedly halving an interval. Converges linearly; the error is roughly halved in each iteration.
  • Bisection-exclusion: A special bisection method for polynomials that brackets all zeros within a starting region to arbitrary accuracy.
  • Regula falsi, secant method: Simple iterative methods for finding roots of one-dimensional functions.
  • Fixed-point iteration: A class of linearly convergent methods for finding fixed points of functions, also in several dimensions.
  • Newton's method: A quadratically convergent method for finding zeros of differentiable functions. Also applicable in several dimensions, where a linear system must be solved in each iteration.
  • Quasi-Newton methods: Variants of Newton's method that use only an approximation of the derivative.
  • Halley's method, Euler-Chebyshev method: Cubically convergent methods for finding zeros of twice-differentiable functions. Also applicable in several dimensions, where two linear systems must be solved in each step.
  • Gradient descent: A slowly converging method for solving minimization problems.
  • Gauss-Newton method: A locally quadratically convergent method for solving nonlinear least-squares problems.
  • Levenberg-Marquardt algorithm: A combination of the Gauss-Newton method with a trust-region strategy.
  • Homotopy method: A given problem is continuously connected to a simpler problem with a known solution; in many cases the solution of the simple problem can be continued into a solution of the actual problem.
  • Bairstow's method: A special iterative method for determining complex roots of polynomials using only real arithmetic.
  • Weierstrass (Durand-Kerner, Dochev, Presić) method, Aberth-Ehrlich method, splitting circle method: Special methods derived from Newton's method for determining all complex zeros of a polynomial simultaneously.
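To make the quadratic convergence of Newton's method concrete, here is a minimal one-dimensional sketch; the function name and the tolerance are illustrative, and the method assumes a differentiable f and a starting guess close enough to the zero.

```python
# Newton's method for a scalar equation f(x) = 0: repeatedly follow the
# tangent line to its root. Converges quadratically near a simple zero.
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / fprime(x)  # one Newton step
    return x

# Example: sqrt(2) as the positive zero of x^2 - 2.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
print(root)  # ~1.41421356...
```

In several dimensions the division by f'(x) becomes the solution of a linear system with the Jacobian matrix, as noted above.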

Numerical Integration

  • Newton-Cotes formulas: Simple quadrature formulas based on polynomial interpolation.
  • Gaussian quadrature: Quadrature formulas with optimal order of convergence.
  • Romberg integration: A technique for improving the Newton-Cotes formulas by extrapolation.
  • Midpoint rule
  • Trapezoidal rule
  • Simpson's rule (Kepler's barrel rule)
  • Lie integration: An integration method based on a generalized differential operator.
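As a concrete example of the simplest Newton-Cotes formula, here is a minimal sketch of the composite trapezoidal rule on n subintervals; the function name and parameters are illustrative only.

```python
# Composite trapezoidal rule for the integral of f over [a, b]:
# approximate f by a chord on each of n subintervals of width h.
# The error decreases like h^2 for twice-differentiable f.
def trapezoid(f, a, b, n=1000):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))       # endpoints carry half weight
    for i in range(1, n):
        s += f(a + i * h)         # interior points carry full weight
    return h * s

# Example: integral of x^2 over [0, 1] is exactly 1/3.
print(trapezoid(lambda x: x * x, 0.0, 1.0))
```

Simpson's rule replaces the chords by parabolas and gains two orders of accuracy; Romberg integration extrapolates trapezoidal values for several step sizes.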

Approximation and Interpolation

  • Polynomial interpolation: Interpolation by polynomials.
  • Spline interpolation: Interpolation by piecewise polynomials (splines).
  • Trigonometric interpolation: Interpolation by trigonometric polynomials.
  • Remez algorithm: Finds the best approximation with respect to the supremum norm.
  • De Casteljau's algorithm: An algorithm for evaluating Bezier curves.
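De Casteljau's algorithm is short enough to sketch directly: a point on a Bezier curve is computed by repeated linear interpolation between consecutive control points; the function name is illustrative.

```python
# De Casteljau's algorithm: evaluate the Bezier curve with 2D control
# points `points` at parameter t in [0, 1] by repeatedly forming convex
# combinations of consecutive points until one point remains.
def de_casteljau(points, t):
    pts = list(points)
    while len(pts) > 1:
        pts = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# Quadratic Bezier curve through the midpoint construction at t = 0.5.
print(de_casteljau([(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)], 0.5))  # (1.0, 1.0)
```

Besides evaluation, the intermediate points of the recursion also split the curve into two Bezier segments, which is why the algorithm is a standard tool in computer-aided design.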

Optimization

  • Hill climbing: A general class of local optimization techniques.
  • Branch-and-bound: An enumeration method for solving integer optimization problems.
  • Cutting-plane methods, branch-and-cut: Methods for solving integer linear programming problems.
  • Simplex method: A direct method of linear programming.
  • Downhill simplex method (Nelder-Mead): A derivative-free method for nonlinear optimization.
  • Interior-point method: An iterative method for nonlinear optimization problems.
  • Logarithmic barrier method: Another iterative method for nonlinear optimization problems.
  • Simulated annealing: A heuristic method for optimization problems of high complexity.
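To illustrate the hill-climbing idea at the top of this section, here is a minimal one-dimensional sketch: take a step whenever it improves the objective, and shrink the step size when no neighboring point is better. The function name, step-size schedule, and example objective are all illustrative.

```python
# Simple hill climbing for one-dimensional minimization: probe one step
# in each direction; if neither direction improves f, halve the step.
# Terminates when the step size drops below the tolerance. Like all
# local searches, it can get stuck in a local minimum.
def hill_climb(f, x0, step=1.0, tol=1e-8):
    x = x0
    while step > tol:
        if f(x + step) < f(x):
            x += step
        elif f(x - step) < f(x):
            x -= step
        else:
            step *= 0.5  # no improvement: refine the search
    return x

# Example: the minimum of (x - 3)^2 is at x = 3.
print(hill_climb(lambda x: (x - 3.0) ** 2, 0.0))
```

Simulated annealing extends this scheme by occasionally accepting worsening steps with a temperature-dependent probability, which helps escape local minima.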

Numerical Solution of Ordinary Differential Equations

  • General linear methods: This class of methods allows a uniform representation of most of the methods listed here.
  • Euler's polygon method (explicit Euler method): The simplest solution method, a one-step method.
  • One-step methods: Methods that use only information from the current time step to compute the next approximation.
  • Multistep methods: Methods that also use information from previous steps; depending on the number of steps, the required starting values must first be computed, for example with a one-step method.
  • BDF methods: A special family of multistep methods for stiff initial value problems.
  • Adams-Bashforth methods: A family of explicit multistep methods.
  • Adams-Moulton methods: A family of implicit multistep methods.
  • Predictor-corrector method: The combination of an explicit and an implicit multistep method of the same error order. The explicit method provides an approximation (the predictor), which the implicit method then improves (the corrector).
  • Runge-Kutta methods: A family of one-step methods, including the classical Runge-Kutta method.
  • Rosenbrock-Wanner methods: A family of linearly implicit one-step methods for stiff initial value problems.
  • Newton-Störmer-Verlet leapfrog scheme: A popular symplectic integration method for problems of classical dynamics, from planetary motion to molecular dynamics, with improved preservation of dynamical invariants.
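The explicit Euler method named first in this section is a two-line algorithm; here is a minimal sketch with a fixed step size (the function name and parameters are illustrative).

```python
# Explicit Euler method for the initial value problem y' = f(t, y),
# y(t0) = y0: advance along the tangent, y_{k+1} = y_k + h f(t_k, y_k).
# First-order accurate; the global error shrinks linearly with h.
def euler(f, t0, y0, h, n):
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)  # follow the tangent for one step of size h
        t += h
    return y

# Example: y' = y, y(0) = 1; the exact value at t = 1 is e ~ 2.71828.
print(euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000))
```

Runge-Kutta methods improve on this by sampling f at several points within each step; multistep methods instead reuse f-values from previous steps.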

Numerical Methods for Partial Differential Equations

  • Finite element method: A modern, flexible method, primarily for solving elliptic partial differential equations.
  • Finite volume method: A modern method for solving conservation equations.
  • Finite difference method: A classic method for general partial differential equations.
  • Boundary element method: A method for solving elliptic PDEs in which only the boundary of the domain, not the domain itself (as in the FEM), is discretized.
  • Level-set method: A modern method for tracking moving interfaces.
  • Finite point method: A recent meshfree method that computes with points only, without elements.
  • Finite strip method: A special, simplified form of the FEM with strips as elements.
  • Orthogonal collocation: A method for general partial differential equations, often combined with the finite difference method.
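The finite difference method is the easiest of these to sketch. The example below, with illustrative names, discretizes the one-dimensional boundary value problem -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0 using the second-difference stencil and solves the resulting tridiagonal system with the Thomas algorithm.

```python
# Finite differences for -u'' = f on (0, 1), u(0) = u(1) = 0:
# on a grid x_i = i h with h = 1/(n+1), the stencil
# (2 u_i - u_{i-1} - u_{i+1}) / h^2 = f(x_i) gives a tridiagonal system,
# solved here by forward elimination and back substitution (Thomas).
def fd_poisson_1d(f, n):
    h = 1.0 / (n + 1)
    a = [-1.0] * n                      # sub-diagonal
    b = [2.0] * n                       # main diagonal
    c = [-1.0] * n                      # super-diagonal
    d = [h * h * f((i + 1) * h) for i in range(n)]
    for i in range(1, n):               # forward elimination
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * n                       # back substitution
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u

# Example: f = 2 has the exact solution u(x) = x (1 - x).
u = fd_poisson_1d(lambda x: 2.0, 9)
print(u[4])  # value at x = 0.5; exact u(0.5) = 0.25
```

The same stencil idea generalizes to several dimensions, where the resulting sparse systems are typically solved by the iterative methods from the first section.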

Calculation of eigenvalues

  • QR algorithm: Computes all eigenvalues, but at high cost.
  • LR algorithm: Also called staircase iteration (Treppeniteration); similar to the QR algorithm but less stable.
  • Power method: Allows the computation of the eigenvalue of largest magnitude.
  • Subspace iteration: A multidimensional extension of the power method that allows the simultaneous computation of several eigenvalues of largest magnitude.
  • Rayleigh quotient iteration: A particularly fast-converging variant of inverse iteration with shifts.
  • Lanczos method: Computes some eigenvalues of large sparse matrices.
  • Arnoldi method: Computes some eigenvalues of large sparse matrices.
  • Jacobi eigenvalue method: Computes all eigenvalues and eigenvectors of small symmetric matrices.
  • Jacobi-Davidson method: Computes some eigenvalues of large sparse matrices.
  • Folded spectrum method: Computes an eigenvalue and the associated eigenvector close to a shift (from the interior of the spectrum).
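The power method above is simple enough to sketch in a few lines; the function name is illustrative, and the method assumes the eigenvalue of largest magnitude is well separated from the rest of the spectrum.

```python
# Power method: repeated multiplication by A aligns an (almost)
# arbitrary start vector with the eigenvector of the eigenvalue of
# largest magnitude; the normalization factor estimates that eigenvalue.
def power_method(A, iterations=100):
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iterations):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w, key=abs)           # current eigenvalue estimate
        v = [wi / lam for wi in w]      # normalize to avoid overflow
    return lam, v

# Example: the eigenvalues of [[2, 1], [1, 2]] are 3 and 1.
lam, v = power_method([[2.0, 1.0], [1.0, 2.0]])
print(lam)  # ~3.0
```

Inverse iteration applies the same idea to (A - s I)^-1 to find the eigenvalue closest to a shift s, which is the starting point for Rayleigh quotient iteration.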

Others

  • Fast Fourier Transform (FFT)
  • Wavelet transformation
  • Regularization for solving ill-posed problems, in particular the classical Tikhonov-Phillips regularization
  • Multipole method
  • Gram-Schmidt orthogonalization
  • Heron's method for computing square roots
  • Bresenham's algorithm for line and circle rasterization in computer graphics
  • Extrapolation
  • Summation methods and sequence transformations for divergent sequences and series
  • Convergence acceleration for convergent sequences and series of real or complex numbers, vectors and matrices
  • CORDIC: An efficient iterative algorithm for implementing many transcendental functions on microcomputers and in digital circuits.
  • Numerical Mathematics
  • List (mathematics)
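Of the entries above, CORDIC lends itself to a short sketch. In rotation mode it rotates the vector (1, 0) toward the target angle using only a fixed table of angles arctan(2^-i) and scalings by powers of two, which hardware realizes as bit shifts; the floating-point simulation and function name below are illustrative, and the angle is assumed to lie in [-pi/2, pi/2].

```python
import math

# CORDIC in rotation mode: drive the residual angle z toward zero by
# micro-rotations of +/- atan(2^-i). The product of the rotation gains
# is the constant K, applied once at the end.
def cordic_sin_cos(theta, n=40):
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    K = 1.0
    for i in range(n):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))  # cumulative gain factor
    x, y, z = 1.0, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0.0 else -1.0          # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return K * y, K * x                        # (sin(theta), cos(theta))

s, c = cordic_sin_cos(0.5)
print(s, c)  # ~0.4794, ~0.8776
```

Each iteration adds roughly one bit of accuracy, which is why CORDIC is attractive for fixed-point hardware without a multiplier.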