Simplex algorithm

The simplex method (also simplex algorithm) is an optimization method of numerical analysis for solving linear optimization problems. It solves such a problem exactly after finitely many steps, or establishes its infeasibility or unboundedness. The basic idea of the simplex method was introduced in 1947 by George Dantzig; since then, numerous improvements have made it the most important solution method of linear programming in practice. Simplex methods are pivot methods.

Although for every variant of the method an example has so far been constructed on which the algorithm requires exponential running time, in practice the simplex algorithm is usually faster than other methods, even though competitive approaches for solving individual linear programs exist, e.g. interior-point methods. The great advantage of the simplex algorithm, however, is that after a slight change of the problem, for example the addition of a further constraint, it can perform a "warm start" from the last solution used and therefore usually needs only a few iterations to find a new solution, whereas other methods must start from scratch. Moreover, the simplex method exploits the close relationship between a linear programming problem and its dual problem, and essentially solves both problems at once. Both properties are important in integer linear or nonlinear optimization, where very many similar linear subproblems have to be solved in sequence.

The basic geometric idea of the algorithm is to run from an arbitrary corner of the polyhedron defined by the linear optimization problem along its edges to an optimal corner. The name of the method derives from the fact that in each iteration the non-negative linear combinations of the basis columns describe a simplicial cone. A namesake of this method called the downhill simplex method (Nelder and Mead 1965) is also based on a simplex, but it is an iterative method for nonlinear optimization.

History

The foundations of linear programming were laid in 1939 by the Russian mathematician Leonid Vitalyevich Kantorovich in his book "Mathematical Methods in the Organization and Planning of Production". Shortly thereafter (1941), the American Frank L. Hitchcock (1875-1957) presented a paper on a transportation problem. In 1947, George Dantzig published the simplex method, with which linear programs could be solved systematically for the first time. One of the first documented applications of the new method was the diet problem of George Stigler, whose goal was a dietary composition for soldiers that was as cost-effective as possible while meeting specified minimum and maximum amounts of vitamins and other ingredients. Nine people were employed at the time on the optimal solution of this linear program with nine inequalities and 77 variables, which together required about 120 man-days of computation. The first interest in this work was shown by the U.S. military, in particular the U.S. Air Force, which wanted to optimize military operations. In the following years, John von Neumann and Oskar Morgenstern developed the method further.

With the advent of computers in the mid-1950s, larger problems could be solved. Special variants of the simplex method were developed, such as the revised simplex method, which dealt very sparingly with the then scarce and expensive main memory. In 1954, William Orchard-Hays brought the first commercial implementation of the method to market. In the same year, Carlton Lemke and E. M. L. Beale published the dual simplex method, which today, after further improvements, has become one of the standard methods for solving linear programs.

The theoretical complexity of the simplex method remained unclear for a long time. It was not until 1972 that Victor Klee and George Minty constructed an example on which the algorithm visits exponentially many corners of a polyhedron, thereby showing the exponential worst-case running time of the method. Similar examples have since been found for all known variants of the method.

From the 1970s on, the simplex algorithm, like other methods of linear optimization, benefited from algorithmic advances in numerical linear algebra, in particular in the solution of large linear systems of equations. The development of numerically stable LU decompositions for sparse matrices in particular contributed significantly to the success and spread of the simplex method.

From the mid-1970s to the early 1990s, the method was substantially improved by the development of new pivoting strategies. In particular, the growing importance of integer linear programming in the 1980s and the development of dual steepest-edge pricing in the implementation of J. J. Forrest and Donald Goldfarb (1992) made the dual simplex method a serious competitor to other solution methods. Conversely, this improvement of the dual simplex algorithm played a significant role in the success of cutting-plane and branch-and-cut algorithms for solving integer linear programs. In addition, new preprocessing methods in the 1990s ensured that ever larger LPs could be solved. Under the condition, almost always met in practical applications, that the occurring LP matrices are sparse, linear programs with hundreds of thousands of variables and inequalities can today be solved optimally within a few hours.

The basic idea

The simplex method is used to solve linear programs of the form

  max { c^T x : Ax = b, x ≥ 0 }        (LP)

where A is a real (m × n) matrix, c ∈ R^n is called the objective function vector, and b ∈ R^m is the vector of the right-hand sides. The aim is to find a point x that satisfies the linear system of equations Ax = b, has non-negative coordinates, and attains an objective function value c^T x that is as high as possible.

Every linear program can be brought into this form by simple transformations. In particular, a minimization problem can be converted into a maximization problem by multiplying the objective function by (−1), and inequalities can be converted into equations by introducing slack variables. Most presentations additionally assume that all entries of b are non-negative, which can always be achieved by multiplying individual rows of the equation system by (−1).
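As an illustration, these transformations can be sketched in a few lines of Python. This is a hypothetical helper written for this article, not part of any LP library; the dense list-of-lists representation and the name to_standard_form are assumptions of the sketch:

```python
def to_standard_form(c, A, b):
    """Convert  min c^T x  s.t.  A x <= b, x >= 0  into the maximization
    standard form  max c'^T x'  s.t.  A' x' = b', x' >= 0  by negating the
    objective and appending one slack variable per inequality row."""
    m = len(A)
    c_max = [-ci for ci in c] + [0.0] * m          # slacks carry zero cost
    A_eq = [row + [1.0 if i == j else 0.0 for j in range(m)]
            for i, row in enumerate(A)]
    return c_max, A_eq, list(b)

# min 2x1 + 3x2  s.t.  x1 + x2 <= 4  becomes  max -2x1 - 3x2 + 0*s1
c_max, A_eq, b_eq = to_standard_form([2.0, 3.0], [[1.0, 1.0]], [4.0])
print(c_max, A_eq, b_eq)
```

Multiplying rows with negative right-hand side by (−1) could be added in the same style; it is omitted here for brevity.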

Geometrically, the set of feasible solutions, i.e. the set of all points with non-negative coordinates that satisfy the linear equation system Ax = b, can be interpreted as a convex polyhedron (a multi-dimensional polygon) P, whose dimension is bounded above by the number n of variables. The aim is to find a point of P with the highest possible objective function value c^T x. Intuitively, this corresponds to shifting the hyperplane {x : c^T x = z} as far as possible in the direction of the vector c, so that it just barely still touches the polyhedron. All points of contact are optimal.

For the set P of feasible solutions, there are three possibilities:

  • P is empty, i.e. the problem has no feasible solution,
  • P is non-empty, but the objective function is unbounded from above on P, i.e. the problem has no optimal solution,
  • P is non-empty, and the objective function attains its maximum on P, i.e. the problem has at least one optimal solution.

One can show that there is always an optimal corner whenever the optimization problem has feasible solutions at all and is bounded. In the search for optimal solutions, one can thus restrict attention to the vertices of the polyhedron, of which there may, however, be very many.

The intuitive idea behind the simplex method is to move step by step from one corner of the polyhedron to an adjacent corner with a better objective function value, until no better neighbor exists. Since linear optimization is a convex optimization problem, the locally optimal corner found in this way is also globally optimal; that is, there is no other point anywhere in the feasible region with a better objective function value.

Running time

The number of corners of a polyhedron can be exponential in the number of variables and inequalities. For example, the n-dimensional unit cube can be described by 2n linear inequalities, but it has 2^n corners. In 1972, Klee and Minty constructed a distorted unit cube, the so-called Klee-Minty cube, on which the simplex variant presented by Dantzig actually visits all of these corners. Similar examples have since been found for all known row and column selection rules. This means that the simplex algorithm, in all known variants, has exponential running time in the worst case.

In degenerate linear programs, as they frequently occur in practice, so-called cycles can occur, in which the simplex method repeatedly considers the same corner and therefore does not terminate. This can be prevented, however, by applying the lexicographic row selection rule or by deliberate numerical perturbations.

From a theoretical point of view, the simplex method is therefore inferior to, for example, interior-point methods with polynomial running time. From a practical point of view, however, it has proved faster in many cases. The biggest advantage of the simplex algorithm over other methods is that it allows a "warm start" after small changes in the input data in the course of the algorithm: the last computed basis can be taken as a starting point for a few further (primal or dual) iterations, whereas an interior-point method, for example, must start from scratch in such a case. This situation occurs very frequently when many similar linear programs have to be solved in sequence, for example within cutting-plane, branch-and-bound, or branch-and-cut algorithms.

In practice, the running time of the simplex method often depends essentially linearly on the number of rows. In fact, Borgwardt and others showed in the 1980s that cases such as the Klee-Minty cube are extremely rare and that some variants of the simplex algorithm need, under certain assumptions on the input, only polynomial running time on average. It is still unclear, however, whether a variant with polynomial running time for all instances exists.

Mathematical Description

The simplex method consists of two phases:

  • Phase I determines a feasible starting solution or establishes that the problem has no solution,
  • Phase II iteratively improves an existing solution until either no further improvement of the objective function is possible or the unboundedness of the problem is detected.

The Big-M method offers a way to combine the two phases, i.e. to apply the simplex method even when no feasible starting solution is known initially.

Determining a feasible starting solution (Phase I)

Before Phase II can start, a feasible starting solution must first be computed. One possibility is to introduce an artificial variable y_i for each row and then to consider the following linear program:

  max { −(y_1 + … + y_m) : Ax + y = b, x ≥ 0, y ≥ 0 }        (LP1)

In an optimal solution of this auxiliary problem, the artificial variables are as small as possible; since they must always be non-negative, the original problem (LP) has a feasible solution exactly when there is a solution of the auxiliary problem in which no artificial variables are used, i.e. with y = 0.

The auxiliary problem (LP1) has an easy feasible starting solution, namely x = 0, y = b. Moreover, −(y_1 + … + y_m) ≤ 0 holds for every feasible solution of the auxiliary problem, so the objective function is bounded above by 0. Hence this linear program has an optimal solution, which can be determined with Phase II of the simplex method starting from the trivial initial solution. If this yields an optimal solution with y = 0, its x-part is obviously a feasible starting solution for the original problem (LP), with which its Phase II can be started. Otherwise, the original problem has no feasible solution and one can stop.
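The construction of the auxiliary problem can be sketched as follows. This is an illustrative Python fragment under the assumption b ≥ 0; the dense list-of-lists representation and the name phase_one_problem are choices of this sketch, not established API:

```python
def phase_one_problem(A, b):
    """Build the Phase-I auxiliary problem
        max -(y_1 + ... + y_m)   s.t.   A x + I y = b,  x, y >= 0
    for the system A x = b with b >= 0, together with its trivial
    feasible starting point x = 0, y = b."""
    m, n = len(A), len(A[0])
    c_aux = [0.0] * n + [-1.0] * m                 # penalize artificial variables
    A_aux = [row + [1.0 if i == j else 0.0 for j in range(m)]
             for i, row in enumerate(A)]
    start = [0.0] * n + list(b)                    # x = 0, y = b
    return c_aux, A_aux, start

c_aux, A_aux, start = phase_one_problem([[1.0, 2.0]], [5.0])
print(start)  # [0.0, 0.0, 5.0]
```

The original problem is feasible exactly when Phase II drives the auxiliary objective up to 0, i.e. all artificial variables out of the solution.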

Determining an optimal solution (Phase II)

Phase II is an iterative procedure that in each step attempts to construct, from a feasible solution, a new solution with a better objective function value, until this is no longer possible. This essentially corresponds to the repeated solution of a linear system of equations with Gaussian elimination. In the process, the possible unboundedness of the optimization problem can also be detected. To describe this phase, the definition of some terms is necessary.

Bases, basic solutions and corners

In Phase II of the simplex method, the notion of a basis plays an essential role. A basis B is a subset of the columns of A that forms a regular, hence invertible, submatrix A_B; B is written as an index vector of the columns of A. The variables belonging to the basis columns, i.e. contained in B, are called basic variables, the others non-basic variables. The basic solution to a given basis B is the unique vector x whose basic variables are given as x_B = A_B^{−1} b and whose non-basic variables all have the value 0. Such a basic solution is thus a solution of the equation system Ax = b with at most m non-zero entries. In the case that all components of x_B are non-negative, x is also a feasible solution of (LP), i.e. a point of the polyhedron P.

One can show that every feasible basic solution corresponds to a corner (extreme point) of the polyhedron, and vice versa. A basic solution is called non-degenerate if it has exactly m non-zero entries; in this case, there is a unique corresponding basis. If all basic solutions are non-degenerate, there is a bijective mapping between the bases of the matrix A and the vertices of the polyhedron P.

If a basic solution is degenerate, however, it has fewer than m non-zero entries and can belong to several bases. In this case, every basis of the matrix A still defines exactly one corner of the polyhedron, but not conversely. This case occurs when a corner is defined by more than n inequalities, which is almost always so in practical planning problems.

Iterative simplex steps

Phase II tries iteratively, in each exchange step, to construct from an existing basic solution, such as the one determined in Phase I, a new basic solution with an objective function value at least as good, by removing one basic variable from the basis and taking a previous non-basic variable into the basis in its place. This is repeated until no improving exchange is possible any more.

In this phase, at the beginning of each iteration there is a so-called simplex tableau for the current basis, in which the calculations are performed. It essentially corresponds to the linear equation system Ax = b, transformed with respect to the current basis.

Simplex tableau

For the formulation of the simplex tableau, there are different possibilities; the version presented here is based on lecture notes by Martin Grötschel. Each simplex step is based on a feasible basis B. At the beginning of the step, the corresponding simplex tableau has the form

   | c̄^T  0 | −z̄ |
   |  Ā   I |  b̄ |

Here, the columns have been reordered for ease of illustration so that all non-basic columns stand on the left and all basic columns on the right; I is the m-dimensional identity matrix. The m × (n − m) matrix Ā and the remaining entries are defined by

   Ā = A_B^{−1} A_N,   b̄ = A_B^{−1} b,   c̄^T = c_N^T − c_B^T A_B^{−1} A_N,   z̄ = c_B^T A_B^{−1} b.

Here,

  • A_B is the regular submatrix of A formed by the columns of the current basis B,
  • A_N is the (generally non-square) submatrix of A formed by the non-basis columns,
  • c_B is the part of the objective function vector c belonging to the basis columns, and
  • c_N is the part of c belonging to the non-basis columns.

The right-hand side b̄ contains the values of the basic variables in the basic solution belonging to B; z̄ is the objective function value of this basic solution. At the beginning of Phase I, the artificial variables y form a feasible basis with the identity matrix as the associated basis matrix, and x = 0, y = b is the associated basic solution. The right-hand side at the start is therefore simply the vector b.

The entries c̄_j of the vector c̄ are called the reduced costs of the non-basic variables. The value c̄_j indicates by how much the objective function improves if the variable x_j is increased by one unit. If, at the beginning of a simplex step, all reduced cost coefficients are negative or 0, then the current basis and the associated basic solution are optimal, since taking any non-basic variable into the basis could not improve the objective function value. If instead there are non-basic variables with positive reduced costs, one of them is taken into the basis in the next simplex step, and another variable is removed from the basis in exchange. If the new variable can be increased within the constraints, the objective function value improves.

A single simplex step

For the actual simplex step, one selects a column s of a non-basic variable to be taken into the basis and a row r of a basic variable to be removed from the basis. Let ā_ij be the (matrix) entries of the current simplex tableau. Then ā_rs is called the pivot element of the simplex tableau; column s is called the pivot column, row r the pivot row. An exchange step corresponds exactly to a step in solving a linear equation system, in which one solves the pivot row for the variable x_s and then substitutes the result into the remaining equations. In an exchange step, the new simplex tableau is calculated as follows:

Pivot element:

  ā'_rs = 1 / ā_rs

Pivot row, for j ≠ s:

  ā'_rj = ā_rj / ā_rs

Pivot column, for i ≠ r:

  ā'_is = − ā_is / ā_rs

Remaining entries of the matrix and reduced costs:

  ā'_ij = ā_ij − ā_is · ā_rj / ā_rs

Right side:

  b̄'_r = b̄_r / ā_rs,   b̄'_i = b̄_i − ā_is · b̄_r / ā_rs   (i ≠ r)

These computational steps correspond to an application of Gaussian elimination that transforms the pivot column s of the tableau into a unit vector. In this way, the inverse of the new basis matrix is not recomputed from scratch but is constructed by modifying the old basis inverse.
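For a small condensed tableau stored as a list of rows, the four update rules can be written down directly; exact fractions from the Python standard library avoid rounding errors. This is an illustrative sketch, not a complete solver, and the name exchange_step is an assumption of the sketch:

```python
from fractions import Fraction as F

def exchange_step(T, r, s):
    """One exchange step on a condensed simplex tableau T (a list of rows),
    applying the four rules above: the pivot position receives 1/a_rs, the
    pivot row a_rj/a_rs, the pivot column -a_is/a_rs, and every other entry
    a_ij - a_is * a_rj / a_rs.  A right-hand-side column, if present, is
    simply treated as one more non-pivot column."""
    a = T[r][s]
    new = [row[:] for row in T]
    for i in range(len(T)):
        for j in range(len(T[0])):
            if i == r and j == s:
                new[i][j] = 1 / a
            elif i == r:
                new[i][j] = T[r][j] / a
            elif j == s:
                new[i][j] = -T[i][s] / a
            else:
                new[i][j] = T[i][j] - T[i][s] * T[r][j] / a
    return new

# A 2x2 toy tableau with exact fractions; pivot on the entry 2.
print(exchange_step([[F(2), F(3)], [F(4), F(5)]], 0, 0))
# the result is [[1/2, 3/2], [-2, -1]] as Fractions
```

Note that only one division by the pivot element occurs; this is what allows the basis inverse to be updated instead of recomputed.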

A simplex step that starts from a non-degenerate basic solution leads in every case to a new basic solution, and thus also to a new vertex of the polyhedron with a better objective function value. If, however, the basic solution is degenerate, it can happen that the new basis after the simplex step belongs to the same basic solution, and thus to the same corner, so that the objective function value does not change. With a careless choice of pivots, so-called cycles can arise, in which the same bases are visited over and over, so that the algorithm runs in an infinite loop. This can be prevented, for example, by a suitable selection of the pivot row. Another method of dealing with cycles in practice is a deliberate perturbation (numerical disturbance) of the data, which is calculated out again after a few iterations, once the algorithm has left the corner in question.

For the choice of the pivot element, there are usually several possibilities. The method originally proposed by Dantzig selects a column with the largest reduced cost value, and a row such that the basic solution is again feasible (in particular, non-negative) after the simplex step. For a given pivot column s, the pivot row r must be chosen so that the minimum

  min { b̄_i / ā_is : ā_is > 0 }

is attained at it. If all entries ā_is of the pivot column s are non-positive, the optimization problem is unbounded, since one could construct feasible solutions with arbitrarily good objective function value; in this case, one can stop.

Normally, there are both several columns with positive reduced costs and several rows at which the minimum is attained, so there is a genuine choice. Since the choice of the pivot element can have a large influence on the computation time, numerous pivoting strategies that improve on this textbook version have been developed over the last 60 years (see below).
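Dantzig's textbook rule together with the ratio test can be sketched as follows. The tableau layout, with the reduced costs in row 0 and the right-hand side in the last column, as well as the name choose_pivot, are assumptions of this sketch:

```python
def choose_pivot(T):
    """Dantzig's textbook pivot rule on a dense tableau T whose row 0 holds
    the reduced costs and whose last column is the right-hand side.
    Returns (r, s), or None if the tableau is optimal; raises on an
    unbounded problem.  An illustrative sketch, not a robust solver."""
    ncols = len(T[0]) - 1                     # last column is the right side
    # column selection: a column with the largest reduced cost
    s = max(range(ncols), key=lambda j: T[0][j])
    if T[0][s] <= 0:
        return None                           # no positive reduced cost: optimal
    # row selection: minimum ratio b_i / a_is over rows with a_is > 0
    rows = [i for i in range(1, len(T)) if T[i][s] > 0]
    if not rows:
        raise ValueError("unbounded: no positive entry in the pivot column")
    r = min(rows, key=lambda i: T[i][-1] / T[i][s])
    return r, s

T = [[300, 500, 0], [1, 2, 170], [1, 1, 150], [0, 3, 180]]
print(choose_pivot(T))  # (3, 1): largest reduced cost 500, ratios 85, 150, 60
```

Ties in either selection are broken here by the smallest index, which is one arbitrary choice among the strategies discussed below.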

Example calculation

Using a simple example of production planning with two variables, the approach is shown step by step. In this simple case, an optimal solution is easy to find; real problems, by contrast, can easily consist of several hundred thousand variables and constraints, so that the existence of a solution or the value of an optimal solution usually cannot be seen immediately.

A company manufactures two different products, for which three machines A, B, C are available. Machine A has a maximum monthly running time (capacity) of 170 hours, machine B of 150 hours, and machine C of 180 hours. One unit of measure (ME) of product 1 yields a contribution margin of 300 euros, one ME of product 2 of 500 euros. To manufacture 1 ME of product 1, one hour of machine A and one hour of machine B are needed; 1 ME of product 2 occupies in succession two hours of machine A, one hour of machine B, and three hours of machine C.

This yields the following conditions:

  x1 + 2·x2 ≤ 170   (machine A)
  x1 + x2 ≤ 150   (machine B)
  3·x2 ≤ 180   (machine C)
  x1, x2 ≥ 0

The company wants to maximize the total contribution margin:

  max z = 300·x1 + 500·x2.

Here, x1 and x2 denote the quantities produced (in ME) of products 1 and 2, and z is the objective function value.

Any fixed costs are independent of the production volume and can therefore simply be subtracted from the total contribution margin at the end of the calculation to obtain the profit.

Transfer into equations

For the simplex method, the inequalities are first converted into equations. To this end, one introduces so-called slack variables yA, yB and yC, which represent the unused running time of the individual machines. Obviously, the slack variables cannot be negative. The problem can thereby be formulated so that the slack variables are expressed in terms of the original variables. Such a standardized system of equations is called a dictionary in the English literature (a term of V. Chvátal):

Maximize the objective function z = 300·x1 + 500·x2 subject to the constraints:

  yA = 170 − x1 − 2·x2
  yB = 150 − x1 − x2
  yC = 180 − 3·x2
  x1, x2, yA, yB, yC ≥ 0

The above equations can be transferred into the simplex tableau described earlier:

        |  x1   x2  | right side
 -------+-----------+-----------
   -z   | 300  500  |      0
 -------+-----------+-----------
   yA   |   1    2  |    170  = b1
   yB   |   1    1  |    150  = b2
   yC   |   0    3  |    180  = b3

In the header stand the non-basic variables, which have the value 0, while the basic variables are listed in the first column. The top row, the equation for the objective function, contains the objective function coefficients, i.e. 300 and 500. The right side holds the current basic solution, in this case (170, 150, 180). The top entry of the right side is the negative of the objective function value of the current basic solution, in the example the negative of the total contribution margin. It is currently 0, since nothing is produced.

Determining a feasible starting solution (Phase I)

Since the constant terms of the above equations are non-negative, one can set the independent variables of the equation system (the non-basic variables x1 and x2) to zero, read off a trivial feasible solution directly, and start immediately with Phase II:

  x1 = x2 = 0,   yA = 170,   yB = 150,   yC = 180.

The slack variables yA, yB, yC constitute a valid basis whose basis matrix is the identity matrix. The associated basic solution is thus (0, 0, 170, 150, 180). This solution corresponds to the case that nothing is produced (x1 = x2 = 0). The residual capacities of the machines, indicated by the values of the slack variables, equal the total capacities (170, 150 and 180 hours), since the machines are not used.

Improvement of the initial solution (Phase II)

Since the starting solution just computed is unsatisfactory, an attempt is now made in Phase II to find better feasible solutions.

Selection of a pivot column and line

In a simplex iteration, one basic variable is exchanged against one of the independent variables. Eligible entering non-basic variables are those with positive objective function coefficients, since taking them into the basis can improve the objective function value. The entering variable is increased until one or more of the basic variables hit 0. One of these basic variables then leaves the basis and is replaced by the non-basic variable.

Variable x1 has a positive objective function coefficient of 300; by increasing it, the objective function grows, so x1 qualifies as entering (pivot) variable. As long as only the non-basic variable x1 is increased, the increase is limited by the following conditions:

  yA = 170 − x1 ≥ 0,   yB = 150 − x1 ≥ 0.

In other words, x1 ≤ min(170/1, 150/1) = 150.

The first basic variable to hit zero in this case is yB, found via the minimum ratio 150/1; it is the variable that would have to be exchanged against x1. The new value of the objective function would then be 300 · 150 = 45,000.

Variable x2 also has a positive objective function coefficient, namely 500, so it likewise qualifies as entering non-basic variable. Proceeding as above, one obtains

  yA = 170 − 2·x2 ≥ 0,   yB = 150 − x2 ≥ 0,   yC = 180 − 3·x2 ≥ 0,

and thus x2 ≤ min(170/2, 150/1, 180/3) = min(85, 150, 60) = 60.

The minimum ratio 60 belongs to the basic variable yC. The new value of the objective function would be 500 · 60 = 30,000.

Implementation of an exchange step

With the exchange step, the basic variable yB is replaced by the non-basic variable x1. First, the equation for yB is solved for the entering unknown x1:

  x1 = 150 − x2 − yB.

Then x1 is replaced in the remaining equations by the expression just obtained.

This corresponds to the tableau transformation rules described above, applied to the pivot element in the row of yB and the column of x1. The new system of equations is

  z  = 45,000 − 300·yB + 200·x2
  yA = 20 + yB − x2
  x1 = 150 − yB − x2
  yC = 180 − 3·x2

The unknown x1 has thereby moved into the basis; in exchange, yB has become an independent unknown, i.e. a non-basic variable, and now appears in the header.

This solution means that 150 ME of product 1 are produced (the equation solved for x1). Of product 2, nothing is produced (x2 is a non-basic variable). The company thereby achieves a total contribution margin of 45,000 euros. Machine A is idle for 20 hours per month (it runs only 150 of the possible 170 hours). Machine B has no idle time. Machine C stands idle for all 180 hours, i.e. it is not needed at all. This also follows directly from the task: machine C is required only for the production of product 2, and since product 2 is not manufactured, machine C is not needed.

The new simplex tableau belonging to the above system of equations is

        |  yB   x2  | right side
 -------+-----------+-----------
   -z   | -300  200 |  -45,000
 -------+-----------+-----------
   yA   |  -1    1  |     20
   x1   |   1    1  |    150
   yC   |   0    3  |    180

The entries of the new simplex tableau can be calculated using the formulas given above.

Repetition of the simplex steps

Since the objective function row of the resulting simplex tableau still contains a positive coefficient (200 for x2), the solution can be improved further by means of another simplex iteration. In selecting the pivot element, only the unknown x2 comes into question, since only its objective function coefficient is positive. The leaving basic variable is determined by

  yA = 20 − x2 ≥ 0,   x1 = 150 − x2 ≥ 0,   yC = 180 − 3·x2 ≥ 0,

and thus x2 ≤ min(20/1, 150/1, 180/3) = 20.

The row of yA is therefore the only one that comes into question for the pivot. The basic variable yA is replaced by the non-basic variable x2; the pivot element is 1. With the same calculations as above, one obtains the new system of equations

  z  = 49,000 − 100·yB − 200·yA
  x2 = 20 + yB − yA
  x1 = 130 − 2·yB + yA
  yC = 120 − 3·yB + 3·yA

or, as a new simplex tableau:

        |  yB   yA  | right side
 -------+-----------+-----------
   -z   | -100 -200 |  -49,000
 -------+-----------+-----------
   x2   |  -1    1  |     20
   x1   |   2   -1  |    130
   yC   |   3   -3  |    120

Since the objective function row now contains no positive coefficients, an optimal solution has been reached: 130 ME of product 1 and 20 ME of product 2 are produced, for a total contribution margin of 49,000 euros. Machines A and B are fully utilized, while machine C runs for only 60 hours and thus still has an unused capacity of 120 hours.
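The whole iteration can be condensed into a minimal full-tableau simplex and checked against this example. This is a teaching sketch, assuming b ≥ 0 (so the slack basis is a feasible start) and applying no anti-cycling rule, not a robust solver:

```python
from fractions import Fraction as F

def simplex(c, A, b):
    """Minimal dense-tableau simplex for  max c^T x  s.t.  A x <= b, x >= 0
    with b >= 0.  Row 0 holds the reduced costs, the last column the right
    side; exact fractions keep all arithmetic rounding-free."""
    m, n = len(A), len(c)
    T = [[F(cj) for cj in c] + [F(0)] * m + [F(0)]]
    for i in range(m):
        T.append([F(v) for v in A[i]] +
                 [F(1) if j == i else F(0) for j in range(m)] + [F(b[i])])
    basis = [n + i for i in range(m)]          # slack variables start in the base
    while True:
        s = max(range(n + m), key=lambda j: T[0][j])
        if T[0][s] <= 0:
            break                              # no positive reduced cost: optimal
        rows = [i for i in range(1, m + 1) if T[i][s] > 0]
        if not rows:
            raise ValueError("unbounded")
        r = min(rows, key=lambda i: T[i][-1] / T[i][s])
        basis[r - 1] = s
        piv = T[r][s]
        T[r] = [v / piv for v in T[r]]         # scale the pivot row
        for i in range(m + 1):
            if i != r and T[i][s] != 0:
                f = T[i][s]
                T[i] = [a - f * p for a, p in zip(T[i], T[r])]
    x = [F(0)] * (n + m)
    for i, bi in enumerate(basis):
        x[bi] = T[i + 1][-1]
    return x[:n], -T[0][-1]                    # row 0's right side holds -z

x, z = simplex([300, 500], [[1, 2], [1, 1], [0, 3]], [170, 150, 180])
print(x, z)  # x = [130, 20], z = 49000, matching the tableau above
```

With Dantzig's column rule this run takes three pivots, mirroring the alternative calculation in the next section.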

Entries in fraction form

The above example was solved in algebraic notation by solving for the basic variables in terms of the remaining independent variables of the equation system. To avoid rounding errors, one can work with fractions and choose one common denominator for the whole system of equations (that this denominator was always 1 above is a coincidence). The common denominator for each step can be found without additionally analyzing the entries: if the starting system is integral (which can normally be achieved by scaling), the following rule applies:

  • The numerator of the selected pivot element is a common denominator for the following system.

If we multiply the new tableau entries by this common denominator, we always obtain integer numerators. When computing these numerators for the new tableau entries, the division by the old denominator therefore always comes out evenly; the result is guaranteed to be an integer.
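The integer-numerator rule can be checked directly with exact fractions from the Python standard library. The tableau below is the initial tableau of the example; the pivot element 2 (row yA, column x2) is chosen deliberately so that non-integer entries actually appear after one elimination step:

```python
from fractions import Fraction as F

# Initial tableau of the example (objective row first), with exact fractions.
T = [[F(300), F(500), F(0)],
     [F(1),   F(2),   F(170)],
     [F(1),   F(1),   F(150)],
     [F(0),   F(3),   F(180)]]
r, s = 1, 1                       # pivot element 2 in row yA, column x2
p = T[r][s]
new = [[T[r][j] / p if i == r else T[i][j] - T[i][s] * T[r][j] / p
        for j in range(3)] for i in range(4)]
# the numerator of the pivot element (here 2) is a common denominator:
print(all((v * p).denominator == 1 for row in new for v in row))  # True
```

Entries such as 1/2 and −3/2 do occur in the new system, but multiplying every entry by the pivot numerator 2 yields integers throughout, exactly as the rule states.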

As an example of this approach, we solve the same task as above once more, this time with Dantzig's pivot selection, which chooses as entering pivot variable the one with the largest objective function coefficient:

By increasing the independent variable x2, the objective function can be increased; the first basic variable to hit zero in the process is yC. Consequently, one solves for x2 in place of yC and obtains the following system (with common denominator 3, the numerator of the pivot element):

  z  = 30,000 + 300·x1 − (500/3)·yC
  yA = 50 − x1 + (2/3)·yC
  yB = 90 − x1 + (1/3)·yC
  x2 = 60 − (1/3)·yC

By increasing the independent variable x1, the objective function is increased further; the first basic variable pushed to zero is now yA. Consequently, one solves for x1 in place of yA and obtains the following system:

  z  = 45,000 + (100/3)·yC − 300·yA
  x1 = 50 + (2/3)·yC − yA
  yB = 40 − (1/3)·yC + yA
  x2 = 60 − (1/3)·yC

By increasing the independent variable yC, the objective function is increased once more; the first basic variable pushed to zero is now yB. Consequently, one solves for yC in place of yB and obtains the following system:

  z  = 49,000 − 100·yB − 200·yA
  yC = 120 − 3·yB + 3·yA
  x1 = 130 − 2·yB + yA
  x2 = 20 + yB − yA

The objective function now has the value 49,000 and can no longer be increased; the solution is the same as above. We note incidentally that Dantzig's pivot selection needed one step more than the alternative chosen initially to arrive at the solution.

Dual information in the tableau

From the simplex tableau, information about the solution of the dual linear program belonging to (LP) can also be read off. For a given basis B, one can compute, in addition to the associated primal solution x in the right-hand column of the tableau, also a dual solution y. These two vectors always satisfy the condition

  c^T x = b^T y,

which is necessary for optimal solutions. By construction, the vector x remains primal feasible throughout the individual simplex iterations, while the vector y may violate conditions of the dual LP. If, for some basis, both the associated primal solution x and the corresponding dual solution y are feasible (such a basis is called primal and dual feasible), then both are optimal for the primal and the dual linear program, respectively. One can show that this is the case exactly when the reduced cost vector is non-positive. Although the goal of the simplex method is actually only the computation of an optimal primal solution, an optimal dual solution falls out along the way at the end.
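For the production example above, this can be checked concretely. The dual prices can be read off the objective row of the final tableau as the negated coefficients of the slack variables, y = (200, 100, 0); the sketch below verifies c^T x = b^T y and dual feasibility A^T y ≥ c with the numbers taken from the example:

```python
# Data of the production example (machines A, B, C; products 1, 2).
A = [[1, 2], [1, 1], [0, 3]]
b = [170, 150, 180]
c = [300, 500]
x = [130, 20]                     # optimal primal solution from Phase II
y = [200, 100, 0]                 # dual prices (shadow prices of the machines)

primal = sum(ci * xi for ci, xi in zip(c, x))
dual = sum(bi * yi for bi, yi in zip(b, y))
print(primal, dual)               # 49000 49000: strong duality holds
# dual feasibility: every column satisfies sum_i A[i][j] * y[i] >= c[j]
print(all(sum(A[i][j] * y[i] for i in range(3)) >= c[j] for j in range(2)))
```

The shadow price interpretation is immediate: one extra hour on machine A would be worth 200 euros of contribution margin, one on machine B 100 euros, while capacity on the underused machine C is worth nothing.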

An example calculation for duality can be found in the article Pivot method.

Variations and improvements

In the form presented here, which essentially corresponds to Dantzig's original version, the simplex algorithm is no longer used in practical implementations today. Over time, variants of the simplex method have been developed that greatly reduce computation time and memory requirements when solving linear programs compared with the standard method, and that are numerically considerably more stable. The most important improvements, which are standard in good LP solvers today, are briefly presented here.

Selection of the pivot element

In general, there are some liberties in selecting the pivot element. The pivot column and the pivot row can be chosen arbitrarily, under the conditions that

  • the pivot column has positive reduced costs, and
  • the pivot row again leads to a feasible basic solution.

The choice of the pivot element frequently has a great influence on the number of iterations and on the numerical stability of the method, especially for degenerate problems. The development of better pivoting strategies, above all for column selection, has brought great progress in accelerating the solution process over time.

Column selection

For the selection of the pivot column (called pricing), there are various strategies that require different computational effort and work differently well depending on the input data:

  • Select the first suitable column. This is the simplest option, but often leads to very many iterations and is therefore not used in practice.
  • The rule originally proposed by Dantzig selects a column with the largest reduced cost value. With many variables, this variant can require a lot of computing time.
  • Steepest-edge pricing is a combination of column and row selection that aims for the biggest progress in the objective function. This variant is expensive in each iteration, but often results in few iterations.
  • Devex pricing, proposed by Paula Harris in 1974, is an approximation of steepest-edge pricing and one of the standard methods in today's LP solvers. Here, the columns of the matrix and the reduced costs are scaled to a uniform norm before the selection, to increase the meaningfulness of the reduced costs.
  • In partial pricing, the set of variables is divided into blocks, and one of the above methods is applied to one block at a time. Only when no suitable variable is found there is the next block considered.
  • Multiple pricing selects a suitable set of variables once, which are then preferentially considered as candidates in the following iterations. Only when none of these candidates has positive reduced cost any more are the remaining variables considered.
  • Multiple partial pricing is a combination of the last two variants, in which new candidates are always determined from only a part of all available variables. Besides Devex pricing, this strategy is one of today's standard strategies.
  • In hybrid pricing, several strategies are used alternately depending on the situation. Some LP solvers additionally apply numerical criteria in the column selection to limit problems due to rounding errors.
  • The rule of R. G. Bland first selects, among all eligible columns, the one with the smallest index. If several rows are then available for selection, the one with the smallest index is likewise chosen. A proof that this rule prevents cycling is given, for example, in the literature.
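The contrast between Dantzig's rule and Bland's rule can be sketched in a few lines. This is a minimal illustration operating on a bare vector of reduced costs; the function names are ours, not taken from any real solver:

```python
def dantzig_column(reduced_costs):
    """Dantzig's rule: pick a column with the largest positive reduced cost."""
    best, best_j = 0.0, None
    for j, c in enumerate(reduced_costs):
        if c > best:
            best, best_j = c, j
    return best_j  # None signals that the current basis is optimal

def bland_column(reduced_costs, eps=1e-9):
    """Bland's rule: pick the eligible column with the smallest index."""
    for j, c in enumerate(reduced_costs):
        if c > eps:
            return j
    return None

costs = [0.0, 2.5, -1.0, 4.0, 0.5]
print(dantzig_column(costs))  # 3: the column with reduced cost 4.0
print(bland_column(costs))    # 1: the first column with positive reduced cost
```

Bland's rule scans less and guarantees termination, while Dantzig's rule tends to make faster progress per iteration.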

Row selection

If there is more than one suitable pivot row, one has the choice between several variants:

  • Choose the first suitable row. Although this variant is very fast per iteration, it often leads overall to many iterations and is numerically unstable.
  • The lexicographic selection rule chooses, among all eligible rows, the (unique) lexicographically smallest one. This rule is not particularly good in terms of speed, but it prevents a tableau from being visited more than once and the algorithm from cycling. For this reason, it can be used, for example, for a few iterations to get away from a degenerate basic solution before switching back to another selection rule.
  • The Harris ratio test, proposed by Paula Harris in 1973 and now one of the standard methods, allows a slight infeasibility of the new solution for reasons of numerical stability.
  • Random selection among the suitable rows. Cycles become very unlikely with this rule, but not impossible.
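The feasibility condition behind all of these rules is the classic minimum-ratio test, sketched here in a minimal, illustrative form (the function name and calling convention are ours):

```python
def ratio_test(column, rhs, eps=1e-9):
    """Minimum-ratio test: among rows with a positive entry in the pivot
    column, choose one minimizing rhs[i] / column[i], so that the next
    basic solution stays feasible. Returns None if no entry is positive,
    which signals an unbounded problem."""
    best_ratio, best_i = None, None
    for i, (a, b) in enumerate(zip(column, rhs)):
        if a > eps:
            r = b / a
            if best_ratio is None or r < best_ratio:
                best_ratio, best_i = r, i
    return best_i

# Ratios: row 0 gives 6/2 = 3, row 1 is skipped (negative entry),
# row 2 gives 4/4 = 1, so row 2 limits the step and is chosen.
print(ratio_test([2.0, -1.0, 4.0], [6.0, 3.0, 4.0]))  # 2
```

The rules listed above differ in how ties (several rows attaining the minimum ratio) are broken, which is exactly where degeneracy and cycling come into play.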

Variable bounds

In practice, upper and lower bounds on the variables must often be taken into account. This is especially true when linear programs are solved as subproblems, for example within a branch-and-cut process. For such simple types of inequalities as variable bounds, there is the so-called bounded simplex method, in which the bounds are taken into account directly in the individual simplex steps. Unlike in the standard version, in which a non-basic variable always has the value 0, a non-basic variable can now also take the value of one of its bounds. This direct handling of the bounds results in a smaller number of rows, and thus a smaller basis, compared to the obvious alternative of writing the variable bounds as inequalities into the LP.

Dual simplex method

In addition to an optimal primal solution, that is, a solution of the given linear program, the primal simplex method described above also computes an optimal dual solution, that is, a solution of the corresponding dual linear program. Since the dual LP arises from the primal one essentially by interchanging rows and columns, the simplex method can also be used to solve the dual problem, by using the transpose of the above Tucker tableau and executing the described algorithm with rows and columns interchanged. This variant is called the dual simplex method. It was first described in 1954 by Carlton Lemke and E. M. L. Beale, but it has only been an integral part of commercial LP solvers since the early 1990s, when successful pivoting strategies for it were developed, such as dual steepest-edge pricing in the version of Forrest and Goldfarb from 1992.

In practice, the running time of the simplex method often depends mainly on the number of rows in the LP. This is especially true for sparse matrices, as they normally occur in practice. If a linear program has many conditions but only a few variables, it can therefore be worthwhile to solve the dual LP instead, in which the numbers of rows and columns are swapped, and thereby also obtain an optimal primal solution, which is included in the calculation as a byproduct.

Another great advantage of the primal-dual approach is that primal and dual simplex steps can be performed alternately in the same tableau. Rather than transposing the tableau explicitly, row and column operations are simply interchanged in the algorithm, depending on whether the primal or the dual algorithm is currently being executed. In contrast to the primal simplex method, which always maintains a feasible primal solution and reaches a feasible dual solution only at the end, the situation is reversed in the dual simplex method. Thus, if the primal solution associated with a basis is infeasible but the associated dual solution is feasible, one can try to return to a primal feasible solution by dual simplex steps. This can be exploited in several contexts, which are described briefly here.

  • In the course of cutting-plane or branch-and-cut procedures, a variable bound of a just-solved LP is very often changed, or an inequality is added that is not satisfied by the old solution, and the modified LP is then solved again. Since the old basic solution is now no longer feasible, one of the basic conditions of the primal simplex tableau is violated, so the primal simplex method must start from scratch to solve the new LP. If nothing has been changed in the objective function, however, the old dual solution remains feasible, so that, starting with some dual simplex steps from the old basis, an optimal solution for the modified LP is usually found after a few iterations. For large LPs, this difference is often very noticeable in the total running time.
  • If numerical difficulties arise in the course of the algorithm, or if there is no progress in the objective function for a very long time, it can be useful to temporarily allow a minor violation of the variable bounds in order to move out of a critical area of the polytope. This violation can then be removed again with some dual simplex steps.
  • If the linear program has certain structures, a primal infeasible but dual feasible starting basis can be specified directly, without having to compute it. In such a case, one can start from that basis directly into phase II with dual simplex steps and save phase I.

Revised simplex method

Although linear programs occurring in practice can have hundreds of thousands of variables, the simplex method only ever works with a small part of them, namely the basic variables. The non-basic columns must be considered only during column selection, and there it is often sufficient, depending on the pivot strategy, to consider only a part of them. The revised simplex method takes advantage of this fact by storing only the current basis matrix or its inverse, along with some additional information from which the current basis matrix and its inverse can be computed. It thus gets by with far less storage space than the original tableau method. This method is the foundation of today's good LP solvers.
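The core idea can be sketched on a tiny example. This is a deliberately minimal illustration, not a full solver: only the basis inverse is stored, and a non-basic column is priced out on demand instead of keeping the full tableau up to date. All names and the example data are ours:

```python
def mat_vec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def invert_2x2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# A tiny LP in equality form: columns 0 and 1 are basic, 2 and 3 non-basic.
A = [[1.0, 0.0, 2.0, 1.0],
     [0.0, 1.0, 1.0, 3.0]]
c = [0.0, 0.0, 3.0, 5.0]
basis = [0, 1]

B = [[A[i][j] for j in basis] for i in range(2)]
B_inv = invert_2x2(B)  # the only matrix the revised method stores

# Dual prices y = (B^{-1})^T c_B, computed once per iteration.
y = mat_vec(list(zip(*B_inv)), [c[j] for j in basis])

def reduced_cost(j):
    """Price out one non-basic column only when the pricing rule asks for it."""
    col = [A[0][j], A[1][j]]
    return c[j] - sum(yi * a for yi, a in zip(y, col))

print(reduced_cost(2), reduced_cost(3))  # 3.0 5.0
```

Because columns are touched only when priced, the storage and per-iteration work scale with the basis size rather than with the full tableau.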

In the first commercial implementation of this method by William Orchard-Hays in 1954, the basis inverse was completely recalculated in each step, which was a challenge with the punched-card machines of the time. A little later, he implemented the so-called product form of the inverse, based on an idea by A. Orden. Here, only the inverse of the first basis was stored, together with update vectors from which the current inverse could be computed at each step. The record at the time was a linear program with 71 variables and 26 inequalities that was solved to optimality within eight hours. In implementations used today, the basis matrix itself is instead stored by means of a special form of LU decomposition, since the inverse of a sparse matrix is usually not sparse.

LU decompositions

In the course of the simplex method, very many similar systems of linear equations must be solved. Since large linear systems of equations frequently arise in other contexts as well (for example, in the solution of differential equations), methods for the LU decomposition of a matrix were developed in numerical linear algebra in the 1970s. The matrix is decomposed into a lower and an upper triangular matrix, which then allows many systems of equations with the same matrix but different right-hand sides to be solved efficiently.
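The pay-off described here, factoring once and then solving cheaply for many right-hand sides, can be shown with a plain Doolittle decomposition. This sketch deliberately omits pivoting and is for illustration only, not for numerical use:

```python
def lu_decompose(A):
    """Doolittle LU decomposition without pivoting: A = L @ U."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            f = U[i][k] / U[k][k]
            L[i][k] = f
            for j in range(k, n):
                U[i][j] -= f * U[k][j]
    return L, U

def lu_solve(L, U, b):
    """Solve L U x = b by forward and back substitution."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):               # forward substitution: L y = b
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in reversed(range(n)):     # back substitution: U x = y
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu_decompose(A)                 # factor once ...
for b in ([10.0, 12.0], [7.0, 9.0]):   # ... reuse for several right-hand sides
    print(lu_solve(L, U, b))
```

The expensive triangular factorization is done once; each additional right-hand side costs only two cheap substitution sweeps, which is exactly the access pattern of the simplex method.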

In the case of the simplex method, this technique is applied to the basis matrix. Since the basis matrix changes constantly during the simplex algorithm, its LU decomposition must be adjusted in each simplex step, for example using the Forrest-Tomlin update named after its inventors. This adjustment requires very little effort compared to the computation of a completely new LU decomposition, because the basis matrix changes in only one column in each iteration. In practical implementations, a completely new LU decomposition of the current matrix is nevertheless usually computed every few hundred iterations for numerical reasons. Often the matrix can already be brought largely into triangular form by a clever reordering of rows and columns, so that only a small part of the basis matrix, the so-called nucleus, actually has to be factored. For the computation of the LU decomposition itself there are again several variants, of which mainly the LU decomposition with dynamic Markowitz pivoting has prevailed in practice, since it largely preserves the sparsity of the matrix during the decomposition. This procedure has led to a large reduction in computation time, in particular for large linear programs, which are almost always sparse.

Preprocessing

In the last decade, great progress has been made in solving linear programs thanks to improved preprocessing. For example, numerical problems often arise when both very large and very small numbers occur in a linear system of equations. In some cases, this can be avoided by preconditioning, e.g. equilibration of the equation system, before starting the actual algorithm.
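One common equilibration step is to scale each row so that its largest absolute entry becomes 1, which shrinks the spread between very large and very small coefficients. A minimal sketch (the function name is ours, and real solvers typically also scale columns and iterate):

```python
def equilibrate_rows(A, b):
    """Scale each row of A x = b by the row's largest absolute entry."""
    A_scaled, b_scaled = [], []
    for row, rhs in zip(A, b):
        s = max(abs(x) for x in row) or 1.0  # guard against all-zero rows
        A_scaled.append([x / s for x in row])
        b_scaled.append(rhs / s)
    return A_scaled, b_scaled

# Coefficients spanning ten orders of magnitude ...
A = [[1e6, 2e6], [3e-4, 1e-4]]
b = [4e6, 5e-4]

# ... end up with all entries between roughly 1/3 and 1 in magnitude.
A2, b2 = equilibrate_rows(A, b)
print(A2)  # [[0.5, 1.0], [1.0, 0.333...]]
print(b2)  # [2.0, 1.666...]
```

After scaling, the rows are numerically comparable, which reduces rounding-error problems in the subsequent factorizations.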

Another class of preprocessing methods attempts to detect redundancies or tighten variable bounds in order to reduce the number of rows and columns in the linear system of equations:

  • When a row is a linear combination of the other rows, it is redundant and can be removed. However, this is generally difficult to detect with reasonable effort, except for the special case in which one row is a scalar multiple of another.
  • Very often, the values of the variables are confined to a certain range by the conditions, or determined by other variables. For example, an equation together with nonnegativity conditions can confine both of its variables to a bounded interval. Knowing such a bound can speed up the solution process. Moreover, such an equation fixes the value of one variable through the value of the other; that variable can then be replaced in all conditions in which it occurs and removed from the LP.
  • When several variables have been confined to certain ranges of values, it may happen that some inequalities are always satisfied or can never be satisfied. In the first case, the inequality can be removed; in the second case, the infeasibility of the LP is immediately proven and the algorithm can stop.
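The last check can be sketched directly: given bounds on the variables, compute the smallest and largest value the left-hand side of an inequality a·x ≤ rhs can attain, and classify the row accordingly. A minimal illustration with hypothetical names:

```python
def classify(coeffs, rhs, bounds):
    """Classify the inequality sum(coeffs[i] * x[i]) <= rhs given
    per-variable bounds [(lo, hi), ...] on the x[i]."""
    lo = sum(a * (b[0] if a > 0 else b[1]) for a, b in zip(coeffs, bounds))
    hi = sum(a * (b[1] if a > 0 else b[0]) for a, b in zip(coeffs, bounds))
    if hi <= rhs:
        return "redundant"    # always satisfied: the row can be removed
    if lo > rhs:
        return "infeasible"   # never satisfiable: the whole LP is infeasible
    return "keep"

bounds = [(0.0, 2.0), (0.0, 3.0)]            # 0 <= x1 <= 2, 0 <= x2 <= 3
print(classify([1.0, 1.0], 10.0, bounds))    # redundant  (lhs is at most 5)
print(classify([1.0, 1.0], -1.0, bounds))    # infeasible (lhs is at least 0)
print(classify([1.0, 1.0], 4.0, bounds))     # keep
```

Real presolvers iterate such checks, since removing one row or tightening one bound can make further rows classifiable.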

Using such methods, the number of rows and columns can sometimes be reduced significantly, even for very large LPs, which is reflected in much shorter solution times.
