Greedy algorithm

Greedy algorithms form a special class of algorithms in computer science. They are characterized by incrementally selecting the successor state that promises the greatest profit or the best result at the time of the choice (e.g. gradient methods). An evaluation function is often used to make this selection among the possible successor states.

Greedy algorithms are often fast, but they do not solve many problems optimally.
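As a small illustration of both points, consider making change greedily: at each step the evaluation function simply picks the largest coin that still fits. The following Python sketch (the function name `greedy_change` and the example denominations are chosen here purely for illustration) is fast, but not always optimal:

    def greedy_change(amount, denominations):
        """Greedily pay `amount`, always using the largest coin that still fits."""
        coins = []
        for coin in sorted(denominations, reverse=True):
            while amount >= coin:
                amount -= coin
                coins.append(coin)
        return coins

    print(greedy_change(14, [1, 2, 5, 10]))  # [10, 2, 2] -- optimal for this canonical coin system
    print(greedy_change(6, [1, 3, 4]))       # [4, 1, 1] -- 3 coins, although 3 + 3 needs only 2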

Optimization problems on independence systems

For an optimization problem on an independence system, a greedy algorithm finds the optimal solution for all weight functions if and only if the admissible solutions are the independent sets of a matroid. Otherwise, the algorithm may lead only to a local optimum. Examples are the knapsack problem and the traveling salesman problem; for these problems it is much harder to find the optimal solution, because they are NP-complete.
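For example, an obvious greedy strategy for the 0/1 knapsack problem is to take items in order of decreasing value density; it can get stuck in a local optimum. A minimal sketch (the instance and the name `greedy_knapsack` are chosen here for illustration):

    def greedy_knapsack(items, capacity):
        """Greedy 0/1 knapsack: take items by decreasing value/weight ratio.
        items is a list of (value, weight) pairs."""
        total_value, chosen = 0, []
        for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
            if weight <= capacity:
                chosen.append((value, weight))
                capacity -= weight
                total_value += value
        return total_value, chosen

    items = [(60, 6), (45, 5), (45, 5)]          # (value, weight)
    print(greedy_knapsack(items, capacity=10))   # (60, [(60, 6)]) -- the optimum 45 + 45 = 90 is missed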

Algorithm for the maximization problem

Given a matroid $(E, U)$ with a weight function $w\colon E \to \mathbb{R}^+_0$, the following algorithm finds a heaviest independent set, i.e. it determines an $A \in U$ that maximizes $w(A) = \sum_{a \in A} w(a)$:

    // arrange the elements of E in order of decreasing weight
    sort E so that w(e_1) >= w(e_2) >= ... >= w(e_n);
    A := ∅;
    for (k = 1; k <= n; k++) {
        if (A ∪ {e_k} ∈ U)
            A := A ∪ {e_k};
    }
    // output the solution A
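A minimal Python sketch of this procedure, with the independence test supplied as an oracle function (the name `greedy_max` and the rank-2 uniform matroid used in the demo are assumptions made here for illustration):

    def greedy_max(elements, weight, is_independent):
        """Matroid greedy: build a maximum-weight independent set.
        is_independent(A) is the independence oracle of the matroid."""
        A = set()
        for e in sorted(elements, key=weight, reverse=True):
            if weight(e) < 0:
                continue                  # negative weights never help (see generalizability below)
            if is_independent(A | {e}):
                A.add(e)
        return A

    # Demo: uniform matroid of rank 2 -- a set is independent iff it has at most 2 elements.
    E = {'a': 7, 'b': 5, 'c': 9, 'd': 2}  # element -> weight
    print(greedy_max(E, weight=E.get, is_independent=lambda A: len(A) <= 2))
    # -> {'c', 'a'} with total weight 16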

Generalizability

The algorithm also solves the maximization and minimization problems for arbitrary weight functions $w\colon E \to \mathbb{R}$: since no element with negative weight occurs in a solution of the maximization problem, such elements can safely be ignored by the algorithm. The problem of finding a minimum-weight independent set can be reduced to the maximization problem by replacing the weights with their additive inverses.
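In terms of the `greedy_max` sketch above, this reduction amounts to flipping the sign of the weight function (the names and the rank-2 uniform matroid are, as before, illustrative assumptions):

    # Minimum-weight independent set for an arbitrary (mixed-sign) weight function:
    # negate the weights and run the maximization algorithm.
    W = {'a': 3, 'b': -4, 'c': -1, 'd': 2}
    print(greedy_max(W, weight=lambda e: -W[e], is_independent=lambda A: len(A) <= 2))
    # -> {'b', 'c'} with total weight -5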

Running time

If $L$ is the running time of testing a set for independence, the running time of the algorithm is given by $O(n \log n + n \cdot L)$. In the best case it is thus dominated by the sorting step. If the independence test is NP-complete, however, the algorithm is practically useless.

Algorithm for the minimization problem

Given a matroid $(E, U)$ with a weight function $w\colon E \to \mathbb{R}^+_0$, the following algorithm finds a lightest basis, i.e. among the independent sets of maximum cardinality it determines one $A$ that minimizes $w(A) = \sum_{a \in A} w(a)$ (a Python sketch follows the list):

  • Sort $E$ so that $w(e_1) \ge w(e_2) \ge \dots \ge w(e_n)$.
  • Set $T := E$.
  • For each $i$ from 1 to $n$: if $T \setminus \{e_i\}$ still contains a basis, remove $e_i$ from $T$.
  • Output $T$.
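A minimal Python sketch of this dual procedure, with the test "does $T$ still contain a basis" supplied as an oracle (the name `greedy_min_basis` and the rank-2 uniform matroid in the demo are assumptions made here for illustration):

    def greedy_min_basis(elements, weight, contains_basis):
        """Dual matroid greedy: delete heavy elements while a basis remains.
        contains_basis(T) tests whether T is still a superset of a basis."""
        T = set(elements)
        for e in sorted(elements, key=weight, reverse=True):   # heaviest first
            if contains_basis(T - {e}):
                T.remove(e)
        return T

    # Demo: uniform matroid of rank 2 -- T contains a basis iff it has at least 2 elements.
    E = {'a': 7, 'b': 5, 'c': 9, 'd': 2}  # element -> weight
    print(greedy_min_basis(E, weight=E.get, contains_basis=lambda T: len(T) >= 2))
    # -> {'b', 'd'}, the two lightest elements

With a graphic matroid, where contains_basis checks whether the remaining edge set still connects the graph, this is exactly the reverse-delete algorithm for minimum spanning trees.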

Comparison with the maximization problem and generalizability

Since we assume positive weights, the problem is equivalent to searching for a lightest superset of a basis. This problem is dual to the maximization problem and can, analogously, be generalized to arbitrary weight functions and the corresponding maximization problem.

Running time

If $L$ is the running time of testing whether a subset of $E$ is a superset of a basis, the running time of the algorithm is given by $O(n \log n + n \cdot L)$. In the best case it is thus dominated by the sorting step. If the basis-superset test is NP-complete, however, the algorithm is practically useless.

Examples

  • Kruskal's algorithm for finding a minimum spanning tree
  • Prim's algorithm for finding a minimum spanning tree (its underlying set system, the set of trees, is however not an independence system)
  • Dijkstra's algorithm for finding a shortest path
  • The algorithm of successive inclusion for solving combinatorial optimization problems