A randomized algorithm (also stochastic or probabilistic algorithm) attempts to arrive at a good or approximately correct result on average by making random choices for intermediate results. It is thus the counterpart of the deterministic algorithm. A randomized algorithm is not required to always find a correct solution efficiently. Randomized algorithms are often easier to understand, easier to implement, and more efficient than deterministic algorithms for the same problem. An example that illustrates this is the AKS primality test: it is deterministic, but much less efficient and much harder to implement than, for example, the randomized primality test of Solovay and Strassen.
Las Vegas
Randomized algorithms that never deliver a wrong result are also referred to as Las Vegas algorithms. There are two variants of Las Vegas algorithms:
- Algorithms that always deliver the correct result, but whose computation time (in case of an unlucky choice of the random bits) can become very large. In many cases the algorithm is fast only in expectation, not in the worst case. An example is the variant of quicksort in which the pivot element is chosen at random: the expected computation time is O(n log n), whereas for an unfavorable choice of the random bits Θ(n²) steps are needed.
- Algorithms that may fail (give up), but must never deliver a wrong result. The quality of this type of algorithm can be described by an upper bound on the failure probability.
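The randomized quicksort variant described above can be sketched as follows (a minimal Python illustration; the function name and the list-comprehension partitioning are our choices, not part of the original description):

```python
import random

def quicksort(xs):
    """Las Vegas quicksort: the result is always correct; only the
    running time is random.  The random pivot makes the expected
    running time O(n log n) for every input order."""
    if len(xs) <= 1:
        return xs
    pivot = random.choice(xs)              # the random intermediate result
    smaller = [x for x in xs if x < pivot]
    equal   = [x for x in xs if x == pivot]
    larger  = [x for x in xs if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

Whatever random choices are made, the output is the sorted list; only the number of comparisons varies between runs.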
Monte Carlo
Randomized algorithms that are allowed to deliver a wrong result are also referred to as Monte Carlo algorithms. The quality of Monte Carlo algorithms can be described by an upper bound on the error probability.
One speaks of a randomized algorithm only when the expected computation time and/or the error or failure probability can be estimated. Algorithms for which it is merely intuitively plausible that they deliver good results, or for which this has only been demonstrated by experiments on typical inputs, are referred to as heuristic algorithms.
In the analysis of the expected computation time and/or the error probability, one generally assumes that the random bits are generated independently. In implementations, pseudo-random numbers are usually used instead of true random bits, and these are of course no longer independent. It is therefore possible that implementations behave worse than the analysis would lead one to expect.
Error types of Monte Carlo algorithms for decision problems
In decision problems (problems that can be described by yes–no questions), one distinguishes between one-sided and two-sided error:
- With two-sided error, inputs for which the correct answer is yes may be rejected, and inputs for which the correct answer is no may be accepted. The error probability in this case must reasonably be smaller than 1/2, since an error probability of 1/2 can be achieved, regardless of the input, by a simple coin toss (which is obviously not a sensible solution to the problem under consideration).
- With one-sided error, inputs for which the correct answer is yes may be rejected, whereas inputs for which the correct answer is no must always be rejected. Here, error probabilities smaller than 1 are sensible.
Relationships between Las Vegas and Monte Carlo algorithms
Las Vegas algorithms can always be transformed into Monte Carlo algorithms: the output "?" (failure) can simply be interpreted as an incorrect output. If only an upper bound t on the expected computation time of the Las Vegas algorithm is known, one can abort the algorithm after c · t steps and output "?". By Markov's inequality, the probability that this happens is bounded by 1/c.
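This conversion can be sketched with a toy Las Vegas algorithm (the guessing procedure and all names here are our own illustration, not from the original text): the wrapper imposes a step budget of c · t and outputs failure (`None`, standing for "?") when the budget is exhausted.

```python
import random

def las_vegas_search(target, n, budget):
    """Toy Las Vegas algorithm: repeatedly guess a number in [0, n)
    until it hits `target`.  The expected number of guesses is t = n.
    Aborting after `budget` = c*t guesses and returning None ('?')
    turns it into a Monte Carlo algorithm: by Markov's inequality the
    failure probability is at most 1/c."""
    for _ in range(budget):
        guess = random.randrange(n)
        if guess == target:
            return guess
    return None  # '?': bounded running time, but the run may fail

# Expected time t = n = 20; with c = 10 the budget is c*t = 200 steps,
# so the wrapped algorithm fails with probability at most 1/10.
result = las_vegas_search(target=7, n=20, budget=200)
```

The output is either the correct answer or an explicit "?", never a wrong answer, which is exactly the failure-type (one-sided) behavior described above.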
Since Monte Carlo algorithms may output wrong solutions, and these wrong solutions need not be specially marked, it is more difficult to transform Monte Carlo algorithms into Las Vegas algorithms. This is possible if one additionally has a verifier for the solution, i.e., an algorithm that can test, for a proposed solution, whether the proposal is correct. A Las Vegas algorithm is then obtained by running the given Monte Carlo algorithm, checking with the verifier whether the computed solution is correct, and iterating this process until a correct solution has been computed. The worst-case computation time of this approach is indeed not bounded above, but the expected number of iterations can be bounded above. If no verifier is available, it is in general not clear how to construct a Las Vegas algorithm from a Monte Carlo algorithm.
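The Monte-Carlo-plus-verifier loop can be sketched with a toy problem (finding an index of the maximum of a list); the Monte Carlo guesser, the verifier, and all names are hypothetical illustrations of the scheme, not part of the original text:

```python
import random

def monte_carlo_guess(xs):
    """Toy Monte Carlo algorithm: sample three positions of `xs` and
    return the index of the largest sampled value.  This may well be a
    wrong answer (the true maximum may not be sampled)."""
    sample = [random.randrange(len(xs)) for _ in range(3)]
    return max(sample, key=lambda i: xs[i])

def verifier(xs, i):
    """Verifier: tests whether the proposed solution is correct,
    i.e. whether xs[i] really is the maximum."""
    return xs[i] == max(xs)

def las_vegas_max_index(xs):
    """Las Vegas algorithm built from the Monte Carlo algorithm plus
    the verifier: iterate until the verifier accepts.  The result is
    always correct; only the number of iterations is random."""
    while True:
        i = monte_carlo_guess(xs)
        if verifier(xs, i):
            return i

xs = [2, 9, 4, 9, 1]
assert xs[las_vegas_max_index(xs)] == 9   # always the true maximum
```

Each iteration succeeds with constant probability, so the expected number of iterations is bounded, exactly as claimed above; without the verifier there would be no way to tell a wrong guess from a correct one.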
The class PP
Type: Monte Carlo algorithm. The class PP (probabilistic polynomial time) contains all languages L for which there is a probabilistic Turing machine M that is polynomially time-bounded and for which, for all inputs w:
w ∈ L: Pr[M accepts w] > 1/2
w ∉ L: Pr[M accepts w] ≤ 1/2
In this class there is a two-sided error. The probability that the computed result is correct may lie only barely above 1/2.
The class BPP
Type: Monte Carlo algorithm. The class BPP (bounded-error probabilistic polynomial time) contains all languages L for which there is a probabilistic Turing machine M that is polynomially time-bounded and for which, for all inputs w:
w ∈ L: Pr[M accepts w] ≥ 2/3
w ∉ L: Pr[M accepts w] ≤ 1/3
In this class there is a two-sided error. The probability that the computed result is correct is bounded away from 1/2 by a known constant (here it is at least 2/3).
The class RP
Type: Monte Carlo algorithm. The class RP (randomized polynomial time) contains all languages L for which there is a probabilistic Turing machine M that is polynomially time-bounded and for which, for all inputs w:
w ∈ L: Pr[M accepts w] ≥ 1/2
w ∉ L: Pr[M accepts w] = 0
In this class, there is a one-sided error.
The class ZPP
Type: Las Vegas algorithm. The class ZPP (zero-error probabilistic polynomial time) contains all languages L for which there is a probabilistic Turing machine M that is polynomially time-bounded, may additionally output "?", and for which, for all inputs w:
w ∈ L: Pr[M accepts w] ≥ 1/2 and Pr[M rejects w] = 0
w ∉ L: Pr[M rejects w] ≥ 1/2 and Pr[M accepts w] = 0
Reducing the error or failure probability (probability amplification)
The error or failure probability of randomized algorithms can be reduced by independent repetition. If, for example, an error-free (Las Vegas) algorithm with failure probability 1/2 is run k times in total, the probability that it fails in all k iterations is only 2^(−k). If it delivers a result in at least one iteration, that result is also correct, so it can be output. In this way one can show that every constant failure probability from the interval (0, 1) is equally good, up to polynomial factors in the computation time. (It is easy to see that even a failure probability p very close to 1 is harmless: the algorithm must be run 1/(1 − p) times on average until it succeeds for the first time.) The same approach works for algorithms with one-sided error. For algorithms with two-sided error, a majority vote over the repetitions is required, so the analysis is somewhat more difficult; it follows again that every constant error probability from the interval (0, 1/2) is equally good, up to polynomial factors in the computation time.
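A concrete instance of amplification for a one-sided-error test is the Miller–Rabin primality test (a standard Monte Carlo algorithm, sketched here as an illustration rather than taken from the text above): a single round never rejects a prime, and accepts a composite with probability at most 1/4, so k independent rounds push the error below (1/4)^k.

```python
import random

def mr_round(n):
    """One round of the Miller-Rabin test (one-sided error): if n is
    prime the answer is always True; if n is composite, the answer is
    True with probability at most 1/4."""
    if n < 4:
        return n in (2, 3)
    d, s = n - 1, 0
    while d % 2 == 0:          # write n - 1 = d * 2**s with d odd
        d, s = d // 2, s + 1
    a = random.randrange(2, n - 1)
    x = pow(a, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False               # a is a witness: n is certainly composite

def is_probably_prime(n, k=20):
    """Probability amplification by independent repetition: the
    one-sided error probability drops below (1/4)**k."""
    return all(mr_round(n) for _ in range(k))

assert is_probably_prime(2**61 - 1)      # a Mersenne prime: always accepted
assert not is_probably_prime(2**61 - 3)  # composite (divisible by 29)
```

Because the error is one-sided, a single False answer already proves compositeness; only the "probably prime" answer carries the (now tiny) residual error.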
Derandomization refers to reducing the number of random bits that a randomized algorithm uses. This is useful for the following reason: any randomized algorithm can be made deterministic by simulating it for all assignments of the random bits. However, with r random bits this increases the computation time by a factor of 2^r, which in most cases leads to exponential computation time. If, on the other hand, the algorithm needs only O(log n) random bits for inputs of length n, the enumeration leads only to a polynomial increase in computation time.
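The enumeration idea can be sketched with a deliberately contrived toy algorithm (everything here is our own illustration): a two-sided-error procedure that uses r = 2 random bits and errs on exactly one of the four bit assignments, derandomized by enumerating all assignments and taking a majority vote.

```python
from itertools import product

def randomized_is_even(x, bits):
    """Toy BPP-style algorithm using r = 2 random bits: it answers
    correctly on 3 of the 4 bit assignments (error probability 1/4)."""
    answer = (x % 2 == 0)
    return answer if bits != (1, 1) else not answer

def derandomized_is_even(x, r=2):
    """Deterministic simulation: enumerate all 2**r assignments of the
    random bits and take a majority vote.  The running time grows by
    the factor 2**r -- exponential in the number of random bits, but
    only polynomial if r = O(log n)."""
    votes = [randomized_is_even(x, bits)
             for bits in product((0, 1), repeat=r)]
    return votes.count(True) > len(votes) // 2

assert derandomized_is_even(4) and not derandomized_is_even(7)
```

Since the randomized algorithm is correct on a strict majority of the bit assignments, the majority vote over all of them is always correct, at the cost of the 2^r blow-up.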
One approach to derandomization is to look more closely at how the algorithm uses its random bits. For some randomized algorithms it is sufficient that the random bits used are pairwise independent, and pairwise independent random variables can in turn be generated from a much smaller number of completely independent random bits. In a second step, the algorithm can then be made deterministic with the approach described above.
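A standard construction of pairwise independent random variables (shown here as an illustration; the text above does not name a specific construction) uses a random linear function over Z_p: with a and b drawn uniformly from Z_p, the p values X_i = (a·i + b) mod p are pairwise independent, yet consume only two random values, i.e. about 2·log2(p) random bits instead of p·log2(p).

```python
from itertools import product

p = 5  # a prime modulus, kept tiny so the check below is exhaustive

def pairwise_family(a, b):
    """X_i = (a*i + b) mod p for i = 0, ..., p-1.  For uniform a, b in
    Z_p these p variables are pairwise independent, although they are
    clearly not fully independent (two of them determine all others)."""
    return [(a * i + b) % p for i in range(p)]

# Exhaustive check of pairwise independence for positions 0 and 1:
# over all p*p seeds (a, b), every value pair (x0, x1) occurs exactly
# once, i.e. (X_0, X_1) is uniform on Z_p x Z_p.
counts = {}
for a, b in product(range(p), repeat=2):
    xs = pairwise_family(a, b)
    counts[(xs[0], xs[1])] = counts.get((xs[0], xs[1]), 0) + 1
assert len(counts) == p * p and all(c == 1 for c in counts.values())
```

Enumerating the p^2 seeds is then cheap compared to enumerating all assignments of p fully independent variables, which is what makes the second, deterministic step affordable.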