
# Parameterized complexity

Parameterized algorithmics is a relatively young area of theoretical computer science that examines in more detail which instances of NP-complete problems can be solved efficiently. It studies which aspects of the input (the parameters) the running time of an algorithm essentially depends on. For example, many graph problems can be solved quickly on graphs of small treewidth.

Formally, a parameterized problem is called fixed-parameter tractable (FPT) if there exists an algorithm that solves it in time f(k) · p(n), where f is a computable function of the parameter k, p is an arbitrary polynomial, and n is the input length (for graph problems, typically the number of vertices and edges).

Note that the same problem may be FPT with respect to one parameter and not FPT with respect to another. For example, the clique problem is not FPT when the parameter is the size of a maximum clique, but it is FPT when the parameter is the treewidth or the maximum degree. One also says that a problem can be parameterized by the parameter in question, as in "the clique problem is parameterized by treewidth". Intuitively, a parameter by which an NP-complete problem can be parameterized isolates the property that causes the exponential growth of the running time, because such instances can be solved quickly except when this parameter is large.

In practice, f is often an unpleasant function such as 2^k, but one generally assumes that k is very small (which is often the case for instances occurring in practice), while n may be large. Parameterized algorithms are then feasible in practice (where k is small) even for n = 50-100, whereas common brute-force algorithms with running times such as 2^n already become impractical at about n = 20.
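The gap between the two kinds of running time can be made concrete with a rough operation count. The following sketch is purely illustrative (the function names and the bounds 2^k · n and 2^n are assumptions standing in for typical FPT and brute-force running times, not a benchmark):

```python
def fpt_steps(n, k):
    """Rough step count for an FPT algorithm with bound ~2^k * n."""
    return 2**k * n

def brute_force_steps(n):
    """Rough step count for a brute-force algorithm with bound ~2^n."""
    return 2**n

# For n = 60 and a small parameter k = 10, the FPT bound stays modest,
# while the brute-force bound is already astronomical at n = 60.
print(fpt_steps(60, 10))      # 61440
print(brute_force_steps(60))  # 1152921504606846976
```

Already at n = 20 the brute-force count exceeds a million steps, and it doubles with every further element of the input, while the FPT count grows only linearly in n as long as k stays fixed.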

## FPT reduction

Analogously to a polynomial-time reduction, an FPT reduction is a function, itself computable in FPT time, that maps an instance of a problem A together with its parameter k to an instance of a problem B together with a parameter k', subject to the restriction that k' is bounded by a computable function of k (and that yes-instances map to yes-instances and no-instances to no-instances). One then writes A ≤_FPT B.

This is a restriction compared to polynomial-time reductions, which may change the size of a solution arbitrarily. For example, in the standard polynomial-time reduction from the independent set problem to the vertex cover problem, the size of a minimum vertex cover equals the difference between the number of vertices and the size of a maximum independent set. The solution size therefore changes in a way not bounded by a function of the parameter alone, so this is not an FPT reduction. Indeed, it turns out that the independent set problem is W[1]-hard while the vertex cover problem is FPT, so no FPT reduction from independent set to vertex cover exists unless FPT = W[1].
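The complement relationship between the two problems can be verified by brute force on a tiny graph. This is a minimal sketch (the helper names are illustrative, and edge lists stand in for a proper graph data structure); it checks that the minimum vertex cover size equals n minus the maximum independent set size:

```python
from itertools import combinations

def is_vertex_cover(edges, cover):
    """True iff every edge has at least one endpoint in `cover`."""
    return all(u in cover or v in cover for u, v in edges)

def is_independent(edges, s):
    """True iff no edge has both endpoints in `s`."""
    return all(not (u in s and v in s) for u, v in edges)

def min_vc_size(nodes, edges):
    for size in range(len(nodes) + 1):
        if any(is_vertex_cover(edges, set(c)) for c in combinations(nodes, size)):
            return size

def max_is_size(nodes, edges):
    for size in range(len(nodes), -1, -1):
        if any(is_independent(edges, set(s)) for s in combinations(nodes, size)):
            return size

nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a 4-cycle
# Complement relation: min vertex cover = n - max independent set.
print(min_vc_size(nodes, edges), max_is_size(nodes, edges))  # 2 2
```

The relation holds for every graph, but because the independent set size is n − k rather than a function of k alone, it cannot serve as the new parameter of an FPT reduction.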

Because of this restriction compared to polynomial-time reductions, the FPT reduction yields a finer gradation of the complexity class NP and allows more precise statements about the complexity of NP-complete problems.

As with polynomial-time reductions, FPT reductions are transitive: from A ≤_FPT B and B ≤_FPT C it follows that A ≤_FPT C. Conversely, if A ≤_FPT B and A is W[t]-hard, then B is also W[t]-hard, just as the NP-hardness of a problem follows from the NP-hardness of a problem that reduces to it.

## Bounded search trees

A frequently used technique is search trees whose height and branching factor are bounded by functions of k; the branching factor can often even be bounded by a constant. Such an algorithm can be used, for example, for the vertex cover problem, where the parameter k is the size of a minimum vertex cover.

One approach is to select an arbitrary, not yet covered edge e and branch on the two vertices incident to e (in each of the two branches of the search tree, one of the two vertices is added to the vertex cover). The branching factor is thus 2, and since at most k vertices are chosen, the height of the search tree is bounded by k. The choice of a not yet covered edge and the insertion of a vertex into the vertex cover each take polynomial time, so the total running time is in O(2^k · p(n)) for a polynomial p.
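The branching just described can be sketched as follows. This is a minimal sketch, assuming the graph is given as an edge list; the function name is illustrative:

```python
def vertex_cover_branch(edges, k):
    """True iff the graph given by its edge list has a vertex cover of size <= k.

    Bounded search tree: branching factor 2, depth at most k,
    so the tree has at most 2^k leaves.
    """
    if not edges:
        return True   # every edge is covered
    if k == 0:
        return False  # uncovered edges remain, but the budget is spent
    u, v = edges[0]   # an arbitrary uncovered edge
    # Branch on covering this edge by u or by v; each branch removes
    # all edges incident to the chosen vertex and spends one unit of budget.
    rest_u = [(a, b) for a, b in edges if u not in (a, b)]
    rest_v = [(a, b) for a, b in edges if v not in (a, b)]
    return vertex_cover_branch(rest_u, k - 1) or vertex_cover_branch(rest_v, k - 1)

# A 4-cycle has a vertex cover of size 2 but none of size 1.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(vertex_cover_branch(cycle, 2), vertex_cover_branch(cycle, 1))  # True False
```

Each recursive call does polynomial work on the edge list, which yields the O(2^k · p(n)) bound stated above.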

## Problem kernel reduction

A decidable problem is fixed-parameter tractable if and only if there exists a problem kernel reduction for it: a reduction, computable in polynomial time, that maps an instance with parameter k to an instance, the problem kernel, whose size is bounded by a computable function g(k). The problem kernel can then be solved by any algorithm with finite running time h. This yields a total running time of h(g(k)), which together with the polynomial for the reduction still satisfies the FPT time bound.

This technique can be applied to the vertex cover problem, for example: if a vertex v has degree greater than k, it must be included in a minimum vertex cover, because if v is not included, all of its neighbors must be (since all edges incident to v need to be covered), and these would be more than k vertices, although k was fixed as the size of a minimum vertex cover.

After all vertices that must definitely belong to a minimum vertex cover have been selected in this manner, at most k² + k vertices with incident edges can remain (at most k further vertices for the vertex cover, each with at most k neighbors), from which a vertex cover can then be selected in time depending only on k.
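The high-degree rule can be sketched as follows (a minimal sketch of the kernelization attributed to Buss; the function name and the tuple return convention are illustrative):

```python
def buss_kernel(edges, k):
    """Kernelize a vertex cover instance by the high-degree rule.

    Any vertex of degree > k must be in every vertex cover of size <= k,
    so it is taken into the cover, its edges are deleted, and k is
    decremented. Afterwards, if more than k^2 edges remain, no cover of
    size k exists (each vertex covers at most k edges).
    Returns (reduced edge list, remaining budget, feasible flag).
    """
    edges = list(edges)
    changed = True
    while changed and k >= 0:
        changed = False
        degree = {}
        for a, b in edges:
            degree[a] = degree.get(a, 0) + 1
            degree[b] = degree.get(b, 0) + 1
        for v, d in degree.items():
            if d > k:
                edges = [(a, b) for a, b in edges if v not in (a, b)]
                k -= 1
                changed = True
                break
    feasible = k >= 0 and len(edges) <= k * k
    return edges, k, feasible

# A star with 5 leaves and k = 1: the center has degree 5 > 1 and is taken.
print(buss_kernel([(0, i) for i in range(1, 6)], 1))  # ([], 0, True)
```

The kernel that remains has at most k² edges and roughly k² + k vertices, so it can be finished off by brute force, or (interleaved) by the bounded search tree.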

The size of the problem kernel yields an even finer gradation within the complexity class FPT. For example, the vertex cover problem counts as easier than the Min-Multicut problem on trees, although both are FPT, because the problem kernel of a vertex cover instance has size at most 2k (by Nemhauser and Trotter), whereas the best known problem kernel reduction for Min-Multicut on trees yields considerably larger problem kernels.

## Interleaving

The interleaving technique consists of performing a problem kernel reduction before each recursive call. In general, this reduces the running time considerably, because both the expansion of a node in the search tree (the generation of its successors) and the kernel reduction itself then operate only on instances of kernel size instead of on the full input.
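For vertex cover, interleaving means running the high-degree rule at the top of every branching step. The following self-contained sketch combines the two techniques (all names are illustrative, and the graph is again an edge list):

```python
def vc_interleaved(edges, k):
    """True iff a vertex cover of size <= k exists; kernelize before each branch."""
    # Kernelization step: a vertex of degree > k must be in the cover.
    while k >= 0:
        deg = {}
        for a, b in edges:
            deg[a] = deg.get(a, 0) + 1
            deg[b] = deg.get(b, 0) + 1
        high = next((v for v, d in deg.items() if d > k), None)
        if high is None:
            break
        edges = [(a, b) for a, b in edges if high not in (a, b)]
        k -= 1
    if not edges:
        return k >= 0
    if k <= 0 or len(edges) > k * k:
        return False  # kernel-size bound: no solution can exist
    # Branching step, now on an instance of kernel size only.
    u, v = edges[0]
    return (vc_interleaved([(a, b) for a, b in edges if u not in (a, b)], k - 1)
            or vc_interleaved([(a, b) for a, b in edges if v not in (a, b)], k - 1))
```

Every recursive call therefore works on at most k² edges, no matter how large the original input was.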

Interleaving is especially effective in combination with memoization as a method for accelerating programs for NP-hard problems. This is because the instances that occur further down in the search tree have small parameters, so their problem kernels are small and therefore very likely to occur repeatedly, which allows memoization to prune many subtrees of the search tree.

## Color coding

Another technique is color coding, in which the n objects of the problem instance are "colored" with k colors. If a solution consists of k of the objects, it can often be found quickly when those k objects are all colored differently; the solution is then said to be colorful. Since families of perfect hash functions exist, at most c^k · log n different colorings need to be tried until some solution is colorful, where c is a constant.

With this technique one can, for example, search a graph for a path of length k, parameterized by k. If the vertices of a path of length k are colored with k different colors, then the color classes can be arranged in k! orders; for each order, all edges running between non-adjacent color classes are removed, after which one can check, for example with the algorithm of Floyd and Warshall in O(n³), whether a path from a vertex of the first color class to a vertex of the last color class still exists (if there is one, it has length at least k). The running time of this algorithm is of the form f(k) · p(n), so it is an FPT algorithm.
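A randomized variant of color coding for finding a simple path on k vertices can be sketched as follows. Instead of the color-class ordering described above, this sketch uses the equally standard dynamic program over color subsets; the function names, the trial count, and the adjacency-dict representation are assumptions:

```python
import random

def has_colorful_path(adj, coloring, k):
    """DP: for each vertex, the sets of colors seen on colorful paths ending there."""
    states = {v: {frozenset([coloring[v]])} for v in adj}
    for _ in range(k - 1):
        new = {v: set() for v in adj}
        for v in adj:
            for u in adj[v]:
                for cs in states[u]:
                    if coloring[v] not in cs:  # extend only with a fresh color
                        new[v].add(cs | {coloring[v]})
        states = new
    return any(states[v] for v in adj)

def k_path_color_coding(adj, k, trials=200, seed=0):
    """Randomized color coding: does the graph contain a simple path on k vertices?

    A fixed k-vertex path is colorful under a uniform k-coloring with
    probability k!/k^k, so repeating the trial 2^O(k) times finds it with
    high probability; a derandomization uses perfect hash families.
    """
    rng = random.Random(seed)
    for _ in range(trials):
        coloring = {v: rng.randrange(k) for v in adj}
        if has_colorful_path(adj, coloring, k):
            return True
    return False
```

Since all colors on a path tracked by the dynamic program are distinct, its vertices are automatically distinct as well, so only simple paths are found.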

## Courcelle's theorem

Courcelle's theorem states that every graph problem that can be described by a formula of extended monadic second-order logic is fixed-parameter tractable with the treewidth as parameter. Such a formula is a logical formula with existential and universal quantifiers, which may also quantify over sets of vertices or edges, extended by a min and a max quantifier.

Problems on a graph G = (V, E) that can be expressed by formulas of this extended logic include, for example, vertex cover, dominating set, and 3-colorability.

For classes of graphs characterized by forbidding some planar graph as a minor, all of these problems are solvable in polynomial time, because each such graph class has a constant upper bound on its treewidth. This applies, for example, to the class of outerplanar graphs and the class of forests.
