Gustafson's law

Gustafson's law (also known as the Gustafson–Barsis law) is a law in theoretical computer science, formulated by John Gustafson in 1988. It states that a sufficiently large problem can be parallelized efficiently.

Description

Gustafson's law builds on Amdahl's law, which starts from a fixed problem size and attempts to solve a task in less time by parallelizing it across $N$ processors. It is based on the assumption that the best possible result is a linear improvement in the required time (i.e., $T/N$): if twice as many processors are used, at best half the time is needed.
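Amdahl's formula itself is not written out above; for reference, a minimal Python sketch of the standard fixed-size form and its saturation behavior (the function name and the 95 % parallel fraction are ours, chosen for illustration):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup for a fixed problem size: the parallel fraction p of the
    runtime shrinks to p / n on n processors, the rest stays serial."""
    return 1.0 / ((1.0 - p) + p / n)

# Even at 95 % parallel code the speedup saturates near 1 / (1 - p) = 20:
for n in (2, 4, 16, 256, 4096):
    print(n, round(amdahl_speedup(0.95, n), 2))
```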

Gustafson changed the law so that it uses a fixed time window, within which the problem size grows with the number of processors: if twice as many processors are used, a task of at best twice the size can be solved. Although the two statements appear the same at first glance, they differ significantly.

A program can never be executed fully in parallel, because some parts, such as memory allocation or process initialization, run only once on one processor, or because the flow of execution depends on certain intermediate results. The program run is therefore divided into sections that run either completely sequentially or completely in parallel, and each kind is grouped together. Let $p$ be the proportion of the runtime taken by the parallel sections of a program; then $(1-p)$ is the sequential fraction, and the total runtime $T$ on one core results as the sum:

$T = (1-p) \cdot T + p \cdot T$

According to this formula, on a serial computer both parts are executed one after the other and the times add up. Neglecting the overhead for communication, synchronization, and the like, the parallel portion can be executed on $N$ processors simultaneously; in the fixed time window the parallel work grows by the factor $N$, so a serial computer would need $N \cdot p \cdot T$ for it. For the acceleration factor (speedup) it therefore holds that:

$S(N) = \frac{(1-p) \cdot T + p \cdot N \cdot T}{T} = (1-p) + p \cdot N$
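This formula is easy to evaluate numerically. A minimal Python sketch (the function name and the 95 % parallel fraction are ours, chosen for illustration):

```python
def gustafson_speedup(p: float, n: int) -> float:
    """Scaled speedup S(N) = (1 - p) + p * N, where p is the parallel
    fraction of the runtime and n the number of processors."""
    return (1.0 - p) + p * n

for n in (2, 4, 16, 256, 4096):
    print(n, gustafson_speedup(0.95, n))  # grows almost linearly with n
```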

The crucial difference from Amdahl is that here the parallel fraction grows with the number of processors. The sequential part does not act as a limit, since it becomes insignificant as $N$ increases. As $N$ approaches infinity, the speedup grows linearly with the number of processors.
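The contrast can be stated precisely by letting $N$ grow in both formulas (a standard observation, restated here in LaTeX for clarity):

```latex
% Amdahl, fixed problem size: the speedup is bounded by the serial fraction.
\lim_{N \to \infty} \frac{1}{(1-p) + p/N} = \frac{1}{1-p}

% Gustafson, fixed time window: the speedup is unbounded and asymptotically
% linear in N, with slope p.
\lim_{N \to \infty} \frac{(1-p) + p \cdot N}{N} = p
```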

Application

Gustafson's law applies well to problems that are processed in real time. The real time available is fixed, and with more processors the problem handled within it can be made more complex.

For example, in 3D rendering at least 30 images per second must be computed, which constitutes a fixed time window. The images can, however, be made more realistic by increasing the amount of data, e.g., with more polygons or more detailed textures. According to Gustafson, this can be achieved by using more processors.
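A small sketch of this fixed-time-window reasoning (all constants and names below are hypothetical, chosen only for illustration):

```python
FRAME_BUDGET_S = 1.0 / 30.0   # fixed time window: 30 frames per second
SERIAL_COST_S = 0.005         # hypothetical serial work per frame (setup, sync)
COST_PER_POLYGON_S = 2e-7     # hypothetical parallelizable cost per polygon

def max_polygons(processors: int) -> int:
    """Polygons that fit in the frame budget: the serial part is paid once,
    the per-polygon work is spread across all processors."""
    parallel_budget = FRAME_BUDGET_S - SERIAL_COST_S
    return int(parallel_budget * processors / COST_PER_POLYGON_S)

# Doubling the processors roughly doubles the affordable scene complexity:
for n in (1, 2, 4, 8):
    print(n, "processors ->", max_polygons(n), "polygons per frame")
```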

The same holds for logic calculations in games, for physics simulation, and for artificial intelligence that interacts with people; in theory it also holds for robotics, machine control and monitoring, and similar tasks.

Gustafson himself currently works at AMD on Radeon graphics cards.

Criticism

There are a number of problems that cannot usefully be scaled up arbitrarily, as well as problems that must be computed in the shortest possible time.

Non-linear algorithms often cannot fully benefit from the parallelization described by Gustafson. Lawrence Snyder argued that an algorithm with a complexity of $O(N^3)$ achieves a speedup of only 9 % when the concurrency is doubled.
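The arithmetic behind this objection can be sketched as follows (our own illustration; Snyder's exact 9 % figure follows from his own accounting of speedup, not reproduced here). Within a fixed time window, $P$ times as many processors perform $P$ times as much work; if the work grows as size$^k$, the feasible problem size grows only by $P^{1/k}$:

```python
def problem_size_growth(processor_factor: float, exponent: float) -> float:
    """Within a fixed time window, scaling the machine by processor_factor
    scales the total work by the same factor; if work ~ size**exponent,
    the feasible problem size grows only by processor_factor**(1/exponent)."""
    return processor_factor ** (1.0 / exponent)

# Doubling the concurrency for an O(size**3) algorithm:
print(problem_size_growth(2.0, 3.0))  # ~1.26: the problem grows far more
                                      # slowly than the machine
```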
