Amdahl's law

Amdahl's law (named after Gene Amdahl, 1967) is a model in computer science for the speedup of programs through parallel execution. According to Amdahl, the performance gain is limited above all by the sequential part of the problem, since its execution time cannot be reduced by parallelization.

Description

A program can never be executed fully in parallel, because some parts, such as process initialization or memory allocation, run only once on a single processor, or because certain steps depend on results that are not yet available. The run is therefore divided into sections that execute either sequentially or completely in parallel, and each kind is grouped together. Let P be the fraction of the runtime spent in the parallel portions of a program; then (1 - P) is the sequential fraction, and the total runtime on a single core is the sum of both parts:
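
The formula itself is not reproduced in the text; reconstructed from the definitions just given, with T denoting the total runtime on one core, it is simply the sum of the sequential and the parallel part:

T(1) = (1 - P) T + P T = T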

Suppose a program takes 20 hours on a machine with one CPU, and one hour of it is executed sequentially (for example, initialization routines or memory allocation). The remaining 19 hours make up 95% of the total work and can be distributed across any number of processors. Even with an infinite number of processors, however, the total computation time cannot fall below 1 hour, so the maximum achievable speedup is a factor of 20.
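
As a worked check of this example, using the figures given above (P = 0.95), the limiting speedup is:

S_max = 1 / (1 - P) = 1 / 0.05 = 20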

Let P be the parallel fraction of a program and N the number of processors used for the computation. Then (1 - P) is the sequential fraction, and P/N is the runtime of the parallel fraction after acceleration. The total time, and hence the total speedup, is composed of the sum of these individual terms:
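
Written out from these definitions, this gives the usual form of Amdahl's law for the speedup S(N):

S(N) = 1 / ((1 - P) + P/N)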

It can be seen that, as the number of processors grows, the speedup depends increasingly on the sequential part of the algorithm and on the communication between the processors. Amdahl argued that parallelization incurs additional costs, for example for communication and synchronization between the processors. The formula is therefore extended by a term o(N), which takes this overhead into account and grows with increasing N.
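
One possible form of the extended expression, with o(N) denoting the parallelization overhead described above, is:

S(N) = 1 / ((1 - P) + P/N + o(N))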

With the introduction of this new parameter, the curve no longer converges towards 1 / (1 - P), but reaches a maximum and then falls off again. This effect is also observed in practice: with a sufficiently large number of processors, the effort needed to distribute the problem, to synchronize, and to transfer the results back exceeds the computing effort that the additional cores remove.
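
The following short Python sketch illustrates this behaviour. The linear overhead model o(N) = c * N and the values of P and c are assumptions chosen purely for illustration; Amdahl's argument only requires that o(N) grows with N.

def speedup(p, n, c=0.0):
    # Speedup for parallel fraction p on n processors,
    # with an assumed overhead of c * n.
    return 1.0 / ((1.0 - p) + p / n + c * n)

p = 0.95      # parallel fraction, as in the 20-hour example above
c = 0.0005    # assumed per-processor overhead coefficient (illustrative)
for n in (1, 2, 4, 8, 16, 32, 64, 128, 256, 512):
    print(f"N = {n:4d}   speedup = {speedup(p, n, c):6.2f}")
# Without overhead (c = 0) the speedup approaches 1 / (1 - p) = 20;
# with overhead it peaks (here around N = 44) and then falls again.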

Criticism

Amdahl's law is one of the most pessimistic estimates of the theoretically achievable speedup, since it does not consider some positive factors. According to Amdahl, the maximum possible speedup is linear; that is, with 10 times the number of cores, at most 10 times the computing power is possible. However, a cache is built into every CPU, so the amount of fast memory also increases tenfold. In the best case, the entire working set can then be kept in the cache instead of in the slower main memory, which allows a super-linear speedup, i.e. a speedup beyond what the additional pure computing power alone would give. However, this could be attributed to a flawed basis of comparison, since the integrated cache is counted as part of the processor module. A comparison that did not take the cache size into account would not lead to the super-linear speedup mentioned. Amdahl's consideration of the maximum speedup nevertheless remains valid in both cases.

Furthermore, Amdahl assumes in his statement that the problem size remains constant; the goal is therefore to compute a given problem in the shortest possible time. If instead a larger problem is computed in the same amount of time, the conditions are more favorable, since the parallel fraction can then, depending on the problem, become very large. The longer the computation runs, the less significant the time spent on initialization and final synchronization becomes.

Miscellaneous

Amdahl's law is also of importance in business administration, particularly in operations research with regard to resource allocation.

Gustafson's law is closely related, but takes into account some aspects that Amdahl's law leaves out; Moore's law considers the performance increase of computer chips in general.
