Moore's law

Moore's law ("law" here in the sense of an empirical regularity, not a law of nature) states that the complexity of integrated circuits, at minimum component cost, doubles regularly; depending on the source, the period is given as 12 to 24 months.

By complexity, Gordon Moore, who formulated the law in 1965, meant the number of circuit components on a computer chip. Occasionally a doubling of the integration density is meant instead, i.e. the number of transistors per unit area. This technological progress is an essential foundation of the "digital revolution".


Gordon Moore made his observation in an article published on April 19, 1965 in the journal Electronics, a few years after the invention of the integrated circuit. The term "Moore's law" was coined in 1970 by Carver Mead. Moore originally predicted a doubling every year, but corrected this statement in 1975, in a talk to the Society of Photo-Optical Instrumentation Engineers (SPIE), to a doubling every two years (see also Wirth's law). The trigger was that the rapid development of semiconductor technology had slowed after the first years. Moore's former Intel colleague David House put forward 18 months as an estimate; this is today the most widely cited variant of Moore's law and the framework on which the semiconductor industry bases its development plans over several years. In reality, the performance of new computer chips has doubled on average about every 20 months. The media still mostly speak of a doubling of integration density every 18 months.

However, this value refers to chips from the mass production of the then-current generation of technology. In 2005, for example, competitive production of chips for the world market used structures of 130 to 90 nm (gate lengths of roughly 30 to 50 nm, depending on the technology), mass production at the 65 nm node was in preparation, and laboratories were already working on even smaller feature sizes: the first prototype transistors with a gate length of 10 nm had already been manufactured.

At the Intel Developer Forum (IDF) in autumn 2007, Moore predicted the end of his law: it would probably hold for another 10 to 15 years before a fundamental limit is reached. However, Pat Gelsinger, head of Intel's Digital Enterprise division, forecast only half a year later that Moore's law would remain valid until 2029. In October 2009, Martin Strobel, in his role as spokesman for Intel Germany, explained in detail why Intel was confident "that Moore's law can still be met for quite a while".


Moore's law is not a scientific law of nature but a rule of thumb going back to an empirical observation. At the same time, one can speak of a "self-fulfilling prophecy", since various industries are involved in the development of better microchips. They must commit to common milestones (e.g. the optics industry with improved lithographic methods) in order to operate profitably. The formulation of Moore's law has changed considerably over time. While Moore spoke of the number of components on an integrated circuit, today the number of transistors on an integrated circuit is usually meant, and sometimes even the number of transistors per unit area.

Moore noted that, due to the manufacturing process, the cost per circuit component first decreases with increasing component count and then increases again. At low component counts the available material is not fully exploited; at very high component counts, experimental methods are needed that are not yet economical. He therefore related his observation only to the respective cost optimum, i.e. to the manufacturing process and the component count per circuit at which the cost per circuit component is lowest. This makes it theoretically clear which manufacturing process and which computer chip would have to be considered in each year when checking Moore's law.
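The cost optimum can be sketched with a toy model in Python; the fixed-cost and yield-penalty coefficients below are invented purely for illustration. Cost per component first falls as fixed costs are spread over more components, then rises again as complexity hurts yield:

```python
# Illustrative toy model of Moore's cost optimum (hypothetical numbers).
# cost_per_component(n) = fixed / n + yield_penalty * n:
# the first term falls as fixed costs spread over more components,
# the second rises as yield losses grow with complexity.

def cost_per_component(n, fixed=1000.0, yield_penalty=0.01):
    return fixed / n + yield_penalty * n

# Scan component counts and find the cost-optimal one.
counts = range(10, 1001, 10)
optimum = min(counts, key=cost_per_component)

# The analytic minimum lies at sqrt(fixed / yield_penalty) ~ 316;
# 320 is the nearest grid point in our scan.
print(optimum)
```

Moore's original observation tracked how this optimal component count itself grew from year to year.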

The various formulations distort Moore's original observation, partly beyond recognition. Even the variable interpretation of the doubling period as 12, 18 or 24 months causes significant differences. Since computer chips vary greatly in size, it is by no means the same whether one counts the number of transistors per chip or per unit area. Omitting the cost optimum finally leads to complete alienation from the original statement: without it, any production process and any circuit can be used to confirm Moore's law, whether a commercially available processor, extremely expensive high technology, or an experimental circuit not yet on the market. Through the different versions in circulation, Moore's law has lost much of its objective verifiability.


According to Moore's law, the complexity K as a function of time t follows an exponential function:

    K(t) = K0 · e^(λ · (t − t0))

Here the growth rate λ is the reciprocal of the doubling time T2, multiplied by the constant ln 2:

    λ = ln 2 / T2

This relationship can also be written in the condensed form:

    K(t) = K0 · 2^((t − t0) / T2)

where K0 is the complexity at the reference time t0.

For the doubling time T2, various assumptions are used. Often T2 = 2 years is taken; λ in this case is ln 2 / (2 a) ≈ 0.35 per year.
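The formulas above can be checked numerically. In this sketch, the anchor values K0 = 2300 transistors at t0 = 1971 (the Intel 4004) are illustrative assumptions, not part of the law itself:

```python
import math

# Moore's law in condensed form: K(t) = K0 * 2**((t - t0) / T2)
def complexity(t, t0=1971, K0=2300, T2=2.0):
    """Transistor count at year t, anchored (for illustration only)
    at the Intel 4004: 2300 transistors in 1971."""
    return K0 * 2 ** ((t - t0) / T2)

# Growth rate for a doubling time of T2 = 2 years.
lam = math.log(2) / 2.0
print(round(lam, 3))             # ~0.347, i.e. about 0.35 per year

# Ten years = five doublings: 2300 * 32 = 73600 transistors.
print(round(complexity(1981)))
```

With T2 = 1.5 a (the David House variant) the same ten years would instead yield 2^(10/1.5) ≈ 101 doublings' worth of growth per the exponent, illustrating how sensitive the law is to the assumed period.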

Computing power

That the processing power of computers increases linearly with the number of transistors on a chip cannot be inferred from Moore's law. In modern processors, more and more transistors are used for integrated memory (cache), which contributes to computing performance only passively, by speeding up access to frequently used data. An example is the comparison of two processors of the Pentium III series: on the one hand a "Katmai" with a clock frequency of 500 MHz and an external L2 cache, on the other hand a "Coppermine" in the 1 GHz version with integrated L2 cache. Compared to the 500 MHz Katmai, the 1 GHz Coppermine doubled the clock frequency and tripled the transistor count, yet these otherwise comparable processors show a performance gain of only a factor of 2.2 to 2.3.
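Using only the ratios quoted above (clock ×2, transistors ×3, and the midpoint 2.25 of the measured 2.2–2.3 gain), a few lines of arithmetic make the point that performance per transistor actually fell:

```python
# Figures from the Katmai/Coppermine comparison above.
katmai_clock, coppermine_clock = 500, 1000   # MHz
transistor_ratio = 3.0                       # Coppermine has ~3x transistors
performance_ratio = 2.25                     # midpoint of the 2.2-2.3 range

clock_ratio = coppermine_clock / katmai_clock
print(clock_ratio)                           # 2.0

# Each transistor contributes less to performance than before,
# because the extra transistors are cache and help only indirectly.
perf_per_transistor = performance_ratio / transistor_ratio
print(round(perf_per_transistor, 2))         # 0.75
```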

In multi-core processors, several processor cores are combined on a single chip and work in parallel, thereby providing more power. Here the doubling of the transistor count is achieved primarily by doubling the number of processor cores. But here, too, computing power does not double, because the parallel operation of the cores incurs additional coordination effort, which diminishes the gain. Moreover, not all parts of the operating system and applications can be parallelized, so it is difficult to fully utilize all cores simultaneously. An introductory overview of this subject is provided by Amdahl's law.
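Amdahl's law can be stated in a few lines. Here p is the fraction of the workload that can be parallelized; the 90 % figure below is an assumed example, not a value from the text:

```python
# Amdahl's law: speedup with n cores when a fraction p of the work
# can be parallelized: S(n) = 1 / ((1 - p) + p / n)

def amdahl_speedup(n_cores, parallel_fraction):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# Doubling the cores from 1 to 2 with 90% parallelizable code
# does not double performance:
print(round(amdahl_speedup(2, 0.90), 2))      # 1.82, not 2.0

# Even with arbitrarily many cores the speedup is capped at 1/(1-p):
print(round(amdahl_speedup(10**9, 0.90), 2))  # ~10
```

This is why doubling the number of cores, i.e. doubling the transistor count, yields strictly less than a doubling of computing power whenever any serial fraction remains.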

Technical limits

In addition to criticism of erroneous interpretations of the law itself, there have always been voices predicting the end of the validity of Moore's rule of thumb due to insurmountable technical hurdles. So far, however, all predicted hurdles have been overcome before they could cause a kink in Moore's exponential curve. Development plans designed to ensure compliance with Moore's law extend to the year 2020 (according to Sematech).

For comparison: transistors from 90 nm semiconductor manufacturing have a gate length of 50 nm and are about the size of an influenza virus. Such small structures are produced using photolithographic processes. To extend "optical" photolithography to smaller structures, so-called immersion lithography was introduced into industrial production in the mid-2000s. This immersion method secured the resolutions necessary for 65 nm manufacturing and below. For current (2010) products (from the 45 nm node down to the 28 nm node), immersion lithography is combined with other techniques such as double exposure or double patterning, which further increase the resolution achievable with existing techniques. For even smaller structures, at the 16 nm node and beyond, a completely new exposure method will (probably) become necessary: EUV lithography, which is currently not sufficiently mature technically. EUV technology operates at a wavelength of 13 nm; EUV is the common abbreviation for "extreme ultraviolet", also called "soft" X-rays. EUV cannot penetrate glass, so it cannot be focused with lenses but only reflected by a concave mirror. A concave mirror with which this is possible was recently developed at Carl Zeiss SMT. The mirror surface must meet extreme requirements: deviations smaller than 0.15 nm must be achieved. This is currently the world's most accurate mirror.
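The resolution gain from immersion can be estimated with the Rayleigh criterion, R = k1 · λ / NA. The numerical apertures below (0.93 dry, 1.35 with water immersion) and the process factor k1 = 0.35 are typical values assumed for illustration, not figures from the text:

```python
# Rayleigh criterion for the smallest printable half-pitch:
#   R = k1 * wavelength / NA
# Immersion lithography raises the numerical aperture NA above 1 by
# filling the gap between lens and wafer with water (n ~ 1.44 at 193 nm).

def min_feature(wavelength_nm, na, k1=0.35):
    """Smallest printable feature in nm; k1 = 0.35 is an assumed,
    aggressive process factor used here only for illustration."""
    return k1 * wavelength_nm / na

dry = min_feature(193, 0.93)   # dry ArF scanner (assumed NA)
wet = min_feature(193, 1.35)   # immersion ArF scanner (assumed NA)
print(round(dry))              # ~73 nm
print(round(wet))              # ~50 nm
```

The same formula shows why EUV at λ = 13 nm is attractive: shrinking the wavelength by over an order of magnitude relaxes the pressure on NA and k1, at the cost of abandoning refractive optics entirely.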

Although progress in the coming years seems secure, experts expect a slowdown in the growth of integration density. For one thing, a technical limit is reached when a transistor shrinks to the extent of a few atoms (see tunnel current). For another, the financial cost of developing and producing integrated circuits grows faster than the integration density, so there would be a point at which the investment is no longer worthwhile. Assuming exponential growth of development costs, the availability of capital likewise imposes a limit. These impending limits could be bypassed by fundamentally new technologies. Candidates include nanomaterials such as graphene, three-dimensional integrated circuits (which increase the number of transistors per unit volume, not only per unit area), other forms of multi-valued logic, spintronics, as well as optical and quantum computing. With all these technologies, computing power or memory density would be increased without increasing the density of transistors in the conventional sense, so Moore's law would formally lose its validity, but not necessarily its significance.

In the area of the highest computing demands (especially computational fluid dynamics), however, a clear violation of Moore's law has been observed since about 2003. A benchmark of a commercial flow simulation software carried out over the period 1996 to 2009 clearly shows that the time required per finite volume (cell) and per solver iteration has hardly decreased since about 2003 (roughly a 10 % reduction after two years instead of the 50 % corresponding to Moore's law, sometimes less, down to 0 %). The processor manufacturers have responded with multi-core CPUs. However, most software does not scale in proportion to the number of cores, i.e. with two cores the execution time is more than half that with one core. In the worst case the software is not designed for multi-core machines at all and the acceleration is zero. Still, much software, such as the operating system, antivirus and application software, runs in parallel on today's PCs, so some acceleration does occur there. The situation is different in fluid dynamics: here most software tools are parallelized and, given reasonably fast network technology between the cores, scale fairly linearly, i.e. x times more cores also means roughly x times the speed. The speedup of such applications is then limited only by the network latency between the cores.
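The gap between the reported pace and the Moore's-law pace can be expressed as annualized improvement rates, a back-of-the-envelope conversion of the two-year figures quoted above:

```python
import math

# Convert "X% reduction after two years" into an annualized rate,
# assuming the improvement compounds evenly over both years.
def annual_improvement(two_year_reduction):
    remaining = 1.0 - two_year_reduction
    return 1.0 - math.sqrt(remaining)

moore = annual_improvement(0.50)     # Moore's-law pace: 50% in 2 years
observed = annual_improvement(0.10)  # pace reported for CFD after 2003

print(round(moore * 100))            # ~29 % per year
print(round(observed * 100))         # ~5 % per year
```

On a per-year basis, the observed single-core progress in this domain thus ran at roughly a sixth of the Moore's-law rate, which is what pushed the field toward parallel scaling instead.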