The term supercomputer refers to the fastest computers of their time, regardless of the design on which they are based. A typical feature of a modern supercomputer is its large number of processors, which can access shared peripherals and a partially shared main memory. Supercomputers are often used for computer simulations in the field of high-performance computing.
Since arbitrarily fast individual processors cannot be built, all high-performance computers are parallel computers. A parallel computer is a computer in which operations are distributed simultaneously across multiple CPUs to increase processing speed. To make optimal use of a supercomputer, programs must therefore be tailored as closely as possible to the individual processors working in parallel.
Supercomputers today are mostly designed as scalar computers or vector computers, which are based on different processor architectures. Vector processors (also called vector computers or array processors) perform a calculation on many data elements (in a vector or array) simultaneously. Scalar processors, by contrast, can only process one pair of operands per instruction. Scalar supercomputers are therefore often based on thousands of standard processors networked together (computer clusters).
Originally, outstanding computational performance was achieved by pushing the available technology to its limits, using designs that were too expensive for larger-scale production (e.g. liquid cooling, exotic components and materials, and compact construction for short signal paths); the number of processors was comparatively low. For some time now, so-called clusters have increasingly established themselves, in which a large number of (usually inexpensive) individual computers are networked into one large computer. Compared with a vector computer, the nodes of a cluster have their own peripherals and only their own local main memory. Clusters use standard components, so they initially offer cost advantages over vector computers. However, they require considerably greater programming effort: it must be considered whether the programs used are suitable for distribution over many processors. Programming languages used include Fortran and C.
Closely associated with the term supercomputer is the company Cray, named after its founder Seymour Cray, which presented its first supercomputer in the 1970s. The first officially installed supercomputer, the Cray-1, achieved 130 megaflops in 1976. For comparison, an ordinary CPU from 2008 (Intel Core i7-965, 3.2 GHz, 4 cores) achieves 33,000 megaflops, and a high-end consumer graphics card from 2013 (GeForce GTX Titan) a theoretical 4,500,000 megaflops.
The cost of a supercomputer in the TOP10 currently runs to a very high double-digit, often already triple-digit, number of millions of euros, and upwards there are no limits. These enormous investments inevitably raise the question of what such very expensive equipment is needed for, and whether investing in the development of such a machine is worthwhile beyond mere prestige.
Today's supercomputers are primarily used for simulation purposes. As a rule, the more realistic a simulation of complex relationships is to be, the more computing power is required. One advantage of supercomputers is that their extremely fast and correspondingly great computational power allows them to take more and more interdependencies into account. This permits the inclusion of broader, often inconspicuous secondary or boundary conditions in the actual simulation, ensuring an ever more meaningful overall result.
The current main areas of application of supercomputers include biology, chemistry, geology, aerospace, medicine, weather forecasting, climate research, the military, and physics.
In the military field, supercomputers have, for example, enabled new nuclear weapon developments through simulation, without supporting data from further underground nuclear tests. These fields are characterised by highly complex systems and subsystems that are increasingly linked in far-reaching ways, so that changes in one subsystem usually have more or less pronounced effects on neighbouring or connected systems. The use of supercomputers makes it ever easier to take many such consequences into account or even to predict them, which means that countermeasures can be taken well in advance. This applies, for example, to simulations of climate change, to predictions of earthquakes or volcanic eruptions, and, in medicine, to simulations of the effect of new drugs on the organism. Such simulations are, quite independently of the computing power, logically only as accurate as the programmed parameters or models allow. The enormous investments in the steady increase in FLOPS, and thus in the development of ever faster supercomputers, are justified mainly by the benefits and the eventual "knowledge advantage" for humanity, and less by general technical progress.
Since Linux can be adapted and optimised very flexibly, it has also become widespread on supercomputers. In the TOP500 list of the fastest computer systems (as of June 2012), a total of 462 systems running exclusively on Linux and 11 systems running partially (CNK/SLES 9) on Linux are listed. Thus 92.4% of the systems run completely, and 94.6% at least partially, under Linux. Almost all other systems run Unix or Unix-like operating systems. Microsoft Windows, the biggest rival in the desktop segment, plays only a minor role in high-performance computing, with two systems (ranked 94th and 156th; 0.4%). In June 2011 there were four systems (including number 40) operated under Windows.
The fastest supercomputers by performance are now listed twice a year in the TOP500 list, with the Linpack benchmark serving as the basis for ranking. Since November 2007, the fastest supercomputers by energy efficiency (MFLOPS/W) have been listed in the Green500 list.
Selected current supercomputers (worldwide)
Selected current supercomputers (Germany-wide)
The fastest of their time (historical)
The following table lists some of the fastest supercomputers of their respective eras:
- The more than one million active computers in the Berkeley Open Infrastructure for Network Computing currently (February 2014) provide a peak computing power of 8 to 21 petaflops, which can vary from day to day.
- The Earth Simulator could perform all the calculations of all computers worldwide from 1960 to 1970 in about 35 minutes.
- If each of the 7 billion people in the world performed one calculation per second on a pocket calculator without any interruption, all of humanity would need 18 years of work to accomplish what the IBM Roadrunner manages in one hour.
- With the performance of the K computer, the metres of a light year could be "counted" within about one second.
Correlators in comparison
- The correlator of the Atacama Large Millimeter/submillimeter Array (ALMA) currently (December 2012) performs 17 petaflops.
- The computing power of the WIDAR correlator at the Expanded Very Large Array (EVLA) is given (as of June 2010) as 40 petaflops.
- The planned correlator of the Square Kilometre Array (SKA) (construction 2016-2023) is expected to perform 4 exaflops (4,000 petaflops) (as of June 2010).