Hardware acceleration

Hardware acceleration refers to offloading the main processor by delegating specific compute-intensive tasks to hardware specialized for those tasks. This technique is used particularly for graphics rendering in computers.

Technology

The architecture of a main processor is designed so that it can perform any task in a reasonable time; the range of executable tasks is limited only by the software. However, some recurring tasks cannot be performed very efficiently or quickly by the general-purpose design of the main processor's architecture. On the main processor, the possibilities for optimization are therefore limited to the software.

A remedy is offered by specific hardware blocks (chips, processors) that are specialized in these recurring tasks. They do not need to be as flexible as the main processor, so optimizations for their task can already be built into the hardware architecture of these devices. At the same time, computing time on the main processor is freed up for other tasks.

Some tasks can be efficiently decomposed into thousands of subtasks, for example the Fourier transformation of a given frequency band or the rendering of a small image area; these subtasks can be processed independently and in parallel. Through massive parallelization (see also vector computers), i.e. the use of many small processors working in parallel on this special task, the computation can be greatly accelerated. In many cases the computation speed increases almost linearly with the number of parallel processors (Amdahl's law). On the Nvidia GTX 260 graphics card, for example, 192 stream processors work simultaneously.
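
As a minimal sketch of this scaling (the function name and the assumed parallelizable fraction of 99% are illustrative, not from the original), the theoretical speedup under Amdahl's law can be computed in a few lines of Python:

    def amdahl_speedup(p: float, n: int) -> float:
        # Amdahl's law: p is the parallelizable fraction of the task,
        # n the number of parallel processors; the serial remainder
        # (1 - p) limits the achievable speedup.
        return 1.0 / ((1.0 - p) + p / n)

    # A highly parallel task (99% parallelizable), e.g. rendering
    # many small image areas, scales almost linearly at first:
    for n in (1, 2, 24, 192):
        print(n, round(amdahl_speedup(0.99, n), 1))
    # prints: 1 1.0 / 2 2.0 / 24 19.5 / 192 66.0

The sketch also shows the limit: beyond a certain number of processors the serial fraction dominates, which is why massive parallelization pays off mainly for tasks that decompose almost completely.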

Such parallelization can also be useful from the standpoint of energy efficiency. Power consumption rises linearly with the number of processors, while it increases roughly quadratically with the clock frequency. Hardware-acceleration units can therefore be operated at a lower clock rate and with lower energy consumption.
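
A short Python sketch of this simplified power model (the quadratic frequency dependence is the approximation stated above; real power consumption also depends on supply voltage and architecture):

    def relative_power(n_processors: int, clock: float) -> float:
        # Simplified model: power grows linearly with the number of
        # processors and roughly quadratically with the clock frequency
        # (values relative to a single processor at clock 1.0).
        return n_processors * clock ** 2

    print(relative_power(1, 1.0))   # 1.0: one processor at full clock
    print(relative_power(4, 0.25))  # 0.25: four processors at a quarter
                                    # of the clock, same nominal throughput

Under this model, spreading the same nominal throughput over more, slower processors reduces the total power draw, which is the efficiency argument made above.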

Graphics Cards

In the early days of computer technology, the screen display was one of the tasks of the main processor (e.g. on the Sinclair ZX81). The first dedicated chips took over the rendering of text characters and their output to a monitor. Later, the management of dedicated graphics memory and color graphics display (EGA and VGA cards) followed. With the advent of graphical user interfaces, these graphics cards took over simple, repetitive tasks such as drawing rectangles, lines, and circles. A special video card driver received the drawing commands from the operating system and delegated them to the graphics card. Alternatively, these tasks could be performed by the main processor in a software mode, but with a considerable loss of speed.
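
The division of labor described here can be sketched as a simple dispatch that prefers the hardware path and falls back to software (all names are hypothetical and for illustration only, not an actual driver interface):

    def software_render(command, *args):
        # Hypothetical slow fallback executed by the main processor.
        print(f"CPU renders {command} {args}")

    class VideoCardDriver:
        # Illustrative set of primitives the card accelerates.
        ACCELERATED = {"rect", "line", "circle"}

        def draw(self, command, *args):
            # The driver delegates supported drawing commands to the
            # graphics card and falls back to software for the rest.
            if command in self.ACCELERATED:
                print(f"graphics card draws {command} {args}")
            else:
                software_render(command, *args)

    driver = VideoCardDriver()
    driver.draw("rect", 0, 0, 100, 50)     # accelerated path
    driver.draw("bezier", (0, 0), (9, 9))  # software fallback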

The next logical development step, introduced in 1996, were the so-called 3D graphics cards. At first they took over the task of writing triangles (the basic primitive from which polygon-based 3D objects are composed) very quickly into graphics memory and applying a texture to them (e.g. the graphics cards of the Voodoo series by the manufacturer 3dfx). In the early 2000s, the calculation of the 3D wireframe model was also shifted to the graphics card (so-called T&L units). Since around 2006, graphics cards have increasingly also taken over the calculation of the physical processes used to manipulate the 3D model, by means of so-called physics accelerators.

Other applications

Industry continually develops logic modules that take on special tasks to relieve the main processor. These include the decoding of MP3 data, the encoding and decoding of video data, the synthesis of audio signals (audio accelerators), and, as a borderline case, digital signal processors.

Opposing tendencies

However, it must be remembered that originally independent blocks have increasingly been integrated into the main processor in order to speed up access to the individual function units.

The best example is probably the math coprocessor of the 1980s and 1990s. It could optionally be plugged into a dedicated socket on the motherboard (e.g. the Intel 8087 and its successors) and then supported the main processor in calculating floating-point numbers. If it was not present, the main processor performed the floating-point calculations itself, but with a considerable loss of speed.

Likewise, in the first processors that had a buffer memory (cache), this cache was located in an external chip on the motherboard. Since the 1990s, however, it has been common practice to integrate the cache into the processor. More recently, even parts of the chipset, such as memory management or the northbridge, have been integrated together with the main processor on a single chip. The same development can be seen in integrated graphics processors (IGPs); here, thanks to its earlier acquisition of graphics card manufacturer ATI, the processor manufacturer AMD plays a leading role with its Fusion technology.
