Von Neumann architecture

Von Neumann architecture (VNA) is a reference model for computers in which a single common memory holds both program instructions and data. In Flynn's taxonomy, Von Neumann systems belong to the class of SISD architectures (Single Instruction, Single Data), in contrast to parallel processing.

The Von Neumann architecture is the basis for the operation of most computers known today. It is named after the Austro-Hungarian mathematician John von Neumann, who later worked in the U.S. and whose seminal paper on the subject was published in 1945. It is sometimes also called the Princeton architecture, after the U.S. university of the same name.

Development

Von Neumann described the concept in 1945 in the initially unpublished paper "First Draft of a Report on the EDVAC", written in the context of the construction of the EDVAC computer. It was revolutionary in its day, as previously developed computers were tied to a fixed program that was either wired into the hardware or had to be read in via punch cards. With the Von Neumann architecture it became possible to modify programs very quickly without any changes to the hardware, and to run different programs in quick succession.

Many ideas of the Von Neumann architecture had already been worked out by Konrad Zuse in 1936, documented in two patent applications in 1937, and for the most part mechanically implemented in the Z1 machine in 1938. However, it is considered unlikely that von Neumann knew Zuse's work when he presented his architecture in 1945.

Most computers in use today are based on the fundamental principle of the Von Neumann architecture, that is, their essential properties correspond to those of a VNA. However, this typically does not mean that they are internally structured like a simple VNA with its few functional groups. Over time, many architectures originally conceived as simple VNAs, such as the x86 architecture, have differentiated and evolved into far more complex designs. This was done to achieve performance gains without breaking with the easily manageable VNA model, that is, to remain compatible with it from the software point of view and so continue to exploit its advantages.

With the trend toward growing numbers of parallel processing units (multi-core) and buses (such as HyperTransport), this compatibility is becoming ever more complex and difficult to realize. It is therefore to be expected that in the foreseeable future a paradigm shift to a different, parallel architecture model will be necessary to achieve further performance gains in computer architectures. A first harbinger is, for example, the emerging NUMA computing, in which memory is no longer regarded as having "uniform" properties.

Concept

The Von Neumann architecture is a circuit concept for realizing universal computers (Von Neumann computers, VNR). It implements all components of a Turing machine, but its systematic decomposition into the corresponding functional groups allows the use of specialized binary switching units and thus a more efficient structuring of operations.

In principle, however, it remains the case that everything computable by a Turing machine is also computable on a machine with Von Neumann architecture, and vice versa. The same applies to all high-level languages, which a compiler or interpreter maps onto the binary representation. Although they simplify the handling of operations, they offer no extension of the semantics given by the Turing machine. This is illustrated by the fact that the translation from a high-level language into the binary representation is itself performed by a binary program, without user interaction.

Components

A Von Neumann machine is based on the following components, which are still used in computers today (a short sketch follows the list):

  • ALU (Arithmetic Logic Unit) - the arithmetic unit, rarely also called the central processing unit or CPU, performs arithmetic operations and logic operations. (The terms CPU and processor are nowadays generally used with a different meaning.)
  • Control Unit - the control unit interprets the instructions of a program and interconnects the data source, data sink, and the necessary ALU components accordingly; the control unit also regulates the instruction sequence.
  • Memory - the storage unit stores both programs and data, which are accessible to the arithmetic unit.
  • I/O Unit - the input/output unit controls the input and output of data to the user (keyboard, screen) or to other systems (interfaces).
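
The division of labor among these units can be made concrete with a short sketch; the class and function names below are illustrative, not taken from any real implementation:

```python
class Memory:
    """One storage unit holds BOTH program and data (the defining VNA property)."""
    def __init__(self, size):
        self.cells = [0] * size
    def read(self, addr):
        return self.cells[addr]
    def write(self, addr, value):
        self.cells[addr] = value

def alu(op, a, b):
    """Arithmetic Logic Unit: pure arithmetic and logic, no control decisions."""
    return {"ADD": a + b, "SUB": a - b, "AND": a & b, "OR": a | b}[op]

class IOUnit:
    """I/O unit: moves data between the machine and the outside world."""
    def output(self, value):
        print(value)

# The control unit, which interprets instructions from Memory, routes operands
# to the ALU, and sequences execution, is sketched in the program-flow section below.
mem = Memory(8)
mem.write(0, alu("ADD", 7, 35))
IOUnit().output(mem.read(0))  # 42
```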

Program flow

These components process program instructions according to the following rules (a minimal sketch of the resulting fetch-execute loop follows the list).

  • Principle of the stored program: instructions are stored in RAM with a linear (one-dimensional) address space.
  • An instruction address register, called the instruction pointer or program counter, points to the next instruction to be executed.
  • Instructions can be modified just like data.
  • Instructions are read from a memory cell and then executed.
  • Normally, the content of the program counter is then incremented by one.
  • There are one or more jump instructions that change the content of the program counter by a value other than +1.
  • There are one or more branch instructions that, depending on the value of a decision bit, either increment the program counter by one or perform a jump.
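
Taken together, these rules describe the fetch-decode-execute cycle. The following is a minimal runnable sketch; the tiny instruction set (LOADI, DEC, BRNZ, HALT) is invented purely for illustration:

```python
# Program and data share one linear memory; here only instructions are stored.
memory = [
    ("LOADI", 3),     # 0: acc = 3
    ("DEC", None),    # 1: acc -= 1
    ("BRNZ", 1),      # 2: branch to address 1 if acc != 0 (decision bit)
    ("HALT", None),   # 3: stop
]

pc = 0    # instruction address register (program counter)
acc = 0   # a single accumulator stands in for the ALU's registers

while True:
    op, arg = memory[pc]   # fetch the instruction the program counter points to
    pc += 1                # default rule: step to the next cell
    if op == "LOADI":
        acc = arg
    elif op == "DEC":
        acc -= 1
    elif op == "BRNZ":     # conditional branch: overwrite the program counter
        if acc != 0:
            pc = arg
    elif op == "HALT":
        break

print(acc)  # 0: the loop decremented the accumulator until the branch fell through
```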

Properties

Benefits

The strictly sequential operation of the Von Neumann architecture is its crucial advantage over other architectures (e.g., computer clusters, the Harvard architecture) and the reason for its unbroken popularity. From the programmer's perspective, a simple, deterministic program execution is guaranteed: race conditions and data inconsistencies are ruled out by the single bus over which the CPU accesses data and program.

Von Neumann bottleneck

The Von Neumann bottleneck of the Von Neumann architecture refers to the architectural fact that the connection system (the data and instruction bus) is a bottleneck between the processor and the memory. Beyond that, the Von Neumann bottleneck also denotes the mode of thinking responsible for this state of affairs, "word-at-a-time thinking": the explicit, forced sequentialism imposed by the single bus through which all actions must pass, so that only one thing can happen at a time.
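
The forced word-at-a-time traffic can be made tangible by counting bus transfers. The sketch below is hypothetical (invented one-address instruction set and addresses); it counts how many words must cross the single shared bus just to compute a = b + c:

```python
bus_transfers = 0

def bus_read(memory, addr):
    """Every word, whether instruction or data, crosses the same shared bus."""
    global bus_transfers
    bus_transfers += 1
    return memory[addr]

def bus_write(memory, addr, value):
    global bus_transfers
    bus_transfers += 1
    memory[addr] = value

# Program and data share one memory (addresses are invented for illustration).
memory = {
    0: ("LOAD", 100),   # acc = b
    1: ("ADD", 101),    # acc += c
    2: ("STORE", 102),  # a = acc
    100: 7,             # b
    101: 35,            # c
    102: 0,             # a
}

acc = 0
for pc in range(3):
    op, operand = bus_read(memory, pc)    # instruction fetch: one transfer
    if op == "LOAD":
        acc = bus_read(memory, operand)   # data read: another transfer
    elif op == "ADD":
        acc += bus_read(memory, operand)
    elif op == "STORE":
        bus_write(memory, operand, acc)   # data write: another transfer

print(acc, bus_transfers)  # 42 6 -> six word-at-a-time transfers for one addition
```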

The term "Von Neumann bottleneck" itself was coined by John W. Backus, who introduced it in his 1977 Turing Award lecture:

" Surely there must be a less primitive way of making big changes in the store than by pushing vast numbers of words back and forth through the von Neumann bottleneck. Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck did Has kept us tied to word- at- a-time thinking instead of Encouraging us to think in terms of the larger conceptual units of the task at hand. Malthus programming is of basically planning and detailing the enormous traffic of words through the bottleneck of Neumann, and much of traffic did not Concerns significant data Itself, but where to find it. "

Memory Wall

Since a Von Neumann architecture, in contrast to the Harvard architecture, uses only one common bus for data and instructions, the maximum transferable amount of data has to be split between the two. In early computers the CPU was the slowest unit of the machine; that is, the data delivery time made up only a small proportion of the total processing time of an arithmetic operation. For some time now, however, CPU processing speed has grown significantly faster than the transfer rates of the buses or of memory, which exacerbates the impact of the Von Neumann bottleneck.

The concept of "memory wall" refers to this growing imbalance between the speed of the CPU and memory outside the CPU chip. From 1986 to 2000, the CPU speeds grew annually by 55% while the memory transfer speeds rose by only 10%. Following this trend is expected that memory latency is the bottleneck of computer processing power will be.
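
Taking the quoted growth rates at face value, the imbalance compounds multiplicatively; a back-of-the-envelope calculation:

```latex
% Assumes the 55% and 10% annual growth rates quoted above.
\frac{1.55^{n}}{1.10^{n}} = \left(\frac{1.55}{1.10}\right)^{n} \approx 1.41^{n}
```

The CPU/memory speed ratio thus roughly doubles every two years (1.41^2 ≈ 2), amounting to a factor of about 125 over the fourteen years from 1986 to 2000 (1.41^14 ≈ 125).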

Currently, however, CPU performance growth has slowed significantly, partly due to physical limitations and partly because current CPU designs have already hit the memory wall. Intel summarized these reasons as follows:

"First of all, as chip geometries shrink and clock frequencies rise, the transistor leakage current Increases, leading to excess power consumption and heat ... secondly, the advantages of higher clock speeds are in part negated by memory latency, since memory access times have not been able to keep pace with Increasing clock frequencies. Third, for certain applications, traditional serial architectures are becoming less efficient as processors get faster ( due to the so - called Von Neumann bottleneck ), Further undercutting any gains did Increases frequency might also otherwise buy. In addition, partly due to limitations in the Means of Producing inductance within solid state devices, resistance- capacitance ( RC) delays in signal transmission are growing as feature sizes shrink, imposing additional bottleneck in frequency did Increases do not address. "

Another analysis sees primarily the resistance-capacitance delays as the obstacle and predicts a maximum average annual CPU performance growth of 12.5% for the period from 2000 to 2014.

Harvard architecture

One of the most important competing architectures is the Harvard architecture, with a physical separation of instruction memory and data memory, which are accessed via separate buses and thus independently and in parallel. The advantage of this architecture is that instructions and data can be loaded or written at the same time, so the Von Neumann bottleneck can potentially be circumvented. The potential disadvantage compared to the Von Neumann architecture is one common to all parallel systems: possible race conditions between data and instruction accesses and thus non-deterministic program execution.
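
The gain from the second bus can be sketched with a toy cycle count; this is a deliberate simplification assuming every instruction needs exactly one fetch and one data access, and it ignores caches and pipelining:

```python
# Each instruction requires 1 instruction fetch + 1 data access.
program = [("LOAD", 0), ("LOAD", 1), ("STORE", 0)]

# Von Neumann: fetch and data access share one bus, so they serialize.
von_neumann_cycles = sum(2 for _ in program)  # 1 cycle fetch + 1 cycle data

# Harvard: the fetch uses the instruction bus while the data access
# uses the data bus, so the two overlap in the same cycle.
harvard_cycles = sum(1 for _ in program)

print(von_neumann_cycles, harvard_cycles)  # 6 3
```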

In addition, the physical separation of data and program makes access-rights separation and memory protection easy to implement. To prevent, for example, program code from being overwritten in the event of software errors, read-only memory (e.g., ROM, punch cards) was used for the program code (especially historically), and writable and readable memory (e.g., RAM, magnetic-core memory) for the data. Since the 1980s, individual elements of the Harvard architecture have increasingly flowed back into the usual Von Neumann computers, since a clearer separation of instructions and data has, in practice, significantly increased reliability. In particular, the dreaded buffer overflows, which are responsible for most security vulnerabilities in modern systems, are easier to control with a stricter separation of instructions and data.

A disadvantage, however, is that unneeded data memory cannot be used as program memory (and vice versa), so increased memory fragmentation occurs.
