Computer memory

The memory or " RAM ( engl.: core, main memory, main store, primary memory) [ ... the ] name for the memory that [ it ] contains the data needed to execute programs or program parts and just the. The main memory is a component of the central unit. Since the processor directly accesses to main memory, to a significant extent affect performance and size, the performance of the entire computer system " ( Duden computer science ).

Main memory is characterized by its access time, its data transfer rate and its size. The access time is the time until a requested data item can be read; the data transfer rate indicates how quickly data can then be read. Separate values may exist for write and read operations. Two different notations are in use for memory sizes, differing in the number base. Either the size is given to base 10 (decimal prefix; 1 kB = 10^3 = 1000 bytes, SI notation) or to base 2 (binary prefix; 1 KiB = 2^10 = 1024 bytes, IEC notation). Because random-access memories are built and addressed in a binary fashion (byte-addressed with 8-bit units, word-addressed with 16-bit units, double-word-addressed with 32-bit units, and so on), the latter is usually the more precise form.
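As a quick illustration of the two prefix conventions, the following minimal C snippet (an addition for illustration, not part of the original text) prints how far a decimal gigabyte and a binary gibibyte diverge:

```c
/* Illustration only: SI/decimal prefixes use powers of 10,
 * IEC/binary prefixes use powers of 2. */
#include <stdio.h>

int main(void) {
    unsigned long long gb  = 1000ULL * 1000ULL * 1000ULL;   /* 1 GB  (SI, base 10)  */
    unsigned long long gib = 1024ULL * 1024ULL * 1024ULL;   /* 1 GiB (IEC, base 2)  */

    printf("1 GB  = %llu bytes\n", gb);
    printf("1 GiB = %llu bytes (about %.1f%% more)\n",
           gib, 100.0 * ((double)gib / (double)gb - 1.0));
    return 0;
}
```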

The term " memory" is in common usage - different from the above-mentioned definition - sometimes for temporary space needed (eg on disk files or database tables ) used (see also page file ).

Information psychology uses the term "working memory" as a synonym for the human "short-term memory".

Basics

A computer's main memory is an area structured by addresses (in the manner of a table) that can hold binary words of fixed size. Because of the binary addressing, memories practically always have a "binary" size (one based on powers of 2); otherwise parts of the address range would remain unused.

The main memory of modern computers is volatile, i.e. all data are lost once the power supply is switched off; the main reason for this lies in the DRAM technology used. Available alternatives such as MRAM are, however, still too expensive or too slow for use as main memory.

The most common form factor for use in computers is the memory module, and different RAM types must be distinguished. Whereas memory in the form of ZIP, SIPP or DIP modules was customary in the 1980s, SIMMs with FPM or EDO RAM were mainly used in the 1990s. Today's computers primarily use DIMMs carrying, for example, SDRAM, DDR, DDR2 or DDR3 SDRAM.

History

The first computers had no main memory at all, only a few registers built with the same technology as the arithmetic unit, i.e. tubes or relays. Programs were stored on quite different media, such as punched cards, or were hard-wired.

" In computer systems of the second generation drum memory served as main memory " ( Dworatschek ). In addition, experiments were conducted in the early days with more exotic approaches, so ( supplied with ultrasonic waves ) with a maturity of storing in mercury baths or glass rod spirals. Later, magnetic core memory have been introduced that saved the information in the form of small ferrite cores. These were strung in a cross-shaped array, with each one address line and a word line in the middle of a ferrite core crossed. The memory was not volatile, but the information was lost in reading and was subsequently written back immediately by the control logic. In addition, no current was consumed, as long as the memory was not written or read. By today's standards it is very bulky and manufacturing also very expensive.

Typical mainframe computers of the mid-1960s were equipped with main memories of 32 to 64 kibibytes (then written kB), which was relatively large at the time (e.g. IBM 360-20 or 360-30); in the late 1970s, machines such as the Telefunken TR 440 had 192,000 words of 52 bits (48 bits net), i.e. more than 1 mebibyte.

The core memory as a whole was large enough to load the program currently to be executed from an external medium into memory alongside the operating system and to hold all of its data. In this model, programs and data reside in the same memory from the processor's point of view; the Von Neumann architecture, today the most widespread, had thus been introduced.

With the introduction of microelectronics, main memory was increasingly replaced by integrated circuits (chips). Each bit was stored in a bistable switch (flip-flop), which requires at least two, and with control logic up to six, transistors and thus consumes a relatively large amount of chip area. Such memories always consume power. Typical sizes were integrated circuits (ICs) with 1 Kibit, with eight ICs addressed jointly. Access times were a few hundred nanoseconds and thus faster than the processors, which were clocked at about one megahertz. On the one hand, this allowed the introduction of processors with very few registers, such as the MOS Technology 6502 or the Texas Instruments TMS 9900, which carried out most of their calculations in memory. On the other hand, it made possible home computers whose video logic used part of the main memory as video memory and could access it in parallel with the processor.

In the late 1970s, dynamic memories were developed that store the information in a capacitor and require only one additional field-effect transistor per memory bit. They can be built very small and require very little power. However, the capacitor slowly loses the information, which therefore has to be rewritten at intervals of a few milliseconds. This is done by external logic that periodically reads the memory and writes it back (refresh). Thanks to higher integration, in the 1980s this refresh logic could be built inexpensively or integrated into the processor. Typical sizes in the 1980s were 64 Kibit per IC, with eight chips addressed jointly.

The access times of inexpensive dynamic RAMs were likewise a few hundred nanoseconds and have changed little since then, but the sizes have grown to several Gibit per chip. Processors are no longer clocked in the megahertz range but in the gigahertz range, so measures are needed to shorten the average access time per bit.

For this reason, both the clock rate of the connection between memory and processor (see front-side bus) and the size of the cache have been increased.

In June 2012 it was announced that, with the so-called memory cube (English: Hybrid Memory Cube, HMC for short), a new, smaller and more powerful design for main memory was to be developed in which a stack of several dies is used. The Hybrid Memory Cube Consortium was founded specifically for this purpose, and the developers and manufacturers ARM, HP and Hynix, among others, have joined it.

Physical and virtual memory

To extend the physical main memory, modern operating systems can allocate additional virtual memory on mass storage. This memory is also called swap space.

To make this extension transparent, the operating system operates a virtual address space in which both the physical memory and the swapped-out memory are available. Parts of this virtual address space, one or more memory pages, are mapped either into the existing physical RAM or into the swap space. The usage rate of each page determines which memory pages are swapped out and exist only on mass storage and which are held in fast RAM. These functions are supported by today's CPUs, and the amount of total memory supported has grown considerably over the course of development.
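The following short C sketch (a POSIX illustration added here, not part of the original text; it assumes a Unix-like system) shows how a process requests pages of virtual address space from the operating system; physical RAM is typically assigned only when the pages are first touched:

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    long page_size = sysconf(_SC_PAGESIZE);      /* size of one memory page, typically 4096 bytes */
    size_t length  = 16 * (size_t)page_size;     /* reserve 16 pages of virtual address space     */

    void *p = mmap(NULL, length, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    memset(p, 0, length);                        /* touching the pages backs them with physical frames */
    printf("page size: %ld bytes, reserved %zu bytes of virtual memory\n",
           page_size, length);

    munmap(p, length);
    return 0;
}
```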

The swap space is a very inexpensive extension of the physical main memory, but one with extremely poor performance; a mismatch between the two memory types leads to frequent "swapping", i.e. the movement of data between mass storage and physical memory. Compared with main memory, the hard disk takes a long time, about 20 milliseconds, to deliver the data. The access time of main memory, by contrast, is only about 25 nanoseconds, roughly one eight-hundred-thousandth of that.

Cache

Accesses of the main processor to the main memory are usually routed through one or more buffer memories ("caches"). The cache holds the most frequently accessed memory areas and serves them on behalf of the original main-memory areas. The cache is very fast relative to other memories because it is connected as directly as possible to the processor (or, in modern processors, sits directly on the die). However, it is usually only a few megabytes in size.

Programs, or parts of them, with a small memory footprint can run almost entirely in the cache without the main memory having to be addressed.

The cache is implemented as content-addressable memory, i.e. it decides whether the data for an address are already held in the cache or still have to be fetched from main memory; if they have to be fetched, another part of the cache is replaced. The cache is always filled with several consecutive words, for example always with at least 256 bits (so-called burst mode), because it is very likely that data immediately before or after the currently requested data will also be needed soon.
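The effect of these whole-line transfers can be made visible with a small C sketch (an added illustration; actual timings depend on the CPU and its cache sizes). Reading only every 64th byte of a large array performs 64 times fewer accesses than a full pass, but it is usually far less than 64 times faster, because each access still pulls an entire cache line from main memory:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define N       (64u * 1024u * 1024u)   /* 64 MiB, larger than typical caches */
#define STRIDE  64                      /* roughly one cache line in bytes    */

int main(void) {
    unsigned char *a = malloc(N);
    if (!a) return 1;
    memset(a, 1, N);                    /* make sure the pages are actually allocated */

    volatile unsigned long sum = 0;

    clock_t t0 = clock();
    for (size_t i = 0; i < N; i++)          sum += a[i];   /* full pass: N accesses            */
    clock_t t1 = clock();
    for (size_t i = 0; i < N; i += STRIDE)  sum += a[i];   /* strided: N/64 accesses, but each */
    clock_t t2 = clock();                                  /* one fetches a whole cache line   */

    printf("sequential: %.3f s, strided: %.3f s\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC);
    free(a);
    return 0;
}
```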

Performance of memory modules

The performance of memory modules (clock rate and timing behaviour, English "timing") is measured mainly by the absolute latency. The theoretical bandwidth is relevant only for burst transfers.

A common misconception is that numerically higher timings result in worse performance. However, this holds only at the same clock rate, because the absolute latency results from the combination of the (effective) clock rate and the timing.

Examples

Calculation

Formula:

    absolute latency in nanoseconds = (CAS latency in clock cycles × 2000) / effective data rate in MT/s

Example:

    DDR-400 CL2.5:  (2.5 × 2000) / 400  = 12.5 ns
    DDR3-1600 CL9:  (9 × 2000) / 1600   = 11.25 ns

The consequence is that DDR2 and DDR3 SDRAM can in part be faster than DDR SDRAM and provide a much higher bandwidth, even though they have higher timings.
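The same calculation can be expressed as a tiny helper function; the following C sketch (an added illustration, not taken from the source) simply applies the formula above to a few common module types:

```c
#include <stdio.h>

/* absolute latency in ns = CAS latency in clock cycles * 2000 / effective data rate in MT/s */
static double latency_ns(double cas_cycles, double data_rate_mts) {
    return cas_cycles * 2000.0 / data_rate_mts;
}

int main(void) {
    printf("DDR-400   CL2.5: %.2f ns\n", latency_ns(2.5, 400.0));   /* 12.50 ns */
    printf("DDR2-800  CL5  : %.2f ns\n", latency_ns(5.0, 800.0));   /* 12.50 ns */
    printf("DDR3-1600 CL9  : %.2f ns\n", latency_ns(9.0, 1600.0));  /* 11.25 ns */
    return 0;
}
```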

Some memory vendors do not adhere to the official JEDEC specifications and offer modules with considerably higher clock rates or tighter timings. While DDR3-1600 CL9-9-9 is an official specification, DDR2-1066 CL4-4-4-12, for instance, is not a standards-compliant memory module. Such faster memory is often sold as modules for overclockers. Because DDR3 is still comparatively new, significantly faster memory modules are expected to appear in the near future thanks to continuous improvements in production processes; initially, however, these operate outside the official specification. JEDEC may later include such memory modules in the official specification, but this often happens only years after their first availability.

Practice

In practice, Intel processors with FSB1333 can receive at most about 10 GiB/s of data over their front-side bus. In the usual dual-channel mode, this is already saturated by two DDR2-667 memory modules (10.6 GiB/s). Current processors from AMD and Intel's Nehalem ("i7") processors are not subject to this restriction, because their memory controller sits directly on the CPU and not, as with Socket 775 systems, in the northbridge.
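The peak figures quoted above follow from a simple calculation; the C sketch below (an added illustration using theoretical values, real throughput is lower) multiplies the data rate by the bus width and the number of channels:

```c
#include <stdio.h>

/* peak bandwidth = data rate in MT/s * bus width in bytes * number of channels,
 * given here in MB/s (10^6 bytes per second) */
static double peak_mb_per_s(double mt_s, int bus_bytes, int channels) {
    return mt_s * bus_bytes * channels;
}

int main(void) {
    printf("FSB1333, 64-bit bus   : %.0f MB/s\n", peak_mb_per_s(1333.0, 8, 1)); /* about 10.7 GB/s */
    printf("DDR2-667, dual channel: %.0f MB/s\n", peak_mb_per_s(667.0,  8, 2)); /* about 10.7 GB/s */
    return 0;
}
```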

Connection of the working memory

Physical main memory is connected via the address bus and the data bus. The data bus performs the actual data transfer; in all current PC architectures, 64 bits are transferred at once. The address bus selects the requested memory cells; its width (in bits) determines the maximum amount of memory a processor can address. In today's systems, each address always holds one byte of 8 bits (not just a single bit), so an n-bit address bus yields an address space of 2^n bytes, for example 64 KiB with 16 bits, 1 MiB with 20 bits, 4 GiB with 32 bits and 16 EiB with 64 bits.

One of the main differences between the two current PC processor generations, "32-bit" and "64-bit", is therefore this maximum addressable memory, which can, however, be extended somewhat beyond the usual limit with the help of Physical Address Extension. The number of bits of a processor generation generally refers to the width of the data bus, which does not necessarily coincide with the width of the address bus; yet it is the width of the address bus alone that determines the size of the address space. For this reason the "16-bit" processor 8086, for example, can address not just 64 KiB (theoretical 16-bit address bus) but 1 MiB (actual 20-bit address bus).
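The relation between address-bus width and address space can be written down in a few lines; the following C sketch (an added illustration) simply evaluates 2^n bytes for a few bus widths:

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    int widths[]        = { 16,       20,      32,      36,       64 };
    const char *human[] = { "64 KiB", "1 MiB", "4 GiB", "64 GiB", "16 EiB" };

    for (int i = 0; i < 5; i++)
        printf("%2d-bit address bus -> %.0f bytes (%s)\n",
               widths[i], ldexp(1.0, widths[i]), human[i]);   /* ldexp(1, n) = 2^n */
    return 0;
}
```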

The bus between the cache and the main memory of modern computers is fast, i.e. it runs at a high clock rate and transfers data on both the rising and the falling clock edge (DDR: double data rate). It is synchronous and has a large word width, for example 64 bits per address. If several memory slots on a PC motherboard are populated, consecutive addresses are stored in different slots. This allows overlapping (interleaved) access during burst accesses.

Within the memory chips, entire rows are buffered in shift registers. A 1-Mibit chip can, for example, have 1024 rows of 1024 bits each. On the first access, a fast internal 1024-bit register is filled with the data of one row. For burst accesses, the data of the following addresses are then already in the shift register and can be read from it with a very short access time.

These measures make it possible to reduce the average access time per bit drastically, to well below one nanosecond, while the access to one particular bit in memory still takes about 25 nanoseconds.

It therefore makes sense not to transmit only the requested bit to the processor but an entire so-called "cache line", which today is 512 bits (see processor cache).

Manufacturers

The largest memory chip manufacturers are:

  • Nanya
  • Hynix Semiconductor
  • Micron Technology
  • Promos
  • Samsung
  • Toshiba
  • Winbond
  • Etron
  • Intel
  • Kingston

These producers together hold a market share of 97 percent. Suppliers of memory modules such as Corsair, Kingston Technology, MDT, OCZ, A-Data, etc. (so-called third-party manufacturers) buy chips from these producers and solder them onto boards of their own design. They also program the SPD timings according to their own specifications, which can be set considerably more aggressively than by the original equipment manufacturers.

For dual-channel operation with two modules from these third-party manufacturers, care should be taken to buy matched module pairs (so-called kits), lest the BIOS refuse operation because the manufacturer has meanwhile soldered different chips onto its modules. With modules from the original manufacturers this is not necessary, since their specifications comply with the current data sheets, which, among other things, define and identify the SPD timings. Such modules therefore always operate in dual-channel mode, even if they come from different sources.

As intermediaries between the large memory chip and module manufacturers on the one hand and retailers and consumers on the other, providers such as CompuRAM and Kingston have established themselves in Germany, for example, offering memory modules specified for the most common systems. This is necessary because some systems, owing to artificial restrictions imposed by the manufacturer, work only with memory that meets proprietary specifications. These specifications are not intended to improve performance but merely to create incompatibility with ordinary memory that meets only the JEDEC specifications, in order to earn money from the licences.
