Shannon–Hartley theorem

In communications engineering, the Shannon–Hartley theorem describes the theoretical upper limit of the bit rate of a transmission channel, as a function of bandwidth and signal-to-noise ratio, up to which error-free data transmission is possible with a certain probability. It is named after Claude Elwood Shannon and Ralph Hartley.

In practice, the achievable bit rate is influenced by channel characteristics such as capacity and by methods such as channel coding. The Shannon–Hartley theorem gives the theoretical maximum that could be reached with a hypothetically optimal channel coding, without providing information about which method achieves this optimum.


Introduction

Over an ideal transmission channel, data could theoretically be transmitted in unlimited quantities. Real channels, however, are both limited in their bandwidth and subject to disturbances such as interference, thermal noise, and the finite resistance of the conductors, so the maximum transmission rate is bounded. The transfer rate is limited by two factors:

Put simply, the bandwidth determines how often the voltage on a cable can be changed per unit time, and the signal-to-noise ratio determines how many different voltage levels can still be reliably distinguished at the receiver.

More precisely, the Shannon–Hartley theorem expresses the fact that, for a transmission channel disturbed by interference such as noise, error-free data transmission by means of channel coding is possible with probability δ > 0 if the realized bit rate CR is less than the limit CS given by the Shannon–Hartley theorem. No statement is made about the actual channel coding or the technical process by which this can be achieved. If the realized bit rate CR exceeds the limit CS, the probability of error-free transmission is δ = 0, meaning that no error-free data transmission is possible, regardless of the method used.

Mathematical Description

Noise-free transmission channel

The maximum data transmission rate CN of an undisturbed transmission channel with bandwidth B is given by:

CN = 2 · B

Here B is the bandwidth in hertz and CN the data transmission rate in symbols per second (baud).

With a binary symbol alphabet of only two characters, the bit rate, measured in bit/s (bps), equals the symbol rate, measured in baud. If L symbols are available, each symbol can represent ld(L) bits:

CN = 2 · B · ld(L)

Here the term ld(·) denotes the base-2 logarithm.

Example: with a bandwidth of 1000 Hz, a maximum of 2000 baud can be transmitted. If each symbol consists of one bit, such as "0" or "1", this yields a data rate of 2000 bit/s. With the 26 characters of the German alphabet (no special characters), the data rate is larger by a factor of ld(26), about 9400 bit/s. By choosing sufficiently many different symbols, an arbitrarily high bit rate can be achieved over a noise-free, band-limited transmission channel.
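These numbers can be checked directly with the formula CN = 2 · B · ld(L). A minimal Python sketch (the function name is illustrative):

```python
import math

def noisefree_bitrate(bandwidth_hz, num_symbols):
    """Maximum bit rate of an undisturbed channel:
    2*B symbols per second, ld(L) bits per symbol."""
    return 2 * bandwidth_hz * math.log2(num_symbols)

# Binary alphabet: 1 bit per symbol
print(noisefree_bitrate(1000, 2))    # 2000.0 bit/s
# 26-letter alphabet: ld(26) ~ 4.7 bits per symbol
print(noisefree_bitrate(1000, 26))   # ~9400 bit/s
```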

Transmission channel with noise

Claude Shannon generalized this theorem. For a channel perturbed by additive white Gaussian noise, abbreviated AWGN channel, the Shannon–Hartley theorem takes the following form:

CS = B · ld(1 + S / N)

CS represents the maximum bit rate in this channel model (bits per second), and S the signal power. The parameter N represents the noise power; the ratio S / N is also referred to as the signal-to-noise ratio (SNR). The noise power N can be expressed as the product of the noise power spectral density N0 and the bandwidth B: N = N0 · B.

The maximum possible bit rate represents an upper limit under the assumption of white noise. For channel models that do not correspond to the AWGN channel, and for varying spectral noise power densities, other relations arise.

Example: a line with an SNR of 20 dB and an available bandwidth of 1000 Hz can carry a maximum of about 6.7 kbit/s.

Calculation: (1) conversion of the SNR to S / N: SNR = 10 · log10(S / N) → 20 dB = 10 · log10(x) ↔ 2 = log10(x) ↔ x = 10² = 100 → S / N = 100

(2) calculation of the transmission capacity: CS = B · ld(1 + S / N) = 1000 Hz · ln(1 + 100) ÷ ln(2) = 1000 · ln(101) ÷ ln(2) bit/s ≈ 6658 bit/s ≈ 6.7 kbit/s

In practice, this bit rate can nearly be achieved by appropriate channel coding, for example with turbo codes.
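The worked example can be reproduced in a few lines. A Python sketch (`shannon_capacity` is an illustrative name, not a library function):

```python
import math

def shannon_capacity(bandwidth_hz, snr_db):
    """AWGN channel capacity C_S = B * ld(1 + S/N), with the SNR given in dB."""
    snr_linear = 10 ** (snr_db / 10)   # e.g. 20 dB -> S/N = 100
    return bandwidth_hz * math.log2(1 + snr_linear)

print(shannon_capacity(1000, 20))  # ~6658 bit/s, i.e. ~6.7 kbit/s
```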


Limits

If, at constant signal power S and constant spectral noise power density N0, only the bandwidth B is increased, the maximum possible bit rate CS can only be raised up to the limit

lim (B → ∞) CS = S / (N0 · ln 2) ≈ 1.44 · S / N0

This means that even an infinitely extended bandwidth B of the transmission channel can increase the maximum possible bit rate only up to this bound.
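The approach to this bound can be observed numerically. The following Python sketch uses assumed example values for S and N0:

```python
import math

# Assumed example values for signal power S and noise power density N0
S, N0 = 1.0, 0.001

def cap(B):
    """C_S = B * ld(1 + S / (N0 * B)) for the AWGN channel."""
    return B * math.log2(1 + S / (N0 * B))

limit = S / (N0 * math.log(2))   # lim_{B -> inf} C_S = S / (N0 ln 2)

for B in (10.0, 1000.0, 100000.0):
    print(B, cap(B))   # grows with B but never exceeds `limit` (~1442.7 here)
```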

In real transmission systems, the energy Eb that must be expended for the transmission of one bit can vary. Transmitting one bit requires the signal power S for the duration of one bit, so Eb = S / CR. The actual bit rate CR is always less than the maximum possible bit rate CS:

CR < B · ld(1 + (Eb / N0) · (CR / B))

This equation describes the Shannon limit as a function of Eb/N0. The ratio x = CR / B describes the number of bits per second per hertz of bandwidth that can be transmitted with a given transmission technique as a function of the SNR; it is referred to as the spectral efficiency. In the illustration on the right, the curves of different transmission methods are shown in blue, red, and green.

As a limiting case, equating the above inequality and letting the spectral efficiency x approach 0 bit/s per Hz of bandwidth yields the lower limit of the SNR:

Eb / N0 > ln 2 ≈ 0.693 ≈ −1.59 dB

This expresses the fact that below a ratio of Eb/N0 = −1.6 dB no error-free data transmission is possible in an AWGN channel. At this limit, what matters is not the ratio S / N but the minimum energy Eb, relative to the spectral noise power density N0, that must be expended to transfer the information content of one bit. In the diagram on the right, which shows the Eb/N0 ratios of different channel codings, this limit is drawn as a vertical black line at the right border.

This limit applies to small spectral efficiencies x << 1. Such signals are referred to as power-limited signals: the bandwidth is large, but the available power is limited. Communication with spacecraft, for example, is power-limited, with the radiation spread over a large frequency band. This section of the Shannon limit is shown in black in the figure on the right.

For spectral efficiencies x >> 1, on the other hand, the bandwidth B is the limiting factor; such signals are referred to as band-limited signals. Typical band-limited signals are, for example, terrestrial digital radio links with spectrally efficient modulation techniques such as 1024-QAM.
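Both regimes can be illustrated by evaluating the threshold Eb/N0 = (2^x − 1) / x that follows from equating the inequality above. A Python sketch (the chosen values of x are arbitrary examples):

```python
import math

def ebn0_min_db(x):
    """Minimum Eb/N0 (in dB) for error-free transmission at
    spectral efficiency x = C_R / B, from Eb/N0 = (2**x - 1) / x."""
    return 10 * math.log10((2 ** x - 1) / x)

print(ebn0_min_db(1e-6))  # ~ -1.59 dB: the Shannon limit (power-limited regime)
print(ebn0_min_db(10))    # ~ 20.1 dB: the band-limited regime
```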

Shannon's geometric-stochastic approach

In "Communication in the presence of noise", Claude Elwood Shannon modeled the transmission channel as a real vector space. Each transmitted symbol is a coordinate in this vector space. Since over time any number of symbols can be transmitted, the vector space is infinite-dimensional. Each coordinate corresponds to a base signal, i.e. a real-valued function of time. For the sake of a simple model, the base signals are repeated periodically, with the copies differing only by a time shift. For example, the k-th base signal shifted by the time nT could be identical to the (k + nD)-th base signal, where D is the number of "elementary" base signals whose sequence repeats with period T. One can then say that in the period nT a number of nD symbols can be transmitted.

It is assumed that the base signals associated with the coordinates are mutually orthogonal and together span an orthonormal basis of the signal vector space. Any signal is then an (infinite) linear combination of these basis signals. The coefficients of this linear combination, which correspond to the transmitted symbols, can be recovered by forming the dot products of the signal with the base signals.
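The recovery of coefficients by dot products can be sketched in a finite-dimensional toy example. The 4-dimensional orthonormal basis below is an illustrative stand-in for the infinite base-signal system:

```python
# Recover symbols (coefficients) from a signal by taking dot products
# with an orthonormal basis. This Walsh/Hadamard-like basis of R^4
# stands in for the infinite system of base signals.
basis = [
    [0.5,  0.5,  0.5,  0.5],
    [0.5, -0.5,  0.5, -0.5],
    [0.5,  0.5, -0.5, -0.5],
    [0.5, -0.5, -0.5,  0.5],
]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

symbols = [3.0, -1.0, 0.5, 2.0]                      # transmitted coefficients
signal = [sum(c * b[i] for c, b in zip(symbols, basis))
          for i in range(4)]                         # linear combination

recovered = [dot(signal, b) for b in basis]          # dot products with the basis
print(recovered)                                     # -> [3.0, -1.0, 0.5, 2.0]
```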

In the example of band-limited transmission channels that guides the theory, the symbol rate is limited by the maximum frequency W to 2W. In a time interval T of finite length, only a finite number D of symbols is transmitted. These span a subspace of dimension D in the signal vector space. According to the sampling theorem, the maximum dimension D = 2WT is assumed.

Examples of base signals

The following lists some mathematically idealized transmission channels together with systems of basis functions that satisfy the above assumptions for a signal vector space. All of these are band-limited; besides the "elementary" baseband channel, systems of base signals can also be specified for channels with a non-zero minimum frequency.

Cardinal series

Shannon used as the simplest signal model the baseband signals with a maximum frequency W. By the WKS sampling theorem (for Whittaker–Kotelnikov–Shannon, see Nyquist–Shannon sampling theorem), exactly 2WT symbols can be transmitted over this channel in the period T; the base signals are the sinc functions

fn(t) = sinc(2Wt − n) = sin(π(2Wt − n)) / (π(2Wt − n)),

n = ..., −1, 0, 1, .... Each of these has its maximum at its center t = n/(2W); the symbol rate is thus 2W. This orthonormal system is the ideal theoretical model of the frequency-limited PCM method (pulse code modulation).
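The interpolation property of the sinc base signals can be checked numerically: sampling the cardinal series at t = k/(2W) reproduces the k-th symbol, because sinc(k − n) is 1 for k = n and 0 otherwise. A small Python sketch with assumed values for W and the symbols:

```python
import math

def sinc(x):
    """sin(pi x) / (pi x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

W = 4.0                                  # assumed maximum frequency in Hz
coeffs = [1.0, -2.0, 0.5, 3.0]           # transmitted symbols

def signal(t):
    """Cardinal series: sum of sinc base signals weighted by the symbols."""
    return sum(c * sinc(2 * W * t - n) for n, c in enumerate(coeffs))

# Sampling at t = k/(2W) recovers the k-th symbol.
samples = [signal(k / (2 * W)) for k in range(len(coeffs))]
print(samples)   # ~ [1.0, -2.0, 0.5, 3.0] up to floating-point rounding
```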

QAM

The ideal QAM system (quadrature amplitude modulation) transmits data at the symbol rate W in the frequency band [F − W/2, F + W/2]. The central carrier frequency F must be an integer multiple of the bandwidth W. The symbols here are complex numbers, i.e. points in the Gaussian plane; thus again 2WT real numbers are transmitted in the period T. For each complex symbol there must be two basis functions, which can be combined into one complex-valued function:

n = ..., −1, 0, 1, .... Each signal is then obtained as a sum over these basis functions, with the transmitted complex symbols as coefficients.

OFDM

The ideal OFDM system (orthogonal frequency-division multiplexing) transmits, at the symbol rate W/M, a complex vector of dimension M in a frequency band of width W containing the carrier frequency F. F must be an integer multiple of the symbol rate W/M. There must therefore be 2M real-valued base signals per vector-valued symbol, which can be combined into M complex functions

j = 0, ..., M − 1; n = ..., −1, 0, 1, ...

Since the sinc function cannot be realized technically, other solutions must be found. Frequency filtering destroys the orthogonality of the base signals, causing mutual interference within a symbol (ICI) and between symbols (ISI). If the clock rate of the signal generation is increased without increasing the data rate, the extra freedom can be used to shape a frequency-limited signal even without filtering. One variant of this uses wavelet packet trees.
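The orthogonality that the ideal system relies on can be illustrated with sampled complex subcarriers, as in the discrete (DFT-based) formulation of OFDM. A Python sketch (M = 8 is an arbitrary example):

```python
import cmath

M = 8  # number of subcarriers (example value)

def subcarrier(k):
    """The k-th complex subcarrier, sampled at M points per symbol."""
    return [cmath.exp(2j * cmath.pi * k * n / M) for n in range(M)]

def inner(u, v):
    """Normalized inner product <u, v> = sum u[n] * conj(v[n]) / M."""
    return sum(a * b.conjugate() for a, b in zip(u, v)) / M

# Distinct subcarriers are orthogonal; each has unit norm.
print(abs(inner(subcarrier(2), subcarrier(5))))  # ~ 0
print(abs(inner(subcarrier(3), subcarrier(3))))  # ~ 1
```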

Transmission over the noisy channel

Let the real base signals be numbered with a single index, and let a period T be fixed so that D = 2WT base signals lie within that period. Smooth, band-limited noise on the transmission channel can be modeled by linear combinations of these same base signals with normally distributed, independent random coefficients of variance N.

A code of length D, i.e. a tuple of D real numbers, is sent as a continuous signal. During transmission the interference is superimposed linearly, so the received signal is the disturbed code.

Geometry of the signal points

Let the signal be limited to an average power P; power corresponds directly to the squared amplitude. This is permissible because in the end only ratios of different powers are compared, so constant factors cancel. Since the base signals are orthonormal, the power of the continuous signal is determined by the sum of the squares of its coefficients, which equals DP.

In other words, the code is a point on a D-dimensional sphere with radius √(DP).

By the law of large numbers, the sum of the squares of D independent errors is close to its expected value DN. Thus the received code lies with very high probability within a sphere of radius √(DN) centered at the transmitted code. Since the disturbances are assumed independent of the signal, the sum of the squares of the received code is most probably close to its expected value D(P + N), i.e. near the sphere of radius √(D(P + N)) around the origin.
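The concentration of the noise around the sphere of radius √(DN) can be checked by simulation. A Python sketch with assumed values for D and N:

```python
import math
import random

random.seed(1)
D = 20000      # dimension (number of base signals in the period)
N = 2.0        # noise variance per coefficient

noise = [random.gauss(0.0, math.sqrt(N)) for _ in range(D)]
sq_sum = sum(e * e for e in noise)

# By the law of large numbers the squared norm concentrates near D*N,
# i.e. the noise vector lies close to a sphere of radius sqrt(D*N).
print(sq_sum / (D * N))   # ~ 1.0
```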

Random configuration

Fix a configuration of M = 2^(DB) randomly selected codes of mean power P, which correspond to M different digital messages; that is, DB bits are encoded by D base signals, or B bits per base signal.

Of the small balls with radius √(DN) around the codes of the configuration, at most (√(D(P + N)) / √(DN))^D = (1 + P/N)^(D/2) disjoint ones fit into the large ball of receivable signals. For the maximum bit rate this means (with D/2 = WT) that 2^(DB) ≤ (1 + P/N)^(D/2), i.e. the bit rate DB/T is at most W · ld(1 + P/N).
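This sphere-packing count can be written out directly. A Python sketch (the function names are illustrative); note that the resulting bound matches the earlier AWGN example:

```python
import math

def max_codewords(D, P, N):
    """Upper bound on the number of distinguishable codes: the volume ratio
    (sqrt(D*(P+N)) / sqrt(D*N))**D = (1 + P/N)**(D/2)."""
    return (1 + P / N) ** (D / 2)

def max_bitrate(W, P, N):
    """With D = 2*W*T codes per period T, ld(max_codewords)/T
    evaluates to W * ld(1 + P/N) bit/s."""
    return W * math.log2(1 + P / N)

print(max_codewords(2, 100.0, 1.0))   # 101.0 for D = 2
print(max_bitrate(1000, 100.0, 1.0))  # ~6658 bit/s
```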

Estimation of the transmission error

For very large D, the transmitted codes lie on a sphere of radius √(DP), and the received code lies with high probability within a sphere of radius r = √(DN) around the transmitted code and on the sphere of radius √(D(P + N)). One can therefore compare the received code with all codes of the configuration in order to determine which one has a distance smaller than r to it.

The error sphere of radius r centered at the received code covers a region on the sphere of the transmitted codes, which in turn lies within a sphere of radius √(D · PN / (P + N)). The probability that a random code lies outside this region is therefore greater than

1 − (N / (P + N))^(D/2)

The probability that all M − 1 codes of the configuration other than the transmitted one lie outside this region is then greater than

(1 − (N / (P + N))^(D/2))^(M−1) ≥ 1 − (M − 1) · (N / (P + N))^(D/2)

If the error probability is to remain below e, i.e. the above expression is to be greater than 1 − e, rearranging yields for the achievable bit rate

CR ≤ W · ld(1 + P/N) + ld(e) / T

The second term is negative and very large in absolute value for small e; its contribution can nevertheless be made arbitrarily small if the period T, and with it the cardinality M of the configuration, is chosen large enough.

Thus, with increasing length of the signals in the configuration, the bit rate can be brought arbitrarily close to the ideal bit rate. However, the rapidly growing cost of managing the configuration and of searching for the code most resembling the received signal stands in the way of a direct practical application.
