Block code

A block code is a type of channel coding characterized by the fact that all codewords have the same number of symbols from an alphabet, for example the same number of bits.

Although block codes are often not optimal in the sense of minimal average codeword length, one often restricts attention to block codes, since the study of codes of arbitrary length is much more difficult. Linear codes are a further specialization.

Important parameters of a block code are its information rate (a measure of the amount of information contained in a fixed amount of data) and its correction rate (the Hamming distance, a measure of error resistance at a fixed amount of data). In general it is not possible to optimize both properties simultaneously, so in practice it must be decided anew for each application which block code offers the best compromise.

The tension between efficiency (a large information rate) and correction capability can also be studied by trying to maximize the total number of codewords for a given number of symbols per codeword and a given correction rate (represented by the Hamming distance d).

One generally speaks of an (n, d, q) code if

  • A is an alphabet with q symbols,
  • C ⊆ A^n is the code, and
  • d = min { d(x, y) : x, y ∈ C, x ≠ y } is the minimum distance of the code.
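
Representing codewords as equal-length strings, the minimum distance d of a candidate code can be checked directly. A small sketch (the function names are illustrative, not from the article):

```python
from itertools import combinations

def hamming_distance(x, y):
    """Number of positions in which two equal-length words differ."""
    return sum(a != b for a, b in zip(x, y))

def minimum_distance(code):
    """Smallest pairwise Hamming distance over all distinct codewords."""
    return min(hamming_distance(u, v) for u, v in combinations(code, 2))

# The binary repetition code of length 3 is a (3, 3, 2) code:
# n = 3 symbols per word, minimum distance d = 3, alphabet size q = 2.
print(minimum_distance(["000", "111"]))  # 3
```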

For linear codes one speaks of (n, k, d, q) codes, where k is the dimension of C as a vector space over the field with q elements; n and d have the same meaning as for general block codes.

One is therefore interested in Max(n, d, q) := max { |C| : C is an (n, d, q) code }, since a code attaining this maximum achieves the optimal information rate for the given n, d, and q. See the Singleton bound (MDS codes), the Hamming bound (perfect codes), the Plotkin bound, and optimal codes.
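
For very small parameters, Max(n, d, q) can be computed by exhaustively searching for a largest set of words with pairwise distance at least d. A brute-force sketch, not practical beyond toy sizes (the function name is illustrative):

```python
from itertools import product

def hamming_distance(x, y):
    """Number of positions in which two equal-length words differ."""
    return sum(a != b for a, b in zip(x, y))

def max_code_size(n, d, q):
    """Exhaustively compute Max(n, d, q) = max { |C| : C is an (n, d, q) code }.
    Exponential in q**n -- feasible only for very small n and q."""
    words = list(product(range(q), repeat=n))

    def extend(code, candidates):
        best = len(code)
        for i, w in enumerate(candidates):
            # w may join the code only if it keeps the minimum distance >= d.
            if all(hamming_distance(w, c) >= d for c in code):
                best = max(best, extend(code + [w], candidates[i + 1:]))
        return best

    return extend([], words)

print(max_code_size(3, 3, 2))  # 2  (e.g. the repetition code {000, 111})
print(max_code_size(4, 2, 2))  # 8  (the even-weight code of length 4)
```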

Definition

Formally, a code C ⊆ A^n is called a block code, where A denotes the alphabet and n the length of a codeword.

Block codes in which the information symbols stand at the beginning of the block and the check symbols at the end are called systematic block codes (see figure).
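
As an illustration, a single-parity-check code is systematic in this sense: the information symbols are kept unchanged at the start of the block and one check symbol is appended at the end. A minimal sketch (the function name is hypothetical):

```python
def encode_systematic(info_bits):
    """Systematic single-parity-check code: keep the information bits
    at the start of the block and append one parity check bit."""
    parity = sum(info_bits) % 2
    return info_bits + [parity]

print(encode_systematic([1, 0, 1]))  # [1, 0, 1, 0]
```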

Information rate for block codes

Let C ⊆ A^n be a block code and let |A| = q, i.e. the alphabet has q distinct elements. The information rate of C is then defined as R = log_q(|C|) / n.

If, for example, C is a binary code with M distinct codewords, then at least ⌈log₂ M⌉ bits are needed to distinguish M different codewords. The information rate thus relates this smallest possible number of symbols to the number of symbols actually used.

If, for example, the first k bits of a binary codeword of length n are information bits for which all 2^k theoretically possible combinations occur, then the information rate is R = k/n.
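
Under these definitions the rate can be computed directly. A small sketch (`information_rate` is an illustrative name):

```python
import math

def information_rate(code_size, n, q):
    """R = log_q(|C|) / n: information content per transmitted symbol."""
    return math.log(code_size, q) / n

# Binary code of length n = 7 whose first k = 4 bits are free information
# bits: |C| = 2^4 codewords, so R = 4/7.
print(information_rate(2 ** 4, 7, 2))  # ≈ 0.571
```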

Error correction

Block codes can be used for error detection and error correction when transmitting data over error-prone channels. The transmitter assigns each information word of length k a codeword of length n, where n > k. Adding the n − k redundancy symbols lowers the information rate, but the receiver can use the redundant information to detect and correct transmission errors.

If, for example, a binary encoding uses the repetition mapping 0 → 000 and 1 → 111, then received words with exactly one bit error can be corrected by flipping the deviating bit using a majority function: 010 is decoded as 000, 110 as 111, and so on.

If, however, two bits are wrong in this case, the word is "corrected" to the wrong codeword.
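
The triple repetition code described above can be sketched as follows; majority decoding corrects one flipped bit but is fooled by two:

```python
from collections import Counter

def encode(bit):
    """Triple repetition code: 0 -> 000, 1 -> 111."""
    return [bit] * 3

def decode(received):
    """Majority vote over the received bits: corrects any single bit
    error, but a word with two flipped bits is decoded incorrectly."""
    return Counter(received).most_common(1)[0][0]

print(decode([0, 1, 0]))  # 0  (single error, corrected)
print(decode([1, 1, 0]))  # 1  (wrong result if 000 was sent with 2 errors)
```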

  • Coding Theory