Decoding methods

In information theory and coding theory, more precisely in channel coding, a decoding rule is a prescription that assigns a sent word to each received word.

Description

The problem here is the following: a transmitter (source) S sends a word c of n characters, consisting of letters of an alphabet A with q letters. The transmission passes over a noisy channel, so individual letters can be changed by transmission errors. The receiver therefore receives a possibly altered word. Its goal is to determine the word that was actually sent.

For the mathematical treatment, the following simplifying assumptions are almost always made:

  • The channel is discrete: the output signal can take on only finitely many different values.
  • The channel is memoryless: the error probability of a character does not depend on which characters were sent previously.
  • The channel is symmetric: the probability that a sent letter arrives unchanged is the same for all characters, and the probabilities of all errors are equal. In formulas: P(a | a) = 1 − p for all a in A and P(b | a) = p/(q − 1) for all b ≠ a, where P(b | a) denotes the probability that b is received if a has been sent.
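
These assumptions can be made concrete with a minimal simulation sketch in Python (the function symmetric_channel and its parameters are illustrative names, not taken from the original):

    import random

    def symmetric_channel(word, alphabet, p):
        # Send a word over a discrete, memoryless, symmetric channel.
        out = []
        for ch in word:
            # Memoryless: each character is corrupted independently with
            # probability p, regardless of what was sent before it.
            if random.random() < p:
                # Symmetric: each of the q - 1 wrong letters is equally
                # likely to replace the sent one.
                out.append(random.choice([a for a in alphabet if a != ch]))
            else:
                out.append(ch)
        return "".join(out)

    # Example: transmit 000 over a binary channel with error probability 0.05.
    print(symmetric_channel("000", "01", 0.05))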

To ensure reasonably error-free decoding, redundancy is usually deliberately added to the words by using error-correcting codes.

Minimum Error Decoding

In minimum error decoding, one tries to find the word that was most likely transmitted. This depends on two main factors: on the one hand, the error probability of the channel; on the other hand, the entropy of the source, that is, whether the transmitted words are uniformly distributed and whether they depend on one another. Minimum error decoding is always the optimal decoding rule, but it is generally difficult to determine.

Example: Let the alphabet be binary, A = {0, 1}, and let the discrete memoryless symmetric channel have error probability p < 1/2. As the code, the binary repetition code of length 3 is used: C = {000, 111}. Then the probability is

  • For no transmission error: (1 − p)³
  • For one transmission error: 3p(1 − p)²
  • For two transmission errors: 3p²(1 − p)
  • For three transmission errors: p³

Suppose that, on statistical average, 000 is sent three times as often as 111, and that the word 100 is now received. Consider the corresponding probabilities:

P(Y = 100 | X = 000) = p(1 − p)²
P(Y = 100 | X = 111) = p²(1 − p)

On the other hand, the probabilities of what is sent are:

P(X = 000) = 3/4,  P(X = 111) = 1/4

Here X is the random variable for the transmitted word and Y the random variable for the received word. By Bayes' theorem, the probability that 000 was sent is:

P(X = 000 | Y = 100) = P(Y = 100 | X = 000) · P(X = 000) / P(Y = 100) = (3/4) · p(1 − p)² / P(Y = 100)

and the probability that 111 was sent is:

P(X = 111 | Y = 100) = P(Y = 100 | X = 111) · P(X = 111) / P(Y = 100) = (1/4) · p²(1 − p) / P(Y = 100)

The ratio of these two probabilities is 3(1 − p)/p, which is greater than 1 for every p < 3/4. Thus, in this case, 100 is decoded to 000.
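
The computation can be reproduced with a short Python sketch (a minimal illustration; the concrete value p = 0.05 is a hypothetical choice, since the section leaves p symbolic):

    # Minimum error decoding of the received word 100 for C = {000, 111}.
    p = 0.05                              # hypothetical error probability
    prior = {"000": 3 / 4, "111": 1 / 4}  # 000 is sent three times as often

    def likelihood(received, codeword):
        # P(Y = received | X = codeword) on a binary symmetric channel:
        # each differing position contributes p, each matching one 1 - p.
        prob = 1.0
        for r, c in zip(received, codeword):
            prob *= p if r != c else 1 - p
        return prob

    received = "100"
    # Posteriors up to the common factor 1 / P(Y = 100), which does not
    # affect which codeword maximizes the expression.
    scores = {c: likelihood(received, c) * prior[c] for c in prior}
    print(max(scores, key=scores.get))  # prints: 000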

Maximum Likelihood Decoding

In maximum likelihood decoding, a received word is decoded to the (code) word that most likely generated it, that is, to a codeword c maximizing P(y | c) for the received word y. In contrast to minimum error decoding, maximum likelihood decoding is relatively easy to implement. Under the standard assumption of a discrete memoryless symmetric channel with error probability p < 1/2, the codeword is selected that differs from the received word in the fewest positions, i.e., the one with the least Hamming distance to it.
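
On such a channel, maximum likelihood decoding therefore reduces to a nearest-codeword search. A minimal sketch (function names are illustrative):

    def hamming_distance(u, v):
        # Number of positions in which two equally long words differ.
        return sum(a != b for a, b in zip(u, v))

    def ml_decode(received, code):
        # On a discrete memoryless symmetric channel with p < 1/2,
        # maximizing P(received | c) is the same as minimizing the
        # Hamming distance between received and c.
        return min(code, key=lambda c: hamming_distance(received, c))

    print(ml_decode("100", ["000", "111"]))  # prints: 000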

Maximum likelihood decoding is the decoding rule most commonly used in coding theory.

Under the assumptions that the source sends its characters/words memorylessly and uniformly distributed, and that the channel is discrete, symmetric and memoryless, maximum likelihood decoding is also a minimum error decoding. For this reason, an entropy coding of the data to be transmitted is often carried out, for example by compressing them blockwise, before the error-correcting coding is applied.

In the example above, maximum likelihood decoding accordingly yields the decoded word 000 as well, since 100 has Hamming distance 1 to 000 but distance 2 to 111. If the source sends 000 and 111 equally often, decoding by maximum likelihood is also decoding by minimum error.
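
This equivalence is easy to check concretely: with a uniform prior, the prior factor is the same for every codeword, so maximizing likelihood times prior picks the same codeword as maximizing the likelihood alone. A self-contained sketch (again with a hypothetical p = 0.05):

    from math import prod

    p = 0.05
    code = ["000", "111"]

    def likelihood(y, c):
        return prod(p if a != b else 1 - p for a, b in zip(y, c))

    y = "100"
    ml = max(code, key=lambda c: likelihood(y, c))
    # Uniform source: the prior 1 / len(code) is a common constant factor,
    # so the minimum error (maximum posterior) choice coincides with ml.
    me = max(code, key=lambda c: likelihood(y, c) * (1 / len(code)))
    print(ml, me)  # prints: 000 000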
