In some applications, strong error protection is required, yet the receiver cannot communicate back with the sender to report errors in the received packets, as in one-to-many (broadcast) communication. Algorithms designed for this situation use forward error correction, which adds redundant bits to the transmitted bitstream.

Repetition code

A simple example of forward error correction is the (3,1) repetition code. Each data bit is sent three times, and the value of the received bit is decided by majority vote: whichever value appears most often in a triplet is taken as the transmitted bit (see table below).


Triplet received    Interpreted as
000                 0 (error free)
001                 0
010                 0
100                 0
111                 1 (error free)
110                 1
101                 1
011                 1
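The majority-vote rule in the table above can be sketched in a few lines of Python (a minimal illustration; the function names are not from any particular library):

```python
# A minimal sketch of the (3,1) repetition code described above.

def encode(bits):
    """Send every data bit three times."""
    return [b for b in bits for _ in range(3)]

def decode(received):
    """Apply the majority-vote table: two or three ones in a triplet means 1."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

message = [1, 0, 1]
sent = encode(message)           # [1, 1, 1, 0, 0, 0, 1, 1, 1]
sent[4] = 1                      # flip one bit in the second triplet
assert decode(sent) == message   # the single error is corrected
```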


Assuming bit errors are independent and individually unlikely, the probability of two errors in a triplet is lower than the probability of only one. Selecting the most frequent bit as the correct one therefore reduces the overall error probability for the three-bit word. In this example, the transmitted bit can still be recovered if up to two bits of the triplet are erased (missing) or if up to one bit is flipped. This is known as the error-correcting code's capability.
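This improvement can be checked numerically. Assuming independent bit flips with probability p, a majority-vote decoded bit is wrong only when two or three of its three copies are corrupted, which happens with probability 3p^2(1-p) + p^3:

```python
# Residual error probability of the (3,1) repetition code,
# assuming independent bit flips, each with probability p.

def word_error(p):
    """P(two or three of the three copies are flipped)."""
    return 3 * p**2 * (1 - p) + p**3

p = 0.01
# Coded error rate is roughly 3e-4, versus 1e-2 for the uncoded bit.
assert word_error(p) < p
```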

This example is deliberately simple in order to illustrate the principle of error correction; codes of this kind can use hundreds of bits to protect only a few (for example, one hundred bits received for a single data bit, ninety-nine of them being redundancy).


There are two families of forward error correction: block codes and convolutional codes. Block codes work on blocks or packets of a predetermined number of bits and can be decoded in time polynomial in their block length. Low-density parity-check (LDPC) codes are a highly efficient form of block code, used among other places in the WiMAX communication standard. Convolutional codes, by contrast, operate on bitstreams of arbitrary length and are most often decoded with the Viterbi algorithm.


Forward error correction algorithms such as LDPC assume that errors are distributed uniformly and randomly across the bitstream. Some errors, however, occur in bursts, which can rapidly make the bitstream unintelligible even when an LDPC or another error-correcting code is used. This is why some coding schemes involve interleaving. Interleaving shuffles the bits in a way that the receiver can undo (an operation called deinterleaving), so that a burst error appears as if it were randomly spread out, as shown below.

Transmission without interleaving:

Error-free code words:            aaaabbbbccccddddeeeeffffgggg

Transmission with a burst error:  aaaabbbbccc____deeeeffffgggg

Transmission with interleaving:

Error-free code words:            aaaabbbbccccddddeeeeffffgggg

Interleaved:                      abcdefgabcdefgabcdefgabcdefg

After a burst error:              abcdefgabcd____bcdefgabcdefg

Received code words after deinterleaving: aa_abbbbccccdddde_eef_ffg_gg

The interleaved bitstream remains far more intelligible after the burst error: no code word loses more than one symbol. By spreading the symbols of each code word across the stream, interleaving turns a single long burst into several short errors that the code can handle, correcting the weakness toward burst errors from which common forward error-correcting algorithms suffer.
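The example above can be reproduced with a short sketch of a block interleaver (a 7-by-4 illustration matching the code words in the figure; the names are illustrative):

```python
# Block interleaving of seven 4-symbol code words, as in the example above.

WORDS, LEN = 7, 4  # seven code words, four symbols each

def interleave(stream):
    """Read column by column: first symbol of every word, then second, ..."""
    return ''.join(stream[w * LEN + s] for s in range(LEN) for w in range(WORDS))

def deinterleave(stream):
    """Undo the shuffle, reassembling each code word in order."""
    return ''.join(stream[s * WORDS + w] for w in range(WORDS) for s in range(LEN))

original = 'aaaabbbbccccddddeeeeffffgggg'
mixed = interleave(original)              # 'abcdefgabcdefgabcdefgabcdefg'
burst = mixed[:11] + '____' + mixed[15:]  # a four-symbol burst error
print(deinterleave(burst))                # 'aa_abbbbccccdddde_eef_ffg_gg'
```

After deinterleaving, the four-symbol burst is spread across four different code words, each of which loses only one symbol.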


Most applications of digital signal and data processing use some kind of error correction. Forward error correction is needed when backward communication toward the sender is impossible due to practical constraints of the specific application. Interleaving makes burst errors less damaging to code blocks.