HyperCube Representation of Reed-Muller Codes

by tuebui • October 12, 2014

Well, as I promised in the previous post about the HyperCube representation of Hamming codes, I'm going to introduce a method that uses hypercubes to represent Reed-Muller (RM) codes (Wikipedia). Unlike the Hamming code post, this is not just for fun: the representation actually yields an improvement on the algorithm to encode/decode the generalized binary Reed-Muller code RM(r,m). This algorithm is as fast as the Fast Hadamard Transform (FHT) method widely used in LTE networks nowadays, which has a complexity of O(n log n) for a code of length n = 2^m. However, while the FHT algorithm is applicable only to decoding the first-order code RM(1,m), this algorithm works with RM(r,m) for any order r as well.

In fact, I don’t consider this a new algorithm so much as an improvement of the traditional decoding method, even though it has a different encoder and decoder that intuitively map code bits to hypercubes. The reason is that it uses the same concepts of majority-logic decoding and “peeling off” order by order to reach the final result. The performance improvement comes from the difference between the linear-algebra-based and geometry-based viewpoints: the geometric one knows exactly where to manipulate the 1 bits without looping through the redundant 0 bits involved in the matrix multiplications of linear algebra.

However, in this post I will primarily demonstrate how to take advantage of the hypercube’s unique graph to design a coding algorithm. So if you don’t know much about Reed-Muller codes, don’t be afraid, as I won’t say much about them yet, at least in this post. The performance evaluation and the comparison to other algorithms will come in a later post. For the time being, you could browse the source code or spend some time evaluating its performance yourself. Source code in C++: https://github.com/bomot113/ECC-HyperCube

Here’s what I’m going to cover:

1. UnitCube(s) and their ParallelCube(s)
2. Why hypercubes?
3. CubeCode(4,2) = First Order RM Codes RM(1,4)

1. UnitCube(s) and their ParallelCube(s):

• Unit Cube: a unit cube in a hypercube of m dimensions (m-dim) is a sub-hypercube of k dimensions (k ≤ m) that contains the root vertex 11…1 (all ones).

The example below is a 4-dim hypercube and its unit cubes. Let’s say 1x1x is a unit cube in that 4-dim hypercube. It has 4 elements, 1010, 1011, 1110, and 1111, obtained by replacing the x(s) with binary 0 or 1. 1x1x is a square that contains the root vertex 1111 and has 2 dimensions (the number of x(s) in its name). 1x1x has 2 fixed bits 1, at bit indexes 0 and 2 (counting from the left).

A non-unit cube is a sub-hypercube that doesn’t contain the root vertex, so it has at least one fixed bit 0. For instance: 110x, 01xx, etc.

| Level/Dim | Unit Cube | Cube Index (binary) | Cube Index (decimal) |
|---|---|---|---|
| 0 | 1111 | 1111 | 15 |
| 1 | 111x | 1110 | 14 |
| 1 | 11x1 | 1101 | 13 |
| 1 | 1x11 | 1011 | 11 |
| 1 | x111 | 0111 | 7 |
| 2 | 11xx | 1100 | 12 |
| 2 | 1x1x | 1010 | 10 |
| 2 | 1xx1 | 1001 | 9 |
| 2 | x11x | 0110 | 6 |
| 2 | x1x1 | 0101 | 5 |
| 2 | xx11 | 0011 | 3 |
| 3 | 1xxx | 1000 | 8 |
| 3 | x1xx | 0100 | 4 |
| 3 | xx1x | 0010 | 2 |
| 3 | xxx1 | 0001 | 1 |
| 4 | n/a | n/a | n/a |

Figure 1: unit cubes of a 4-dim hypercube.

• Parallel cubes: a cube is parallel to a unit cube if and only if it is a non-unit cube and its fixed bits are at the same positions as those of the unit cube. For example, 10xx and 01xx are parallel to 11xx, but 1x0x isn’t.

Figure 2: Parallel cubes in dimensions 1 and 2.

You can easily figure out that an m-dim unit cube of an n-dim hypercube has 2^m elements and 2^(n−m) − 1 parallel cubes.
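
These definitions are straightforward to play with in code. Here is a minimal Python sketch (the helper names are my own, not the repo’s): a cube is a pair of bitmasks (fixed, value), where `fixed` marks the fixed positions and `value` holds their bits, so the unit cube 1x1x is (0b1010, 0b1010). Note the integer bit positions here count from the least-significant bit.

```python
def cube_vertices(fixed, value, n):
    """All vertices of the cube (fixed, value): the free positions
    (bits not in `fixed`) run over every 0/1 combination."""
    verts = [value]
    for pos in range(n):
        if not (fixed >> pos) & 1:                  # free position
            verts += [v | (1 << pos) for v in verts]
    return verts

def parallel_cubes(fixed):
    """Parallel cubes of the unit cube `fixed`: same fixed positions, but
    at least one fixed bit set to 0 (enumerated as proper sub-masks)."""
    cubes, v = [], fixed
    while v:
        v = (v - 1) & fixed
        cubes.append((fixed, v))
    return cubes
```

For 1x1x this yields the four vertices 1010, 1011, 1110, and 1111, and for 11x1 it lists the 2^3 − 1 = 7 parallel cubes, matching the counts above.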

2. Why hypercubes?

The answer is that if we map data bits (the bit string we want to send over a noisy channel) and parity bits (extra bits used to correct error bits at the receiver) to a hypercube in a specific order, magic will happen.

I want to make clear a subtle difference between a parity bit and a parity, which could confuse you later on, as it did me sometimes. Both are a sum (mod 2) of a set of data bits. But a parity bit is appended after the data bits to make a block code transmitted over a communication channel, while a parity is basically just the value of such a sum. So a parity bit is data that will be used to identify error bits, while a parity is just a value. For example, the bit string 10101 has parity 1 + 0 + 1 + 0 + 1 = 1, and the bit 1 appended when transmitting 10101 is a parity bit. So if something goes wrong during transmission, such as one bit being flipped, the parity bit will help us out.
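
To put the “parity is just a value” point in one line of Python (a throwaway helper, not part of the repo’s code):

```python
def parity(bits):
    """Parity of a bit string: the sum of its bits mod 2 (i.e. their XOR)."""
    return sum(bits) % 2
```

Here parity([1, 0, 1, 0, 1]) is 1, so a 1 would be appended to 10101 as its parity bit.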

And now, here’s the secret of the hypercube: if you put n bits on the vertices of a hypercube from the lowest to the highest layers/dims and sum the bits of each unit cube in these dims, you get n cube parities that can eventually be converted back to the data bits. That means the relationship between data bits and their corresponding unit-cube parities is 1:1. Vague? Let me explain with the following example. Say we have the 7-bit string 1011010; I will show how to map it onto the hypercube’s graph and the relationship between the bits and these things called “parities”.

| Level/Dim | Unit Cube | Vertex Index | Data Bit | Parity of Cube |
|---|---|---|---|---|
| 0 | 1111 | 1111 | 1 | 1 |
| 1 | 111x | 1110 | 0 | 1 |
| 1 | 11x1 | 1101 | 1 | 0 |
| 1 | 1x11 | 1011 | 1 | 0 |
| 1 | x111 | 0111 | 0 | 1 |
| 2 | 11xx | 1100 | 1 | 1 |
| 2 | 1x1x | 1010 | 0 | 0 |
| 2 | 1xx1 | 1001 | n/a | n/a |
| 2 | x11x | 0110 | n/a | n/a |
| 2 | x1x1 | 0101 | n/a | n/a |
| 2 | xx11 | 0011 | n/a | n/a |
| 3 | 1xxx | 1000 | n/a | n/a |
| 3 | x1xx | 0100 | n/a | n/a |
| 3 | xx1x | 0010 | n/a | n/a |
| 3 | xxx1 | 0001 | n/a | n/a |

Data bits -> parities: We arrange the vertices of the hypercube from the lowest layer 0 to the highest layer 3, and from the highest to the lowest index within each layer. Then we put the data bits in from left to right, say 1011010. So the first data bit 1 is placed at the root with index 1111, then 0 -> 1110, 1 -> 1101, and so on. The parity of the root unit cube 1111 is 1, as it contains only one data bit. The parity of unit cube 111x is the sum of the bits at vertex indexes 1111 and 1110, i.e. 1 and 0, so it is 1 + 0 = 1, and so on.

Parities -> data bits: Reversely, if we have the parities of the 4 unit cubes 11xx, 11x1, 111x, and 1111, we can easily calculate the data bit at vertex 1100 by summing up these parities: 1 + 0 + 1 + 1 = 1. It’s not so difficult to prove this lemma. Let’s define bits(U) of a unit cube U as the sum of all data bits placed inside U. Then bits(11xx) + bits(11x1) + bits(111x) + bits(1111) = databit(1100) + 2·databit(1110) + 2·databit(1101) + 4·databit(1111) = databit(1100) (mod 2), since every data bit except databit(1100) appears an even number of times. The same holds for the other unit cubes in the hypercube.
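
Both mappings can be read straight off the tables. The following Python sketch (my own function names, not the repo’s API) represents vertices as 4-bit integers such as 0b1101, computes the parity of every unit cube, and inverts the mapping:

```python
def vertex_order(n):
    """Vertices sorted layer by layer (dim 0 first), and by descending
    index within each layer -- the ordering used in the tables above."""
    return sorted(range(1, 1 << n),
                  key=lambda v: (n - bin(v).count("1"), -v))

def parities(bits, n):
    """Parity of each unit cube F: XOR of the data bits on every vertex v
    lying inside F, i.e. every v whose 1-bits cover F's fixed 1-bits."""
    return {F: sum(bits.get(v, 0) for v in range(1, 1 << n) if v & F == F) % 2
            for F in range(1, 1 << n)}

def data_bit(par, v, n):
    """Reverse mapping: the data bit at vertex v is the XOR of the parities
    of every unit cube F containing v (every F that is a superset of v)."""
    return sum(par[F] for F in range(1, 1 << n) if F & v == v) % 2
```

Mapping 1011010 onto the first seven vertices reproduces the parity column 1 1 0 0 1 1 0 of the table above, and data_bit inverts the parities back to the data bits exactly, vertex by vertex.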

The 1:1 relationship between data bits and parities is another beautiful attribute of the hypercube, besides the one I described in the previous post for Hamming codes. This is the heart of this coding algorithm: if we can somehow figure out these parities, we can calculate the original code at the other side of the communication channel.

3. CubeCode(4,2) = First Order Reed-Muller RM(1,4):

OK, time to talk about a hypercube-based algorithm for encoding and decoding a family of error-correcting codes that can fix one or multiple error bits. There’s no better way to understand the generalized concept of this algorithm than by demonstrating an instance of the code family CubeCode(n,m). Therefore, I will spend the rest of this post explaining how to encode and decode CubeCode(4,2), and from that we can develop a notation to extend it to other CubeCode(n,m).

CubeCode(n,m): a family of block codes with a length of 2^n − 1 bits that are mapped to an n-dim hypercube, in which the m highest dimensions are used for parity bits. These parity bits are appended after the message bits to set the parities of the unit cubes at the highest dims to 0.

Encoding for CubeCode(4,2): the encoding method is very similar to the one I used for Hamming codes in the previous post. First we map the data bits to the appropriate hypercube at the lowest dims, then add extra bits at the highest dims to set their parities to 0. CubeCode(4,2) uses a 4-dim hypercube to represent block codes of 15 bits, in which 2 dimensions (dims 2 and 3) are used to set parities to 0.

+ Problem statement:

| Level/Dim | Unit Cube | Bit Index | Data Bit | Parity |
|---|---|---|---|---|
| 0 | 1111 | 0 | 1 | 1 |
| 1 | 111x | 1 | 0 | 1 |
| 1 | 11x1 | 2 | 1 | 0 |
| 1 | 1x11 | 3 | 1 | 0 |
| 1 | x111 | 4 | 0 | 1 |
| 2 | 11xx | 5 | ? | 0 |
| 2 | 1x1x | 6 | ? | 0 |
| 2 | 1xx1 | 7 | ? | 0 |
| 2 | x11x | 8 | ? | 0 |
| 2 | x1x1 | 9 | ? | 0 |
| 2 | xx11 | 10 | ? | 0 |
| 3 | 1xxx | 11 | ? | 0 |
| 3 | x1xx | 12 | ? | 0 |
| 3 | xx1x | 13 | ? | 0 |
| 3 | xxx1 | 14 | ? | 0 |

(? = the parity bits we need to determine.)

+ Programmable viewpoint: use the reverse mapping parities -> data bits described above, from the lowest to the highest parity dims.

Figure 4: Adding parity bits from the lowest to the highest parity dims.
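
In code, the encoding pass might look like the sketch below (my own names, not the repo’s API; vertices and unit cubes are 4-bit integers such as 0b1101). It walks the parity dims from lowest to highest, choosing each parity bit so that its own unit cube’s parity becomes 0; every vertex strictly inside the current cube lies in a lower dim and has already been assigned, so one pass suffices.

```python
def vertex_order(n):
    """Vertices layer by layer (dim 0 first), descending index per layer."""
    return sorted(range(1, 1 << n),
                  key=lambda v: (n - bin(v).count("1"), -v))

def encode_cubecode42(data):
    """CubeCode(4,2): place the 5 data bits on the dim-0/1 vertices, then
    choose the 10 parity bits so every dim-2/3 unit cube has parity 0."""
    order = vertex_order(4)
    bits = dict(zip(order[:5], data))
    for F in order[5:]:                     # parity dims, lowest dim first
        # bit(F) = XOR of the bits on all vertices strictly inside cube F,
        # which forces parity(F) = bit(F) + that XOR = 0
        bits[F] = sum(bits[v] for v in range(1, 16)
                      if v & F == F and v != F) % 2
    return [bits[v] for v in order]         # 15-bit codeword in table order
```

Encoding the data bits 1 0 1 1 0 from the table reproduces its parity column: 1 1 0 0 1 on dims 0–1 and all zeros on dims 2–3.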

Note: some of you with sharp eyes will recognize a small performance problem in this encoding method, but don’t worry about it just yet, because the main purpose of this post is to demonstrate how to take advantage of the hypercube graph in encoding and decoding. I will come back to this issue in upcoming posts. Stay tuned.

Decoding for CubeCode(4,2): parity bits are very powerful for identifying an error when only one bit has been flipped. However, they’re unable to recognize multiple error bits on their own. Fortunately, they can help us retrieve the original parities of the unit cubes even if the received code contains errors. We will then be able to compute the original data bits from these parities using the reverse mapping parities -> data bits.

Remember the parallel cubes I mentioned before? When the parities of the higher-dim unit cubes are set to 0, the parities of the unit cubes in the dim closest to these 0-parity dims are the same as those of their parallel cubes. In CubeCode(4,2), the parities of the unit cubes in dimension 1 and those of their parallel cubes have the same values.


+ Parallel cubes’ parities:

In figure 5, 01x1 is a parallel cube of 11x1, and they both lie inside the unit cube x1x1, whose parity is 0. As Parity(01x1) + Parity(11x1) = Parity(x1x1) = 0, we get Parity(01x1) = Parity(11x1). Similarly, from 1xx1 we have Parity(10x1) = Parity(11x1). Moreover, Parity(xxx1) = Parity(01x1) + Parity(10x1) + Parity(11x1) + Parity(00x1) = 0; therefore, these four parities are equal. By induction, we can prove that the parities of unit cube 11x1 and its parallel cubes (except 00x0, which contains the unused vertex 0000) are equal as well, which means:

Parity(11x1) = Parity(11x0) = Parity(10x1) = Parity(01x1) = Parity(10x0) = Parity(01x0) = Parity(00x1)

+ Voting:

Because these 7 cubes are parallel and their parities are computed from disjoint sets of bits, if no more than 3 error bits occur during transmission, the majority of these parities will remain unchanged. Say the 7 parities were originally 1111111, and at the other end of the noisy channel we receive 1010011, with 3 bits flipped. The majority of these parities is 1, so we can conclude that these parities were 1, and so was that of the unit cube 11x1.

Doing the same for the other unit cubes in dimension 1, it is easy to figure out the correct parities of 111x, 1x11, and x111 as well. However, in order to calculate the original data bits, the parities of the unit cubes in dimension 1 are not enough; we still need the one from the last dimension 0 as well, that is, the parity of 1111.
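
Here is a sketch of this voting step in Python (my own notation, not the repo’s: a cube is a pair of bitmasks (fixed, value), a codeword is a dict from vertex to bit, and a missing vertex counts as 0). The all-zeros value is skipped, because that parallel cube would contain the unused vertex 0000:

```python
def cube_parity(bits, fixed, value):
    """Parity of the sub-cube (fixed, value): XOR of the bits on its
    vertices (vertices absent from the dict count as 0)."""
    return sum(b for v, b in bits.items() if v & fixed == value) % 2

def vote_parity(bits, F):
    """Majority-vote the original parity of unit cube F from F itself
    and its parallel cubes with a nonzero value part."""
    votes = [cube_parity(bits, F, F)]
    v = (F - 1) & F                  # enumerate proper sub-masks of F
    while v:
        votes.append(cube_parity(bits, F, v))
        v = (v - 1) & F
    return 1 if 2 * sum(votes) > len(votes) else 0
```

Flipping up to 3 bits of a valid codeword flips at most 3 of the 7 votes for 11x1, so the majority still recovers its original parity.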

+ “Peeling off”:

To be able to use the voting technique above for the dimension below the one closest to the 0-parity dimensions (here, dimension 0), we need to set the parities in that closest dim to 0 as well, so the received code stays compatible with the voting mechanism. In other words, over the field GF(2), we have to manipulate the codeword c: if f is the function that maps data bits -> parities and p is the vector of voted dimension-1 parities, then the modified codeword c' = c + f⁻¹(p) will have its dimension-1 parities equal to 0. (Yes, I’m cheating by bringing in linear algebra here, but wasn’t it intuitive enough?)

Thanks to hypercubes, we have f(f⁻¹(p)) = p, so by linearity f(c + f⁻¹(p)) = f(c) + p, which on the dimension-1 cubes is p + p = 0 (up to the error bits). There you go: to make the current codeword compatible with parallel voting, just reverse-map the voted parities to their corresponding data bits and then subtract this (which in GF(2) is the same as adding it) from the codeword.

OK, done with the maths; I will show how to do it intuitively: loop through all data bits in the 0-parity dimensions and sum each of them with the parities of the unit cubes in the closest dim (those that have already been calculated) that lie inside that data bit’s unit cube. The animation below will help you understand.

Figure 6: Peeling off parities at dimension 1 by modifying the codeword.

I intended to create another peeling-off animation to demonstrate, but I think you’ve already got the idea, so it would be unnecessary.

And now, once the modified codeword satisfies the voting-compatibility requirement, we start the voting step again. Because unit cube 1111 is parallel to every 0-dim hypercube, i.e. every point carrying a bit, we can now vote using all the new data bits: just pick 7 of them and vote as we did before. There you go: we have all the correct parities needed to compute the original data bits, even if up to 3 error bits occurred. Using the reverse mapping parities -> data bits, we get what we have been looking for.
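
Putting the pieces together, a complete CubeCode(4,2) decoder can be sketched as below (again my own notation, not the repo’s implementation; vertices are 4-bit integers and a codeword is a list in table order). It votes the dimension-1 parities, peels them off the received word, votes the root parity from the modified bits, and reverse-maps parities back to the five data bits:

```python
def vertex_order(n=4):
    """Vertices layer by layer (dim 0 first), descending index per layer."""
    return sorted(range(1, 1 << n),
                  key=lambda v: (n - bin(v).count("1"), -v))

def cube_parity(bits, fixed, value):
    """Parity of the sub-cube (fixed, value): XOR of the bits on its vertices."""
    return sum(b for v, b in bits.items() if v & fixed == value) % 2

def vote_parity(bits, F):
    """Majority vote over F and its parallel cubes with a nonzero value part."""
    votes = [cube_parity(bits, F, F)]
    v = (F - 1) & F
    while v:
        votes.append(cube_parity(bits, F, v))
        v = (v - 1) & F
    return 1 if 2 * sum(votes) > len(votes) else 0

def decode_cubecode42(codeword):
    """Recover the 5 data bits from a 15-bit word with up to 3 bit errors."""
    order = vertex_order()
    bits = dict(zip(order, codeword))
    dim1 = order[1:5]                # unit cubes 111x, 11x1, 1x11, x111
    p = {F: vote_parity(bits, F) for F in dim1}
    # peeling off: add the reverse-mapped dim-1 parities to every vertex bit,
    # which zeroes the dim-1 parities of the modified word
    peeled = {v: (b + sum(p[F] for F in dim1 if F & v == v)) % 2
              for v, b in bits.items()}
    # now every vertex carries a copy of parity(1111): vote the root parity
    root = 1 if 2 * sum(peeled.values()) > len(peeled) else 0
    # reverse mapping parities -> data bits at the five data positions
    return [root] + [(p[F] + root) % 2 for F in dim1]
```

With any 3 bits flipped, each dimension-1 vote sees at most 3 wrong parities out of 7, and the root vote at most 3 wrong bits out of 15, so the decoder still returns the original 5 data bits.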

Note: when peeling off parities, we don’t have to loop through all the higher-dim bits; we only need enough modified data bits to cover the 0-parity dims required. That said, in the peeling-off step above we only had to recompute the data bits in dims 1 and 2 and could ignore dim 3. Again, the performance issues will be discussed in another post. This post is too long already, so it’s time to finish. Besides the graphics and the many words I have written here, you can execute or browse the source code. It takes several seconds to run the whole test with a super long code like CubeCode(15,7) (a 32767-bit codeword, about half of which is parity bits), which can correct up to 127 error bits. So cool.

References:

 M. Li, D. Han, and S. Ma, “Decoding algorithm with Fast Hadamard Transform for channel quality indication (CQI) in 3GPP-LTE,” in 2010 International Conference on Computer Application and System Modeling (ICCASM), 2010, vol. 9, pp. V9–636–V9–639.
 T. K. Moon, Error Correction Coding: Mathematical Methods and Algorithms, 1st ed. Wiley-Interscience, 2005. 