• Hypercube Representation for Error Correcting Codes

    October 11, 2014

    Ok, want to play a game? Let’s say you are pretending to be a magician, and there are 7 audience members sitting in front of you.

    1. Setting up:
      – Number your audience members from 1 to 7.
      – Ask the ones numbered 1, 2, 3, and 4 to each think of a binary digit, either 0 or 1, and then let the others know it, but not you.
      – Then, the one numbered 5 will sum the binary digits of those numbered 1, 2, and 4. If the result is even, he or she will choose 0; otherwise, 1.
      – Similarly, the one numbered 6 will do the same with audience members 1, 2, and 3; and the one numbered 7 with audience members 2, 3, and 4.
      – Ask them to discuss as a group and choose one person to cheat on you. That person will decide whether or not to flip his or her binary digit.
      Remember: they must not let you know which binary digits they are holding until they have decided on the cheater.
    2. Game gets started: Ask them to disclose the secret digits they are holding. Then, after thinking for a few seconds, you say, “You, the one with number 5 (for example), are cheating on me” if one of them actually flipped their digit; otherwise, “Well, thanks for all your honesty.” That’s magic.

    If you were interested in this ‘magic’ trick, read on. If you already knew what’s behind the scenes and want to know how I would approach this trick with 2^n − 1 audience members, read on too. Otherwise, just close this browser tab.

    Trick’s explanation:
    So, how can we figure out whether one of our audience members is cheating or not? Generally speaking, the information held by the other audience members helps us. Whether they are numbered 1–4, and free to think of any binary digits they want, or 5–7, whose binary digits depend on the former group, each of these binary digits carries some information that helps correct the whole group when one of them goes wrong. That’s teamwork.
    Technically speaking, let’s say the audience members from 1 to 4 came up with the bits 1, 0, 1, and 1, respectively. Following the rules above:

             – Audience member 5 will have bit_5 = bit_1 \oplus bit_2 \oplus bit_4 = 1 \oplus 0 \oplus 1 = 0 (\oplus is the binary XOR operation)

             – Similarly, bit_6 = bit_1 \oplus bit_2 \oplus bit_3 = 1 \oplus 0 \oplus 1 = 0

             – And bit_7 = bit_2 \oplus bit_3 \oplus bit_4 = 0 \oplus 1 \oplus 1 = 0

    If the one chosen by the group to be the liar is 5 and he didn’t flip anything, then the rules above would all hold. Otherwise, the first rule would be broken. We couldn’t say 5 is the liar right away, because the liar could also be 1, 2, or 4 (remember that only one of them is a liar). However, because bits 6 and 7 satisfy their rules, none of audience members 1, 2, or 4 can be the liar. For that reason, 5 is the liar. Confused yet? Try another explanation with the figure below:

    Figure 1: The magic’s explanation

     

    The audience members are divided into 3 groups arranged as in figure 1. We go through the rules step by step and find that the group of audience members 1, 2, 4, and 5 has a problem with its XOR check while the other groups are fine. Audience member 5 is the only one in that group who doesn’t belong to the other groups, so that one is definitely the liar.
    Would it still work if the liar were someone other than audience member 5? Yes, it would, because the arrangement of the audience members lets us identify every member’s position from his or her relationship to the 3 groups. For instance, audience member 2 is the only one belonging to all 3 groups, and audience member 4 is the only one in the same group as 5 and 7 but not as 6 (figure 1).
    The gist of the trick is that we arranged our audience members in a special way that lets them cross-check the others in their groups.
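
    Before moving on, here is a minimal Python sketch of this detection step, assuming the group layout of figure 1 (the function names and list layout are my own illustration, not part of the trick):

```python
def find_liar(bits):
    """bits[0..6] are the disclosed digits of audience members 1..7;
    returns the liar's number, or None if everyone was honest."""
    b = [None] + list(bits)  # 1-indexed for readability
    # Each parity holder and the members whose digits it sums (mod 2)
    rules = {5: (1, 2, 4), 6: (1, 2, 3), 7: (2, 3, 4)}
    # Collect the groups whose XOR rule is broken after the disclosure
    broken = {p for p, (x, y, z) in rules.items() if b[p] != b[x] ^ b[y] ^ b[z]}
    if not broken:
        return None
    # Every audience member belongs to a distinct set of groups (figure 1)
    membership = {1: {5, 6}, 2: {5, 6, 7}, 3: {6, 7}, 4: {5, 7},
                  5: {5}, 6: {6}, 7: {7}}
    return next(n for n, groups in membership.items() if groups == broken)

honest = [1, 0, 1, 1, 0, 0, 0]   # the example above: 1, 0, 1, 1 plus parities 0, 0, 0
cheated = honest.copy()
cheated[4] ^= 1                  # audience member 5 flips his or her digit
print(find_liar(honest))         # None
print(find_liar(cheated))        # 5
```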

    Naysayers: Well, I could do the same trick without the graph above. I would just group 3 audience members and let one of them pick a binary digit for the whole group. If one of them lied, I would know right away, since his or her digit would differ from the others’.
    Me: Yep, you could do that. The difference between your trick and mine is the amount of redundant information added to the data source to let the group self-correct when something goes wrong.
    In my trick, I added 3 bits to a 4-bit source, so the information rate is 4/7; you added 2 bits to a source of only 1 bit, a rate of 1/3, which is lower in performance. Moreover, the 3-bit trick is too transparent to pass as magic.

    So, not only the special arrangement of our audience members but also the amount of redundant information has to be taken into account. That’s it for the trick. If you want to know where the special graph comes from, or whether there are other graphs with that kind of structure, please keep reading.

    Hamming Code:
    Now the funny part is over, but the exciting part has just begun. The trick I’ve just shown you is a visual representation of Hamming(7, 4). Hamming codes in general are codes of length 2^m − 1 bits, in which m is the number of parity bits included to enable the self-correcting ability of the bit blocks and 2^m − m − 1 is the length of the source bits.

    Parity bits | Total bits  | Data bits   | Name                                  | Rate
    2           | 3           | 1           | Hamming(3,1) (triple repetition code) | 1/3 ≈ 0.333
    3           | 7           | 4           | Hamming(7,4)                          | 4/7 ≈ 0.571
    4           | 15          | 11          | Hamming(15,11)                        | 11/15 ≈ 0.733
    5           | 31          | 26          | Hamming(31,26)                        | 26/31 ≈ 0.839
    m           | 2^m − 1     | 2^m − m − 1 | Hamming(2^m − 1, 2^m − m − 1)         | (2^m − m − 1)/(2^m − 1)

    Source: Wikipedia
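
    As a quick sanity check, a few lines of Python reproduce the rows of this table (a throwaway sketch, nothing more):

```python
# Print the Hamming code parameters for m = 2..5 parity bits.
for m in range(2, 6):
    total = 2**m - 1        # code length: 2^m - 1
    data = total - m        # data bits: 2^m - m - 1
    print(f"Hamming({total},{data})  rate = {data}/{total} ≈ {data / total:.3f}")
```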

    In information theory, the Hamming code is the classical linear error-correcting block code that can correct one-bit errors (as you already experienced through the game, where the group corrected itself). The code has several kinds of representation, such as matrix, algebraic, polynomial, Trellis, or Tanner [1]. But none of them is intuitive enough to let us explain encoding and decoding naturally. And of course, “If you can’t explain it simply, you don’t understand it well enough.” – Albert Einstein. That’s why I bring this very classical information theory problem up again in this blog. I hope my explanation of the Hamming code is simpler to some degree, so that it can serve as a ground for further development later on.

    Figure 2: Hamming Code (7, 4) used in transmitting information

     

    It’s quite easy to understand the self-correcting mechanism in figure 2 if you have already mastered the aforementioned ‘magic’ trick, isn’t it? However, if you are a reader with sharp eyes, you will notice that the arrangement of bits looks like a cube missing a vertex and 3 line segments. Not just that: the bit length of a Hamming code is 2^m − 1 with m parity bits. We can predict that a hypercube would be perfect for such a special bit arrangement.

    Figure 3: Hypercube vs. Hamming Graph Representation

     

    Hypercube:

    Generally speaking, a hypercube is a cube in n-dimensional space. To keep it simple, in this discussion we will use a unit hypercube (shortly, hypercube), whose sides have length one unit. You can find a detailed definition on Wikipedia; however, I want to refer you to another definition that describes the most important characteristic of the hypercube: “An n-dimensional hypercube, or n-cube, denoted by Q_n, is a graph that has vertices representing the 2^n bit strings of length n. Two vertices are adjacent if and only if the bit strings that they represent differ in exactly one bit position.” [2]

    Figure 4: n-Cube Qn with n=2, 3, and 4.

     To build an n-cube, we first define the origin and its n unit vectors (n + 1 bit strings of length n), and then use vector addition to position the remaining 2^n − n − 1 bit strings. It sounds similar to the definition of the Hamming codes, doesn’t it?
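
    If you prefer code to definitions, here is a small Python sketch that builds Q_n exactly as the quoted definition says: vertices are the 2^n bit strings, with an edge wherever two strings differ in exactly one position (the function name is mine, for illustration):

```python
from itertools import combinations

def n_cube(n):
    """Build Q_n: vertices are the 2**n bit strings of length n; two vertices
    are adjacent iff their bit strings differ in exactly one position."""
    vertices = [format(v, f"0{n}b") for v in range(2**n)]
    edges = [(u, v) for u, v in combinations(vertices, 2)
             if bin(int(u, 2) ^ int(v, 2)).count("1") == 1]
    return vertices, edges

vertices, edges = n_cube(3)
print(len(vertices), len(edges))  # 8 vertices and 12 edges: the ordinary cube
```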
    That’s enough for the introduction of Hypercube and Hamming code.

    How to use Hypercube for Hamming Code representation:
    If you have read this far, you have probably figured out the relationship between Hamming codes and n-cubes. Let me go straight to the point. To create an encoder and a decoder for a Hamming code of length 2^n − 1 bits, we need to find a special graph composed of n groups, each of which contains 1 parity check bit. The position of each bit (vertex) relative to all the groups is distinctive, which lets us trace back which bit went wrong after group-checking.

    An n-cube with one vertex and its n incident edges removed is such a special graph. The explanation of the magic trick I showed you earlier proves this proposition for the 3-cube instance. Let’s do it with Q_4 (the tesseract).
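
    A one-line check makes the “distinctive position” claim concrete for Q_4: after removing the vertex 0000, the 15 remaining vertices all have distinct, nonempty parity-group membership patterns (this snippet and its naming are my own illustration):

```python
# For each remaining vertex of Q4, record which of the 4 parity groups
# (positions 1, 2, 4, 8) it belongs to; all 15 patterns must be distinct.
patterns = {frozenset(p for p in (1, 2, 4, 8) if v & p) for v in range(1, 16)}
print(len(patterns))  # 15: every nonempty subset of the 4 groups occurs exactly once
```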

    Figure 5: Q4 for Hamming code (15, 11)

     

    Encoding and decoding algorithms for the Hamming code (15, 11): We have an 11-bit source string transmitted over a noisy channel that can flip at most 1 bit (0→1 or 1→0) in every 15 bits (the noise rate is 1/15). Design encoding and decoding mechanisms to preserve the original information of the source over the noisy channel.

    Solution 1: Wikipedia (traditional method)

    Solution 2: The geometric view, which is more intuitive and the main reason you have kept reading this far.

    Encoding: What we need to do is define the special graph, place the source bits in the right positions on the graph, add the redundant bits (parity check bits), and finally send the encoded bits over the noisy channel.

    Decoding: Receive the transmitted bit string, put the bits back in their positions on the graph, check whether any bit was flipped and fix it if needed, and finally output the result.

    The Graph: The tesseract Q_4, with one vertex and its 4 incident edges removed, is the graph used for encoding and decoding the Hamming code (15, 11); it is shown in figure 5. The 4 groups – each of which uses one of the parity check bits a, b, c, and d – are the 4 Q_3 cubes in green. For convenience, these groups will be named Ga, Gb, Gc, and Gd, in that order. We name the other 11 vertices of the graph arbitrarily: e, f, g, h, i, j, k, l, m, n, and o. (If you want to program this algorithm, you should index the bits by binary strings, which makes locating a bit much easier. For example, take the bit string a_1a_2a_3a_4a_5a_6a_7a_8a_9a_{10}a_{11}a_{12}a_{13}a_{14}a_{15} and consider the whereabouts of bit a_5 on the graph: a_5, whose index 5 is 0101_2, stays outside the cubes with parity positions 8 (1000_2) and 2 (0010_2), and inside the cubes with parity positions 4 (0100_2) and 1 (0001_2). That’s computer science talking, so never mind if you don’t follow it.)
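
    The parenthetical remark above is easy to turn into code. Here is a small sketch, with my own naming, of how an index’s binary representation tells you which parity groups contain that bit: index i belongs to the group of parity position p exactly when bit p is set in i.

```python
def groups_of(index):
    """Return the parity positions (1, 2, 4, 8) whose group contains `index`."""
    return [p for p in (1, 2, 4, 8) if index & p]

print(groups_of(5))  # [1, 4]: bit 5 (0101) lies inside the groups of parities 1 and 4
```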
    Done with the setup. Let’s send the source 11-bit string 10010110111 through a noisy channel with error rate 1/15.

    Encoding:

    • Source bits = 10010110111 = efghijklmno

      e=1, f=0, g=0, h=1, i=0, j=1, k=1, l=0, m=1, n=1, o=1 (you should assign bit values on the graph for easy reference)

       

    • Parity bits: abcd, in which (refer to the graph structure in figure 5):

      a = f \oplus g \oplus h \oplus i \oplus j \oplus k \oplus l = 0 \oplus 0 \oplus 1 \oplus 0 \oplus 1 \oplus 1 \oplus 0 = 1

      b = e \oplus g \oplus i \oplus n \oplus f \oplus k \oplus o = 1 \oplus 0 \oplus 0 \oplus 1 \oplus 0 \oplus 1 \oplus 1 = 0

      c = o \oplus m \oplus n \oplus i \oplus j \oplus k \oplus l = 1 \oplus 1 \oplus 1 \oplus 0 \oplus 1 \oplus 1 \oplus 1 = 0

      d = e \oplus g \oplus h \oplus m \oplus n \oplus i \oplus j = 1 \oplus 0 \oplus 1 \oplus 1 \oplus 1 \oplus 0 \oplus 1 = 1

    • Encoded bits = abcdefghijklmno = 100110010110111
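
    For readers who would rather check this with code, here is a minimal Python sketch of the encoder, implementing exactly the four XOR equations above (the variable names mirror the vertex labels on the graph; nothing here is a standard library API):

```python
def encode(source):
    """Encode an 11-bit source string into 15 bits using the graph's
    parity equations for Hamming(15, 11)."""
    e, f, g, h, i, j, k, l, m, n, o = (int(ch) for ch in source)
    a = f ^ g ^ h ^ i ^ j ^ k ^ l
    b = e ^ g ^ i ^ n ^ f ^ k ^ o
    c = o ^ m ^ n ^ i ^ j ^ k ^ l
    d = e ^ g ^ h ^ m ^ n ^ i ^ j
    return "".join(str(bit) for bit in (a, b, c, d)) + source

print(encode("10010110111"))  # 100110010110111, as computed above
```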

    Transmitting: Let’s assume bit e gets flipped (you can choose another bit as an exercise). Then the bits that reach the receiver will be 100100010110111.

    Decoding:

    Received bits: 100100010110111 (Keep in mind that you don’t know which bit has been flipped)

    Assign these bits back to the graph and do the math again:

    a=1, b=0, c=0, d=1, e=0, f=0, g=0, h=1, i=0, j=1, k=1, l=0, m=1, n=1, o=1

    a = f \oplus g \oplus h \oplus i \oplus j \oplus k \oplus l = 0 \oplus 0 \oplus 1 \oplus 0 \oplus 1 \oplus 1 \oplus 0 = 1

    b = e \oplus g \oplus i \oplus n \oplus f \oplus k \oplus o = 0 \oplus 0 \oplus 0 \oplus 1 \oplus 0 \oplus 1 \oplus 1 = 1 ← oops, the received b is 0

    c = o \oplus m \oplus n \oplus i \oplus j \oplus k \oplus l = 1 \oplus 1 \oplus 1 \oplus 0 \oplus 1 \oplus 1 \oplus 1 = 0

    d = e \oplus g \oplus h \oplus m \oplus n \oplus i \oplus j = 0 \oplus 0 \oplus 1 \oplus 1 \oplus 1 \oplus 0 \oplus 1 = 0 ← oops, the received d is 1

    There is something wrong with the parity groups Gb and Gd. The noisy channel causes at most 1 bit error during the transmission, so one and only one bit in the received string has been flipped. NOW, please look at the graph and locate the bit that stays inside groups Gb and Gd but outside groups Ga and Gc. It will take you a couple of minutes, but it’s worth it, isn’t it? (Programmatically, mapping groups to bits takes no time, as I explained before.) Gb and Gd intersect at the rectangle ‘egni’, and e is the only vertex of this rectangle that stays outside groups Ga and Gc. So e is the bit that was flipped, and the source bits are 10010110111.
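
    The same hunt can be automated. Below is a minimal Python sketch of the decoder, again with my own naming: it recomputes the four parities, collects the broken groups, and looks up the unique vertex whose group membership matches exactly that broken set.

```python
# Which parity groups each vertex belongs to, read off the four parity
# equations above (each parity bit a..d belongs only to its own group).
MEMBERSHIP = {
    "a": "a", "b": "b", "c": "c", "d": "d",
    "e": "bd", "f": "ab", "g": "abd", "h": "ad", "i": "abcd",
    "j": "acd", "k": "abc", "l": "ac", "m": "cd", "n": "bcd", "o": "bc",
}

def decode(received):
    """Correct at most one flipped bit in a 15-bit Hamming(15, 11) word
    and return the 11 source bits."""
    v = dict(zip("abcdefghijklmno", (int(ch) for ch in received)))
    checks = {
        "a": v["f"] ^ v["g"] ^ v["h"] ^ v["i"] ^ v["j"] ^ v["k"] ^ v["l"],
        "b": v["e"] ^ v["g"] ^ v["i"] ^ v["n"] ^ v["f"] ^ v["k"] ^ v["o"],
        "c": v["o"] ^ v["m"] ^ v["n"] ^ v["i"] ^ v["j"] ^ v["k"] ^ v["l"],
        "d": v["e"] ^ v["g"] ^ v["h"] ^ v["m"] ^ v["n"] ^ v["i"] ^ v["j"],
    }
    broken = "".join(p for p in "abcd" if checks[p] != v[p])
    if broken:  # exactly one vertex has this membership pattern
        culprit = next(name for name, grps in MEMBERSHIP.items() if grps == broken)
        v[culprit] ^= 1
    return "".join(str(v[name]) for name in "efghijklmno")

print(decode("100100010110111"))  # 10010110111: bit e is found and flipped back
```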

    My recommendation is that you try flipping another bit in the transmitting step to understand the decoding as well as the encoding process. Moreover, you will see how beautiful the hypercube is.

    Proof of using the n-cube for the Hamming code (2^n − 1, 2^n − n − 1): We can use an induction approach to solve the generalized problem; however, I won’t bring up the mathematics here, as I know you’ll never care.

    Conclusion: Why do I bother spending so much time representing a classical Hamming code by a graph called a “hypercube”?

    1. It’s a fun and intuitive way to understand how information bits are transmitted in telecommunication.
    2. It’s a ground for further development, which I will show you later in another post. Using the same method as above, we can build encoders and decoders that correct more than 1 bit in a block code at a time, like Reed–Muller codes, which have a pretty fast maximum-likelihood decoding algorithm.

    References:

    1. Moon, T. K. (2005). Error Correction Coding: Mathematical Methods and Algorithms. Wiley-Interscience.

    2. Rosen, K. H. (2003). Discrete Mathematics and Its Applications. Boston: McGraw-Hill.
