How can I get the original word back from a Hamming-encoded word?
For example:
I have this Hamming-encoded word: 011001101100
How can I get back the original word? The correct answer is: 00111100.
The encoding algorithm is described in this Wikipedia article. The article contains a table that can be used to perform the decoding process by hand. Converting the decoding process to software is left as an exercise for the reader.
First, write the received code word along the bottom of the table. Then, for each row, compute the parity and write it into the column at the right. For example, for row p8, we want the parity of the five bits at the end of the code word, as indicated by the red X's. If there are an even number of 1 bits in the indicated positions, write a 0 in the right column, otherwise write a 1.
The resulting binary number in the right column (MSB on the bottom) indicates the bit position of the bit with an error. If the number is 0, then no bits have an error. In this example, the right column contains the number 3, so there's a bit error at bit position 3.
To complete the decoding, follow the steps below:
0110 0110 1100 the received code word
0100 0110 1100 flip the bit that has the error (bit 3 in this example)
__0_ 011_ 1100 remove the parity bits
and the remaining bits are 00111100.
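For reference, here is one way that exercise might look as a small Python sketch, using the same 1-based, left-to-right bit numbering as the table (the function name is my own):

def hamming_decode(codeword: str) -> str:
    """Decode a Hamming code word whose parity bits sit at the
    power-of-two positions 1, 2, 4, 8, ... (1-indexed from the left)."""
    bits = [int(b) for b in codeword]
    n = len(bits)
    syndrome = 0
    p = 1
    while p <= n:
        # Parity bit p covers every position whose index has bit p set.
        parity = 0
        for i in range(1, n + 1):
            if i & p:
                parity ^= bits[i - 1]
        if parity:
            syndrome += p
        p <<= 1
    # A nonzero syndrome is the 1-based position of the erroneous bit.
    if syndrome:
        bits[syndrome - 1] ^= 1
    # Drop the parity positions (the powers of two) to get the data bits.
    return ''.join(str(bits[i - 1]) for i in range(1, n + 1) if i & (i - 1))

print(hamming_decode('011001101100'))  # -> 00111100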
I'm implementing a QR code generation algorithm as explained on thonky.com and I'm trying to understand one of the cases:
As stated in this page and this page, I can deduce that if the code is protected with the M error correction level and the chosen mask is No. 0, the first 5 bits of the format string (non-XORed) are '00000', and because of this the whole 15-bit string is zeros.
The next step is to remove all leading zeros, which are, again, all of them. That means there's nothing to XOR the generator polynomial string (10100110111) with, giving us a final string of 15 zeros, which means that the final (XORed) string will simply be the mask string (101010000010010).
I'm seeking confirmation that my logic is right.
Thank you all very much in advance for the help.
Your logic is correct.
"remove all leading zeroes"
The actual process can be described as follows: append 10 zero bits to the 5 bits of data and treat the 15 bits as 15 single-bit coefficients of a polynomial. Divide that polynomial by the 11-bit generator polynomial, producing a 10-bit remainder polynomial, which is then subtracted from the 5-data-bits + 10-zero-bits polynomial. Since this is binary math, add and subtract are both XOR operations, and since the 10 appended bits are zero bits, the process can simply append the 10 remainder bits to the 5 data bits.
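A minimal Python sketch of that process (the function name is mine; the generator polynomial and mask constants come from the pages linked above):

def qr_format_bits(data5: int) -> int:
    """15-bit QR format string for 5 format bits (2 EC bits + 3 mask bits)."""
    GENERATOR = 0b10100110111   # the 11-bit generator polynomial
    MASK = 0b101010000010010    # the fixed format-string XOR mask
    # Append 10 zero bits, then reduce modulo the generator with binary
    # long division (add and subtract are both XOR in GF(2)).
    remainder = data5 << 10
    for shift in range(4, -1, -1):
        if remainder & (1 << (shift + 10)):
            remainder ^= GENERATOR << shift
    # Append the 10-bit remainder to the data bits, then apply the mask.
    return ((data5 << 10) | remainder) ^ MASK

# M error correction (00) with mask 0 (000): the data bits are 00000, the
# remainder is 0, and the result is exactly the mask pattern.
print(format(qr_format_bits(0b00000), '015b'))  # -> 101010000010010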
As commented above, rather than actually implementing a BCH encode function, since there are only 32 possible format strings, you can just do a table lookup.
https://www.thonky.com/qr-code-tutorial/format-version-tables
Let's say we want to represent a signed number with 5 bits, where the first bit is used for the sign (+ or -) of the number. Then zero can be represented by two bit patterns (10000 and 00000).
How is this problem solved?
Okay. In binary there are only two possible bit values, 1 or 0.
A value can be made up of any number of bits, for example 1 bit up to 64 bits.
If the question asks for a 5-bit string, then it is XXXXX, where each X can be any bit (1 or 0).
With the first bit as the sign bit, we can have either +0 or -0 (thanks #machinery).
If the number is positive, we put 0 in the first position; if it is negative, we put 1 in the first position.
Now that we have our first bit, we are left with another 4 bits: 0XXXX or 1XXXX. Since the question asks for zero, the remaining bits are all zero.
Therefore the answer is 00000 or 10000.
Look up how to convert decimal to binary and binary to decimal.
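A short Python sketch (the helper name is hypothetical) makes the two encodings of zero concrete:

def decode_sign_magnitude(bits: str) -> int:
    """Decode a sign-magnitude bit string: the leftmost bit is the sign."""
    magnitude = int(bits[1:], 2)
    return -magnitude if bits[0] == '1' else magnitude

print(decode_sign_magnitude('00000'))  # -> 0 (+0)
print(decode_sign_magnitude('10000'))  # -> 0 (-0: the same value again)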
I am trying to understand why the double-dabble algorithm is working, but I am not getting it.
There are a lot of great descriptions of the steps and the purpose of the algorithm, like
http://www.classiccmp.org/cpmarchives/cpm/mirrors/cbfalconer.home.att.net/download/dubldabl.txt or
http://en.wikipedia.org/wiki/Double_dabble
There are also some attempts of explanation. The best one I found is this one:
http://www.minecraftforum.net/forums/minecraft-discussion/redstone-discussion-and/340153-why-does-the-double-dabble-algorithm-work#c6
But I still feel like I am missing the connecting parts. Here is what I get:
I get that you can convert a binary number to decimal by reading it from left to right: start with a decimal value of 0, iterate over the digits of the binary number, add 1 to the decimal number for every 1 you reach, and multiply by 2 when you move on to the next digit (as explained in the last link).
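In a quick Python sketch (my own illustration, not from any of the links), that method is just:

def binary_to_decimal(bits: str) -> int:
    """Left-to-right conversion: double the running total, then add the bit."""
    value = 0
    for b in bits:
        value = value * 2 + int(b)
    return value

print(binary_to_decimal('11111111'))  # -> 255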
But why does this lead to the double-dabble algorithm? We don't want to convert to decimal, we want to convert to BCD. Why do we keep the multiplying (shifting) but drop the adding of 1? Where is the connection?
I get why we have to add 3 when a number in a BCD field exceeds 4 before shifting: if you want the BCD number to be multiplied by 2 when you shift it, you have to do some fixups. If you shift the BCD number 0000 1000 (8) and you want to get its double, 0001 0110 (16), you have to add 3 (half of 6) before shifting, because by just shifting you end up with 0001 0000 (10) (you're missing 6).
But why do we want this and where is the adding of 1 happening?
I guess I am just missing a little part.
Converting N to its decimal representation involves repeatedly determining the remainder R after dividing by 10. Instead of dividing by 10 you might also divide by 2 (which is just a shift) followed by a division by 5. Hence the 5. Long division means trying to subtract 5, and if that succeeds, actually doing the subtraction while keeping track of that success by setting a bit in Q. Adding 3 is the same as subtracting 5 while setting the bit that will subsequently be shifted into Q. Hence the 3.
Binary 1 0000 (16) reads as 10 if you interpret it as BCD.
We want 16 in BCD, which is 0001 0110, so we need to add 6.
But why do we add 3 and not 6?
Because the adding is done before the shift, so everything is divided by two; that's why we add 3 whenever a nibble is 5 or more.
I think I got it while writing this question:
Suppose you want to convert 1111 1111 from binary into BCD.
We use the method for converting a binary number to a decimal number explained in the question, but we alter it a little bit.
We don't start with a decimal number of 0 but with a BCD number of 0000 0000 (0).
BCD         binary
0000 0000   1111 1111
First we have to add 1 to the BCD number. This can be done by a simple shift:
0000 0001 1111 1110
Now we move on and want to multiply the BCD number by 2. In the next step we want to add the current binary digit to the BCD number. Both can be accomplished in one step by (again) shifting:
0000 0011 1111 1100
This works over and over again. The only situation in which this doesn't work is when a block of the BCD number exceeds 4. In this case you have to do the fixup explained in the question.
After iterating through the binary number, you get the BCD representation of the number on the left side \o/
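Here is the whole procedure as a small Python sketch (the function name is my own; the "5 or more" test before each shift is the add-3 fixup discussed above):

def double_dabble(value: int, digits: int = 3) -> str:
    """Convert binary to BCD by shift-and-add-3 (double dabble)."""
    bcd = 0
    for bit in format(value, 'b'):
        # Fix up first: add 3 to every BCD nibble that is 5 or more,
        # so that the upcoming shift doubles it correctly in decimal.
        for shift in range(0, 4 * digits, 4):
            if ((bcd >> shift) & 0xF) >= 5:
                bcd += 3 << shift
        # Shift left and bring in the next binary digit: this doubles
        # the BCD value and adds the current bit in one step.
        bcd = (bcd << 1) | int(bit)
    return format(bcd, '0{}b'.format(4 * digits))

print(double_dabble(0b11111111))  # -> 001001010101, the BCD digits 2, 5, 5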
The whole idea is to use the shifting explained in your link, but then convert the number on the left into BCD. At each stage, you are getting closer to the actual binary number on the left, but making sure that the number remains in BCD rather than binary.
When you shift in a '1', you are essentially adding it.
Take a look at the link below to get the gist of the 'add 3' argument:
https://olduino.files.wordpress.com/2015/03/why-does-double-dabble-work.pdf
The Hamming distance of v and w equals 2, but without a parity bit it would be just 1. Why is this the case?
This would be more appropriately asked in the theoretical computer science section of StackExchange, but since you've been honest and flagged it as homework ...
ASCII uses 7 bits to specify a character. (In ASCII, 'X' is represented by the 7 bits 1011000.) If you start with any ASCII sequence, the number of bits you need to flip to get to another legitimate ASCII sequence is only 1. Therefore the Hamming distance between plain ASCII sequences is 1.
However, if a parity bit is added (for a total of 8 bits -- the 7 ASCII bits plus one parity bit, conventionally shown in the leftmost position) then any single-bit flip in the sequence will cause the result to have incorrect parity. Following the example, with even parity 'X' is represented by 11011000, because the parity bit is chosen to give an even number of 1s in the sequence. If you now flip any single bit in that sequence then the result will be unacceptable because it will have incorrect parity. In order to arrive at an acceptable new sequence with even parity you must change a minimum of two bits. Therefore when parity is in effect the Hamming distance between acceptable sequences is 2.
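A small Python sketch (the helper names are mine) demonstrating the distance-2 property:

def hamming_distance(a: str, b: str) -> int:
    """Number of positions at which two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def with_even_parity(bits7: str) -> str:
    """Prepend a parity bit so the total number of 1s is even."""
    return str(bits7.count('1') % 2) + bits7

x = with_even_parity('1011000')  # 'X' -> 11011000
y = with_even_parity('1011001')  # 'Y' -> 01011001
print(hamming_distance(x, y))    # -> 2: valid sequences differ in >= 2 bits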
What is the binary value of -17, and how do you find the 2's complement of -17?
Assuming an 8-bit word, start with the binary form of 17: 00010001.
Then invert the bits: 11101110.
Then just add 1: 11101111.
If you've got a 16-, 32- or 64-bit word then you'll have a load more leading 1s.
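For instance, a quick Python check of both widths (the helper name is mine):

def twos_complement(value: int, bits: int = 8) -> str:
    """Two's-complement bit pattern of an integer in a fixed word width."""
    return format(value & ((1 << bits) - 1), '0{}b'.format(bits))

print(twos_complement(-17))      # -> 11101111
print(twos_complement(-17, 16))  # -> 1111111111101111 (more leading 1s)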
Even if you do not assume a word size, you just have to keep the leftmost bit as the sign bit.
Start with the word itself, 10001.
Inverting gives the one's complement: 01110.
Now add 1 to this number: 01111.
But to keep the leftmost bit as the sign, prepend a 1, e.g. 101111,
which is the minimum number of bits required (6 here).
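And a sketch of the minimum-width variant this answer describes (again, the helper name is hypothetical):

def min_twos_complement(value: int) -> str:
    """Two's-complement pattern in the minimum width: the magnitude's
    bits plus one sign bit (5 + 1 = 6 bits for -17)."""
    bits = value.bit_length() + 1
    return format(value & ((1 << bits) - 1), '0{}b'.format(bits))

print(min_twos_complement(-17))  # -> 101111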