How to calculate Hamming code of (31,26)? - algorithm

I need to construct the (31,26) Hamming code of 0x444.
After reading Wikipedia and the algorithm shown on GeeksForGeeks, I still can't understand how this works, as my construction ended up different from the result of a calculator I found on the internet.
My result is: 0100 0100 0010 0010 or 0x4422
is it correct?
As I understand:
P1 = Bitwise XOR(C1,C3,C5,C7,C9,C11,C13,C15,C17..) = 0
P2 = Bitwise XOR(C2,C3,C6,C7,C10,C11,C14,C15..) = 1
P3 = Bitwise XOR(C4,C5,C6,C7,C12,C13,C14,C15..) = 0
P4 = Bitwise XOR(C8,C9,C10,C11,C12,C13,C14,C15..) = 0
P5 = Bitwise XOR(C16,C17..) = 0
Another thing I can't understand: if a (31,26) Hamming code is supposed to output a 31-bit result with 5 parity bits and 26 data bits, why does a (7,4) Hamming code transform each group of 4 bits into a 7-bit representation, rather than producing just one 7-bit representation with 3 parity bits?
Thanks.

Yes, assuming you are numbering the bits from 1 at the right-hand end, then 0x000444 is encoded as 0x00004422 for a (31,26) Hamming Code -- for an even parity code-word.
Where C1, C2, etc. are bits 1, 2, etc. of the code-word, and P1, P2, etc. are parity bits 1, 2, etc. I think it is clearer to say that:
P1 = C1 = Bitwise_XOR(C3, C5, C7, C9, ...)
so that:
Bitwise_XOR(C1, C3, C5, C7, C9, ...) == 0
and so on. This is even parity.
You do not say which "calculator" you tried, but it could be that the discrepancy you see has to do with which end you number the bits from. I note that Wikipedia gives:
If a byte of data to be encoded is 10011010, then the data word (using _ to represent the parity bits) would be __1_001_1010, and the code word is 011100101010.
which is clearly counting bits from the left-hand end.
I regret I do not understand your second question. I can say that a (31,26) Hamming Code does indeed take 26 bits of data and add 5 parity bits to produce a 31-bit code-word, and that a (7,4) Hamming Code does likewise with 4 bits of data, 3 parity bits and a 7-bit code-word.
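To make the bit placement concrete, here is a minimal C sketch (the function name hamming_31_26_encode is just illustrative, not from the question): it places the 26 data bits at the non-power-of-two positions, computes even parity at positions 1, 2, 4, 8 and 16, and, assuming bit 1 is the least significant bit of the code-word as in the answer above, reproduces 0x444 -> 0x4422.

#include <stdio.h>
#include <stdint.h>

/* Encode 26 data bits into a (31,26) Hamming code-word with even parity.
   Bit 1 of the code-word is the least significant bit; parity bits sit at
   positions 1, 2, 4, 8 and 16, and the data fills the remaining positions
   in order from the least significant data bit upward. */
static uint32_t hamming_31_26_encode(uint32_t data)
{
    uint32_t code = 0;
    int d = 0;                           /* index of the next data bit (0-based) */

    /* Place the 26 data bits at the non-power-of-two positions. */
    for (int pos = 1; pos <= 31; pos++) {
        if ((pos & (pos - 1)) == 0)      /* power of two: reserved for a parity bit */
            continue;
        if ((data >> d) & 1u)
            code |= 1u << (pos - 1);
        d++;
    }

    /* The parity bit at position p = 2^k covers every position whose binary
       representation has bit k set; even parity makes the XOR over each
       covered group (including the parity bit itself) equal to zero. */
    for (int p = 1; p <= 16; p <<= 1) {
        uint32_t parity = 0;
        for (int pos = 1; pos <= 31; pos++)
            if (pos & p)
                parity ^= (code >> (pos - 1)) & 1u;
        if (parity)
            code |= 1u << (p - 1);
    }
    return code;
}

int main(void)
{
    printf("0x%X\n", (unsigned)hamming_31_26_encode(0x444u));  /* prints 0x4422 */
    return 0;
}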

Related

Parity bit checks using General Hamming Algorithm

In a logic circuit, I have an 8-bit data vector that is fed into an ECC IC for which I am supposed to develop the logic, and which contains a vector of 5 parity bits. My first step in developing the logic (with logic gates, XOR) is to figure out which parity bit is going to check which data bits (since they are interleaved). Using even parity and following the general Hamming code rule (a parity bit at every 2^n position), I get the following output sequence:
P1 P2 D1 P3 D2 D3 D4 P4 D5 D6 D7 D8 P5
Following the general Hamming algorithm:
For each parity bit, at positions 1, 2, 4, 8, 16 and so on (powers of 2): for the first parity bit we skip the first n-1 positions, check 1 bit, skip one, check one, etc.; we repeat the same process for the other parity bits, but this time checking/skipping 2^n bits at a time, where n relates to the position they occupy in the output array (P1 P2 D1 P3 D2 D3 D4 P4 D5 D6 D7 D8 P5).
Following that convention, I get:
P1 Checks data bits -> XOR(3 5 7 9 10 12)
P2 Checks data bits -> XOR(3 6 7 10 11)
P3 Checks data bits -> XOR(5 6 10 11 12)
P4 Checks data bits -> XOR(9 10 11)
Am I right? The thing that confuses me is whether I should count the parity bit itself as one of the 2^n bits that are supposed to be checked, or start one bit after that specific parity bit. It pretty much comes down to whether it is inclusive or not.
Thank you for your help in advance!
Cheers!
You can follow this scheme. The bits marked in each row must sum to 0 (mod 2); in other words, for the positions marked in each row, the number of set bits must be even.
     P1 P2 D1 P3 D2 D3 D4 P4 D5 D6 D7 D8
P1:  x     x     x     x     x     x
P2:     x  x        x  x        x  x
P3:           x  x  x  x                 x
P4:                       x  x  x  x  x
I don't understand why you have P5 in the scheme.
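If it helps to see the pattern generated rather than drawn by hand, here is a small C sketch (illustrative only) that prints, for the 12-position layout above, which code-word positions each parity bit covers: a position is covered by the parity bit at 2^k exactly when bit k of the position number is set, and the parity position itself is included.

#include <stdio.h>

int main(void)
{
    /* Positions 1..12 correspond to P1 P2 D1 P3 D2 D3 D4 P4 D5 D6 D7 D8. */
    for (int p = 1; p <= 8; p <<= 1) {
        printf("Parity at position %2d covers:", p);
        for (int pos = 1; pos <= 12; pos++)
            if (pos & p)                 /* bit of the position number matches the parity position */
                printf(" %d", pos);
        printf("\n");
    }
    return 0;
}

Its output lists positions 1 3 5 7 9 11, 2 3 6 7 10 11, 4 5 6 7 12 and 8 9 10 11 12, matching the four rows of the scheme.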

Non-recursive Grey code algorithm understanding

This is task from algorithms book.
The thing is that I completely don't know where to start!
Trace the following non-recursive algorithm to generate the binary reflexive
Gray code of order 4. Start with the n-bit string of all 0’s.
For i = 1, 2, ... 2^n-1, generate the i-th bit string by flipping bit b in the
previous bit string, where b is the position of the least significant 1 in the
binary representation of i.
So I know the Gray code for 1 bit should be 0 1, for 2 00 01 11 10 etc.
Many questions
1) Do I know that for n = 1 I can start of with 0 1?
2) How should I understand "start with the n-bit string of all 0's"?
3) "Previous bit string"? Which string is the "previous"? Previous means from lower n-bit? (for instance for n=2, previous is the one from n=1)?
4) How do I even convert 1-bit strings to 2-bit strings if the only operation there is to flip?
This confuses me a lot. The only "human" method I understand so far is: take the set from the lower bit-count, duplicate it, reverse the 2nd copy, prepend 0 to every element of the 1st copy and 1 to every element of the 2nd copy. Done. (Example: 0 1 -> 0 1 | 0 1 -> 0 1 | 1 0 -> 00 01 | 11 10 -> 00 01 11 10, done.)
Thanks for any help
The answer to all four of your questions is that this algorithm does not start with lower values of n. All the strings it generates have the same length, and the i-th string (for i = 1, ..., 2^n - 1) is generated from the (i-1)-th one.
Here are the first few steps for n = 4:
Start with G0 = 0000
To generate G1, flip 0-th bit in G0, as 0 is the position of the least significant 1 in the binary representation of 1 = 0001b. G1 = 0001.
To generate G2, flip 1-st bit in G1, as 1 is the position of the least significant 1 in the binary representation of 2 = 0010b. G2 = 0011.
To generate G3, flip 0-th bit in G2, as 0 is the position of the least significant 1 in the binary representation of 3 = 0011b. G3 = 0010.
To generate G4, flip 2-nd bit in G3, as 2 is the position of the least significant 1 in the binary representation of 4 = 0100b. G4 = 0110.
To generate G5, flip 0-th bit in G4, as 0 is the position of the least significant 1 in the binary representation of 5 = 0101b. G5 = 0111.
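A small C sketch of the same procedure (the variable names are mine): it flips the bit given by the least significant 1 of i and reproduces the trace above for n = 4.

#include <stdio.h>

int main(void)
{
    const unsigned n = 4;
    unsigned g = 0;                      /* G0 = 0000: start with the all-zero string */

    for (unsigned i = 0; i < (1u << n); i++) {
        if (i != 0)
            g ^= i & -i;                 /* flip the bit at the position of the
                                            least significant 1 in i              */
        printf("G%-2u = ", i);
        for (int bit = (int)n - 1; bit >= 0; bit--)
            putchar('0' + (int)((g >> bit) & 1u));
        putchar('\n');
    }
    return 0;
}

Running it prints G0 = 0000, G1 = 0001, G2 = 0011, G3 = 0010, G4 = 0110, G5 = 0111 and so on, matching the walkthrough.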

Why does a 4 bit adder/subtractor implement its overflow detection by looking at BOTH of the last two carry-outs?

This is the diagram we were given for class:
Why wouldn't you just use C4 in this image? If C4 is 1, then the last addition resulted in an overflow, which is what we're wondering. Why do we need to look at C3?
The overflow flag indicates an overflow condition for a signed operation.
Some points to remember in a signed operation:
The MSB is always reserved to indicate the sign of the number
Negative numbers are represented in 2's complement
An overflow makes the result invalid
Two's complement overflow rules:
If the sum of two positive numbers yields a negative result, the sum has overflowed.
If the sum of two negative numbers yields a positive result, the sum has overflowed.
Otherwise, the sum has not overflowed.
For example:
Ex1:
    0111   (carries C4 C3 C2 C1)
    0101   ( 5)
  + 0011   ( 3)
  ========
    1000   (reads as -8, expected 8)    ;invalid (V=1) (C3=1) (C4=0)
Ex2:
    1011   (carries C4 C3 C2 C1)
    1001   (-7)
  + 1011   (-5)
  ========
    0100   (reads as 4, expected -12)   ;invalid (V=1) (C3=0) (C4=1)
Ex3:
    1110   (carries C4 C3 C2 C1)
    0111   ( 7)
  + 1110   (-2)
  ========
    0101   ( 5)                         ;valid   (V=0) (C3=1) (C4=1)
In a signed operation if the two leftmost carry bits (the ones on the far left of the top row in these examples) are both 1s or both 0s, the result is valid; if the left two carry bits are "1 0" or "0 1", a sign overflow has occurred. Conveniently, an XOR operation on these two bits can quickly determine if an overflow condition exists. (Ref:Two's complement)
Overflow vs. carry: overflow can be considered the two's-complement counterpart of the carry. In a signed operation the overflow flag is monitored and the carry flag is ignored; similarly, in an unsigned operation the carry flag is monitored and the overflow flag is ignored.
Overflow for signed numbers occurs when the carry-in into the most significant bit is not equal to the carry out.
For example, working with 8 bits, 65 + 64 = 129 actually results in an overflow. This is because 129 is 1000 0001 in binary, which is also -127 in 2's complement. If you work through this example, you can see that it is a result of the carry out not equalling the carry in.
It is possible to have a correct computation even when the carry flag is high.
Consider
1000 1000 = -120
+ 1111 1111 = -1
= (1) 1000 0111 = -121
There is a carry out of 1, but there has been no overflow.
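As a sketch of the distinction (the helper add8 and the way it computes the flags are mine, not taken from the circuit in the question), the following C program reproduces both situations: 65 + 64 overflows with no carry out, while -120 + -1 produces a carry out with no overflow.

#include <stdio.h>
#include <stdint.h>

/* Interpret an 8-bit pattern as a signed two's-complement value. */
static int as_signed(uint8_t x) { return x < 0x80 ? x : x - 256; }

/* 8-bit add: the carry flag is the 9th bit of the unsigned sum; the overflow
   flag is set when both operands have the same sign but the result has the
   opposite sign. */
static void add8(uint8_t a, uint8_t b)
{
    uint8_t s = (uint8_t)(a + b);
    int carry = ((unsigned)a + (unsigned)b) > 0xFFu;
    int overflow = (~(a ^ b) & (a ^ s) & 0x80) != 0;
    printf("%4d + %4d = %4d   carry=%d overflow=%d\n",
           as_signed(a), as_signed(b), as_signed(s), carry, overflow);
}

int main(void)
{
    add8(65, 64);       /* 65 + 64: no carry, but signed overflow (result reads -127) */
    add8(0x88, 0xFF);   /* -120 + -1: carry out, but no overflow (result -121)        */
    return 0;
}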
I would like to give a more general answer to this question, for any positive natural number of bits.
Let's call the final carry output C1, the second-to-last carry output C0, the sign bit of the sum S0, and the sign bits of A and B respectively A0 and B0.
Then the following holds:
C1 = A0*B0 + A0*C0 + B0*C0    (the carry out of the sign position is the majority of A0, B0, C0)
S0 = A0 XOR B0 XOR C0         (the sum bit of the sign position)
Let's now walk through the possibilities.
If C1 == 1 there are two possibilities:
If C0 == 0: A0 and B0 must both have been 1 (and thus both A and B must be negative). This means S0 has to be 0, so the result reads as positive while A and B were negative => overflow.
If C0 == 1: either A and B have opposite signs, in which case overflow is not possible => no overflow; or A0 and B0 are both 1 (and thus A and B must both be negative), in which case S0 has to be 1, so the result is negative => no overflow.
If C1 == 0 there are two possibilities:
If C0 == 0: either A0 and B0 are both 0 (and thus A and B must both be positive), in which case S0 has to be 0, so the result is positive => no overflow; or A and B have opposite signs => no overflow.
If C0 == 1: A0 and B0 must both be 0 (and thus A and B must both be positive). This means S0 has to be 1, so the result reads as negative while A and B were positive => overflow.
Hope that helps someone out there.
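A bit-by-bit C sketch of this case analysis (the names add4, c3 and c4 are illustrative): it ripples the carry through a 4-bit add, reports the carry into and out of the sign position, and forms V = C3 XOR C4, reproducing Ex1 and Ex3 from the earlier answer.

#include <stdio.h>

/* 4-bit ripple-carry add: returns the 4-bit sum and reports the last two
   carries so the overflow flag can be formed as V = C3 ^ C4. */
static unsigned add4(unsigned a, unsigned b, int *c3, int *c4)
{
    unsigned sum = 0;
    int carry = 0;
    for (int i = 0; i < 4; i++) {
        int ai = (a >> i) & 1, bi = (b >> i) & 1;
        sum |= (unsigned)((ai ^ bi ^ carry) << i);
        if (i == 3) *c3 = carry;                     /* carry INTO the sign bit  */
        carry = (ai & bi) | (ai & carry) | (bi & carry);
    }
    *c4 = carry;                                     /* carry OUT of the sign bit */
    return sum & 0xFu;
}

int main(void)
{
    int c3, c4;
    unsigned s = add4(0x5, 0x3, &c3, &c4);           /* 5 + 3   */
    printf("sum=%X C3=%d C4=%d V=%d\n", s, c3, c4, c3 ^ c4);   /* V=1: overflow */
    s = add4(0x7, 0xE, &c3, &c4);                    /* 7 + (-2) */
    printf("sum=%X C3=%d C4=%d V=%d\n", s, c3, c4, c3 ^ c4);   /* V=0: valid    */
    return 0;
}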

the nth gray code

The formula for calculating the nth Gray code is:
(n-1) XOR (floor((n-1)/2))
(Source: wikipedia)
I encoded it as:
int gray(int n)
{
    n--;                  /* the formula is 1-based, so use n - 1 */
    return n ^ (n >> 1);
}
Can someone explain how the above formula works, or possibly show its derivation?
If you look at the binary counting sequence, you'll note that neighbouring codes differ in some number of trailing bits (with no holes), so if you XOR them, a pattern of several trailing 1's appears. Also, when you shift numbers right, the XORs are shifted right as well: (A xor B)>>N == A>>N xor B>>N.
N       N>>1    gray = N ^ (N>>1)
0000    0000    0000
0001    0000    0001      (N xor prev = 0001, N>>1 xor prev = 0000, gray xor prev = 0001)
0010    0001    0011      (N xor prev = 0011, N>>1 xor prev = 0001, gray xor prev = 0010)
0011    0001    0010      (N xor prev = 0001, N>>1 xor prev = 0000, gray xor prev = 0001)
0100    0010    0110      (N xor prev = 0111, N>>1 xor prev = 0011, gray xor prev = 0100)
For each row, the XOR with the previous N and the XOR with the previous N>>1 differ in a single bit (compare the last two values in parentheses above). This means that if you XOR them, you get a pattern with exactly one bit set. So,
(A xor B) xor (A>>1 xor B>>1) == (A xor A>>1) xor (B xor B>>1) == gray(A) xor gray(B)
As XOR gives us 1s in the differing bits, this proves that neighbouring codes differ in only a single bit, and that is the main property we want from a Gray code.
For completeness, it should also be shown that N can be restored from its N ^ (N>>1) value: knowing the n-th bit of N, we can restore the (n-1)-th bit using XOR.
N_[bit n-1] = N_[bit n] xor gray(N)_[bit n-1]
Starting from the largest bit (which is XORed with 0), we can thus restore the whole number.
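A C sketch of that reconstruction (gray_decode is an illustrative name): starting at the most significant bit, each lower bit of N is the already-recovered bit just above it XORed with the corresponding Gray-code bit.

#include <stdio.h>

/* Restore N from its Gray code g = N ^ (N >> 1), working from the most
   significant bit down: the top bit is unchanged, and each lower bit of N
   is the bit of N just above it XORed with the matching bit of g. */
static unsigned gray_decode(unsigned g)
{
    unsigned n = 0;
    for (int bit = 31; bit >= 0; bit--) {
        unsigned above = (bit == 31) ? 0u : (n >> (bit + 1)) & 1u;
        n |= (above ^ ((g >> bit) & 1u)) << bit;
    }
    return n;
}

int main(void)
{
    for (unsigned i = 0; i < 8; i++)
        printf("%u -> gray %u -> back %u\n", i, i ^ (i >> 1), gray_decode(i ^ (i >> 1)));
    return 0;
}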
Prove by induction.
Hint: The 1<<kth to (1<<(k+1))-1th values are twice the 1<<(k-1)th to (1<<k)-1th values, plus either zero or one.
Edit: That was way too confusing. What I really mean is,
gray(2*n) and gray(2*n+1) are 2*gray(n) and 2*gray(n)+1 in some order.
The Wikipedia entry you refer to explains the equation in a very circuitous manner.
However, it helps to start with this:
Therefore the coding is stable, in the sense that once a binary number appears in G_n it appears in the same position in all longer lists; so it makes sense to talk about the reflective Gray code value of a number: G(m) = the m-th reflecting Gray code, counting from 0.
In other words, for m below 2^(n-1) the code is stable: G_n(m) = G_{n-1}(m). For m at or above 2^(n-1), G_n(m) is the reflected entry of the shorter list with the new top bit set: G_n(m) = 2^(n-1) | G_{n-1}(2^n - 1 - m). For example, G_2(3) = 2 | G_1(0) = 2.
Written as a recurrence:
G(m, bits), with k = 2^(bits - 1):
G(m, bits) = m >= k ? (k | G(~m & (k - 1), bits - 1)) : G(m, bits - 1)
G(m, 1) = m
Working out the math in its entirety, you get (m ^ (m >> 1)) for the zero-based Gray code.
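Here is a small C sketch of that recurrence (the function name G and the explicit masking of the reflected index are my rendering of it); printing both columns shows it agrees with m ^ (m >> 1) for 4 bits.

#include <stdio.h>

/* Reflect-and-prefix recurrence: entries below the halfway point k are copied
   from the shorter list, entries at or above it are the shorter list read
   backwards (index ~m & (k - 1)) with the new top bit set. */
static unsigned G(unsigned m, int bits)
{
    if (bits == 1)
        return m;                                    /* G(m, 1) = m          */
    unsigned k = 1u << (bits - 1);
    if (m >= k)
        return k | G(~m & (k - 1), bits - 1);        /* reflected upper half */
    return G(m, bits - 1);                           /* stable lower half    */
}

int main(void)
{
    for (unsigned m = 0; m < 16; m++)                /* the two columns agree */
        printf("%2u: recurrence %2u   formula %2u\n", m, G(m, 4), m ^ (m >> 1));
    return 0;
}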
Incrementing a number, when you look at it bitwise, flips all trailing ones to zeros and the lowest zero above them to one. That's a whole lot of bits flipped, and the purpose of Gray code is to make it exactly one. This transformation makes both numbers (before and after the increment) equal on all the bits being flipped, except the highest one.
Before (plain binary):
    011...11
  +        1
  ----------
    100...00
After (n ^ (n >> 1) applied to both):
    010...00
    110...00
    ^
    This is the only bit that differs
    (it might be flipped in both numbers by carry over from a higher position)
n ^ (n >> 1) is easier to compute, but it seems that only changing the trailing 011..1 to 010..0 (i.e. zeroing the whole trailing block of 1's except the highest 1) and 10..0 to 11..0 (i.e. flipping the highest 0 in the trailing 0's) is enough to obtain a Gray code.

Query about working out whether number is a power of 2

Using the classic code snippet:
if (x & (x-1)) == 0
If the answer is 1, then it is false and not a power of 2. However, working on 5 (not a power of 2) and 4 results in:
0001 1111
0001 1111
0000 1111
That's 4 1s.
Working on 8 and 7:
1111 1111
0111 1111
0111 1111
The 0 is first, but we have 4.
In this link (http://www.exploringbinary.com/ten-ways-to-check-if-an-integer-is-a-power-of-two-in-c/) for both cases, the answer starts with 0 and there is a variable number of 0s/1s. How does this answer whether the number is a power of 2?
You need to refresh yourself on how binary works. 5 is not represented as 0001 1111 (5 bits on), it's represented as 0000 0101 (2^2 + 2^0), and 4 is likewise not 0000 1111 (4 bits on) but rather 0000 0100 (2^2). The numbers you wrote are actually in unary.
Wikipedia, as usual, has a pretty thorough overview.
Any power-of-two number can be represented in binary with a single 1 and multiple 0s.
eg.
10000(16)
1000(8)
100(4)
If you subtract 1 from any power of two number, you will get all 1s to the right of where the original one was.
10000(16) - 1 = 01111(15)
ANDing these two numbers will give you 0 every time.
In the case of a non-power of two number, subtracting one will leave at least one "1" unchanged somewhere in the number like:
10010(18) - 1 = 10001(17)
ANDing these two will result in
10000(16) != 0
Keep in mind that if x is a power of 2, there is exactly 1 bit set. Subtract 1, and you know two things: the bit that was set is no longer set, and every bit below it is now set. So, when you do a bitwise and &, every bit that is set in x is clear in (x-1), and every bit that is set in (x-1) is clear in x. So the and of each bit is always 0.
In other words, for any power-of-two bit pattern, you are guaranteed that (x&(x-1)) is zero.
((n & (n-1)) == 0)
It checks whether the value of “n” is a power of 2.
Example:
if n = 8, the bit representation is 1000
n & (n-1) = (1000) & ( 0111) = (0000)
So it returns zero only if the value is a power of 2.
The only exception to this is ‘0’:
0 & (0-1) = 0, but ‘0’ is not a power of two.
Why does this make sense?
Imagine what happens when you subtract 1 from a string of bits. You read from right to left,
turning each 0 into a 1 until you hit a 1, at which point that bit is flipped to 0:
1000100100 -> (subtract 1) -> 1000100011
Thus, every bit, up through the first 1, is flipped. If there’s exactly one 1 in the number, then every bit (other than the leading zeros) will be flipped. Thus, n & (n-1) == 0 if there’s exactly one 1. If there’s exactly one 1, then it must be a power of two.
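Putting the pieces together, here is a minimal C version of the check, with the n == 0 exception handled explicitly (is_power_of_two is just an illustrative name).

#include <stdio.h>
#include <stdbool.h>

/* A power of two has exactly one bit set, so clearing its lowest set bit
   with n & (n - 1) leaves zero; 0 is excluded explicitly. */
static bool is_power_of_two(unsigned n)
{
    return n != 0 && (n & (n - 1)) == 0;
}

int main(void)
{
    for (unsigned n = 0; n <= 18; n++)
        printf("%2u -> %s\n", n, is_power_of_two(n) ? "power of two" : "not");
    return 0;
}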
