Parity bit checks using General Hamming Algorithm

In a logic circuit, I have an 8-bit data vector that is fed into an ECC IC, for which I am supposed to develop the logic, and which produces a vector of 5 parity bits. My first step in developing the logic (with logic gates, XOR) is to figure out which parity bit checks which data bits, since they are interleaved. Using even parity, and following the general Hamming code rule (a parity bit at every position 2^n), I get the following output sequence:
P1 P2 D1 P3 D2 D3 D4 P4 D5 D6 D7 D8 P5
Following the General Hamming Algorithm:
For each parity bit, at positions 1, 2, 4, 8, 16 and so on (powers of 2): we skip the first (position - 1) bits, then alternately check and skip bits. For the first parity bit we check one bit, skip one, check one, skip one, and so on; for the other parity bits we check/skip runs of 2^n bits at a time, where 2^n is the position they occupy in the output array (P1 P2 D1 P3 D2 D3 D4 P4 D5 D6 D7 D8 P5).
Following that convention, I get:
P1 Checks data bits -> XOR(3 5 7 9 10 12)
P2 Checks data bits -> XOR(3 6 7 10 11)
P3 Checks data bits -> XOR(5 6 10 11 12)
P4 Checks data bits -> XOR(9 10 11)
Am I right? What confuses me is whether I should count the parity bit itself as one of the 2^n bits to be checked, or start one bit after that specific parity bit. It pretty much comes down to whether the range is inclusive or not.
Thank you for your help in advance!
Cheers!

You can follow this scheme. The bits marked in each row must sum to 0 (mod 2); in other words, for the marked positions in each row, the number of set bits must be even.
P1 P2 D1 P3 D2 D3 D4 P4 D5 D6 D7 D8
x     x     x     x     x     x
   x  x        x  x        x  x
         x  x  x  x              x
                     x  x  x  x  x
I don't understand why you have P5 in the scheme.
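If it helps, the rows above can be generated mechanically: code-word position i is checked by the parity bit at position 2^k exactly when bit k of i is set. A small C sketch of that (my own illustration, not part of the original answer):
#include <stdio.h>

/* For a 12-bit code word (positions 1..12, parity bits at 1, 2, 4, 8),
   print the positions each parity bit covers: position i is covered by
   the parity bit at position 2^k exactly when bit k of i is set. */
int main(void) {
    for (int p = 1; p <= 8; p <<= 1) {
        printf("parity at position %d covers:", p);
        for (int i = 1; i <= 12; i++)
            if (i & p)
                printf(" %d", i);
        printf("\n");
    }
    return 0;
}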

Related

How to calculate Hamming code of (31,26)?

I need to construct the (31,26) Hamming code of 0x444.
After reading Wikipedia and the algorithm shown on GeeksforGeeks, I still can't understand how this works, as my construction ended up different from the result of a calculator I found on the internet.
My result is: 0100 0100 0010 0010 or 0x4422
is it correct?
As I understand:
P1 = Bitwise XOR(C1, C3, C5, C7, C9, C11, C13, C15, C17, ...) = 0
P2 = Bitwise XOR(C2, C3, C6, C7, C10, C11, C14, C15, ...) = 1
P3 = Bitwise XOR(C4, C5, C6, C7, C12, C13, C14, C15, ...) = 0
P4 = Bitwise XOR(C8, C9, C10, C11, C12, C13, C14, C15, ...) = 0
P5 = Bitwise XOR(C16, C17, ...) = 0
Another thing I can't understand: if a (31,26) Hamming code is supposed to output a 31-bit result with 5 parity bits and 26 data bits, why does the (7,4) Hamming code transform each group of 4 bits into a 7-bit representation, and not just one representation of 7 bits with 3 parity bits?
Thanks.
Yes, assuming you are numbering the bits from 1 at the right-hand end, then 0x000444 is encoded as 0x00004422 for a (31,26) Hamming Code -- for an even parity code-word.
Where C1, C2, etc. are bits 1, 2, etc. of the code-word, and P1, P2, etc. are parity bits 1, 2, etc., I think it is clearer to say that:
P1 = C1 = Bitwise_XOR(C3, C5, C7, C9, ...)
so that:
Bitwise_XOR(C1, C3, C5, C7, C9, ...) == 0
and so on. This is even parity.
You do not say which "calculator" you tried, but it could be that the discrepancy you see has to do with which end you number from. I note that Wikipedia gives:
If a byte of data to be encoded is 10011010, then the data word (using _ to represent the parity bits) would be __1_001_1010, and the code word is 011100101010.
which is clearly counting bits from the left-hand end.
I regret I do not understand your second question. I can say that a (31,26) Hamming Code does indeed take 26 bits of data and adds 5 parity bits to produce a 31-bit code-word. And that a (7,4) Hamming Code does likewise for 4 bits of data, 3 parity bits and a 7-bit code-word.
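For what it's worth, here is a small C sketch of an even-parity (31,26) encoder (my own illustration, with a made-up helper name), numbering bits from 1 at the right-hand end as above; it reproduces 0x444 -> 0x4422:
#include <stdio.h>
#include <stdint.h>

/* Sketch: scatter the 26 data bits over the non-power-of-two positions
   1..31, then set each parity bit so the XOR over the positions it
   covers comes out even. */
static uint32_t hamming_31_26_encode(uint32_t data) {
    uint32_t code = 0;
    int d = 0;
    for (int pos = 1; pos <= 31; pos++) {
        if ((pos & (pos - 1)) == 0)        /* powers of two are parity positions */
            continue;
        if ((data >> d++) & 1)
            code |= 1u << (pos - 1);
    }
    for (int k = 0; k < 5; k++) {          /* parity bits P1..P5 at 1,2,4,8,16 */
        uint32_t parity = 0;
        for (int pos = 1; pos <= 31; pos++)
            if ((pos >> k) & 1)            /* position covered by P(k+1)? */
                parity ^= (code >> (pos - 1)) & 1;
        if (parity)
            code |= 1u << ((1 << k) - 1);  /* make the covered XOR even */
    }
    return code;
}

int main(void) {
    printf("0x%X\n", hamming_31_26_encode(0x444));   /* prints 0x4422 */
    return 0;
}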

beauty of a binary number game

This is a fairly well-known problem (similar question: number of set bits in a number and a game based on set bits, but the answer there is not clear):
The beauty of a number X is the number of 1s in the binary
representation of X. Two players are playing a game. There is a number n
written on a blackboard. The game is played as follows:
Each time, a player chooses an integer k (k >= 0) such that 2^k is
less than n and (n - 2^k) is equally as beautiful as n. Then n is erased from
the blackboard and replaced with n - 2^k. The player that cannot continue
the game (there is no k that satisfies the constraints) loses the
game.
The first player starts the game and they alternate turns.
Knowing that both players play optimally, you must specify the
winner.
Now the solution I came up with is this:
Moving a 1 bit one position to the right subtracts 2^p from the number, where p = (position the bit moved to) - 1, counting positions from 1 at the right. Example: 11001 --> 25; if I change it to 10101 --> 21, that is 25 - 2^2.
A player can't make two or more such right-moves in one round (this is not the programmatic right shift), since the amounts subtracted can't sum to a power of 2. So the players are left with moving one set bit one position to its right each round. This means there can be only R rounds, where R is the total number of times a set bit can be moved one position further right. So the winner will always be the 1st player if R is an odd number, and the 2nd player if R is even.
Original#: 101001 41
after 1st: 11001 25 (41-16)
after 2nd: 10101 21 (25-4)
after 1st: 1101 13 (21-8)
after 2nd: 1011 11 (13-2)
after 1st: 111 7 (11-4) --> the game will end at this point
I'm not sure about the correctness of the approach, is this correct? or am I missing something big?
Your approach is on the right track. The observation to be made here, as also illustrated in the example you gave, is that the game ends when all the 1s occupy the least significant bits. So we basically need to count how many swaps are needed to move all the zeros to the most significant bits.
Let's take an example. Say the initial number from which the game starts is 12; then the game state is as follows:
Initial state 1100 (12) ->
A makes move 1010 (10) ->
B makes move 1001 (9) ->
A makes move 0101 (5) ->
B makes 0011 (3)
A cannot move further and B wins
This can be achieved programmatically (Java 7) as:
public int identifyWinner(int n) {
    int total = 0, numZeros = 0;
    while (n != 0) {
        if ((n & 0x01) == 1) {
            total += numZeros;   // this 1 must pass every 0 already seen to its right
        } else {
            numZeros++;
        }
        n >>= 1;
    }
    // odd total number of moves: player 1 wins; even: player 2 wins
    return (total & 0b1) == 1 ? 1 : 2;
}
Also note that even if a player has multiple choices available for the next move, as illustrated below, the outcome will not change, though the intermediate states leading to that outcome may change.
Again let us look at the state flow taking the same example of initial number 12
Initial state 1100 (12) ->
A makes move 1010 (10) ->
(B here has multiple choices) B makes move 0110 (6)
A makes move 0101 (5) ->
B makes 0011 (3)
A cannot move further and B wins
A cannot move further: for no k (k >= 0 and 2^k < n, so k = 0 and k = 1 are the only plausible choices here) does n - 2^k have the same beauty as n, so B wins.
Multiple choices are possible with 41 as starting point as well, but A will win always (41(S) -> 37(A) -> 35(B) -> 19(A) -> 11(B) -> 7(A)).
Hope it helps!
Yes, on each turn a 1 can move one position right if there is a 0 immediately to its right.
But, no, the number of moves is not related to number of zeros. Counterexample:
101 (1 possible move)
versus
110 (2 possible moves)
The number of moves in the game is the sum of the total 1's to the left of each 0. (Or conversely the sum of the total 0's to the right of each 1.)
(i.e. 11000 has 2 + 2 + 2 = 6 moves, but 10100 has 1 + 2 + 2 = 5 moves, because one 0 has one less 1 to its left)
The winner of the game will be the first player if the total moves in the game is odd, and will be the second player if the number of moves in the game is even.
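Here is a direct transcription of that rule into C (my own sketch, not from either answer); the parity of its result decides the winner:
/* Count the moves: for each 0, add the number of 1s to its left
   (equivalently, for each 1, the number of 0s to its right). */
static int countMoves(unsigned n) {
    int ones = 0, moves = 0;
    for (int b = 31; b >= 0; b--) {   /* scan from the MSB down */
        if ((n >> b) & 1)
            ones++;
        else
            moves += ones;            /* this 0 has 'ones' 1s to its left */
    }
    return moves;   /* countMoves(24) == 6 (11000b), countMoves(20) == 5 (10100b) */
}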
Proof:
On any given move a player must choose a bit corresponding to
a 0 immediately to the right of a 1. Otherwise the total number of
1's will increase if a bit corresponding to a different 0 is chosen,
and will decrease if a bit corresponding to a 1 is chosen. Such a move
will result in the 1 to the right of the corresponding chosen bit
being moved one position to its right.
Given this observation, each
1 has to move through every 0 to its right; and every 0 it moves
through consumes one move. Note that regardless of the choices either
player makes on any given move, the total number of moves in the game
remains fixed.
Since Harshdeep has already posted a correct solution looping over each bit (the O(n) solution), I'll post an optimized divide-and-conquer O(log(n)) solution (in C/C++) reminiscent of the algorithm used to calculate Hamming weight. Of course, using Big-Oh to describe the algorithm here is somewhat dubious since the number of bits is constant.
I've verified that the below code on all 32-bit unsigned integers gives the same result as the linear algorithm. This code runs over all values in order in 45 seconds on my machine, while the linear code takes 6 minutes and 45 seconds.
Code:
bool FastP1Win(unsigned n) {
    unsigned t;
    // lo: 0/1 count parity
    // hi: move count parity
    // 00 -> 00 : 00 >>1-> 00 &01-> 00 ; 00 |00-> 00 ; 00 &01-> 00 &00-> 00 *11-> 00 ^00-> 00
    // 01 -> 01 : 01 >>1-> 00 &01-> 00 ; 01 |00-> 01 ; 01 &01-> 01 &00-> 00 *11-> 00 ^01-> 01
    // 10 -> 11 : 10 >>1-> 01 &01-> 01 ; 10 |01-> 11 ; 10 &01-> 00 &01-> 00 *11-> 00 ^11-> 11
    // 11 -> 00 : 11 >>1-> 01 &01-> 01 ; 11 |01-> 11 ; 11 &01-> 01 &01-> 01 *11-> 11 ^11-> 00
    t = (n >> 1) & 0x55555555;
    n = (n | t) ^ ((n & t & 0x55555555) * 0x3);
    t = n << 2;   // move every right 2-bit solution to line up with the left 2-bit solution
    n ^= ((n & t & 0x44444444) << 1) ^ t;   // merge the right 2-bit solution into the left 2-bit solution
    t = n << 4;   // move every right 4-bit solution to line up with the left 4-bit solution
    n ^= ((n & t & 0x40404040) << 1) ^ t;   // merge the right 4-bit solution into the left 4-bit solution (stored in the high 2 bits of every 4 bits)
    t = n << 8;   // move every right 8-bit solution to line up with the left 8-bit solution
    n ^= ((n & t & 0x40004000) << 1) ^ t;   // merge the right 8-bit solution into the left 8-bit solution (stored in the high 2 bits of every 8 bits)
    t = n << 16;  // move every right 16-bit solution to line up with the left 16-bit solution
    n ^= ((n & t) << 1) ^ t;                // merge the right 16-bit solution into the left 16-bit solution (stored in the high 2 bits of every 16 bits)
    return (int)n < 0;  // return the parity of the move count of the overall solution (now stored in the sign bit)
}
Explanation:
To find number of moves in the game, one can divide the problem into smaller pieces and combine the pieces. One must track the number of 0's in any given piece, and also the number of moves in any given piece.
For instance, if we divide the problem into two 16-bit pieces then the following equation expresses the combination of the solutions:
totalmoves = leftmoves + rightmoves + (rightzeros * (16 - leftzeros)); // 16 - leftzeros yields the leftones count
Since we don't care about the total number of moves, just whether the value is even or odd (the parity), we only need to track the parity.
Here is the truth table for the parity of addition:
even + even = even
even + odd = odd
odd + even = odd
odd + odd = even
Given the above truth table, the parity of addition can be expressed with an XOR.
And the truth table for the parity of multiplication:
even * even = even
even * odd = even
odd * even = even
odd * odd = odd
Given the above truth table, the parity of multiplication can be expressed with an AND.
If we divide the problem into pieces of even size, then the parity of the zero count, and the one count, will always be equal and we need not track or calculate them separately.
At any given stage of the algorithm we need to know the parity of the zero/one count, and the parity of the number of moves in that piece of the solution. This requires two bits. So, let's transform every two bits in the solution so that the high bit becomes the move count parity, and the low bit becomes the zero/one count parity.
This is accomplished with this computation:
unsigned t;
t = (n >> 1) & 0x55555555;
n = (n | t) ^ ((n & t & 0x55555555) * 0x3);
From here we combine every adjacent 2-bit solution into a 4-bit solution (using & for multiplication, ^ for addition, and the relationships described above), then every adjacent 4-bit solution into a 8-bit solution, then every adjacent 8-bit solution into a 16-bit solution, and finally every adjacent 16-bit solution into a 32-bit solution.
At the end, only the parity of the number of moves is returned, stored in the most significant (sign) bit.
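For reference, a minimal spot-check harness (my own sketch; it assumes the FastP1Win definition above is compiled in the same translation unit):
#include <stdio.h>
#include <stdbool.h>

bool FastP1Win(unsigned n);           /* the function above */

/* Linear reference: parity of the move count (0s to the right of each 1). */
static bool LinearP1Win(unsigned n) {
    unsigned zeros = 0, moves = 0;
    for (; n; n >>= 1) {
        if (n & 1) moves += zeros;
        else zeros++;
    }
    return (moves & 1) != 0;          /* true: first player wins */
}

int main(void) {
    unsigned tests[] = { 12u, 41u, 24u, 20u, 0xFFFFFFFFu };
    for (unsigned i = 0; i < sizeof tests / sizeof tests[0]; i++)
        printf("n=%u linear=%d fast=%d\n", tests[i],
               (int)LinearP1Win(tests[i]), (int)FastP1Win(tests[i]));
    return 0;
}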

Non-recursive Grey code algorithm understanding

This is a task from an algorithms book.
The thing is that I completely don't know where to start!
Trace the following non-recursive algorithm to generate the binary reflexive
Gray code of order 4. Start with the n-bit string of all 0’s.
For i = 1, 2, ... 2^n-1, generate the i-th bit string by flipping bit b in the
previous bit string, where b is the position of the least significant 1 in the
binary representation of i.
So I know the Gray code for 1 bit should be 0 1, for 2 bits 00 01 11 10, etc.
Many questions
1) Do I know that for n = 1 I can start off with 0 1?
2) How should I understand "start with the n-bit string of all 0's"?
3) "Previous bit string"? Which string is the "previous"? Previous means from lower n-bit? (for instance for n=2, previous is the one from n=1)?
4) How do I even convert 1-bit strings to 2-bit strings if the only operation there is to flip a bit?
This confuses me a lot. The only "human" method I understand so far is: take the strings of the lower n-bit code, duplicate them, reverse the 2nd set, prepend 0 to every element of the 1st set and 1 to every element of the 2nd set. Done (example: 0 1 -> 0 1 | 0 1 -> 0 1 | 1 0 -> 00 01 | 11 10 -> 00 01 11 10, done).
Thanks for any help
The answer to all four of your questions is that this algorithm does not start with lower values of n. All strings it generates have the same length, and the i-th string (for i = 1, ..., 2^n - 1) is generated from the (i-1)-th one.
Here are the first few steps for n = 4:
Start with G0 = 0000
To generate G1, flip 0-th bit in G0, as 0 is the position of the least significant 1 in the binary representation of 1 = 0001b. G1 = 0001.
To generate G2, flip 1-st bit in G1, as 1 is the position of the least significant 1 in the binary representation of 2 = 0010b. G2 = 0011.
To generate G3, flip 0-th bit in G2, as 0 is the position of the least significant 1 in the binary representation of 3 = 0011b. G3 = 0010.
To generate G4, flip 2-nd bit in G3, as 2 is the position of the least significant 1 in the binary representation of 4 = 0100b. G4 = 0110.
To generate G5, flip 0-th bit in G4, as 0 is the position of the least significant 1 in the binary representation of 5 = 0101b. G5 = 0111.
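Here is the same trace as a small C sketch (my own, not from the book); it uses the identity that the least significant 1 of i is isolated by i & -i:
#include <stdio.h>

/* Start with 0000; to get the i-th string, flip the bit at the position
   of the least significant 1 in the binary representation of i. */
int main(void) {
    const unsigned n = 4;
    unsigned g = 0;
    for (unsigned i = 0; i < (1u << n); i++) {
        if (i)
            g ^= i & -i;                       /* flip the lowest set bit of i */
        for (int b = (int)n - 1; b >= 0; b--)  /* print g as an n-bit string */
            putchar('0' + (int)((g >> b) & 1));
        putchar('\n');                          /* 0000 0001 0011 0010 0110 ... */
    }
    return 0;
}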

Rotate left verilog case

My task is to write a 16-bit ALU in Verilog. I ran into difficulties with the part that rotates the operand and with doing 2's complement addition and subtraction. I know how to work that out with paper and pencil, but I can't figure out how to do it in Verilog.
for example:
A is denoted as a15 a14 a13 a12 a11 a10 a9 a8 a7 a6 a5 a4 a3 a2 a1 a0
if I rotate left by 4 bits,
the answer would be
a11 a10 a9 a8 a7 a6 a5 a4 a3 a2 a1 a0 a15 a14 a13 a12
I tried concatenation but it turned out to be incorrect.
I need your help...
The following will work using one shifter:
assign A_out = {A_in,A_in} >> (16-shift[3:0]);
When shift is 0 the left A_in is selected. As shift increases, the left A_in shifts to the left and the MSBs of the right A_in fill in.
If synthesizing, then you may want to use muxes, as dynamic shift logic tends to require more gates. A 16-bit barrel shifter will require 4 levels of 2-to-1 muxes.
wire [15:0] tmp [3:1];
assign tmp[3] = shift[3] ? { A_in[ 7:0], A_in[15: 8]} : A_in;
assign tmp[2] = shift[2] ? {tmp[3][11:0],tmp[3][15:12]} : tmp[3];
assign tmp[1] = shift[1] ? {tmp[2][13:0],tmp[2][15:14]} : tmp[2];
assign A_out = shift[0] ? {tmp[1][14:0],tmp[1][15 ]} : tmp[1];
assign A_out = A_in << bits_to_rotate;
Where bits_to_rotate can be a variable value (either a signal or a reg).
This will infer a generic shifter using multiplexers, or a barrel shifter, whichever suits the target hardware better. The synthesizer will take care of that.
Oh, well. If you want to rotate instead of shift, the thing is just a bit trickier:
assign A_out = (A_in << bits_to_rotate) | (A_in >> (16 - bits_to_rotate));
Why is concatenation incorrect? This should do what you ask.
assign A_out[15:0] = {A_in[11:0], A_in[15:12]};
The best way I found to do this is to find a pattern. When you rotate an 8-bit signal left by 1 position (8'b00001111 << 1) the result is 8'b00011110; rotating left by 9 positions (8'b00001111 << 9) gives the same result, and so does rotating by 17 positions. This reduces the distinct cases to the rotate amount modulo 8.
If you look, the three lowest bits of all amounts equivalent to rotating 1 position (1, 9, 17, 25, ..., 249) are equal to 001 (1).
The three lowest bits of all amounts equivalent to rotating 6 positions (6, 14, 22, 30, ..., 254) are equal to 110 (6).
So you can apply a mask (8'b00000111) to determine the correct shift, by zeroing all the other bits:
reg_out_temp <= reg_in_1 << (reg_in_2 & 8'h07);
reg_out_temp must be twice the width of reg_in_1; in this case reg_out_temp is 16 bits and reg_in_1 is 8 bits, so the bits shifted past the low byte are caught in the high byte, and you can combine the two bytes with an OR expression:
reg_out <= reg_out_temp[15:8] | reg_out_temp[7:0];
So after two clock cycles you have the result. For a 16-bit rotation your mask shall be 8'b00001111 (8'h0F), since rotate amounts repeat modulo 16, and your temporary register shall be 32 bits wide.
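When debugging RTL like this, a C golden model is handy. Here is one for a 16-bit rotate-left (my own sketch, assuming a 16-bit operand and a 4-bit rotate amount), mirroring the {A_in,A_in} >> (16-shift) trick from the first answer:
#include <stdio.h>
#include <stdint.h>

/* Golden model: duplicate the operand and shift right, exactly like
   {A_in,A_in} >> (16 - shift) in the RTL. */
static uint16_t rol16(uint16_t a, unsigned shift) {
    shift &= 0xF;                                   /* 4-bit rotate amount */
    uint32_t doubled = ((uint32_t)a << 16) | a;     /* {A_in, A_in} */
    return (uint16_t)(doubled >> (16 - shift));
}

int main(void) {
    printf("0x%04X\n", rol16(0x1234, 4));   /* prints 0x2341 */
    return 0;
}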

Is there a specialized algorithm, faster than quicksort, to reorder data ACEGBDFH?

I have some data coming from the hardware. Data comes in blocks of 32 bytes, and there are potentially millions of blocks. The blocks are scattered across two halves in the following way (a letter is one block):
A C E G I K M O B D F H J L N P
or if numbered
0 2 4 6 8 10 12 14 1 3 5 7 9 11 13 15
First all blocks with even indexes, then the odd blocks. Is there a specialized algorithm to reorder the data correctly (alphabetical order)?
The constraints are mainly on space: I don't want to allocate another buffer for reordering, just one more block. But I'd also like to keep the number of moves low; a simple quicksort would be O(N log N). Is there a faster O(N) solution for this special reordering case?
Since this data is always in the same order, sorting in the classical sense is not needed at all. You do not need any comparisons, since you already know in advance how any two given data points compare.
Instead, you can produce the permutation on the data directly. If you transform it into cyclic form, this will tell you exactly which swaps to do to transform the permuted data into ordered data.
Here is an example for your data:
0 2 4 6 8 10 12 14 1 3 5 7 9 11 13 15
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Now calculate the inverse (I'll skip this step, because I am lazy here, assume instead the permutation I have given above actually is the inverse already).
Here is the cyclic form:
(0)(1 8 4 2)(3 9 12 6)(5 10)(7 11 13 14)(15)
So if you want to reorder a sequence structured like this, you would do
# first cycle
# nothing to do
# second cycle
swap 1 8
swap 8 4
swap 4 2
# third cycle
swap 3 9
swap 9 12
swap 12 6
# so on for the other cycles
If you had done this for the inverse instead of the original permutation, you would get the correct sequence with a provably minimal number of swaps.
EDIT:
For more details on something like this, see the chapter on Permutations in TAOCP for example.
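Here is that idea as a C sketch (my own; the names src and reorder are made up, and a single character stands in for one 32-byte block). It follows each cycle once, from its smallest index, using one spare block as the temporary:
#include <stdio.h>

#define N 16

/* Where the block that belongs at sorted position i currently sits:
   even-numbered blocks fill the first half, odd-numbered the second. */
static int src(int i) { return (i % 2 == 0) ? i / 2 : N / 2 + i / 2; }

static void reorder(char a[]) {
    for (int start = 0; start < N; start++) {
        int j = src(start);
        while (j > start)           /* find the smallest index in this cycle */
            j = src(j);
        if (j < start)
            continue;               /* cycle already handled earlier */
        char t = a[start];          /* the one spare block */
        int i = start;
        while (src(i) != start) {
            a[i] = a[src(i)];       /* pull each block from its source */
            i = src(i);
        }
        a[i] = t;
    }
}

int main(void) {
    char a[N + 1] = "ACEGIKMOBDFHJLNP";
    reorder(a);
    printf("%s\n", a);              /* prints ABCDEFGHIJKLMNOP */
    return 0;
}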
So you have data coming in in a pattern like
a0 a2 a4...a14 a1 a3 a5...a15
and you want to have it sorted to
b0 b1 b2...b15
With some reordering the permutation can be written like:
a0 -> b0
a8 -> b1
a1 -> b2
a2 -> b4
a4 -> b8
a9 -> b3
a3 -> b6
a6 -> b12
a12 -> b9
a10 -> b5
a5 -> b10
a11 -> b7
a7 -> b14
a14 -> b13
a13 -> b11
a15 -> b15
So if you want to sort it in place, with only one block of additional space in a temporary t (O(1) extra space), this can be done with:
t = a8; a8 = a4; a4 = a2; a2 = a1; a1 = t
t = a9; a9 = a12; a12 = a6; a6 = a3; a3 = t
t = a10; a10 = a5; a5 = t
t = a11; a11 = a13; a13 = a14; a14 = a7; a7 = t
Edit: The general case (for N != 16), if it is solvable in O(N), is actually an interesting question. I suspect the cycles always start at a prime p which satisfies p < N/2 && N mod p != 0, and that the indices follow a recurrence like i(k+1) = 2*i(k) mod N, but I am not able to prove it. If this is the case, deriving an O(N) algorithm is trivial.
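One quick way to probe that conjecture (my own sketch) is to print the cycle structure of the permutation for a given N; for N = 16 it reproduces the cyclic form from the other answer:
#include <stdio.h>

#define N 16   /* change to explore other (even) sizes */

/* where the block that belongs at sorted position i currently sits */
static int src(int i) { return (i % 2 == 0) ? i / 2 : N / 2 + i / 2; }

int main(void) {
    int seen[N] = { 0 };
    for (int s = 0; s < N; s++) {
        if (seen[s]) continue;
        printf("(");
        for (int j = s; !seen[j]; j = src(j)) {
            seen[j] = 1;
            printf(" %d", j);
        }
        printf(" )");
    }
    printf("\n");   /* N=16: ( 0 )( 1 8 4 2 )( 3 9 12 6 )( 5 10 )( 7 11 13 14 )( 15 ) */
    return 0;
}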
Maybe I'm misunderstanding, but if the order is always identical to the one given, then you can "pre-program" (i.e. avoid all comparisons) the optimal solution, which is the one with the minimum number of swaps to go from the given string to ABCDEFGHIJKLMNOP and which, for something this small, you can work out by hand -- see LiKao's answer.
It is easier for me to label your set with numbers:
0 2 4 6 8 10 12 14 1 3 5 7 9 11 13 15
Start from the 14 and move all the even numbers into place (8 swaps). You will get this:
0 1 2 9 4 5 6 13 8 3 10 7 12 11 14 15
Now you need another 3 swaps (9 with 3, 7 with 13, then 11 with the 13 that was moved into 7's place).
A total of 11 swaps. Not a general solution, but it could give you some hints.
You can also view the intended permutation as a shuffle of the address bits, abcd <-> dabc (with abcd being the individual bits of the index), like:
#include <stdio.h>

#define ROTATE(v,n,i) (((v)>>(i)) | (((v) & ((1u <<(i))-1)) << ((n)-(i))))

/******************************************************/
int main (int argc, char **argv)
{
    unsigned i, a, b;

    for (i = 0; i < 16; i++) {
        a = ROTATE(i, 4, 1);   /* abcd -> dabc: the source index for position i */
        b = ROTATE(a, 4, 3);   /* rotating back recovers i */
        fprintf(stdout, "i=%u a=%u b=%u\n", i, a, b);
    }
    return 0;
}
/******************************************************/
That was count sort, I believe.
