How to compute the Hamming code for 10 bits?

I have seen examples of Hamming code detection and correction with 8 bits or 12 bits. Suppose I had the bit string 1101 0110 11, which contains 10 bits.
Do I need to add two additional bits to that bit string to complete the last nibble? If so, do I add 0s or 1s?
I have looked for other examples, but cannot seem to find any with partial nibbles, so I can't work out the procedure. Thank you.


Get original hamming word without control bits

How can I get the original word back without the Hamming control bits?
For example:
I have this Hamming-encoded word: 011001101100
How can I get back to the original word? The correct answer is: 00111100
The encoding algorithm is described in this Wikipedia article. The article contains a table that can be used to perform the decoding process by hand. Converting the decoding process to software is left as an exercise for the reader.
First, write the received code word along the bottom of the table. Then, for each row, compute the parity and write it into the column at the right. For example, for row p8, we want the parity of the five bits at the end of the code word, as indicated by the red X's. If there are an even number of 1 bits in the indicated positions, write a 0 in the right column, otherwise write a 1.
The resulting binary number in the right column (MSB on the bottom) indicates the bit position of the bit with an error. If the number is 0, then no bits have an error. In this example, the right column contains the number 3, so there's a bit error at bit position 3.
To complete the decoding, follow the steps below:
0110 0110 1100 the received code word
0100 0110 1100 flip the bit that has the error (bit 3 in this example)
__0_ 011_ 1100 remove the parity bits
and the remaining bits are 00111100.
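For the curious, here is a minimal Python sketch of the table-based decoding described above, assuming the standard Hamming(12,8) layout with parity bits at (1-based) positions 1, 2, 4 and 8 (the function name and string interface are just for illustration):

def hamming_12_8_decode(codeword):
    """Decode a 12-bit Hamming code word given as a string of '0'/'1'.
    Parity bits sit at (1-based) positions 1, 2, 4 and 8.
    Returns the corrected 8 data bits as a string."""
    bits = [int(b) for b in codeword]              # bits[0] is position 1
    # Compute the syndrome: parity bit p covers every position whose
    # binary representation has the corresponding bit set.
    syndrome = 0
    for p in (1, 2, 4, 8):
        parity = 0
        for pos in range(1, 13):
            if pos & p:
                parity ^= bits[pos - 1]
        if parity:
            syndrome += p
    # A non-zero syndrome is the 1-based position of the single-bit error.
    if syndrome:
        bits[syndrome - 1] ^= 1
    # Drop the parity positions; what remains is the original data word.
    return ''.join(str(bits[pos - 1]) for pos in range(1, 13)
                   if pos not in (1, 2, 4, 8))

print(hamming_12_8_decode("011001101100"))         # prints 00111100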

Data Representation in LC-3

I was doing my exam prep and I came across a problem that I've been having issues with, mainly because of the lack of info provided. The question is:
b. What integer does the 16-bit word F751 represent in the LC-3?
So do we convert from base 16 to base 10 or to base 2? I'm not really sure how to do this problem.
Take F751 and convert it to binary:
1111 0111 0101 0001
The most significant bit is 1, so we know the number is negative; take the two's complement:
0000 1000 1010 1111
Convert that to decimal (2223) and the answer is -2223.
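As a quick sanity check, here is a small Python sketch of the same interpretation (assuming the word is available as the unsigned value 0xF751):

word = 0xF751
# Interpret the 16-bit pattern as two's complement: if the sign bit
# (bit 15) is set, the value is the unsigned pattern minus 2**16.
value = word - 0x10000 if word & 0x8000 else word
print(value)                                       # -2223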
The high digit is greater than or equal to 8, so the number is negative.
Take the complement to F (fifteen) of each digit of F751:
F gives 0
7 gives 8
5 gives A
1 gives E
08AE is the one's complement
08AF is the two's complement, which is 2223 in decimal, so the value is -2223.
This avoids having to convert to binary.
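Here is a small Python sketch of that digit-by-digit method (the helper name signed_from_hex16 is made up for illustration):

HEX_DIGITS = "0123456789ABCDEF"

def signed_from_hex16(s):
    # High digit below 8 means the sign bit is clear: the value is non-negative.
    if int(s[0], 16) < 8:
        return int(s, 16)
    # Otherwise complement each digit to F (the one's complement), add 1,
    # and negate the result.
    ones = ''.join(HEX_DIGITS[15 - int(d, 16)] for d in s.upper())
    return -(int(ones, 16) + 1)

print(signed_from_hex16("F751"))                   # -2223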

How many bits will be needed to multiply two 129-word numbers if the machine has 64-bit words?

So, I was studying and came across this algorithms question:
So, the machine uses 64 bits for words. We can multiply two n-word numbers with a certain complexity. If n is 129, how many bits is that?
I'm a bit confused about how to do this. If a word is 64 bits, then I thought 129 * 64 would be the answer, but that seems like a very high number of bits. Can anyone explain how to approach this problem?
Multiplying an N-bit number by an M-bit number yields an (N+M)-bit number. So multiplying a number of 129 words (8256 bits) by another yields a result of 16512 bits, or 258 words. Yes, that's a lot of bits, but such multiplications appear in cryptography, for example.
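You can sanity-check the arithmetic with Python's arbitrary-precision integers (a small sketch, assuming 64-bit words):

WORD_BITS = 64
n_words = 129
n_bits = n_words * WORD_BITS                       # 8256 bits per operand
# The largest 8256-bit operands produce the largest possible product.
a = b = (1 << n_bits) - 1
print(n_bits, (a * b).bit_length())                # 8256 16512  (= 258 words)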

Why does the double-dabble algorithm work?

I am trying to understand why the double-dabble algorithm works, but I am not getting it.
There are a lot of great descriptions of the steps and the purpose of the algorithm, like
http://www.classiccmp.org/cpmarchives/cpm/mirrors/cbfalconer.home.att.net/download/dubldabl.txt or
http://en.wikipedia.org/wiki/Double_dabble
There are also some attempts of explanation. The best one I found is this one:
http://www.minecraftforum.net/forums/minecraft-discussion/redstone-discussion-and/340153-why-does-the-double-dabble-algorithm-work#c6
But I still feel like I am missing the connecting parts. Here is what I get:
I get that you can convert a binary number to decimal by reading it from left to right: start with a decimal value of 0, iterate over the digits of the binary number, add 1 to the decimal number for every 1 you reach, and multiply the decimal number by 2 when you move on to the next digit (as explained in the last link).
But why does this lead to the double-dabble algorithm? We don't want to convert to decimal, we want to convert to BCD. Why do we keep the multiplying (shifting), but drop the adding of 1? Where is the connection?
I get why we have to add 3 when a number in a BCD field exceeds 4 before shifting: if you want the BCD number to be multiplied by 2 when you shift it, you have to do some fixups. If you shift the BCD number 0000 1000 (8) and you want to get its double, 0001 0110 (16), you have to add 3 (half of 6) before shifting, because by just shifting you end up with 0001 0000 (10) (you're missing 6).
But why do we want this, and where does the adding of 1 happen?
I guess I am just missing a little part.
Converting N to its decimal representation involves repeatedly determining the remainder R after dividing by 10. Instead of dividing by 10 you might also divide by 2 (which is just a shift) followed by a division by 5. Hence the 5. Long division means trying to subtract 5 and, if that succeeds, actually doing the subtraction while keeping track of that success by setting a bit in the quotient Q. Adding 3 is the same as subtracting 5 while setting the bit that will subsequently be shifted into Q. Hence the 3.
Binary 1 0000 (16) reads as 10 in BCD.
We want 16 in BCD, so we add 6.
But why do we add 3 and not 6?
Because the adding is done before the shift, so everything is divided by two; that's why we add 3 when a nibble is 5 or more!
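A tiny Python check that adding 3 before the shift gives the same result as adding 6 after it, for any nibble of 5 or more:

for n in range(5, 10):                             # nibbles that need the fixup
    assert (n + 3) << 1 == (n << 1) + 6
    print(n, format((n + 3) << 1, '08b'))          # e.g. 8 -> 00010110, BCD for 16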
I think I got it while writing this question:
Suppose you want to convert 1111 1111 from binary into BCD.
We use the method for converting a binary number to a decimal number explained in the question, but we alter it a little bit.
We don't start with a decimal number of 0 but with a BCD number of 0000 0000 (0).
BCD binary
0000 0000 1111 1111
First we have to add 1 to the BCD number. This can be done by a simple shift:
0000 0001 1111 1110
Now we move on and want to multiply the BCD number by 2. In the next step we want to add the current binary digit to the BCD number. Both can be accomplished in one step by (again) shifting:
0000 0011 1111 1100
This works over and over again. The only situation in which it doesn't work is when a block of the BCD number exceeds 4. In that case you have to do the fixup explained in the question.
After iterating through the binary number, you get the BCD representation of the number on the left side \o/
The whole idea is to use the shifting explained in your link, but then convert the number on the left into BCD. At each stage, you are getting closer to the actual binary number on the left, but making sure that the number remains in BCD rather than binary.
When you shift in a '1', you are essentially adding it.
Take a look at the link below to get the gist of the 'add 3' argument:
https://olduino.files.wordpress.com/2015/03/why-does-double-dabble-work.pdf
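Putting the pieces together, here is a short Python sketch of the full double-dabble conversion (the function name and the 8-bit default are just illustrative):

def double_dabble(value, n_bits=8):
    """Convert an n_bits unsigned integer to BCD by shift-and-add-3."""
    n_digits = len(str((1 << n_bits) - 1))         # BCD nibbles needed
    bcd = 0
    for i in range(n_bits - 1, -1, -1):            # walk the input bits, MSB first
        # Fixup: add 3 to every BCD nibble that is 5 or more, so the
        # upcoming shift doubles it correctly.
        for d in range(n_digits):
            if ((bcd >> (4 * d)) & 0xF) >= 5:
                bcd += 3 << (4 * d)
        # Shift left and bring in the next bit of the binary input.
        bcd = (bcd << 1) | ((value >> i) & 1)
    # Format the BCD register as one nibble per decimal digit.
    return ' '.join(format((bcd >> (4 * d)) & 0xF, '04b')
                    for d in range(n_digits - 1, -1, -1))

print(double_dabble(0b11111111))                   # 0010 0101 0101  (i.e. 255)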

A computer represents information in groups of 64 bits. How many different integers can be represented in BCD code?

This is from my Interview-MCQ module:
A computer represents information in groups of 64 bits. How many
different integers can be represented in BCD code?
The given answer is 10^16, but no explanation is provided, so I was just wondering if somebody could help me understand the answer.
BCD is binary coded decimal. In BCD, every 4 bits is used to represent a single digit from 0 to 9. So if you have 64 bits, that gives you 64/4 = 16 decimal digits, which means you can have 10^16 different integers.
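In other words (a trivial Python check):

bits = 64
digits = bits // 4                                 # each BCD digit uses 4 bits
print(digits, 10 ** digits)                        # 16 digits, 10^16 representable integers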
