A computer represents information in groups of 64 bits. How many different integers can be represented in BCD code?

This is from my Interview-MCQ module:
A computer represents information in groups of 64 bits. How many
different integers can be represented in BCD code?
The given answer is 10^16; however, no explanation is provided. I was just wondering if somebody could help me understand the answer.

BCD is binary-coded decimal. In BCD, every 4 bits is used to represent a single digit from 0 to 9. So if you have 64 bits, that gives you 64/4 = 16 decimal digits, which means you can have 10^16 different integers (0 through 10^16 - 1).
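For illustration, here is a minimal C sketch of the packing involved (my own example, not from the question): each decimal digit occupies one 4-bit nibble, and a 64-bit word holds exactly 16 nibbles, hence 10^16 representable integers.

    #include <stdio.h>
    #include <stdint.h>

    /* Pack a decimal number into BCD: one digit per 4-bit nibble.
     * A 64-bit word holds 16 nibbles, i.e. values 0 .. 10^16 - 1. */
    uint64_t to_bcd(uint64_t decimal)
    {
        uint64_t bcd = 0;
        for (int shift = 0; decimal != 0; shift += 4) {
            bcd |= (decimal % 10) << shift;  /* one digit -> one nibble */
            decimal /= 10;
        }
        return bcd;
    }

    int main(void)
    {
        /* A BCD value printed as hex reads as the original decimal. */
        printf("%llx\n", (unsigned long long)to_bcd(1234)); /* "1234" */
        return 0;
    }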

Related

does each bit correspond to one number in bit representation?

When something is assigned to a variable, such as a 32-bit integer, does that mean the integer's binary form will always have 32 digits? Or does one digit not correspond to one bit, so that when we say we store data in "a box of x bits", the number used to represent the stored value does not have the same number of digits as the number of bits?
Is it the case that for all things, such as all primitive and reference types in Java, the number of bits into which something is stored corresponds to the number of digits used to represent it?
Thanks.
I have looked at examples of numbers representing objects to see whether they were the same length as the number of bits that fit into their memory box.

Algorithm for Converting large integer to string without modulo base

I have looked for a while to find an algorithm which converts integers to strings. My requirement is to do this manually, as I am using my own large-number type. I have + - * / (with remainder) defined, but need to find a way to print a single number from a double int (a high and a low part; if an int is 64 bits, that's 128 bits total).
I have seen some answers such as
Convert integer to string without access to libraries
Converting a big integer to decimal string
but was wondering if a faster algorithm is possible. I am open to working with bits directly (e.g. a base-2 to base-10-string conversion; I could not find such an algorithm, however), but I was mainly hoping to avoid repeated division by 10 for numbers possibly as large as 2^128.
You can use divide-and-conquer in such a way that the parts can be converted to string using your standard library (which will typically be quite efficient at that job).
So instead of dividing by 10 in every iteration, you can e.g. divide by 10**15, and have your library convert the chunks to 15-digit strings. After at most three steps, you're finished.
Of course you have to do some string manipulation for the zero-padding. But maybe your library can help you here as well, if you use something like a %015d zero-padding format for all the lower parts, and a non-padded %d format for the highest non-zero part.
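As a concrete sketch of that chunked approach (assuming a compiler with the GCC/Clang unsigned __int128 extension standing in for your custom type):

    #include <stdio.h>
    #include <inttypes.h>

    /* Convert a 128-bit value to decimal by peeling off base-10^15
     * chunks and letting sprintf format each chunk. Since 2^128 < 10^39,
     * at most 3 chunks (and thus 3 divisions) are needed. */
    void u128_to_dec(unsigned __int128 v, char *out)
    {
        const unsigned __int128 CHUNK = 1000000000000000ULL; /* 10^15 */
        uint64_t part[3];
        int n = 0;

        do {
            part[n++] = (uint64_t)(v % CHUNK);
            v /= CHUNK;
        } while (v != 0);

        out += sprintf(out, "%" PRIu64, part[--n]);      /* top: no padding */
        while (n > 0)
            out += sprintf(out, "%015" PRIu64, part[--n]); /* zero-padded   */
    }

    int main(void)
    {
        char buf[48];
        u128_to_dec((unsigned __int128)0xFFFFFFFFFFFFFFFFULL * 12345, buf);
        puts(buf);
        return 0;
    }

With your own type, the two library operations become your / and % by 10^15; everything else is plain string handling.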
You may try your luck with a contrived method, as follows.
Numbers can be represented using the binary-coded decimal (BCD) representation. In this representation, every decimal digit is stored in 4 bits, and when performing an addition, if the sum of two digits exceeds 9, you add 6 and carry to the left.
If you have pre-stored the BCD representations of all powers of 2, then the conversion takes at most 128 additions. You can save a little using the fact that the low powers don't need full-length (39-digit) additions.
But this sounds like a lot of operations. You can "parallelize" them by packing several BCD digits into a single integer: a 32-bit integer addition is equivalent to 8 simultaneous BCD digit additions. But we have a problem with the carries. To work around it, we can store the digits in 5 bits instead of 4, and the carries will appear in the fifth bit. Then we can obtain the carries by masking, add them to the next digits (shift left 5), and adjust the digit sums (multiply by 10 and subtract).
     2   3   4   5   6
 +   7   6   9   2   1
 =   9   9  13   7   7

Carries:
     0   0   1   0   0
Adjustments:
     9   9  13   7   7
 -          10
 =   9   9   3   7   7
Actually, you have to handle possible cascaded carries, so the sum will involve the two addends and carries in, and generate a sum and carries out.
32-bit operations allow you to process 6 digits at a time (7 rounds for 39 digits), and 64-bit operations 12 digits (4 rounds for 39 digits).
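Here is a sketch of such a "parallel" digit addition in C, using six 5-bit digit slots in a 32-bit word (the slot layout and constants are my own choices; a per-digit bias of +6 is used so that any digit sum of 10 or more shows up in the fifth bit of its slot):

    #include <stdio.h>
    #include <stdint.h>

    /* Add two packed values of six decimal digits, each digit in a
     * 5-bit slot (bits 0-29). The +6 bias makes any digit sum >= 10
     * overflow into bit 4 of its slot, where it can be masked out. */
    uint32_t bcd_add(uint32_t a, uint32_t b)
    {
        const uint32_t ONES  = 0x02108421u;  /* bit 0 of every slot */
        const uint32_t FIFTH = ONES << 4;    /* bit 4 of every slot */

        while (b != 0) {                     /* loop handles cascaded carries */
            uint32_t t = a + b + 6 * ONES;   /* biased digit-wise sums     */
            uint32_t c = t & FIFTH;          /* per-digit carry flags      */
            t -= 6 * ((c ^ FIFTH) >> 4);     /* un-bias carry-less digits  */
            a = t ^ c;                       /* digits, carry bits cleared */
            b = c << 1;                      /* carries: +1 to next digit  */
        }
        return a;
    }

    /* Pack six digits, least significant first, into one word. */
    uint32_t pack(const int d[6])
    {
        uint32_t v = 0;
        for (int i = 5; i >= 0; --i) v = (v << 5) | (uint32_t)d[i];
        return v;
    }

    int main(void)
    {
        int x[6] = {6,5,4,3,2,0}, y[6] = {1,2,9,6,7,0}; /* 23456, 76921 */
        uint32_t s = bcd_add(pack(x), pack(y));
        for (int i = 5; i >= 0; --i)
            printf("%u", (s >> (5 * i)) & 31u);  /* prints 100377 */
        printf("\n");
        return 0;
    }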
If you want to just encode your numbers as a string
use hex numbers; that is fast, as you can cast all the digits just by bit operations... Base64 encoding is also doable by bit operations plus a translation table. Both representations can be done on small-int arithmetic only, in O(n) where n is the count of printed digits.
If you need base 10
then print a hex string and convert it to decimal on strings, like this:
str_hex2dec
This is much slower than #1 but still doable on small-int arithmetic... You can also do this in reverse (input a number from a string) by using dec2hex...
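A rough idea of what such a string-based conversion looks like (my own sketch, not the linked code): keep the decimal result as an array of digits and fold in one hex digit at a time, using small-int arithmetic only.

    #include <stdio.h>

    /* Hex string -> decimal string: for each hex digit, multiply the
     * decimal digit array by 16 and add the digit, propagating carries. */
    void hex2dec(const char *hex, char *dec)
    {
        int d[64] = {0};        /* decimal digits, least significant first */
        int n = 1;

        for (; *hex; ++hex) {
            int carry = (*hex <= '9') ? *hex - '0' : (*hex | 32) - 'a' + 10;
            for (int i = 0; i < n; ++i) {
                int v = d[i] * 16 + carry;    /* d = d*16 + hex digit */
                d[i] = v % 10;
                carry = v / 10;
            }
            while (carry) { d[n++] = carry % 10; carry /= 10; }
        }
        for (int i = 0; i < n; ++i)   /* most significant digit first */
            dec[i] = (char)('0' + d[n - 1 - i]);
        dec[n] = '\0';
    }

    int main(void)
    {
        char buf[80];
        hex2dec("ffffffffffffffff", buf);  /* prints 18446744073709551615 */
        puts(buf);
        return 0;
    }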
For bigint libs there are also other ways of easing the string/integer conversions:
BCD
binary-coded decimal ... the number printed as hex is the decadic number, so each digit takes 4 bits. This wastes some memory, but many CPUs have BCD support and can do operations on such integers natively.
Base 10^n
Sometimes base 10^n is used instead of 2^m, where
10^n <= 2^m
Here m is the bit width of your atomic integer and n is the number of decadic digits that fit inside it.
For example, if your atomic unsigned integer is 16 bits, it can hold up to 65536 values in base 2. If you use base 10000 instead, you can print each atom as a decadic number, zero-padded from the left, and simply stack all such prints together.
This also wastes some memory, but usually not too much (if the bit width is reasonably selected), and you can use standard instructions on the integers. Only the carry propagation will change a bit...
For example, for 32-bit words:
2^32 = 4294967296 >= 1000000000 = 10^9
so we waste log2(4.2949...) = ~2.1 bits per 32 bits. This is much better than BCD's log2(16/10)*(32/4) = ~5.42 bits, and it usually gets even better with higher bit widths.
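A minimal sketch of the base-10^n idea in C, with 32-bit atoms holding base-10^9 "digits" (least significant first; the layout is my own assumption): printing is just zero-padded chunks, with no division at print time.

    #include <stdio.h>

    /* Print a big number stored as base-10^9 limbs in 32-bit words. */
    void print_base1e9(const unsigned limb[], int n)
    {
        printf("%u", limb[n - 1]);       /* top limb: no padding       */
        for (int i = n - 2; i >= 0; --i)
            printf("%09u", limb[i]);     /* lower limbs: 9 digits each */
        printf("\n");
    }

    int main(void)
    {
        /* 1234567890123456789 = 1*10^18 + 234567890*10^9 + 123456789 */
        unsigned x[3] = { 123456789u, 234567890u, 1u };
        print_base1e9(x, 3);
        return 0;
    }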

How to compute hamming code for 10 bits?

I have seen examples of Hamming code detection and correction with 8 bits or 12 bits. Suppose I had the bit string 1101 0110 11, which contains 10 bits.
Do I need to add two additional bits to that bit string to complete the nibble? If so, do I add 0s or 1s?
I have looked for other examples, but cannot seem to find any with partial nibbles from which to determine the procedure. Thank you.

How many bits will be needed to multiply two 129-word numbers if the machine has 64-bit words?

So, I was studying and came across this algorithms question:
So, the machine uses 64 bits for words. We can multiply two n-word numbers with a certain complexity. If n is 129, how many bits is that?
I'm a bit confused about how to do this. If a word is 64 bits, then I thought 129 * 64 would be the answer, but that seems like a very high number of bits. Can anyone explain how to approach this problem?
Multiplying an N-bit number by an M-bit number yields an (N+M)-bit number. So multiplying a number of 129 words (8256 bits) by another yields a result of 16512 bits, or 258 words. Yes, that's a lot of bits, but such multiplications appear in cryptography, for example.
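The same fact in miniature (a C example of my own; two 32-bit operands need a 64-bit result):

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        uint32_t a = UINT32_MAX, b = UINT32_MAX;  /* two 32-bit numbers */
        uint64_t p = (uint64_t)a * b;             /* product: 64 bits   */
        printf("%" PRIu64 "\n", p);               /* 18446744065119617025, just under 2^64 */
        return 0;
    }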

BitXor decimal equivalents

I am trying to understand an algorithm that Gray-codes numbers in a QAM system using XOR. Can anyone explain what happens in the decimal world when you bitxor(a,b)? Is there a decimal implementation or explanation for this?
XOR works in base 2, and there is no direct relationship to base-10 (decimal) numbers.
However, I don't see a connection between Gray-code numbers and base 10 (decimal). Is it Gray or BCD (binary-coded decimal)?
Gray codes are ways to represent integers as binary numbers such that two consecutive integers differ in only one bit. Hence, when a and b are the Gray codes of two consecutive integers, bitxor(a,b) has only one bit set.
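For the standard binary-reflected Gray code, the encoding is just g = n ^ (n >> 1), and the one-bit-difference property can be checked directly:

    #include <stdio.h>

    /* Binary-reflected Gray code of n. */
    unsigned gray(unsigned n) { return n ^ (n >> 1); }

    int main(void)
    {
        for (unsigned n = 0; n < 8; ++n)
            printf("n=%u gray=%u gray(n)^gray(n+1)=%u\n",
                   n, gray(n), gray(n) ^ gray(n + 1));
        /* The last column is always a power of two: exactly one bit set. */
        return 0;
    }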
