This is an interview question:
What is the minimum representation in bits of two positions on an 8x8 chessboard?
I found the answer http://www.careercup.com/question?id=4981467352399872
But I am unable to understand what the author is trying to convey when she says:
You can represent 2^n values with n bits. However, you can represent 2^n + 2^(n-1) + 2^(n-2) + ... + 1 = 2^(n+1) - 1 values with at most n bits. So you can represent 2^11 - 1 = 2047 different values using just 10 bits.
I am not seeking an explanation of what the author is suggesting in the answer; I am more interested in solving the problem itself. As far as I can think, since there are 64C2 = 2016 ways to place two pieces on an 8x8 board, the minimum number of bits required should be 11. But someone suggested that one can use just 10 bits to represent the board. How?
The author is saying that you can represent the positions using 5-, 6-, 7-, 8-, 9- and 10-bit values.
In binary, 2016 is 11111100000 (1024 + 512 + 256 + 128 + 64 + 32).
5 bits (00000 - 11111) represent 32 positions
6 bits (000000 - 111111) represent 64 positions
7 bits (0000000 - 1111111) represent 128 positions
8 bits (00000000 - 11111111) represent 256 positions
9 bits (000000000 - 111111111) represent 512 positions
10 bits (0000000000 - 1111111111) represent 1024 positions
A total of 2016 positions.
This could be implemented in languages with bit collections, e.g. C++ bitset, which has a size function to get the length.
Here's an example for a 2x2 board which will hopefully explain this better.
For a 2x2 board, there are 4C2 (6) positions
.x x. .. xx .x x.
.x x. xx .. x. .x
so you could use 3 bits: 000, 001, 010, 011, 100 and 101
But 6 is binary 110 (4+2) so you could use 1 bit (0-1) for 2 of the positions and 2 bits (00, 01, 10, 11) for the remaining 4. So the positions are:
0, 1, 00, 01, 10, 11.
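Scaling the same idea back up to the 8x8 board, here is a small C++ sketch (the encode/decode helpers and the 5-to-10-bit split are my own illustration, not code from the linked answer) that numbers the 2016 pairs and assigns each index a code of between 5 and 10 bits:

```cpp
#include <cstdint>
#include <iostream>

// One value plus the number of bits it occupies (5..10).
struct Code { uint32_t value; int bits; };

// Map a pair index in [0, 2016) to a variable-length code: the first 32
// indices get 5-bit codes, the next 64 get 6-bit codes, and so on.
Code encode(int index) {
    int len = 5;
    int block = 32;                 // 2^5 codes of length 5, 2^6 of length 6, ...
    while (index >= block) {
        index -= block;
        block <<= 1;
        ++len;
    }
    return { static_cast<uint32_t>(index), len };
}

// Inverse mapping: add back the sizes of all shorter code blocks.
int decode(Code c) {
    int offset = 0;
    for (int len = 5; len < c.bits; ++len)
        offset += 1 << len;
    return offset + static_cast<int>(c.value);
}

int main() {
    // Every pair index round-trips and no code is longer than 10 bits.
    for (int i = 0; i < 2016; ++i) {
        Code c = encode(i);
        if (c.bits > 10 || decode(c) != i) { std::cout << "mismatch\n"; return 1; }
    }
    std::cout << "all 2016 pair indices fit in 5 to 10 bits\n";
}
```

Note that the receiver must know the code length separately (or from its position in a stream); that is the same assumption the 2x2 example above makes.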
To answer the question and obtain an integer solution, you must evaluate the following expression:
bits = ceil(log2(combination(64,2)));
bits = ceil(log2(64!/(62!*2!)));
bits = ceil(log2(64*63/2));
bits = ceil(log2(32*63));
bits = ceil(log2(32)+log2(63));
bits = ceil(5+log2(63));
bits = ceil(5+5.97728);
bits = 11;
Deriving the equation requires a working knowledge of combinatorics.
combination(64,2) represents the number of ways to choose 2 of 64 possible unique spaces.
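As a quick sanity check, the same evaluation transcribed into C++ (nothing beyond <cmath> is needed):

```cpp
#include <cmath>
#include <iostream>

int main() {
    // combination(64, 2) = 64 * 63 / 2 = 2016 ways to choose two squares
    double pairs = 64.0 * 63.0 / 2.0;
    int bits = static_cast<int>(std::ceil(std::log2(pairs)));
    std::cout << pairs << " pairs need " << bits << " bits\n";   // 2016 pairs need 11 bits
}
```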
How can I calculate a floating point multiplicand in Verilog? So far, I usually shift left (scaling by 1024), so the floating point number becomes an integer. Then I do some operations, and shift right (dividing by 1024 again) to obtain a fraction.
For example 0.3545 = 2^-2 + 2^-4 + ...
I have a question about another way, shown below. I don't know where the minus sign (-) comes from:
0.46194 = 2^-1 - 2^-5 - 2^-7 + 2^-10.
I just saw this from someone, but the way I know it, it would be represented like this:
0.46194 = 2^-2 + 2^-3 + 2^-4 + 2^-6 + 2^-7 + 2^-10 + ...
I don't understand how one knows that the minus should be used. How do we know when the minus is needed? Also, how can I apply this to Verilog RTL?
UPDATE: I understand the concept of using the minus in the operation. But is there any other equation or methodology to reduce the expression when multiplying by powers of 2?
UPDATE: How can we use this method in Verilog? For example, I have learned 0.46194 = 2^-1 - 2^-5 - 2^-7 + 2^-10; then this is written like this in Verilog: 0.011101101 = 'hED = 'd237. So the point of the question is: how can we apply it in Verilog?
UPDATE: Would you please check this one? I get a slightly different result.
0.46194 = 0.011101101. I just tried it like this:
0.011101101
0.100T10T01
= 2^-1 - 2^-4 + 2^-5 - 2^-7 + 2^-9 = 0.462890625
Something is different. What did I do wrong?
Multiplication of a variable by a constant is often implemented by adding the variable to shifted versions of itself. This is much cheaper to put on an FPGA than a multiplier circuit accepting two variables.
You can get further savings when there's a sequence of 1-bits in the constant, by using subtraction as well. (A subtraction circuit costs about the same as an addition circuit.)
Consider the number 30 = 11110. It's equal to 16 + 8 + 4 + 2, but it's also equal to 32 - 2.
In general, a run of 1-bits in the multiplicand, i.e. the sum of several successive powers of two, can be formed by adding the power of two just above the most significant bit of the run and subtracting the power of two at its least significant bit. Hence, instead of 16x + ... + 2x, use 32x - 2x.
It doesn't matter if the run of 1-bits is part of a fraction or an integer. You're just applying the identity 2^a = 1 + (2^0 + 2^1 + ... + 2^(a-1)), in other words 2^0 + 2^1 + ... + 2^a = 2^(a+1) - 1.
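Here is a small C++ check of the 30 = 32 - 2 example (plain software rather than RTL, just to make the identity the hardware exploits concrete):

```cpp
#include <cstdint>
#include <iostream>

int main() {
    // 30 = 11110 in binary: a run of 1-bits, so 30 * x can be built from
    // one shift and one subtraction instead of four shifts and three additions.
    for (uint32_t x = 0; x < 1000; ++x) {
        uint32_t adds = (x << 4) + (x << 3) + (x << 2) + (x << 1);  // 16x + 8x + 4x + 2x
        uint32_t sub  = (x << 5) - (x << 1);                        // 32x - 2x
        if (adds != sub || sub != 30 * x) { std::cout << "mismatch\n"; return 1; }
    }
    std::cout << "32x - 2x == 30x for all tested x\n";
}
```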
A 4-bit base-2 number can have these values:
Base 2: unsigned 4-bit integer
2^3  2^2  2^1  2^0
  8    4    2    1
If we have 0111, it represents 7. If we were to multiply by this number using a shift-and-add architecture, it would take 3 clock cycles (3 shifts and adds).
An optimisation to this is called CSD (Canonical Signed Digit). It allows minus one to be present in the 'binary' number. We shall represent -1 as a one with a bar over the top, written here as T since that looks like a one with a bar.
100T represents 8 - 1, which is the same as 0111. It can be observed that a long run of 1s can be replaced by setting the 0 that ends the run to 1, the first (least significant) 1 of the run to -1 (T), and the 1s in between to 0.
An example of conversion:
00111101111
01000T1000T
But if done in two passes we would get:
00111101111
0011111000T
010000T000T
We have taken a number that would take 8 clock cycles or 8 blocks of logic to compute and turned it into 3.
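If it helps to see the recoding as an algorithm, here is a C++ sketch (software, not Verilog) using the standard non-adjacent-form recurrence; it produces the same three-term result as the two-pass conversion above, written with 'T' for -1:

```cpp
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

// Recode an unsigned integer into signed digits {-1, 0, 1}: look at the two
// lowest bits, emit +1 or -1 so the value becomes divisible by 4, and shift.
std::string toCSD(uint32_t n) {
    std::vector<int> digits;                        // least significant digit first
    while (n != 0) {
        int d = 0;
        if (n & 1) {
            if ((n & 3) == 1) { d = 1;  n -= 1; }   // ...01 -> emit +1
            else              { d = -1; n += 1; }   // ...11 -> emit -1, carry up
        }
        digits.push_back(d);
        n >>= 1;
    }
    std::string s;
    for (auto it = digits.rbegin(); it != digits.rend(); ++it)
        s += (*it == 1) ? '1' : (*it == -1) ? 'T' : '0';
    return s.empty() ? "0" : s;
}

int main() {
    // 00111101111 (binary) = 495: eight 1-bits, but only three nonzero CSD digits.
    std::cout << toCSD(0b00111101111) << "\n";      // 10000T000T = 512 - 16 - 1 = 495
}
```

The printed 10000T000T is the 010000T000T shown above, without the leading zero.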
Related questions on fixed-point values in Verilog: x precision binary fixed point representation? and verilog-floating-points-multiplication.
To cover the follow-up question about your CSD conversion: I will treat the numbers as pure integers to simplify things; this is the same as multiplying the values by 2^9 (9 fractional bits).
256 128  64  32  16   8   4   2   1
  0   1   1   1   0   1   1   0   1
128 + 64 + 32 + 8 + 4 + 1 => 237
Now with your CSD conversion:
256 128  64  32  16   8   4   2   1
  1   0   0   T   1   0   T   0   1
256 - 32 + 16 - 4 + 1 => 237
You can see your conversion was correct. I get 237 * 2^-9 as 0.462890625, which matches your answer when converted back to fractional. The 0.46194 that you started with must have been a rounded version, or it got truncated when quantised to 9 fractional bits. This error is known as quantisation error. The most important thing here, though, is that you got the CSD conversion correct.
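To make that check reproducible, here is a tiny C++ helper (my own, purely for verification) that evaluates a signed-digit string, with 'T' standing for -1:

```cpp
#include <iomanip>
#include <iostream>
#include <string>

// Evaluate a string of signed binary digits ('1', '0', 'T' = -1) as an integer.
long long sdValue(const std::string& s) {
    long long v = 0;
    for (char c : s)
        v = 2 * v + (c == '1' ? 1 : c == 'T' ? -1 : 0);
    return v;
}

int main() {
    std::cout << sdValue("011101101") << "\n";                          // plain binary: 237
    std::cout << sdValue("100T10T01") << "\n";                          // CSD form:     237
    std::cout << std::setprecision(10) << 237.0 / 512.0 << "\n";        // 237 * 2^-9 = 0.462890625
}
```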
I came across this interesting question today. (Note that this is not for my homework or interview, etc.)
Given a decimal number represented as a string, we want to compute the number of '1' bits for that large number in binary format. Here the string can have thousands of characters and cannot be represented by one int or long long variable.
For example, countBits("10") = 2 as '10' in decimal can be represented as '1010' in binary format. Similarly, we have countBits("12") = 2, countBits("7") = 3
What is an efficient algorithm for this? One possible solution is to convert the decimal string to another string in binary format and count the '1's. Can we do better?
When converting from a decimal representation to an integer, the nth digit from the end of the string tells you how many copies of 10^n (ten to the power of n) are added to the total integer value. If you then want to represent that integer in binary, you have to raise 10 (which is 1010 in binary) to that power and multiply it by the digit's value.
Because one of the factors of the base you are translating from, 5, is relatively prime to 2, the powers of 10 have increasingly long representations in base 2: 1, 1010, 1100100, 1111101000.
Note that these powers have trailing zeros (10 = 2 × 5, and 2 is not relatively prime to the base we are translating into), so they only affect 1, 3, 5 and 7 bits of the answer instead of all 1, 4, 7 and 10 bits. But the number of bits they affect still grows as O(N), where N is the length of the input, so calculating the affected bits takes O(N^2) operations.
If the base you were translating from did not have factors that were relatively prime to the base you are translating to - say, translating base 16 to base 2, or base 9 to base 3, and counting non-zero digits - then there would be an O(N) algorithm, because the count of non-zero digits in the target base would equal the sum of the counts for each input digit translated individually. Since that is not the case here, you are stuck with an O(N^2) algorithm: translate the decimal representation into binary and then count the bits in the binary representation.
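For completeness, here is what the straightforward O(N^2) route looks like in C++ (countBits is my own sketch): divide the decimal string by 2 with long division and count the 1 remainders, which are the binary digits from least significant upward.

```cpp
#include <iostream>
#include <string>

// Count the 1-bits of a non-negative decimal number given as a string.
int countBits(std::string dec) {
    int ones = 0;
    while (!(dec.size() == 1 && dec[0] == '0')) {
        std::string quotient;
        int carry = 0;
        for (char c : dec) {                     // one pass of long division by 2
            int d = carry * 10 + (c - '0');
            quotient += static_cast<char>('0' + d / 2);
            carry = d % 2;
        }
        ones += carry;                           // the remainder is the next binary digit
        size_t pos = quotient.find_first_not_of('0');
        dec = (pos == std::string::npos) ? "0" : quotient.substr(pos);
    }
    return ones;
}

int main() {
    std::cout << countBits("10") << " " << countBits("12") << " "
              << countBits("7") << "\n";         // 2 2 3
}
```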
You convert it to binary and use the Hamming weight algorithm.
How does it work? Suppose you have the number 8, which is 00001000.
The algorithm takes chunks of 2 bits, so it has 00 00 10 00.
Now it sums the two bits of each pair (by masking with 10101010 and 01010101, shifting, and adding), which results in 00 00 01 00.
Now it does the same for each group of 4 bits (with the masks 11001100 and 00110011), adding the two halves of each nibble, which gives 0000 0001.
The last stage adds the two nibbles, 0 + 1 = 1, and that's the final result.
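For reference, here is a 32-bit C++ version of those stages (the usual SWAR masks; this is a sketch of the technique, not the only formulation):

```cpp
#include <cstdint>
#include <iostream>

// Hamming weight of a 32-bit word: add neighbouring 2-bit fields, then
// 4-bit fields, and so on, until one count remains.
uint32_t popcount32(uint32_t x) {
    x = (x & 0x55555555u) + ((x >> 1) & 0x55555555u);   // pairs of bits
    x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u);   // nibbles
    x = (x & 0x0F0F0F0Fu) + ((x >> 4) & 0x0F0F0F0Fu);   // bytes
    x = (x & 0x00FF00FFu) + ((x >> 8) & 0x00FF00FFu);   // 16-bit halves
    return (x & 0x0000FFFFu) + (x >> 16);               // whole word
}

int main() {
    std::cout << popcount32(8) << " "                   // 00001000 -> 1
              << popcount32(10) << " "                  // 1010     -> 2
              << popcount32(0xFFFFFFFFu) << "\n";       // all ones -> 32
}
```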
My book (Artificial Intelligence: A Modern Approach) says that genetic algorithms begin with a set of k randomly generated states, called the population. Each state is represented as a string over a finite alphabet - most commonly, a string of 0s and 1s. For example, an 8-queens state must specify the positions of 8 queens, each in a column of 8 squares, and so requires 8 * log2(8) = 24 bits. Alternatively, the state could be represented as 8 digits, each in the range from 1 to 8.
[ http://en.wikipedia.org/wiki/Eight_queens_puzzle ]
I don't understand the expression 8 * log2(8) = 24 bits. Why log2(8)? And what are these 24 bits supposed to be for?
If we take the first example on the Wikipedia page, the solution can be encoded as [2,4,6,8,3,1,7,5]: the first digit gives the row number for the queen in column A, the second for the queen in column B, and so on. Now, instead of starting the row numbering at 1, we will start at 0. The solution is then encoded as [1,3,5,7,2,0,6,4]. Any position can be encoded this way.
We only have digits between 0 and 7, so if we write them in binary, 3 bits (= log2(8)) are enough:
000 -> 0
001 -> 1
...
110 -> 6
111 -> 7
A position can be encoded using 8 times 3 digits, e.g. from [1,3,5,7,2,0,6,4] we get [001,011,101,111,010,000,110,100], or more briefly 001011101111010000110100: 24 bits.
Going the other way, the bitstring 000010001011100101111110 decodes as 000.010.001.011.100.101.111.110, then [0,2,1,3,4,5,7,6], and gives [1,3,2,4,5,6,8,7]: queen in column A is on row 1, queen in column B is on row 3, etc.
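If you want to see the packing spelled out, here is a small C++ sketch (std::bitset and the encodeState helper are just my illustration; any bit container would do) that reproduces the 24-bit string above:

```cpp
#include <bitset>
#include <iostream>
#include <vector>

// Pack eight 0-based row numbers (one per column) into 24 bits, 3 bits each,
// with column A occupying the leftmost 3 bits of the printed string.
std::bitset<24> encodeState(const std::vector<int>& rows) {
    std::bitset<24> bits;
    for (int col = 0; col < 8; ++col)
        for (int b = 0; b < 3; ++b)                      // b = 0 is the group's MSB
            bits[23 - (col * 3 + b)] = (rows[col] >> (2 - b)) & 1;
    return bits;
}

int main() {
    // [2,4,6,8,3,1,7,5] from the Wikipedia example, shifted to 0-based rows.
    std::vector<int> rows = {1, 3, 5, 7, 2, 0, 6, 4};
    std::cout << encodeState(rows) << "\n";              // 001011101111010000110100
}
```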
The number of bits needed to store the possible squares (8 possibilities, 0-7) is log2(8) = 3. Note that 111 in binary is 7 in decimal. You have to specify the square for 8 columns, so you need 3 bits, 8 times.
I am reading an algorithms book by S. Dasgupta. Following is a text snippet regarding the number of bits required for the nth Fibonacci number.
It is reasonable to treat addition as a single computer step if small numbers are being added, 32-bit numbers say. But the nth Fibonacci number is about 0.694n bits long, and this can far exceed 32 as n grows. Arithmetic operations on arbitrarily large numbers cannot possibly be performed in a single, constant-time step.
My question is: for example, for Fibonacci numbers F1 = 1, F2 = 1, F3 = 2, and so on, substituting n in the above formula, i.e. 0.694n, gives approximately 1 for F1 and approximately 2 bits for F2, but for F3 and onwards the formula seems to fail. I think I didn't properly understand what the author means here; can anyone please help me understand it?
Thanks
Well,
n          3     4     5     6     7     8
0.694n     2.08  2.78  3.47  4.16  4.86  5.55
F(n)       2     3     5     8     13    21
bits       2     2     3     4     4     5
log(F(n))  1     1.58  2.32  3     3.7   4.39
Bits required is the base-2 log rounded up (more precisely, floor(log2(F(n))) + 1), so this is close enough for me.
The value 0.694 comes from the fact that F(n) is the closest integer to φ^n/√5. So log2(F(n)) is approximately n * log2(φ) - log2(√5), and log2(φ) is about 0.694. As n gets bigger, the log2(√5) term and the rounding rapidly become insignificant.
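A quick C++ check of that relationship (just extending the table above; the loop and the cut-off at n = 80 are arbitrary choices of mine):

```cpp
#include <cstdint>
#include <iostream>

int main() {
    // Compare the actual bit length of F(n) with 0.694 * n for a few n.
    uint64_t a = 1, b = 1;                         // F(1), F(2)
    for (int n = 3; n <= 80; ++n) {
        uint64_t f = a + b;                        // F(n)
        a = b;
        b = f;
        if (n % 10 == 0) {
            int bits = 0;                          // bit length of F(n)
            for (uint64_t t = f; t != 0; t >>= 1) ++bits;
            std::cout << "n=" << n << "  bits=" << bits
                      << "  0.694*n=" << 0.694 * n << "\n";
        }
    }
}
```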
private static int nobFib(int n) // number of bits in Fib(n)
{
    // 0.69424191363061738 = log2(phi), 0.1609640474436813 = log2(sqrt(5)) - 1
    return n < 6 ? ++n / 2 : (int)(0.69424191363061738 * n - 0.1609640474436813);
}
Checked it for n from 0 to 500,000, and for n = 500,000,000 and n = 1,000,000,000.
It's based on Binet's formula.
Needed it for: Fibonacci Sequence Binary Plot.
See: http://bigintegers.blogspot.com/2012/09/fibonacci-sequence-binary-plot-edd-peg.html
First of all, the word "about" is very important, as in "the nth Fibonacci number is about 0.694n bits long". Second, I think the author means as n -> infinity. Try some big number and check :)
You can't have, say, half a bit, so the number of bits must be rounded. So it means:
number of bits = Math.ceil(Math.max(0.694 * n, 32));
i.e. 0.694 * n rounded up once that exceeds 32, and 32 otherwise (for 32-bit systems, that is), and the number may not be exact.
I think he's just using the Fibonacci numbers to illustrate his point that for large numbers (> 32 bits) addition cannot be assumed to be constant time anymore, because it involves more than a single instruction on the CPU.
Why does the formula fail? For F3 = 2 the binary representation needs 2 bits (3 * 0.694 = 2.082). Take F50 = 12,586,269,025, which needs 34 bits (50 * 0.694 = 34.7), which is still reasonably close to the true value.
N   F(N)   bits
1    0      1
2    1      1
3    1      1
4    2      2
5    3      2
6    5      3
7    8      4
8   13      4
etc. That's my interpretation. But then, that means that you have to get to f(47) = 1,836,311,903 before you exceed 32 bits.
The author is basically describing how large numbers affect the performance of the algorithm. To oversimplify: a processor can add numbers of the register size very quickly; if the numbers exceed the register size, more low-level processor instructions need to be executed.
This question is probably not typical Stack Overflow material, but I am not sure where else to ask this small question of mine.
Problem:
Find the number of bits in the binary representation of decimal number 16?
Now I tried to solve this using the formula $2^n = 16 \Rightarrow n = 4$, but the correct answer, as suggested by my module, is 5. Could anybody explain how?
After reading some answers (and since I have 10 more minutes before I can accept the correct answer), I think the following explanation is consistent with the mathematical formula:
To represent 16 we need to represent 17 values (0 through 16), hence $2^n = 17 \Rightarrow n = 4.08746$, but as $n$ needs to be an integer, $n = 5$.
Think of how binary works:
Bit 1: Add 1
Bit 2: Add 2
Bit 3: Add 4
Bit 4: Add 8
Bit 5: Add 16
Thus 16 would be: 10000
With 4 bits, you can represent numbers from 0 to 15.
So yes, you need 5 bits to represent 16.
Decimal - 16  8  4  2  1
Binary  -  1  0  0  0  0
So for anything up to decimal 31 you only need 5 bits.
This is a classic fencepost error.
As you know, computers like to start counting from 0.
So to represent 16, you need bits 0, 1, 2, 3 and 4 (= floor(log2(16))).
But to actually contain bits 0 to 4, you need 5 bits.
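The same counting argument in a few lines of C++, in case it helps to see it run (bitLength is my own helper name):

```cpp
#include <iostream>

// Number of bits needed to write n in binary: shift right until nothing is
// left. Equivalent to floor(log2(n)) + 1 for n > 0.
int bitLength(unsigned n) {
    int bits = 0;
    while (n != 0) { n >>= 1; ++bits; }
    return bits;
}

int main() {
    std::cout << bitLength(15) << "\n";   // 4  (1111)
    std::cout << bitLength(16) << "\n";   // 5  (10000)
    std::cout << bitLength(31) << "\n";   // 5  (11111)
}
```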