I would like to know why the following two methods produce the same result:

    # method 1
    def produce(n)
      n % 16
    end

    # method 2
    def produce(n)
      n & 15
    end
I understand that method 1 uses the modulo operation and method 2 uses a bitwise AND, but I am a little lost as to why ANDing with 15 is the same as taking the value modulo 16.
I have a hunch that it only works for modulo x where x is 2^n. Any ideas?
You are correct in thinking that this has to do with the modulus being 2^n. Let's take a look at how that breaks down in binary.
The 16 from the first method's modulo looks like this in binary:

    10000

By comparison, the 15 used in the bitwise AND is:

    01111
In essence, the modulo or remainder operation gives you what is left over after dividing by 16. Since 16 is a power of 2, every bit at or above the 16s place divides out evenly and is discarded, leaving you with only the bits below 16.

The bitwise AND does exactly the same thing: it keeps every bit below the 16s place and clears the rest, giving you the same result.
This will work for any n % 2^k versus n & (2^k - 1).
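If it helps to see it mechanically, here is a quick self-check, a minimal sketch in C (my own illustration, not from the question). One caveat: in C the equivalence is only guaranteed for unsigned or non-negative values; Ruby's % and & happen to agree for negatives too.

    #include <stdio.h>

    int main(void) {
        /* For non-negative n, n % 16 keeps only the low 4 bits: every
           multiple of 16 divides out evenly, so only bits 0..3 remain.
           Masking with 15 (binary 01111) keeps exactly those bits. */
        for (unsigned n = 0; n < 100; n++) {
            if (n % 16 != (n & 15)) {
                printf("mismatch at %u\n", n);
                return 1;
            }
        }
        printf("n %% 16 == (n & 15) held for every n tested\n");
        return 0;
    }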
I need a formula for counting the number of combinations within a given limit of numbers. Only two numbers are given, and we have to find the third.
For example, for 2 (the number of items chosen) and 3 (the limit), the result would be 3, because there are 3 combinations of the digits: 1 and 2, 1 and 3, 2 and 3.
For 2 and 4 the result is 6,
For 3 and 5 the result is 10,
For 6 and 7 the result is 7, etc.
The first number has to be smaller than the second.
A formula is needed for figuring out the result: if the first number is A and the second is B, what is C going to be?
You're describing combinations. The formula is C = B! / (A! * (B - A)!) (where ! is the factorial operation). It's also worth noting that the first number can be equal to the second; there is exactly one combination in that case. This works out because, by convention, 0! == 1, so C(n, n) = n! / (n! * 0!) = 1.
Unfortunately, since factorials grow very quickly (21! is already too large for a 64-bit unsigned integer), you probably can't compute this formula directly. Wikipedia has a few algorithms you can use here.
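To make that concrete, here is one such approach as a minimal C sketch (the function name choose is my own): multiply and divide alternately so the intermediate values stay small, rather than computing the full factorials.

    #include <stdio.h>

    /* C(b, a), computed as a running product.  After step i the value
       is C(b - a + i, i), which is always an integer, so the division
       by i is exact at every step. */
    unsigned long long choose(unsigned a, unsigned b) { /* needs a <= b */
        unsigned long long c = 1;
        if (a > b - a)
            a = b - a;                 /* C(b, a) == C(b, b - a) */
        for (unsigned i = 1; i <= a; i++)
            c = c * (b - a + i) / i;
        return c;
    }

    int main(void) {
        printf("%llu\n", choose(2, 3));   /* 3, as in the question */
        printf("%llu\n", choose(3, 5));   /* 10 */
        printf("%llu\n", choose(6, 7));   /* 7 */
        return 0;
    }

This still overflows eventually, but much later than the factorials themselves, since the intermediate products never exceed the final answer by more than a factor of B.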
There are multiple ways to test this, and I tried using bitwise operations like so:

    if (((n << 3) - n) % 7 == 0) {
        print "divide by 7";
    }
Is there any other more efficient way?
We can find whether a number is a multiple of 3 using the algorithm below:
If the difference between the count of set bits at odd positions and the count of set bits at even positions is a multiple of 3, then so is the number.
Can we generalize the above algorithm for other numbers too?
So if your number is representable as a hardware-supported integer, and the hardware has division or modulo instructions, you should just use those. It is simpler, and probably faster, than anything you will write yourself. To even compete with the hardware you would have to drop to assembler and find faster instruction sequences than the hardware manufacturers did, without the advantage of the undocumented tricks they can use and you cannot.
Where this question becomes interesting is when arbitrarily large integers are involved. Modulo has some tricks for that. For instance, I can tell you that 100000000010000010000 is divisible by 3, even though my brain is a horribly slow math processor compared to a computer, because of these properties of the % modulo operator:

    (a + b + c) % d = ((a % d) + (b % d) + (c % d)) % d
    (n * a) % d = ((a % d) + (a % d) + ... (n times)) % d = (n * (a % d)) % d
Now note that:

    10 % 3 = 1
    100 % 3 = (10 * (10 % 3)) % 3 = 10 % 3 = 1
    1000 % 3 = (10 * (100 % 3)) % 3 = 1
    etc.
So to tell whether a base-10 number is divisible by 3, we simply sum its digits and check whether the sum is divisible by 3.
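As a minimal sketch (my own, in C since the thread isn't tied to a language), here is that digit-sum rule applied to a number too big for any machine integer, kept as a string:

    #include <stdio.h>

    /* Divisibility by 3 for an arbitrarily large base-10 number held
       as a string: a digit d in position i contributes d * 10^i, and
       since 10 % 3 == 1, that is just d (mod 3) -- so sum the digits. */
    int divisible_by_3(const char *digits) {
        unsigned sum = 0;
        for (const char *p = digits; *p; p++)
            sum = (sum + (unsigned)(*p - '0')) % 3;  /* keep sum tiny */
        return sum == 0;
    }

    int main(void) {
        /* The example from above: its digit sum is 1 + 1 + 1 = 3. */
        printf("%d\n", divisible_by_3("100000000010000010000"));  /* 1 */
        return 0;
    }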
Now, using the same trick with a large binary number expressed in octal, or base 8 (also pointed out by @hropyatr above in the comments), and testing divisibility by 7, we have the special case:

    8 % 7 = 1

and from that we can deduce that:

    (8**N) % 7 = ((8 % 7)**N) % 7 = 1
so to "quickly" test divisibility by 7 of an arbitrarily large octal number, all we need to do is add up its base-8 digits and check whether that sum is divisible by 7.
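A matching sketch for the octal case (again my own illustration; the function name is hypothetical):

    #include <stdio.h>

    /* Remainder mod 7 of an arbitrarily large number given as octal
       digits: since 8 % 7 == 1, each base-8 digit contributes itself
       mod 7, so the running digit sum mod 7 is the answer. */
    int octal_mod7(const char *octal) {
        unsigned r = 0;
        for (const char *p = octal; *p; p++)
            r = (r + (unsigned)(*p - '0')) % 7;
        return (int)r;
    }

    int main(void) {
        printf("%d\n", octal_mod7("61"));   /* octal 61 == 49 == 7*7: 0 */
        printf("%d\n", octal_mod7("60"));   /* 48: remainder 6 */
        return 0;
    }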
Finally, the bad news. The code posted:

    if (((n << 3) - n) % 7 == 0) ...

is not a good test for divisibility by 7, because it always yields true for any n (as pointed out by @Jonathan Leffler):
n << 3 is multiplication by 8, so (n << 3) - n equals 8n - n = 7n, which is divisible by 7 for every n. For instance, 6 is not divisible by 7, but 6 << 3 = 48 and 48 - 6 = 42, which is divisible by 7.
If you meant a right shift, if (((n >> 3) - n) % 7 == 0), that doesn't work either. Test it with 49: 49 >> 3 is 6, and 6 - 49 is -43; although 49 is divisible by 7, -43 is not.
The simplest test, if (n % 7) == 0, is your best shot until n overflows the hardware integer size; at that point you can find a routine to represent n in octal and sum the octal digits modulo 7, as above.
I think if (n % 7 == 0) is the more efficient way to check divisibility by 7. But if you are dealing with large numbers and can't do the modulus operation directly, this might help:
A number of the form 10x + y is divisible by 7 if and only if x − 2y is divisible by 7. In other words, subtract twice the last digit from the number formed by the remaining digits. Continue to do this until a number known to be divisible by 7 is obtained. The original number is divisible by 7 if and only if the number obtained using this procedure is divisible by 7.
For example, take the number 371: 37 − (2×1) = 37 − 2 = 35; 3 − (2×5) = 3 − 10 = −7; thus, since −7 is divisible by 7, 371 is divisible by 7.
Another method is multiplication by 3. A number of the form 10x + y has the same remainder when divided by 7 as 3x + y. One must multiply the leftmost digit of the original number by 3, add the next digit, take the remainder when divided by 7, and continue from the beginning: multiply by 3, add the next digit, etc.
For example, the number 371: 3×3 + 7 = 16 remainder 2, and 2×3 + 1 = 7.
This method can be used to find the remainder of division by 7.
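The multiply-by-3 version translates almost directly into code. Here is a minimal C sketch of it (my own; mod7 is a made-up name) that returns the remainder for a decimal number of any length held as a string:

    #include <stdio.h>

    /* (10x + y) % 7 == (3x + y) % 7, so scan left to right multiplying
       the running remainder by 3 and adding the next digit. */
    int mod7(const char *digits) {
        int r = 0;
        for (const char *p = digits; *p; p++)
            r = (3 * r + (*p - '0')) % 7;
        return r;
    }

    int main(void) {
        printf("%d\n", mod7("371"));   /* 0: 371 == 7 * 53 */
        printf("%d\n", mod7("372"));   /* 1 */
        return 0;
    }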
P.S: reference
How can I calculate a floating-point multiplication in Verilog? So far, I usually scale up by 1024 (a left shift) so the floating-point number becomes an integer, do some operations on it, and then scale back down by 1024 to obtain a fraction again.
For example 0.3545 = 2^-2 + 2^-4 + ...
I have a question about another way of writing the number, like this; I don't know where the minus (-) comes from:

    0.46194 = 2^-1 - 2^-5 - 2^-7 + 2^-10

I just saw this from someone, but written the usual way it would be:

    0.46194 = 2^-2 + 2^-3 + 2^-4 + 2^-6 + 2^-7 + 2^-10 + ...

I don't understand how to tell when the minus can be used. How do we know when the minus is needed? Also, how can I apply this to Verilog RTL?
UPDATE: I understand the concept of using the minus in the operation now. But is there some other method or equation for reducing the expression when multiplying by powers of 2?

UPDATE: How can we use this method in Verilog? For example, I have learned that 0.46194 = 2^-1 - 2^-5 - 2^-7 + 2^-10, and in Verilog this value is written as 0.011101101 = 'hED = 'd237. So the point of the question is: how can we apply this to an application in Verilog?
UPDATE: Sir, would you please check this one? I get a slightly different result. 0.46194 = 0.011101101, and I tried it like this:

    0.011101101
    0.100T10T01
    = 2^-1 - 2^-4 + 2^-5 - 2^-7 + 2^-9 = 0.462890625

Something is different. What did I do wrong?
Multiplication of a variable by a constant is often implemented by adding the variable to shifted versions of itself. This is much cheaper to put on an FPGA than a multiplier circuit accepting two variables.
You can get further savings when there is a run of 1-bits in the constant, by using subtraction as well. (A subtraction circuit costs about the same as an adder.)
Consider the number 30 = 11110. It's equal to 16 + 8 + 4 + 2, but it's also equal to 32 - 2.
In general, a run of 1-bits in the constant (the sum of several successive powers of two) can be formed by adding the power of two just above the run's most significant bit and subtracting the power of two at the run's least significant bit. Hence, instead of 16x + 8x + 4x + 2x, use 32x - 2x.

It doesn't matter whether the run of 1-bits is part of a fraction or an integer. You are just applying the identity 2^a = 1 + (2^0 + 2^1 + ... + 2^(a-1)), in other words 2^0 + 2^1 + ... + 2^a = 2^(a+1) - 1.
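To make the saving concrete, here is the 30x example as a minimal sketch (written in C for readability; in hardware each shift is just wiring, so the second form needs one subtractor instead of three adders):

    #include <stdio.h>

    /* Multiply by the constant 30 (binary 11110) without a multiplier.
       Sum-of-powers form:  30x = 16x + 8x + 4x + 2x  (three adders).
       Run-of-ones form:    30x = 32x - 2x            (one subtractor). */
    unsigned times30(unsigned x) {
        return (x << 5) - (x << 1);
    }

    int main(void) {
        printf("%u\n", times30(7));   /* 210 */
        return 0;
    }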
A 4-bit base-2 number can have these values:

    Base 2: unsigned 4-bit integer
    2^3  2^2  2^1  2^0
      8    4    2    1

If we have 0111 it represents 7. If we were to multiply by this number using a shift-and-add architecture, it would take 3 clock cycles (3 shifts and adds).
An optimisation for this is called CSD (Canonical Signed Digit). It allows minus one to be present in the 'binary' number. We shall represent -1 as T, as that looks like a one with a bar over the top.

100T represents 8 - 1, which is the same as 0111. It can be observed that a long run of 1s can be replaced: the 0 that ends the run becomes 1, and the least significant 1 of the run becomes a -1 (T).
An example of a conversion:

    00111101111
    01000T1000T

If done in two stages, converting one run at a time, we would get:

    00111101111
    0011111000T
    010000T000T

We have taken a number that would take 8 clock cycles or 8 blocks of logic to compute (one per 1-bit) and turned it into 3.
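If you want to automate the recoding, here is a minimal sketch (mine, in C rather than Verilog; to_csd is a made-up name) of the classic rule: when the number is odd, emit +1 if the next bit up is 0, or -1 if it is 1, which marks the bottom of a run of 1s:

    #include <stdio.h>

    /* Recode n into CSD / non-adjacent form: digits are -1 (printed T),
       0 or +1, with no two adjacent non-zero digits.  When n is odd the
       digit is 2 - (n mod 4), i.e. +1 below a 0 bit, -1 below a 1 bit. */
    int to_csd(unsigned n, int digit[], int max) {
        int len = 0;
        while (n != 0 && len < max) {
            if (n & 1) {
                int d = 2 - (int)(n & 3);          /* +1 or -1 */
                digit[len] = d;
                if (d == 1) n -= 1; else n += 1;   /* n becomes even */
            } else {
                digit[len] = 0;
            }
            n >>= 1;
            len++;
        }
        return len;
    }

    int main(void) {
        int d[36];
        int len = to_csd(495, d, 36);       /* 495 == 00111101111 above */
        for (int i = len - 1; i >= 0; i--)  /* most significant first */
            putchar(d[i] == -1 ? 'T' : '0' + d[i]);
        putchar('\n');                      /* prints 10000T000T */
        return 0;
    }

This reproduces the 010000T000T above, minus the leading zero.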
Related questions on fixed-point values in Verilog: "x precision binary fixed point representation?" and "verilog floating points multiplication".
To cover the follow-up section of your question on the CSD conversion: I will treat the values as pure integers to simplify the numbers; this is the same as multiplying them by 2^9 (9 fractional bits).

    256 128  64  32  16   8   4   2   1
      0   1   1   1   0   1   1   0   1

    128 + 64 + 32 + 8 + 4 + 1 => 237
Now with your CSD conversion:

    256 128  64  32  16   8   4   2   1
      1   0   0   T   1   0   T   0   1

    256 - 32 + 16 - 4 + 1 => 237
You can see your conversion was correct. I get 237 * 2^-9 as 0.462890625, which matches your answer when converted back to a fraction. The 0.46194 you started with must have been a rounded version, or a value that gets cut off when quantised to 9 fractional bits. This error is known as quantisation error. The most important thing here, though, is that you got the CSD conversion correct.
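If you want to check that quantisation step yourself, here is a tiny sketch (my own, in C) that rounds 0.46194 to 9 fractional bits:

    #include <stdio.h>

    int main(void) {
        double x = 0.46194;
        int q = (int)(x * 512.0 + 0.5);   /* round to nearest 1/512 */
        printf("q = %d, q/512 = %.9f\n", q, q / 512.0);
        /* prints q = 237, q/512 = 0.462890625 */
        return 0;
    }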
So, I am using the modulus operator and rand() as part of a function I am writing. Now, I understand that

    rand() % 6 + 1;

gives me a random number between one and six in this situation. However, I also know that rand() gives me a random value from 0 to 32767, and that srand() changes the sequence.
but I thought...

    a % b = whatever the remainder of a is after a / b

...so if you were to break rand() % 6 + 1; up, what would it look like? I need to make sense of this for my own good, because I look at rand() % 6 + 1; like this: some random number / 6 = the remainder left over from the random number. Then add the one.
So, my question is twofold:

1 - How does rand() get restricted to just 1 - 5 all of a sudden, instead of the spectrum of numbers from 0 - 32767?

2 - Any of those numbers (1-5) divided by 6 gives you a fractional number, not a whole one, and I thought modulus only works with whole numbers. What info am I missing here?
As you can see I am confused about this. Help is always appreciated :)
rand() % 6 gives the remainder of the random number generated when it is divided by 6.

For example, if rand() generates the number 100, then the result of 100 % 6 is the remainder obtained when 100 is divided by 6, which is 4. So whatever number the function rand() generates, when % 6 is applied, the output will be within 0-5 (the possible remainders when dividing by 6).

"How does rand() get restricted to just 1 - 5 all of a sudden instead of the spectrum of numbers from 0 - 32767?"

rand() does not get restricted; it still outputs numbers from 0 to 32767 (or whatever its maximum limit is). But when we take rand() % 6, the result is in the range 0-5, and adding 1 then shifts it into the range 1-6.
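Putting it together as a runnable C snippet (a minimal sketch):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        srand((unsigned)time(NULL)); /* seed: changes the sequence per run */
        int r = rand();              /* somewhere in 0..RAND_MAX */
        int die = r % 6 + 1;         /* r % 6 is in 0..5; +1 gives 1..6 */
        printf("rand() gave %d, so the roll is %d\n", r, die);
        return 0;
    }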
I am reading an algorithms book by S. Dasgupta. The following is a snippet from the text regarding the number of bits required for the nth Fibonacci number:
It is reasonable to treat addition as a single computer step if small numbers are being added, 32-bit numbers say. But the nth Fibonacci number is about 0.694n bits long, and this can far exceed 32 as n grows. Arithmetic operations on arbitrarily large numbers cannot possibly be performed in a single, constant-time step.
My question is this: for the Fibonacci numbers F1 = 1, F2 = 1, F3 = 2, and so on, substituting n into the formula 0.694n gives approximately 1 bit for F1 and approximately 2 bits for F2, but for F3 and onwards the formula seems to fail. I think I didn't properly understand what the author means here; can anyone please help me understand?
Thanks
Well,

    n           3     4     5     6     7     8
    0.694n      2.08  2.78  3.47  4.16  4.86  5.55
    F(n)        2     3     5     8     13    21
    bits        2     2     3     4     4     5
    log2(F(n))  1     1.58  2.32  3     3.7   4.39
The number of bits required is the base-2 log rounded up (exactly: floor(log2 F(n)) + 1), so this is close enough for me.
The value 0.694 comes from the fact that F(n) is the closest integer to (φ^n)/√5. So log2(F(n)) is about n * log2(φ) - log2(√5), and log2(φ) is 0.694. As n gets bigger, the log2(√5) term and the rounding rapidly become insignificant.
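If you want to see that numerically, here is a small check (my own sketch, in C) printing the exact bit length of F(n) next to 0.694n; 64-bit unsigned arithmetic holds Fibonacci numbers up to F(93):

    #include <stdio.h>

    int main(void) {
        unsigned long long a = 0, b = 1;    /* F(0), F(1) */
        for (int n = 1; n <= 90; n++) {
            unsigned long long t = a + b;   /* F(n+1) */
            a = b;                          /* a is now F(n) */
            b = t;
            if (n % 15 == 0) {
                int bits = 0;
                for (unsigned long long v = a; v != 0; v >>= 1)
                    bits++;
                printf("n=%2d  bits=%2d  0.694n=%5.1f\n", n, bits, 0.694 * n);
            }
        }
        return 0;
    }

For n = 45, for instance, this prints 31 bits against an estimate of 31.2.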
    private static int nobFib(int n) // number of bits in Fib(n)
    {
        // Small n handled by a lookup-style expression; otherwise Binet's
        // formula: bits = floor(n * log2(phi) - log2(sqrt(5))) + 1, with
        // the +1 folded into the second constant.
        return n < 6 ? ++n / 2 : (int)(0.69424191363061738 * n - 0.1609640474436813);
    }
Checked it for n from 0 to 500,000, and for n = 500,000,000 and n = 1,000,000,000.
It's based on Binet's formula.
Needed it for: Fibonacci Sequence Binary Plot.
See: http://bigintegers.blogspot.com/2012/09/fibonacci-sequence-binary-plot-edd-peg.html
First of all, the word "about" is very important, as in "the nth Fibonacci number is about 0.694n bits long". Second, I think the author means as n approaches infinity. Try some big numbers and check :)
You can't have, say, half a bit, so the number of bits must be rounded. So it means:

    number of bits = Math.ceil(Math.max(0.694 * n, 32));

That is, it is rounded up from 0.694n once that exceeds 32, and is 32 below that (for 32-bit systems, that is), and the number may not be exact.
I think he's just using the Fibonacci numbers to illustrate his point that for large numbers (more than 32 bits) addition can no longer be assumed to be constant-time, because it involves more than a single instruction on the CPU.

Why does the formula seem to fail? For F3 = 2, the binary representation needs 2 bits, and 3 * 0.694 = 2.08. Take F50 = 12586269025, which needs 34 bits, while 50 * 0.694 = 34.7, still reasonably close to the true value.
    N   F(N)   bits in F(N)
    1   0      1
    2   1      1
    3   1      1
    4   2      2
    5   3      2
    6   5      3
    7   8      4
    8   13     4
etc. That's my interpretation. But then, that means you have to get to f(47) = 1,836,311,903 before you exceed a signed 32-bit integer.
The author is basically describing how large numbers affect the performance of the algorithm. To put it overly simply: a processor can add numbers of its register size very quickly, but if the numbers exceed the register size, more low-level processor instructions have to be executed.