I want to calculate the value of
F(N) = (F(N-1) * [((N-R+1)^(N-R+1)) / (R^R)]) mod M for given values of N, R and M.
Here A^B means A raised to the power B, NOT a bitwise operation.
Note that M need not be prime. How should I approach this question? If M were prime it would not have been so difficult, since I could find the inverse of R^R mod M.
But M can be any value from 1 to 10^9, so I am not able to tackle this problem.
N can be between 1 and 10^5, and R is less than or equal to N.
Assuming you know somehow that the result of the division is an integer:
As N and R are small, you can do this by computing the prime factorisation of N-R+1 and R.
If we know that R=p^a...q^b then R^R = p^(Ra)...q^(Rb).
Similarly you can work out the power of each prime in (N-R+1)^(N-R+1).
Subtracting the powers of the primes in R^R from the powers of the primes in (N-R+1)^(N-R+1) gives you the powers of each prime in the result.
You can then use a standard binary exponentiation routine to compute the result modulo M with no inverses required.
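Here is a minimal sketch of that approach in Python, assuming the quotient really is an integer (the helper names factorise and term_mod are my own):

def factorise(x):
    # Return the prime factorisation of x as a {prime: exponent} dict.
    factors = {}
    d = 2
    while d * d <= x:
        while x % d == 0:
            factors[d] = factors.get(d, 0) + 1
            x //= d
        d += 1
    if x > 1:
        factors[x] = factors.get(x, 0) + 1
    return factors

def term_mod(n, r, m):
    # Compute ((n-r+1)^(n-r+1) / r^r) mod m by prime-power bookkeeping.
    a = n - r + 1
    powers = {}
    for p, e in factorise(a).items():  # a^a contributes e*a to each prime p
        powers[p] = powers.get(p, 0) + e * a
    for p, e in factorise(r).items():  # r^r removes e*r from each prime p
        powers[p] = powers.get(p, 0) - e * r
    result = 1
    for p, e in powers.items():
        assert e >= 0  # holds exactly when the division is an integer
        result = result * pow(p, e, m) % m  # built-in binary exponentiation
    return result

Each step of the recurrence is then simply F(N) = F(N-1) * term_mod(N, R, M) % M, with no modular inverses needed.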
There are many ways to count the total number of bits in a number, and below is one of them:
int total_bits = log2(num) + 1;
Can you explain what log2(num) is doing here, and why 1 is added?
Let n be a positive integer. Let k be its number of bits.
Then n and k are linked by this mathematical relation:
2^(k-1) ≤ n < 2^k
Where ^ represents exponentiation.
Now, logarithm is a strictly increasing function, so this relation is equivalent to:
log2(2^(k-1)) ≤ log2(n) < log2(2^k)
And since log2 is exactly the inverse function of 2^..., this is the same as:
k-1 ≤ log2(n) < k
Or equivalently:
k ≤ log2(n) + 1 < k+1
In other words, the integer k is the integer part of log2(n) + 1. We can write this as:
k = floor(log2(n) + 1)
Or, in C:
int k = log2(n) + 1;
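For example, n = 5 is 101 in binary, so k = 3; and indeed log2(5) ≈ 2.32, and floor(2.32 + 1) = 3.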
Note: In this answer, I used ^ to represent exponentiation. In most programming languages, ^ represents bitwise-xor, which is completely unrelated to exponents. Be careful and avoid using ^ for exponents in your programs.
This doesn't count the number of bits, but it may or may not return the index of the highest bit that is set.
"May or may not" because of rounding errors: First, log2(65536) might not return 16, but 15.999999999999999999999 in which case you get the wrong answer. Second, if you need this for 64 bit numbers, then I can guarantee that either log2(0x8000_0000_0000_0000) or log2(0x7fff_ffff_ffff_ffff) will give the wrong result.
I'm taking an algorithms class, and I repeatedly have trouble when I'm asked to analyze the runtime of code that contains a line with multiplication or division. How can I find the big-theta cost of multiplying an n-digit number by an m-digit number (where n > m)? Is it the same as multiplying two n-digit numbers?
For example, right now I'm attempting to analyze the following line of code:
return n*count/100
where count is at most 100. Is the asymptotic complexity of this any different from that of n*n/100, or of n*n/n?
You can always look this up here: Computational complexity of mathematical operations.
In your case, the complexity of n*count/100 is O(length(n)), as 100 is a constant and length(count) is at most 3.
In general, multiplying two numbers of n and m digits takes O(nm), and division requires the same time (here I assume long division; there are many sophisticated algorithms that beat this complexity).
To make things clearer I will provide an example. Suppose you have three numbers:
A - n digits length
B - m digits length
C - p digits length
Find the complexity of the following formula:
A * B / C
Multiply first. The complexity of A * B is O(nm), and the result is a number D of n+m digits. Now consider D / C: its complexity is O((n+m)p), so the overall complexity is the sum of the two, O(nm + (n+m)p) = O(m(n+p) + np).
Divide first. Here we compute B / C with complexity O(mp), giving an m-digit number E. Then we calculate A * E with complexity O(nm). The overall complexity is O(mp + nm) = O(m(n+p)).
From this analysis you can see that it is beneficial to divide first. Of course, in a real-life situation you would have to account for numerical stability as well.
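For concreteness, take n = 1000 and m = p = 100. Multiplying first costs on the order of nm + (n+m)p = 100,000 + 110,000 = 210,000 digit operations, while dividing first costs mp + nm = 10,000 + 100,000 = 110,000, roughly half as much.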
From Modern Computer Arithmetic:
Assume the larger operand has size m, and the smaller has size n ≤ m, and denote by M(m,n) the corresponding multiplication cost.
When m is an exact multiple of n, say m = kn, a trivial strategy is to cut the larger operand into k pieces, giving M(kn,n) = kM(n) + O(kn).
Suppose m ≥ n and n is large. To use an evaluation-interpolation scheme, we need to evaluate the product at m + n points, whereas balanced k by k multiplication needs 2k points. Taking k ≈ (m+n)/2, we see that M(m,n) ≤ M((m+n)/2)(1 + o(1)) as n → ∞. On the other hand, from the discussion above, we have M(m,n) ≤ ⌈m/n⌉M(n)(1 + o(1)).
I have got an exercise which requires writing a program that determines whether N! is divisible by N^2.
1 ≤ N ≤ 10^9
I wanted to do this the easy way, by computing the factorial and dividing it by N^2, but for N this large that obviously won't work.
Just an algorithm or pseudo-code would be enough.
For any n > 4, if n is a prime, then n! is not evenly divisible by n^2.
Here is simple explanation to support my argument:
After n! is divided by n, we are left with (n-1)! in the numerator, which still needs to be divided by n. So we need n or a multiple of n among the factors 1, ..., n-1 for (n-1)! to be evenly divisible by n, and that can never happen when n is prime.
By contrast, when n > 4 is composite, (n-1)! always does contain enough factors of n. Check it out for yourself by diving into a bit of number theory.
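For example, with n = 5 (prime), 5! = 120 and 5^2 = 25, but 120 / 25 is not an integer; with n = 6 (composite), 6! = 720 and 6^2 = 36, and 720 / 36 = 20.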
Edit: Here is simple Python code for the above. Its complexity is O(sqrt(N)):

def checkPrime(n):
    # Trial division up to and including sqrt(n); testing i*i <= n avoids
    # both floating point and missing perfect squares such as n = 9.
    i = 2
    while i * i <= n:
        if n % i == 0:
            return "Yes"  # n is composite, so n^2 divides n!
        i += 1
    return "No"  # n is prime, so n^2 does not divide n!

def main():
    n = int(input())
    if n == 1:
        print("Yes")  # 1! = 1 is divisible by 1^2 = 1
    elif n == 4:
        print("No")   # the lone composite exception: 16 does not divide 24
    else:
        print(checkPrime(n))

main()
Input:
7
Output:
No
This is related to, though easier than, Wilson's Theorem, which says that a number n > 1 is prime if and only if
(n-1)! = -1 (mod n)
This is algebraically equivalent to saying that n>1 is prime if and only if
n! = -n (mod n^2)
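For example, take n = 5: 4! = 24 ≡ -1 (mod 5), and correspondingly 5! = 120 ≡ 20 = 5^2 - 5 (mod 25).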
Furthermore, it is known and easy to prove that (to quote the Wikipedia article)
With the sole exception of 4, where 3! = 6 ≡ 2 (mod 4), if n is
composite then (n − 1)! is congruent to 0 (mod n).
Hence, with the sole exception of 4, if n is composite then (n-1)! = 0 (mod n), hence n! = 0 (mod n^2); and if n is prime, n! = -n = n^2-n (mod n^2), hence n! isn't congruent to 0 in that case.
The full power of Wilson's theorem is needed if you want to show that for prime n, n! leaves a remainder of exactly n^2-n upon division by n^2. For this problem all you need to know is that it isn't zero.
In any event, you could just write a program which runs a primality check, although whether or not that would be considered a valid solution is up to whoever assigned the problem.
For general integer keys and a table of size M, a prime number:
• a good fast general purpose hash function is H(K) = K mod M
Can someone please explain what H(K) = K mod M means or how it works? I'm really confused about what this hash function is supposed to represent.
K mod M is the remainder of K when divided by M. In many languages this is computed by the % operator. As K mod M will always be between 0 and M-1, we can always map an integer to one of the M slots.
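As a tiny illustration, here is a sketch in Python (M = 13 is my own arbitrary choice of a small prime table size):

M = 13  # table size, a prime

def h(k):
    return k % M  # always in 0..M-1, so it indexes a valid slot

for key in (7, 13, 20, 100):
    print(key, "->", h(key))
# prints: 7 -> 7, 13 -> 0, 20 -> 7 (a collision with 7), 100 -> 9

Note that two different keys can land in the same slot (here 7 and 20), which is why hash tables need a collision-resolution strategy on top of the hash function.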
How can I find the nth Tribonacci number with the matrix multiplication method if the initial values are some arbitrary numbers, say 1, 2, 3, i.e. T(1) = 1, T(2) = 2 and T(3) = 3?
If T(n) = T(n-1) + T(n-2) + T(n-3), then how do I find T(n) when n is very large? I would appreciate it if anyone could explain the matrix multiplication method and how to construct the initial matrix.
The matrix multiplication method involves using the matrix recurrence relation.
For the Fibonacci series, we can define a vector of length 2 to represent adjacent Fibonacci numbers. Using this vector, we can define a recurrence relation with a matrix multiplication:

| F(n+1) |   | 1 1 |   | F(n)   |
| F(n)   | = | 1 0 | * | F(n-1) |
Similarly, the Tribonacci series recurrence relation can be written in this way:

| T(n+1) |   | 1 1 1 |   | T(n)   |
| T(n)   | = | 1 0 0 | * | T(n-1) |
| T(n-1) |   | 0 1 0 |   | T(n-2) |
The only difference is that the vector and matrix sizes are different.
Now, to calculate a large Tribonacci number, we just apply the matrix multiplication over and over; applying it n-3 times to the starting vector collapses into a single matrix power, and we get:

| T(n)   |             | T(3) |
| T(n-1) | = M^(n-3) * | T(2) |
| T(n-2) |             | T(1) |
The matrix to the power of n (M^n) can be efficiently calculated, because we can use an exponentiation algorithm.
Many efficient exponentiation algorithms for scalars are described by Wikipedia in Exponentiation by Squaring. We can use the same idea for matrix exponentiation.
I will describe a simple way to do this. First we write n as a binary number, eg:
n = 37 = 100101
Then, calculate M to each power of 2 by squaring the previous power of 2: M^1, M^2 = M^1 * M^1, M^4 = M^2 * M^2, M^8 = M^4 * M^4, M^16 = M^8 * M^8, M^32 = M^16 * M^16, ...
And finally, multiply together the powers of M corresponding to the set binary digits of n. In this case, M^n = M^1 * M^4 * M^32.
After calculating that power, we multiply it by the Tribonacci vector of the first 3 values, i.e. [T(3), T(2), T(1)] = [3, 2, 1], and read the nth Tribonacci number from the first component.
Because the matrices have fixed size, each matrix multiplication takes constant time. We must do O(log n) matrix multiplications. Thus, we can calculate the nth Tribonacci number in O(log n) time.
Compare this to the normal dynamic programming method, which takes O(n) time by calculating every Tribonacci number up to the nth (i.e. for (i = 3 to n) {T[i] = T[i-1]+T[i-2]+T[i-3];} return T[n];).
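Here is a minimal sketch of the whole method in Python (the function names are my own), using exponentiation by squaring on the 3x3 matrix above:

def mat_mul(A, B):
    # Multiply two 3x3 matrices.
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_pow(M, e):
    # Exponentiation by squaring: O(log e) matrix multiplications.
    R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # 3x3 identity
    while e:
        if e & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        e >>= 1
    return R

def tribonacci(n):
    # T(1) = 1, T(2) = 2, T(3) = 3, T(n) = T(n-1) + T(n-2) + T(n-3).
    if n <= 3:
        return n
    M = [[1, 1, 1],
         [1, 0, 0],
         [0, 1, 0]]
    P = mat_pow(M, n - 3)
    # [T(n), T(n-1), T(n-2)] = M^(n-3) * [T(3), T(2), T(1)]
    return P[0][0] * 3 + P[0][1] * 2 + P[0][2] * 1

print([tribonacci(n) for n in range(1, 8)])  # [1, 2, 3, 6, 11, 20, 37]

Since the Tribonacci numbers themselves grow exponentially, for very large n you would normally reduce the matrix entries modulo some number inside mat_mul.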
I will assume that you know how to code up matrix multiplication in the language of your choice.
Consider:
| a1 b1 c1 |
[f(n) f(n - 1) f(n - 2)] * | a2 b2 c2 | = [f(n + 1) f(n) f(n - 1)]
| a3 b3 c3 |
Find the unknowns in the matrix based on that and that will be the matrix you want.
The answer in this case is:
1 1 0
1 0 1
1 0 0
The method is general, however: it works even if you sum k previous terms, and even if they have constant coefficients in front of them, etc.
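To check the answer: multiplying the row vector [f(n) f(n-1) f(n-2)] by the first column (1, 1, 1) gives f(n) + f(n-1) + f(n-2) = f(n+1), the second column (1, 0, 0) reproduces f(n), and the third column (0, 1, 0) reproduces f(n-1).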