The question is: given 1/x + 1/y = 1/N! (N factorial), find the number of pairs (x, y) that satisfy the equation, for large values of N.
I've solved the problem for relatively small values of N (any N! that fits into a long): I know I can solve it by counting all the divisors of (N!)^2. But that approach starts failing once (N!)^2 no longer fits into a long. I also know I can find all the divisors of N! by combining the prime factorizations of each of the numbers 1 through N. What I'm missing is how to use those factorizations to find the x and y values.
EDIT: Not looking for the "answer" just a hint or two.
Problem : To find the count of factors of (N!)^2.
Hints :
1) You don't really need to compute (N!)^2 to find its prime factors.
Why?
Say you find the prime factorization of N! as (p1^k1) x (p2^k2) .... (pi^ki)
where pj's are primes and kj's are exponents.
Now the number of factors of N! is as obvious as
(k1 + 1) x (k2 + 1) x ... x (ki + 1).
2) For (N!)^2, the above expression would be,
(2*k1 + 1) * (2*k2 + 1) * .... * (2*ki + 1)
which is essentially what we are looking for.
For example, let's take N=4, N! = 24 and (N!)^2 = 576;
24 = 2^3 * 3^1;
Hence the number of factors = (3+1) * (1+1) = 8, viz {1,2,3,4,6,8,12,24}
For 576 = 2^6 * 3^2, it is (2*3 + 1) * (2*1 + 1) = 21;
3) Basically you need to find the multiplicity of each prime <= N here.
Please correct me if I'm wrong anywhere up to here.
Here is your hint. Suppose that m = p1^k1 · p2^k2 · ... · pj^kj. Every factor of m will have from 0 to k1 factors of p1, 0 to k2 factors of p2, and so on. Thus there are (1 + k1) · (1 + k2) · ... · (1 + kj) possible divisors.
So you need to figure out the prime factorization of (n!)^2.
Note, this will count, for instance, 1⁄6 = 1⁄8 + 1⁄24 as being a different pair from 1⁄6 = 1⁄24 + 1⁄8. If order does not matter, add 1 and divide by 2. (The divide by 2 is because typically 2 divisors will lead to the same answer, with the add 1 for the exception that the divisor n! leads to a pair that pairs with itself.)
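If you want to sanity-check the divisor-counting argument, here is a minimal brute-force sketch (the function names are mine, not from the question) that counts ordered solutions directly for small N and compares against the divisor count of (N!)^2:

from math import factorial

def count_solutions_bruteforce(n):
    # Rearranging 1/x + 1/y = 1/n! gives y = x*n!/(x - n!), so x runs over
    # n!+1 .. n!+(n!)^2 and y is determined; we only check that y is an integer.
    # Only practical for very small n.
    f = factorial(n)
    count = 0
    for x in range(f + 1, f + f * f + 1):
        if (x * f) % (x - f) == 0:
            count += 1
    return count

def count_divisors(m):
    # Naive divisor count, good enough for tiny m.
    return sum(1 for d in range(1, m + 1) if m % d == 0)

for n in range(1, 5):
    f = factorial(n)
    assert count_solutions_bruteforce(n) == count_divisors(f * f)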
It's more to math than programming.
Your equation implies xy = n!(x+y).
Let c = gcd(x,y), so x = cx', y= cy', and gcd(x', y')=1.
Then c^2 x' y'=n! c (x'+y'), so cx'y' = n!(x' + y').
Now, since x' and y' are coprime, both are coprime to x'+y', so x'y' cannot be divisible by x'+y'; therefore c must be.
So c = a(x'+y'), which gives ax'y'=n!.
To solve your problem, you should find all pairs of coprime divisors x', y' of n! (with x'y' dividing n!); every such pair gives a solution ( n!(x'+y')/y', n!(x'+y')/x' ).
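Here is a small sketch of that parameterization for tiny N (names are mine), checking that every coprime pair of divisors x', y' with x'y' dividing n! really yields a solution, and that the number of such pairs matches the divisor count of (n!)^2:

from math import factorial, gcd

def solutions_from_coprime_divisors(n):
    # Enumerate solutions of 1/x + 1/y = 1/n! via the coprime-divisor
    # parameterization above. Only meant for very small n.
    f = factorial(n)
    divisors = [d for d in range(1, f + 1) if f % d == 0]
    sols = set()
    for xp in divisors:
        for yp in divisors:
            if gcd(xp, yp) == 1 and f % (xp * yp) == 0:
                x = f * (xp + yp) // yp
                y = f * (xp + yp) // xp
                assert x * y == f * (x + y)  # same as 1/x + 1/y = 1/n!
                sols.add((x, y))
    return sols

# For N = 3, n! = 6 and (n!)^2 = 36 has 9 divisors, so 9 ordered solutions.
print(len(solutions_from_coprime_divisors(3)))  # 9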
Let F(N) be the number of (x,y) combinations that satisfy your requirements.
F(N+1) = F(N) + #(x,y) that satisfy the condition for N+1 where at least one of x, y is not divisible by N+1.
The intuition here is that for every combination (x,y) that works for N, (x*(N+1), y*(N+1)) works for N+1. Also, if (x,y) is a solution for N+1 and both are divisible by N+1, then (x/(N+1), y/(N+1)) is a solution for N.
Now, I am not sure how difficult it is to find #(x,y) that work for (N+1) and at least one of them not divisible by N+1, but should be easier than solving the original problem.
Now the multiplicity (exponent) of a prime p in N! can be found by the formula below:
Exponent of p in N! = [N/p] + [N/p^2] + [N/p^3] + [N/p^4] + ...
where [x] is the floor function, e.g. [1.23] = integer part of 1.23 = 1.
E.g. the exponent of 3 in 24! = [24/3] + [24/9] + [24/27] + ... = 8 + 2 + 0 + 0 + ... = 10
Now the whole problem reduces to identifying the primes <= N and finding the exponent of each of them in N!.
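Putting the pieces together, here is a minimal sketch (my own function names): sieve the primes up to N, compute each prime's exponent in N! with the formula above, and multiply the (2k+1) terms to get the number of divisors of (N!)^2, which is the number of ordered (x, y) pairs:

def primes_up_to(n):
    # Simple sieve of Eratosthenes.
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def exponent_in_factorial(n, p):
    # Exponent of prime p in n! (Legendre's formula from above).
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e

def count_pairs(n):
    # Number of ordered (x, y) with 1/x + 1/y = 1/n!, i.e. d((n!)^2).
    result = 1
    for p in primes_up_to(n):
        result *= 2 * exponent_in_factorial(n, p) + 1
    return result

print(count_pairs(4))  # 24 = 2^3 * 3, so (2*3 + 1) * (2*1 + 1) = 21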
I have an array of n random integers
I choose a random integer and partition by the chosen random integer (all integers smaller than the chosen integer will be on the left side, all bigger integers will be on the right side)
What will be the size of my left and right side in the average case, if we assume no duplicates in the array?
I can easily see that there is a 1/n chance that the array is split exactly in half, if we are lucky. Additionally, there is a 1/n chance that the array is split so that the left side has length n/2 - 1 and the right side has length n/2 + 1, and so on.
Could we derive from this observation the "average" case?
You can probably find a better explanation (and certainly the proper citations) in a textbook on randomized algorithms, but here's the gist of average-case QuickSort, in two different ways.
First way
Let C(n) be the expected number of comparisons required on average for a random permutation of 1...n. Since the expectation of the sum of the number of comparisons required for the two recursive calls equals the sum of the expectations, we can write a recurrence that averages over the n possible divisions:
C(0) = 0
C(n) = n - 1 + (1/n) * Σ_{i=0}^{n-1} (C(i) + C(n-1-i))
Rather than pull the exact solution out of a hat (or peek at the second way), I'll show you how I'd get an asymptotic bound.
First, I'd guess the asymptotic bound. Obviously I'm familiar with QuickSort and my reasoning here is fabricated, but since the best case is O(n log n) by the Master Theorem, that's a reasonable place to start.
Second, I'd guess an actual bound: 100 n log (n + 1). I use a big constant because why not? It doesn't matter for asymptotic notation and can only make my job easier. I use log (n + 1) instead of log n because log n is undefined for n = 0, and 0 log (0 + 1) = 0 covers the base case.
Third, let's try to verify the inductive step. Assuming that C(i) ≤ 100 i log (i + 1) for all i ∈ {0, ..., n−1},
C(n) = n - 1 + (1/n) * Σ_{i=0}^{n-1} (C(i) + C(n-1-i))            [by definition]
     = n - 1 + (2/n) * Σ_{i=0}^{n-1} C(i)                          [by symmetry]
     ≤ n - 1 + (2/n) * Σ_{i=0}^{n-1} 100 i log(i + 1)              [by the inductive hypothesis]
     ≤ n - 1 + (2/n) * ∫_0^n 100 x log(x + 1) dx                   [upper Darboux sum]
     = n - 1 + (2/n) * (50 (n² - 1) log(n + 1) - 25 (n - 2) n)     [WolframAlpha FTW, I forgot how to integrate]
     = n - 1 + 100 (n - 1/n) log(n + 1) - 50 (n - 2)
     = 100 (n - 1/n) log(n + 1) - 49 n + 99.
Well that's irritating. It's almost what we want, but that + 99 messes up the induction a little bit. We can extend the base cases to n = 1 and n = 2 by inspection and then assume that n ≥ 3 to finish the bound:
C(n) ≤ 100 (n - 1/n) log(n + 1) - 49 n + 99
     ≤ 100 n log(n + 1) - 49 n + 99
     ≤ 100 n log(n + 1).                                           [since n ≥ 3 implies 49 n ≥ 99]
Once again, no one would publish such a messy derivation. I wanted to show how one could work it out formally without knowing the answer ahead of time.
Second way
How else can we derive how many comparisons QuickSort does in expectation? Another possibility is to exploit the linearity of expectation by summing, over each pair of elements, the probability that those elements are compared. What is that probability? We observe that a pair {i, j} is compared if and only if, at the deepest invocation where both i and j are still present in the subarray, either i or j is chosen as the pivot. This happens with probability 2/(j + 1 - i), since the pivot is equally likely to be i, j, or one of the j - (i+1) elements that lie between them. Therefore,
C(n) = Σ_{i=1}^{n} Σ_{j=i+1}^{n} 2/(j + 1 - i)
     = Σ_{i=1}^{n} Σ_{d=2}^{n+1-i} 2/d
     = Σ_{i=1}^{n} 2 (H(n+1-i) - 1)          [where H is the harmonic numbers]
     = 2 Σ_{i=1}^{n} H(i) - 2n
     = 2 (n + 1) (H(n+1) - 1) - 2n.          [WolframAlpha FTW again]
Since H(n) is Θ(log n), this is Θ(n log n), as expected.
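If you want to check the algebra, here is a small sketch (mine) that evaluates the recurrence from the first way with exact fractions and compares it against the closed form from the second way:

from fractions import Fraction

def expected_comparisons(n):
    # C(n) from the recurrence C(n) = n - 1 + (2/n) * sum of C(i) for i < n.
    C = [Fraction(0)] * (n + 1)
    for m in range(1, n + 1):
        C[m] = m - 1 + Fraction(2, m) * sum(C[i] for i in range(m))
    return C[n]

def closed_form(n):
    # 2 (n+1) (H(n+1) - 1) - 2n, with H the harmonic numbers.
    H = sum(Fraction(1, k) for k in range(1, n + 2))
    return 2 * (n + 1) * (H - 1) - 2 * n

for n in range(1, 12):
    assert expected_comparisons(n) == closed_form(n)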
I have recently stumbled upon an algorithmic problem and I can't get to the end of it. You're given a positive integer N < 10^13, and you need to choose a nonnegative integer M such that the sum MN + N(N-1)/2 has the least number of divisors that lie between 1 and N, inclusive.
Can someone point me to the right direction for solving this problem?
Thank you for your time.
Find a prime P greater than N. There are a number of ways to do this.
If N is odd, then M*N + N*(N-1)/2 is a multiple of N. It must be divisible by any factor of N, but if we choose M = P - (N-1)/2, then M*N + N*(N-1)/2 = P*N, so it isn't divisible by any other integers between 1 and N.
If N is even, then M*N + N*(N-1)/2 is a multiple of N/2. It must be divisible by any factor of N/2, but if we choose M = (P - N + 1)/2 (which must be an integer), then M*N + N*(N-1)/2 = (P - N + 1)*N/2 + (N-1)*N/2 = P*N/2, so it isn't divisible by any other integers between 1 and N.
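A sketch of this construction (function names are mine; the primality test is plain trial division, which is fine for illustration but for N near 10^13 you would probably want something faster like Miller-Rabin):

def is_prime(n):
    # Trial division; adequate here because we only test a few candidates.
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def next_prime_above(n):
    p = n + 1
    while not is_prime(p):
        p += 1
    return p

def choose_m(n):
    # Pick M as described above: the sum M*N + N*(N-1)/2 becomes P*N (N odd)
    # or P*N/2 (N even) for a prime P > N.
    p = next_prime_above(n)
    if n % 2 == 1:
        return p - (n - 1) // 2
    else:
        # P is odd (any prime > N >= 2 is), so P - N + 1 is even and M is an integer.
        return (p - n + 1) // 2

n = 10
m = choose_m(n)
print(m, m * n + n * (n - 1) // 2)  # for n = 10: prime P = 11, sum = 55 = P*N/2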
The multiplication algorithm is for multiplying two radix r numbers:
0 <= x,y < r^n
x = x1 * r^(n/2) + x0
y = y1 * r^(n/2) + y0
where x0 is the half of x that contains the least significant digits, and x1 is the half with the most significant digits, and similarly for y.
So if r = 10 and n = 4, we have that x = 9723 = 97 * 10^2 + 23, where x1 = 97 and x0 = 23.
The multiplication can be done as:
z = x*y = x1*y1*r^n + (x0*y1 + x1*y0)*r^(n/2) + x0*y0
So we have now four multiplications of half-sized numbers (we initially had a multiplication of n digit numbers, and now we have four multiplications of n/2 digit numbers).
As I see it the recurrence for this algorithm is:
T(n) = O(1) + 4*T(n/2)
But apparently it is T(n) = O(n) + 3T(n/2)
Either way, the solution is T(n) = O(n^2), and I can see this, but I am wondering why there is an O(n) term instead of an O(1) term?
You are right: if you compute the term x0*y1 + x1*y0 naively, with two separate products, the time complexity is quadratic. This is because we do four products and the recurrence is, as you suggest, T(n) = O(n) + 4T(n/2), which solves to O(n^2).
However, Karatsuba observed that xy = z2 * r^n + z1 * r^(n/2) + z0, where we let z2 = x1*y1, z0 = x0*y0, and z1 = x0*y1 + x1*y0, and that one can express the last term as z1 = (x1+x0)(y1+y0) - z2 - z0, which involves only one new product. Using this trick, the recurrence does become T(n) = O(n) + 3T(n/2) because we do three products altogether (as opposed to four if we don't use the trick).
Because the numbers are of order r^n we will need n digits to represent the numbers (in general, for a fixed r>=2, we need O(log N) digits to represent the number N). To add two numbers of that order, you need to "touch" all the digits. Since there are n digits, you need O(n) (formally I'd say Omega(n), meaning "at least order of n time", but let's leave the details aside) time to compute their sum.
For example, when computing the product N*M, the number of bits n will be max(log N, log M) (assuming the base r>=2 is constant).
The algebraic trick is explained in more detail on the Wiki page for the Karatsuba algorithm.
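A minimal recursive sketch of the trick (my own code, working in base r = 10 for clarity rather than speed):

def karatsuba(x, y):
    # Multiply nonnegative integers using three recursive products per level.
    if x < 10 or y < 10:
        return x * y
    n = max(len(str(x)), len(str(y)))
    half = n // 2
    r_half = 10 ** half
    x1, x0 = divmod(x, r_half)  # x = x1 * r^(n/2) + x0
    y1, y0 = divmod(y, r_half)
    z2 = karatsuba(x1, y1)
    z0 = karatsuba(x0, y0)
    z1 = karatsuba(x1 + x0, y1 + y0) - z2 - z0  # = x0*y1 + x1*y0
    return z2 * r_half * r_half + z1 * r_half + z0

print(karatsuba(9723, 4211) == 9723 * 4211)  # True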
Interview questions where I start with "this might be solved by generating all possible combinations for the array elements" are usually meant to let me find something better.
Anyway I would like to add "I would definitely prefer another solution since this is O(X)".. the question is: what is the O(X) complexity of generating all combinations for a given set?
I know that there are n! / (n-k)!k! combinations (binomial coefficients), but how to get the big-O notation from that?
First, there is nothing wrong with using O(n! / ((n-k)! k!)) - or any other function f(n) as O(f(n)) - but I believe you are looking for a simpler expression that still describes the same bound.
If you are willing to look at the size of the subset k as constant, then for k <= n - k:
n! / ((n - k)! k!) = ((n - k + 1) (n - k + 2) (n - k + 3) ... n ) / k!
But the above is actually (n ^ k + O(n ^ (k - 1))) / k!, which is in O(n ^ k)
Similarly, if n - k < k, you get O(n ^ (n - k))
Which gives us O(n ^ min{k, n - k})
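To make the bound concrete, here is a small sketch (mine) that computes C(n, k) using only min(k, n-k) multiply/divide steps, which mirrors why the count itself grows like n^min{k, n-k}:

def binomial(n, k):
    # n choose k using min(k, n-k) multiplications and divisions.
    k = min(k, n - k)
    result = 1
    for i in range(1, k + 1):
        # Multiply by one factor from (n-k+1)..n and divide by one from 1..k;
        # the intermediate value stays an integer at every step.
        result = result * (n - k + i) // i
    return result

print(binomial(10, 3))  # 120
print(binomial(10, 7))  # also 120, via the k -> n-k symmetry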
I know this is an old question, but it comes up as a top hit on google, and IMHO has an incorrectly marked accepted answer.
C(n,k) = n Choose k = n! / ( (n-k)! * k!)
The above function gives the number of k-element subsets that can be made from an n-element set. Purely from a logical reasoning perspective, C(n, k) has to be smaller than
∑ C(n,k) ∀ k ∊ (0..n),
as this expression counts the power set. In English: add up C(n,k) for all k from 0 to n; we know this sum equals 2 ^ n.
So, C(n, k) has an upper bound of 2 ^ n, which is smaller than n ^ k whenever k is large relative to n (roughly k > n / log2(n)) and k < n.
So to answer your question, C(n, k) has an upper bound of 2 ^ n for sure, but I don't know if there is a tighter upper bound that describes it better.
As a follow-up to @amit's answer, an upper bound of min{k, n - k} is n / 2.
Therefore, the upper-bound for "n choose k" complexity is O(n ^ (n / 2))
Case 1: n-k < k
Suppose n = 11 and k = 8, so n - k = 3. Then
n!/((n-k)! k!) = 11!/(3! 8!) = (11 x 10 x 9)/3!
Treat this as roughly (11 x 11 x 11)/6 = O(11^3); since 11 = n this is O(n^3), and since n - k = 3 it becomes O(n^(n-k)).
Case 2: k < n-k
Suppose n = 11 and k = 3, so n - k = 8. Then
n!/((n-k)! k!) = 11!/(8! 3!) = (11 x 10 x 9)/3!
Again roughly (11 x 11 x 11)/6 = O(11^3) = O(n^3), and since k = 3 it becomes O(n^k).
Which gives us O(n^min{k, n-k}).
I was curious if there was a good way to do this. My current code is something like:
def factorialMod(n, modulus):
    ans = 1
    for i in range(1, n+1):
        ans = ans * i % modulus
    return ans % modulus
But it seems quite slow!
I also can't calculate n! and then apply the prime modulus because sometimes n is so large that n! is just not feasible to calculate explicitly.
I also came across http://en.wikipedia.org/wiki/Stirling%27s_approximation and wonder if this can be used at all here in some way?
Or, how might I create a recursive, memoized function in C++?
n can be arbitrarily large
Well, n can't be arbitrarily large - if n >= m, then n! ≡ 0 (mod m) (because m is one of the factors, by the definition of factorial).
Assuming n << m and you need an exact value, your algorithm can't get any faster, to my knowledge. However, if n > m/2, you can use the following identity (Wilson's theorem - thanks @Daniel Fischer!)
to cap the number of multiplications at about m-n
(m-1)! ≡ -1 (mod m)
1 * 2 * 3 * ... * (n-1) * n * (n+1) * ... * (m-2) * (m-1) ≡ -1 (mod m)
n! * (n+1) * ... * (m-2) * (m-1) ≡ -1 (mod m)
n! ≡ -[(n+1) * ... * (m-2) * (m-1)]^(-1) (mod m)
This gives us a simple way to calculate n! (mod m) in m-n-1 multiplications, plus a modular inverse:
def factorialMod(n, modulus):
    ans = 1
    if n <= modulus//2:
        #calculate the factorial normally (right argument of range() is exclusive)
        for i in range(1, n+1):
            ans = (ans * i) % modulus
    else:
        #Fancypants method for large n
        for i in range(n+1, modulus):
            ans = (ans * i) % modulus
        ans = modinv(ans, modulus)
        ans = -1*ans + modulus
    return ans % modulus
We can rephrase the above equation in another way, which may or may not perform slightly faster. Using the identity (m - k) ≡ -k (mod m), we can rephrase the equation as
n! ≡ -[(n+1) * ... * (m-2) * (m-1)]^(-1) (mod m)
n! ≡ -[(n+1-m) * ... * (m-2-m) * (m-1-m)]^(-1) (mod m)
(reverse order of terms)
n! ≡ -[(-1) * (-2) * ... * -(m-n-2) * -(m-n-1)]^(-1) (mod m)
n! ≡ -[(1) * (2) * ... * (m-n-2) * (m-n-1) * (-1)^(m-n-1)]^(-1) (mod m)
n! ≡ [(m-n-1)!]^(-1) * (-1)^(m-n) (mod m)
This can be written in Python as follows:
def factorialMod(n, modulus):
    ans = 1
    if n <= modulus//2:
        #calculate the factorial normally (right argument of range() is exclusive)
        for i in range(1, n+1):
            ans = (ans * i) % modulus
    else:
        #Fancypants method for large n
        for i in range(1, modulus-n):
            ans = (ans * i) % modulus
        ans = modinv(ans, modulus)
        #Since m is an odd prime, (-1)^(m-n) = -1 if n is even, +1 if n is odd
        if n % 2 == 0:
            ans = -1*ans + modulus
    return ans % modulus
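Both snippets above call a modinv helper that isn't shown here. A minimal version, assuming the modulus is prime (as in the question), uses Fermat's little theorem; on Python 3.8+ you could equivalently write pow(a, -1, m):

def modinv(a, m):
    # Modular inverse of a mod m, assuming m is prime and a % m != 0.
    # By Fermat's little theorem a^(m-1) ≡ 1 (mod m), so a^(m-2) is the inverse.
    return pow(a, m - 2, m)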
If you don't need an exact value, life gets a bit easier - you can use Stirling's approximation to calculate an approximate value in O(log n) time (using exponentiation by squaring).
Finally, I should mention that if this is time-critical and you're using Python, try switching to C++. From personal experience, you should expect about an order-of-magnitude increase in speed or more, simply because this is exactly the sort of CPU-bound tight-loop that natively-compiled code excels at (also, for whatever reason, GMP seems much more finely-tuned than Python's Bignum).
Expanding my comment to an answer:
Yes, there are more efficient ways to do this. But they are extremely messy.
So unless you really need that extra performance, I don't suggest trying to implement them.
The key is to note that the modulus (which is essentially a division) is going to be the bottleneck operation. Fortunately, there are some very fast algorithms that allow you to perform modulus over the same number many times.
Division by Invariant Integers using Multiplication
Montgomery Reduction
These methods are fast because they essentially eliminate the modulus.
Those methods alone should give you a moderate speedup. To be truly efficient, you may need to unroll the loop to allow for better IPC:
Something like this:
ans0 = 1
ans1 = 1
for i in range(1, (n+1) // 2):
    ans0 = ans0 * (2*i + 0) % modulus
    ans1 = ans1 * (2*i + 1) % modulus
return ans0 * ans1 % modulus
but accounting for an odd number of iterations and combining it with one of the methods I linked to above.
Some may argue that loop-unrolling should be left to the compiler. I will counter-argue that compilers are currently not smart enough to unroll this particular loop. Have a closer look and you will see why.
Note that although my answer is language-agnostic, it is meant primarily for C or C++.
n! mod m can be computed in O(n^(1/2 + ε)) operations instead of the naive O(n). This requires use of FFT polynomial multiplication, and is only worthwhile for very large n, e.g. n > 10^4.
An outline of the algorithm and some timings can be seen here: http://fredrikj.net/blog/2012/03/factorials-mod-n-and-wilsons-theorem/
If we want to calculate M = a*(a+1) * ... * (b-1) * b (mod p), we can use the following approach, if we assume we can add, subtract and multiply fast (mod p), and get a running time complexity of O( sqrt(b-a) * polylog(b-a) ).
For simplicity, assume (b-a+1) = k^2, is a square. Now, we can divide our product into k parts, i.e. M = [a*..*(a+k-1)] *...* [(b-k+1)*..*b]. Each of the factors in this product is of the form p(x)=x*..*(x+k-1), for appropriate x.
By using a fast polynomial multiplication algorithm, such as the Schönhage–Strassen algorithm, in a divide & conquer manner, one can find the coefficients of the polynomial p(x) in O( k * polylog(k) ). Now, apparently there is an algorithm for evaluating the same degree-k polynomial at k points in O( k * polylog(k) ), which means we can calculate p(a), p(a+k), ..., p(b-k+1) fast.
This algorithm for evaluating one polynomial at many points is described in the book "Prime Numbers" by C. Pomerance and R. Crandall. Eventually, when you have these k values, you can multiply them in O(k) and get the desired value.
Note that all of our operations were taken (mod p).
The exact running time is O(sqrt(b-a) * log(b-a)^2 * log(log(b-a))).
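Here is a toy sketch of the blocking idea (my own code). It expands p(x) = (x+1)...(x+k) and evaluates it at the k block starting points naively, so the polynomial steps cost roughly O(k^2) operations instead of the O(k * polylog(k)) you would get with FFT multiplication and fast multipoint evaluation, but the overall sqrt-blocking structure is the one described above:

import math

def factorial_mod_blocks(n, p):
    # n! mod p via ~sqrt(n) block products (naive polynomial version).
    if n >= p:
        return 0
    k = int(n ** 0.5)
    if k == 0:
        return 1 % p
    # Coefficients of q(x) = (x+1)(x+2)...(x+k) mod p, lowest degree first.
    coeffs = [1]
    for j in range(1, k + 1):
        nxt = [0] * (len(coeffs) + 1)
        for d, c in enumerate(coeffs):
            nxt[d] = (nxt[d] + c * j) % p      # term from multiplying by the constant j
            nxt[d + 1] = (nxt[d + 1] + c) % p  # term from multiplying by x
        coeffs = nxt
    ans = 1
    # q(0) = 1*2*...*k, q(k) = (k+1)*...*(2k), q(2k) = (2k+1)*...*(3k), ...
    for block in range(k):
        x = block * k
        val = 0
        for c in reversed(coeffs):  # Horner evaluation
            val = (val * x + c) % p
        ans = ans * val % p
    # Multiply in the leftover factors k*k+1 .. n directly.
    for i in range(k * k + 1, n + 1):
        ans = ans * i % p
    return ans

assert factorial_mod_blocks(20, 1000003) == math.factorial(20) % 1000003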
Expanding on my comment, this takes about 50% of the time for all n in [100, 100007] where m=(117 | 1117):
Function facmod(n As Integer, m As Integer) As Integer
    Dim f As Integer = 1
    For i As Integer = 2 To n
        f = f * i
        If f > m Then
            f = f Mod m
        End If
    Next
    Return f
End Function
I found this following function on quora:
With f(n,m) = n! mod m;
function f(n, m: int64): int64;
begin
  if n = 1 then f := 1
  else f := ((n mod m) * (f(n-1, m) mod m)) mod m;
end;
This probably beats using a time-consuming loop and multiplying large numbers stored as strings. Also, it is applicable to any integer modulus m.
The link where I found this function : https://www.quora.com/How-do-you-calculate-n-mod-m-where-n-is-in-the-1000s-and-m-is-a-very-large-prime-number-eg-n-1000-m-10-9+7
If n = m - 1 for prime m, then by Wilson's theorem (http://en.wikipedia.org/wiki/Wilson's_theorem) n! mod m = m - 1.
Also, as has already been pointed out, n! mod m = 0 if n >= m.
Assuming that the "mod" operator of your chosen platform is sufficiently fast, you're bounded primarily by the speed at which you can calculate n! and the space you have available to compute it in.
Then it's essentially a 2-step operation:
Calculate n! (there are lots of fast algorithms so I won't repeat any here)
Take the mod of the result
There's no need to complexify things, especially if speed is the critical component. In general, do as few operations inside the loop as you can.
If you need to calculate n! mod m repeatedly, then you may want to memoize the values coming out of the function doing the calculations. As always, it's the classic space/time tradeoff, but lookup tables are very fast.
Lastly, you can combine memoization with recursion (and trampolines as well if needed) to get things really fast.