Input: Two n-bit integers x and y, where x ≥ 0, y ≥ 1.
Output: The quotient and remainder of x divided by y.
if x = 0 then return (q, r) := (0, 0);
q := 0; r := x;
while (r ≥ y) do
{   q := q + 1;
    r := r - y };
return (q, r);
I have obtained the Big-O complexity as O(n^2), but a friend says it is O(2^n), where n is the number of bits of the input.
Please provide an explanation.
The number of iterations of the while-loop is exactly floor(x/y). Each iteration takes O(n) operations, because that is the cost of the subtraction r - y.
Hence the complexity of the algorithm is O(n * floor(x/y)). However, we want to express the complexity as a function of n, not as a function of x and y.
Thus the question becomes: how does floor(x/y) relate to n, in the worst case?
The biggest value that x/y can take, when x and y are two nonnegative n-bit numbers and y >= 1, is obtained by taking the biggest possible value for x and the smallest possible value for y.
The biggest possible value for x is x = 2**n - 1 (all bits of x are 1 in its binary representation);
The smallest possible value for y is y = 1.
Hence the biggest possible value for x/y is x/y = 2**n - 1.
The time-complexity of your division algorithm is O(n * 2**n), and this upper-bound is achieved when x = 2**n - 1 and y = 1.
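As a rough illustration (the count_iterations helper below is only for this explanation, not part of the original pseudocode), you can count the loop iterations for the worst-case inputs x = 2**n - 1 and y = 1:

def count_iterations(x, y):
    #schoolbook division loop from the question, instrumented with a counter
    q, r, iterations = 0, x, 0
    while r >= y:
        q += 1
        r -= y
        iterations += 1
    return q, r, iterations

for n in range(1, 16):
    x, y = 2**n - 1, 1
    _, _, iterations = count_iterations(x, y)
    print(n, iterations)   #iterations equals 2**n - 1

The iteration count doubles with every extra bit of x, which is exactly the exponential growth in n described above.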
My proposed solution:
When calculating Big-O complexity we need to take n -> infinity, where n is the input size. We have 3
possibilities:
both x and y go to infinity as n -> infinity;
only y goes to infinity as n -> infinity;
only x goes to infinity as n -> infinity.
We are interested only in case 3 here. At the end of the loop,
(x - i*y) < y, where i is the number of iterations,
which can also be written as x/y < i + 1.
When x goes to infinity, the LHS (left-hand side) of this inequality goes to infinity,
which implies the RHS goes to infinity as well.
So as n -> infinity, the number of iterations becomes equal to n.
Hence, the complexity is O(n^2)
I want to compare three algorithms in terms of their number of calculation steps, but I'm not very familiar with O notation. The number of steps for each algorithm depends on three parameters (x, y, z):
algorithm 1) number_of_steps = x^2 * y * z - xyz
algorithm 2) number_of_steps = x^2 * y^2 * z - xyz
algorithm 3) number_of_steps = x^2 * y^2 * z^2 - xyz
There is not a huge difference between x, y and z - maybe one of the values is twice or three times another, but not more.
How would it look in O notation? There are some solutions I can think of:
O1(n) vs. O2(n*y) vs. O3(n*y*z)
or
O1(n^4) vs. O2(n^5) vs. O3(n^6) - if x, y and z are nearly equal
or just
O1(x^2 * y * z) vs. O2(x^2 * y^2 * z) vs. O3(x^2 * y^2 * z^2)
Which is correct?
If max(x,y,z)/min(x,y,z) <= 3 (or is bounded by any fixed constant), then your complexities are as follows (see the worked bound just after this list):
Algorithm 1: O(x^4) = O(y^4) = O(z^4)
Algorithm 2: O(x^5) = O(y^5) = O(z^5)
Algorithm 3: O(x^6) = O(y^6) = O(z^6)
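For example, if y <= 3x and z <= 3x, then for algorithm 1 the step count satisfies
x^2 * y * z - xyz <= x^2 * (3x) * (3x) = 9 * x^4,
and the constant 9 is absorbed into the O, giving O(x^4). Algorithms 2 and 3 work the same way, with one and two extra factors of x respectively.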
If all you know is that x,y,z are at least 2, you can write
Algorithm 1: O(x^2 * y * z)
Algorithm 2: O(x^2 * y^2 * z)
Algorithm 3: O(x^2 * y^2 * z^2)
Of course you could always write them out in full like O(x^2 * y * z - xyz), but the simpler versions are better when possible.
I need to prove that the time complexity of the following function is at most O(x log x):
f(x) = x log x + 3 log x^2
I need some help to solve this.
Given,
f(x) = x log x + 3 log x^2
     = x log x + 6 log x // since log(a^b) = b log a
As we know, f(x) = O(g(x)) if |f(x)| <= M * |g(x)| for all sufficiently large x, where M is a positive real number.
Therefore, for M >= 7 and all x >= 1 (so that log x >= 0 and x + 6 <= 7x),
x log x + 6 log x = (x + 6) log x <= 7 x log x <= M * x log x.
Hence
f(x) = x log x + 3 log x^2
     = O(x log x).
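As a quick numeric sanity check of that bound (not a proof), assuming natural logarithms - any other base only changes the constants:

import math

#verify x*log(x) + 3*log(x**2) <= 7*x*log(x) for a few values of x >= 1
for x in [1.5, 2, 10, 100, 10**6]:
    f = x*math.log(x) + 3*math.log(x**2)
    bound = 7*x*math.log(x)
    print(x, f <= bound)   #True for each x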
I was curious if there was a good way to do this. My current code is something like:
def factorialMod(n, modulus):
    ans = 1
    for i in range(1, n+1):
        ans = ans * i % modulus
    return ans % modulus
But it seems quite slow!
I also can't calculate n! and then apply the prime modulus because sometimes n is so large that n! is just not feasible to calculate explicitly.
I also came across http://en.wikipedia.org/wiki/Stirling%27s_approximation and wonder if this can be used at all here in some way?
Or, how might I create a recursive, memoized function in C++?
n can be arbitrarily large
Well, n can't be arbitrarily large - if n >= m, then n! ≡ 0 (mod m) (because m is one of the factors, by the definition of factorial).
Assuming n << m and you need an exact value, your algorithm can't get any faster, to my knowledge. However, if n > m/2, you can use the following identity (Wilson's theorem - thanks, @Daniel Fischer!) to cap the number of multiplications at about m - n:
(m-1)! ≡ -1 (mod m)
1 * 2 * 3 * ... * (n-1) * n * (n+1) * ... * (m-2) * (m-1) ≡ -1 (mod m)
n! * (n+1) * ... * (m-2) * (m-1) ≡ -1 (mod m)
n! ≡ -[(n+1) * ... * (m-2) * (m-1)]^(-1) (mod m)
This gives us a simple way to calculate n! (mod m) in m-n-1 multiplications, plus a modular inverse:
def factorialMod(n, modulus):
    ans = 1
    if n <= modulus//2:
        #calculate the factorial normally (right argument of range() is exclusive)
        for i in range(1, n+1):
            ans = (ans * i) % modulus
    else:
        #Fancypants method for large n
        for i in range(n+1, modulus):
            ans = (ans * i) % modulus
        ans = modinv(ans, modulus)
        ans = -1*ans + modulus
    return ans % modulus
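Note that modinv is not defined in the snippet above. A minimal stand-in, assuming the modulus is prime (as in the question), could use Python 3.8+'s three-argument pow:

def modinv(a, m):
    #modular inverse of a mod m; requires gcd(a, m) == 1
    return pow(a, -1, m)   #Python 3.8+
    #on older Pythons, pow(a, m-2, m) works when m is prime (Fermat's little theorem)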
We can rephrase the above equation in another way, which may or may not perform slightly faster. Using the identity a ≡ a - m (mod m) on each factor, we can rephrase the equation as
n! ≡ -[(n+1) * ... * (m-2) * (m-1)]^(-1) (mod m)
n! ≡ -[(n+1-m) * ... * (m-2-m) * (m-1-m)]^(-1) (mod m)
(reverse order of terms)
n! ≡ -[(-1) * (-2) * ... * -(m-n-2) * -(m-n-1)]^(-1) (mod m)
n! ≡ -[(1) * (2) * ... * (m-n-2) * (m-n-1) * (-1)^(m-n-1)]^(-1) (mod m)
n! ≡ [(m-n-1)!]^(-1) * (-1)^(m-n) (mod m)
This can be written in Python as follows:
def factorialMod(n, modulus):
    ans = 1
    if n <= modulus//2:
        #calculate the factorial normally (right argument of range() is exclusive)
        for i in range(1, n+1):
            ans = (ans * i) % modulus
    else:
        #Fancypants method for large n
        for i in range(1, modulus-n):
            ans = (ans * i) % modulus
        ans = modinv(ans, modulus)
        #Since m is an odd prime, (-1)^(m-n) = -1 if n is even, +1 if n is odd
        if n % 2 == 0:
            ans = -1*ans + modulus
    return ans % modulus
If you don't need an exact value, life gets a bit easier - you can use Stirling's approximation to calculate an approximate value in O(log n) time (using exponentiation by squaring).
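For the approximation route, one option (an illustrative sketch, not necessarily what was meant above) is to work with log(n!) via the log-gamma function, which is what Stirling's series approximates; this gives the size of n! itself in log form, rather than n! mod m:

import math

def log_factorial(n):
    #natural log of n!, computed as lgamma(n + 1)
    return math.lgamma(n + 1)

#example: number of decimal digits of 1000000!
print(int(log_factorial(10**6) / math.log(10)) + 1)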
Finally, I should mention that if this is time-critical and you're using Python, try switching to C++. From personal experience, you should expect about an order-of-magnitude increase in speed or more, simply because this is exactly the sort of CPU-bound tight-loop that natively-compiled code excels at (also, for whatever reason, GMP seems much more finely-tuned than Python's Bignum).
Expanding my comment to an answer:
Yes, there are more efficient ways to do this. But they are extremely messy.
So unless you really need that extra performance, I don't suggest trying to implement them.
The key is to note that the modulus (which is essentially a division) is going to be the bottleneck operation. Fortunately, there are some very fast algorithms that allow you to perform modulus over the same number many times.
Division by Invariant Integers using Multiplication
Montgomery Reduction
These methods are fast because they essentially eliminate the modulus.
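For a feel of what Montgomery reduction looks like, here is a rough Python sketch (the modulus, R_bits and helper names are illustrative; pure Python gains nothing over the built-in %, the payoff comes in C/C++ with fixed-width integers):

m = 1000003                       #any odd modulus
R_bits = 20                       #R = 2**R_bits must exceed m; m odd ensures gcd(R, m) = 1
R = 1 << R_bits
m_prime = (-pow(m, -1, R)) % R    #-m^(-1) mod R (three-argument pow with -1 needs Python 3.8+)

def redc(T):
    #Montgomery reduction: returns T * R^(-1) mod m, valid for 0 <= T < R*m
    t = (T + ((T * m_prime) % R) * m) >> R_bits
    return t - m if t >= m else t

def to_mont(a):
    return (a << R_bits) % m      #convert a into Montgomery form a*R mod m

a_m, b_m = to_mont(123456), to_mont(654321)
ab = redc(redc(a_m * b_m))        #first redc gives a*b*R mod m; second strips the extra R
print(ab == (123456 * 654321) % m)   #True

The point is that each reduction uses only multiplications, additions and shifts by R_bits, with no division by m inside the loop.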
Those methods alone should give you a moderate speedup. To be truly efficient, you may need to unroll the loop to allow for better IPC:
Something like this:
ans0 = 1
ans1 = 1
for i in range(1, (n+1)//2):
    ans0 = ans0 * (2*i + 0) % modulus
    ans1 = ans1 * (2*i + 1) % modulus
return ans0 * ans1 % modulus
but accounting for an odd number of iterations and combining it with one of the methods I linked to above.
Some may argue that loop-unrolling should be left to the compiler. I will counter-argue that compilers are currently not smart enough to unroll this particular loop. Have a closer look and you will see why.
Note that although my answer is language-agnostic, it is meant primarily for C or C++.
n! mod m can be computed in O(n^(1/2 + ε)) operations instead of the naive O(n). This requires use of FFT polynomial multiplication, and is only worthwhile for very large n, e.g. n > 10^4.
An outline of the algorithm and some timings can be seen here: http://fredrikj.net/blog/2012/03/factorials-mod-n-and-wilsons-theorem/
If we want to calculate M = a*(a+1) * ... * (b-1) * b (mod p), we can use the following approach, if we assume we can add, subtract and multiply fast (mod p), and get a running time complexity of O(sqrt(b-a) * polylog(b-a)).
For simplicity, assume b-a+1 = k^2 is a perfect square. Now, we can divide our product into k parts, i.e. M = [a*..*(a+k-1)] *...* [(b-k+1)*..*b]. Each of the factors in this product is of the form p(x) = x*..*(x+k-1), for appropriate x.
By using a fast polynomial multiplication algorithm, such as the Schönhage–Strassen algorithm, in a divide & conquer manner, one can find the coefficients of the polynomial p(x) in O(k * polylog(k)). Now, there is an algorithm for evaluating the same degree-k polynomial at k points in O(k * polylog(k)), which means we can calculate p(a), p(a+k), ..., p(b-k+1) fast.
This algorithm of substituting many points into one polynomial is described in the book "Prime numbers" by C. Pomerance and R. Crandall. Eventually, when you have these k values, you can multiply them in O(k) and get the desired value.
Note that all of our operations were performed (mod p).
The exact running time is O(sqrt(b-a) * log(b-a)^2 * log(log(b-a))).
Expanding on my comment, this takes about 50% of the time for all n in [100, 100007] where m=(117 | 1117):
Function facmod(n As Integer, m As Integer) As Integer
    Dim f As Integer = 1
    For i As Integer = 2 To n
        f = f * i
        If f >= m Then
            f = f Mod m
        End If
    Next
    Return f
End Function
I found the following function on Quora, with f(n, m) = n! mod m:
function f(n, m: int64): int64;
begin
  if n <= 1 then f := 1
  else f := ((n mod m) * (f(n-1, m) mod m)) mod m;
end;
It probably beats using a time-consuming loop and multiplying large numbers stored as strings. Also, it is applicable to any integer m.
The link where I found this function : https://www.quora.com/How-do-you-calculate-n-mod-m-where-n-is-in-the-1000s-and-m-is-a-very-large-prime-number-eg-n-1000-m-10-9+7
If n = m - 1 for prime m, then by Wilson's theorem (http://en.wikipedia.org/wiki/Wilson's_theorem) n! mod m = m - 1.
Also, as has already been pointed out, n! mod m = 0 if n >= m.
Assuming that the "mod" operator of your chosen platform is sufficiently fast, you're bounded primarily by the speed at which you can calculate n! and the space you have available to compute it in.
Then it's essentially a 2-step operation:
Calculate n! (there are lots of fast algorithms so I won't repeat any here)
Take the mod of the result
There's no need to overcomplicate things, especially if speed is the critical component. In general, do as few operations inside the loop as you can.
If you need to calculate n! mod m repeatedly, then you may want to memoize the values coming out of the function doing the calculations. As always, it's the classic space/time tradeoff, but lookup tables are very fast.
Lastly, you can combine memoization with recursion (and trampolines as well if needed) to get things really fast.
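A minimal sketch of that memoization idea (reusing the naive loop from the question; functools.lru_cache caches repeated (n, modulus) calls):

from functools import lru_cache

@lru_cache(maxsize=None)
def factorial_mod_cached(n, modulus):
    #same naive loop as in the question; repeated calls with the same arguments hit the cache
    ans = 1
    for i in range(1, n+1):
        ans = ans * i % modulus
    return ans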
I am reading the book "Discrete Mathematics and Its Applications" by Kenneth H. Rosen.
I am now in the chapter on growth of functions and trying its exercises [5th Edition, page 142].
I am stuck here:
Determine whether these functions are O(x^2) (big oh of x squared):
[1] f(x) = 2^x,
[2] f(x) = (x^4)/2
[3] f(x) = floor(x) * floor(x)
I cannot do the first one. Will anybody please help?
I have done the second and third as follows. Please check and comment.
[2]
Suppose f(x) = (x^4)/2 is O(x^2), i.e. |(x^4)/2| <= C * |x^2| for all x > k (k, C any constants).
Then (x^2)/2 <= C for x > k,
or x^2 <= 2C for x > k,
or x <= sqrt(2C) = C1 for x > k [C1 = sqrt(2C) = constant],
but this cannot hold, because x can be taken larger than both k and C1.
So f(x) is not O(x^2).
[3]
f(x) = floor(x) * floor(x) <= x * x for x > 1,
or f(x) <= x^2 for x > 1,
so f(x) is O(x^2), taking C = 1 and k = 1 as witnesses.
Please help me with the first one.
f(x) is in O(g(x)) exactly when there are constants C and k such that |f(x)| <= C * |g(x)| for all x > k. One method from elementary calculus is to divide and take the limit:
lim(f(x)/g(x)) as x->inf
If this is zero, then f(x) grows strictly slower than x^2 (and is certainly O(x^2)). If it's any other finite number, then the two functions are in the same big-O class (and any constant larger than that number works as your C witness). If it's infinity, then f(x) is NOT O(x^2).
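For instance, applying the limit test to [2]: lim (x -> inf) of ((x^4)/2) / x^2 = lim (x -> inf) of (x^2)/2 = infinity, so (x^4)/2 is not O(x^2), which agrees with the contradiction argument above.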
The simpler approach is just to look at the largest power of x (for polynomials) or just know that k^x grows faster than x^k for all fixed k greater than one. (Hint: this is your answer.)
Big-O notation is something you should be able to eyeball. In fact, if you aren't yet this comfortable with the relations between algebraic functions, you're better off dropping the algorithms class and switching to a math class until you have good grades in calculus. Then you will have no trouble at all with big-O notation.