I am having trouble developing the recurrence for an algorithm that uses recursive Merge Sort calls for list sizes greater than m. It uses Selection Sort for list sizes less than or equal to m.
Here is my pseudocode:
proc merge_and_selection(A, p, r, m) {
    if (p < r) then
        q = (p + r) / 2
        if r - p > m then
            merge_and_selection(A, p, q, m)
            merge_and_selection(A, q + 1, r, m)
        else
            selection_sort(A, p, q)
            selection_sort(A, q + 1, r)
        end if
        merge(A, p, q, r)
    end if
}
I think the recurrence is:
with T(2) = [m(m-1)]/2
I think a more accurate formula is the following:
T(n) = 2*T(n/2) + Theta(n) for n >= m/2
T(n) = Theta(n^2) for n < m/2
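Unrolling this (a sketch, assuming n is a power of 2 and the recursion bottoms out on subproblems of size roughly m): there are about log(n/m) levels of merging, each costing Theta(n) in total, and about n/m base cases, each selection-sorted in Theta(m^2), so
T(n) = Theta(n*log(n/m) + n*m)
which is the usual bound for a merge sort with a quadratic base case.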
How can I find (n!) % m faster than O(n)?
1 <= n <= 1e18
1 <= m <= 1e6
You can easily have O(m) time complexity in the worst case (when m is a prime), which seems to be good enough since you have m <= 1e6 (while n can be up to 1e18). Note that when n >= m
n! = 1 * 2 * ... * m * ... * n
                   ^
                   factorial is divisible by m
and that's why
n! % m == 0 # whenever n >= m
Another implementation detail is that you don't have to compute n! % m as 1 * 2 * ... * n % m but you can do it as ((..(1 % m) * 2 % m) ... * n % m) in order not to deal with huge numbers.
C# code example
private static long Compute(long n, long m) {
    if (n >= m)
        return 0;
    long result = 1;
    // result != 0: when m is not prime we may well reach 0 early and can stop looping
    for (long d = 2; d <= n && result != 0; ++d)
        result = (result * d) % m;
    return result;
}
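As a quick usage check: Compute(4, 7) returns 24 % 7 = 3, while Compute(10, 6) returns 0 immediately because 10 >= 6.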
As explained by Dmitry, you can suppose that n < m. Let p1, ..., pk be the list of primes smaller than or equal to n. Then n! mod m = (p1^a1 * p2^a2 * ... * pk^ak) mod m = ((p1^a1 mod m) * (p2^a2 mod m) * ... * (pk^ak mod m)) (mod m) for some a1, ..., ak that I'll let you find by yourself.
Using https://en.wikipedia.org/wiki/Modular_exponentiation, you can then compute n! (mod m).
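For the modular exponentiation step, here is a minimal square-and-multiply sketch in C# (PowMod is just an illustrative helper name, not part of the code above; with m <= 1e6 from the question, the intermediate products fit comfortably in a long):
// Computes (b^e) % mod with O(log e) multiplications by repeated squaring.
private static long PowMod(long b, long e, long mod) {
    long result = 1 % mod;
    b %= mod;
    while (e > 0) {
        if ((e & 1) == 1)               // current bit of the exponent is set
            result = (result * b) % mod;
        b = (b * b) % mod;              // square the base for the next bit
        e >>= 1;
    }
    return result;
}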
So here is an algorithm that is supposed to return the value of a given polynomial at any given x.
A[] is the coefficient array and P[] the power-of-x array.
(e.g. x^2 + 2*x + 1 would have A[] = {1, 2, 1}, P[] = {2, 1, 0})
Also, recPower() is O(log n).
int polynomial(int x, int A[], int P[], int l, int r)
{
    if (r - l == 1)
        return ( A[l] * recPower(x, P[l]) ) + ( A[r] * recPower(x, P[r]) );
    int m = (l + r) / 2;
    return polynomial(x, A, P, l, m) + polynomial(x, A, P, m, r);
}
How do I go about calculating this time complexity? I am perplexed due to the if statement. I have no idea what the recurrence relation will be.
The following observation might help: as soon as we have r = l + 1, we spend O(log n) time and we are done.
My answer requires a good understanding of recursion trees, so proceed wisely.
So our aim is to find: after how many iterations do we reach r = l + 1?
Let's find out:
Focusing on return polynomial(x, A, P, l, m) + polynomial(x, A, P, m, r);
Let us first consider the left function polynomial(x, A, P, l, m). The key thing to note is that l remains constant in all subsequent left calls made recursively.
By left function I mean polynomial(x, A, P, l, m) and by right function I mean
polynomial(x, A, P, m, r).
For the left function polynomial(x, A, P, l, m), we have:
First iteration
l = l and r = (l + r)/2
Second iteration
l = l and r = (l + (l + r)/2)/2
which means that
r = (2l + l + r)/2
Third iteration
l = l and r = (l + (l + (l + r)/2)/2)/2
which means that
r = (4l + 2l + l + r)/4
Fourth iteration
l = l and r = (l + (l + (l + (l + r)/2)/2)/2)/2
which means that
r = (8l + 4l + 2l + l + r)/8
This means that in the nth iteration we have:
r = (l(1 + 2 + 4 + 8 + ... + 2^(n-1)) + r)/2^n
and the terminating condition is r = l + 1.
Solving (l(1 + 2 + 4 + 8 + ... + 2^(n-1)) + r)/2^n = l + 1, we get
2^n = r - l
This means that n = log(r - l). One might object that in all these calls of the left function we ignored the other call, i.e. the right function call. The reason is this:
In the right function call, l becomes m, which is already a midpoint (we take the mean), while r stays the same, so its range shrinks at least as fast; asymptotically this won't have any effect on the time complexity.
So our recursion tree has maximum depth log(r - l). It's true that not all levels will be fully populated, but for the sake of simplicity we assume they are in the asymptotic analysis. After reaching a depth of log(r - l), we call the function recPower, which takes O(log n) time. The total number of nodes at depth log(r - l) (assuming all levels above are full) is 2^(log(r - l) - 1), and each node takes O(log n) time.
Therefore the total time is O( logn*(2^(log(r - l) - 1)) ), which simplifies to O((r - l) * log n).
This might help:
T(#terms) = 2T(#terms/2) + a
T(2) = 2logn + b
where a and b are constants, and #terms refers to the number of terms in the polynomial.
This recurrence relation can be solved using the Master theorem or the recursion tree method.
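As a sketch of that solution (assuming #terms is a power of two): the recursion tree has about #terms/2 leaves, each a base case costing 2logn + b, and about #terms/2 internal nodes, each costing the constant a, so the total is O(#terms * logn + #terms) = O(#terms * logn).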
I have a question about the Euclid's Algorithm for finding greatest common divisors.
gcd(p,q) where p > q and q is an n-bit integer.
I'm trying to follow a time complexity analysis on the algorithm (input is n-bits as above)
gcd(p, q)
    if (p == q)
        return q
    if (p < q)
        return gcd(q, p)
    while (q != 0)
        temp = p % q
        p = q
        q = temp
    return p
I already understand that the sum of the two numbers, u + v, where u and v stand for the initial values of p and q, reduces by a factor of at least 1/2.
Now let m be the number of iterations for this algorithm.
We want to find the smallest integer m such that (1/2)^m(u + v) <= 1
Here is my question.
I get that the sum of the two numbers after m iterations is upper-bounded by (1/2)^m (p + q). But I don't really see why the maximum m is reached when this quantity is <= 1.
The answer is O(n) for an n-bit q, but this is where I'm getting stuck.
Please help!!
Imagine that we have p and q where p > q. Now, there are two cases:
1) p >= 2*q: in this case, p will be reduced to something less than q after the mod, so the sum will be at most 2/3 of what it was before.
2) q < p < 2*q: in this case, a mod operation is like subtracting q from p, so again the overall sum will be at most 2/3 of what it was before.
Therefore, in each step this sum will be at most 2/3 of the previous sum. Since your numbers are n bits, the magnitude of the sum is about 2^(n+1); so, after log base 3/2 of 2^(n+1) steps, which is O(n), the sum drops to at most 1 and the algorithm has terminated.
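A concrete run makes the shrinking visible (just arithmetic on the algorithm above): gcd(34, 21) produces the pairs (34, 21), (21, 13), (13, 8), (8, 5), (5, 3), (3, 2), (2, 1), (1, 0), with sums 55, 34, 21, 13, 8, 5, 3, 1, each at most 2/3 of the previous one.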
I'm learning about recurrence relations at the moment. I can solve them and figure out the bounds on them, but what I'm not really sure of is how to come up with a recurrence relation for a particular algorithm. Here's an example in my book:
// Sort array A[] between indices p and r inclusive.
SampleSort (A, p, r) {
    // Base Case: use HeapSort
    //
    if (r - p < 12) {
        HeapSort(A, p, r) ;
        return ;
    }
    // Break the array into 1st quarter, 2nd quarter and second half
    //
    n = r - p + 1 ;     // number of items in A[p..r] inclusive
    q1 = p - 1 + n/4 ;  // end of 1st quarter
    q2 = q1 + n/4 ;     // end of 2nd quarter
    // Sort each of the 3 pieces
    // using SampleSort recursively, Insertion-Sort and Heap-Sort
    //
    SampleSort (A, p, q1) ;
    InsertionSort (A, q1 + 1, q2) ;
    HeapSort (A, q2 + 1, r) ;
    // Merge the 3 sorted arrays into 1 sorted array
    //
    Merge (A, p, q1, q2) ; // Merge 1st & 2nd quarter
    Merge (A, p, q2, r) ;  // Merge 1st & 2nd halves
    return ;
}
It also says I can assume InsertionSort, HeapSort and Merge are Θ(n^2), Θ(n log n) and Θ(n).
Here's what I've come up with so far:
I'm dividing the array into three pieces. The first two pieces are each 1/4 of the original data, and the third piece (the half) is 1/2 of the data.
So right now I have T(n) = 2T(n/4) + T(n/2).
Not sure where to go from here. Any help would be greatly appreciated!
As David points out in his comment, there is only one recursive call in the algorithm. So your recurrence relation looks like this:
      SampleSort InsertionSort          HeapSort        Merges
          |            |                    |               |
          v            v                    v               v
T(n) = T(n / 4) + O((n / 4)^2) + O((n / 2) log (n / 2)) + O(n)
     = T(n / 4) + O(n^2)
Using the Master theorem (Case 3), we conclude that
T(n) = O(n^2) (worst case)
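To spell out the Master theorem step (a sketch, using the standard form T(n) = a*T(n/b) + f(n)): here a = 1 and b = 4, so n^(log_b a) = n^0 = 1, while f(n) = O(n^2) is polynomially larger; the regularity condition a*f(n/b) = (n/4)^2 = f(n)/16 <= c*f(n) holds with c = 1/16, so Case 3 gives T(n) = Θ(f(n)), i.e. the O(n^2) worst case above.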
Because SampleSort on n items involves HeapSort on n / 2 items, which has best case Ω((n / 2) log (n / 2)) = Ω(n log n), we know that
T(n) = Ω(n log n) (best case)
I have this problem that I can't solve: what is the complexity of this foo algorithm?
int foo(char A[], int n, int m) {
    int i, a = 0;
    if (n >= m)
        return 0;
    for (i = n; i < m; i++)
        a += A[i];
    return a + foo(A, n*2, m/2);
}
the foo function is called by:
foo(A,1,strlen(A));
So... I guess it's log(n) * something for the internal for loop, though I'm not sure whether that something is log(n) or what.
Could it be Theta(log^2 n)?
This is a great application of the master theorem:
Rewrite in terms of n and X = m-n:
int foo(char A[], int n, int X) {
    int i, a = 0;
    if (X <= 0) return 0;
    for (i = 0; i < X; i++)
        a += A[i+n];
    return a + foo(A, n*2, (X - 3*n)/2);
}
So the complexity is
T(X, n) = X + T((X - 3n)/2, n*2)
Noting that the penalty increases with X and decreases with n,
T(X, n) < X + T(X/2, n)
So we can consider the complexity
U(X) = X + U(X/2)
and plug this into the master theorem to find U(X) = O(X) --> the complexity is O(m-n)
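To spell out that last step (a sketch, using the standard form U(X) = a*U(X/b) + f(X)): here a = 1 and b = 2, so X^(log_b a) = X^0 = 1, while f(X) = X is polynomially larger and satisfies the regularity condition f(X/2) = f(X)/2 <= c*f(X) with c = 1/2, so Case 3 gives U(X) = Θ(X), i.e. O(m-n) in the original variables.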
I'm not sure if there's a 'quick and dirty' way, but you can use good old math. No fancy theorems, just simple equations.
On the k-th level of recursion (k starts from zero), the loop will have ~ n/(2^k) - 2^k iterations. Therefore, the total number of loop iterations will be S = sum(n/2^i) - sum(2^i) for 0 <= i <= l, where l is the depth of the recursion.
The l will be approximately log(2, n)/2 (prove it).
Transforming each part of the formula for S separately, we get:
S = (1 + 2 + ... + 2^l)*n/2^l - (2^(l + 1) - 1) ~= 2*n - 2^(l + 1) ~= 2*n - sqrt(n)
Since every statement other than the loop is repeated only l times, and we know that l ~= log(2, n), it won't affect the complexity.
So, in the end we get O(n).