Recurrence relation on a Merge Sort algorithm

The question is:
UNBALANCED MERGE SORT is a sorting algorithm, which is a modified version of the standard MERGE SORT algorithm. The only difference is that instead of dividing the input into 2 equal parts in each stage, we divide it into two unequal parts: the first 2/5 of the input, and the other 3/5.
a. Write the recurrence relation for the worst case time complexity of the UNBALANCED MERGE SORT algorithm.
b. What is the worst case time complexity of the UNBALANCED MERGE SORT algorithm? Solve the recurrence relation from the previous section.
So I'm thinking the recurrence relation is: T(n) <= T(2n/5) + T(3n/5) + dn.
I'm not sure how to solve it.
Thanks in advance.
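For concreteness, here is a minimal C sketch of the recursion being analyzed (my own illustration, not from the question; the merge helper and the exact split index are assumptions), just to show where the 2/5-3/5 split and the recurrence come from:

#include <stdlib.h>
#include <string.h>

/* Standard merge of the two sorted runs a[lo..mid] and a[mid+1..hi]; costs O(hi - lo). */
static void merge(int a[], int lo, int mid, int hi) {
    int n = hi - lo + 1, i = lo, j = mid + 1, k = 0;
    int *tmp = malloc(n * sizeof *tmp);
    while (i <= mid && j <= hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i <= mid) tmp[k++] = a[i++];
    while (j <= hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp, n * sizeof *tmp);
    free(tmp);
}

/* Sorts a[lo..hi]: recurse on the first ~2/5, then on the remaining ~3/5, then merge.
   This is exactly the shape behind T(n) = T(2n/5) + T(3n/5) + O(n). */
void unbalanced_merge_sort(int a[], int lo, int hi) {
    if (lo >= hi) return;                 /* 0 or 1 element: already sorted */
    int n = hi - lo + 1;
    int left = (2 * n) / 5;               /* size of the first part, about 2n/5 */
    if (left < 1) left = 1;               /* keep both parts non-empty */
    int mid = lo + left - 1;
    unbalanced_merge_sort(a, lo, mid);        /* T(2n/5) */
    unbalanced_merge_sort(a, mid + 1, hi);    /* T(3n/5) */
    merge(a, lo, mid, hi);                    /* O(n)    */
}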

I like to look at it as "runs", where the ith "run" is ALL the recursive steps with depth exactly i.
In each such run, at most n elements are being processed (we will prove this soon), so the total complexity is bounded by O(n*MAX_DEPTH). Now, MAX_DEPTH is logarithmic: at each step the bigger part has size 3n/5 of its parent, so at depth i the biggest array has size (3/5)^i * n.
Solve the equation:
(3/5)^i * n = 1
and you will find that i = log_a(n) for the base a = 5/3, i.e. the maximal depth is O(log n).
So, let's be more formal:
Claim:
Each element is being processed by at most one recursive call at depth
i, for all values of i.
Proof:
By induction, at depth 0, all elements are processed exactly once by the first call.
Let there be some element x, and let's look at it at depth i+1. We know (induction hypothesis) that x was processed at most once at depth i, by some recursive call. This call later invoked (or not; we claim at most once) a recursive call at depth i+1, and sent the element x to the left part OR to the right part, never to both. So at depth i+1, the element x is processed at most once.
Conclusion:
Since at each depth i of the recursion, each element is processed at most once, and the maximal depth of the recursion is logarithmic, we get an upper bound of O(nlogn).
We can similarly prove a lower bound of Omega(nlogn), but that is not needed, since sorting is already an Omega(nlogn) problem - so we can conclude the modified algorithm is still Theta(nlogn).
If you want to prove it with "basic arithmetic", that can also be done, by induction.
Claim: T(n) = T(3n/5) + T(2n/5) + n <= 5n*log(n) + n
The proof is similar when +n is replaced by +dn; I simplified it here, but the same idea works with the claim T(n) <= 5dn*log(n) + dn.
Proof:
Base: T(1) = 1 <= 5*1*log(1) + 1 = 1
Step (applying the induction hypothesis to both sub-calls):
T(n) = T(3n/5) + T(2n/5) + n
    <= [5*(3n/5)*log(3n/5) + 3n/5] + [5*(2n/5)*log(2n/5) + 2n/5] + n
    <= 5*(3n/5)*log(3n/5) + 5*(2n/5)*log(3n/5) + 2n      (since log(2n/5) <= log(3n/5))
     = 5*n*log(3n/5) + 2n
     = 5*n*log(n) + 5*n*log(3/5) + 2n
(**) < 5*n*log(n) - n + 2n
     = 5*n*log(n) + n
(**) holds because log(3/5) < -0.2 (about -0.22 in base 10, about -0.74 in base 2), so 5*n*log(3/5) < -n, and therefore 5*n*log(3/5) + 2n < n.

Related

Sorting algorithm proof and running-time

Hydrosort is a sorting algorithm. Below is the pseudocode.
/* A is the array to sort, i = start index, j = end index */
Hydrosort(A, i, j):                       // let T(n) be the running time, where n = j - i + 1
    n = j - i + 1                         O(1)
    if (n < 10) {                         O(1)
        sort A[i..j] by insertion-sort    O(n^2)   // insertion sort = O(n^2) worst-case
        return                            O(1)
    }
    m1 = i + 3 * n / 4                    O(1)
    m2 = i + n / 4                        O(1)
    Hydrosort(A, i, m1)                   T(n/2)
    Hydrosort(A, m2, j)                   T(n/2)
    Hydrosort(A, i, m1)                   T(n/2)
T(n) = O(n^2) + 3T(n/2), so T(n) is O(n^2). I used the 3rd case of the Master Theorem to solve this recurrence.
I have 2 questions:
Have I calculated the worst-case running time here correctly?
how would I prove that Hydrosort(A, 1, n) correctly sorts an array A of n elements?
Have I calculated the worst-case running time here correctly?
I am afraid not.
The complexity function is:
T(n) = 3T(3n/4) + CONST
This is because:
You have three recursive calls, each on a problem of size 3n/4.
The additive term is O(1), since all the non-recursive operations take constant time (specifically, insertion sort on fewer than 10 elements is O(1)).
If you go on and solve it, you'll get a complexity that is worse than O(n^2).
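For completeness, here is a short sketch (my own, not part of the original answer) of solving that recurrence with the master theorem:
T(n) = 3T(3n/4) + O(1),   with a = 3, b = 4/3, f(n) = O(1)
log_b(a) = log_{4/3}(3) ~= 3.82
f(n) = O(1) = O(n^(3.82 - eps))   --> case 1 of the master theorem
T(n) = Theta(n^(log_{4/3} 3)) ~= Theta(n^3.82)
which is indeed much worse than O(n^2).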
how would I prove that Hydrosort(A, 1, n) correctly sorts an array A
of n elements?
By (strong) induction. Assume the algorithm works for all problems of size at most n, and let's examine a problem of size n+1. For n+1 < 10 it is trivially correct (insertion sort), so we ignore this case.
After the first recursive call, the first 3/4 of the array is sorted; in particular, the first n/4 positions now hold the n/4 smallest elements of that part. Each of those elements has at least n/2 elements bigger than it, so none of them can be among the n/4 biggest elements of the whole array. This means the n/4 biggest elements are all somewhere between m2 and j.
After the second recursive call, since the range between m2 and j contains the n/4 biggest elements, sorting it places them, in order, at the end of the array. So the part between m1 and j now holds its final sorted contents, and the 3n/4 smallest elements are all somewhere between i and m1.
The third recursive call (again on the range between i and m1) sorts the 3n/4 smallest elements properly; the n/4 biggest elements are already in place, so the whole array is now sorted.
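If it helps to experiment with this, here is a minimal runnable C version of the pseudocode above (my own sketch; the insertion-sort helper and the test in main are assumptions, not part of the question):

#include <stdio.h>

/* Base case: insertion sort on A[i..j] (only ever used for ranges of fewer than 10 elements). */
static void insertion_sort(int A[], int i, int j) {
    for (int p = i + 1; p <= j; p++) {
        int x = A[p], q = p - 1;
        while (q >= i && A[q] > x) { A[q + 1] = A[q]; q--; }
        A[q + 1] = x;
    }
}

/* Sorts A[i..j], following the pseudocode above; the recurrence is T(n) = 3T(3n/4) + O(1). */
void Hydrosort(int A[], int i, int j) {
    int n = j - i + 1;
    if (n < 10) { insertion_sort(A, i, j); return; }
    int m1 = i + 3 * n / 4;
    int m2 = i + n / 4;
    Hydrosort(A, i, m1);    /* sort the first 3/4  */
    Hydrosort(A, m2, j);    /* sort the last 3/4   */
    Hydrosort(A, i, m1);    /* sort the first 3/4 again */
}

int main(void) {
    int A[] = {9, 3, 7, 1, 8, 2, 6, 0, 5, 4, 11, 10};
    int n = sizeof A / sizeof A[0];
    Hydrosort(A, 0, n - 1);
    for (int k = 0; k < n; k++) printf("%d ", A[k]);
    printf("\n");
    return 0;
}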

Complexity of array sum with divide and conquer

Let the following algorithm be:
/* Returns the sum of v[i..j] by divide and conquer. */
int sum(int v[], int i, int j) {
    if (i == j)
        return v[i];
    else {
        int k = (i + j) / 2;
        return sum(v, i, k) + sum(v, k + 1, j);
    }
}
The time complexity of this algorithm is O(n), but how can I prove (in natural language) its complexity? The problem always gets divided into two new subproblems, so that would suggest O(log n), but where does the rest of the complexity come from?
Applying the master theorem yields the expected result, O(n).
Thanks.
From a high level perspective, your algorithm acts as if it is traversing a balanced binary tree, where each node covers a specific interval [i, j]. Its children divide that interval into two roughly equal parts, namely [i, (i+j)/2] and [(i+j)/2 + 1, j].
Let's assume that they are, in this case, exactly equal (in other words, for the sake of the proof, the length of the array n is a power of 2).
Think of it in the following way. There are n leaves of this balanced binary tree your algorithm is traversing. Each are responsible from an interval of length 1. There are n/2 nodes of the tree that are the parents of these n leaves. Those n/2 nodes have n/4 parents. This goes all the way until you reach the root node of the tree, which covers the entire interval.
Think of how many nodes there are in this tree: n + (n/2) + (n/4) + (n/8) + ... + 2 + 1. Since we initially assumed that n = 2^k, we can write this sum as a geometric series of powers of 2, whose closed form is well known. It turns out that there are 2^(k+1) - 1 = 2 * (2^k) - 1 = 2n - 1 nodes in that tree. So, obviously, traversing all nodes of that tree takes O(n) time.
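Equivalently, here is a short sketch (my own, not part of the original answer) reading the same bound off the recurrence directly:
T(n) = 2T(n/2) + O(1)
     = 4T(n/4) + 2*O(1) + O(1)
     = ...
     = n*T(1) + (n/2 + n/4 + ... + 2 + 1)*O(1)
     = O(n)
The master theorem gives the same answer: a = 2, b = 2, f(n) = O(1) = O(n^(log_2(2) - eps)), which is case 1, so T(n) = Theta(n^(log_2 2)) = Theta(n).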
Dividing the problem into two parts does not necessarily mean that the complexity is log(n).
I guess you are referring to the binary search algorithm, but there, at every division, one half is skipped entirely because we know the search key can only be in the other half.
Just by looking at the pseudocode: a recursive call is made for both parts of every division, and nothing is skipped. Why would it be log(n)?
O(n) is correct complexity.

Why is the Fibonacci Sequence Big O(2^n) instead of O(logn)?

I took discrete math (in which I learned about master theorem, Big Theta/Omega/O) a while ago and I seem to have forgotten the difference between O(logn) and O(2^n) (not in the theoretical sense of Big Oh). I generally understand that algorithms like merge and quick sort are O(nlogn) because they repeatedly divide the initial input array into sub arrays until each sub array is of size 1 before recursing back up the tree, giving a recursion tree that is of height logn + 1. But if you calculate the height of a recursive tree using n/b^x = 1 (when the size of the subproblem has become 1 as was stated in an answer here) it seems that you always get that the height of the tree is log(n).
If you solve the Fibonacci sequence using recursion, I would think that you would also get a tree of size logn, but for some reason, the Big O of the algorithm is O(2^n). I was thinking that maybe the difference is because you have to remember all of the fib numbers for each subproblem to get the actual fib number meaning that the value at each node has to be recalled, but it seems that in merge sort, the value of each node has to be used (or at least sorted) as well. This is unlike binary search, however, where you only visit certain nodes based on comparisons made at each level of the tree so I think this is where the confusion is coming from.
So specifically, what causes the Fibonacci sequence to have a different time complexity than algorithms like merge/quick sort?
The other answers are correct, but don't make it clear - where does the large difference between the Fibonacci algorithm and divide-and-conquer algorithms come from? Indeed, the shape of the recursion tree for both classes of functions is the same - it's a binary tree.
The trick to understand is actually very simple: consider the size of the recursion tree as a function of the input size n.
In the Fibonacci recursion, the input size n is the height of the tree; for sorting, the input size n is the width of the tree. In the former case, the size of the tree (i.e. the complexity) is exponential in the input size; in the latter, it is the input size multiplied by the height of the tree, which is usually just a logarithm of the input size.
More formally, start by these facts about binary trees:
The number of leaves n in a binary tree in which every non-leaf node has two children is equal to the number of non-leaf nodes plus one. The size of such a binary tree is therefore 2n-1.
In a perfect binary tree, all non-leaf nodes have two children.
The height h for a perfect binary tree with n leaves is equal to log(n), for a random binary tree: h = O(log(n)), and for a degenerate binary tree h = n-1.
Intuitively:
For sorting an array of n elements with a recursive algorithm, the recursion tree has n leaves. It follows that the width of the tree is n, the height of the tree is O(log(n)) on the average and O(n) in the worst case.
For calculating a Fibonacci sequence element k with the recursive algorithm, the recursion tree has k levels (to see why, consider that fib(k) calls fib(k-1), which calls fib(k-2), and so on). It follows that the height of the tree is k. To estimate a lower bound on the width and number of nodes in the recursion tree, consider that since fib(k) also calls fib(k-2), there is a perfect binary tree of height k/2 as part of the recursion tree. If extracted, that perfect subtree would have 2^(k/2) leaf nodes. So the width of the recursion tree is at least 2^(k/2), i.e. exponential in k.
The crucial difference is that:
for divide-and-conquer algorithms, the input size is the width of the binary tree.
for the Fibonacci algorithm, the input size is the height of the tree.
Therefore the number of nodes in the tree is O(n) in the first case, but 2^O(n) in the second. The Fibonacci tree is much larger compared to the input size.
You mention Master theorem; however, the theorem cannot be applied to analyze the complexity of Fibonacci because it only applies to algorithms where the input is actually divided at each level of recursion. Fibonacci does not divide the input; in fact, the functions at level i produce almost twice as much input for the next level i+1.
To address the core of the question, that is "why Fibonacci and not Mergesort", you should focus on this crucial difference:
The tree you get from Mergesort has n elements at each level, and there are log(n) levels.
The tree you get from Fibonacci has n levels, because of the presence of F(n-1) in the formula for F(n), and the number of elements at each level can vary greatly: it can be very low (near the root, or near the lowest leaves) or very high. This, of course, is because of repeated computation of the same values.
To see what I mean by "repeated computation", look at the tree for the computation of F(6):
(Fibonacci recursion tree figure from http://composingprograms.com/pages/28-efficiency.html)
How many times do you see F(3) being computed?
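If you prefer to see the repetition without the picture, here is a small C illustration (my own, not from the original answer) that counts how often fib is called for each argument:

#include <stdio.h>

static int calls[64];              /* calls[k] = how many times fib(k) was invoked */

int fib(int n) {
    calls[n]++;
    if (n < 2) return n;
    return fib(n - 1) + fib(n - 2);
}

int main(void) {
    fib(6);
    for (int k = 0; k <= 6; k++)
        printf("fib(%d) was computed %d times\n", k, calls[k]);
    return 0;
}

For fib(6) this reports fib(3) being computed 3 times, matching the repeated subtrees in the figure.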
Consider the following implementation
int fib(int n)
{
    if (n < 2)
        return n;
    return fib(n - 1) + fib(n - 2);
}
Let's denote by T(n) the number of operations that fib performs to calculate fib(n). Because fib(n) calls fib(n-1) and fib(n-2), T(n) is at least T(n-1) + T(n-2). This in turn means that T(n) > fib(n). There is a direct closed-form formula for fib(n), which is proportional to some constant raised to the power n. Therefore T(n) is at least exponential. QED.
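The closed form referred to here is Binet's formula:
fib(n) = (phi^n - psi^n) / sqrt(5),   phi = (1 + sqrt(5))/2 ~= 1.618,   psi = (1 - sqrt(5))/2 ~= -0.618
so fib(n) = Theta(phi^n), and T(n) > fib(n) grows at least like 1.618^n, which is why the bound is usually quoted loosely as O(2^n).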
To my understanding, the mistake in your reasoning is assuming that a recursive implementation that evaluates f(n), where f denotes the Fibonacci sequence, reduces the input size by a factor of 2 (or some other factor) at each call, which is not the case. Each call (except for the 'base cases' 0 and 1) makes exactly 2 recursive calls, and there is no possibility to re-use previously calculated values. In the light of the presentation of the master theorem on Wikipedia, the recurrence
f(n) = f(n-1) + f(n-2)
is a case to which the master theorem cannot be applied.
With the recursive algorithm, you perform approximately 2^N operations (additions) to compute fibonacci(N), so it is O(2^N).
With a cache (memoization), you perform approximately N operations, so it is O(N).
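For example, a minimal memoized version might look like this (my own sketch; valid for 0 <= n <= 92 so the result fits in 64 bits), where each fib(k) is computed once and then reused:

#include <string.h>

/* memo[k] caches fib(k); -1 means "not computed yet". */
static long long fib_memo(int n, long long memo[]) {
    if (n < 2) return n;
    if (memo[n] != -1) return memo[n];
    memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo);
    return memo[n];
}

long long fib(int n) {
    long long memo[93];
    memset(memo, -1, sizeof memo);   /* setting every byte to 0xFF gives the value -1 */
    return fib_memo(n, memo);
}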
Algorithms with complexity O(N log N) are often a combination of iterating over every item (O(N)), splitting, recursing, and merging... Splitting by 2 means you do log N levels of recursion.
Merge sort time complexity is O(n log(n)). Quick sort best case is O(n log(n)), worst case O(n^2).
The other answers explain why naive recursive Fibonacci is O(2^n).
In case you read that Fibonacci(n) can be computed in O(log(n)), this is possible if it is calculated using iteration and repeated squaring, either with the matrix method or the Lucas sequence method. Example code for the Lucas sequence method (note that n is divided by 2 on each loop iteration):
/* lucas sequence method (repeated squaring): O(log n) loop iterations */
int fib(int n) {
    int a, b, p, q, qq, aq;
    a = q = 1;
    b = p = 0;
    while (1) {
        if (n & 1) {            /* apply the current transformation once */
            aq = a * q;
            a = b * q + aq + a * p;
            b = b * p + aq;
        }
        n /= 2;
        if (n == 0)
            break;
        qq = q * q;             /* square the transformation */
        q = 2 * p * q + qq;
        p = p * p + qq;
    }
    return b;
}
Contrary to the other answers, the master theorem can be applied here, but it has to be the master theorem for decreasing (subtract-and-conquer) functions, not the one for dividing functions. Without the theorem, the recurrence can also be solved by substitution:
f(n) = f(n-1) + f(n-2)
Since f(n-2) <= f(n-1), an upper bound is
f(n) <= 2*f(n-1) + c
Let's assume c = 1, since it is a constant and does not affect the complexity:
f(n) <= 2*f(n-1) + 1
Substitute this k times:
f(n) <= 2*[2*f(n-2) + 1] + 1
     =  2^2*f(n-2) + 2 + 1
     <= 2^2*[2*f(n-3) + 1] + 2 + 1
     =  2^3*f(n-3) + 4 + 2 + 1
     ...
     =  2^k*f(n-k) + 2^(k-1) + 2^(k-2) + ... + 4 + 2 + 1
Now let k = n:
f(n) <= 2^n*f(0) + 2^(n-1) + 2^(n-2) + ... + 4 + 2 + 1
     =  2^n + 2^n - 1 = 2^(n+1) - 1   (taking f(0) = 1)
Thus the complexity is O(2^n).

Worst Case Performance of Quicksort

I am trying to prove the following worst-case scenario for the Quicksort algorithm but am having some trouble. Initially, we have an array of size n, where n = ij. The idea is that at every partition step of Quicksort, you end up with two sub-arrays where one is of size i and the other is of size i(j-1). i in this case is an integer constant greater than 0. I have drawn out the recursive tree of some examples and understand why this is a worst-case scenario and that the running time will be theta(n^2). To prove this, I've used the iteration method to solve the recurrence equation:
T(n) = T(ij) = m if j = 1
T(n) = T(ij) = T(i) + T(i(j-1)) + cn if j > 1
T(i) = m
T(2i) = m + m + c*2i = 2m + 2ci
T(3i) = m + 2m + 2ci + 3ci = 3m + 5ci
So it looks like the recurrence is:
T(n) = jm + ci * sum_{k=1}^{j} (k - 1)
At this point, I'm a bit lost as to what to do. It looks like the summation at the end will result in roughly j^2 if expanded out, but I need to show that it somehow equals n^2. Any explanation on how to continue with this would be appreciated.
Note that the quicksort worst-case scenario is when each partition produces two subproblems of size 0 and n-1. In this scenario, you have these recurrence equations, one per level of the tree (the cn term is the cost of the partition step at that level):
T(n)   = T(n-1) + T(0) + cn      <-- at the first level of the tree
T(n-1) = T(n-2) + T(0) + c(n-1)  <-- at the second level of the tree
T(n-2) = T(n-3) + T(0) + c(n-2)  <-- at the third level of the tree
.
.
.
The sum of the costs at each level is an arithmetic series:
T(n) = c * sum_{k=1}^{n} k = c * n(n+1)/2 ~ n^2   (for n -> +inf)
It is O(n^2).
It's a matter of simple mathematics. The complexity, as you have correctly calculated, is
O(jm + ij^2)
What you have found is a parameterized complexity. The standard O(n^2) is contained in it as follows: assuming i = 1 you have a standard base case, so m = O(1) and j = n, therefore we get O(n^2). If you substitute ij = n you get O(nm/i + n^2/i). Now remember that m is a function of i, depending on which algorithm you use as the base case, hence m = f(i), and you are left with O(n*f(i)/i + n^2/i). Note that since there is no linear-time algorithm for general sorting, f(i) = Omega(i*log(i)), which gives O(n*log(i) + n^2/i). So you have only one degree of freedom, namely i. Check that for any value of i you cannot reduce the bound below n*log(n), which is the best possible for comparison-based sorting.
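To spell out that last check (my own arithmetic, not part of the original answer): with ij = n and f(i) = Theta(i*log(i)) the bound is, up to constants,
n*log(i) + n^2/i
For constant i (the scenario in the question) the n^2/i term is Theta(n^2). Making i larger shrinks n^2/i, but it only drops to O(n) when i = Theta(n), and by then n*log(i) = Theta(n*log(n)). So the bound never goes below Theta(n*log(n)), and for constant i it is Theta(n^2).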
What confuses me is that you are doing a worst-case analysis of quicksort; this is not the way it's done. When you say worst case, it implies you are using randomization, in which case the worst case will always be i = 1, and hence the worst-case bound will be O(n^2). An elegant way to do this is explained in the book Randomized Algorithms by R. Motwani and P. Raghavan; alternatively, if you are a programmer, look at Cormen (CLRS).

Average Runtime of Quickselect

Wikipedia states that the average runtime of quickselect algorithm (Link) is O(n). However, I could not clearly understand how this is so. Could anyone explain to me (via recurrence relation + master method usage) as to how the average runtime is O(n)?
Because:
after each partition, we already know which side our desired element lies in;
we do not need to sort (by partitioning) all the elements, but only operate on the side we need.
As in quicksort, we partition roughly into halves *, and then into halves of a half, but this time we only need to do the next round of partitioning on one single side (half) of the two, namely the one in which the element is expected to lie.
It is like (not entirely accurate):
n + n/2 + n/4 + n/8 + ... < 2n
So it is O(n).
* Half is used for convenience; the actual partition is not exactly 50%.
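For reference, here is a minimal C sketch of quickselect (my own illustration, not from the original answer; it uses the simple Lomuto partition with the last element as pivot) showing the single-sided recursion:

/* Lomuto partition of a[lo..hi] around the pivot a[hi]; returns the pivot's final index. */
static int partition(int a[], int lo, int hi) {
    int pivot = a[hi], i = lo;
    for (int j = lo; j < hi; j++) {
        if (a[j] < pivot) {
            int t = a[i]; a[i] = a[j]; a[j] = t;
            i++;
        }
    }
    int t = a[i]; a[i] = a[hi]; a[hi] = t;
    return i;
}

/* Returns the element that would sit at index k (0-based) if a[lo..hi] were sorted;
   call with lo <= k <= hi. */
int quickselect(int a[], int lo, int hi, int k) {
    while (lo < hi) {
        int p = partition(a, lo, hi);
        if (p == k) return a[p];
        if (k < p) hi = p - 1;      /* keep only the left side  */
        else       lo = p + 1;      /* keep only the right side */
    }
    return a[lo];
}

Unlike quicksort, only one side is processed after each partition, which is where the n + n/2 + n/4 + ... < 2n sum above comes from.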
To do an average-case analysis of quickselect, one has to consider, for every pair of elements, how likely it is that the two elements are compared during the algorithm, assuming random pivoting. From this we can derive the expected number of comparisons. Unfortunately, the analysis I will show requires somewhat longer calculations, but it is a clean average-case analysis, as opposed to the current answers.
Let's assume the array we want to select the k-th smallest element from is a random permutation of [1,...,n]. The pivot elements we choose during the course of the algorithm can also be seen as a given random permutation. During the algorithm we always pick the next feasible pivot from this permutation; therefore the pivots are chosen uniformly at random, as every element has the same probability of occurring as the next feasible element in the random permutation.
There is one simple, yet very important, observation: we compare two elements i and j (with i < j) if and only if one of them is chosen as the first pivot element from the range [min(k,i), max(k,j)]. If another element from this range is chosen first, the two are never compared, because we continue searching in a sub-array that does not contain at least one of i and j.
Because of the above observation and the fact that the pivots are chosen uniform at random the probability of a comparison between i and j is:
2/(max(k,j) - min(k,i) + 1)
(Two events out of max(k,j) - min(k,i) + 1 possibilities.)
We split the analysis in three parts:
max(k,j) = k, therefore i < j <= k
min(k,i) = k, therefore k <= i < j
min(k,i) = i and max(k,j) = j, therefore i < k < j
In the third case the inequalities are strict (the less-or-equal signs are omitted) because those boundary cases are already covered by the first two cases.
Now let's get our hands a little dirty on calculations. We just sum up all the probabilities as this gives the expected number of comparisons.
Case 1
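A sketch of the Case 1 sum (my reconstruction; the original showed the calculation as an image): here max(k,j) = k and min(k,i) = i, so the comparison probability is 2/(k - i + 1). Summing over all pairs i < j <= k:
sum_{i=1}^{k-1} sum_{j=i+1}^{k} 2/(k - i + 1)
  = sum_{i=1}^{k-1} 2*(k - i)/(k - i + 1)
 <= sum_{i=1}^{k-1} 2
  = 2(k - 1) <= 2n
so Case 1 contributes at most a linear number of expected comparisons.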
Case 2
Similar to case 1 so this remains as an exercise. ;)
Case 3
We use H_r for the r-th harmonic number which grows approximately like ln(r).
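One way to bound the Case 3 sum (my own sketch; the original calculation, which used the harmonic numbers H_r, was shown as an image): each pair i < k < j is compared with probability 2/(j - i + 1). Group the pairs by their distance s = j - i + 1: for a fixed s there are at most s - 2 pairs with i < k < j (since i must satisfy k - s + 1 < i < k), each contributing 2/s, so each distance contributes at most 2. With fewer than n possible distances, Case 3 contributes at most 2n expected comparisons.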
Conclusion
All three cases need a linear number of expected comparisons. This shows that quickselect indeed has an expected runtime in O(n). Note that - as already mentioned - the worst case is in O(n^2).
Note: The idea of this proof is not mine. I think that's roughly the standard average case analysis of quickselect.
If there are any errors please let me know.
In quickselect, as specified, we apply recursion on only one half of the partition.
Average Case Analysis:
First Step: T(n) = cn + T(n/2)
where cn is the time to perform the partition (c is some constant; its exact value doesn't matter), and T(n/2) is the cost of recursing on one half of the partition. Since it's an average-case analysis, we assume the pivot was the median.
As we keep recursing, we get the following set of equations:
T(n/2) = cn/2 + T(n/4)
T(n/4) = cn/4 + T(n/8)
...
T(2) = 2c + T(1)
T(1) = c
Summing the equations and cancelling the matching terms on both sides produces a linear result:
c(n + n/2 + n/4 + ... + 2 + 1) <= c(2n)   // sum of a geometric progression
Hence, it's O(n).
I also felt very conflicted at first when I read that the average time complexity of quickselect is O(n) while we break the list in half each time (like binary search or quicksort). It turns out that breaking the search space in half each time doesn't by itself guarantee an O(log n) or O(n log n) runtime. What makes quicksort O(n log n) and quickselect O(n) is that quicksort must explore all branches of the recursion tree, while quickselect explores only a single branch. Let's compare the time complexity recurrence relations of quicksort and quickselect to prove my point.
Quicksort:
T(n) = n + 2T(n/2)
= n + 2(n/2 + 2T(n/4))
= n + 2(n/2) + 4T(n/4)
= n + 2(n/2) + 4(n/4) + ... + n(n/n)
= 2^0(n/2^0) + 2^1(n/2^1) + ... + 2^log2(n)(n/2^log2(n))
= n (log2(n) + 1) (since we are adding n to itself log2(n) + 1 times)
Quickselect:
T(n) = n + T(n/2)
= n + n/2 + T(n/4)
= n + n/2 + n/4 + ... + n/n
= n(1 + 1/2 + 1/4 + ... + 1/2^log2(n))
<= n * (1/(1 - 1/2)) = 2n (bounding by the infinite geometric series)
I hope this convinces you why the average runtime of quickselect is O(n).
