space complexity of merge sort using array - algorithm

This algorithm is merge sort. I know it may look odd, but my main focus here is on calculating the space complexity of this algorithm.
If we look at the recurrence tree of the mergesort function and trace the algorithm, the stack depth will be log(n). But the merge step inside mergesort also creates two arrays of size n/2 and n/2. So should I first find the space complexity of the recurrence relation, and then add the n/2 + n/2 to it, giving O(log(n) + n)?
I know the answer, but I am confused about the process. Can anyone tell me the correct procedure?
The confusion comes from the merge function, which is not itself recursive but is called inside a recursive function.
And why do we say that the space complexity will be O(log(n) + n), when for the space complexity of a recursive function we usually just calculate the height of the recursion tree?
Merge(Leftarray, Rightarray, Array) {
    nL <- length(Leftarray)
    nR <- length(Rightarray)
    i <- j <- k <- 0
    while (i < nL && j < nR) {
        if (Leftarray[i] <= Rightarray[j])
            Array[k++] <- Leftarray[i++]
        else
            Array[k++] <- Rightarray[j++]
    }
    while (i < nL) {
        Array[k++] <- Leftarray[i++]
    }
    while (j < nR) {
        Array[k++] <- Rightarray[j++]
    }
}

Mergesort(Array) {
    n <- length(Array)
    if (n < 2)
        return
    mid <- n / 2
    Leftarray <- array of size (mid)
    Rightarray <- array of size (n-mid)
    for i <- 0 to mid-1
        Leftarray[i] <- Array[i]
    for i <- mid to n-1
        Rightarray[i-mid] <- Array[i]
    Mergesort(Leftarray)
    Mergesort(Rightarray)
    Merge(Leftarray, Rightarray)
}

This implementation of MergeSort is quite inefficient in memory space and has some bugs:
- the memory is not freed; I assume you rely on garbage collection.
- the target array Array is not passed to Merge by Mergesort.
Extra space in the amount of the size of Array is allocated at each recursion level, so at least twice the size of the initial array (2*N) is required if the garbage collection is optimal (for example, if it uses reference counts), and up to N*log2(N) space is used if the garbage collector is lazy. This is much more than necessary, as a careful implementation can use as little as N/2 extra space.
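To illustrate that last point, here is a minimal sketch (my own, not the poster's code) of a top-down merge sort that buffers only the left half, so the auxiliary space peaks at about N/2 elements plus the O(log N) recursion stack:

def merge_sort(a, lo=0, hi=None):
    # Sorts the Python list a[lo:hi] in place.
    if hi is None:
        hi = len(a)
    if hi - lo < 2:
        return
    mid = (lo + hi) // 2
    merge_sort(a, lo, mid)
    merge_sort(a, mid, hi)
    left = a[lo:mid]                   # buffer only the left half: <= N/2 extra
    i, j, k = 0, mid, lo
    while i < len(left) and j < hi:    # merge buffer and right half back into a[lo:hi]
        if left[i] <= a[j]:
            a[k] = left[i]; i += 1
        else:
            a[k] = a[j]; j += 1
        k += 1
    a[k:k + len(left) - i] = left[i:]  # leftover buffer; right-half leftovers are already in place

data = [5, 2, 9, 1, 5]
merge_sort(data)
print(data)   # [1, 2, 5, 5, 9]

The write index k can never overtake the read index j before the buffer is exhausted, so no unread element of the right half is overwritten.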

Related

Is there such a BST that has optimal height but does not satisfy the AVL condition?

I'm curious whether it is possible to construct a binary search tree in such a way that it has minimal height for its n elements but it is not an AVL tree.
Or in other words, is every binary search tree with minimal height by definition also an AVL tree?
The AVL requirement is that left and right depths differ at most by 1.
An optimal BST of N elements, where D = log2(N), has the property that the sum of the depths of its nodes is minimal. The effect is that every element resides at depth at most ceil(D).
To have a minimal sum of depths, the tree must be filled level by level from the top down, so that the sum of the individual depths is minimal.
Not optimal BST - and not AVL:
    f
   / \
  a   q
     / \
    n   x
   / \   \
  j   p   y
Elements: 8
Depths: 0 + 1 + 1 + 2 + 2 + 3 + 3 + 3 = 15
Optimal BST - and AVL:
     _ f _
    /     \
   j       q
  / \     / \
 a   n   p   x
              \
               y
Elements: 8
Depths: 0 + 1 + 1 + 2 + 2 + 2 + 2 + 3 = 13
So there is no non-AVL optimal BST.
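If you want to verify the two depth sums, here is a small sketch of mine, with the example trees encoded as child dictionaries:

def depth_sum(tree, node, depth=0):
    # Sum of the depths of all nodes reachable from `node`.
    if node is None:
        return 0
    left, right = tree[node]
    return depth + depth_sum(tree, left, depth + 1) + depth_sum(tree, right, depth + 1)

non_optimal = {'f': ('a', 'q'), 'a': (None, None), 'q': ('n', 'x'),
               'n': ('j', 'p'), 'x': (None, 'y'), 'j': (None, None),
               'p': (None, None), 'y': (None, None)}
optimal = {'f': ('j', 'q'), 'j': ('a', 'n'), 'q': ('p', 'x'),
           'a': (None, None), 'n': (None, None), 'p': (None, None),
           'x': (None, 'y'), 'y': (None, None)}
print(depth_sum(non_optimal, 'f'), depth_sum(optimal, 'f'))   # 15 13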

Space complexity of breadth first search of binary tree?

What would be the space complexity of breadth first search on a binary tree? Since it would only store one level at a time, I don't think it would be O(n).
The space complexity is in fact O(n), as witnessed by a perfect binary tree. Consider an example of depth four:
            ____________________14____________________
           /                                          \
      _______24_________                    __________8_________
     /                  \                  /                    \
  __27__            ____11___          ___23___            ____22___
 /      \          /         \        /        \          /         \
 4       5        13          2      17        12        26         25
/ \     / \      /  \        / \    /  \      /  \      /  \       /  \
29 0   9   6   16   19     20   1  10   7   21   15   18   30    28    3
Note that the number of nodes at each depth is given by
depth  num_nodes
    0          1
    1          2
    2          4
    3          8
    4         16
In general, there are 2^d nodes at depth d. The total number of nodes in a perfect binary tree of depth d is n = 1 + 2^1 + 2^2 + ... + 2^d = 2^(d+1) - 1. As d goes to infinity, 2^d/n goes to 1/2. So, roughly half of all nodes occur at the deepest level. Since n/2 = O(n), the space complexity is linear in the number of nodes.
The illustration credit goes to the binarytree package.
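You can observe the linear peak directly by running BFS on perfect trees of increasing depth and recording the largest queue size; a minimal sketch of mine (not the binarytree package):

from collections import deque

def perfect_tree(depth):
    # A perfect binary tree of the given depth, as nested (left, right) tuples.
    if depth == 0:
        return (None, None)
    return (perfect_tree(depth - 1), perfect_tree(depth - 1))

def peak_queue_size(root):
    # Breadth-first traversal, tracking the largest queue length seen.
    q = deque([root])
    peak = 1
    while q:
        left, right = q.popleft()
        for child in (left, right):
            if child is not None:
                q.append(child)
        peak = max(peak, len(q))
    return peak

for d in range(1, 7):
    n = 2 ** (d + 1) - 1   # number of nodes in a perfect tree of depth d
    print(d, n, peak_queue_size(perfect_tree(d)))   # peak is 2^d, roughly n/2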

A statement from heapsort algorithm in clrs book

Please explain the underlined statement in the picture. It's from section 6.2 in CLRS. How is the subtree size at most 2n/3?
Remember that balance in binary trees is generally a good thing for time complexities! The worst-case time complexity occurs when the tree is as unbalanced as it can be. We're working with heaps here, and heaps are complete binary trees. The most unbalanced a complete binary tree can be is when its bottommost level is half full. This is shown below.
              -------*-------
             /               \
            *                 *
           / \               / \
          /   \             /   \
         /     \           /     \
        /-------\         /-------\
       /---------\   <-- last level is half-full
Suppose there are m nodes in the last level; since the tree is complete, they all sit in the left subtree. Then there must be m - 1 further nodes in the left subtree above the last level.
              -------*-------
             /               \
            *                 *
           / \               / \
          /   \             /   \
         / m-1 \           /     \
        /-------\         /-------\
       /--- m ---\
Why? Well, in general, a full binary tree with m leaf nodes must have m - 1 internal nodes. Imagine that these m leaf nodes represent players in a tournament; if one player is eliminated per game, there must be m - 1 games to determine the winner. Each game corresponds to an internal node. Hence there are m - 1 internal nodes.
Because the tree is complete, the right subtree must also have m - 1 nodes.
              -------*-------
             /               \
            *                 *
           / \               / \
          /   \             /   \
         / m-1 \           / m-1 \
        /-------\         /-------\
       /--- m ---\
Hence we have total number of nodes (including the root):
n = 1 + [(m - 1) + m] + (m - 1)
= 3m - 1
Let x = number of nodes in the left subtree. Then:
x = (m - 1) + m
= 2m - 1
We can solve these simultaneous equations, eliminating variable m:
2n - 3x = 1
x = (2n - 1) / 3
Hence x is less than 2n/3. This explains the original statement:
The children's subtrees each have size at most 2n/3 – the worst case occurs when the bottom level of the tree is exactly half full
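To make the algebra concrete, here is a quick numeric check of mine for n = 3m - 1 and x = 2m - 1:

for m in (1, 2, 4, 8, 1024):
    n = 3 * m - 1      # root + left subtree (2m - 1) + right subtree (m - 1)
    x = 2 * m - 1      # left subtree: (m - 1) internal nodes + m leaves
    assert 3 * x == 2 * n - 1       # i.e. x == (2n - 1) / 3
    print(n, x, round(x / n, 4))    # the ratio approaches 2/3 from below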

time complexity similar to bubble sort

Analyze the following sorting algorithm:
for (int i = 0; i < SIZE; i++)
{
    if (list[i] > list[i + 1])
    {
        swap list[i] with list[i + 1];
        i = 0;
    }
}
I want to determine the time complexity for this in the worst case... I don't understand how it is O(n^3).
Clearly the for loop by itself is O(n). The question is, how many times can it run?
Every time you do a swap, the loop starts over. How many times will you do a swap? You will do a swap for each element from its starting position until it reaches its proper spot in the sorted output. For an input that is reverse sorted, that will average to n/2 times, or O(n) again. But that's for each element, giving another O(n). That's how you get to O(n^3).
I ran an analysis for n = 10 and n = 100. The number of comparisons seems to be O(n^3), which makes sense because i gets set to 0 an average of n/2 times, so it's somewhere around n^2 * (n/2) comparison and increment operations for your for loop. But the number of swaps seems to be only O(n^2), because obviously no more swaps are necessary to sort the entire list. The best case is still n-1 comparisons and 0 swaps, of course.
For best-case testing I use an already sorted array of n elements: [0...n-1].
For worst-case testing I use a reverse-sorted array of n elements: [n-1...0].
def analyzeSlowSort(A):
    comparison_count = swap_count = i = 0
    while i < len(A) - 1:
        comparison_count += 1
        if A[i] > A[i+1]:
            A[i], A[i+1] = A[i+1], A[i]
            swap_count += 1
            i = 0
        i += 1
    return comparison_count, swap_count

n = 10
# Best case
print(analyzeSlowSort(list(range(n))))          # -> (9, 0)
# Worst case
print(analyzeSlowSort(list(range(n, 0, -1))))   # -> (129, 37)

n = 100
# Best case
print(analyzeSlowSort(list(range(n))))          # -> (99, 0)
# Worst case
print(analyzeSlowSort(list(range(n, 0, -1))))   # -> (161799, 4852)
Clearly this is a very inefficient sorting algorithm in terms of comparisons. :)
Okay... here goes.
In the worst case, let's say we have a completely flipped array:
9 8 7 6 5 4 3 2 1 0
Each time there is a swap, i gets reset to 0.
Let's start by flipping 9 and 8: we now have 8 9 7 6 5 4 3 2 1 0, and i is set back to zero.
Now the loop runs until index 2 and we have a flip again: 8 7 9 6 5 4 3 2 1 0, and i resets again. But to get 7 into first place we need yet another flip, of 8 and 7: 7 8 9 6 5 4 3 2 1 0
So the number of loop iterations grows like this:
T(1) = O(1)
T(2) = O(1 + 2)
T(3) = O(1 + 2 + 3)
T(4) = O(1 + 2 + 3 + 4) and so on...
Finally, for the nth term, which is the biggest in this case, T(n) = O(n(n-1)/2).
But for the entire thing you need to sum all of these terms up,
which can be bounded by: Summation of T(n) = O(Summation of n^2) = O(n^3).
Addition
Think of it this way: for each element you need to walk up to it and bring it back, but when you bring it back, it only moves by one space. I hope that makes it a little clearer.
Another Edit
If none of the above makes sense, think of it this way: you have to bring 0 to the front of the array. Initially you walk 9 steps up to the zero and put it before 1. But after that you are magically transported (i = 0) to the beginning of the array. So now you have to walk 8 steps to the zero and bring it to position two. Zap! Again you are back at the start of the array. The number of steps you take to reach the zero each time, 9 + 8 + 7 + 6 + 5 + ..., is bounded by the square of the length of the array; this is the last term of the recurrence. Does this make sense? Now you do roughly that much work for each of the n elements, which translates to summing all the terms up, and we have O(n^3).
Please comment if things help or don't make sense.

Merge sort time and space complexity

Let's take this implementation of Merge Sort as an example
void mergesort(Item a[], int l, int r) {
    if (r <= l) return;
    int m = (r+l)/2;
    mergesort(a, l, m);    ------------ (1)
    mergesort(a, m+1, r);  ------------ (2)
    merge(a, l, m, r);
}
a) The time complexity of this merge sort is O(n lg(n)). Will parallelizing (1) and (2) give any practical gain? Theoretically, it appears that after parallelizing them you would still end up with O(n lg(n)). But practically, can we get any gains?
b) The space complexity of this merge sort is O(n). However, if I choose to perform in-place merge sort using linked lists (not sure if it can be done with arrays reasonably), will the space complexity become O(lg(n)), since you have to account for the recursion stack frames?
Can we treat O(lg(n)) as constant, since it cannot be more than 64? I may have misunderstood this in a couple of places. What exactly is the significance of 64?
c) Sorting Algorithms Compared - Cprogramming.com says merge sort requires constant space using linked lists. How? Did they treat O(lg(n)) as constant?
d) Added to get more clarity. For space complexity calculation, is it fair to assume the input array or list is already in memory? When I do complexity calculations, I always calculate the "extra" space I will need besides the space already taken by the input. Otherwise the space complexity will always be O(n) or worse.
MergeSort's time complexity is O(n lg n), which is fundamental knowledge. Merge sort's space complexity will always be O(n), including with arrays. If you draw the space tree out, it will seem as though the space complexity is O(n lg n). However, since the code executes depth-first, you will only ever be expanding along one branch of the tree at a time, so the total space usage required is always bounded by O(3n) = O(n).
For example, if you draw the space tree out, it seems like it is O(n lg n):
                16                               | 16
               /  \
              /    \
             /      \
            /        \
           8          8                          | 16
          / \        / \
         /   \      /   \
        4     4    4     4                       | 16
       / \   / \  / \   / \
      2   2 2   2 ..................             | 16
     / \ ..........................
    1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1              | 16
where the height of the tree is O(log n) => space complexity is O(n log n + n) = O(n log n).
However, this is not the case for the actual code, as it does not execute in parallel. For example, in the case where N = 16, this is how the mergesort code executes:
          16
         /
        8
       /
      4
     /
    2
   / \
  1   1
notice how the amount of space used is 32 = 2n = 2*16 < 3n
Then it merges upwards:
          16
         /
        8
       /
      4
     / \
    2   2
   / \
  1   1
which is 34 < 3n.
Then it merges upwards:
          16
         /
        8
       / \
      4   4
         /
        2
       / \
      1   1
36 < 16 * 3 = 48
then it merges upwards:
          16
         /  \
        8    8
            / \
           4   4
          / \
         2   2
        / \
       1   1
16 + 16 + 14 = 46 < 3*n = 48
in a larger case, n = 64
            64
           /  \
         32    32
              /  \
            16    16
           /  \
          8    8
         / \
        4   4
       / \
      2   2
     / \
    1   1
which totals 190 <= 3*n = 3*64 = 192
You can prove this by induction for the general case.
Therefore, the space complexity is always bounded by O(3n) = O(n), even if you implement it with arrays, as long as you clean up used space after merging and execute the code sequentially rather than in parallel.
Example of my implementation is given below:
template<class X>
void mergesort(X a[], int n)   // X is a type using templates
{
    if (n <= 1)
    {
        return;
    }
    int q, p;
    q = n/2;
    p = n/2;
    //if (n % 2 == 1) p++;   // increment by 1
    if (n & 0x1) p++;        // increment by 1
    // note: an AND operation is much faster in hardware than calculating the mod (%)
    X b[q];   // note: a variable-length array is a compiler extension (e.g. g++), not standard C++
    int i = 0;
    for (i = 0; i < q; i++)
    {
        b[i] = a[i];
    }
    mergesort(b, i);
    // do mergesort here to save space
    // http://stackoverflow.com/questions/10342890/merge-sort-time-and-space-complexity/28641693#28641693
    // Only after returning from the previous mergesort do you create the next array.
    X c[p];
    int k = 0;
    for (int j = q; j < n; j++)
    {
        c[k] = a[j];
        k++;
    }
    mergesort(c, k);
    int r, s, t;
    t = 0; r = 0; s = 0;
    while ((r != q) && (s != p))
    {
        if (b[r] <= c[s])
        {
            a[t] = b[r];
            r++;
        }
        else
        {
            a[t] = c[s];
            s++;
        }
        t++;
    }
    if (r == q)
    {
        while (s != p)
        {
            a[t] = c[s];
            s++;
            t++;
        }
    }
    else
    {
        while (r != q)
        {
            a[t] = b[r];
            r++;
            t++;
        }
    }
    return;
}
a) Yes - in a perfect world the merges within each level can run in parallel, but the levels themselves are sequential: along the critical path you must still do merges of size 1, 2, 4, ..., n/4, n/2, n one after another, which sums to O(n) parallel time. The total work is still O(n log n). In the not-so-perfect world you don't have an infinite number of processors, and context switching and synchronization offset any potential gains.
b) Space complexity is always Ω(n), as you have to store the elements somewhere. The additional space complexity can be O(n) in an implementation using arrays and O(1) in linked-list implementations. In practice, implementations using lists need extra space for the list pointers, so unless you already have the list in memory it shouldn't matter.
edit: if you count stack frames, then it's O(n) + O(log n), so still O(n) in the case of arrays. In the case of lists it's O(log n) additional memory.
c) Lists only need some pointers changed during the merge process. That requires constant additional memory.
d) That's why in merge-sort complexity analysis people mention 'additional space requirement' or things like that. It's obvious that you have to store the elements somewhere, but it's always better to mention 'additional memory' to keep purists at bay.
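To make point c) concrete, here is a minimal sketch of mine (using a hypothetical Node class, not code from any of the linked pages) that merges two sorted singly linked lists purely by relinking, with O(1) additional memory:

class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

def merge(a, b):
    # Splice two already-sorted lists together; no element is copied
    # and no node is allocated except a single dummy head.
    dummy = tail = Node(None)
    while a and b:
        if a.value <= b.value:
            tail.next, a = a, a.next
        else:
            tail.next, b = b, b.next
        tail = tail.next
    tail.next = a or b   # append whichever list still has nodes
    return dummy.next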
Simple and smart thinking.
Total levels (L) = log2(N).
At the last level, the number of nodes = N.
step 1: assume every level i has x(i) nodes.
step 2: so time complexity = x1 + x2 + x3 + ... + x(L-1) + N (for i = L).
step 3: fact we know: x1, x2, x3, ..., x(L-1) are each at most N.
step 4: so let's consider x1 = x2 = ... = x(L-1) = N.
step 5: then time complexity = (N + N + N + ... L times).
Time complexity = O(N*L); put L = log2(N);
Time complexity = O(N*log(N))
We use an extra array while merging, so
Space complexity: O(N).
Hint: Big O(x) means that x is an upper bound: we can prove that the running time will never exceed a constant multiple of x.
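As a quick empirical check of that bound, here is a sketch of mine that counts comparisons and prints them next to N*log2(N):

import math

def msort(a, cnt):
    # Merge sort that counts element comparisons in cnt[0].
    if len(a) < 2:
        return a
    mid = len(a) // 2
    left, right = msort(a[:mid], cnt), msort(a[mid:], cnt)
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        cnt[0] += 1
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

for n in (2 ** 10, 2 ** 14):
    cnt = [0]
    msort(list(range(n, 0, -1)), cnt)
    print(n, cnt[0], round(n * math.log2(n)))   # same order of magnitude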
a) Yes, of course, parallelizing merge sort can be very beneficial. It remains O(n log n), but your constant should be significantly lower.
b) Space complexity with a linked list should be O(n), or more specifically O(n) + O(logn). Note that that's a +, not a *. Don't concern yourself with constants much when doing asymptotic analysis.
c) In asymptotic analysis, only the dominant term in the equation matters much, so the fact that we have a + and not a * makes it O(n). If we were duplicating the sublists all over, I believe that would be O(nlogn) space - but a smart linked-list-based merge sort can share regions of the lists.
Worst-case performance of merge sort: O(n log n)
Best-case performance of merge sort: O(n log n) typically, O(n) for the natural variant
Average performance of merge sort: O(n log n)
Worst-case space complexity of merge sort: O(n) total, O(n) auxiliary
For both the best and the worst case the complexity is O(n log(n)).
An extra array of size N is needed in each merge step, so the space complexity is O(n + n) = O(2n); since we drop constant factors when calculating complexity, it is O(n).
Merge sort's space complexity is O(n log n) if every recursion level keeps its own auxiliary array alive: it can go at most O(log n) recursions deep, and each recursion level needs an additional O(n) of space for storing the merged array.
For those who are saying O(n): don't forget that the O(n) array is needed at each stack frame depth, so the O(n) bound only holds when that space is reused or freed as the recursion unwinds.
