I believe I have understood merge sort to some extent, but I'm finding it hard to fully understand its time complexity. We recursively call mergesort on each half (the left and right sub-arrays), which gives the log(n) factor, and I understand that. But the merging part is said to be O(n), making the whole thing O(n log(n)), and I don't quite understand how merging is linear, because for every sub-array call there are len(sub_array) * some-constant operations. So if the length of the array to be sorted is n, then according to my understanding the merging part works out as follows.
Let k = the number of primitive operations per element. The total cost is then (the factor of 2 is there because every merge has two parts, a left sub-array and a right sub-array):
2(k)*2 + 2(k)*4 + 2(k)*8 + ... + 2(k)*n   (the number of summands is log n)
But I don't see how this generalizes to n log n. Had it been k + k + k + ... + k, n times, it would make sense to say O(nk) and discard the constant k; in merge sort, however, the terms aren't constant. So how is the time complexity really computed?
Let the array length be n = 2^k for simplicity. I'll show the scheme for an array of length 8.
First we recursively divide the array: into 2 chunks of length 2^(k-1), then 4 chunks of length 2^(k-2), and so on, until the chunk length becomes 1
a b c d e f g h
Now we merge into chunks of size 2 (ab denotes the sorted array of elements a and b)
a b c d e f g h
\ / \ / \ / \ /
ab cd ef gh
We make 4*2 operations (not elementary operations, but each one is O(1))
ab cd ef gh
\ / \ /
abcd efgh
We make 2*4 operations
abcd efgh
\ /
abcdefgh
And here we make 1*8 operations
We can see that we make log(n) steps, and every step takes the same number of operations, n, so the overall complexity is O(n log n)
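The per-level counts in the scheme above can be checked with a short sketch (assuming n is a power of two; merge_work is just an illustrative name):

```python
def merge_work(n):
    """Element copies performed at each merge level, for n a power of two."""
    levels = []
    width = 2
    while width <= n:
        num_merges = n // width            # merges performed at this level
        levels.append(num_merges * width)  # merging two width/2 halves copies `width` elements
        width *= 2
    return levels
```

For n = 8 this gives three levels of 8 operations each, i.e. the 4*2, 2*4, and 1*8 from the diagrams; in general there are log2(n) levels of n operations.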
I had a lecture on Big Oh for Merge Sort and I'm confused.
What was shown is:
0 Merges [<----- n -------->] = n
1 Merge [<--n/2--][-n/2--->] = (n/2 + n/2) = n
2 Merges [n/4][n/4][n/4][n/4] = 2(n/4 + n/4) = n
....
log(n) merges = n
Total = (n + n + n + ... + n) = lg n
= O(n log n)
I don't understand why (n + n + ... + n) is related to log base 2 of n, or how they got 2(n/4 + n/4) for 2 merges.
In the case of 1 merge, you have two sub-arrays to be sorted, where sorting each sub-array takes time proportional to n/2. In that sense, to sort those two sub-arrays you need time proportional to n.
Similarly, when you are doing 2 merges, there are 4 sub-arrays to be sorted, each taking time proportional to n/4, which again sums up to n.
In general, every level of merging takes total time proportional to n to handle all the sub-arrays at that level. In that sense, we can write the time taken by merge sort as follows.
T(n) = 2 * T(n/2) + n
You will see that this recursion can go deep (say to a height h) until n/(2^h) = 1. Taking the log here, we get h = log(n). That is how log(n) enters the picture; the log here is base 2.
Since you have log(n) steps where each step takes a time proportional to n, total time taken can be expressed as,
n * log(n)
In big O notation, we give this as an upper bound: O(nlog(n)). Hope you got the idea.
The following image of the recursion tree will clarify things further.
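If it helps, the recurrence T(n) = 2 * T(n/2) + n can also be evaluated directly; a minimal sketch, assuming n is a power of two and T(1) = 0:

```python
def T(n):
    """Evaluate T(n) = 2*T(n/2) + n with T(1) = 0, for n a power of two."""
    if n == 1:
        return 0
    return 2 * T(n // 2) + n
```

For powers of two this returns exactly n * log2(n), e.g. T(8) = 24 and T(1024) = 10240.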
The last line of the following part written in your question,
0 Merges [<----- n -------->] = n
1 Merge [<--n/2--][-n/2--->] = (n/2 + n/2) = n
2 Merges [n/4][n/4][n/4][n/4] = 2(n/4 + n/4) = n
....
n merges = n --This line is incorrect!
is wrong. You will not have a total of n merges of size n, but log n merges of size n.
At every level, you divide a problem into 2 problems of half the size. As you continue dividing, the total number of divisions you can do is log n. (How? Say the total number of divisions possible is x. Then n = 2^x, i.e. x = log2(n).)
Since at each level you do a total work of O(n), therefore for Log n levels, the sum total of all work done will be O(n Log n).
You've got a depth of log(n) and a width of n for your tree. :)
The log portion is the result of "how many times can I split my data in two before I have only one element left?" This is the depth of your recursion tree. The multiple of n comes from the fact that at each of those levels in the tree you'll look at every element in your data set once, across all the merge steps at that level.
recurse downwards:
n unsorted elements
[n/2][n/2] split until singletons...
...
merge n elements at each step when recursing back up
[][][]...[][][]
[ ] ... [ ]
...
[n/2][n/2]
n sorted elements
It's very simple. Each merge pass takes O(n), as you demonstrated. The number of passes you need is log n (base 2), because each pass doubles the size of the sorted sections.
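That doubling view corresponds to the iterative, bottom-up formulation of merge sort; a sketch (not the code from any particular answer here):

```python
def bottom_up_mergesort(a):
    """Merge runs of width 1, 2, 4, ...: about log2(n) passes, each touching all n elements."""
    a = list(a)
    n = len(a)
    width = 1
    while width < n:                       # one pass per doubling -> O(log n) passes
        for lo in range(0, n, 2 * width):
            mid = min(lo + width, n)
            hi = min(lo + 2 * width, n)
            i, j, merged = lo, mid, []
            while i < mid and j < hi:      # standard two-pointer merge
                if a[i] <= a[j]:
                    merged.append(a[i]); i += 1
                else:
                    merged.append(a[j]); j += 1
            merged.extend(a[i:mid])
            merged.extend(a[j:hi])
            a[lo:hi] = merged
        width *= 2
    return a
```

Each pass does O(n) total merge work, and the width doubles each time, so the loop runs ceil(log2(n)) times.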
How does a program's worst-case or average-case complexity depend on the log function? And how does the base of the log come into play?
The log factor appears when you split your problem into k parts of size n/k each and then "recurse" (or mimic recursion) on some of them.
A simple example is the following loop:
def foo(n):
    while n > 0:
        print(n)   # print first, so n itself is the first value shown
        n //= 2    # integer halving
The above will print n, n/2, n/4, ..., 1, and there are O(log n) such values.
The complexity of the above program is O(log n), since each print takes a constant amount of time, and the number of values n takes along the way is O(log n).
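You can also count the printed values directly (a small sketch; count_halvings is an illustrative name):

```python
def count_halvings(n):
    """Number of values printed by foo(n): halve n until it reaches 0."""
    count = 0
    while n > 0:
        count += 1
        n //= 2
    return count
```

For a positive integer n this equals floor(log2(n)) + 1, which is Theta(log n).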
If you are looking for "real life" examples: in quicksort (and for simplicity let's assume it splits into exactly two halves), you split the array of size n into two subarrays of size n/2, and then you recurse on both of them, invoking the algorithm on each half.
This makes the complexity function of:
T(n) = 2T(n/2) + O(n)
From master theorem, this is in Theta(nlogn).
Similarly, on binary search - you split the problem to two parts, and recurse only on one of them:
T(n) = T(n/2) + 1
Which will be in Theta(logn)
The base is not a factor in big O complexity, because
log_k(n) = log_2(n)/log_2(k)
and log_2(k) is constant, for any constant k.
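A quick numeric check of that identity (arbitrary example values):

```python
import math

# log_k(n) = log_2(n) / log_2(k); the two sides agree up to floating-point error
n, k = 4096, 8
lhs = math.log(n, k)               # log base 8 of 4096
rhs = math.log2(n) / math.log2(k)  # 12 / 3
```

Both sides come out to 4, and changing k only rescales the result by the constant 1/log_2(k).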
I came across this question in one of the Stanford slide decks: what would be the effect on the complexity of merge sort if we split the array into 4 or 8 parts instead of 2?
It would be the same: O(n log n). You will have a shorter tree and the base of the logarithm will change, but that doesn't matter for big-oh, because a logarithm in a base a differs from a logarithm in base b by a constant:
log_a(x) = log_b(x) / log_b(a)
1 / log_b(a) = constant
And big-oh ignores constants.
You will still have to do O(n) work per tree level in order to merge the 4 or 8 or however many parts, which, combined with more recursive calls, might just make the whole thing even slower in practice.
In general, you can split your array into equal size subarrays of any size and then sort the subarrays recursively, and then use a min-heap to keep extracting the next smallest element from the collection of sorted subarrays. If the number of subarrays you break into is constant, then the execution time for each min-heap per operation is constant, so you arrive at the same O(n log n) time.
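A sketch of that variant (splitting into k = 4 parts and using Python's heapq.merge, which repeatedly pops the smallest head of the k sorted runs via a min-heap):

```python
import heapq

def mergesort_kway(a, k=4):
    """Split into k roughly equal parts, sort each recursively, then k-way merge."""
    if len(a) <= 1:
        return list(a)
    step = -(-len(a) // k)  # ceiling division: size of each part
    parts = [mergesort_kway(a[i:i + step], k) for i in range(0, len(a), step)]
    # with k constant, each heap operation inside the merge is O(log k) = O(1)
    return list(heapq.merge(*parts))
```

With constant k, each level still does O(n) work and there are O(log_k n) = O(log n) levels, giving O(n log n).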
Intuitively it would be the same, as there is not much difference between splitting the array into two parts and then splitting those again, versus splitting it into 4 parts from the beginning.
A more formal proof by induction (I'll assume the array is split into k parts):
Definitions:
Let T(N) = the number of array stores needed to mergesort an input of size N.
The mergesort recurrence is then T(N) = k*T(N/k) + N (for N > 1, with T(1) = 0).
Claim:
If T(N) satisfies the recurrence above, then T(N) = N*lg(N).
Note: all the logarithms below are in base k.
Proof:
Base case: N=1. Here T(1) = 0 = 1*lg(1), so the claim holds.
Inductive hypothesis: T(N) = NlgN
Goal: show that T(kN) = kN(lg(kN))
T(kN) = k*T(N) + kN                [mergesort recurrence]
      = kN*lg(N) + kN              [inductive hypothesis]
      = kN*lg(kN/k) + kN           [algebra]
      = kN*(lg(kN) - lg(k)) + kN   [algebra]
      = kN*(lg(kN) - 1) + kN       [algebra - for base k, lg(k) = 1]
      = kN*lg(kN)                  [QED]
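The claim can also be sanity-checked numerically, e.g. for k = 4 (a sketch; T follows the definition above, and the expected value is N times the base-k log of N):

```python
def T(N, k=4):
    """Evaluate T(N) = k*T(N/k) + N with T(1) = 0, for N a power of k."""
    if N == 1:
        return 0
    return k * T(N // k, k) + N
```

For N = 4^5 = 1024 this gives 1024 * 5, matching N * log_4(N).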
I came across this piece of code to perform merge sort on a linked list.
The author claims that it runs in O(n log n) time.
Here is the link for it:
http://www.geeksforgeeks.org/merge-sort-for-linked-list/
My claim is that it takes at least O(n^2) time, and here is my argument.
You divide the list (be it an array or a linked list) log n times (refer to the recursion tree). During each partition, given a list of size i = n, n/2, ..., n/2^k, we take O(i) time to partition the original/already-divided list. Since the sum of the O(i) terms down one branch is O(n), we can say (sloppily) that we take O(n) time for any given call of partition. Given the time taken to perform a single partition, the question now is how many partitions happen in all. We observe that the number of partitions at level i is 2^i, so summing 2^0 + 2^1 + ... + 2^(lg n) gives 2^(lg n + 1) - 1, which is nothing but 2n - 1 on simplification, implying that we call partition roughly n times. So the complexity is at least big-omega of n^2.
If I am wrong, please let me know where. Thanks :)
Then, after some retrospection, I applied the master method to the recurrence relation, replacing the Theta(1) divide cost of the conventional merge sort on arrays with Theta(n) for this variant (because the divide and combine operations each take Theta(n) time), and the running time turned out to be Theta(n lg n).
I also noticed that the cost at each level is n (because 2^i * (n/2^i) = n is the time taken at each level), so it's Theta(n) per level times lg n levels, implying Theta(n lg n). Did I just solve my own question? Please help, I'm kind of confused myself.
The recursive complexity definition for an input list of size n is
T(n) = O(n) + 2 * T(n / 2)
Expanding this we get:
T(n) = O(n) + 2 * (O(n / 2) + 2 * T(n / 4))
= O(n) + O(n) + 4 * T(n / 4)
Expanding again we get:
T(n) = O(n) + O(n) + O(n) + 8 * T(n / 8)
Clearly there is a pattern here. Since we can repeat this expansion exactly O(log n) times, we have
T(n) = O(n) + O(n) + ... + O(n) (O(log n) terms)
= O(n log n)
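You can also instrument a list-based version to count the work done across all splits and merges, instead of reasoning about it (a sketch; the counting scheme is mine, not the linked article's):

```python
def mergesort_count(a):
    """Return (sorted list, element visits across all splits and merges)."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, cl = mergesort_count(a[:mid])
    right, cr = mergesort_count(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]
    # len(a) visits to split plus len(a) visits to merge at this call
    return merged, cl + cr + 2 * len(a)
```

For n = 1024 the count is 2 * n * log2(n) = 20480, nowhere near n^2.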
You are counting the same sum twice, for some reason.
Splitting and merging a linked list of size n takes O(n) time per level, and the depth of the recursion is O(log n).
Your argument was that a splitting step takes O(i) time, that the split steps sum to O(n), and then you treated that O(n) as the cost of a single split.
Instead, consider this: a problem of size n forms two n/2 problems, four n/4 problems, eight n/8 problems, and so on, until 2^(log n) subproblems of size n/2^(log n) = 1 are formed. Summing these up, you get O(n log n) to perform the splits,
and another O(n log n) to combine the subproblems.
I think this is an interesting one to solve over the holidays:
Given n integers, all of them within 1..n^3, determine in O(n^2) time whether there is a triple among them that satisfies the Pythagorean equation.
As you know, the Pythagorean equation is a^2 + b^2 = c^2; for example, 3^2 + 4^2 = 5^2.
As you know, O(n^2 log n) is easy (with a little thinking), but it will help toward solving it in O(n^2). (Space is not important.)
Edit: As yi_H pointed out, a lookup table can solve this problem easily, so to make it harder, the space limit is O(n^2).
O(n^2) time, O(n) space: square all array elements, sort them, then for each square z use the classic linear-time algorithm to determine whether there exist squares x, y in the array such that x + y = z.
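A sketch of that approach (the inner scan is the classic linear-time two-pointer pair-sum algorithm on a sorted array; it assumes positive integers, as in the problem statement):

```python
def has_pythagorean_triple(nums):
    """O(n^2) time, O(n) space: sort the squares, then scan pairs for each target."""
    sq = sorted(x * x for x in nums)
    n = len(sq)
    for k in range(n):                  # sq[k] is the candidate c^2
        target, i, j = sq[k], 0, n - 1
        while i < j:                    # classic two-pointer pair-sum scan: O(n)
            s = sq[i] + sq[j]
            if s == target:
                return True
            if s < target:
                i += 1
            else:
                j -= 1
    return False
```

The outer loop runs n times and each scan is O(n), so the total is O(n^2) after the O(n log n) sort.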
All pythagorean triplets (a,b,c) have the following relation:
a = d * (2 * m * n)
b = d * (m^2 - n^2)
c = d * (m^2 + n^2)
where
d >= 1, m > n >= 1, and gcd(m, n) = 1
(meaning: m and n have no common factor).
I guess one can find an algorithm to produce all triplets that are below n^3 using this info.
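One can indeed; a sketch that enumerates every triple with entries up to a given bound, using that parametrization (m > n >= 1 with gcd(m, n) = 1 and m - n odd generates each primitive triple once, and d scales them):

```python
from math import gcd

def triples_up_to(limit):
    """All Pythagorean triples (a, b, c), a < b < c <= limit, via Euclid's formula."""
    out = set()
    m = 2
    while m * m + 1 <= limit:          # smallest c for this m is m^2 + 1
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:   # primitive-triple conditions
                a, b, c = m * m - n * n, 2 * m * n, m * m + n * n
                d = 1
                while d * c <= limit:                  # scale by d = 1, 2, 3, ...
                    out.add(tuple(sorted((d * a, d * b, d * c))))
                    d += 1
        m += 1
    return out
```

For limit = 25 this yields the eight triples (3,4,5), (6,8,10), (9,12,15), (12,16,20), (15,20,25), (5,12,13), (8,15,17), and (7,24,25).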
I guess the O(n^2 log n) solution would be to sort the numbers, take every pair a, b (O(n^2) pairs), and check whether there is a c among the numbers for which c^2 = a^2 + b^2. You can do the lookup for c with binary search, which is O(log n).
Now, if space isn't an issue, you can create a hash of all the values in O(n); then you can look up c in this hash in O(1), so the whole thing becomes O(n^2). You can even create a direct lookup table, since the numbers are between 1..n^3, which gives a guaranteed O(1) lookup ;) You can also use a special lookup table which can do initialization, add, and lookup in O(1).
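A sketch of that hash-based variant, with a Python set standing in for the O(1)-lookup table:

```python
def has_triple_hash(nums):
    """O(n^2) expected time, O(n) extra space: test a^2 + b^2 against a set of squares."""
    squares = {x * x for x in nums}          # O(n) to build, O(1) expected lookup
    vals = list(nums)
    for i in range(len(vals)):
        for j in range(i + 1, len(vals)):    # every unordered pair: O(n^2)
            if vals[i] ** 2 + vals[j] ** 2 in squares:
                return True
    return False
```

A set gives expected O(1) lookups; the guaranteed-O(1) direct-address table mentioned above would trade that for more space.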