time complexity of merge sort [closed]

Why is the best-case time complexity of top-down merge sort O(n log n)?
I think the best case of top-down merge sort is O(1): you only need to compare one time.
What about the time complexity of bottom-up merge sort in the worst, best, and average cases?
One more question: why does each iteration take exactly O(n)? Could someone help with that?

Why is the best-case time complexity of top-down merge sort O(n log n)?
Because at each level of recursion you split the array into two sublists and recursively invoke the algorithm on each. In the best case you split it exactly in half, so each recursive call works on a problem of half the original size. You need log_2(n) levels, and each level takes exactly O(n) (the merging at each level is over all sublists, whose total size is still n), so in total you get O(n log n).
However, with a simple preprocessing pass that checks whether the list is already sorted, the best case can be reduced to O(n).
Since checking whether a list is sorted is itself O(n), the best case cannot be O(1). Note that the "best case" means the best case for general n, not for a specific small size.
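To make this concrete, here is a minimal Python sketch of top-down merge sort with the O(n) already-sorted precheck described above (the function names are just illustrative):

```python
def is_sorted(a):
    # O(n) precheck: one pass over adjacent pairs.
    return all(a[i] <= a[i + 1] for i in range(len(a) - 1))

def merge(left, right):
    # Merging two sorted lists touches every element once:
    # O(len(left) + len(right)).
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])
    out.extend(right[j:])
    return out

def merge_sort(a):
    if is_sorted(a):        # covers the base case too: O(n) best case overall
        return list(a)
    mid = len(a) // 2       # split in half -> log2(n) levels of recursion
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))
```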
What about the time complexity of bottom-up merge sort in the worst, best, and average cases?
The same preprocessing trick gives bottom-up merge sort an O(n) best case as well. Without it, the worst, best, and average cases of bottom-up merge sort are all O(n log n), since in this approach the list is always divided into two lists of equal length (up to a difference of 1).
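For comparison, a sketch of the bottom-up variant, reusing the `merge` helper from the sketch above; it makes log2(n) passes over the whole array, each doing O(n) total merging work:

```python
def merge_sort_bottom_up(a):
    a = list(a)
    width = 1
    while width < len(a):                  # log2(n) passes
        merged = []
        for lo in range(0, len(a), 2 * width):
            # Merge two adjacent runs of length <= width
            # (equal length, up to the final partial run).
            merged.extend(merge(a[lo:lo + width], a[lo + width:lo + 2 * width]))
        a = merged
        width *= 2                         # each pass costs O(n) in total
    return a
```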

Related

Quicksort complexities in depth [closed]

So I am having an exam, and a big part of this exam will be the quicksort algorithm. As everyone knows, the best-case and average-case complexity of this algorithm is O(n log n); the worst case is O(n^2).
As for the worst-case scenario, I know how to explain it: it happens when the selected pivot is the smallest or the biggest value in the array, because then we get n quicksort calls, each of which may take up to n time (I mean the partition operation). Am I right?
Now the best/average case. I've read Cormen's book and understood many things thanks to it, but for quicksort he focuses on the mathematical formulas behind the O(n log n) complexity. I just want to know why it is O(n log n) without going through a full mathematical proof. So far I've only seen the Wikipedia explanation: if we choose a pivot that divides the array into parts of size n/2 and n/2 + 1 each time, we get a call tree of depth log n; but I don't know if that is true, and even if so, why the depth is log n.
I know that there are many materials covering quicksort on the internet, but they only cover the implementation, or just state the complexity without explaining it.
Am I right?
Yes.
we get a call tree of depth log n, but I don't know if that is true
It is.
why is the depth log n?
Because we partition the array in half at every step, the call graph has depth log n.
Picture the recursion tree and its depth: it is log n for the same reason that a search in a balanced BST costs log n, or that binary search in a sorted array takes log n; every level halves the problem size.
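If you prefer the recurrence form: with balanced splits, T(n) = 2T(n/2) + cn, where the cn term is the partitioning pass. Unrolling k levels gives T(n) = 2^k * T(n/2^k) + k*cn, and the subproblems reach size 1 when k = log2(n), so T(n) = O(n log n).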
PS: Math tells the truth; invest in understanding it, and you shall become a better computer scientist! =)
For the best-case scenario, quicksort splits the current array 50% / 50% (in half) on each partition step, so the recursion depth is log2(n) (1/0.5 = 2). Each level does O(n) partitioning work, giving O(n log2(n)); since the base of the logarithm is only a constant factor, this is O(n log(n)).
If each partition step produced a 20% / 80% split, the depth would be governed by the 80% side: log1.25(n) levels (1/0.8 = 1.25), giving O(n log1.25(n)). The base 1.25 is again a constant, so this is also O(n log(n)), even though it's about 3 times slower than the 50% / 50% case when sorting 1 million elements.
The O(n^2) time complexity occurs when each partition step shrinks the partition by only a constant number of elements. The simplest and worst example is when only 1 element (the pivot itself) is removed per partition step.
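As an illustration, here is a minimal quicksort sketch in Python (Lomuto partition with the last element as pivot; a sketch, not a tuned implementation). With this pivot rule, an already-sorted array triggers exactly the one-element-per-step worst case described above:

```python
def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    # Lomuto partition: move everything <= pivot to the left side.
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]     # pivot lands at its final position i
    # The sizes of these two subproblems determine the depth:
    # ~n/2 each gives log n depth; n-1 and 0 gives n depth (O(n^2) total).
    quicksort(a, lo, i - 1)
    quicksort(a, i + 1, hi)
```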

Given n coins, some of which are heavier, find the number of heavy coins? [closed]

Given n coins, some of which are heavier than the others, give an algorithm for finding the number of heavy coins using O(log^2 n) weighings. Note that all heavy coins have the same weight, and all the light ones share the same weight too.
You are given a balance using which you can compare the weights of two disjoint subsets of coins. Note that the balance only indicates which subset is heavier, or whether they have equal weights, and not the absolute weights.
I won't give away the whole answer, but I'll help you break it down.
1. Find an O(log n) algorithm to find a single heavy coin (see the sketch after the hints).
2. Find an O(log n) algorithm to split a set into two sets with equal numbers of heavy coins and equal numbers of light coins, plus up to two leftovers (for when there is not an even amount of each).
3. Combine algorithms #1 and #2.
Hints:
Algorithm #1 is independent of algorithm #2.
O(log n) hints at binary search.
How might you end up with O(log^2(n)) with two O(log(n)) algorithms?
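Without giving away hints #2 and #3, here is a sketch of hint #1 only: binary-searching for one heavy coin in O(log n) weighings. The `balance(left, right)` oracle is hypothetical (returning 1, 0, or -1 for left-heavier, tie, right-heavier), and the function assumes its input contains at least one heavy coin:

```python
def find_heavy(coins, balance):
    # Invariant: `coins` contains at least one heavy coin.
    if len(coins) == 1:
        return coins[0]
    half = len(coins) // 2
    left, right = coins[:half], coins[half:2 * half]
    leftover = coins[2 * half:]            # one coin left over when len is odd
    outcome = balance(left, right)         # one weighing per level: O(log n) total
    if outcome > 0:                        # heavier side has more heavy coins,
        return find_heavy(left, balance)   # hence at least one
    if outcome < 0:
        return find_heavy(right, balance)
    # Tie: both halves hold the same number of heavy coins. Either that number
    # is >= 1 (so `left` qualifies), or it is 0 and the leftover coin must be
    # heavy -- so `left + leftover` is guaranteed to contain a heavy coin.
    return find_heavy(left + leftover, balance)
```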

recursive algorithm's time usage [closed]

My friend and I discussed the following algorithm problem:
"Describe a recursive algorithm for finding the maximum element in an array A of n elements. What is your running time and space usage?"
We concluded that it has O(n) time usage, reasoning as follows: F(n) compares A[n] with F(n-1). At the base case of the recursion it compares A[0] and A[1] and returns the bigger one, which is then compared with A[2]; as the recursion proceeds, it finally returns the maximum element of the array.
Across the n levels of recursion, each level does only one comparison, so we guessed it has O(n) time usage.
We aren't sure about our solution, so we would welcome any comments on this algorithm and our reasoning. Thank you.
Your approach to finding the time complexity is fine if the array contains integers. In the case of numbers, comparing two of them can be considered a unit operation, and while iterating over the array to find the maximum value, this operation is performed n times. Hence O(n).
But if the array contains complex datatypes, say strings, then comparing two elements cannot be considered a unit operation: to compare strings you may have to iterate over each character. In that case the time complexity of the algorithm also starts to depend on the length of the strings in your array. Similarly, for other datatypes, comparing two objects may not be a unit operation. But in your case it looks like the array contains numbers, so you are good.
Yes, you are correct; it is in fact O(n). You can show it quite simply. The basic operation of the algorithm is the comparison, and in each step of the recursion the comparison is done only once. So you can write
m(n) = m(n-1) + 1
m(n) = m(n-2) + 1 + 1
m(n) = m(n-3) + 2 + 1
Generalizing, we get
m(n) = m(n-i) + i
In your base case you do no comparisons (the base case is no elements left, so you return the current largest). You can write this as
m(0) = 0
Now, substituting i = n into the recurrence to reach the base case, we get
m(n) = m(0) + n
but m(0) = 0, so we get
m(n) = n
Hence your algorithm is O(n). There are other ways to prove this too. And even without a mathematical proof you can logically say your algorithm is O(n) since it does only one basic operation every recursive step, and the algorithm will always recurse n steps irrespective of the input.
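For reference, a minimal Python sketch of the algorithm being analyzed (one comparison per call, n levels of recursion). Note that besides O(n) time it also uses O(n) space for the call stack, which answers the space-usage part of the question:

```python
def recursive_max(a, n=None):
    # One comparison per call and a recursion depth of n:
    # O(n) time and O(n) call-stack space.
    if n is None:
        n = len(a)
    if n == 1:                            # base case: a single element
        return a[0]
    rest_max = recursive_max(a, n - 1)    # maximum of a[0..n-2]
    return a[n - 1] if a[n - 1] > rest_max else rest_max
```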

O(n) sorting algorithm possible? [closed]

Just a simple algorithm to sort small integers, but it must be O(n).
A radix sort is one approach that's O(n). Since you're dealing with small integers, it shouldn't be too hard to implement.
Of course, the fine print in the definition of O(n) is what gets you. Radix sort, for example, is really n*log(n) once you account for needing a deeper tree as you accommodate more values; it is only defined as O(n) by the trick of capping the range of values to be sorted. There is no way to truly beat n*log(n) in the general case.
E.g., for 8-bit values I can easily achieve O(n) by simply having a 256-entry array. But if I go to, say, 32-bit values, then I must have an array with 4G entries, and the address decoder for the memory chip holding that array will have grown with log(n) of the size of the chip. Yes, I can say that the version with 4G entries is O(n), but at an electronic level the addressing is log(n) slower and more complex. Additionally, the buses inside the chip must drive more current, and it takes longer for a memory cell, once "read", to dump its contents onto the bus. All of those effects are log(n).
Simply put:
If you have no prior information about the numbers you're sorting, you cannot do better than O(n log n) on average.
If you have more information (like the fact that you're dealing with integers), you can get O(n) algorithms.
A great resource is these Wikipedia tables. Have a look at the second one.
To the best of my knowledge, comparison-based sorting algorithms share a lower bound of Ω(n log n).
To achieve O(n), we can't use comparison-based algorithms; the input must also have additional properties.
In your example, "small integers", I guess, means that the integers fall within a specified range.
If that is the case, you could try a bucket/radix sort algorithm, which does not require any comparisons.
For a simple example, suppose you have n integers to be sorted, all of which fall in the interval [1, 1000]. You just make 1000 buckets and go over the n integers; if an integer is equal to 500, it goes into bucket 500, and so on. Finally you concatenate all the buckets to obtain the sorted list. This takes O(n + k) time with k = 1000 buckets, which is O(n) once the range is fixed.
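A minimal counting-sort sketch of that bucket example (assuming, as above, integers known to lie in [1, 1000]):

```python
def counting_sort(nums, lo=1, hi=1000):
    # One counter per possible value: O(k) space, k = hi - lo + 1.
    counts = [0] * (hi - lo + 1)
    for x in nums:                         # O(n): tally each value
        counts[x - lo] += 1
    out = []
    for value, c in enumerate(counts):     # O(n + k): emit values in order
        out.extend([value + lo] * c)
    return out
```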
The optimum for comparison-based sorting is O(n*log(n)); the proof is not very difficult. BUT you may use counting sort, which is enumeration-based, or the very similar bucket sort. You may also use radix sort, though it is not a complete sort by itself: radix sort just iteratively calls some other stable sort, one digit at a time.

find max consecutive sum, find segments containing a point [closed]

1) Given an array of integers (negative and positive), what is the most efficient algorithm to return the maximum consecutive sum?
a) I thought of solving this with Dynamic Programming, but the complexity is O(n^2). Is there another way?
b) What if we were given an infinite input of integers? Is there a way to output the current max consecutive sum as we go? I guess not.
2) Given an array of segments [start, end] (they may overlap), ordered ascending by start point, and a point: what is the most efficient algorithm to return a segment that contains this point, or all segments that contain it?
I thought of using binary search to hit the last segment that starts before this point, and then trying to traverse right and left.
Any other ideas?
For 1), there is an algorithm that works in O(n).
For 2), I think your approach is not bad (as long as you can't assume any ordering w.r.t. the ending points).
1) As long as the running sum doesn't drop below zero, it's always better to continue the consecutive summation; once it drops below zero, you are better off restarting at the next element. So you pass through the array once from left to right (i.e. a linear-runtime algorithm), remembering the current consecutive sum and the maximum consecutive sum so far, and updating the maximum whenever the current sum gets bigger than it.
At any point of the traversal you can report the max sum so far, so you can use this algorithm on an (infinite) input stream, too.
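That one-pass scheme (commonly known as Kadane's algorithm) looks like this in Python; since each element is consumed exactly once, it works on a stream as well:

```python
def max_consecutive_sum(stream):
    best = None        # maximum consecutive sum seen so far
    current = 0        # best sum of a run ending at the current element
    for x in stream:
        current += x
        if best is None or current > best:
            best = current
        if current < 0:    # a negative prefix never helps: drop it
            current = 0
    return best            # None for empty input
```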
2) Yes, binary search sounds good. If I understand the question correctly, you can start with the rightmost segment whose start point is at or before the query point, and then just traverse the segments to the left. Of course, the worst-case runtime is still linear in the number of segments, but the average should be logarithmic.
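A sketch of that approach with Python's bisect module, representing segments as (start, end) tuples sorted by start; as noted above, the leftward scan is linear in the worst case:

```python
from bisect import bisect_right

def segments_containing(segments, point):
    # `segments` are (start, end) tuples sorted ascending by start.
    # Find the last segment whose start is at or before the point...
    idx = bisect_right(segments, (point, float("inf"))) - 1
    hits = []
    for i in range(idx, -1, -1):           # ...then scan left over candidates
        start, end = segments[i]
        if end >= point:
            hits.append(segments[i])
        # No early exit: a segment ending before the point does not rule out
        # an earlier segment reaching past it.
    return hits
```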
