Let's suppose we have a sorted list containing N elements. I have read in a textbook that an algorithm which determines whether this list has duplicates must perform at least n-1 comparisons (that is, n-1 is a lower bound on the number of comparisons). I don't understand this, because if, say, the 1st and 2nd elements are duplicates, the algorithm would simply return 'yes' after performing exactly one comparison. What am I getting wrong? Is there a simple proof of the n-1 lower bound?
When talking about complexity, you never consider only one input, otherwise there would always be some O(1) algorithm returning the expected answer.
An algorithm is supposed to work whatever the given input is (according to the specifications).
So for you, n-1 is a lower bound on the worst-case complexity of any algorithm (based on comparisons), i.e. for any such algorithm you can find an input for which it needs at least n-1 comparisons.
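As a sketch (my Python, not part of the original answer, assuming a plain list as input), the usual adjacent-comparison check looks like this; on a duplicate-free input it performs exactly n-1 comparisons, which is the case the lower bound is about:

def has_duplicates(sorted_list):
    # Compare each adjacent pair: at most n-1 comparisons,
    # and exactly n-1 when there are no duplicates.
    for i in range(len(sorted_list) - 1):
        if sorted_list[i] == sorted_list[i + 1]:
            return True
    return False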
You are probably confusing the two terms "lower bound" and "worst/best case".
The order of the input determines the worst/best case, so the lower bound, or "big omega", can be stated as n-1 for the worst case and 1 for the best case.
But generally, the time complexity is determined for the worst cases.
Related: does every algorithm have a 'best case' and a 'worst case'? This was a question raised by someone, who answered it with "no"! I thought that every algorithm has such cases depending on its input, so that a particular set of inputs might be one algorithm's best case while another algorithm considers the same inputs its worst case.
So which answer is correct? And if there are algorithms that don't have a best case, can you give an example?
Thank You :)
No, not every algorithm has a best and worst case. An example is the linear search to find the max/min element in an unsorted array: it always checks all items in the array no matter what. Its time complexity is therefore Theta(N), independent of the particular input.
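A quick illustrative sketch (mine, not the answerer's), assuming a non-empty Python list:

def find_max(items):
    # Always examines every element, regardless of input order: Theta(N).
    best = items[0]
    for x in items[1:]:
        if x > best:
            best = x
    return best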
A best-case input is the case in which your code would make the least number of procedure calls. E.g. you have an if in your code, inside which you iterate over every element, and there is no such work in the else part. So any input for which the code does not enter the if block will be a best-case input and, conversely, any input for which the code enters this if will be a worst case for this algorithm.
If, for any algorithm, branching or recursion or looping causes a difference in the complexity for that algorithm, it will have a possible best-case and a possible worst-case scenario. Otherwise, you can say that it does not, or that it has the same complexity in the best and worst cases.
Talking about sorting algorithms, let's take merge sort and quicksort as examples. (I believe you know them well, and their complexities, for that matter.)
In merge sort, the array is divided into two equal parts every time, which gives the log n factor from splitting, while recombining takes O(n) time at every level of the split. So the total complexity is always O(n log n) and it does not depend on the input. So you can either say merge sort has no best/worst case conditions, or that its complexity is the same for the best and worst cases.
On the other hand, if quicksort (not randomized, pivot always the 1st element) is given a random input, it will split the array into two parts (which may or may not be equal, it doesn't matter), and as long as it does this, the log factor of its complexity comes into the picture (though the base won't always be 2). But if the input is already sorted (ascending or descending), it will always split it into 1 element + the rest of the array, so it takes n-1 levels of recursion to split the array, which turns the O(log n) factor into O(n) and changes the overall complexity to O(n^2). So quicksort has best and worst cases with different time complexities.
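A minimal sketch of the non-randomized variant described above (my Python, purely illustrative): on already sorted input every partition is maximally unbalanced (the pivot against all remaining elements), which is what produces the O(n^2) behaviour.

def quicksort_first_pivot(arr):
    # Pivot is always the first element; no randomization.
    if len(arr) <= 1:
        return arr
    pivot, rest = arr[0], arr[1:]
    left = [x for x in rest if x < pivot]       # empty when arr is already sorted ascending
    right = [x for x in rest if x >= pivot]     # everything else lands here
    return quicksort_first_pivot(left) + [pivot] + quicksort_first_pivot(right)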
Well, I believe every algorithm has a best and worst case, though there's no guarantee that they will differ. For example, the algorithm that returns the first element in an array has O(1) best, worst and average cases.
Contrived, I know, but what I'm saying is that it depends entirely on the algorithm what their best and worst cases are, but the cases will exist, even if they're the same, or unbounded at the top end.
I think it's reasonable to say that most algorithms have a best and a worst case. If you think about algorithms in terms of asymptotic analysis, you can say that an O(n) search algorithm will perform worse than an O(log n) algorithm. However, if you provide the O(n) algorithm with data where the search item is early in the data set, and the O(log n) algorithm with data where the search item is in the last node to be found, the O(n) algorithm will perform faster than the O(log n) one.
However, an algorithm that has to examine every input element every time, such as one computing an average, won't have a distinct best/worst case, as the processing time is the same no matter the data.
If you are unfamiliar with Asymptotic Analysis (AKA big O) I suggest you learn about it to get a better understanding of what you are asking.
I've been trying to figure this out all day. Some other threads address this, but I really don't understand the answers. There are also many answers that contradict one another.
I understand that an algorithm will never take longer than its upper bound and never be faster than its lower bound. However, I didn't know an upper bound existed for best-case time and a lower bound existed for worst-case time. This question really threw me for a loop. I can't wrap my head around this... a given case's running time can have different upper and lower bounds?
For example, if someone asked: "Show that the worst-case running time of some algorithm on a heap of size n is Big Omega(lg(n))". How do you possibly get a lower bound, any bound for that matter, when given a run time?
So, in summation, an algorithm's worst-case upper bound can be different from its worst-case lower bound? How can this be? Once given the case, don't bounds become irrelevant? I'm trying to study algorithms independently and I really need to wrap my head around this first.
The meat of my accepted answer to that question is a function whose running time oscillates between n^2 and n^3 depending on whether n is odd. The point that I was trying to make is that sometimes bounds of the form O(n^k) and Omega(n^k) aren't sufficiently descriptive, even though the worst case running time is a perfectly well defined function (which, like all functions, is its own best lower and upper bound). This happens with more natural functions like n log n, which is Omega(n^k) but not O(n^k) for k ≤ 1, and O(n^k) but not Omega(n^k) for k > 1 (and hence not Theta(n^k) regardless of how we choose a constant k).
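A contrived sketch of the kind of function described (mine, purely illustrative, in Python): its running time is Omega(n^2) and O(n^3), but not Theta(n^k) for any single exponent k.

def oscillating_work(n):
    # Hypothetical routine: ~n^3 unit operations when n is odd, ~n^2 when n is even.
    steps = n ** 3 if n % 2 else n ** 2
    count = 0
    for _ in range(steps):
        count += 1
    return count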
Suppose you write a program like this to find the smallest prime factor of an integer:
def lpf(n):
    # return the smallest factor of n that is >= 2, i.e. the least prime factor
    for i in range(2, n + 1):
        if n % i == 0:
            return i
If you run the function on the number 10^11 + 3, it will take 10^11 + 2 steps. If you run it on the number 10^11 + 4 it will take just one step. So the function's best-case time is O(1) steps and its worst-case time is O(n) steps.
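A quick usage sketch, assuming the Python version of lpf above:

print(lpf(10 ** 11 + 4))   # returns 2 after a single trial division (the number is even)
print(lpf(13))             # returns 13 only after trying every i from 2 to 13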
Big O notation describes efficiency in terms of the number of iterations at runtime, generally based on the size of the input data set.
The notation is written in its simplest form, ignoring constant multipliers and lower-order additive terms and keeping only the dominant term. If you have an operation that is O(1), it executes in constant time, no matter the input data.
However if you have something such as O(N) or O(log(N)), they will execute at different rates depending on input data.
The high and low bounds describe the largest and smallest number of iterations, respectively, that an algorithm can take.
Example: for O(N), the high bound corresponds to the largest input data set and the low bound to the smallest.
Extra sources:
Big O Cheat Sheet and MIT Lecture Notes
UPDATE:
Looking at the Stack Overflow question mentioned above, that algorithm is broken into three parts with three possible types of runtime, depending on the data. Really, these are three different algorithms designed to handle different data values. An algorithm is generally classified with just one notation of efficiency, namely the tightest one that holds for ALL possible values of N.
In the case of O(N^2), a larger data set will take quadratically longer, while a smaller one will proceed quickly. The algorithm determines how quickly a data set will be processed, yet the bounds are given with respect to the range of data the algorithm is designed to handle.
I will try to explain it using the quicksort algorithm.
In quicksort you have an array and choose an element as pivot. The next step is to partition the input array into two arrays. The first one will contain elements < pivot and the second one elements > pivot.
Now assume you apply quicksort to an already sorted list and the pivot element is always the last element of the array. The result of the partition will be an array of size n-1 and an array of size 1 (the pivot element). This results in a runtime of O(n*n). Now assume that the pivot element always splits the array into two equal-sized arrays. At every step the array size is cut in half. This results in O(n log n). I hope this example makes it a bit clearer for you.
Another well-known sorting algorithm is mergesort. Mergesort always has a runtime of O(n log n). In mergesort you cut the array down until only one element is left, then climb up the call stack merging the one-element arrays, after that the arrays of size two, and so on.
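A minimal mergesort sketch (my Python, not from the original answer) showing why the input order doesn't matter: the splitting and the merging do the same amount of work on every input.

def mergesort(arr):
    # Always splits in half and always merges: Theta(n log n) on every input.
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left, right = mergesort(arr[:mid]), mergesort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]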
Let's say you implement a set using an array. To insert an element you simply put it in the next available bucket. If there is no available bucket, you increase the capacity of the array by a value m.
For the insert algorithm, "there is not enough space" is the worst case.
insert(S, e):
    if size(S) >= capacity(S):
        reserve(S, size(S) + m)
    put(S, e)
Assume we never delete elements. By keeping track of the last available position, put, size and capacity are Θ(1) in time and space.
What about reserve? If it is implemented like realloc in C, in the best case you just allocate new memory at the end of the existing memory (best case for reserve), or you have to move all existing elements as well (worst case for reserve).
The worst-case lower bound for insert is the best case of reserve(), which is linear in m if we don't nitpick: insert in the worst case is Ω(m) in space and time.
The worst-case upper bound for insert is the worst case of reserve(), which is linear in m+n: insert in the worst case is O(m+n) in space and time.
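A Python sketch of the pseudocode above (all names hypothetical, M standing in for the growth increment m):

M = 8  # growth increment; hypothetical choice

class ArraySet:
    # Array-backed set: insert is O(1) while there is spare capacity,
    # but reserve() may have to copy all n existing elements.
    def __init__(self):
        self.capacity = M
        self.size = 0
        self.data = [None] * self.capacity

    def reserve(self, new_capacity):
        # Worst case for reserve: copy every existing element into the new storage.
        new_data = [None] * new_capacity
        new_data[:self.size] = self.data[:self.size]
        self.data = new_data
        self.capacity = new_capacity

    def insert(self, e):
        if self.size >= self.capacity:
            self.reserve(self.size + M)
        self.data[self.size] = e
        self.size += 1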
First, I know the lower bound is O(nlogn), and how to prove it.
And I agree the lower bound should be O(nlogn).
What I don't quite understand is:
For some special cases, the number of comparisons could actually be even lower than the lower bound. For example, using bubble sort to sort an already sorted array takes only O(n) comparisons.
So how to actually understand the idea of lower bound?
The classical definition on Wikipedia (http://en.wikipedia.org/wiki/Upper_and_lower_bounds) does not help much.
My current understanding of this is:
lower bound of the comparison-based sorting is actually the upper bound for the worst case.
namely, how well you could do in the worst case.
Is this correct? Thanks.
lower bound of the comparison-based sorting is actually the upper bound for the best case.
No.
The function that you are bounding is the worst-case running time of the best possible sorting algorithm.
Imagine the following game:
We choose some number n.
You pick your favorite sorting algorithm.
After looking at your algorithm, I pick some input sequence of length n.
We run your algorithm on my input, and you give me a dollar for every executed instruction.
The O(n log n) upper bound means you can limit your cost to at most O(n log n) dollars, no matter what input sequence I choose.
The Ω(n log n) lower bound means that I can force you to pay at least Ω(n log n) dollars, no matter what sorting algorithm you choose.
Also: "The lower bound is O(n log n)" doesn't make any sense. O(f(n)) means "at most a constant times f(n)". But "lower bound" means "at least ...". So saying "a lower bound of O(n log n)" is exactly like saying "You can save up to 50% or more!" — it's completely meaningless! The correct notation for lower bounds is Ω(...).
The problem of sorting can be viewed as follows.
Input: A sequence of n numbers a1, a2, ..., an.
Output: A permutation (reordering) a'1, a'2, ..., a'n of the input sequence such that a'1 <= a'2 <= ... <= a'n.
A sorting algorithm is comparison based if it uses comparison operators to find the order between two numbers. Comparison sorts can be viewed abstractly in terms of decision trees. A decision tree is a full binary tree that represents the comparisons between elements that are performed by a particular sorting algorithm operating on an input of a given size. The execution of the sorting algorithm corresponds to tracing a path from the root of the decision tree to a leaf. At each internal node, a comparison ai <= aj is made. The left subtree then dictates subsequent comparisons for the case ai <= aj, and the right subtree dictates subsequent comparisons for the case ai > aj. When we come to a leaf, the sorting algorithm has established the ordering. So we can say the following about the decision tree.
1) Each of the n! permutations on n elements must appear as one of the leaves of the decision tree for the sorting algorithm to sort properly.
2) Let x be the maximum number of comparisons made by the sorting algorithm. The maximum height of the decision tree would then be x, and a tree with maximum height x has at most 2^x leaves.
After combining the above two facts, we get the following relation:
n! <= 2^x
Taking log2 on both sides:
log2(n!) <= x
Since log2(n!) = Θ(n log n), we can say
x = Ω(n log n)
Therefore, any comparison based sorting algorithm must make at least Ω(n log n) comparisons to sort the input array, and heapsort and merge sort are asymptotically optimal comparison sorts.
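As a small numerical sketch (mine, not part of the original answer), you can compute the information-theoretic bound ceil(log2(n!)) directly and compare it with n*log2(n):

import math

def min_comparisons(n):
    # A decision tree needs at least n! leaves, so its height
    # (the worst-case number of comparisons) is at least log2(n!).
    return math.ceil(sum(math.log2(k) for k in range(2, n + 1)))

for n in (10, 100, 1000):
    print(n, min_comparisons(n), round(n * math.log2(n)))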
When you do asymptotic analysis you derive an O or Θ or Ω for all input.
But you can also make analysis on whether properties of the input affect the runtime.
For example algorithms that take as input something almost sorted have better performance than the formal asymptotic formula due to the input characteristics and the structure of the algorithm. Examples are bubblesort and quicksort.
It is not that you can go below the lower bound; it is only the behavior of a particular implementation on specific inputs.
Imagine all the possible arrays of things that could be sorted. Let's say they are arrays of length 'n', and ignore stuff like arrays with one element (which, of course, are always already sorted).
Imagine a long list of all possible value combinations for that array. Notice that we can simplify this a bit, since the values in the array always have some sort of ordering. So if we replace the smallest one with the number 1, the next one with 1 or 2 (depending on whether it's equal or greater) and so forth, we end up with the same sorting problem as if we allowed any value at all. (This means an array of length n will need, at most, the numbers 1 to n. Maybe fewer if some are equal.)
Then put a number beside each one telling how much work it takes to sort that array with those values in it. You could put several numbers. For example, you could put the number of comparisons it takes. Or you could put the number of element moves or swaps it takes. Whatever number you put there indicates how many operations it takes. You could put the sum of them.
One thing you have to do is ignore any special information. For example, you can't know ahead of time that the arrangement of values in the array is already sorted. Your algorithm has to do the same steps with that array as with any other. (But the first step could be to check whether it's sorted. Usually that doesn't help in sorting, though.)
So: the largest number, measured by comparisons, is the number of comparisons needed when the values are arranged in a pathologically bad way. The smallest number, similarly, is the number of comparisons needed when the values are arranged in a really good way.
For a bubble sort, the best case (shortest or fastest) is when the values are already in order. But that's only if you use a flag to tell whether you swapped any values. In that best case, you look at each adjacent pair of elements one time, find they are already sorted, and when you get to the end you find you haven't swapped anything, so you are done. That's n-1 comparisons total and it forms the lowest number of comparisons you could ever do.
It would take me a while to work out the worst case. I haven't looked at a bubble sort in decades. But I would guess it's the case where they are in reverse order. You do the 1st comparison and find the 1st element needs to move. You slide it along, comparing it to each element in turn, and finally swap it into the last position. So you did n-1 comparisons in that pass. The 2nd pass starts at the 2nd element and does n-2 comparisons, and so forth. So you do (n-1)+(n-2)+(n-3)+...+1 comparisons in this case, which is about (n**2)/2.
Maybe your variation on bubble sort is better than the one I described. No matter.
For bubble sort, then, the lower bound is n-1 comparisons and the upper bound is about (n**2)/2.
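A sketch of the flag-based bubble sort being described (my Python): on an already sorted input it stops after one pass of n-1 comparisons, and on a reverse-sorted input it does roughly (n**2)/2 comparisons.

def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:          # one comparison
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:                      # flag: nothing swapped, already sorted
            break
    return arr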
Other sort algorithms have better performance.
You might want to remember that there are other operations that cost besides comparisons. We use comparisons because much sorting is done with strings and a string comparison is costly in compute time.
You could use element swaps to count (or the sum of comparisons and element swaps), but swaps are typically cheaper than comparisons of strings. If you have numbers, they are similar.
You could also use more esoteric things like branch prediction failures or memory cache misses for measuring cost.
There is a question on an assignment that was due today, for which solutions have been released, and I don't understand the correct answer. The question deals with the best-case performance of disjoint sets in the form of disjoint-set forests that use the weighted union algorithm to improve performance (the smaller of the trees has its root connected as a child to the root of the larger of the two trees), but without using the path compression algorithm.
The question is whether the best-case performance of doing (n-1) Union operations on n singleton nodes and m >= n Find operations, in any order, is Omega(m*logn), which the solution confirms is correct like this:
There is a sequence S of n-1 Unions followed by m >= n Finds that takes Omega(m log n) time. The sequence S starts with n-1 Unions that build a tree of depth Omega(log n). Then it has m >= n Finds, each one for the deepest leaf of that tree, so each one takes Omega(log n) time.
My question is, why does that prove the lower bound is Omega(m*logn) is correct? Isn't that just an isolated example of when the bound would be Omega(m*logn) that doesn't prove it for all inputs? I am certain one needs to only show one counter-example when disproving a claim but needs to prove a predicate for all possible inputs in order to prove its correctness.
In my answer, I pointed out that you could have a case where you start off by joining two singleton nodes together. You then join another singleton to that 2-node tree, giving 3 nodes sharing the same parent, then another, etc., until you have joined all n nodes. You then have a tree where n-1 nodes all point to the same parent, which is essentially the result you obtain if you use path compression. Then every FIND is executed in O(1) time. Thus, a sequence of (n-1) Unions and m >= n Finds ends up being Omega(n-1+m) = Omega(n+m) = Omega(m).
Doesn't this imply that the Omega(m*logn) bound is not tight and the claim is, therefore, incorrect? I'm starting to wonder if I don't fully understand Big-O/Omega/Theta :/
EDIT : fixed up the question to be a little clearer
EDIT2: Here is the original question the way it was presented and the solution (it took me a little while to realize that Gambarino and the other guy are completely made up; hardcore Italian prof)
Seems like I indeed misunderstood the concept of Big-Omega. For some strange reason, I presumed Big-Omega to be equivalent to "the input to the function that results in the best possible performance". In reality, most likely unsurprisingly to the reader but a revelation to me, Big-Omega simply describes a lower bound on a function. That's it. Therefore, the worst-case input will have a lower and an upper bound (a big-Omega and a big-O), and so will the best possible input. In the case of big-Omega here, all we had to do was come up with a scenario where we pick the 'best' input given the limitations of the worst case, i.e. show that there is some input of size n that will take the algorithm at least m*logn steps. If such an input exists, then the lower bound holds.
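For reference, a minimal sketch of weighted union (union by size) without path compression, in Python (mine, names hypothetical). The star-shaped tree from my answer comes from repeatedly unioning a singleton into the growing tree, so every find is one step; the deep tree from the official solution comes from unioning equal-sized trees pairwise, which adds a level each round.

class WeightedUnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        # No path compression: just walk up to the root.
        while self.parent[x] != x:
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra              # smaller tree hangs under the larger root
        self.size[ra] += self.size[rb]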
I'd imagine this is probably something taught in classes, but as a self-taught programmer, I've only seen it rarely.
I've gathered it is something to do with the time, and O(1) is the best, while stuff like O(n^n) is very bad, but could someone point me to a basic explanation of what it actually represents, and where these numbers come from?
Big O refers to the worst case run-time order. It is used to show how well an algorithm scales based on the size of the data set (n->number of items).
Since we are only concerned with the order, constant multipliers are ignored, and any terms which increase less quickly than the dominant term are also removed. Some examples:
A single operation or set of operations is O(1), since it takes some constant time (does not vary based on data set size).
A loop is O(n). Each element in the data set is looped over.
A nested loop is O(n^2); nest another loop inside it and you get O(n^3), and onward.
Things like binary tree searching are O(log n), which is more difficult to show, but at every level in the tree the number of possible solutions is halved, so the number of levels is log(n) (provided the tree is balanced); see the binary search sketch after these examples.
Something like finding the subset of a set of numbers whose sum is closest to a given value is O(2^n) by brute force, since the sum of each of the 2^n subsets needs to be calculated. This is very bad.
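For the binary search point above, a short sketch (my Python, assuming a sorted list): each comparison halves the remaining range, so it takes about log2(n) steps.

def binary_search(sorted_items, target):
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2                 # halve the search range each iteration
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1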
It's a way of expressing time complexity.
O(n) means that for n elements in a list, it takes on the order of n computations (to sort the list, say). Which isn't bad at all. Each increase in n increases the running time linearly.
O(n^n) is bad, because the amount of computation required to perform the sort (or whatever you are doing) will blow up dramatically, even faster than exponentially, as you increase n.
O(1) is the best, as it means a constant number of computations to perform the operation; think of hash tables: looking up a value in a hash table has O(1) time complexity.
Big O notation as applied to an algorithm refers to how the run time of the algorithm depends on the amount of input data. For example, a sorting algorithm will take longer to sort a large data set than a small data set. If for the sorting algorithm example you graph the run time (vertical-axis) vs the number of values to sort (horizontal-axis), for numbers of values from zero to a large number, the nature of the line or curve that results will depend on the sorting algorithm used. Big O notation is a shorthand method for describing the line or curve.
In big O notation, the expression in the brackets is the function that is graphed. If a variable (say n) is included in the expression, this variable refers to the size of the input data set. You say O(1) is the best. This is true because the graph f(n) = 1 does not vary with n: an O(1) algorithm takes the same amount of time to complete regardless of the size of the input data set. By contrast, the run time of an O(n^n) algorithm grows explosively, far faster than any polynomial, as the size of the input data set increases.
That is the basic idea; for a detailed explanation, consult the Wikipedia page titled 'Big O notation'.