This is a very simple question but I'm struggling too much to understand the concept completely.
I'm trying to understand the difference between the following statements:
1. There exists an algorithm which sorts an array of n numbers in O(n) in the best case.
2. Every algorithm sorts an array of n numbers in O(n) in the best case.
3. There exists an algorithm which sorts an array of n numbers in Omega(n) in the best case.
4. Every algorithm sorts an array of n numbers in Omega(n) in the best case.
I will first explain what is driving me crazy. I'm not sure about 1 and 3, but I know that one of them can be shown correct just by exhibiting a single case, while the other requires examining all possible inputs. Therefore I know one of them must be true simply by pointing at an already-sorted array, but I can't tell which.
My teacher always told me to think of it like checking who the tallest guy in the class is: for one of these options (1, 3) it's enough to point at him, and there is no reason to examine the whole class.
I do know that if we were examining the worst case instead, the O(n) statements could not be true, because the best comparison-based sorting algorithm (without any assumptions on the input or additional memory) is Omega(n log n) in the worst case.
IMPORTANT NOTE: I'm not looking for a solution (an algorithm which is able to do the sort in question) - only trying to understand the concept a little better.
Thank you!
For 1+3, ask yourself: do you know an algorithm that can sort an array in Theta(n) in the best case? If the answer is yes, then both 1 and 3 are true, since Theta(n) = O(n) ∩ Omega(n) - so if you do have such an algorithm (one that runs in Theta(n) in the best case), it witnesses both 1 and 3.
Hint: optimized bubble sort.
For 2: ask yourself - does EVERY algorithm sort an array of numbers in O(n) in the best case? Do you know an algorithm whose best-case and worst-case time complexities are identical? What happens to the mentioned bubble sort if you take all the optimizations off?
For 4: ask yourself - do you need to read all the elements in order to ensure the array is sorted? If you do, Omega(n) is a definite lower bound; you cannot do better than that.
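To make the hint concrete, here is a minimal sketch (my own illustration, not from any particular source) of bubble sort with an early-exit flag. On an already-sorted array it does a single pass of n-1 comparisons and stops, so its best case is Theta(n), while its worst case stays Theta(n^2):

    def bubble_sort_early_exit(a):
        # Bubble sort with an early-exit flag: Theta(n) best case, Theta(n^2) worst.
        n = len(a)
        for i in range(n - 1):
            swapped = False
            for j in range(n - 1 - i):
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
                    swapped = True
            if not swapped:  # a full pass with no swaps: the array is sorted
                return a
        return a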
Good Luck!
The difference, obviously, is in the terms "O" and "Omega". The first says "growing no faster than", the second says "growing no slower than".
Make sure that you understand the difference between those terms, and you'll see the difference in the sentences.
1 and 3 state completely different things, just as 2 and 4 do.
Look at those (those are NOT the same!):
1~ there exists an algorithm that, for 10 items, doesn't take more than 30 in the best case.
3~ there exists an algorithm that, for 10 items, doesn't take less than 30 in the best case.
2~ every algorithm, for 10 items, takes no more than 30 in the best case.
4~ every algorithm, for 10 items, takes no less than 30 in the best case.
Do you sense the difference now? With O/Omega the difference is similar, but the subject of investigation differs. The examples above talk about performance at one specific point/case, while O/Omega notation tells you about performance relative to the size of the data, but only once the data is "large enough", be it three items or millions, and it drops constant factors:
the function 1000000*n is O(n)
the function 0.000001*n*n is O(n^2)
For small amounts of data, the second is obviously far better than the first. But as the quantity of data rises, the first soon becomes much better!
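A quick numeric check of that crossover (a sketch in Python, using the constants from the example above):

    # 1000000*n vs 0.000001*n*n: the tiny-constant quadratic wins at first,
    # but the linear function takes over once n passes 10**12.
    for n in [10, 1_000_000, 10**12, 10**14]:
        print(n, 1_000_000 * n, 0.000001 * n * n)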
Rewriting the above examples into "more proper" terms, that are more similar to your original sentences:
1~ there exists an algorithm that, for more than N items, doesn't take more than X*N in the best case.
3~ there exists an algorithm that, for more than N items, doesn't take less than X*N in the best case.
2~ every algorithm, for more than N items, takes no more than X*N in the best case.
4~ every algorithm, for more than N items, takes no less than X*N in the best case.
I hope that this helps you with "seeing"/"feeling" the difference!
There is a theorem in Cormen (Th 8.1) which says:
"For comparison-based sorting techniques you cannot have an algorithm to sort a given list which takes fewer than n log n comparisons in the worst case."
I.e., the worst-case time complexity is Omega(n log n) for comparison-based sorting techniques.
Now what I was searching for is whether there exists a similar statement for the best case, or even for the average case, which would say something like:
You cannot have a sorting algorithm which takes less than some time X to sort a given list of elements... in the best case.
Basically, do we have any lower bound for the best case? Or, for that matter, for the average case? (I tried my best to find this, but couldn't find it anywhere.) Please also tell me whether the point I am raising is even worth asking.
Great question! The challenge with defining “average case” complexity is that you have to ask “averaged over what?”
For example, if we assume that the elements of the array have an equal probability of being in any one of the n! possible permutations of n elements, then the Ω(n log n) bound on comparison sorting still holds, though I seem to remember that the proof of this is fairly complicated.
On the other hand, we could assume that there are trends in the data: say, you're measuring temperatures over the course of a day, where you know they generally trend upward and then downward. Many real-world data sets look like this, and there are algorithms like Timsort that can take advantage of those patterns to speed up performance. So perhaps "average" here would mean "averaged over all possible plots formed by a rising and then falling sequence with noise terms added in." I haven't encountered anyone analyzing algorithms in those cases, but I'm sure some work has been done there, and there may even be some nice average-case measures there that are less well known.
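As a hedged illustration of the Timsort idea (my own sketch, not Timsort's actual code): the cheap first step is counting the maximal ascending runs in the data, since a run-merging sort needs roughly log2(runs) merge passes, so data with few runs gets sorted in nearly O(n) time:

    def count_runs(a):
        # Count maximal ascending runs. Real Timsort also detects
        # descending runs and reverses them; this sketch skips that.
        if not a:
            return 0
        runs = 1
        for i in range(1, len(a)):
            if a[i] < a[i - 1]:  # a descent ends the current ascending run
                runs += 1
        return runs

    print(count_runs([1, 4, 9, 3, 7]))  # 2 runs: [1, 4, 9] and [3, 7]
    print(count_runs([5, 4, 3, 2, 1]))  # 5 runs: fully descending input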
Does every algorithm have a 'best case' and a 'worst case'? This was a question raised by someone who answered it with no! I thought that every algorithm's cases depend on its input, so one algorithm might find a particular set of inputs to be its best case while another algorithm considers the same inputs its worst case.
So which answer is correct? And if there are algorithms that don't have a best case, can you give an example?
Thank You :)
No, not every algorithm has a best and a worst case. An example is the linear search for the max/min element in an unsorted array: it always checks all items in the array no matter what. Its time complexity is therefore Theta(N), independent of the particular input.
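A sketch of that scan, to make the point visible: it performs exactly n-1 comparisons on every input, so best, worst, and average case all coincide at Theta(n):

    def find_max(a):
        # Every element is examined and no early exit is possible, so the
        # running time is Theta(n) for every input (assumes a is non-empty).
        best = a[0]
        for x in a[1:]:
            if x > best:
                best = x
        return best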
The best-case input is the one for which your code makes the fewest procedure calls. E.g. your code has an if whose body iterates over every element, with no such work in the else part. Then any input for which the code does not enter the if block is a best-case input, and conversely, any input for which the code enters the if is a worst case for this algorithm.
If, for an algorithm, branching or recursion or looping causes a difference in the complexity, it will have distinct best-case and worst-case scenarios. Otherwise, you can say that it does not, or that its complexity is the same in the best and worst cases.
Talking about sorting algorithms, let's take the examples of merge sort and quicksort. (I believe you know them well, and their complexities, for that matter.)
In merge sort, the array is always divided into two equal parts, giving a log n factor for the splitting, while recombining takes O(n) time (at every level of splits, of course). So the total complexity is always O(n log n) and it does not depend on the input. So you can either say merge sort has no best/worst-case conditions, or that its complexity is the same in the best and worst cases.
On the other hand, if quicksort (not randomized, pivot always the 1st element) is given random input, it will divide the array into two parts (equal or not, it doesn't matter), and the log factor shows up in its complexity (though the base won't always be 2). But if the input is already sorted (ascending or descending), it will always split off 1 element + the rest of the array, so it takes n-1 levels of splitting, which turns the O(log n) factor into O(n) and the overall complexity into O(n^2). So quicksort has best and worst cases with different time complexities.
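A minimal sketch of that first-element-pivot quicksort (deliberately naive, not an efficient in-place version), just to show where the two cases come from:

    def quicksort_first_pivot(a):
        # Random input: splits are usually balanced -> O(n log n).
        # Sorted input: every split is 1 element + the rest, so the
        # recursion depth becomes n and the total work is O(n^2).
        if len(a) <= 1:
            return a
        pivot, rest = a[0], a[1:]
        smaller = [x for x in rest if x < pivot]
        larger = [x for x in rest if x >= pivot]
        return quicksort_first_pivot(smaller) + [pivot] + quicksort_first_pivot(larger)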
Well, I believe every algorithm has a best and a worst case, though there's no guarantee that they will differ. For example, the algorithm that returns the first element of an array has O(1) best, worst, and average cases.
Contrived, I know, but what I'm saying is that it depends entirely on the algorithm what its best and worst cases are - but the cases will exist, even if they're the same, or unbounded at the top end.
I think it's reasonable to say that most algorithms have a best and a worst case. If you think about algorithms in terms of asymptotic analysis, you can say that an O(n) search algorithm will perform worse than an O(log n) algorithm. However, if you give the O(n) algorithm data where the search item appears early in the data set, and the O(log n) algorithm data where the search item is in the last node to be examined, the O(n) algorithm will run faster than the O(log n) one.
However, an algorithm that has to examine each of its inputs every time - such as one that computes an average - won't have a best/worst case, as the processing time is the same no matter the data.
If you are unfamiliar with asymptotic analysis (AKA big O), I suggest you learn about it to get a better understanding of what you are asking.
http://katemats.com/interview-questions/ says:
You are given a sorted array and you want to find the number N. How do you do the search as quickly as possible (not just traversing each element)?
How would the performance of your algorithm change if there were lots of duplicates in the array?
My answer to the first question is binary search, which is O(log(n)), where n is the number of elements in the array.
According to this answer, "we have a maximum of log_2(n-1) steps" in the worst case when "element K is not present in A and smaller than all elements in A".
I think the answer to the second question is that it doesn't affect the performance. Is this correct?
If you are talking worst case / big O, then you are correct: log(n) is your bound. However, if your data is fairly uniformly distributed (or you can map it to such a distribution), interpolating where to place your partition can give log(log(n)) behavior. The interpolation also removes the bad cases where you are looking for one of the end elements (though of course it introduces new pathological cases).
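A hedged sketch of that interpolation idea (assuming a sorted list of integers; expected O(log log n) probes on uniformly distributed data, degrading badly on adversarial distributions, which is why real code often falls back to plain binary search):

    def interpolation_search(a, key):
        lo, hi = 0, len(a) - 1
        while lo <= hi and a[lo] <= key <= a[hi]:
            if a[hi] == a[lo]:  # all remaining values equal; avoid dividing by zero
                break
            # probe where the key "should" be, instead of the midpoint
            mid = lo + (key - a[lo]) * (hi - lo) // (a[hi] - a[lo])
            if a[mid] == key:
                return mid
            if a[mid] < key:
                lo = mid + 1
            else:
                hi = mid - 1
        return lo if lo <= hi and a[lo] == key else -1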
For many, many duplicates you might be willing to stride further away from the direct center on the next probe. With more dups, you get a better margin for guessing correctly. While always choosing the halfway point gets you there in good time, educated guesses can get you really excellent average behavior.
When I interview, I like to hear those answers: both knowledge of the book and of the theoretical bound, but also what can be done to specialize to the given situation. Often these constant factors can be really helpful (look at quicksort and its partition-choosing schemes).
I don't think having duplicates matters.
You're looking for a particular number N, what matters is whether or not the current node matches N.
If I'm looking for the number 1 in the list 1-2-3-4-5-6 the performance would be identical to searching the list 1-9-9-9-9-9.
If the number N is duplicated, then you have a chance of finding it a couple of steps sooner. For example, if the same search were done on the list 1-1-1-1-1-9.
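For concreteness, a standard binary search sketch: it returns as soon as it hits any matching element, which is exactly why duplicates can only save a probe or two without changing the O(log n) worst case:

    def binary_search(a, key):
        lo, hi = 0, len(a) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if a[mid] == key:
                return mid  # any matching copy ends the search early
            if a[mid] < key:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1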
I am not looking for an algorithm to the above question. I just want someone to comment on my answer.
I was asked the following question in an interview:
How to get top 100 numbers out of a large set of numbers (can't fit in memory)
And this is what I said:
Divide the numbers into batches of 1000 each. Sort each batch in "O(1)" time. Total time taken is O(n) up to this point. Now take the first 100 numbers from the 1st and 2nd batches (in O(1)). Take the first 100 from those and the 3rd batch, and so on. This takes O(n) in total - so it is an O(n) algorithm.
The interviewer replied that sorting a batch of 1000 numbers won't take O(1) time, and neither will picking the first 100 out of a batch. After a lot of discussion he said he doesn't have a problem with the algorithm taking O(n) time; he just has a problem with me saying that sorting a batch takes O(1) time.
My explanation was that 1000 doesn't depend on the input size (n). Irrespective of what n is, I'll always make batches of 1000 numbers, and if you have to calculate it, sorting one batch takes O(1000*log 1000), which is essentially O(1).
If you have to make the calculation properly, it would be:
1000 * log 1000 to sort one batch
there are n/1000 such batches
total: 1000 * log 1000 * n/1000 = O(n * log 1000) = O(n) time
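A sketch of the batch scheme being described (the names and the use of heapq.nlargest are mine; any way of sorting the constant-size batch would do):

    import heapq

    def top_k_in_batches(stream, k=100, batch_size=1000):
        # Memory stays O(batch_size + k) however large the input is;
        # processing each constant-size batch is O(1) with respect to n.
        top, batch = [], []
        for x in stream:
            batch.append(x)
            if len(batch) == batch_size:
                top = heapq.nlargest(k, top + batch)
                batch = []
        return heapq.nlargest(k, top + batch)  # flush the final partial batch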
I asked a lot of my friends about this too, and although they agreed with me, it was only partially.
So I want to know if my reasoning is 100% accurate (please criticize even if it is 99% correct).
Just remember, this post is not asking for the answer to the above posted question. I have already found a better answer at Retrieving the top 100 numbers from one hundred million of numbers
The interviewer is wrong, but it's useful to consider why. What you're saying is correct, but there is an unstated assumption that you depend on. Possibly, the interviewer is making a different assumption.
If we say that sorting 1000 numbers is O(1), we're being a bit informal. Specifically, what we mean is that, in the limit as N goes to infinity, there is a constant greater than or equal to the cost of sorting the 1000 numbers. Since the cost of sorting the fixed-size set is independent of N, the limit isn't going to depend on N, either. Thus, it's O(1) as N goes to infinity.
A generous interpretation is that the interviewer wanted you to treat the sorting step differently. You could be more precise and say that it was O(M*log(M)) as M goes to infinity (or M goes to N, if you prefer), with M representing the size of the batches of numbers. That would make an overall O(N*log(M)) for your approach, as N and M both approach infinity. Of course, that wasn't the limit you described.
Strictly speaking, it's meaningless to say that something is O(1) without specifying the limit. One usually doesn't need to bother for algorithms, because it's clear from the context: the limit commonly taken is as a single parameter approaches infinity. Your description is correct when considering only N, but you could consider more than just N.
It is indeed O(n) - but the constants are very high, especially considering that you will need to read each element from the file system twice [once in the sort, and once in the second phase], and file system access is much slower than memory access. Since this will probably be the bottleneck of the algorithm, your solution will likely run half as fast as one using a priority queue.
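A sketch of that priority-queue approach (a single pass keeping a min-heap of the best 100 seen so far; O(n log 100) = O(n) time, and each element is read only once):

    import heapq

    def top_100_heap(stream, k=100):
        heap = []  # min-heap: heap[0] is the smallest of the current top k
        for x in stream:
            if len(heap) < k:
                heapq.heappush(heap, x)
            elif x > heap[0]:
                heapq.heapreplace(heap, x)  # evict the smallest, insert x
        return sorted(heap, reverse=True)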
Note that for a constant top 100, even the naive solution is O(n):
    solution = []
    for i in range(100):
        x = max(numbers)   # first traversal: find the highest remaining element
        numbers.remove(x)  # second traversal: delete it from the list
        solution.append(x)
This solution is also O(n): you have 100 iterations, and in each iteration you need 2 traversals of the list [with some optimisations, 1 traversal per iteration can be done]. So the total number of traversals is strictly smaller than 1000, and there are no other factors that depend on the size; thus the solution is O(n) - but it is definitely a terrible one.
I think the interviewer meant that your solution - though O(n) - has very large constants.
Possible Duplicate:
Are there any O(1/n) algorithms?
This just popped in my head for no particular reason, and I suppose it's a strange question. Are there any known algorithms or problems which actually get easier or faster to solve with larger input? I'm guessing that if there are, it wouldn't be for things like mutations or sorting, it would be for decision problems. Perhaps there's some problem where having a ton of input makes it easy to decide something, but I can't imagine what.
If there is no such thing as negative complexity, is there a proof that there cannot be? Or is it just that no one has found it yet?
No, that is not possible. Since Big-Oh is supposed to be an approximation of the number of operations an algorithm performs as a function of its domain size, it would not make sense to describe an algorithm as using a negative number of operations.
The formal definition section of the Wikipedia article actually defines Big-Oh notation in terms of positive real numbers. So there isn't even a proof needed, because the whole concept of Big-Oh has no meaning for negative real numbers per the formal definition.
Short answer: it's not possible, because the definition says so.
update
Just to make it clear, I'm answering this part of the question: Are there any known algorithms or problems which actually get easier or faster to solve with larger input?
As noted in the accepted answer here, there are no algorithms that work faster with bigger input:
Are there any O(1/n) algorithms?
Even an algorithm like sleep(1/n) has to spend time reading its input, so its running time has a lower bound.
In particular, the author refers to a relatively simple substring search algorithm:
http://en.wikipedia.org/wiki/Horspool
P.S. But using the term 'negative complexity' for such algorithms doesn't seem reasonable to me.
To think of an algorithm that executes in negative time is the same as thinking about time going backwards.
If a program starts executing at 10:30 AM and stops at 10:00 AM without passing through 11:00 AM, it has just executed in time = O(-1).
=]
Now, for the mathematical part:
If you can't come up with a sequence of actions that execute backwards in time (you never know...lol), the proof is quite simple:
positiveTime = O(-1) would mean:
positiveTime <= c * (-1), for some constant c > 0 and all n > n0 > 0
Consider the "c > 0" restriction: there is no positive number that, multiplied by -1, results in another positive number, so the right-hand side is always negative. Taking that into account, the claim becomes:
positiveTime <= negativeNumber, for all n > n0 > 0
which is impossible, since a running time is always positive. That proves you can't have an algorithm that is O(-1).
Not really. O(1) is the best you can hope for.
The closest I can think of is language translation, which uses large datasets of phrases in the target language to match up smaller snippets from the source language. The larger the dataset, the better (and to a certain extent faster) the translation. But that's still not even O(1).
Well, for many calculations of the form "given input A, return f(A)" you can "cache" the calculation results (store them in an array or map), which will make the calculation faster with a larger number of values, IF some of those values repeat.
But I don't think it qualifies as "negative complexity". In this case the fastest performance will probably count as O(1), the worst-case performance will be O(N), and the average performance will be somewhere in between.
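A minimal sketch of that caching idea (the squared-sum body is just a stand-in for some expensive f):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def f(a):
        # First call for a given 'a' pays the full cost; repeated calls
        # are O(1) cache lookups, so repetition drives the average down.
        return sum(i * i for i in range(a))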
This is somewhat applicable to sorting algorithms - some of them have O(N) best-case complexity and O(N^2) worst-case complexity, depending on the state of the data to be sorted.
I think that to have negative complexity, an algorithm should return its result before it has been asked to calculate it. I.e. it should be connected to a time machine and be able to deal with the corresponding "grandfather paradox".
As with the other question about the empty algorithm, this question is a matter of definition rather than a matter of what is possible or impossible. It is certainly possible to think of a cost model for which an algorithm takes O(1/n) time. (That is not negative of course, but rather decreasing with larger input.) The algorithm can do something like sleep(1/n) as one of the other answers suggested. It is true that the cost model breaks down as n is sent to infinity, but n never is sent to infinity; every cost model breaks down eventually anyway. Saying that sleep(1/n) takes O(1/n) time could be very reasonable for an input size ranging from 1 byte to 1 gigabyte. That's a very wide range for any time complexity formula to be applicable.
On the other hand, the simplest, most standard definition of time complexity uses unit time steps. It is impossible for a positive, integer-valued function to have decreasing asymptotics; the smallest it can be is O(1).
I don't know if this quite fits, but it reminds me of BitTorrent: the more people downloading a file, the faster it goes for all of them.