worst case is equal to best case algorithms

I'm trying to answer this question about algorithms and I don't understand what the expected answer could be. I don't have any example to provide; I'm sharing it exactly as it was shared with me:
"If the complexity of the X algorithm for the worst case is equal to the complexity of the Y algorithm for the best case, which of these two algorithms is faster? Explain why!"

They're not looking for any specific answer; they're looking for how you reason about the question. For example, you could reason as follows:
Obviously, one would prefer an algorithm whose worst case is as good as another algorithm's best case: in the worst case they're equal, and in the best case it's better. But complexity isn't the only criterion by which algorithms should be judged, ...
This is one of those "see how you reason about things" questions and not a "get the right answer" question.

Let me try to explain it in steps:
1. Understand that an algorithm can have different best/average/worst time complexity depending on the characteristics of the input (for example, whether it is already partly sorted), not just its size.
2. If algorithm X's worst-case complexity is equal to algorithm Y's best-case complexity, then you can reason that, overall, algorithm X is faster than algorithm Y; but this considers only the asymptotic complexity. See 3.
3. Of course, there are many other factors to consider. For example, if algorithm X performs better than Y only for very specific inputs, but on average and in the worst case X and Y perform the same, then it is worth understanding the trade-offs between the two algorithms, such as space complexity and amortized complexity.
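As a concrete illustration of point 2 (my own sketch, not part of the question or answer): insertion sort's worst case, O(n²), matches selection sort's best case, while insertion sort's best case is O(n). Counting comparisons on a sorted and a reverse-sorted input shows the difference:

```python
# Sketch: compare comparison counts of insertion sort (best case O(n),
# worst case O(n^2)) and selection sort (always ~n^2/2 comparisons).
def insertion_sort_comparisons(a):
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1            # compare a[j-1] with a[j]
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break
    return comparisons

def selection_sort_comparisons(a):
    a = list(a)
    comparisons = 0
    for i in range(len(a)):
        m = i
        for j in range(i + 1, len(a)):
            comparisons += 1            # compare a[j] with current minimum
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]
    return comparisons

n = 1000
sorted_input = list(range(n))
reversed_input = list(range(n, 0, -1))
print(insertion_sort_comparisons(sorted_input),    # ~n     (best case)
      insertion_sort_comparisons(reversed_input))  # ~n^2/2 (worst case)
print(selection_sort_comparisons(sorted_input),    # ~n^2/2 regardless of input
      selection_sort_comparisons(reversed_input))
```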

Related

Why is O(2ⁿ) less complex than O(1)?

https://www.freecodecamp.org/news/big-o-notation-why-it-matters-and-why-it-doesnt-1674cfa8a23c/
Exponentials have greater complexity than polynomials as long as the coefficients are positive multiples of n
O(2ⁿ) is more complex than O(n⁹⁹), but O(2ⁿ) is actually less complex than O(1). We generally take 2 as base for exponentials and logarithms because things tends to be binary in Computer Science, but exponents can be changed by changing the coefficients. If not specified, the base for logarithms is assumed to be 2.
I thought O(1) was the simplest in complexity. Could anyone help me understand why O(2ⁿ) is less complex than O(1)?
Errata: the author made an obvious mistake and you caught it. It's not the only mistake in the article. For example, I would expect O(n*log(n)) to be a more appropriate complexity for sorting algorithms than the one they claim (quoted below); otherwise, you'd be able to sort a set without even seeing all of the data.
"As complexity is often related to divide and conquer algorithms, O(log(n)) is generally a good complexity you can reach for sorting algorithms."
It might be worthwhile to try to contact the author and give him a heads up so he can correct it and avoid confusing anyone else with misinformation.
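To make the corrected ordering concrete, here is a small sketch (mine, not from the article): it evaluates n⁹⁹ and 2ⁿ at a few sizes while an O(1) cost stays fixed; the exponential overtakes the polynomial shortly before n = 1000 and the gap only widens from there, so O(1) is the least complex and O(2ⁿ) the most.

```python
# Sketch: compare the growth of an O(1) cost, n**99, and 2**n.
# Python's arbitrary-precision integers make the exact values easy to inspect.
for n in (10, 100, 1000, 2000):
    constant = 1          # an O(1) cost never grows
    poly = n ** 99        # O(n^99)
    expo = 2 ** n         # O(2^n)
    print(f"n={n:5d}: constant={constant}, "
          f"n^99 has {len(str(poly))} digits, "
          f"2^n has {len(str(expo))} digits")
# For small n the polynomial dwarfs the exponential, but by n = 1000 the
# exponential has already overtaken it, and O(1) never grows at all.
```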

What does scaling of the upper bound of your algorithm's runtime tell you?

Suppose the only thing you know is that your algorithm runs in O(n²) time in the worst case. From this fact alone you know that the upper bound looks like Cn² for some C > 0. Thus you know how the upper bound of your algorithm scales: if you double the input size, the upper bound quadruples.
Question: what practical question can you answer if you know the way the upper bound scales? I just can't see whether this particular knowledge is helpful in some way.
If you know of an alternative algorithm that is not in O(𝑛²), then you may conclude that there is some minimum input size above which your algorithm will outperform the alternative algorithm.
This is because if 𝑔(𝑛) is not in O(𝑛²), then there do not exist 𝐶 and 𝑁 such that for every 𝑛 > 𝑁 we would have 𝑔(𝑛) < 𝐶𝑛². So you would also not find 𝐶 and 𝑁 such that 𝑔(𝑛) < 𝐶𝑛 or 𝑔(𝑛) < 𝐶𝑛 log 𝑛, ...etc. So 𝑔(𝑛) will not be O(1), O(𝑛), O(𝑛 log 𝑛), ... as it is not O(𝑛²).
Still, the input size for which your algorithm would outperform the alternative could be so astronomically great, that it would not be practical, and you would still prefer the alternative algorithm.
Please also realize that when coders speak of a time complexity for the worst case, they most often mean that this is a tight bound for the worst case. For example, if someone presents a sorting algorithm with a worst case time complexity of O(𝑛²), they mean that it does not have the worst case time complexity of O(𝑛log𝑛) that more efficient sorting algorithms have.
If all you know about your algorithm's performance is that it's in O(n²), then in practical terms, what you know is this:
If the system has slowed to a crawl over time, and the database is 10 times the size that it used to be, then there's a good chance that the problem is your algorithm taking 100 times longer.
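As a rough practical check (my own sketch, not from the answers above): time a deliberately quadratic routine at input sizes n and 2n; if the running time really behaves like Cn², the second measurement should be roughly four times the first.

```python
# Sketch: a deliberately quadratic routine timed at n and 2n.
# If the running time is roughly C*n^2, doubling n should roughly
# quadruple the measured time.
import time

def count_equal_pairs(values):
    # O(n^2): compares every pair of elements.
    n = len(values)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            if values[i] == values[j]:
                count += 1
    return count

for n in (2000, 4000):
    data = list(range(n))
    start = time.perf_counter()
    count_equal_pairs(data)
    print(f"n={n}: {time.perf_counter() - start:.3f} s")
# Expect the second time to be about 4x the first (constant factors and
# measurement noise aside), matching the Cn^2 upper bound's scaling.
```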

Big O algorithms minimum time

I know that for some problems, no matter what algorithm you use to solve them, there will always be a certain minimum amount of time required to solve the problem. I know Big O captures the worst case (the maximum time needed), but how can you find the minimum time required as a function of n? Can we find the minimum time needed for sorting n integers, or for finding the minimum of n integers?
What you are looking for is called best-case complexity. It is a mostly useless analysis for algorithms; worst-case analysis is the most important, and average-case analysis is sometimes used in special scenarios.
The best-case complexity depends on the algorithm. For example, in a linear search the best case is when the searched number is at the beginning of the array, and in a binary search it is when the target sits at the first dividing point; in these cases the complexity is O(1).
For a single problem, the best-case complexity may vary depending on the algorithm. For example, let's discuss some basic sorting algorithms.
In bubble sort the best case is when the array is already sorted, but even then you have to scan all the elements to be sure, so the best case here is O(n) (see the counting sketch below). The same goes for insertion sort.
For quicksort/mergesort/heapsort the best-case complexity is O(n log n).
For selection sort it is O(n²).
So from the above you can see that the complexity (whether best, worst or average) depends on the algorithm, not on the problem.
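A small counting sketch (mine, not the answerer's) backing up the bubble-sort claim: with the usual early-exit optimization, it does about n comparisons on an already-sorted array and about n²/2 on a reverse-sorted one.

```python
# Sketch: bubble sort with the usual early-exit optimization.
# Already-sorted input: one pass, ~n comparisons (best case O(n)).
# Reverse-sorted input: ~n^2/2 comparisons (worst case O(n^2)).
def bubble_sort_comparisons(a):
    a = list(a)
    comparisons = 0
    for end in range(len(a) - 1, 0, -1):
        swapped = False
        for i in range(end):
            comparisons += 1
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:      # no swaps means the array is already sorted
            break
    return comparisons

n = 1000
print(bubble_sort_comparisons(list(range(n))))         # ~n (already sorted)
print(bubble_sort_comparisons(list(range(n, 0, -1))))  # ~n^2/2 (reverse sorted)
```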

Using worst/avg/best case for asymptotic analysis [closed]

Closed. This question needs details or clarity. It is not currently accepting answers. Closed 9 years ago.
I understand that the worst/avg/best cases are used to determine the time complexity of an algorithm as a function, but how is that used in asymptotic analysis? I understand that the upper/tight/lower bounds (big O, big Theta, big Omega) are used to compare two functions and see how one grows relative to the other as n increases, but I'm having trouble seeing the difference between worst/avg/best case big O and asymptotic analysis. What exactly do we get out of plugging our worst/avg/best case big O into asymptotic analysis and measuring bounds? Would we use asymptotic analysis specifically to compare two algorithms' worst/avg/best case big O? If so, do we use a function f(n) for algorithm 1 and g(n) for algorithm 2, or do we do a separate asymptotic analysis for each algorithm, where algorithm 1 is f(n) and we try to find some c*g(n) such that c*g(n) >= f(n) (or c*g(n) <= f(n) for a lower bound), and then do the same thing for algorithm 2? I'm not seeing the big picture here.
Since you want the big picture, let me try to give you the same.
Asymptotic analysis is used to study how the running time grows as the size of the input increases. This growth is studied in terms of the input size. The input size, usually denoted N or M, can mean anything from the number of numbers (as in sorting), to the number of nodes (as in graphs), or even the number of bits (as in multiplication of two numbers).
When doing asymptotic analysis, our goal is to find out which algorithm fares better in specific cases. Realize that an algorithm can run in quite varying times even for inputs of the same size. To appreciate this, imagine you are a sorting machine. You will be given a set of numbers and you need to sort them. If I give you an already sorted list of numbers, you have no work to do; you are done already. If I give you a reverse-sorted list, imagine the number of operations you need to perform to get it sorted. Now that you see this, realize that we need a way of knowing what kind of input we will get. Would it be a best case? Would I get a worst-case input? To answer this, we need some knowledge of the distribution of the input. Will it all be worst cases? Or average cases? Or mostly best cases?
Knowledge of the input distribution is fairly difficult to ascertain in most cases. Then we are left with two options: either we assume the average case and analyze our algorithm that way, or we get a guarantee on the running time irrespective of the input distribution. The former is referred to as average-case analysis, and doing such an analysis requires a formal definition of what makes an average case; sometimes this is difficult to define and requires considerable mathematical insight. All the trouble is worth it when you learn that some algorithm runs much faster in the average case than its worst-case running time suggests. There are several randomized algorithms that stand testimony to this. In such cases, an average-case analysis reveals their practical applicability.
The latter, worst-case analysis, is used more often since it provides a nice guarantee on the running time. In practice, coming up with the worst-case scenario is often fairly intuitive. Say you are the sorting machine: the worst case is something like a reverse-sorted array. But what is the average case?
Yup, you are thinking, right? Not so intuitive.
Best-case analysis is rarely used, as one does not always get best cases. Still, one can do such an analysis and find interesting behavior.
In conclusion, when we have a problem we want to solve, we come up with algorithms. Once we have an algorithm, we need to decide whether it is of any practical use for our situation. If so, we shortlist the algorithms that can be applied and compare them based on their time and space complexity. There can be more metrics for comparison, such as ease of implementation, but these two are fundamental. Depending on the situation at hand, you would employ worst-case, average-case, or best-case analysis. For example, if you rarely hit worst-case scenarios, it makes much more sense to carry out average-case analysis. However, if the performance of your code is critical and you need to produce output within a strict time limit, then it is much more prudent to look at worst-case analysis. Thus, the analysis you make depends on the situation at hand, and with time, the intuition for which analysis to apply becomes second nature.
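To tie the three kinds of analysis to something concrete, here is a small sketch of my own (not part of the original answer) using linear search: the best case is one comparison (target first), the worst case is n comparisons (target absent), and the average over uniformly random targets is roughly n/2 comparisons.

```python
# Sketch: best / average / worst case of linear search, measured by
# counting comparisons rather than by formal analysis.
import random

def linear_search_comparisons(values, target):
    comparisons = 0
    for v in values:
        comparisons += 1
        if v == target:
            break
    return comparisons

n = 10_000
data = list(range(n))

best = linear_search_comparisons(data, data[0])   # target first: 1 comparison
worst = linear_search_comparisons(data, -1)       # target absent: n comparisons
avg = sum(linear_search_comparisons(data, random.choice(data))
          for _ in range(1000)) / 1000            # roughly n/2 on average

print(best, worst, round(avg))
```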
Please ask if you have more questions.
To know more about big-oh and the other notations read my answers here and here.
The Wikipedia article on quicksort provides a good example of how asymptotic analysis is used for best/average/worst case: quicksort has a worst case of O(n²), an average case of O(n log n), and a best case of O(n log n). If you're comparing it to another algorithm (say, heapsort), you would compare apples to apples, e.g. quicksort's worst-case big-theta to heapsort's worst-case big-theta, or quicksort's space big-oh to heapsort's space big-oh.
You can also compare big-theta to big-oh if you're interested in upper bounds, or big-theta to big-omega if you're interested in lower bounds.
Big-omega is usually only of theoretical interest - you're far more likely to see analysis in terms of big-oh or big-theta.
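As an illustration of why quicksort's worst and average cases differ, here is a sketch of my own (not from the answer above); it deliberately uses a naive first-element pivot, one of the pivot choices that triggers the O(n²) worst case on already-sorted input.

```python
# Sketch: quicksort with a naive first-element pivot, counting comparisons.
# Sorted input triggers the O(n^2) worst case; shuffled input shows the
# O(n log n) average case. Uses an explicit stack to avoid deep recursion.
import random

def quicksort_comparisons(values):
    a = list(values)
    comparisons = 0
    stack = [(0, len(a) - 1)]
    while stack:
        lo, hi = stack.pop()
        if lo >= hi:
            continue
        pivot = a[lo]                      # naive pivot choice: first element
        i = lo + 1
        for j in range(lo + 1, hi + 1):
            comparisons += 1
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[lo], a[i - 1] = a[i - 1], a[lo]  # place pivot in its final position
        stack.append((lo, i - 2))
        stack.append((i, hi))
    return comparisons

n = 2000
sorted_input = list(range(n))
shuffled_input = random.sample(range(n), n)
print(quicksort_comparisons(sorted_input))    # ~n^2/2 (worst case)
print(quicksort_comparisons(shuffled_input))  # ~n log n (average case)
```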
What exactly do we get out of plugging our worst/avg/best case big O into asymptotic analysis and measuring bounds?
It gives you a basis for comparing different approaches to the same problem.
Would we use asymptotic analysis specifically to compare two algorithms' worst/avg/best case big O?
Generally the worst case gets the most focus, and big O is used more than big Omega and big Theta.
Yes, we use a function f(n) for algorithm 1 and g(n) for algorithm 2, and these functions describe the big O of their respective algorithms.

What is the Best Complexity of a Greedy Algorithm?

It seems like the best complexity would be linear O(n).
The specific case doesn't really matter; I'm speaking of greedy algorithms in general.
Sometimes it pays off to be greedy?
The specific case I am interested in is computing change.
Say you need to give 35 cents in change and you have coins of 1, 5, 10, and 25. The greedy algorithm, coded simply, solves this quickly and easily: first it grabs 25 cents, the highest coin value that fits into 35, and then 10 cents to complete the total. This would be the best case. Of course there are bad cases, and cases where this greedy algorithm would have issues. I'm asking about the best-case complexity for this type of problem.
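A minimal sketch of the greedy change-making described above (largest coin first); for the coin set {25, 10, 5, 1} this happens to return optimal change, though for arbitrary coin sets greedy can fail.

```python
# Sketch: greedy change-making, largest coin first. For the coin set
# (25, 10, 5, 1) this greedy choice happens to be optimal; for arbitrary
# coin sets it may not be.
def greedy_change(amount, coins=(25, 10, 5, 1)):
    change = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            change.append(coin)   # take the largest coin that still fits
            amount -= coin
    return change

print(greedy_change(35))  # [25, 10]
```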
Any algorithm that has an output of n items that must be taken individually has at best O(n) time complexity; greedy algorithms are no exception. A more natural greedy version of e.g. a knapsack problem converts something that is NP-complete into something that is O(n^2)--you try all items, pick the one that leaves the least free space remaining; then try all the remaining ones, pick the best again; and so on. Each step is O(n). But the complexity can be anything--it depends on how hard it is to be greedy. (For example, a greedy clustering algorithm like hierarchical agglomerative clustering has individual steps that are O(n^2) to evaluate (at least naively) and requires O(n) of these steps.)
When you're talking about greedy algorithms, typically you're talking about the correctness of the algorithm rather than the time complexity, especially for problems such as change making.
Greedy heuristics are used because they're simple. This means easy implementations for easy problems, and reasonable approximations for hard problems. In the latter case you'll find time complexities that are better than guaranteed correct algorithms. In the former case, you can't hope for better than optimal time complexity.
GREEDY APPROACH
Knapsack problem: sort the given elements using merge sort: O(n log n).
Find the maximum deadline: that takes O(n).
Using linear search, select elements one by one: O(n²).
n log n + n + n² = O(n²) in the worst case.
Now, can we apply binary search instead of linear search?
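For reference, the steps listed above read like the classic greedy job-sequencing-with-deadlines algorithm; here is a sketch under that interpretation (sort by profit, find the maximum deadline, then linearly scan for a free slot per job, giving the O(n²) total mentioned). The exact problem the answer had in mind is an assumption on my part.

```python
# Sketch: greedy job sequencing with deadlines (each job takes one unit of
# time; schedule jobs to maximize profit). Sort by profit (O(n log n)),
# find the max deadline (O(n)), then for each job linearly search for the
# latest free slot at or before its deadline (O(n^2) overall).
def job_sequencing(jobs):
    # jobs: list of (profit, deadline) pairs
    jobs = sorted(jobs, key=lambda j: j[0], reverse=True)  # O(n log n)
    max_deadline = max(deadline for _, deadline in jobs)   # O(n)
    slots = [None] * (max_deadline + 1)                    # slots 1..max_deadline
    total_profit = 0
    for profit, deadline in jobs:                          # O(n^2) overall
        for t in range(min(deadline, max_deadline), 0, -1):
            if slots[t] is None:
                slots[t] = (profit, deadline)
                total_profit += profit
                break
    return total_profit, slots[1:]

print(job_sequencing([(100, 2), (19, 1), (27, 2), (25, 1), (15, 3)]))
# Expected total profit: 142, with jobs of profit 27, 100, 15 scheduled.
```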
Greedy or not has essentially nothing to do with computational complexity, other than the fact that greedy algorithms tend to be simpler than other algorithms to solve the same problem, and hence they tend to have lower complexity.

Resources