Best-case running time: upper and lower bound? [closed] - algorithm

In my assignment I was given some information about an algorithm in the form of statements. One of these statements was: "the best-case running time of Algorithm B is Ω(n^2)".
I was under the impression that the best-case running time of an algorithm is always given as either a lower bound, an upper bound, or a tight bound. I am wondering whether an algorithm such as this can also have an upper bound on its best-case running time. If so, what are some examples of algorithms where this occurs?

A case is a class of inputs for which you consider your algorithm's performance. The best case is the class of inputs for which your algorithm's runtime has the most desirable asymptotic bounds. Typically this might mean there is no other class which gives a lower Omega bound. The worst case is the class of inputs for which your algorithm's runtime has the least desirable asymptotic bounds. Typically this might mean there is no other class which gives a higher O bound. The average case has nothing to do with desirability of bounds but looks at the expected value of the runtime given some stated distribution of inputs. Etc. You can define whatever cases you want.
Imagine the following algorithm:
Weird(a[1...n])
    if n is even then
        flip a coin
        if heads then bogosort(a[1...n])
        else if tails then bubble_sort(a[1...n])
    else if n is odd then
        flip a coin
        if heads then merge_sort(a[1...n])
        else if tails then counting_sort(a[1...n])
Given a list of n integers each between 1 and 10, inclusive, this algorithm sorts the list.
In the best case, n is odd. A lower bound on the best case is Omega(n) and an upper bound on the best case is O(n log n).
In the worst case, n is even. A lower bound on the worst case is Omega(n^2) and there is no upper bound on the worst case (since bogosort may never finish, despite that being terribly unlikely).
To define an average case, we would need to define probabilities. Assuming even/odd are equally likely for all n, then there is no upper bound on the average case and a lower bound on the average case is Omega(n^2), same as for the worst case. To get a different bound for the average case, we'd need to define the distribution so that n being even gets increasingly unlikely for larger lists. Maybe the only even-length list you typically pass into this algorithm has length 2, for example.
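If it helps to see it run, here is a rough Python sketch of the Weird algorithm (my own illustration, not part of the original answer; the helper sorts are minimal stand-ins, and Python's built-in sorted() plays the role of merge sort):

import random

def bogosort(a):
    # Shuffle until sorted: no finite upper bound on the running time.
    while any(a[i] > a[i + 1] for i in range(len(a) - 1)):
        random.shuffle(a)
    return a

def bubble_sort(a):
    # Classic bubble sort without early exit: Theta(n^2) comparisons.
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def counting_sort(a, max_value=10):
    # O(n + k) with k = 10 fixed by the input model, i.e. O(n) here.
    counts = [0] * (max_value + 1)
    for x in a:
        counts[x] += 1
    return [v for v in range(max_value + 1) for _ in range(counts[v])]

def merge_sort(a):
    # Theta(n log n); sorted() stands in for a textbook merge sort.
    return sorted(a)

def weird(a):
    if len(a) % 2 == 0:                 # n even: the slow branch
        return bogosort(a) if random.random() < 0.5 else bubble_sort(a)
    else:                               # n odd: the fast branch
        return merge_sort(a) if random.random() < 0.5 else counting_sort(a)

print(weird([3, 1, 2]))                 # odd length: finishes in O(n log n) or O(n)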


Which big O represents better worst running time? [closed]

I am looking at this question:
Which of the following time complexities represent better worst running time?
(a) O(lg(n!))
(b) O(n)
(c) O(n²)
(d) O(lg(lg(n)))
(e) none of the above
Answer: (d)
Going by the options, option (d) looks like it should be the best case because it takes the least time.
And option (c) looks like it should be the worst case.
So why is the answer option (d) if the question is about the worst case?
Algorithm X can take anywhere between 1 second and 10 seconds. Algorithm Y can take anywhere between 5 seconds and 6 seconds. Which algorithm has better running time, X or Y?
It depends what exact characteristic of the algorithms you are comparing. There are several to consider. Among those are the best case running time, the worst case running time, and the average case running time.
If comparing the best case running time, X is better than Y, because the best time of X is smaller than the best time of Y.
If comparing the worst case running time, Y is better than X, because the worst time of Y is smaller than the worst time of X.
If comparing the average case running time ... that's the control question! What is the answer?
From Wikipedia:
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. [...] A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function.
Let's say we have a function f that has a big O description of O(n²), then that function might still behave much better in the best case, and in that case be described with -- let's say -- O(n).
Worst and best case are only relevant concepts when there are aspects of the input that can vary, even when š‘› remains the same. A typical example is a sort function. In that case we take š‘› to be the size of the array to be sorted, but the actual values in that array can still vary. Sometimes those values give us a best case, sometimes a worst case, sometimes something in between...
If all we have is that big O notation, we cannot exclude that there is a worst case for that function where it really meets that asymptotic limit.
Now in that list of options we see what the worst case asymptotic behaviour of those functions would be, and from those we should pick the better one.
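As a concrete illustration (my own example, in the spirit of the sort-function case mentioned above): insertion sort is a function whose worst case really does meet an O(n²) bound, while its best case is only O(n).

def insertion_sort(a):
    # Worst case O(n^2): reverse-sorted input makes the inner loop run i times.
    # Best case O(n): already-sorted input makes the inner loop run 0 times.
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

print(insertion_sort([4, 2, 5, 1, 3]))  # [1, 2, 3, 4, 5]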

Quicksort complexities in depth [closed]

So I am having an exam, and a big part of it will be the quicksort algorithm. As everyone knows, the best-case and, in fact, the average-case running time of this algorithm is O(n log n). The worst case is O(n^2).
As for the worst-case scenario, I know how to explain it: it happens when the selected pivot is the smallest or the biggest value in the array; then we have n quicksort calls, each of whose partition operations may take up to n time. Am I right?
Now the best/average case. I've read Cormen's book and understood many things thanks to it, but for the quicksort algorithm he focuses on the mathematical formulas to explain the O(n log n) complexity. I just want to know why it is O(n log n), without getting into a mathematical proof. So far I've only seen the Wikipedia explanation that if we choose a pivot which divides our array into parts of size n/2 and n/2 + 1 each time, we get a call tree of depth log n, but I don't know if that is true, and even if so, why the depth is log n.
I know that there are many materials covering quicksort on the internet, but they only cover implementation, or are just telling me the complexity, not explaining it.
Am I right?
Yes.
we would have a call tree of depth logn but I don't know if that is true
It is.
why is it logn?
Because we partition the array in half at every step, the depth of the call tree is log n. The linked intro shows the recursion tree and its depth: it's log n. Think of it the same way a search in a BST costs log n, or why binary search in a sorted array also takes log n.
PS: The math tells the truth; invest in understanding it, and you shall become a better Computer Scientist! =)
In the best-case scenario, quicksort splits the current array 50% / 50% (in half) at each partition step, so the recursion depth is log2(n) (1/.5 = 2). The base of the logarithm is a constant factor that is ignored, so the complexity is O(n log(n)).
If each partition step instead produced a 20% / 80% split, the depth would be governed by the 80% side, giving log1.25(n) levels (1/.8 = 1.25). The constant base 1.25 is likewise ignored, so it's also O(n log(n)), even though it's about 3 times slower than the 50% / 50% case for sorting 1 million elements.
The O(n^2) time complexity occurs when the partition split only produces a linear reduction in partition size with each partition step. The simplest and worst case example is when only 1 element is removed per partition step.
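To put numbers on the split argument above, here is a small Python sketch (my addition, assuming the idealised splits described by the answer) that counts partition levels for 50%/50% and 20%/80% splits and compares them with log2(n) and log1.25(n):

import math

def partition_depth(n, larger_fraction):
    # Number of partition levels until the largest partition shrinks to size 1,
    # assuming every split keeps `larger_fraction` of the elements on the bigger side.
    depth = 0
    while n > 1:
        n *= larger_fraction
        depth += 1
    return depth

n = 1_000_000
print(partition_depth(n, 0.5))            # ~20 levels, i.e. log2(n)
print(partition_depth(n, 0.8))            # ~62 levels, i.e. log1.25(n), about 3x more
print(math.log(n, 2), math.log(n, 1.25))  # ~19.9 and ~61.9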

Amortized and Average runtime complexity [closed]

This is not homework; I am studying amortized analysis. Something confuses me: I can't fully understand the difference between amortized and average complexity, and I'm not sure whether my reasoning is right. Here is a question:
--
We know that the runtime complexity of a program depends on the combination of program inputs. Suppose the probability that the program has runtime complexity O(n) is p, where p << 1, and in the other cases (i.e. with probability 1-p) the runtime complexity is O(log n). If we run the program with K different input combinations, where K is a very large number, we can say that the amortized and average runtime complexity of this program is:
--
My first question: I have read the question here: Difference between average case and amortized analysis
So I think there is no answer for the average runtime complexity, because we have no idea what the average input looks like. But it seems the answer is p*O(n) + (1-p)*O(log n). Which is correct, and why?
Second, the amortized part. I have read Constant Amortized Time, and we already know that amortized analysis differs from average-case analysis in that probability is not involved; an amortized analysis guarantees the average performance of each operation in the worst case.
Can I just say that the amortized runtime is O(n)? But the answer is O(pn). I'm a little confused about why probability is involved at all. Although O(n) = O(pn), I really have no idea why p should appear there. If I change the way of thinking and suppose we run the program many times, so K becomes very big, the amortized runtime is (K*p*O(n) + K*(1-p)*O(log n)) / K = O(pn). But that seems to be the same idea as the average case.
Sorry for the confusion; please help me, and thanks in advance!
With "average" or "expected" complexity, you are making assumptions about the probability distribution of the problem. If you are unlucky, (or if your problem generator maliciously fails to match your assumption 8^), all your operations will be very expensive, and your program might take a much greater time than you expect.
Amortized complexity is a guarantee on the total cost of any sequence of operations. That means, no matter how malicious your problem generator is, you don't have to worry about a sequence of operations taking a much greater time than you expect.
(Depending on the algorithm, it is not hard to accidentally stumble on the worst case. The classic example is the naive Quicksort, which does very badly on mostly-sorted input, even though the "average" case is fast)
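As a side illustration of that guarantee (my own sketch, using the classic doubling dynamic array rather than anything from the question): individual appends occasionally cost O(n), yet the total cost of any K appends stays O(K), i.e. O(1) amortized per append, with no probability assumptions at all.

class DoublingArray:
    # Toy cost model: a plain write costs 1; when the array is full we double its
    # capacity and pay `size` units to copy the existing elements.
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.write_cost = 0
        self.copy_cost = 0

    def append(self, x):
        if self.size == self.capacity:
            self.copy_cost += self.size   # copy everything into the bigger array
            self.capacity *= 2
        self.write_cost += 1
        self.size += 1

arr = DoublingArray()
K = 1_000_000
for i in range(K):
    arr.append(i)
print((arr.write_cost + arr.copy_cost) / K)  # about 2: constant per append for any K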

what's the time complexity of the following program [closed]

what's the time complexity of the following program?
sum=0;
for(i=1;i<=5;i++)
    sum=sum+i;
And how do I express this complexity using log? I would highly appreciate it if someone explained the complexity step by step, and furthermore how to show it in big-O and in log n.
[Edited]
sum=0; //1 time
i=1; //1 time
i<=5; //6 times
i++ //5 times
sum=sum+i;//5 times
Is the time complexity 18? Is that correct?
Preliminaries
Time complexity isn't usually expressed in terms of a specific integer, so the statement "The time complexity of operation X is 18" isn't clear without a unit, e.g., 18 "doodads".
One usually expresses time complexity as a function of the size of the input to some function/operation.
You often want to ignore the specific amount of time a particular operation takes, due to differences in hardware or even differences in constant factors between different languages. For example, summation is still O(n) (in general) in C and in Python (you still have to perform n additions), but differences in constant factors between the two languages will result in C being faster in terms of absolute time the operation takes to halt.
One also usually assumes that "Big-Oh"--e.g, O(f(n))--is the "worst-case" running time of an algorithm. There are other symbols used to study more strict upper and lower bounds.
Your question
Instead of summing from 1 to 5, let's look at summing from 1 to n.
The complexity of this is O(n) where n is the number of elements you're summing together.
Each addition (with +) takes constant time, which you're doing n times in this case.
However, this particular operation that you've shown can be accomplished in O(1) (constant time), because the sum of the numbers from 1 to n can be expressed as a single arithmetic operation. I'll leave the details of that up to you to figure out.
As far as expressing this in terms of logarithms: not exactly sure why you'd want to, but here goes:
Because exp(log(n)) is n, you could express it as O(exp(log(n))). Why would you want to do this? O(n) is perfectly understandable without needing to invoke log or exp.
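A short sketch contrasting the two approaches described above, generalised from 5 to n (it does spell out the closed form the answer leaves as an exercise, so skip it if you want to work that out yourself):

def sum_linear(n):
    # O(n): one addition per loop iteration, just like the original for-loop.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_constant(n):
    # O(1): the closed form n*(n+1)/2 is a single arithmetic expression.
    return n * (n + 1) // 2

assert sum_linear(5) == sum_constant(5) == 15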
First of all, the loop runs 5 times for 5 inputs, hence it has a time complexity of O(n). I am assuming here that the values of i are the inputs for sum.
Secondly, you can't just state a time complexity as a bare log term; it should always be in big-O notation. For example, if you perform a binary search, the worst-case time complexity of that algorithm is O(log n), because you get the result in, say, 3 iterations when the input array has 8 elements:
Complexity = log₂(8) = 3
Here your complexity is expressed using a log.
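For reference, a minimal binary search in Python matching the example above (the function name and signature are my own); each iteration halves the remaining range, so an 8-element array needs about log2(8) = 3 halvings:

def binary_search(a, target):
    # `a` must be sorted; returns the index of `target` or -1 if absent.
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9, 11, 13, 15], 11))  # 5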

Using worst/avg/best case for asymptotic analysis [closed]

I understand that the worst/avg/best case are used to turn the running time of an algorithm into a function, but how is that used in asymptotic analysis? I understand that the upper/tight/lower bounds (big O, big theta, big omega) are used to compare two functions and see how one grows relative to the other as n increases, but I'm having trouble seeing the difference between worst/avg/best case big O and asymptotic analysis. What exactly do we get out of putting our worst/avg/best case big O into asymptotic analysis and measuring bounds? Would we use asymptotic analysis specifically to compare two algorithms' worst/avg/best case big O? If so, do we use a function f(n) for algorithm 1 and g(n) for algorithm 2, or do we do a separate asymptotic analysis for each algorithm, where algorithm 1 is f(n) and we try to find some c such that c*g(n) >= f(n) (an upper bound) or c*g(n) <= f(n) (a lower bound), and then do the same thing for algorithm 2? I'm not seeing the big picture here.
Since you want the big picture, let me try to give you the same.
Asymptotic analysis is used to study how the running time grows as the size of the input increases. This growth is studied in terms of the input size. The input size, usually denoted N or M, can mean anything from the number of numbers (as in sorting), to the number of nodes (as in graphs), or even the number of bits (as in multiplying two numbers).
When dealing with asymptotic analysis, our goal is to find out which algorithm fares better in specific cases. Realize that an algorithm can take quite varying times even for inputs of the same size. To appreciate this, imagine you are a sorting machine. You are given a set of numbers and you need to sort them. If I give you an already sorted list of numbers, you have no work to do; you are done already. If I give you a reverse-sorted list of numbers, imagine the number of operations you need to perform to sort it. Now that you see this, realize that we need a way of knowing what kind of input we will get. Will it be a best case? Will it be a worst case? To answer this, we need some knowledge of the distribution of the input. Will it be all worst cases? Or average cases? Or mostly best cases?
The knowledge of the input distribution is fairly difficult to ascertain in most cases. Then we are left with two options. Either we can assume the average case all the time and analyze our algorithm, or we can get a guarantee on the running time irrespective of the input distribution. The former is referred to as average-case analysis, and doing such an analysis requires a formal definition of what makes an average case. Sometimes this is difficult to define and requires much mathematical insight. All the trouble is worth it when you know that some algorithm runs much faster in the average case than its worst-case running time suggests. There are several randomized algorithms that stand testimony to this. In such cases, an average-case analysis reveals their practical applicability.
The latter, worst-case analysis, is used more often since it provides a nice guarantee on the running time. In practice, coming up with the worst-case scenario is often fairly intuitive. Say you are the sorting machine: the worst case is something like a reverse-sorted array. What's the average case?
Yup, you are thinking, right? Not so intuitive.
Best-case analysis is rarely used, as one does not always get best cases. Still, one can do such an analysis and find interesting behavior.
In conclusion, when we have a problem that we want to solve, we come up with algorithms. Once we have an algorithm, we need to decide if it is of any practical use to our situation. If so, we go ahead, shortlist the algorithms that can be applied, and compare them based on their time and space complexity. There could be more metrics for comparison (one such metric could be ease of implementation), but these two are fundamental. Depending on the situation at hand, you would employ either worst-case, average-case, or best-case analysis. For example, if you rarely have worst-case scenarios, then it makes much more sense to carry out average-case analysis. However, if the performance of our code is of a critical nature and we need to produce the output within a strict time limit, then it is much more prudent to look at worst-case analysis. Thus, the analysis you make depends on the situation at hand, and with time, the intuition of which analysis to apply becomes second nature.
Please ask if you have more questions.
To know more about big-oh and the other notations read my answers here and here.
The Wikipedia article on quicksort provides a good example of how asymptotic analysis is used on best/average/worst case: it's got a worst case of O(n^2), an average case of O(n log n), and a best case of O(n log n), and if you're comparing this to another algorithm (say, heapsort) you would compare apples to apples, e.g. you'd compare quicksort's worst-case big-theta to heapsort's worst-case big-theta, or quicksort's space big-oh to heapsort's space big-oh.
You can also compare big-theta to big-oh if you're interested in upper bounds, or big-theta to big-omega if you're interested in lower bounds.
Big-omega is usually only of theoretical interest - you're far more likely to see analysis in terms of big-oh or big-theta.
What exactly do we get out of imputing our worst/avg/best case big O into the asymptotic analysis and measuring bounds?
It just gives you an idea when you are comparing different approaches to some problem; it helps you compare those approaches.
Would we use asymptotic analysis to specifically compare two algorithms of worst/avg/best case big O?
Generally the worst case, expressed with big O, gets the most focus, compared to big omega and big theta.
Yes, we use a function f(n) for algorithm 1 and g(n) for algorithm 2, and these functions are big-O bounds of their respective algorithms.
