This is not homework; I am studying amortized analysis. Something confuses me: I can't fully understand the difference between amortized and average complexity, and I'm not sure whether my reasoning below is right. Here is a question:
--
We know that the runtime complexity of a program depends on the input it is given. Suppose the program has runtime complexity O(n) with probability p, where p << 1, and in the other cases (i.e., with probability 1-p) its runtime complexity is O(log n). If we run the program with K different input combinations, where K is a very large number, we can say that the amortized and average runtime complexity of this program is:
--
First question: I have read the question here: Difference between average case and amortized analysis.
So I think there is no answer for the average runtime complexity, because we have no idea what an average input looks like. But the answer seems to be p*O(n) + (1-p)*O(log n). Which is correct, and why?
Second, the amortized part. I have read Constant Amortized Time, and we already know that amortized analysis differs from average-case analysis in that probability is not involved; an amortized analysis guarantees the average performance of each operation in the worst case.
Can I just say that the amortized runtime is O(n)? But the answer is O(pn). I'm a little confused about why probability is involved. Although O(n) = O(pn), I really have no idea why p could appear there. If I change my way of thinking and suppose we run the program many times, so that K becomes very big, then the amortized runtime is (K*p*O(n) + K*(1-p)*O(log n)) / K = O(pn). But that seems to be the same idea as the average case.
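(Writing the same calculation out more carefully, and assuming for simplicity that the two cases cost exactly c_1*n and c_2*log n, the arithmetic is:

\frac{K \cdot p \cdot c_1 n \;+\; K \cdot (1-p) \cdot c_2 \log n}{K} \;=\; p\,c_1 n + (1-p)\,c_2 \log n \;=\; \Theta(p\,n + \log n),

which matches the stated O(pn) answer whenever the p*n term dominates the log n term.)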
Sorry for the confusion; please help me. Thanks in advance!
With "average" or "expected" complexity, you are making assumptions about the probability distribution of the problem. If you are unlucky, (or if your problem generator maliciously fails to match your assumption 8^), all your operations will be very expensive, and your program might take a much greater time than you expect.
Amortized complexity is a guarantee on the total cost of any sequence of operations. That means, no matter how malicious your problem generator is, you don't have to worry about a sequence of operations taking a much greater time than you expect.
(Depending on the algorithm, it is not hard to accidentally stumble on the worst case. The classic example is naive Quicksort, which does very badly on mostly-sorted input, even though the "average" case is fast.)
So, from my understanding, we can only evaluate an algorithm with asymptotic analysis, but executing an algorithm only gives us an amount of time.
My question is: how can we compare those two?
They are comparable but not in the way you want.
If you have an implementation that you evaluate asymptotically to be, say, O(N^2), and you measure it to run in 60 seconds for an input of N=1000, then if you change the input to N=2000 I would expect the run-time to be on the order of 60*(2^2) seconds, i.e. 4 minutes (I increase the input by a factor of two, so the runtime increases by a factor of two squared).
Now, if you have another algorithm also O(N^2) you can observe it to run for N=1000 in 10 seconds (the compiler creates faster instructions, or the CPU is better). Now when you move to N=2000 the runtime I would expect to be around 40 seconds (same logic). If you actually measure it you might still see some differences from the expected value because of system load or optimizations but they become less significant as N grows.
So you can't really say which algorithm will be faster based on asymptotic complexity alone. The asymptotic complexity guarantees that there will be an input sufficiently large where the lower complexity is going to be faster, but there's no promise what "sufficiently large" means.
Another example is search. You can do linear search O(N) or binary search O(logN). If your input is small (<128 ints) the compiler and processor make linear search faster than binary search. However grow N to say 1 million items and the binary search will be much faster than linear.
As a rule, for large inputs optimize complexity first and for small inputs optimize run-time first. As always if you care about performance do benchmarks.
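As a small illustration of the extrapolation described above, here is a sketch in C; the helper name and the assumption that the implementation runs in time proportional to N^k are mine, just for the example.

#include <stdio.h>
#include <math.h>

/* Given a measured time t1 for input size n1, estimate the time for size n2,
   assuming the implementation runs in time proportional to n^k (k = 2 for O(N^2)). */
double extrapolate(double t1, double n1, double n2, double k) {
    return t1 * pow(n2 / n1, k);
}

int main(void) {
    /* 60 s at N = 1000 with a quadratic algorithm: expect ~240 s (4 minutes) at N = 2000. */
    printf("%.0f s\n", extrapolate(60.0, 1000.0, 2000.0, 2.0));
    return 0;
}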
So I have an exam coming up, and a big part of it will be the quicksort algorithm. As everyone knows, the best-case scenario, and actually the average case, for this algorithm is O(n log n). The worst-case scenario would be O(n^2).
As for the worst-case scenario, I know how to explain it: it happens when the selected pivot is the smallest or the biggest value in the array; then we have n quicksort calls, each of which may take up to n time (I mean the partition operation). Am I right?
Now the best/average case. I've read Cormen's book, and I understood many things thanks to it, but for the quicksort algorithm he focuses on the mathematical formulas to explain the O(n log n) complexity. I just want to know why it is O(n log n), without getting into a mathematical proof. So far I've only seen the Wikipedia explanation that if we choose a pivot which divides the array into parts of size n/2 and n/2 + 1 each time, we get a call tree of depth log n, but I don't know whether that is true, and even if so, why the depth is log n.
I know that there are many materials covering quicksort on the internet, but they only cover the implementation, or just state the complexity without explaining it.
Am I right?
Yes.
we would have a call tree of depth logn but I don't know if that is true
It is.
why is it logn?
Because we partition the array in half at every step, resulting in a call graph of depth log n. From this Intro:
See the tree and its depth: it's log n. Think of it the same way as search in a BST costing log n, or why binary search in a sorted array also takes log n.
PS: Math tells the truth; invest in understanding it, and you shall become a better computer scientist! =)
In the best-case scenario, quicksort splits the current array 50% / 50% (in half) on each partition step, so the recursion depth is log2(n) (1/0.5 = 2); each level does O(n) partitioning work, and the constant base 2 is dropped, so the time complexity is O(n log(n)).
If each partition step produced a 20% / 80% split, the depth would be determined by the 80% side, giving O(n log1.25(n)) (1/0.8 = 1.25); the constant base 1.25 is dropped, so it's also O(n log(n)), even though it's about 3 times slower than the 50% / 50% partition case for sorting 1 million elements.
The O(n^2) time complexity occurs when the partition split only produces a linear reduction in partition size with each partition step. The simplest and worst case example is when only 1 element is removed per partition step.
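To make the partition-depth argument concrete, here is a minimal quicksort sketch in C (Lomuto partition with the last element as pivot; any textbook variant illustrates the same point), with the cost of each part noted in comments.

#include <stdio.h>

/* Lomuto partition: places the pivot (last element) in its final position
   and returns that position. Cost: one pass over a[lo..hi], i.e. O(length). */
static int partition(int a[], int lo, int hi) {
    int pivot = a[hi];
    int i = lo;
    for (int j = lo; j < hi; j++) {
        if (a[j] < pivot) {
            int t = a[i]; a[i] = a[j]; a[j] = t;
            i++;
        }
    }
    int t = a[i]; a[i] = a[hi]; a[hi] = t;
    return i;
}

/* Each level of recursion does O(n) total partitioning work.
   Balanced splits give about log2(n) levels -> O(n log n);
   if every pivot is the smallest or largest element, there are n levels -> O(n^2). */
void quicksort(int a[], int lo, int hi) {
    if (lo < hi) {
        int p = partition(a, lo, hi);
        quicksort(a, lo, p - 1);
        quicksort(a, p + 1, hi);
    }
}

int main(void) {
    int a[] = {5, 2, 9, 1, 7, 3};
    quicksort(a, 0, 5);
    for (int i = 0; i < 6; i++) printf("%d ", a[i]);  /* prints: 1 2 3 5 7 9 */
    printf("\n");
    return 0;
}

Feeding this version an already sorted array makes every pivot the maximum of its subarray, so each partition step removes only one element, which is exactly the O(n^2) case described above.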
What's the time complexity of the following program?
sum = 0;
for (i = 1; i <= 5; i++)
    sum = sum + i;
And how can this complexity be expressed using a logarithm? I would highly appreciate it if someone explained the complexity step by step, and furthermore how to express it in big-O notation and in terms of log n.
[Edited]
sum=0;      // 1 time
i=1;        // 1 time
i<=5;       // 6 times
i++         // 5 times
sum=sum+i;  // 5 times
Is the time complexity 18? Is that correct?
Preliminaries
Time complexity isn't usually expressed in terms of a specific integer, so the statement "The time complexity of operation X is 18" isn't clear without a unit, e.g., 18 "doodads".
One usually expresses time complexity as a function of the size of the input to some function/operation.
You often want to ignore the specific amount of time a particular operation takes, due to differences in hardware or even differences in constant factors between different languages. For example, summation is still O(n) (in general) in C and in Python (you still have to perform n additions), but differences in constant factors between the two languages will result in C being faster in terms of absolute time the operation takes to halt.
One also usually assumes that "Big-Oh" (e.g., O(f(n))) refers to the "worst-case" running time of an algorithm. There are other symbols used to study tighter upper and lower bounds.
Your question
Instead of summing from 1 to 5, let's look at summing from 1 to n.
The complexity of this is O(n) where n is the number of elements you're summing together.
Each addition (with +) takes constant time, which you're doing n times in this case.
However, this particular operation that you've shown can be accomplished in O(1) (constant time), because the sum of the numbers from 1 to n can be expressed as a single arithmetic operation. I'll leave the details of that up to you to figure out.
As far as expressing this in terms of logarithms: not exactly sure why you'd want to, but here goes:
Because exp(log(n)) is n, you could express it as O(exp(log(n))). Why would you want to do this? O(n) is perfectly understandable without needing to invoke log or exp.
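For completeness, here is a minimal C sketch contrasting the two approaches discussed above; it does spell out the closed-form detail that was left as an exercise, so skip it if you want to work that out yourself.

#include <stdio.h>

/* O(n): add the numbers 1..n one at a time. */
long sum_loop(long n) {
    long sum = 0;
    for (long i = 1; i <= n; i++)
        sum = sum + i;
    return sum;
}

/* O(1): the closed form n*(n+1)/2 is a single arithmetic expression. */
long sum_formula(long n) {
    return n * (n + 1) / 2;
}

int main(void) {
    printf("%ld %ld\n", sum_loop(5), sum_formula(5));  /* both print 15 */
    return 0;
}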
First of all, the loop runs 5 times for 5 inputs, hence it has a time complexity of O(n). I am assuming here that the values of i are the inputs to the sum.
Secondly, you can't just express time complexity "in log terms"; it should always be given in big-O notation. For example, if you perform a binary search, the worst-case time complexity of that algorithm is O(log n), because you get the result in, say, 3 iterations when the input array has 8 elements.
Complexity = log2(8) = 3
Here your complexity is expressed in terms of a logarithm.
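A minimal C sketch of the binary search just described; with 8 elements the loop body runs at most log2(8) = 3 times.

#include <stdio.h>

/* Iterative binary search on a sorted array.
   Each iteration halves the remaining range, so at most about log2(n) iterations run. */
int binary_search(const int *a, int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key)
            return mid;          /* found */
        else if (a[mid] < key)
            lo = mid + 1;        /* discard the left half */
        else
            hi = mid - 1;        /* discard the right half */
    }
    return -1;                   /* not found */
}

int main(void) {
    int a[8] = {1, 3, 5, 7, 9, 11, 13, 15};   /* n = 8, so at most log2(8) = 3 halvings */
    printf("%d\n", binary_search(a, 8, 11));  /* prints 5 */
    return 0;
}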
I understand that the worst/average/best case is used to express the running time of an algorithm as a function, but how is that used in asymptotic analysis? I understand that the upper/tight/lower bounds (big O, big Omega, big Theta) are used to compare two functions and to see how one grows relative to the other as n increases, but I'm having trouble seeing the difference between worst/average/best case big O and asymptotic analysis.

What exactly do we get out of putting our worst/average/best case big O into an asymptotic analysis and measuring bounds? Would we use asymptotic analysis specifically to compare the worst/average/best case big O of two algorithms? If so, do we use the function f(n) for algorithm 1 and g(n) for algorithm 2, or do we do a separate asymptotic analysis for each algorithm, where algorithm 1 is f(n) and we try to find some c*g(n) such that c*g(n) >= f(n), or such that c*g(n) <= f(n), and then do the same thing for algorithm 2? I'm not seeing the big picture here.
Since you want the big picture, let me try to give you the same.
Asymptotic analysis is used to study how the running time grows as the size of the input increases. This growth is studied in terms of the input size. The input size, usually denoted N or M, could mean anything from the number of numbers (as in sorting) to the number of nodes (as in graphs) or even the number of bits (as in multiplication of two numbers).
When dealing with asymptotic analysis, our goal is to find out which algorithm fares better in specific cases. Realize that an algorithm's running time can vary quite a bit even for inputs of the same size. To appreciate this, imagine you are a sorting machine. You will be given a set of numbers and you need to sort them. If I give you an already sorted list of numbers, you have no work to do; you are done already. If I give you a reverse-sorted list of numbers, imagine the number of operations you need to do to get the list sorted. Now that you see this, realize that we need a way of knowing what kind of input we will get. Would it be a best case? Would I get a worst-case input? To answer this, we need some knowledge of the distribution of the input. Will it all be worst cases? Or average cases? Or mostly best cases?
Knowledge of the input distribution is fairly difficult to ascertain in most cases. Then we are left with two options: either we can assume the average case all the time and analyze our algorithm, or we can get a guarantee on the running time irrespective of the input distribution. The former is referred to as average-case analysis, and to do such an analysis requires a formal definition of what makes an average case. Sometimes this is difficult to define and requires considerable mathematical insight. All the trouble is worth it when you know that some algorithm runs much faster in the average case than its worst-case running time suggests. Several randomized algorithms stand testimony to this. In such cases, an average-case analysis reveals their practical applicability.
The latter, worst-case analysis, is used more often since it provides a nice guarantee on the running time. In practice, coming up with the worst-case scenario is often fairly intuitive. Say you are the sorting machine: the worst case is something like a reverse-sorted array. What's the average case?
Yup, you have to think about it, right? Not so intuitive.
Best-case analysis is rarely used, as one does not usually get best cases. Still, one can do such an analysis and find interesting behavior.
In conclusion, when we have a problem that we want to solve, we come up with algorithms. Once we have an algorithm, we need to decide whether it is of any practical use in our situation. If so, we shortlist the algorithms that can be applied and compare them based on their time and space complexity. There could be more metrics for comparison (one such metric could be ease of implementation), but these two are fundamental. Depending on the situation at hand, you would employ either worst-case, average-case, or best-case analysis. For example, if you rarely have worst-case scenarios, it makes much more sense to carry out an average-case analysis. However, if the performance of our code is critical and we need to produce the output within a strict time limit, it is much more prudent to look at worst-case analysis. Thus, the analysis you make depends on the situation at hand, and with time, the intuition for which analysis to apply becomes second nature.
Please ask if you have more questions.
To know more about big-oh and the other notations read my answers here and here.
The Wikipedia article on quicksort provides a good example of how asymptotic analysis is used on the best/average/worst case: it has a worst case of O(n^2), an average case of O(n log n), and a best case of O(n log n). If you're comparing it to another algorithm (say, heapsort), you would compare apples to apples, e.g. you'd compare quicksort's worst-case big-theta to heapsort's worst-case big-theta, or quicksort's space big-oh to heapsort's space big-oh.
You can also compare big-theta to big-oh if you're interested in upper bounds, or big-theta to big-omega if you're interested in lower bounds.
Big-omega is usually only of theoretical interest - you're far more likely to see analysis in terms of big-oh or big-theta.
What exactly do we get out of imputing our worst/avg/best case big O into the asymptotic analysis and measuring bounds?
It gives you a basis for comparison when you are weighing different approaches to some problem.
Would we use asymptotic analysis to specifically compare two algorithms of worst/avg/best case big O?
Generally the worst case (big O) gets the most focus, compared to big Omega and big Theta.
Yes, we use the function f(n) for algorithm 1 and g(n) for algorithm 2, and these functions are the big-O bounds of their respective algorithms.
How is it different from asymptotic analysis? When do you use it, and why?
I've read some articles that seem to have been written well, like these:
http://www.ugrad.cs.ubc.ca/~cs320/2010W2/handouts/aa-nutshell.pdf
http://www.cs.princeton.edu/~fiebrink/423/AmortizedAnalysisExplained_Fiebrink.pdf
but I've still not fully understood these concepts.
So, can anyone please simplify it for me?
Amortized analysis doesn't naively multiply the number of invocations with the worst case for one invocation.
For example, for a dynamic array that doubles in size when needed, normal asymptotic analysis would only conclude that adding an item to it costs O(n), because it might need to grow and copy all elements to the new array. Amortized analysis takes into account that in order to have to grow, n/2 items must have been added without causing a grow since the previous grow, so adding an item really only takes O(1) (the cost of O(n) is amortized over n/2 actions).
Amortized analysis is not the same as "average performance": amortized analysis gives a hard guarantee on what the performance will be if you perform that many actions.
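Here is a minimal C sketch of such a doubling array, just to make the growth pattern visible; the struct and function names are mine, not from any particular library.

#include <stdio.h>
#include <stdlib.h>

/* A growable int array that doubles its capacity when full.
   A single push is O(n) in the worst case (the copy), but any sequence of n pushes
   does O(n) total copying, so the amortized cost per push is O(1). */
typedef struct {
    int   *data;
    size_t size;
    size_t capacity;
} vec;

void vec_init(vec *v) {
    v->data = NULL;
    v->size = 0;
    v->capacity = 0;
}

void vec_push(vec *v, int x) {
    if (v->size == v->capacity) {                 /* grow: happens rarely */
        size_t new_cap = v->capacity ? v->capacity * 2 : 1;
        int *p = realloc(v->data, new_cap * sizeof *p);
        if (!p) { perror("realloc"); exit(1); }
        v->data = p;
        v->capacity = new_cap;
    }
    v->data[v->size++] = x;                       /* the common O(1) case */
}

int main(void) {
    vec v;
    vec_init(&v);
    for (int i = 0; i < 1000000; i++)             /* only ~20 doublings happen in total */
        vec_push(&v, i);
    printf("size=%zu capacity=%zu\n", v.size, v.capacity);
    free(v.data);
    return 0;
}

Over the n pushes the copies cost 1 + 2 + 4 + ... < 2n element moves in total, which is where the O(1) amortized bound per push comes from.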
There are a lot of answers to "what", but none to "why".
As everyone else has said, asymptotic analysis is about how the performance of a given operation scales to a large data set. Amortized analysis is about how the average performance of all of the operations on a large data set scales. Amortized analysis never gives worse bounds than asymptotic analysis, and sometimes gives much better ones.
If you are concerned with the total running time of a longer job, the better bounds of amortized analysis are probably what you care about. That is why scripting languages (for instance) are often happy to grow arrays and hash tables by some factor, even though that is an expensive operation. (The growing can be an O(n) operation, but the amortized cost is O(1) because you do it rarely.)
If you are doing real time programming (individual operations must complete in a predictable time), then the better bounds from amortized analysis don't matter. It doesn't matter if the operation on average was fast, if you failed to finish it in time to get back and adjust the bandsaw before it cut too far...
Which one matters in your case depends on exactly what your programming problem is.
Asymptotic analysis
This term refers to the analysis of algorithm performance under the assumption that the data the algorithm operates on (the input) is, in layman's terms, "large enough that making it larger will not change the conclusion". Although the exact size of the input does not need to be specified (we only need an upper bound), the data set itself has to be specified.
Note that so far we have only talked about the method of analysis; we have not specified exactly which quantity we are analyzing (time complexity? space complexity?), and neither have we specified which metric we are interested in (worst case? best case? average?).
In practice the term asymptotic analysis commonly refers to an upper bound on the time complexity of an algorithm, i.e. the worst-case performance measured by total running time, which is represented by big-Oh notation (e.g. a sorting algorithm might be O(n log n)).
Amortized analysis
This term refers to the analysis of algorithm performance based on a specific sequence of operations that targets the worst case scenario -- that is, amortized analysis does imply that the metric is worst case performance (although it still does not say which quantity is being measured). To perform this analysis, we need to specify the size of the input, but we do not need to make any assumptions about its form.
In layman's terms, amortized analysis is picking an arbitrary size for the input and then "playing through" the algorithm. Whenever a decision that depends on the input must be made, the worst path is taken¹. After the algorithm has run to completion we divide the calculated complexity by the size of the input to produce the final result.
¹note: To be precise, the worst path that is theoretically possible. If you have a vector that dynamically doubles in size each time its capacity is exhausted, "worst case" does not mean to assume that it will need to double upon every insertion because the insertions are processed as a sequence. We are allowed to (and indeed must) use known state to mathematically eliminate as many "even worse" cases as we can, even while the input remains unknown.
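For the doubling vector, the aggregate-method calculation behind that kind of reasoning looks roughly like this for n insertions into an initially empty vector (a sketch, not the only way to do it):

\sum_{i=1}^{n} c_i \;\le\; \underbrace{n}_{\text{element writes}} \;+\; \underbrace{\sum_{j=0}^{\lfloor \log_2 n \rfloor} 2^{j}}_{\text{copies at each doubling}} \;<\; n + 2n \;=\; 3n,

so the amortized cost per insertion is at most 3, i.e. O(1), even though an individual insertion can cost O(n).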
The most important difference
The critical difference between asymptotic and amortized analysis is that the former is dependent on the input itself, while the latter is dependent on the sequence of operations the algorithm will execute.
Therefore:
asymptotic analysis allows us to assert that the complexity of the algorithm when it is given a best/worst/average case input of size approaching N is bounded by some function F(N) -- where N is a variable
amortized analysis allows us to assert that the complexity of the algorithm when it is given an input of unknown characteristics but known size N is no worse than the value of a function F(N) -- where N is a known value
The answer to this is succinctly defined by the first sentence of the Amortized Analysis chapter in the book - Introduction to Algorithms:
In an amortized analysis, the time required to perform a sequence of data-structure operations is averaged over all the operations performed.
We represent the growth of a program's complexity by asymptotic analysis, which bounds the program's growth by a function and characterizes the worst, best, or average case.
But this can be misleading when there is only one case in which the program's complexity reaches a peak, while in general the program doesn't take much computation.
Hence, it makes more sense to average the cost over a sequence of operations, even though a single operation might be expensive. This is Amortized Analysis!
Amortized analysis is an alternative to the asymptotic technique for calculating complexity. It helps us calculate a complexity that is truer to practice, so that we can compare and decide between two or more algorithms.
The best reference I've found so far for understanding the amortized analysis of algorithms, is in the book Introduction to Algorithms, third edition, chapter 17: "Amortized Analysis". It's all there, explained much better than what can be found in a Stack Overflow post. You'll find the book in the library of any decent University.
Regular asymptotic analysis looks at the performance of an individual operation asymptotically, as a function of the size of the problem. The O() notation is what indicates an asymptotic analysis.
Amortized analysis (which is also an asymptotic analysis) looks at the total performance of multiple operations on a shared datastructure.
The difference is, amortized analysis typically proves that the total computation required for M operations has a better performance guarantee than M times the worst case for the individual operation.
For example, an individual operation on a splay tree of size N can take up to O(N) time. However, a sequence of M operations on a tree of size N is bounded by O(M(1 + log N) + N log N) time, which is roughly O(log N) per operation. Note also that an amortized analysis is much stricter than an "average-case" analysis: it proves that any possible sequence of operations will satisfy its asymptotic worst-case bound.
Amortised analysis deals with the total cost over a number of runs of the routine and the benefits that can be gained from that perspective. For example, searching an unsorted array of n items for a single match may take up to n comparisons and hence has O(n) complexity. However, if we know the same array is going to be searched for m items, repeating the whole task m times would have complexity O(m*n). If instead we sort the array in advance, the sort costs O(n log(n)) and each subsequent search takes only O(log(n)) on the sorted array. Thus the total amortised cost of m searches with this approach is O(n*log(n) + m*log(n)). If m is on the order of n or larger, this comes to O(m log(n)) with pre-sorting, compared to O(m*n) without sorting (O(n log n) versus O(n^2) when m = n). Thus the amortised cost is cheaper.
Put simply, by spending a bit extra early on, we can save a lot later.
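A small C sketch of the pre-sort-then-search idea, using the standard library's qsort and bsearch (the array contents are just for illustration).

#include <stdio.h>
#include <stdlib.h>

/* Comparator for qsort/bsearch over ints. */
static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

int main(void) {
    int a[] = {42, 7, 19, 3, 25, 11, 8, 30};
    size_t n = sizeof a / sizeof a[0];
    int queries[] = {19, 8, 100};
    size_t m = sizeof queries / sizeof queries[0];

    /* Pay O(n log n) once up front... */
    qsort(a, n, sizeof a[0], cmp_int);

    /* ...so that each of the m lookups costs only O(log n). */
    for (size_t i = 0; i < m; i++) {
        int *hit = bsearch(&queries[i], a, n, sizeof a[0], cmp_int);
        printf("%d %s\n", queries[i], hit ? "found" : "not found");
    }
    return 0;
}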