The concept of every-case time complexity of an algorithm

This is a slide from my lecture notes. I understand that if an algorithm's best-case and worst-case complexity are the same then it has "every-case" complexity. However I do not fully understand this concept. I tried researching online but it didn't help.
Can somebody explain, with some other examples, what every-case time complexity means more generally?

You found that the lower bound, the asymptotic minimum number of operations needed to add the array members, is Ω(n).
You found that the upper bound, the asymptotic maximum number of operations needed to add the array members, is O(n).
So every other case is squeezed between Ω(n) and O(n), which means every case is Θ(n) too.

The concept is from Neapolitan's book Foundations of Algorithms (chapter 1) and applies to algorithms that perform the same steps every time, regardless of the input values (e.g. adding a number to every element of an array). Note that not every algorithm can be analyzed with every-case analysis. HTH
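To make the contrast concrete, here is a small sketch of my own (not from the book): the first function has every-case complexity Θ(n) because it does exactly the same work for every input of size n, while the second has no every-case complexity because its best and worst cases differ:

```python
def add_to_all(a, x):
    """Every-case Theta(n): for an array of length n the loop always runs
    exactly n times, no matter what values the array holds."""
    for i in range(len(a)):
        a[i] += x
    return a

def linear_search(a, x):
    """No every-case complexity: best case O(1) (x is the first element),
    worst case O(n) (x is absent), so best and worst cases do not coincide."""
    for i, v in enumerate(a):
        if v == x:
            return i
    return -1
```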

Related

When (not how or why) to calculate Big O of an algorithm

I was asked this question in an interview recently and was curious as to what others thought.
"When should you calculate Big O?"
Most sites/books talk about HOW to calculate Big O but not actually when you should do it. I'm an entry-level developer with minimal experience, so I'm not sure if I'm thinking on the right track. My thinking is that you would have a target Big O to work towards, develop the algorithm, then calculate its Big O, and then try to refactor the algorithm for efficiency.
My question then becomes is this what actually happens in industry or am I far off?
"When should you calculate Big O?"
When you care about the Time Complexity of the algorithm.
When do I care?
When you need your algorithm to scale, meaning that it's expected to receive big datasets as input (e.g. the number of points and the number of dimensions in a nearest-neighbour algorithm).
Most notably, when you want to compare algorithms!
You are asked to do a task to which several algorithms can be applied. Which one do you choose? You compare their space, time, and development/maintenance complexities, and choose the one that best fits your needs.
Big O or asymptotic notations allow us to analyze an algorithm's running time by identifying its behavior as the input size for the algorithm increases.
So whenever you need to analyse your algorithm's behavior with respect to growth of the input, you will calculate this. Let me give you an example -
Suppose you need to query over 1 billion records. So you write a linear search algorithm. Is that okay? How would you know? You calculate Big-O: it's O(n) for linear search, so in the worst case it would execute 1 billion instructions per query. If your machine executes 10^7 instructions per second (let's assume), that query would take 100 seconds. So you see: you get a runtime analysis in terms of the growth of the input.
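A back-of-envelope version of that estimate (the 10^7 instructions-per-second figure is just the assumption made above), with the same calculation for a binary search over sorted data added for comparison:

```python
import math

n = 10**9                                    # 1 billion records
ops_per_second = 10**7                       # assumed machine speed

linear_search_ops = n                        # O(n): worst case scans everything
binary_search_ops = math.ceil(math.log2(n))  # O(log n): requires sorted data

print(linear_search_ops / ops_per_second, "seconds")   # 100.0 seconds
print(binary_search_ops / ops_per_second, "seconds")   # about 3e-06 seconds
```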
When we are solving an algorithmic problem we want to evaluate the algorithm independently of the hardware it runs on. So we have certain asymptotic notations with which we can define the time and space complexities of our algorithm.
Theta notation: a tight bound; it bounds the function from both above and below.
Omega notation: bounds the function from below; it is often quoted for best-case running time.
Big-O notation: bounds the function from above; this is the most commonly used one, usually quoted for the worst case.
Now I think the answer to why Big-O is calculated is that it gives us a fair idea of how badly our algorithm can perform as the input size increases. And if we optimize our algorithm for the worst case, the average and best cases will take care of themselves.
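For reference, these notations have precise definitions; for functions f and g mapping input sizes to non-negative running times:

```latex
\begin{aligned}
f(n) \in O(g(n))      &\iff \exists\, c > 0,\ n_0 \ \text{such that}\ f(n) \le c\,g(n) \ \text{for all } n \ge n_0 \\
f(n) \in \Omega(g(n)) &\iff \exists\, c > 0,\ n_0 \ \text{such that}\ f(n) \ge c\,g(n) \ \text{for all } n \ge n_0 \\
f(n) \in \Theta(g(n)) &\iff f(n) \in O(g(n)) \ \text{and}\ f(n) \in \Omega(g(n))
\end{aligned}
```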
I assume that you want to ask "when should I calculate time complexity?", just to avoid technicalities about Theta, Omega and Big-O.
The right attitude is to guess it almost always. Notable exceptions are pieces of code that you will run just once and that you are sure will never receive a bigger input.
The emphasis on guess is because it does not matter that much whether the complexity is constant or logarithmic. There is also little difference between O(n^2) and O(n^2 log n), or between O(n^3) and O(n^4). But there is a big difference between constant and linear.
The main goal of the guess is to answer the question: "What happens if I get a 10 times larger input?". If the complexity is constant, nothing happens (in theory at least). If the complexity is linear, you get a 10 times larger running time. If the complexity is quadratic or worse, you start to have problems.
The secondary goal of the guess is to answer the question: "What is the biggest input I can handle?". Again, quadratic will get you up to about 10,000 at most, and O(2^n) ends around 25.
This might sound scary and time consuming, but in practice, getting time complexity of the code is rather trivial, since most of the things are either constant, logarithmic or linear.
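To make those guesses concrete, here is a small sketch that estimates the biggest input each growth rate can handle; the budget of 10^8 elementary operations is an arbitrary assumption, chosen to roughly match the 10,000 and ~25 figures above:

```python
import math

BUDGET = 10**8          # assumed operation budget; adjust for your machine

def largest_n(cost, hi=10**9):
    """Binary search for the largest n in [1, hi] with cost(n) <= BUDGET."""
    lo = 1
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if cost(mid) <= BUDGET:
            lo = mid
        else:
            hi = mid - 1
    return lo

for name, cost in [("n",       lambda n: n),
                   ("n log n", lambda n: n * math.log2(n)),
                   ("n^2",     lambda n: n**2),
                   ("2^n",     lambda n: 2**n if n < 64 else float("inf"))]:
    print(name, largest_n(cost))
```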
It represents the upper bound.
Big-O is the most useful because it represents the worst-case behavior. It guarantees that the program will terminate within a certain time period; it may stop earlier, but never later.
It gives the worst-case time complexity, i.e. the maximum time required to execute the algorithm.

Big O algorithms minimum time

I know that for some problems, no matter what algorithm you use to solve them, there will always be a certain minimum amount of time required. I know Big O captures the worst case (maximum time needed), but how can you find the minimum time required as a function of n? Can we find the minimum time needed for sorting n integers, or perhaps finding the minimum of n integers?
What you are looking for is called best-case complexity. It is a mostly useless analysis for algorithms, while worst-case analysis is the most important one and average-case analysis is sometimes used in special scenarios.
The best-case complexity depends on the algorithm. For example, in a linear search the best case is when the searched number is at the beginning of the array, and in a binary search it is when the number sits at the first dividing point. In these cases the complexity is O(1).
For a single problem, the best-case complexity may vary depending on the algorithm. For example, let's discuss some basic sorting algorithms.
In bubble sort the best case is when the array is already sorted, but even then you have to check every element to be sure, so the best case here is O(n). The same goes for insertion sort.
For quicksort/mergesort/heapsort the best-case complexity is O(n log n).
For selection sort it is O(n^2).
So from the above cases you can see that the complexity (whether best, worst, or average) depends on the algorithm, not on the problem.
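To illustrate the bubble-sort best case above, here is a small sketch using the usual early-exit flag; on already-sorted input it stops after one O(n) pass, while on reversed input it does roughly n^2/2 comparisons:

```python
def bubble_sort_comparisons(a):
    """Bubble sort with an early-exit flag: O(n) best case (already sorted),
    O(n^2) worst case. Returns the number of comparisons performed."""
    a = list(a)
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:        # a full pass with no swaps: the array is sorted
            break
    return comparisons

print(bubble_sort_comparisons(list(range(1000))))         # sorted: 999 comparisons
print(bubble_sort_comparisons(list(range(1000, 0, -1))))  # reversed: 499500 comparisons
```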

Need an efficient selection algorithm?

I am looking for an algorithm for selecting the A[N/4]-th element in an unsorted array A, where N is the number of elements of A. I want the algorithm to do the selection in sublinear time. I have knowledge of basic structures like a BST, etc. Which algorithm would be best for me, keeping in mind that I want it to be as fast as possible and not too tough to implement? Here N can vary up to 250,000. Any help will be highly appreciated. Note the array can have non-unique elements.
As @Jerry Coffin mentioned, you cannot hope to get a sublinear-time algorithm here unless you are willing to do some preprocessing up front. If you want a linear-time algorithm for this problem, you can use the quickselect algorithm, which runs in expected O(n) time with an O(n^2) worst case. The median-of-medians algorithm has worst-case O(n) behavior, but a high constant factor. One algorithm that you might find useful is introselect, which combines the two previous algorithms to get a worst-case O(n) algorithm with a low constant factor. This is typically what's used to implement std::nth_element in the C++ standard library.
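For illustration, here is a minimal quickselect sketch in Python (a random pivot plus a three-way partition to cope with the non-unique elements you mention; a production introselect would add a median-of-medians fallback, omitted here):

```python
import random

def quickselect(a, k):
    """Return the k-th smallest element (0-based) of sequence a.
    Expected O(n) time; O(n^2) worst case without introselect's fallback."""
    a = list(a)                      # work on a copy
    lo, hi = 0, len(a) - 1
    while True:
        if lo == hi:
            return a[lo]
        pivot = a[random.randint(lo, hi)]
        # three-way partition around the pivot (handles duplicates)
        lt, i, gt = lo, lo, hi
        while i <= gt:
            if a[i] < pivot:
                a[lt], a[i] = a[i], a[lt]
                lt += 1
                i += 1
            elif a[i] > pivot:
                a[i], a[gt] = a[gt], a[i]
                gt -= 1
            else:
                i += 1
        if k < lt:
            hi = lt - 1          # answer lies in the "< pivot" block
        elif k > gt:
            lo = gt + 1          # answer lies in the "> pivot" block
        else:
            return pivot         # answer equals the pivot

# N/4-th element (0-based) of an unsorted array, as in the question:
data = [7, 1, 5, 5, 3, 9, 2, 8]
print(quickselect(data, len(data) // 4))
```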
If you are willing to do some preprocessing ahead of time, you can put all of the elements into an order statistic tree. From that point forward, you can look up the kth element for any k in time O(log n) worst-case. The preprocessing time required is O(n log n), though, so unless you are making repeated queries this is unlikely to be the best option.
Hope this helps!

What is amortized analysis of algorithms? [closed]

How is it different from asymptotic analysis? When do you use it, and why?
I've read some articles that seem to have been written well, like these:
http://www.ugrad.cs.ubc.ca/~cs320/2010W2/handouts/aa-nutshell.pdf
http://www.cs.princeton.edu/~fiebrink/423/AmortizedAnalysisExplained_Fiebrink.pdf
but I've still not fully understood these concepts.
So, can anyone please simplify it for me?
Amortized analysis doesn't naively multiply the number of invocations with the worst case for one invocation.
For example, for a dynamic array that doubles in size when needed, normal asymptotic analysis would only conclude that adding an item to it costs O(n), because it might need to grow and copy all elements to the new array. Amortized analysis takes into account that in order to have to grow, n/2 items must have been added without causing a grow since the previous grow, so adding an item really only takes O(1) (the cost of O(n) is amortized over n/2 actions).
Amortized analysis is not the same as "average performance": amortized analysis gives a hard guarantee on what the performance will be if you do that many actions.
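To see that guarantee in numbers, here is a toy doubling array (a hypothetical illustration, not how any particular runtime implements its lists) that counts the element copies caused by n appends; the total stays under 2n, i.e. O(1) per append amortized:

```python
class DynamicArray:
    """Toy dynamic array that doubles its capacity when full, counting
    element copies to illustrate amortized O(1) appends."""
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None] * self.capacity
        self.copies = 0                      # elements moved during grows

    def append(self, x):
        if self.size == self.capacity:       # grow: O(n) for this one call...
            self.capacity *= 2
            new_data = [None] * self.capacity
            for i in range(self.size):
                new_data[i] = self.data[i]
                self.copies += 1
            self.data = new_data
        self.data[self.size] = x
        self.size += 1

arr = DynamicArray()
for i in range(1_000_000):
    arr.append(i)
# ...but the total copy work stays below 2n, i.e. O(1) per append on average.
print(arr.copies / arr.size)                 # prints a value below 2
```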
There are a lot of answers to "what", but none to "why".
As everyone else has said, asymptotic analysis is about how the performance of a given operation scales to a large data set. Amortized analysis is about how the average of the performance of all of the operations on a large data set scales. Amortized analysis never gives worse bounds than asymptotic, and sometimes gives much better ones.
If you are concerned with the total running time of a longer job, the better bounds of amortized analysis are probably what you care about. Which is why scripting languages (for instance) are often happy to grow arrays and hash tables by some factor even though that is an expensive operation. (The growing can be a O(n) operation, but amortized is O(1) because you do it rarely.)
If you are doing real time programming (individual operations must complete in a predictable time), then the better bounds from amortized analysis don't matter. It doesn't matter if the operation on average was fast, if you failed to finish it in time to get back and adjust the bandsaw before it cut too far...
Which one matters in your case depends on exactly what your programming problem is.
Asymptotic analysis
This term refers to the analysis of algorithm performance under the assumption that the data the algorithm operates on (the input) is, in layman's terms, "large enough that making it larger will not change the conclusion". Although the exact size of the input does not need to be specified (we only need an upper bound), the data set itself has to be specified.
Note that so far we have only talked about the method of analysis; we have not specified exactly which quantity we are analyzing (time complexity? space complexity?), and neither have we specified which metric we are interested in (worst case? best case? average?).
In practice the term asymptotic analysis commonly refers to the upper-bound time complexity of an algorithm, i.e. the worst-case performance measured by total running time, which is represented by big-O notation (e.g. a sorting algorithm might be O(n log n)).
Amortized analysis
This term refers to the analysis of algorithm performance based on a specific sequence of operations that targets the worst case scenario -- that is, amortized analysis does imply that the metric is worst case performance (although it still does not say which quantity is being measured). To perform this analysis, we need to specify the size of the input, but we do not need to make any assumptions about its form.
In layman's terms, amortized analysis is picking an arbitrary size for the input and then "playing through" the algorithm. Whenever a decision that depends on the input must be made, the worst path is taken¹. After the algorithm has run to completion we divide the calculated complexity by the size of the input to produce the final result.
¹note: To be precise, the worst path that is theoretically possible. If you have a vector that dynamically doubles in size each time its capacity is exhausted, "worst case" does not mean to assume that it will need to double upon every insertion because the insertions are processed as a sequence. We are allowed to (and indeed must) use known state to mathematically eliminate as many "even worse" cases as we can, even while the input remains unknown.
The most important difference
The critical difference between asymptotic and amortized analysis is that the former is dependent on the input itself, while the latter is dependent on the sequence of operations the algorithm will execute.
Therefore:
asymptotic analysis allows us to assert that the complexity of the algorithm when it is given a best/worst/average case input of size approaching N is bounded by some function F(N) -- where N is a variable
amortized analysis allows us to assert that the complexity of the algorithm when it is given an input of unknown characteristics but known size N is no worse than the value of a function F(N) -- where N is a known value
The answer to this is succinctly defined by the first sentence of the Amortized Analysis chapter in the book - Introduction to Algorithms:
In an amortized analysis, the time required to perform a sequence of data-structure operations is averaged over all the operations performed.
We represent a program's growth in complexity with asymptotic analysis, which bounds the program's growth by a function and characterizes the worst, best, or average case.
But this can be misleading when there is just one case in which the program's complexity reaches a peak, while in general the program doesn't take much computation.
Hence, it makes more sense to average the cost over a sequence of operations, even though a single operation might be expensive. This is Amortized Analysis!
Amortized analysis is an alternative to the plain asymptotic technique for calculating complexity. It helps us calculate a more realistic complexity in practical terms, so that we can compare and decide between two or more algorithms.
The best reference I've found so far for understanding the amortized analysis of algorithms, is in the book Introduction to Algorithms, third edition, chapter 17: "Amortized Analysis". It's all there, explained much better than what can be found in a Stack Overflow post. You'll find the book in the library of any decent University.
Regular asymptotic analysis looks at the performance of an individual operation asymptotically, as a function of the size of the problem. The O() notation is what indicates an asymptotic analysis.
Amortized analysis (which is also an asymptotic analysis) looks at the total performance of multiple operations on a shared datastructure.
The difference is, amortized analysis typically proves that the total computation required for M operations has a better performance guarantee than M times the worst case for the individual operation.
For example, an individual operation on a splay tree of size N can take up to O(N) time. However, a sequence of M operations on a tree of size N is bounded by O( M(1+log N) + N log N ) time, which is roughly O(log N) per operation. However, note that an amortized analysis is much stricter than an "average-case" analysis: it proves that any possible sequence of operations will satisfy its asymptotic worst case.
Amortised analysis deals with the total cost over a number of runs of the routine, and the benefits that can be gained therein. For example, searching an unsorted array of n items for a single match may take up to n comparisons and hence has O(n) complexity. However, if we know the same array is going to be searched for m items, repeating the whole task would have complexity O(m*n). If instead we sort the array in advance, the cost is O(n log(n)), and each successive search takes only O(log(n)) on the sorted array. Thus the total amortised cost for m searches with this approach is O(n*log(n) + m*log(n)). If m >= n, this is O(m*log(n)) with pre-sorting compared to O(m*n) without sorting (for m close to n, that is O(n log(n)) versus O(n^2)). Thus the amortised cost is cheaper.
Put simply, by spending a bit extra early on, we can save a lot later.
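A small sketch of that trade-off (the function names are mine, and Python's `in` operator on a list stands in for the O(n) linear scan):

```python
import bisect

def contains_each_linear(arr, queries):
    """m linear searches over an unsorted array: O(m * n) total."""
    return [q in arr for q in queries]       # `in` on a list scans linearly

def contains_each_presorted(arr, queries):
    """Sort once (O(n log n)), then answer each query with binary search
    (O(log n) each): O(n log n + m log n) total, cheaper when m is large."""
    s = sorted(arr)
    out = []
    for q in queries:
        i = bisect.bisect_left(s, q)
        out.append(i < len(s) and s[i] == q)
    return out

data = list(range(100_000))
queries = [5, 500, 123_456]
print(contains_each_presorted(data, queries))    # [True, True, False]
```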

Is it possible to calculate median of a list of numbers better than O(n log n)?

I know that it is possible to calculate the mean of a list of numbers in O(n). But what about the median? Is there any better algorithm than sort (O(n log n)) and lookup middle element (or mean of two middle elements if an even number of items in list)?
Yes. You can do it (deterministically) in O(n).
What you're talking about is a selection algorithm, where k = n/2. There is a method based on the same partitioning function used in quicksort which works. It is called, not surprisingly, quickselect. While it can, like quicksort, have an O(n^2) worst case, this can be brought down to linear time using proper pivot selection.
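If you'd rather not implement the partitioning yourself, one concrete option is NumPy's partition function, whose default algorithm is introselect; a minimal sketch (the helper name median_linear is mine):

```python
import numpy as np

def median_linear(a):
    """Median via selection instead of a full sort.
    np.partition uses introselect, so each call is O(n) on average."""
    a = np.asarray(a)
    n = len(a)
    mid = n // 2
    if n % 2:                                # odd length: single middle element
        return np.partition(a, mid)[mid]
    # even length: partition on both middle ranks and average them
    part = np.partition(a, [mid - 1, mid])
    return 0.5 * (part[mid - 1] + part[mid])

print(median_linear([7, 1, 5, 3, 9]))    # 5
print(median_linear([7, 1, 5, 3]))       # 4.0
```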
Partially irrelevant, but: a quick tip on how to quickly find answers to common basic questions like this on the web.
We're talking about medians? So go to the page about medians on Wikipedia.
Search page for algorithm:
Efficient computation of the sample median
Even though sorting n items takes in general O(n log n) operations, by using a "divide and conquer" algorithm the median of n items can be computed with only O(n) operations (in fact, you can always find the k-th element of a list of values with this method; this is called the selection problem).
Follow the link to the selection problem for the description of algorithm. Read intro:
... There are worst-case linear time selection algorithms. ...
And if you're interested read about the actual ingenious algorithm.
If the numbers are discrete (e.g. integers) and there is a manageable number of distinct values, you can use a "bucket sort" which is O(N), then iterate over the buckets to figure out which bucket holds the median. The complete calculation is O(N) in time and O(B) in space.
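A minimal sketch of that approach for non-negative integers with a small range of distinct values (it returns the lower median when N is even, to keep the example short):

```python
def median_of_small_ints(values, max_value):
    """Median of non-negative integers bounded by max_value.
    O(N + B) time and O(B) space, where B = max_value + 1 buckets."""
    counts = [0] * (max_value + 1)
    for v in values:
        counts[v] += 1
    target = (len(values) + 1) // 2      # 1-based rank of the (lower) median
    seen = 0
    for bucket, c in enumerate(counts):
        seen += c
        if seen >= target:
            return bucket

print(median_of_small_ints([3, 1, 4, 1, 5, 9, 2, 6, 5], max_value=9))  # 4
```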
Just for fun (and who knows, it may be faster) there's another randomized median algorithm, explained technically in Mitzenmacher and Upfal's book. Basically, you choose a polynomially smaller subset of the list that (with some fancy bookwork) probably contains the real median, and then use it to find the real median. The book is on Google Books, and here's a link. Note: I was able to read the pages describing the algorithm, so assuming Google Books reveals the same pages to everyone, you can read them too.
It is a randomized algorithm such that if it finds the answer, it is 100% certain to be the correct answer (this is called Las Vegas style). The randomness arises in the runtime: occasionally (with probability 1/sqrt(n), I think) it FAILS to find the median and must be re-run.
Asymptotically it is exactly linear once you take the chance of failure into account; that is to say, a single run is a wee bit less than linear, exactly such that when you account for the number of times you may need to re-run it, the total becomes linear.
Note: I'm not saying this is better or worse --- I certainly haven't done a real-life runtime comparison between these algorithms! I'm simply presenting an additional algorithm that has linear runtime, but works in a significantly different way.
This link has popped up recently on calculating median: http://matpalm.com/median/question.html .
In general I think you can't go beyond O(n log n) time, but I don't have any proof of that :). No matter how much you parallelize it, aggregating the results into a single value takes at least log n levels of execution.
Try the randomized sampling algorithm: the sample size (e.g. 2000) is independent of the data size n and still gives sufficiently high (99%) accuracy. If you need higher accuracy, just increase the sample size. A Chernoff bound can prove the failure probability for a given sample size. I've written some JavaScript code to implement the algorithm; feel free to take it. http://www.sfu.ca/~wpa10
