Complexity of Heapsort algorithm

The book says:
The worst-case running time of heapsort is Ω(n lg n). This is clear since
sorting has a lower bound of Ω(n lg n).
But can someone help me and show me explicitly that this lower bound of Ω(n lg n) really holds?

It sounds like the book is drawing on the fact that heapsort is a comparison-based sorting algorithm. This class of sorting algorithms has already been shown to have a lower bound of Ω(n lg n):
http://en.wikipedia.org/wiki/Comparison_sort#Number_of_comparisons_required_to_sort_a_list
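For completeness, here is the standard decision-tree argument behind that bound (a sketch of the classical proof, not anything specific to heapsort):

Any comparison sort on $n$ distinct keys can be modeled as a binary decision tree whose leaves correspond to the $n!$ possible output permutations. A binary tree of height $h$ has at most $2^h$ leaves, so the worst-case number of comparisons $h$ satisfies
$$2^h \ge n! \quad\Longrightarrow\quad h \ge \log_2(n!) = \Omega(n \log n),$$
where the last step follows from Stirling's approximation, $\log_2(n!) = n\log_2 n - n\log_2 e + O(\log n)$.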

Related

Time complexity of quicksort

I have been learning about time complexity recently. While trying to find the Big O of quicksort, I saw sources on the Internet saying that quicksort has O(n log n) best-case performance and O(n^2) worst-case performance. From what I have understood, Big O notation describes the upper bound of an algorithm, and it states its worst-case performance. Why do people say "quicksort has O(n log n) best-case performance"? I thought Big O was used to describe the worst-case performance.
This happens because the worst case rarely occurs in QuickSort: you have to be really unlucky for the partitioning to always pick the smallest or largest element of the input as the pivot.
Since the average-case performance, which is equal to the best case, is what happens most of the time, people usually mention that instead, or mention both. I think many programming languages implement their built-in sorting functions with QuickSort (though not for every type of input), which shows how strong this algorithm is.
You may also find this article useful: Analysis of quicksort
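To make the best-case/worst-case distinction concrete, here is a small illustrative quicksort sketch of my own (not any library's implementation) that counts comparisons. With a fixed first-element pivot, an already sorted input degrades to roughly n^2/2 comparisons, while a random pivot typically stays on the order of n log n:

import random
import sys

def quicksort(arr, counter, randomized=False):
    """Return a sorted copy of arr; counter[0] accumulates (roughly) one
    comparison per element partitioned."""
    if len(arr) <= 1:
        return arr
    pivot_index = random.randrange(len(arr)) if randomized else 0
    pivot = arr[pivot_index]
    rest = arr[:pivot_index] + arr[pivot_index + 1:]
    counter[0] += len(rest)                      # one comparison per partitioned element
    less = [x for x in rest if x < pivot]
    greater = [x for x in rest if x >= pivot]
    return (quicksort(less, counter, randomized)
            + [pivot]
            + quicksort(greater, counter, randomized))

if __name__ == "__main__":
    sys.setrecursionlimit(10_000)                # the bad-pivot case recurses n deep
    sorted_input = list(range(2_000))

    worst, typical = [0], [0]
    quicksort(sorted_input, worst, randomized=False)    # fixed first-element pivot
    quicksort(sorted_input, typical, randomized=True)   # random pivot

    print("first-element pivot on sorted input:", worst[0])    # roughly n^2 / 2
    print("random pivot on sorted input:       ", typical[0])  # on the order of n log n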

Is an algorithm with a worst-case time complexity of O(n) always faster than an algorithm with a worst-case time complexity of O(n^2)?

This question has appeared in my algorithms class. Here's my thought:
I think the answer is no, an algorithm with worst-case time complexity of O(n) is not always faster than an algorithm with worst-case time complexity of O(n^2).
For example, suppose we have total-time functions S(n) = 99999999n and T(n) = n^2. Then clearly S(n) = O(n) and T(n) = O(n^2), but T(n) is faster than S(n) for all n < 99999999.
Is this reasoning valid? I'm slightly worried that, while this is a counterexample, it might be a counterexample to the wrong claim.
Thanks so much!
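For a quick numeric check of the counterexample above (using the same hypothetical constants from the question):

def S(n):  # total-time function of the O(n) algorithm from the question
    return 99999999 * n

def T(n):  # total-time function of the O(n^2) algorithm from the question
    return n * n

# T is smaller (i.e. "faster") for every n below the crossover point n = 99999999.
for n in (10, 1000, 99999998, 99999999, 100000000):
    print(n, T(n) < S(n))   # True, True, True, False, False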
Big-O notation says nothing about the speed of an algorithm for any given input; it describes how the time increases with the number of elements. If your algorithm executes in constant time, but that time is 100 billion years, then it's certainly slower than many linear, quadratic and even exponential algorithms for large ranges of inputs.
But that's probably not really what the question is asking. The question is asking whether an algorithm A1 with worst-case complexity O(N) is always faster than an algorithm A2 with worst-case complexity O(N^2), and by "faster" it probably refers to the complexity itself, in which case you only need a counter-example, e.g.:
A1 has normal complexity O(log n) but worst-case complexity O(n^2).
A2 has normal complexity O(n) and worst-case complexity O(n).
In this example, A1 is normally faster (i.e. scales better) than A2 even though it has a greater worst-case complexity.
Since the question says "always", it is enough to find a single counterexample to prove that the answer is no.
The example below compares O(n^2) and O(n log n), but the same reasoning applies to O(n^2) and O(n).
One simple example is bubble sort, where you keep comparing adjacent pairs until the array is sorted. Bubble sort is O(n^2) in the worst case.
If you use bubble sort (with the early-exit check) on an already sorted array, it finishes in O(n) and will be faster than other algorithms of time complexity O(n log n); see the sketch just below.
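A minimal sketch of that bubble sort with the usual early-exit optimization (the flag that stops after a pass with no swaps is what makes the sorted-input case linear):

def bubble_sort(arr):
    """Bubble sort with an early exit: O(n^2) worst case, O(n) on sorted input."""
    a = list(arr)
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:        # no swaps in this pass: already sorted, stop
            break
    return a

print(bubble_sort([5, 1, 4, 2, 8]))       # [1, 2, 4, 5, 8]
print(bubble_sort(list(range(10))))       # sorted input: finishes after one pass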
You're talking about worst-case complexity here, and for some algorithms the worst case never happens in a practical application.
Saying that an algorithm runs faster than another means it runs faster for all input data and for all sizes of input. So the answer to your question is obviously no, because the worst-case time complexity is not an accurate measure of the running time; it measures the order of growth of the number of operations in the worst case.
In practice, the running time depends on the implementation and is not only about this number of operations. For example, one has to care about memory allocation, cache efficiency, and spatial/temporal locality. And obviously, one of the most important things is the input data itself.
If you want examples of an algorithm running faster than another while having a higher worst-case complexity, look at all the sorting algorithms and their running times depending on the input.
You are correct in every sense: you have provided a counterexample to the statement. If this is for an exam, that alone should earn you full marks.
Still, for a better understanding of Big-O notation and complexity in general, I will share my own reasoning below. I also suggest keeping the usual growth-rate graph in mind whenever you are confused, especially the O(n) and O(n^2) curves:
Big-O notation
My own reasoning when I first learnt computational complexity is this:
Big-O notation says that for sufficiently large input, where "sufficiently large" depends on the exact formulas (in the graph, around n = 20 when comparing the O(n) and O(n^2) curves), a higher-order algorithm will always be slower than a lower-order one.
That means that for small inputs, there is no guarantee that a higher-order-complexity algorithm will run slower than a lower-order one.
But Big-O notation does tell you one thing: as the input size keeps increasing, it eventually reaches a "sufficient" size, and beyond that point a higher-order-complexity algorithm will always be slower. Such a "sufficient" size is guaranteed to exist.
Worst-case time complexity
While Big-O notation provides an upper bound on the running time of an algorithm, depending on the structure of the input and the implementation, an algorithm generally has a best-case, an average-case, and a worst-case complexity.
The famous example is a pair of sorting algorithms: QuickSort vs MergeSort!
QuickSort, with a worst case of O(n^2)
MergeSort, with a worst case of O(n lg n)
However, Quick Sort is basically always faster than Merge Sort!
So, if your question is about worst-case complexity, QuickSort vs MergeSort may be the best counterexample I can think of (because both of them are common and famous).
Therefore, combining the two parts above, no matter whether you look at input size, input structure, or algorithm implementation, the answer to your question is NO.
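If you want to see this for yourself, here is a rough, self-contained timing sketch I put together (a toy comparison on assumed random integer inputs; real results depend heavily on the implementations, the input distribution, and the machine, and a library sort will beat both):

import random
import time

def merge_sort(a):
    """Top-down merge sort: O(n log n) in every case."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged

def quick_sort(a):
    """Quicksort with a random pivot: O(n^2) worst case, O(n log n) expected."""
    if len(a) <= 1:
        return a
    pivot = a[random.randrange(len(a))]
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quick_sort(less) + equal + quick_sort(greater)

data = [random.randint(0, 10**9) for _ in range(200_000)]

for name, fn in [("merge sort", merge_sort), ("quick sort", quick_sort)]:
    start = time.perf_counter()
    result = fn(data)
    elapsed = time.perf_counter() - start
    assert result == sorted(data)
    print(f"{name}: {elapsed:.3f} s")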

The concept of every-case time complexity

This is a slide from my lecture notes. I understand that if an algorithm's best-case and worst-case complexity are the same then it has "every-case" complexity. However I do not fully understand this concept. I tried researching online but it didn't help.
Can somebody explain, with other examples, what every-case time complexity means more generally?
You found that the lower bound (the asymptotic minimum number of operations needed to add the array members) is O(n).
You found that the upper bound (the asymptotic maximum number of operations needed to add the array members) is O(n).
So all the other cases lie between O(n) and O(n), which means they must be O(n) too.
The concept is from Neapolitan's book Foundations of Algorithms (chapter 1) and relates to algorithms that perform the same steps regardless of the particular input of a given size (e.g. adding a number to every element of an array). Note that not every algorithm can be analyzed with every-case analysis. HTH
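A concrete illustration of an every-case algorithm (a hypothetical helper of my own, not from the book): the number of operations depends only on n, never on the values stored in the array.

def add_to_all(arr, value):
    """Add value to every element: always exactly len(arr) additions,
    no matter what the values are, so T(n) = n in every case."""
    result = []
    for x in arr:          # runs exactly n times for an input of size n
        result.append(x + value)
    return result

# Both calls perform exactly 5 additions, even though the contents differ.
print(add_to_all([1, 2, 3, 4, 5], 10))
print(add_to_all([9, 9, 9, 9, 9], 10))

Contrast this with linear search, whose best case (the first element matches) and worst case (no match at all) differ, so it has no single every-case complexity.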

Is a better sorting algorithm required with a time complexity of O(n)?

I am working on a program that uses just one for-loop that runs N times and sorts the N elements.
I just wanted to ask: is it worth it? I know it's going to work, because it works pretty well on paper.
It also uses comparisons.
I also wanted to know whether there are any drawbacks to radix sort.
Cheers.
Your post mentions that you are using comparisons. Comparison-based sorting algorithms need Ω(n log n) comparisons on average inputs (and in the worst case); this Ω(n log n) lower bound on comparison sorts has been proven mathematically using an information-theoretic (decision-tree) argument. You can only achieve O(n) in the best-case scenario where the input data is already sorted. There is a lot more detail on sorting algorithms on Wikipedia.
I would only implement your sorting algorithm as a challenging programming exercise. Most modern languages already provide fast sorting algorithms that have been thoroughly tested.
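Since the question also asks about radix sort, here is a minimal LSD radix sort sketch for non-negative integers (my own illustration, not a library routine). It sidesteps the comparison lower bound because it never compares keys against each other, but its usual drawbacks are that it only applies to fixed-size keys such as integers or fixed-length strings, and it needs O(n + k) extra memory per pass:

def radix_sort(nums, base=10):
    """LSD radix sort for non-negative integers; O(d * (n + base)) time,
    where d is the number of digits of the largest value."""
    if not nums:
        return []
    result = list(nums)
    exp = 1
    largest = max(result)
    while largest // exp > 0:
        # Stable counting sort (bucketing) on the digit at position exp.
        buckets = [[] for _ in range(base)]
        for x in result:
            buckets[(x // exp) % base].append(x)
        result = [x for bucket in buckets for x in bucket]
        exp *= base
    return result

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]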

Time Complexity of Algorithms (Big Oh notation)

Hey just a quick question,
I've just started looking into algorithm analysis and I'm attempting to learn Big-Oh notation.
The algorithm I'm looking at contains a quicksort (of complexity O(n log n)) to sort a dataset, and then the algorithm that operates upon the set itself has a worst-case run-time of n/10 and complexity O(n).
I believe that the overall complexity of the algorithm would just be O(n), because it's of the highest order, so it makes the complexity of the quicksort redundant. However, could someone confirm this or tell me if I'm doing something wrong?
Wrong.
Quicksort has a worst-case complexity of O(n^2). But even if you used an O(n log n) sorting algorithm, that is still more than O(n), so the sorting step dominates and the overall complexity is not O(n).
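More generally, for stages that run one after another you add their costs, and the largest term dominates:
$$T(n) = \underbrace{O(n \log n)}_{\text{sorting}} + \underbrace{O(n)}_{\text{processing}} = O(n \log n),$$
since $n = O(n \log n)$ but $n \log n \ne O(n)$. So it is the sort, not the linear pass, that determines the overall complexity.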

Resources