What does the worst-case running time of insertion sort really mean? - insertion-sort

I don't know whether this is the right platform for asking this question, so please bear with me. We all know that the worst-case running time of insertion sort is O(n^2). Say we have an array of length 5: how many comparisons will be required to sort it?
I am confused: will an array of length 5 take 5^2 = 25 comparisons? Am I right? Please correct me.
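As a sketch of why 25 is not the right count: the worst case for insertion sort on n = 5 elements is n(n-1)/2 = 10 key comparisons, since O(n^2) only bounds the growth rate up to a constant factor. An instrumented version (illustrative code, not from the question) can confirm this:

```python
def insertion_sort_comparisons(a):
    """Sort a copy of `a` and return the number of key comparisons made."""
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # one comparison of key against a[j]
            if a[j] > key:
                a[j + 1] = a[j]       # shift the larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return comparisons

# Worst case for n = 5 is a reverse-sorted array: 5*4/2 = 10 comparisons,
# not 5^2 = 25.
print(insertion_sort_comparisons([5, 4, 3, 2, 1]))  # 10
print(insertion_sort_comparisons([1, 2, 3, 4, 5]))  # 4 (best case: n-1)
```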

Related

Number of comparisons in duplicates finding algorithm [duplicate]

This question already has answers here:
Prove that an algorithm has a lower bound
(2 answers)
Closed 3 years ago.
Let's suppose we have a sorted list containing N elements. I have read in a textbook that an algorithm which determines whether this list has duplicates must perform at least n-1 comparisons (that is, n-1 is a lower bound on the number of comparisons). I don't understand this: say the 1st and 2nd elements are duplicates, then the algorithm would simply return 'yes' after performing exactly one comparison. What am I getting wrong? Is there a simple proof of the n-1 lower bound?
When talking about complexity, you never consider only one input; otherwise there would always be some O(1) algorithm returning the expected answer.
An algorithm is supposed to work whatever the given input is (according to the specifications).
So here, n-1 is a lower bound on the worst-case complexity of any comparison-based algorithm, i.e. for any such algorithm you can find one input on which it needs at least n-1 comparisons.
You are probably confusing the two terms "lower bound" and "worst/best case".
The order of the input determines the worst/best case, so the lower bound ("big omega") on the number of comparisons is n-1 for the worst case and 1 for the best case.
But generally, time complexity is quoted for the worst case.
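To make the distinction concrete, here is a minimal sketch of the obvious adjacent-pair scan (names are illustrative): it may exit after one comparison on a lucky input, but a duplicate-free list forces it through all n-1 comparisons.

```python
def has_duplicates(sorted_list):
    """Scan a sorted list for duplicates; returns (answer, comparisons used)."""
    comparisons = 0
    for i in range(len(sorted_list) - 1):
        comparisons += 1
        if sorted_list[i] == sorted_list[i + 1]:
            return True, comparisons   # best case: stop after 1 comparison
    return False, comparisons          # worst case: all n-1 pairs checked

print(has_duplicates([1, 1, 2, 3]))    # (True, 1)  -- early exit
print(has_duplicates([1, 2, 3, 4]))    # (False, 3) -- n-1 comparisons forced
```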

Confusion in calculating number of steps for various algorithms?

I've been learning data structures and algorithms from a book in which time efficiency is compared in terms of the number of steps taken by various sorting algorithms. I'm confused as to what we define as one step here.
While counting the number of steps we consider the worst-case scenario. I understood how we arrive at the number of steps for bubble sort, but for selection sort I am confused about the part where we compare every element with the current lowest value.
For example, take the worst-case array 5, 4, 3, 2, 1, and say we are in the first pass. When we start, 5 is the current lowest value. When we move to 4 and compare it to 5, we change the current lowest value to 4.
Why isn't this action of changing the current lowest value to 4 counted as a swap or an additional step? It is a step separate from the comparison step. The book I am referring to states that in the first pass the number of comparisons is n-1 but the number of swaps is only 1, even in the worst case, for an array of size n. Here it assumes that updating the current lowest value is part of the comparison step, which I think is not a valid assumption, since there can be arrays where you compare but don't need to update the current lowest value, and then your number of steps would be smaller. The point being, we can't assume that the number of steps in the first pass of selection sort in the worst case is (n-1) comparisons + 1 swap; it should be more than (n-1) + 1.
I understand that selection sort and bubble sort fall in the same time-complexity class under big-O, but the book goes on to claim that selection sort takes fewer steps than bubble sort in worst-case scenarios, and I'm doubting that. This is what the book says: https://ibb.co/dxFP0
Generally in these kinds of exercises you're interested in whether the algorithm is O(1), O(n), O(n^2) or something higher. You're generally not interested in O(1) vs O(2) or in O(3n) vs O(5n), because for sufficiently large n only the power of n matters.
To put it another way, small differences in the cost of each step, maybe factors of 2 or 3 or even 10, don't matter compared with choosing an algorithm that does a factor of n = 300 or more additional work.
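To see the distinction the book is making, here is a sketch of selection sort instrumented to count comparisons, current-lowest updates, and array swaps separately (the function name and counters are illustrative):

```python
def selection_sort_counts(a):
    """Selection sort that tallies comparisons, minimum-updates, and swaps."""
    a = list(a)
    comparisons = updates = swaps = 0
    n = len(a)
    for i in range(n - 1):
        lowest = i
        for j in range(i + 1, n):
            comparisons += 1
            if a[j] < a[lowest]:
                lowest = j             # the "extra step" the book folds in:
                updates += 1           # an index update, not an array swap
        if lowest != i:
            a[i], a[lowest] = a[lowest], a[i]
            swaps += 1
    return comparisons, updates, swaps

# On [5, 4, 3, 2, 1]: 10 comparisons, 6 index updates, but only 2 swaps --
# updating the current-lowest index is cheap bookkeeping, not an element swap.
print(selection_sort_counts([5, 4, 3, 2, 1]))  # (10, 6, 2)
```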

Bubble sort complexity O(n)

We have a series of numbers, and we can see that this series is almost sorted.
Since this series is almost sorted, does that mean the complexity is O(n)?
No. There are so many reasons it's hard to know where to start. First, O() notation is not defined for specific input examples. The complexity of an algorithm is defined for any possible input.
Aside from that, even an almost sorted list can require O(N^2) time to sort. Simply take a sorted list, swap the first and last elements, and pass that to Bubble Sort. That seems like it would meet the definition of almost sorted, but Bubble Sort will take N^2 operations to put the list in total order.
Yes, this example can be considered O(n).
There are cases when O(n) and even less than that is possible.
Examples-
Already sorted array (1 2 3 4 5 6)
An array in which only the alternate values are exchanged (2 1 4 3 6 5)
etc.
Keeping these best or exceptional cases aside, the complexity of bubble sort on a random unsorted array is O(N^2).
This is stated loosely, but the figure usually quoted is the worst-case runtime: whatever input is handed to bubble sort (for instance) can take at most on the order of n^2 operations to sort. A specific input may take anywhere from the fewest operations possible to the most operations possible (which for bubble sort is O(n^2)).
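A quick instrumented sketch (illustrative, with the common early-exit optimization) shows both behaviors on arrays of length 6: an already-sorted array needs one linear pass, while the "swap the first and last elements" example above still triggers the full quadratic amount of work.

```python
def bubble_sort_comparisons(a):
    """Bubble sort with an early exit; returns the number of comparisons made."""
    a = list(a)
    comparisons = 0
    n = len(a)
    for end in range(n - 1, 0, -1):
        swapped = False
        for i in range(end):
            comparisons += 1
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:
            break                      # a pass with no swaps: already sorted
    return comparisons

print(bubble_sort_comparisons([1, 2, 3, 4, 5, 6]))   # 5: one linear pass
print(bubble_sort_comparisons([6, 2, 3, 4, 5, 1]))   # 15: "almost sorted",
                                                     # yet full n(n-1)/2 work
```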

Upper bound and lower bound of sorting algorithm

This is a very simple question but I'm struggling too much to understand the concept completely.
I'm trying to understand the difference between the following statements:
There exists an algorithm which sorts an array of n numbers in O(n) in the best case.
Every algorithm sorts an array of n numbers in O(n) in the best case.
There exists an algorithm which sorts an array of n numbers in Omega(n) in the best case.
Every algorithm sorts an array of n numbers in Omega(n) in the best case.
I will first explain what is driving me crazy. I'm not sure regarding 1 and 3, but I know that for one of them the answer is established just by exhibiting one case, while for the other one the answer requires examining all possible inputs. Therefore I know one of them must be true just by noting that the array may already be sorted, but I can't tell which.
My teacher always told me to think about it like we are finding the tallest guy in the class: for one of these options (1, 3) it's enough to point at him, and there is no reason to examine the whole class.
I do know that if we were examining the worst case then none of these statements could be true, because the best comparison-based sorting algorithm, without any assumptions or additional memory, is Omega(n log n).
IMPORTANT NOTE: I'm not looking for a solution (an algorithm which is able to do the matching sort) - only trying to understand the concept a little better.
Thank you!
For 1+3, ask yourself: do you know an algorithm that can sort an array in Theta(n) in the best case? If the answer is yes, then both 1 and 3 are true, since Theta(n) is O(n) [intersection] Omega(n); thus if you have such an algorithm (one that runs in Theta(n) in the best case), both 1 and 3 are correct.
Hint: optimized bubble sort.
For 2, ask yourself: does EVERY algorithm sort an array of numbers in O(n) in the best case? Do you know an algorithm whose best-case and worst-case time complexities are identical? What happens to the mentioned bubble sort if you take all the optimizations off?
For 4, ask yourself: do you need to read all the elements in order to ensure the array is sorted? If you do, Omega(n) is a definite lower bound; you cannot do better than that.
Good Luck!
The difference, obviously, is in the terms "O" and "Omega". One says "growing no faster than", the other says "growing no slower than".
Make sure that you understand the difference between those terms, and you'll see the difference in the sentences.
1 and 3 state completely different things, just as 2 and 4 do.
Look at those (those are NOT the same!):
1~ there exists an algorithm that for 10 items doesn't take more than 30 in the best case.
3~ there exists an algorithm that for 10 items doesn't take less than 30 in the best case.
2~ every algorithm, for 10 items, takes no more than 30 in the best case.
4~ every algorithm, for 10 items, takes no less than 30 in the best case.
Do you sense the difference now? With O/Omega the difference is similar, but the subject of investigation differs. The examples above speak of performance at some specific point/case, while O/Omega notation tells you about performance relative to the size of the data, but only once the data is "large enough", be it three items or millions, and it drops constant factors:
the function 1000000*n is O(n)
the function 0.000001*n*n is O(n^2)
For small amounts of data, the second is obviously far better than the first. But as the quantity of data rises, the first soon becomes much better!
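Plugging numbers in shows the crossover directly (a sketch; I'm taking the quadratic constant to be 0.000001, as the example presumably intends):

```python
f = lambda n: 1_000_000 * n      # O(n), but with a huge constant factor
g = lambda n: 0.000001 * n * n   # O(n^2), but with a tiny constant factor

# g wins for small n, but once n exceeds 10^12 the linear function wins:
# 1e6 * n < 1e-6 * n^2  exactly when  n > 1e12.
for n in (10, 10**6, 10**13):
    print(n, "f faster" if f(n) < g(n) else "g faster or equal")
```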
Rewriting the above examples into "more proper" terms, that are more similar to your original sentences:
1~ there exists an algorithm that, for more than N items, doesn't take more than X*N in the best case.
3~ there exists an algorithm that, for more than N items, doesn't take less than X*N in the best case.
2~ every algorithm, for more than N items, takes no more than X*N in the best case.
4~ every algorithm, for more than N items, takes no less than X*N in the best case.
I hope that this helps you with "seeing"/"feeling" the difference!

How to find the best way to sort 8 elements and prove that there is no better way (no more efficient way)? [duplicate]

This question already has answers here:
Closed 11 years ago.
Possible Duplicate:
Fastest sort of fixed length 6 int array
The task is to find a way to sort 8 random numbers with the least number of comparisons (not operations). I expect that I have to use qSort (divide the array in half, sort, then merge, and so on... it must be quicksort, I think). For 8 elements the number of comparisons is 17, and I have to prove that there is no way to sort a random array with 16 (one fewer) comparisons.
Thanks
Any case, so the worst case must be covered too. I'm in my first year of studies, so I don't think we have to do anything extraordinary (I study math, not IT). And the kind of sort I use is mergesort! Thanks in advance.
Merge-insertion sort requires 16 comparisons for n=8, which is the minimum worst-case number of comparisons (plain top-down mergesort needs 17).
http://oeis.org/A001768 (number of comparisons for mergesort)
http://oeis.org/A036604 (minimum number of comparisons in general)
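The 16 in the second sequence follows from the standard information-theoretic argument, which can be checked directly (a sketch): each comparison has two outcomes, so k comparisons can distinguish at most 2^k orderings, and 8 elements have 8! possible orderings.

```python
import math

# 8 elements have 8! = 40320 possible orderings; a comparison sort must
# distinguish all of them, and k comparisons distinguish at most 2^k,
# so at least ceil(log2(8!)) comparisons are needed in the worst case.
lower_bound = math.ceil(math.log2(math.factorial(8)))
print(lower_bound)  # 16
```

This also shows why 15 comparisons cannot suffice: 2^15 = 32768 < 40320.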
see also
Sorting an array with minimal number of comparisons
EDIT: without assuming "random numbers" are range restricted integers. If you can make assumptions about the range of values, then there are alternatives.
Radix sort requires no comparisons at all :)
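For instance, counting sort, the single-digit building block of radix sort, sorts range-restricted non-negative integers without ever comparing two elements against each other (a sketch under that range assumption; the function name is illustrative):

```python
def counting_sort(values, max_value):
    """Comparison-free sort for non-negative integers up to max_value."""
    counts = [0] * (max_value + 1)
    for v in values:          # tally each value; elements are never
        counts[v] += 1        # compared with one another, only indexed
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)
    return result

print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6], 9))  # [1, 1, 2, 3, 4, 5, 6, 9]
```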
