Sorting algorithm best case/worst case scenario

This is a practice exam question I'm working on. I have a general idea of what the answer is but would like some clarification.
The following is a sorting algorithm for n integers in an array. In step 1, you iterate through the array, and compare each pair of adjacent integers and swap each pair if they are in the wrong order. In step 2, you repeat step 1 as many times as necessary until there is an iteration where no swaps are made (in which case the list is sorted and you can stop).
What is the worst case complexity of this algorithm? What is the best case complexity of this algorithm?

Basically the algorithm presented here is a bubble sort.
The worst case complexity here is O(n^2).
The best case complexity is O(n).
Here is the explanation:
The best case situation here would be an already sorted array: all you need is n comparisons (to be precise, n-1), so the complexity is O(n).
The worst case situation is a reverse-ordered array.
To better understand why it's O(n^2), consider the last element of a reverse-ordered array, which is in fact the smallest element: to make the array sorted you need to get that element to the first index. Each pass of the algorithm explained in the question moves it only one index toward its actual position (the first index here), and each pass requires O(n) comparisons, hence O(n^2) comparisons to move it to its actual position.
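
For concreteness, here is a minimal Python sketch of the algorithm described in the question (the function name and variable names are mine, not from the question):

def bubble_sort(a):
    # Step 2: repeat until a full pass makes no swaps.
    swapped = True
    while swapped:
        swapped = False
        # Step 1: one pass comparing each pair of adjacent integers.
        for i in range(len(a) - 1):
            if a[i] > a[i + 1]:  # pair in the wrong order
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True   # another pass will be needed

On an already sorted input the while loop runs exactly once; on a reverse-ordered input it runs n times.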

In the best case, no swapping will be required and a single pass of the array would suffice. So the complexity is O(n).
In the worst case, the elements of the array could be in reverse order. So the first iteration requires (n-1) swaps, the next one (n-2), and so on...
So it would lead to O(n^2) complexity.

As others have said, this is bubble sort. But if you are measuring complexity in terms of comparisons, you can easily be more precise than big-O.
In the best case, you need only compare n-1 pairs to verify they're all in the right order.
In the worst case (a reverse-ordered array), the last element is the one that should be in the first position, so n-1 passes will be needed, each advancing that element one more position toward the front of the list, plus one final pass to confirm that no swaps are made. Each pass requires n-1 comparisons. In all, then, n(n-1) comparisons are needed.
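
These exact counts are easy to check empirically with a small counting harness (the instrumentation is mine, not part of the answer):

def count_comparisons(a):
    a = list(a)
    comparisons = 0
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(a) - 1):
            comparisons += 1
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
    return comparisons

n = 10
print(count_comparisons(range(n)))         # sorted input: n-1 = 9
print(count_comparisons(range(n, 0, -1)))  # reversed input: n(n-1) = 90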

Related

Best Case For Merge Sort

I've made a program that counts the cost of the mergesort algorithm for different values of n. I keep a cost variable and increment it every time a loop iteration or condition check occurs. First I sort an array; then I give that sorted array as input to merge sort again; and in the third case I reverse the sorted array so it should be the worst case. But for all three cases I get the same cost, so what are the best and worst cases for mergesort?
The cost of mergesort implemented classically, either as a top-down recursive function or as a bottom-up iteration with a small local array of pointers, is the same: O(N log N). The number of comparisons will vary depending on the actual contents of the array, but by at most a factor of 2.
You can improve this algorithm at a linear cost by adding an initial comparison between the last element of the left slice and the first element of the right slice in the merge phase. If the comparison yields <= then you can skip the merge phase for this pair of slices.
With this modification, a fully sorted array will sort much faster, with a linear complexity, making it the best case, and a partially sorted array will behave better as well.
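
Here is one possible sketch of that modification in Python (a plain top-down merge sort with the extra comparison added; the slicing is written for clarity, not speed):

def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # The added comparison: if the two halves are already in order
    # end to end, skip the merge phase entirely.
    if left[-1] <= right[0]:
        return left + right
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

With this check, each merge call on a fully sorted input costs a single comparison, so the total number of comparisons becomes linear.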

Comparison-based algorithm that pairs the largest element with the smallest one in linear time

Given an array of integers, I want to design a comparison-based algorithm that pairs the largest element with the smallest one, the second largest with the second smallest, and so on. Obviously, this is easy if I sort the array, but I want to do it in O(n) time. How can I possibly solve this problem?
Well, I can prove that it does not exist.
Proof by contradiction: suppose there were such an algorithm.
Then we could get an array of k-th min / k-th max pairs in O(n).
From those pairs we could recover the sorted array by taking all the mins in order and then all the maxes in reverse order, so we could get the original array sorted in O(n) steps.
So we would have a comparison-based sorting algorithm that sorts in O(n).
Yet it can be proven that a comparison-based sorting algorithm must take at least n log n steps (many proofs online, e.g. https://www.geeksforgeeks.org/lower-bound-on-comparison-based-sorting-algorithms/).
Hence we have a contradiction, so such an algorithm does not exist.
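
The reduction in that proof is easy to make concrete. Assuming a hypothetical pair_min_max routine returned the pairs in O(n), recovering the sorted array would cost only O(n) more (pair_min_max is assumed here; only the reconstruction below is real code, and n is taken to be even for simplicity):

def sorted_from_pairs(pairs):
    # pairs[k] = (k-th smallest, k-th largest)
    mins = [lo for lo, hi in pairs]
    maxs = [hi for lo, hi in pairs]
    # All the mins in order, then all the maxes in reverse order.
    return mins + maxs[::-1]

print(sorted_from_pairs([(1, 4), (2, 3)]))  # [1, 2, 3, 4]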

Does sorting time of n numbers depend on a permutation of the numbers?

Consider this problem:
A comparison-based sorting algorithm sorts an array with n items. For what fraction of the n! permutations can the number of comparisons be cn, where c is a constant?
I know the best time complexity for sorting an array with arbitrary items is O(n log n) and it doesn't depend on any order, right? So there is no fraction that leads to cn comparisons. Please guide me if I am wrong.
This depends on the sorting algorithm you use.
Optimized Bubble Sort, for example, compares all neighboring elements of an array and swaps them when the left element is larger than the right one. This is repeated until no swaps are performed.
When you give Bubble Sort a sorted array it won't perform any swaps in the first iteration and thus sorts in O(n).
On the other hand, Heapsort will take O(n log n) independent of the order of the input.
Edit:
Answering your question for a given sorting algorithm might be non-trivial. Only one out of n! permutations is sorted (assuming no duplicates for simplicity). However, for the example of bubble sort you could, starting from the sorted array, swap each pair of neighboring elements, as shown below. This input will take bubble sort two iterations, which is also O(n).
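
Here is that construction, together with a pass counter to confirm the two-iteration behaviour (the counter is my own instrumentation, not part of the original answer):

def passes_needed(a):
    a = list(a)
    passes = 0
    swapped = True
    while swapped:
        swapped = False
        passes += 1
        for i in range(len(a) - 1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
    return passes

# Start from the sorted array and swap each pair of neighboring elements.
n = 10
a = list(range(n))
for i in range(0, n - 1, 2):
    a[i], a[i + 1] = a[i + 1], a[i]
print(a)                 # [1, 0, 3, 2, 5, 4, 7, 6, 9, 8]
print(passes_needed(a))  # 2: one pass fixes every pair, one pass confirms no swaps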

What is the cut point where "quick sort" transforms from n log n to n^2?

I know the worst case of the algorithm, which is when the elements are already sorted or when all the elements are the same, but I want to know the point at which the algorithm moves from a complexity of n log n to n^2.
It depends on how we choose the pivot.
One view says it is when all the elements are already sorted. Well, that is not 100% right: in this condition, the complexity becomes N^2 only if we choose the first element as the pivot.
Since we then have the recurrence
T(N) = T(N-1) + cN for N > 1,
a bit of basic math gives
T(N) = O(N^2).
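
Unrolling the recurrence makes the quadratic explicit:

T(N) = T(N-1) + cN
     = T(N-2) + c(N-1) + cN
     = ...
     = T(1) + c(2 + 3 + ... + N)
     = T(1) + c(N(N+1)/2 - 1)
     = O(N^2)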
As mentioned above, it depends on how we choose the pivot. Although some textbooks mainly choose the first element as the pivot, that is not recommended.
One popular method is median-of-three partitioning: it chooses the median of a[left], a[right] and a[(left+right)/2] (a sketch follows the note below).
It will perform worst, i.e. O(n^2), in the following cases:
If the list is already sorted and the pivot is the first element.
If the list is sorted in reverse order and the pivot is the last element.
If all elements in the list are the same. In this case the pivot selection does not matter.
Note: an already sorted list cannot be the worst case if the pivot is selected as the median.
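
A small sketch of median-of-three pivot selection (the function name is illustrative; tie-breaking details vary between textbooks):

def median_of_three(a, left, right):
    mid = (left + right) // 2
    # Sort the three candidate (value, index) pairs and take the middle one.
    candidates = sorted([(a[left], left), (a[mid], mid), (a[right], right)])
    return candidates[1][1]  # index of the median-valued candidate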
The worst case time for quick sort occurs when the chosen pivot does not divide the array. For example, if we choose the first element as the pivot every time and the array is already sorted, then the array is not divided at all. Hence the complexity is O(n^2).
To avoid this we randomize the index for the pivot. Assuming that the pivot splits the array into two equal-sized parts, we have a complexity of O(n log n).
For exact analysis see Formal analysis section in https://en.wikipedia.org/wiki/Quicksort
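
For illustration, here is a minimal randomized-pivot quicksort sketch (not in place, written for clarity rather than speed):

import random

def quicksort(a):
    if len(a) <= 1:
        return a
    pivot = random.choice(a)  # random pivot: sorted input is no longer a bad case
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]

Grouping the elements equal to the pivot into their own partition also neutralizes the all-elements-equal case mentioned above.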

A basic confusion on quicksort

Suppose we choose the pivot as the first element of an array in quicksort. Now the best/worst case complexity is O(n^2), whereas in the average case it is O(n log n). Isn't it weird (the best case complexity being greater than the average case complexity)?
The best case complexity is O(n log n), the same as the average case. The worst case is O(n^2). Check http://en.wikipedia.org/wiki/Quick_sort.
While other algorithms like Merge Sort and Heap Sort have a better worst case complexity (O(n log n)), usually Quick Sort is faster - this is why it's the most commonly used sorting algorithm. An interesting answer about this can be found at Why is quicksort better than mergesort?.
The best case of quicksort, O(n log n), is when the chosen pivot splits the subarray into two roughly equally sized parts in every iteration.
The worst case of quicksort is when the chosen pivot is the smallest (or largest) element in the subarray, so that the array is split into two parts where one part consists of one element (the pivot) and the other part of all the other elements of the subarray.
So choosing the first element as the pivot in an already sorted array will get you O(n^2). ;)
Therefore it is important to choose a good pivot, for example by using the median of the first, middle and last elements of the subarray.
