I know that bubble sort has average time complexity O(n^2). Can anyone explain how to calculate this complexity? I usually just find people saying this is the average complexity, but not why. (In other words, what is the average complexity for a random permutation of the numbers 1 to n?)
If the complexity is O(n^2), that suggests the algorithm may have to perform some operation on every pair of elements of the input.
First of all, note that Bubble Sort compares adjacent items and swaps them over if they are out of order.
In the best case Bubble Sort is O(n). This is when the list is already sorted. It can pass over the list once and only needs to compare each item once with its neighbour before establishing that it is already sorted.
O(n^2) is the worst case for Bubble Sort. This occurs when the input list is reverse sorted. Think about how the algorithm moves the first item (the largest) from index 0 to its sorted position at index n-1: it compares that item to every other item once (n-1 comparisons) as it bubbles to the end. It then repeats this process for each remaining item, hence O(n^2). The average case is O(n^2) as well: a random permutation of 1 to n contains n(n-1)/4 inversions on average, and bubble sort performs exactly one swap per inversion.
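A minimal sketch of bubble sort (Python) makes the best case visible: the swapped flag lets it stop after one clean pass over already sorted input.

```python
def bubble_sort(items):
    """In-place bubble sort: best case O(n), worst and average case O(n^2)."""
    n = len(items)
    for i in range(n - 1):
        swapped = False
        # Each pass bubbles the largest remaining element to the end.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:
            # No swaps means the list is sorted: the O(n) best case.
            break
    return items
```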
Suppose that we are given an array A already sorted in increasing order. Which is asymptotically faster, insertion-sort or merge-sort?
Likewise, suppose we are given an array B sorted in decreasing order, so it needs to be reversed. Which is now asymptotically faster?
I'm having a hard time grasping this. I already know that insertion sort is better for smaller data sets and merge sort is better for larger data sets, but I'm not sure why one is faster than the other depending on whether the data set is already sorted.
Speaking about the worst case, merge sort is faster, with O(n log n) against O(n^2) for insertion sort. However, another characteristic of an algorithm is its best-case complexity, which is Omega(n) for insertion sort against Omega(n log n) for merge sort.
The latter can be explained by looking at the algorithms at hand:
Merge sort works by dividing the array in half (if possible), recursively sorting those halves, and merging them. Note how this does not depend on the actual order of the elements: we make the recursive calls regardless of whether the part we're sorting is already in order (unless it's the base case).
Insertion sort takes each element in turn and shifts it to the left until it is in the desired order. If no element is out of place, no shifting occurs at all and the algorithm finishes after doing only O(n) comparisons.
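A minimal insertion sort sketch (Python) to make that concrete; on already sorted input the inner while loop never executes, leaving only the O(n) outer scan:

```python
def insertion_sort(items):
    """In-place insertion sort: best case O(n), worst case O(n^2)."""
    for i in range(1, len(items)):
        current = items[i]
        j = i - 1
        # Shift larger elements one slot right until current fits.
        while j >= 0 and items[j] > current:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = current
    return items
```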
However, merge sort is quite fixable with respect to its best-case running time: you can check whether the part at hand is already sorted before recursing, as sketched below. This will not change the worst-case complexity of O(n log n) (though the constant roughly doubles), but it will bring the best-case complexity to Omega(n).
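A rough sketch of that fix (Python, one of several ways to do it): a linear is-sorted check before recursing gives the O(n) best case while leaving the O(n log n) worst case intact.

```python
def merge_sort(items):
    """Merge sort with an is-sorted shortcut, for an O(n) best case."""
    if len(items) <= 1:
        return items
    # The fix: one extra O(n) pass that skips recursion on sorted input.
    if all(items[i] <= items[i + 1] for i in range(len(items) - 1)):
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Standard merge of the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```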
In the case where the data is sorted in reverse order, insertion sort's worst case shows itself, since we have to move each element (in iteration order) from its position all the way to the front, doing n(n-1)/2 shifts, which is O(n^2). Merge sort, however, still takes O(n log n) because of its recursive approach.
The usual Θ(n^2) implementation of insertion sort to sort an array uses linear search to identify the position where an element is to be inserted into the already sorted part of the array. If, instead, we use binary search to identify the position, the worst-case running time will then
A) remain Θ(n^2)
B) become Θ(n (log n)^2)
C) become Θ(n log n)
D) become Θ(n)
This is my first question on Stack Overflow, so please forgive any mistakes.
First of all, the question is about insertion sort, not quicksort as you display above.
The correct answer is A: it remains Θ(n^2). Even though binary search can locate the position of the element in the already sorted part of the array in O(log n) comparisons, you still have to move every element greater than it one position to the right, which costs Θ(k) moves when the original array is ordered from greatest to least, where k is the initial index of the element being added to the sorted part. Summing over all elements, the total running time is still Θ(n^2).
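Here is a hedged sketch of that variant (Python, using the standard library's bisect for the binary search); the slice assignment is the shifting step that keeps the total at Θ(n^2):

```python
import bisect

def binary_insertion_sort(items):
    """Insertion sort with binary search: O(n log n) comparisons,
    but still Theta(n^2) element moves in the worst case."""
    for i in range(1, len(items)):
        current = items[i]
        # O(log i) comparisons to find the insertion point...
        pos = bisect.bisect_right(items, current, 0, i)
        # ...but Theta(i - pos) moves to open up the slot.
        items[pos + 1:i + 1] = items[pos:i]
        items[pos] = current
    return items
```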
Question aside: the average-case time complexity of randomized quicksort is O(n log n), and it can be proved if you have a mathematical background in expected values (probability). You can find more about it in the quicksort section of the book Introduction to Algorithms (Cormen).
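For reference, a minimal randomized quicksort sketch (Python, not an optimized in-place version); the random pivot is what makes the O(n log n) bound hold in expectation:

```python
import random

def randomized_quicksort(arr):
    """Quicksort with a uniformly random pivot: expected O(n log n)."""
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)
    # Three-way partition around the pivot.
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)
```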
I would like to know whether there exists an algorithm to find the median of an array of odd length. Obviously one could just sort the array and take the middle element, but since we are only interested in the median, ideally one could do better in terms of time complexity.
If no such algorithm exists, any suggestions regarding how to go about developing such an algorithm would be great.
Thanks
This is solved by a selection algorithm, and can be done in O(n) time. Quickselect, or its refinement introselect, are popular methods.
A very brief summary of quickselect: run quicksort, but rather than recursing into both sides of each partition, recurse only into the side that contains the element you're looking for, which you can determine by counting how many elements fall on each side of the pivot.
C++, for example, actually has this as a standard library function: nth_element.
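As a rough sketch of the idea (Python, random pivot; expected O(n), with the O(n^2) worst case that introselect exists to avoid):

```python
import random

def quickselect(arr, k):
    """Return the k-th smallest element (0-based) in expected O(n) time."""
    pivot = random.choice(arr)
    lows = [x for x in arr if x < pivot]
    pivots = [x for x in arr if x == pivot]
    highs = [x for x in arr if x > pivot]
    if k < len(lows):
        return quickselect(lows, k)  # Answer lies in the low partition.
    if k < len(lows) + len(pivots):
        return pivot                 # Answer is the pivot itself.
    return quickselect(highs, k - len(lows) - len(pivots))

def median(arr):
    """Median of an odd-length array without fully sorting it."""
    return quickselect(arr, len(arr) // 2)
```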
You can use a selection algorithm, which finds the k-th smallest element of an array, with k equal to half the size of the array.
For unstructured data, this runs in O(n) time.
But always keep in mind that theoretical complexity is not everything!
Read also this question.
Yes, such an algorithm exists. The problem you are talking about is finding the k-th smallest element, where k is half the array length plus one. The median-of-medians algorithm solves it in worst-case O(n) time.
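In case the name is unfamiliar, here is a hedged sketch of median-of-medians selection (Python); taking the pivot as the median of the group-of-five medians guarantees a split balanced enough for worst-case O(n):

```python
def median_of_medians(arr, k):
    """Return the k-th smallest element (0-based) in worst-case O(n) time."""
    if len(arr) <= 5:
        return sorted(arr)[k]
    # Median of each group of five elements.
    groups = [arr[i:i + 5] for i in range(0, len(arr), 5)]
    medians = [sorted(g)[len(g) // 2] for g in groups]
    # Recurse on the medians to pick a provably good pivot.
    pivot = median_of_medians(medians, len(medians) // 2)
    lows = [x for x in arr if x < pivot]
    pivots = [x for x in arr if x == pivot]
    highs = [x for x in arr if x > pivot]
    if k < len(lows):
        return median_of_medians(lows, k)
    if k < len(lows) + len(pivots):
        return pivot
    return median_of_medians(highs, k - len(lows) - len(pivots))
```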
Let the length of a list be n, and the number of inversions be d. Why does insertion sort run in O(n+d) time and why does bubble sort not?
When I consider this problem I am thinking of the worst-case scenario. Since the worst case for the number of inversions is n(n-1)/2, both bubble sort and insertion sort run in the same time, and I don't know how to answer the question since they look the same to me. Can someone help me with this?
For bubble sort, if the last element needs to get to the first position (n-1 inversions), you need to pass over the entire array about n times, each pass moving that element only one position forward, so you get roughly n^2 steps: O(n^2) regardless of the value of d.
The same setup in insertion sort takes only about n + n steps to get everything sorted, which is O(n + d) with d = n-1 here. In fact, d is exactly the total number of element shifts insertion sort does to get the list sorted.
You went wrong when you assumed the worst-case value of d is n(n-1)/2. While that is true, if you want to express the complexity in terms of d you can't replace d with its worst-case value, unless you're OK with a looser upper bound. The O(n + d) bound is more informative precisely because it stays small when d is small, as the sketch below illustrates.
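A sketch making that concrete (Python, hypothetical function name): each shift in the inner loop removes exactly one inversion, so the shift count equals d and the total work is O(n + d).

```python
def insertion_sort_count_shifts(items):
    """Insertion sort returning the number of shifts, which equals d."""
    shifts = 0
    for i in range(1, len(items)):
        current = items[i]
        j = i - 1
        while j >= 0 and items[j] > current:
            items[j + 1] = items[j]  # Each shift removes exactly one inversion.
            shifts += 1
            j -= 1
        items[j + 1] = current
    return shifts

# Last element belongs first: d = n-1, so about n + d steps total.
print(insertion_sort_count_shifts([2, 3, 4, 5, 1]))  # prints 4
```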
This is a practice exam question I'm working on; I have a general idea of what the answer is but would like some clarification.
The following is a sorting algorithm for n integers in an array. In step 1, you iterate through the array and compare each pair of adjacent integers, swapping each pair if they are in the wrong order. In step 2, you repeat step 1 as many times as necessary until there is an iteration where no swaps are made (in which case the list is sorted and you can stop).
What is the worst case complexity of this algorithm? What is the best case complexity of this algorithm?
Basically the algorithm presented here is a bubble sort.
The worst case complexity here is O(n^2).
The best case complexity is O(n).
Here is the explanation:
The best case here is an already sorted array: all you need is one pass of n comparisons (to be precise, it's n-1), so the complexity is O(n).
The worst case is a reverse-ordered array.
To better understand why that is O(n^2), consider just the last element of a reverse-ordered array, which is the smallest. To make the array sorted, that element has to reach index 0. Each iteration of the algorithm described here moves it only one position toward its final place, at a cost of O(n) comparisons per iteration, so it needs O(n) iterations and hence O(n^2) comparisons in total.
In the best case, no swapping will be required and a single pass of the array would suffice. So the complexity is O(n).
In the worst case, the elements of the array could be in reverse order. So the first pass requires (n-1) swaps, the next one (n-2), and so on...
So it would lead to O(n^2) complexity.
As others have said, this is bubble sort. But if you are measuring complexity in terms of comparisons, you can easily be more precise than big-O.
In the best case, you need only compare n-1 pairs to verify they're all in the right order.
In the worst case, the last element is the one that should be in the first position; each pass advances it only one position toward the front, so n-1 passes are needed to sort, plus one final swap-free pass to detect that you're done. Each pass requires n-1 comparisons. In all, then, n(n-1) comparisons are needed.
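A quick instrumented check (Python, hypothetical helper) confirms both counts, including the final swap-free pass:

```python
def count_comparisons(items):
    """Count comparisons for the repeat-passes-until-no-swaps scheme."""
    items = list(items)
    comparisons = 0
    while True:
        swapped = False
        for j in range(len(items) - 1):
            comparisons += 1
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:
            break
    return comparisons

n = 6
print(count_comparisons(range(n)))         # sorted input: n-1 = 5
print(count_comparisons(range(n, 0, -1)))  # reversed input: n*(n-1) = 30
```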