According to here:
Use insertion sort...for invocations on small arrays (i.e. where
the length is less than a threshold k determined experimentally). This
can be implemented by simply stopping the recursion when less than k
elements are left, leaving the entire array k-sorted: each element
will be at most k positions away from its final position. Then, a
single insertion sort pass finishes the sort in O(k×n) time.
I'm not sure I'm understanding correctly. One way to do it that involves calling insertion sort multiple times is
quicksort(A, i, k):
    if i + threshold < k:
        p := partition(A, i, k)
        quicksort(A, i, p - 1)
        quicksort(A, p + 1, k)
    else
        insertionsort(A, i, k)
but this would call insertionsort() for each small subarray. It sounds like insertion sort could be called only once, but I sort of don't understand this, because no matter how many times insertion sort is called, it's still generally slower than quicksort.
Is the idea like this?
sort(A):
    quicksort(A, 0, A.length - 1)
    insertionsort(A, 0, A.length - 1)
So basically call insertion sort once at the very end? How do you know it would take only one pass and not O(n) passes?
Yes, your second pseudocode is correct. The usual analysis of insertion sort is that the outer loop inserts each element in turn (O(n) iterations), and the inner loop moves that element to its correct place (O(n) iterations), for a total of O(n^2). However, since your second quicksort leaves an array that can be sorted by permuting elements within blocks of size at most threshold, each element moves at most threshold positions, and the new analysis is O(n*threshold), which is equivalent to running insertion sort on each block separately.
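For concreteness, here's a minimal Python sketch of that second scheme; the threshold value and the Lomuto partition are just placeholder choices of mine, any partitioning scheme works the same way:

import random

THRESHOLD = 16   # assumed cutoff; in practice the best value is found experimentally

def partition(a, lo, hi):
    # Lomuto partition around a randomly chosen pivot.
    p = random.randint(lo, hi)
    a[p], a[hi] = a[hi], a[p]
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def quicksort(a, lo, hi):
    # Stop recursing on small blocks and leave them unsorted.
    if hi - lo + 1 > THRESHOLD:
        p = partition(a, lo, hi)
        quicksort(a, lo, p - 1)
        quicksort(a, p + 1, hi)

def insertionsort(a, lo, hi):
    for i in range(lo + 1, hi + 1):
        x = a[i]
        j = i - 1
        while j >= lo and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x

def sort(a):
    quicksort(a, 0, len(a) - 1)       # leaves every element within THRESHOLD of its final spot
    insertionsort(a, 0, len(a) - 1)   # one cheap finishing pass over the whole array

Because quicksort never touches the small blocks, each element ends up at most THRESHOLD positions from where it belongs, so the inner loop of the final insertion sort never runs more than THRESHOLD times.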
Described by Bentley in the 1999 edition of Programming Pearls, this idea (per Wikipedia) avoids the overhead of starting and stopping the insertion sort loop many times (in essence, we have a natural sentinel value for the insertion loop already in the array). IMHO, it's a cute idea but not clearly still a good one given how different the performance characteristics of commodity hardware are now (specifically, the final insertion sort requires another pass over the data, which has gotten relatively more expensive, and the cost of starting the loop (moving some values in registers) has gotten relatively less expensive).
Given an array of integers, I need to find the k nearest integers for every element in the array (not counting the element itself).
Example:
k = 2
Array: [1, 2, 4, 7]
Answer: [4, 3, 5, 8]
I've come up with the following algorithm. (There can be inaccuracies with indices, but I hope the main idea is clear).
Sort the array.
Suppose we have the answer for the ith element, i.e. a segment L before i and a segment R after i, such that |L|+|R| = k.
Considering the answer for the (i+1)th element, we can take all elements from R (except a[i+1] itself), because in a sorted array they're even closer to a[i+1]. Then I find the other k-|R|+1 elements using two pointers moving in opposite directions: l starting from i, and r starting from i+|R|.
I don't like that I'm scanning previous elements using that l pointer. I suppose in a worst case scenario this algorithm would have O(n^2) time complexity. How can I improve it?
Your algorithm just needs a small tweak, that's it; it doesn't have O(n²) behavior. As described, it's O(max(k × n, n log n)), because, if I'm reading you correctly, you think you're doing an O(k) scan outwards for each element that does not meaningfully benefit from the state of the window you used for the prior element. A small tweak drops the k × n term to n, leaving it O(cost of the sorting algorithm), e.g. O(n log n) for good general-purpose sorts, or some lower big-O greater than or equal to O(n) for special-purpose sorts like counting sort. And really, even without that tweak, your existing code would already behave that way, you just don't see it (the tweak is more about how you think about it; the lack of the tweak just adds an additional O(n) step, and going from n to 2n work doesn't change the big-O).
Your lower bound is O(n log n) if you use general-purpose sorting. If the inputs fall in a restricted range, you could get O(n + k) computational work, with O(k) auxiliary storage, by using a special-purpose sort like counting sort (where k here is the size of the range), but we'll assume the integers can run from -inf to inf, so that O(n + k) is worse than O(n log n) general-purpose sorting. Either way, that's a lower bound: if the rest of your algorithm is O(n), and it has to be at least that high since it must traverse the input element by element, then your sorting algorithm determines your overall work.
But while the way you describe it makes it sound like the per-element work for window adjustment is O(k), in practice it's doing only about n - k total work to perform the window shifts for the whole sequence, with that -k spread over the whole sequence. Your auxiliary pointers never need to scan backwards, because each time you move forward, your old value is closer to the new value than the furthest-left value was, by definition. So the only possibilities are:
1. The window for the prior value included the next value: replace the new value with the prior value, and then check:
   a. If some number of values above the old window are closer than the values at the bottom of the old window, the window slides forward (up to k times), or
   b. If the value just above the old window is further away than the bottom value in the window, the window doesn't shift (aside from the tweak to include the prior value in place of the next value).
2. The window for the prior value was entirely to its left: replace the bottom of the window with the prior value unconditionally (instead of replacing the new value with the prior value as in #1, since the new value wasn't in the window), then follow the same rules as case #1 (either the window remains unmoved beyond the swap, or it shifts right up to k times).
So the window sliding work can be as much as k at any given step (say, if k is 2 and you're at the value 8 in [6, 7, 8, 100, 101, 102], then when you shift to 100, your preliminary window from 6-7 unconditionally becomes 7-8, then you conditionally shift twice, first to 8-101, then to 101-102 where it stops). But since the only options at any given step are:
The window doesn't move (effectively moving backwards by one relative to advancing value's position), or
The window moves up to k + 1 forward
this means that each time the window moves forward some number of steps s, it must not have moved at all for s prior values (to fall far enough behind that it could move forward that far). So the peak per-element work is O(k), but the amortized per-element work is O(1); in fact it's slightly less than one move per element, because over the course of the whole traversal you go from a window that is, by definition, k elements to the right of the current value to one that is k elements to its left, and those leftward "moves" are really just staying put, so the window shifts slightly less than once per element.
Your original plan is effectively the same; you'd just end up doing an unnecessary check at each stage to see whether the window could move backwards (it never can). Since that single check is fixed cost, it's O(1) per element, imposing no multiplier on the O(n) cost of processing the whole sequence.
In short: your algorithm is already big-O optimal at O(n) for the work done post-sort. If you use a general-purpose sorting algorithm, the O(n log n) work it does dominates everything else; a special-purpose algorithm like counting sort would be the only way to lower the big-O of the whole process, sort included, and the minimum cost would still be O(n) even if you had a magical "sorts for free" tool.
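For what it's worth, here's a small Python sketch of the post-sort sliding window; the function name is mine, and following the question's example it reports, for each element in sorted order, the sum of distances to its k nearest neighbours (it assumes 0 < k < n):

def k_nearest_sums(nums, k):
    # Assumes 0 < k < len(nums). Results are reported in sorted order of the input.
    a = sorted(nums)                          # this O(n log n) step dominates everything below
    n = len(a)
    prefix = [0] * (n + 1)                    # prefix sums give O(1) range sums
    for i, v in enumerate(a):
        prefix[i + 1] = prefix[i] + v

    res = []
    l = 0                                     # window is a[l .. l+k], always containing a[i]
    for i in range(n):
        # The window only ever slides right, so total sliding work over the loop is O(n).
        while l + k < i or (l + k + 1 < n and a[i] - a[l] > a[l + k + 1] - a[i]):
            l += 1
        r = l + k
        left = prefix[i] - prefix[l]              # sum of window elements below a[i]
        right = prefix[r + 1] - prefix[i + 1]     # sum of window elements above a[i]
        res.append((i - l) * a[i] - left + right - (r - i) * a[i])
    return res

print(k_nearest_sums([1, 2, 4, 7], 2))   # [4, 3, 5, 8]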
I'm trying to understand a few sorting algorithms, but I'm struggling to see the difference in the bubble sort and insertion sort algorithm.
I know both are O(n^2), but it seems to me that bubble sort just bubbles the maximum value of the array to the top on each pass, while insertion sort just sinks the lowest value to the bottom on each pass. Aren't they doing the exact same thing, just in different directions?
For insertion sort, the number of comparisons/potential swaps starts at zero and increases each time (i.e. 0, 1, 2, 3, 4, ..., n), but for bubble sort the same behaviour happens at the end of the sorting (i.e. n, n-1, n-2, ..., 0), because bubble sort no longer needs to compare with the last elements, as they are already sorted.
For all this though, it seems a consensus that insertion sort is better in general. Can anyone tell me why?
Edit: I'm primarily interested in the differences in how the algorithms work, not so much their efficiency or asymptotic complexity.
Insertion Sort
After i iterations the first i elements are ordered.
In each iteration the next element is bubbled through the sorted section until it reaches the right spot:
sorted | unsorted
1 3 5 8 | 4 6 7 9 2
1 3 4 5 8 | 6 7 9 2
The 4 is bubbled into the sorted section
Pseudocode:
for i in 1 to n
    for j in i downto 2
        if array[j - 1] > array[j]
            swap(array[j - 1], array[j])
        else
            break
Bubble Sort
After i iterations the last i elements are the biggest, and ordered.
In each iteration, sift through the unsorted section to find the maximum.
unsorted | biggest
3 1 5 4 2 | 6 7 8 9
1 3 4 2 | 5 6 7 8 9
The 5 is bubbled out of the unsorted section
Pseudocode:
for i in 1 to n
for j in 1 to n - i
if array[j] > array[j + 1]
swap(array[j], array[j + 1])
Note that typical implementations terminate early if no swaps are made during one of the iterations of the outer loop (since that means the array is sorted).
Difference
In insertion sort elements are bubbled into the sorted section, while in bubble sort the maximums are bubbled out of the unsorted section.
In bubble sort, the ith iteration does n-i-1 inner iterations, about n^2/2 in total. In insertion sort, the ith step does at most i inner iterations, but only about i/2 on average, because you can stop the inner loop early, as soon as the correct position for the current element is found. That gives (the sum from 0 to n) / 2, which is about n^2/4 in total.
That's why insertion sort is faster than bubble sort.
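If you want to see that difference empirically, here's a rough Python counter (the exact numbers vary with the random input, but bubble sort lands near n^2/2 comparisons and insertion sort near n^2/4):

import random

def bubble_comparisons(a):
    a = a[:]
    n = len(a)
    count = 0
    for i in range(n):
        swapped = False
        for j in range(n - i - 1):
            count += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:           # typical early-exit optimisation
            break
    return count

def insertion_comparisons(a):
    a = a[:]
    count = 0
    for i in range(1, len(a)):
        x = a[i]
        j = i - 1
        while j >= 0:
            count += 1
            if a[j] > x:
                a[j + 1] = a[j]
                j -= 1
            else:
                break             # stop as soon as the correct position is found
        a[j + 1] = x
    return count

data = [random.randrange(1000) for _ in range(200)]
print(bubble_comparisons(data), insertion_comparisons(data))
# typically around 19900 vs around 10000 for n = 200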
Another difference I didn't see mentioned here:
Bubble sort does 3 value assignments per swap: you first have to save the value you want to push forward in a temporary variable (no. 1), then you have to write the other swap variable into the spot whose value you just saved (no. 2), and then you have to write your temporary variable into the other spot (no. 3).
You have to do that for every spot the value has to move through on its way to its correct position.
With insertion sort, you put the value to be sorted into a temporary variable and then move every element in front of that spot one position backwards, until you reach the correct spot for your value. That makes 1 value assignment per spot, and at the end you write the temporary variable into the final spot.
That makes far fewer value assignments, too.
This isn't the strongest speed benefit, but I think it can be mentioned.
I hope I expressed myself understandably; if not, sorry, I'm not a native English speaker.
The main advantage of insertion sort is that it's an online algorithm: you don't have to have all the values at the start. This can be useful when dealing with data coming in from a network or from a sensor.
I have a feeling this would be faster than the conventional O(n log n) algorithms in that setting, because re-sorting the whole buffer each time a value arrives from the stream costs O(n log n) per arrival, for roughly O(n^2 log n) in total over n values.
Insertion sort, on the contrary, needs O(n) to read the values from the stream and O(n) to put each value into its correct place, so it's only O(n^2). Another advantage is that you don't need a buffer for storing the values; you sort them in their final destination.
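A tiny Python sketch of that online behaviour; bisect.insort is just a convenient stand-in for the insertion step, and the shifting it does is still O(n) per arrival:

import bisect

def online_sort(stream):
    # Keep the values sorted as they arrive one at a time.
    # 'stream' can be any iterable, e.g. values read from a socket or a sensor.
    sorted_so_far = []
    for value in stream:
        bisect.insort(sorted_so_far, value)   # find the spot and shift: O(n) per arrival
        # sorted_so_far is fully sorted after every single arrival
    return sorted_so_far

print(online_sort([5, 1, 4, 2, 3]))   # [1, 2, 3, 4, 5]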
Bubble sort is not online (it cannot sort a stream of inputs without knowing how many items there will be) because it does not really keep track of a global maximum among the sorted elements. When a new item arrives, you would need to start the bubbling again from the very beginning.
Bubble sort is better than insertion sort only when someone is looking for the top k elements of a large list of numbers: in bubble sort, after k passes you already have the top k elements. After k passes of insertion sort, however, you only know that those first k elements are sorted among themselves.
Though both sorts are O(N^2), the hidden constants are much smaller in insertion sort. Hidden constants refer to the actual number of primitive operations carried out.
When does insertion sort have a better running time?
The array is nearly sorted: notice that insertion sort does fewer operations in this case than bubble sort does.
The array is relatively small: in insertion sort you move elements around to place the current element, which is only better than bubble sort when the number of elements is small.
Notice that insertion sort is not always better than bubble sort. To get the best of both worlds, you can use insertion sort for small arrays and probably merge sort (or quicksort) for larger ones.
Number of swaps in each iteration
Insertion sort does at most 1 swap in each iteration.
Bubble sort does 0 to n swaps in each iteration.
Accessing and changing the sorted part
Insertion sort accesses (and changes when needed) the sorted part to find the correct position of the number under consideration.
When optimized, bubble sort does not access what is already sorted.
Online or not
Insertion sort is online: it takes one input at a time and puts it in the appropriate position, and it does not have to compare only adjacent inputs.
Bubble sort is not online: it does not operate on one input at a time but handles a group of inputs (if not all of them) in each iteration, and it only compares and swaps adjacent inputs.
Insertion sort:
1. In insertion sort, swapping is not required (elements are shifted instead).
2. The time complexity of insertion sort is Ω(n) in the best case and O(n^2) in the worst case.
3. It is less complex compared to bubble sort.
4. Example: inserting books into a library shelf, arranging cards in your hand.
Bubble sort:
1. Swapping is required in bubble sort.
2. The time complexity of bubble sort is Ω(n) in the best case and O(n^2) in the worst case.
3. It is more complex compared to insertion sort.
I will try to give a more concise and informative answer than others.
Yes, after each pass, insertion sort and bubble sort intuitively seem the same - they both build a sorted sublist at the edge.
However, insertion sort will perform fewer comparisons in general. With insertion sort, we only perform a linear search within the sorted sublist on each pass. With random data, you can expect about m/2 comparisons and swaps, where m is the size of the sorted sublist.
With bubble sort, we always compare EVERY adjacent pair in the unsorted sublist on each pass, so that's n-m comparisons (roughly twice as many as insertion sort on random data, summed over the whole run). This means bubble sort is bad if comparisons are expensive/slow.
Also, the branching associated with swaps and compares for insertion sort is more predictable. We do a linear search at the same time as a linear insert, and we can generally predict/assume that the linear search/insert will continue until the correct space is found. With bubble sort, branching is essentially random, and we can expect a branch miss half the time! With every single compare! This means bubble sort is bad for pipelined processors if comparisons and swaps are relatively cheap/fast.
These factors make bubble sort much slower in general than insertion sort.
Insertion Sort: We insert the elements into their proper positions in the array, one at a time. When we reach the nth element in the array, the n-1 elements are sorted.
Bubble Sort: We start with a bubble of one element and keep extending the bubble by a quantity of 1, until all elements are added. At any iteration, we simply swap the adjacent elements in the proper order so as to get the largest element at the end of the bubble. In this way, we keep on putting the largest element at the end of the array, and finally after all iterations our sorting is done.
Bubble Sort and Insertion sort complexity: O(n^2)
Insertion sort is faster than bubble sort for the following reason:
Insertion sort just compares an element to an already sorted prefix, i.e. the ith element to the subarray containing elements 1...i-1, which are already sorted. Therefore there are fewer comparisons and swaps.
In bubble sort, however, as the sorted part grows, the same pass of comparing each pair of neighbours keeps running over the unsorted part. This leads to a lot more comparing and swapping than in insertion sort.
Therefore, even though the time complexity of both algorithms is O(n^2), insertion sort is in practice faster than bubble sort.
Insertion sort can be summed up as: "Look for the element which should be at the first position (the minimum), make some space by shifting the next elements, and put it at the first position. Good. Now look for the element which should be at the 2nd..." and so on.
Bubble sort operates differently, and can be summed up as: "As long as I find two adjacent elements which are in the wrong order, I swap them."
Bubble sort is almost useless under all circumstances. In use cases where insertion sort would have too many swaps, selection sort can be used instead, because it guarantees at most N-1 swaps. Since selection sort is better than bubble sort there too, bubble sort has no use cases.
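To make that swap guarantee concrete, here's a small Python sketch of selection sort with a swap counter (the counting is just for illustration):

def selection_sort(a):
    # Selection sort: at most len(a) - 1 swaps, no matter how the input is arranged.
    n = len(a)
    swaps = 0
    for i in range(n - 1):
        m = min(range(i, n), key=lambda j: a[j])   # index of the smallest remaining element
        if m != i:
            a[i], a[m] = a[m], a[i]
            swaps += 1
    return swaps

print(selection_sort([5, 1, 4, 2, 3]))   # at most 4 swaps for 5 elements (here exactly 4)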
There is an external array of integers on which you can perform the following operations in O(1) time.
get(int i) - returns the value at the index 'i' in the external array.
reverse( int i, int j) - returns the reverse of the array between index positions i and j (including i and j).
example for reverse: consider an array {1,2,3,4,5}. reverse(0,2) will return {3,2,1,4,5} and reverse(1,4) will return {1,5,4,3,2}.
Write a code to sort the external array. Mention the time and space complexity for your code.
Obviously we can sort in O(n log n) using quicksort or merge sort. But given this scenario, can we do better?
To sort an array is to find the permutation, or shuffle, that restores it to a sorted state. In other words, your algorithm determines which of the n! possible permutations must be applied, and applies it. Since your algorithm explores the array by asking yes-no questions (Is cell i smaller or greater than cell j?) it follows an implicit decision tree that has depth log(n!) ~ n*log(n).
This means there will be O(n*log(n)) calls to get() to determine how to sort the array.
An interesting variant is to determine the smallest number of calls to reverse() necessary to sort the array, once you know which permutation you need. We know that this number is at most n-1, which can be achieved with a selection-sort strategy. Can the worst-case number be smaller than that? I must say that I have no idea...
I'd try to reduce the problem to a classic swaps() based sorting algorithm.
In the following we assume without loss of generality j>=i:
Note that swap(i,j) = reverse(i,j) whenever j <= i+2: reversing a subarray of 3 or fewer elements just swaps its two ends.
Now, for any j > i+2, all you need is to reverse() the whole range, which swaps the edges, and then reverse the "middle" to put it back as it was. So you get: swap(i,j) = reverse(i,j); reverse(i+1,j-1)
Using the swap() just built, you can run any comparison-based algorithm that sorts by swapping, such as quicksort, which is O(n log n). The complexity remains O(n log n), since each swap() needs at most 2 reverse() ops, which is O(1).
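Here's a minimal Python sketch of that construction; get() and reverse() just simulate the external O(1) operations, and the quicksort on top uses only those two primitives:

def sort_external(a):
    # 'a' stands in for the external array; get/reverse simulate the O(1) external ops.
    def get(i):
        return a[i]

    def reverse(i, j):
        a[i:j + 1] = a[i:j + 1][::-1]

    def swap(i, j):
        if i == j:
            return
        if i > j:
            i, j = j, i
        reverse(i, j)              # swaps the two ends (and reverses what's in between) ...
        if j > i + 2:
            reverse(i + 1, j - 1)  # ... so put the middle back; only the ends have moved

    def quicksort(lo, hi):
        # Lomuto partition written purely in terms of get() and swap().
        if lo >= hi:
            return
        pivot = get(hi)
        k = lo
        for idx in range(lo, hi):
            if get(idx) < pivot:
                swap(idx, k)
                k += 1
        swap(k, hi)
        quicksort(lo, k - 1)
        quicksort(k + 1, hi)

    quicksort(0, len(a) - 1)

a = [3, 1, 5, 4, 2]
sort_external(a)
print(a)   # [1, 2, 3, 4, 5]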
EDIT: Note: this solution fits the original question (before it was edited), which asked for a solution, not for something better than quicksort/mergesort.
Assuming you want to minimize the number of external operations get and reverse:
read all integers into an internal array by calling get n times
do an internal sort (n log n internal ops) and calculate the permutation
sort the external array by calling reverse a maximum of n times
This has O(n) time and O(n) space complexity.
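A small Python sketch of that scheme; the external array is simulated, and the internal bookkeeping here uses a simple (quadratic) selection scan instead of computing the permutation from an O(n log n) internal sort, since only the external get/reverse calls are being counted:

def sort_with_few_external_ops(external):
    # Count external operations to show the O(n) bound: n get() calls, at most n-1 reverse() calls.
    calls = {"get": 0, "reverse": 0}

    def get(i):
        calls["get"] += 1
        return external[i]

    def reverse(i, j):
        calls["reverse"] += 1
        external[i:j + 1] = external[i:j + 1][::-1]

    n = len(external)
    model = [get(i) for i in range(n)]          # read everything once: n external gets
    for i in range(n - 1):
        j = min(range(i, n), key=lambda idx: model[idx])   # next element to place (internal work only)
        if j != i:
            reverse(i, j)                         # one external reverse places it
            model[i:j + 1] = model[i:j + 1][::-1] # keep the internal copy in sync
    return calls

ext = [3, 1, 5, 4, 2]
print(sort_with_few_external_ops(ext), ext)       # {'get': 5, 'reverse': 4} [1, 2, 3, 4, 5]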
Edit in response to anonymous downvotes:
When talking about time complexity, you always have to state which operations are being counted. Here I assumed that only the external operations have a cost.
Based on get(int i) and reverse(int i, int j) alone, we can't optimise the code; it will have the same complexity.
I'm confused on the running time of shell sort if the list is pre-sorted (best case). Is it O(n) or O(n log n)?
for (k = n/2; k > 0; k /= 2)
    for (i = k; i < n; i++)
        for (j = i; j >= k; j -= k)
            if (a[j-k] > a[j]) swap(a[j-k], a[j]);
            else break;
Shell sort is based on insertion sort, and insertion sort has O(n) running time for pre-sorted list, however, by introducing gaps (outermost loop), I don't know if it makes the running time of shell sort O(n log n) for pre-sorted list.
Thanks for the help.
In the best case when the data is already ordered, the innermost loop will never swap. It will always immediately break, since the left value is known to be smaller than the right value:
for (k = n/2; k > 0; k /= 2)
    for (i = k; i < n; i++)
        for (j = i; j >= k; j -= k)
            if (false) swap
            else break;
So, the algorithm collapses to this:
for (k = n/2; k > 0; k /= 2)
    for (i = k; i < n; i++)
        no_op()
The best case then becomes:
O((n - n/2) + (n - n/4) + (n - n/8) + ... + (n - 1))
= O(n log(n) - n)
= O(n log(n))
That said, according to Wikipedia, some other variants of Shell Sort do have an O(N) best case.
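If you want to see that count for the gap sequence in the question, here's a quick Python instrument of the same algorithm (just a rough counter, not a benchmark); on already-sorted input it does exactly one comparison per (gap, i) pair:

def shell_comparisons(a):
    # Count comparisons made by the shell sort above on input 'a'.
    a = a[:]
    n = len(a)
    count = 0
    k = n // 2
    while k > 0:
        for i in range(k, n):
            j = i
            while j >= k:
                count += 1
                if a[j - k] > a[j]:
                    a[j - k], a[j] = a[j], a[j - k]
                    j -= k
                else:
                    break
        k //= 2
    return count

for n in (1000, 2000, 4000, 8000):
    print(n, shell_comparisons(list(range(n))))
# Grows slightly faster than linearly, matching the n*log(n) - n count derived above.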
I think (at least as normally implemented) it's approximately O(n log n), though the exact number is going to depend on the progression you use.
For example, in the first iteration you invoke insertion sort, let's say, five times, each sorting every fifth element. Since each of these is linear on the number of elements sorted, you get linear complexity overall.
In the next iteration you invoke insertion sort, say, twice, sorting every other element. Again, linear overall.
In the third, you do insertion sort on every element, again linear.
In short, you have a linear algorithm invoked a (roughly) logarithmic number of times, so it should be about O(n log n) overall. That assumes some sort of geometric progression in the step sizes you use, which is common but (perhaps) not absolutely required.
If you're using log(n) gap values for an array of length n, then you will have a time complexity of n log(n).
Otherwise, if you always use a constant number of gap values (such as 3), you will get O(n).
In general, if you use k gap values, your best-case time complexity is O(kn). People saying the best case is O(n log n) are using log n gap values, and people who say it's O(n) are referring to always using a constant number of gap values regardless of the input.
The best case is O(n). Here's why:
Let's start with insertion sort: an already sorted list of n entries requires n - 1 comparisons to complete (no exchanges necessary).
Put that insertion sort in the context of a shellsort with a single increment, 1. An already sorted list of n entries requires n minus the gap (1) comparisons.
Suppose you have two gaps, 5 followed by 1, and n is greater than 5. An already sorted list requires n-5 comparisons to process the first gap plus n-1 comparisons for the second, i.e. 2n-6 in total (no exchanges necessary).
It doesn't matter if you used n as input to generate the gaps; you end up with each gap being a constant value c (the final c being 1).
So the comparison count in the best case is "n * (number of gaps) - (sum of all gaps)".
I don't see how "n * (number of gaps) - ..." could be anything other than O(n).
I know most discussions put it as something else, and I get the impression that no one has bothered to sit down and do the math. As you can see, it's not rocket science.
My professor gave me the following definition of Shell Sort. I've included the Bubble and Insertion Sort algorithms as well.
What is the advantage of using Shell Sort vs just a regular Insertion Sort or Bubble Sort with gap=1? Eventually, the Shell Sort boils down to that anyway, right?
I'm not asking you to do my homework. I'm legitimately confused and want to understand what's going on.
Also, I've already visited Wikipedia and seen the Time Complexity table and I already know what they say. I'm looking for the why, not the what.
def shell(a, n):
    gap = n // 2                 # integer division so the gap stays an int
    while gap >= 1:
        insertion(a, n, gap)     # or bubble(a, n, gap)
        gap //= 2

def bubble(a, n, gap=1):
    for i in range(n):
        for j in range(n - i - gap):
            if a[j] > a[j + gap]:
                a[j], a[j + gap] = a[j + gap], a[j]   # swap the two compared elements

def insertion(a, n, gap=1):
    for i in range(gap, n):
        x = a[i]
        j = i - gap
        while j >= 0 and a[j] > x:
            a[j + gap] = a[j]
            j -= gap
        a[j + gap] = x
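A quick, throwaway sanity check for the definitions above (not part of the assignment):

import random

data = [random.randrange(100) for _ in range(50)]
expected = sorted(data)
shell(data, len(data))
print(data == expected)   # True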
Shell sort allows swapping of indexes that are far apart, where bubble sort only swaps items that are adjacent.
The wikipedia entries on
http://en.wikipedia.org/wiki/Shell_sort
http://en.wikipedia.org/wiki/Insertion_sort
http://en.wikipedia.org/wiki/Bubble_sort
cover the differences.
Edit:
Imagine that you've got a bunch of cards in your hand and the cards are almost in order, except the first and last are swapped. Bubble sort would be a pain to do, because there'd be about 2n swaps; insertion sort would be better, with about n swaps; but shell sort could do it in 1. (The exact number of swaps varies with the implementation; this is just an example.)
The logic of shell sort is to sort entries that are far apart first. Given a partially sorted list, you can in theory sort a lot faster than O(n^2). Also, in a large unsorted array, the probability that an element's final sorted position is far from its current position is high, so logically it makes sense to use a larger gap. But the main point of shell sort is not really its performance; it is the simplicity of the algorithm and the low stack-memory usage.
Given that on average it does better than O(n^2) (depending on the gap sequence), and given its small code size and stack usage, it is very popular in embedded applications where memory constraints are a factor.
The difference is efficiency.
Insertion sort and bubble sort are both O(n^2), while shell sort is roughly O(n log n) (depending on the gap sequence).
That means if you have a collection of 100 elements, the number of operations with bubble or insertion sort is about K * 100^2 = K * 10000, where K depends on other factors but is mostly constant.
Using shell sort, the operations needed are about Q * 100 * log 100 = Q * 100 * 2 = Q * 200, where Q depends on other factors and is mostly constant.