Could anyone explain why insertion sort has a time complexity of Θ(n²)?
I'm fairly certain that I understand time complexity as a concept, but I don't really understand how to apply it to this sorting algorithm. Should I just look to mathematical proofs to find this answer?
On average each insertion must traverse half the currently sorted list while making one comparison per step. The list grows by one each time.
So starting with a list of length 1 and inserting the first item to get a list of length 2, we have on average a traversal of 0.5 (0 or 1) places. The rest are 1.5 (0, 1, or 2 places), 2.5, 3.5, ..., n - 0.5 for a list of length n + 1.
This is, by simple algebra, 1 + 2 + 3 + ... + n - n*0.5 = (n(n+1) - n)/2 = n^2/2 = O(n^2)
Note that this is the average case. In the worst case the list must be fully traversed (you are always inserting the next-smallest item into the ascending list). Then you have 1 + 2 + ... n, which is still O(n^2).
In the best case you find the insertion point at the top element with one comparison, so you have 1 + 1 + ... + 1 (n times) = n = O(n).
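If it helps to see where those counts come from, here is a minimal insertion sort in Java with a comparison counter (the class and method names are just illustrative); running it on a reverse-sorted array reproduces the worst-case total of n(n-1)/2 comparisons:

// Minimal insertion sort with a comparison counter (illustrative sketch).
public class InsertionSortDemo {
    static long comparisons = 0;

    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            // Walk left through the sorted prefix until the insertion point is found.
            while (j >= 0) {
                comparisons++;
                if (a[j] <= key) break;   // found the insertion point
                a[j + 1] = a[j];          // shift the larger element one slot right
                j--;
            }
            a[j + 1] = key;
        }
    }

    public static void main(String[] args) {
        insertionSort(new int[]{4, 3, 2, 1});                 // reverse-sorted: worst case
        System.out.println("comparisons = " + comparisons);  // prints 6 = n(n-1)/2 for n = 4
    }
}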
It only applies to arrays/lists - i.e. structures with O(n) time for insertions/deletions. It can be different for other data structures. For example, for skiplists it will be O(n * log(n)), because binary search is possible in O(log(n)) in skiplist, but insert/delete will be constant.
Worst case time complexity of Insertion Sort algorithm is O(n^2).
The worst case of insertion sort occurs when the elements of the array are already stored in decreasing order and you want to sort the array in increasing order.
Suppose you have the array | 4 | 3 | 2 | 1 |. There are n - 1 = 3 insertion steps; after each step the array looks like this:
Step 1 => | 3 | 4 | 2 | 1 | No. of comparisons = 1 | No. of movements = 1
Step 2 => | 2 | 3 | 4 | 1 | No. of comparisons = 2 | No. of movements = 2
Step 3 => | 1 | 2 | 3 | 4 | No. of comparisons = 3 | No. of movements = 3
In general, step i costs i comparisons and i movements, for i = 1 to n - 1.
T(n) = 2 + 4 + 6 + ... + 2(n-1)
T(n) = 2 * (1 + 2 + 3 + ... + (n-1))
T(n) = 2 * n(n-1)/2 = n(n-1)
T(n) = O(n^2)
I have M intervals on the real line, each with a positive weight. I need to select N among them such that they don't overlap and give the maximum sum. How can I do that efficiently?
If there is no subset of N non-overlapping intervals, there is no solution.
Without the non-overlap constraint, the question is trivial: pick the N largest weights. Because of the constraint, this doesn't work anymore. In my case, N and M are small (<20), but I hope that there is a more efficient solution than exhaustively trying all subsets.
You can solve it with dynamic programming. Let C(k, i) be the maximum sum of (up to) k weighted intervals, none of which has their left end less than i.
You can restrict i to be in the set of (real) start points for all the intervals, and k ranges from 0 to N.
Start by initializing C(k, max(start for start, end in interval)) to 0, and all other entries to -infinity.
Sort the intervals by start points, and iterate through them backwards.
For each interval (start, end) with weight w, and for each k:
C(k, start) = max(C(k, start), C(k, next(start)), w + C(k-1, next'(end)))
Here next(x) returns the smallest start point greater than x, and next'(x) returns the smallest start point greater than or equal to x. Both can be implemented by binary search (or linear scan if M is small).
Overall, this is going to take O(M*N*logM) time and O(M*N) space.
(This assumes the intervals aren't closed at both ends, so (0, 100) and (100, 200) don't overlap -- a small adjustment needs to be made if these are to be considered overlapping).
In progress:
Inspired by @PaulHankin's solution, here is a reformulation.
Sort all intervals by increasing right abscissa and iteratively find the largest possible sum up to the K-th right bound. Assume you have solved all optimization problems for the first K intervals and for all relevant N (from 0 to K).
When you take the next interval into consideration, you compute new candidate solutions as follows: for every N (up to the current number of intervals), lengthen all previous solutions that you can (including the empty one) without causing overlap, and keep the lengthened solutions that are better.
Example:
The optimal sums up to the K-th right bound, by increasing N are
1: 2
2: 2 or 8 => 8 | -
3: 8 or 3 => 8 | 2 + 3 | -
4: 8 or 2 => 8 | 2 + 3 or 8 + 2 => 8 + 2 | 2 + 3 + 2 | -
5: 8 or 9 => 9 | 8 + 2 or 2 + 9 => 2 + 9 | 2 + 3 + 2 | - | -
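For what it's worth, here is a rough Java sketch of this sort-by-right-endpoint idea, written as the standard weighted interval scheduling DP with a cardinality constraint ("exactly N intervals", with a minus-infinity sentinel for infeasible states). The names and the sample data are mine; intervals are plain {start, end, weight} triples treated as half-open, so touching endpoints do not overlap:

import java.util.Arrays;
import java.util.Comparator;

// Pick exactly N pairwise non-overlapping weighted intervals with maximum total weight.
public class IntervalSelection {
    static final double NEG = Double.NEGATIVE_INFINITY;

    // intervals[i] = {start, end, weight}; returns the best sum, or -infinity if
    // no N pairwise non-overlapping intervals exist.
    static double bestSum(double[][] intervals, int N) {
        int M = intervals.length;
        double[][] iv = intervals.clone();
        Arrays.sort(iv, Comparator.comparingDouble((double[] x) -> x[1]));  // by right endpoint

        // p[j] = how many of the first j-1 intervals end no later than iv[j-1] starts
        int[] p = new int[M + 1];
        for (int j = 1; j <= M; j++) {
            int q = 0;
            while (q < j - 1 && iv[q][1] <= iv[j - 1][0]) q++;   // linear scan; fine for small M
            p[j] = q;
        }

        // dp[j][t] = best sum using exactly t intervals among the first j (by right endpoint)
        double[][] dp = new double[M + 1][N + 1];
        for (double[] row : dp) Arrays.fill(row, NEG);
        for (int j = 0; j <= M; j++) dp[j][0] = 0;
        for (int j = 1; j <= M; j++)
            for (int t = 1; t <= N; t++) {
                dp[j][t] = dp[j - 1][t];                         // skip interval j-1
                if (dp[p[j]][t - 1] > NEG)                       // or take it
                    dp[j][t] = Math.max(dp[j][t], iv[j - 1][2] + dp[p[j]][t - 1]);
            }
        return dp[M][N];
    }

    public static void main(String[] args) {
        double[][] iv = {{0, 3, 4}, {2, 5, 6}, {4, 7, 3}, {6, 9, 5}, {5, 10, 8}};
        System.out.println(bestSum(iv, 2));   // picks 6 + 8 = 14 for this made-up input
    }
}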
Given an array of numbers a[0], a[1], ..., a[n-1], we get queries of the kind:
output k-th largest number in the range a[i], a[i+1], ..., a[j]
Can these queries be answered in polylogarithmic time (in n) per query? If not, is it possible to average results and still get a good amortized complexity?
EDIT: this can be solved using persistent segment trees
http://blog.anudeep2011.com/persistent-segment-trees-explained-with-spoj-problems/
Yes, these queries can be answered in polylog time if O(n log n) space is available.
Preprocess the given array by constructing a segment tree of depth log(n), so that the leaf nodes are identical to the source array, the next level contains sorted 2-element sub-arrays, the level above that consists of 4-element arrays produced by merging those 2-element arrays, and so on. In other words, perform merge sort but keep the result of each merge step in a separate array. Here is an example:
root: | 1 2 3 5 5 7 8 9 |
| 1 2 5 8 | 3 5 7 9 |
| 1 5 | 2 8 | 7 9 | 3 5 |
source: | 5 | 1 | 2 | 8 | 7 | 9 | 5 | 3 |
To answer a query, split the given range into at most 2*log(n) subranges. For example, the range [0, 4] is split into [0, 3] and [4], which gives two sorted arrays [1 2 5 8] and [7]. Now the problem is reduced to finding the k-th element in several sorted arrays. The easiest way to solve it is nested binary search: first use binary search to choose some candidate element from every array, starting from the largest one; then use binary search in the other (smaller) arrays to determine the rank of this candidate element. This allows getting the k-th element in O(log(n)^4) time. Probably some optimization (like fractional cascading) or some other algorithm could do this faster...
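Here is a rough Java sketch of that preprocessing (sometimes called a merge sort tree): every node stores a sorted copy of its range. The query below uses a slightly different strategy than the nested binary search described above: it binary-searches over candidate values taken from the root array and counts how many elements of the range are >= the candidate, which is about O(log^3 n) per query. All names are mine, and it assumes 1 <= k <= length of the range:

public class MergeSortTree {
    final int n;
    final int[][] node;   // node[v] = sorted copy of the range covered by segment-tree node v

    MergeSortTree(int[] a) {
        n = a.length;
        node = new int[4 * n][];
        build(1, 0, n - 1, a);
    }

    private void build(int v, int lo, int hi, int[] a) {
        if (lo == hi) { node[v] = new int[]{a[lo]}; return; }
        int mid = (lo + hi) / 2;
        build(2 * v, lo, mid, a);
        build(2 * v + 1, mid + 1, hi, a);
        // merge the two sorted children, exactly like one merge-sort step
        int[] L = node[2 * v], R = node[2 * v + 1];
        int[] merged = new int[L.length + R.length];
        for (int i = 0, j = 0, t = 0; t < merged.length; t++)
            merged[t] = (j == R.length || (i < L.length && L[i] <= R[j])) ? L[i++] : R[j++];
        node[v] = merged;
    }

    // number of elements >= x in a[lo..hi]
    private int countAtLeast(int v, int l, int r, int lo, int hi, int x) {
        if (hi < l || r < lo) return 0;
        if (lo <= l && r <= hi) {
            // binary search for the first element >= x in this node's sorted array
            int a = 0, b = node[v].length;
            while (a < b) { int m = (a + b) / 2; if (node[v][m] >= x) b = m; else a = m + 1; }
            return node[v].length - a;
        }
        int mid = (l + r) / 2;
        return countAtLeast(2 * v, l, mid, lo, hi, x)
             + countAtLeast(2 * v + 1, mid + 1, r, lo, hi, x);
    }

    // k-th largest element of a[lo..hi], 1-based k
    int kthLargest(int lo, int hi, int k) {
        int[] all = node[1];                       // root = fully sorted array
        int a = 0, b = all.length - 1, ans = all[0];
        // find the largest value v such that at least k elements of the range are >= v
        while (a <= b) {
            int m = (a + b) / 2;
            if (countAtLeast(1, 0, n - 1, lo, hi, all[m]) >= k) { ans = all[m]; a = m + 1; }
            else b = m - 1;
        }
        return ans;
    }

    public static void main(String[] args) {
        MergeSortTree t = new MergeSortTree(new int[]{5, 1, 2, 8, 7, 9, 5, 3});
        System.out.println(t.kthLargest(0, 4, 2));   // 2nd largest of 5 1 2 8 7 -> 7
    }
}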
There is an algorithm named QuickSelect, which is based on quicksort. It runs in O(n) on average; its worst case is O(n**2), e.g. when the input is reverse ordered.
It gives exactly the k-th biggest number. If you want ranges, you can write a wrapper method.
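For reference, here is a small QuickSelect sketch in Java for the k-th largest element (names are mine; the simple last-element pivot is exactly what makes already-ordered input a worst case):

// Quickselect for the k-th largest element (1-based k). Average O(n),
// but the naive last-element pivot degrades to O(n^2) on adversarial input.
public class QuickSelect {
    static int kthLargest(int[] a, int k) {
        int target = a.length - k;            // index of the k-th largest in sorted order
        int lo = 0, hi = a.length - 1;
        while (true) {
            int p = partition(a, lo, hi);     // a[p] ends up in its final sorted position
            if (p == target) return a[p];
            if (p < target) lo = p + 1; else hi = p - 1;
        }
    }

    // Lomuto partition around the last element.
    private static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++)
            if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;
        return i;
    }

    public static void main(String[] args) {
        System.out.println(kthLargest(new int[]{5, 1, 2, 8, 7}, 2));   // prints 7
    }
}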
For a given array of distinct (unique) integers, I want to know how many of its permutations produce a BST whose rightmost arm has length k.
(If k = 3, root->right->right is a leaf node)
(At my current requirement, I can not afford an algorithm with cost greater than N^3)
Two identical BSTs generated from different permutations are considered different.
My approach so far is:
Assume a function:
F(arr) = {a1, a2, a3...}
where a1 is the count of permutations with k = 1, a2 is the count with k = 2, etc.
F(arr[1:n]) = for i in range 1 to n (1 + df * F(subarr where each element is larger than arr[i]))
where df is a dynamic factor: (n-1)C(count of elements smaller than arr[i])
I am trying to create a dp to solve the problem
Sort the array
Start from the largest number to the smallest
dp[i][i] = 1
for(j in range i-1 to 1) dp[j][i] = some function of dp[j][i-1], but I am unable to formulate it
For ex: for arr{4, 3, 2, 1}, I expect the following dp
arr[i] 4 3 2 1
+---+---+---+---+
k = 1 | 1 | 1 | 2 | 6 |
+---+---+---+---+
k = 2 | - | 1 | 3 |11 |
+---+---+---+---+
k = 3 | - | - | 1 | 6 |
+---+---+---+---+
k = 4 | - | - | - | 1 |
+---+---+---+---+
verification(n!) 1 2 6 24
Any hint, suggestions, pointers or redirection to a good source where I can meet my curiosity is welcome.
Thank you.
edit: It seems I may need 3D dp array. I am working on the same.
edit: Corrected col 3 of dp
The good news is that if you don't want the permutations themselves but only their number, there is a formula for that: these are known as the (unsigned) Stirling numbers of the first kind. The reason is that the numbers appearing on the left arm of a binary search tree are exactly the left-to-right minima of the insertion order, that is, the values i such that every number appearing before i is greater than i (by symmetry, the right arm holds the left-to-right maxima, and both statistics are counted by the same numbers). Here is an example where the records are underlined:
6 8 3 5 4 2 7 1 9
_   _     _   _
This gives the tree
       6
      / \
     3   8
    / \ / \
   2  5 7  9
  /  /
 1  4
Those numbers are known to count permutations according to various characteristics (number of cycles, ...), and left-to-right maxima/minima are among those characteristics. You can find more information in entry A008275 of The On-Line Encyclopedia of Integer Sequences.
Now to answer the question of computing them. Let S(n,k) be the number of permutations of n numbers with k left to right minima. You can use the recurrence:
S(n, 0) = 0 for all n > 0, S(n, k) = 0 for k > n, and S(1, 1) = 1
S(n+1, k) = n*S(n, k) + S(n, k-1) for all n > 0 and k > 0
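A tiny Java sketch of that recurrence (array and method names are mine). Note that row n = 4 reproduces the column the question expects for arr {4, 3, 2, 1}:

// Unsigned Stirling numbers of the first kind via
// S(n+1, k) = n*S(n, k) + S(n, k-1), with S(0, 0) = 1.
public class Stirling {
    static long[][] stirlingFirstKind(int maxN) {
        long[][] s = new long[maxN + 1][maxN + 1];
        s[0][0] = 1;
        for (int n = 0; n < maxN; n++)
            for (int k = 1; k <= n + 1; k++)
                s[n + 1][k] = n * s[n][k] + s[n][k - 1];
        return s;
    }

    public static void main(String[] args) {
        long[][] s = stirlingFirstKind(4);
        // Row n = 4 is [0, 6, 11, 6, 1]: 6 permutations with 1 record, 11 with 2, 6 with 3, 1 with 4.
        System.out.println(java.util.Arrays.toString(s[4]));
    }
}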
If I understand your problem correctly:
You do not need to sort the array. Since all numbers in your array are unique, you can assume that every possible subtree is a unique one.
Therefore you just need to count how many unique trees you can build with N - k unique elements, where N is the length of your array and k is the length of the rightmost arm. In other words, it will be the number of permutations of your left subtree if you fix your right subtree to the structure (root (node1 (node2 ... nodeK))).
Here is a way to calculate the number of binary trees of size N:
// Counts the number of binary trees with i nodes (Catalan numbers) using the
// standard recurrence: choose a root, then split the remaining nodes between
// the left and right subtrees in every possible way.
public int numTrees(int n) {
    int[] ut = new int[Math.max(n + 1, 3)];
    ut[1] = 1;
    ut[2] = 2;
    for (int i = 3; i <= n; i++) {
        int u = 0;
        for (int j = 0; j < i; j++) {
            // Math.max(1, ...) treats the empty subtree (0 nodes) as one shape
            u += Math.max(1, ut[j]) * Math.max(1, ut[i - j - 1]);
        }
        ut[i] = u;
    }
    return ut[n];
}
It has O(n^2) time complexity and O(n) space complexity.
I recently learned to find the nth number of the Fibonacci series by matrix exponentiation, but I am stuck on two relations:
1) F(n) = F(n−1) + n
2) F(n) = F(n−1) + 1/n
Is there any efficient way to solve these in O(log n) time, like the matrix exponentiation we have for the Fibonacci series?
The first one is obviously equal to:
F(n) = F(0) + n*(n+1)/2
and can be computed in O(1) time. For the second, look here.
Supposing that you want to compute the first one with matrix exponentiation, in the same way that you did with the Fibonacci series, here's the matrix you should use:
| 1 1 0 |
A = | 0 1 1 |
| 0 0 1 |
The choice of matrix is obvious if you think of the following equation:
| F(n+1) | | 1 1 0 | | F(n) |
| n+1 | = | 0 1 1 | * | n |
| 1 | | 0 0 1 | | 1 |
Of course, the starting vector has to be: (F(0), 0, 1).
For the second series this is not so easy, as you would want to gradually compute the value 1/n, which cannot be computed linearly in this way. I guess it cannot be done but I won't try to prove it.
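If you do want the matrix route anyway, here is a small Java sketch of fast exponentiation of that 3x3 matrix, checked against the closed form F(n) = F(0) + n(n+1)/2 (plain long arithmetic, no modulus; names are mine):

// Computes F(n) for F(n) = F(n-1) + n via A^n * (F(0), 0, 1), in O(log n) matrix multiplications.
public class MatrixPower {
    static long[][] mul(long[][] a, long[][] b) {
        long[][] c = new long[3][3];
        for (int i = 0; i < 3; i++)
            for (int k = 0; k < 3; k++)
                for (int j = 0; j < 3; j++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    static long[][] pow(long[][] a, long e) {
        long[][] r = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};   // identity
        while (e > 0) {
            if ((e & 1) == 1) r = mul(r, a);
            a = mul(a, a);
            e >>= 1;
        }
        return r;
    }

    static long f(long n, long f0) {
        long[][] A = {{1, 1, 0}, {0, 1, 1}, {0, 0, 1}};
        long[][] P = pow(A, n);
        return P[0][0] * f0 + P[0][1] * 0 + P[0][2] * 1;   // first component of A^n * (F(0), 0, 1)
    }

    public static void main(String[] args) {
        System.out.println(f(10, 0));          // 55
        System.out.println(0 + 10 * 11 / 2);   // closed form, also 55
    }
}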
The first one can be calculated in O(1) just because this is an arithmetic progression: starting from F(0), the added terms sum to n*(n+1)/2.
The second one is a harmonic series and cannot be calculated exactly in O(1), but you can approximate it with:
F(k) ≈ F(0) + ln(k) + γ + 1/(2k)
where γ ≈ 0.57721566490153286060 is the Euler–Mascheroni constant and the 1/(2k) term is a small correction.
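In code, the harmonic-number part of that approximation might look like this (a small sketch; γ is hard-coded, F(n) is then F(0) plus this value, and the accuracy improves as n grows):

// Approximate H(n) = 1 + 1/2 + ... + 1/n in O(1) using ln(n) + gamma + 1/(2n).
public class Harmonic {
    static final double GAMMA = 0.57721566490153286060;   // Euler-Mascheroni constant

    static double harmonicApprox(long n) {
        return Math.log(n) + GAMMA + 1.0 / (2.0 * n);
    }

    public static void main(String[] args) {
        double exact = 0;
        for (int i = 1; i <= 1000; i++) exact += 1.0 / i;
        System.out.println(exact);                  // ~7.48547...
        System.out.println(harmonicApprox(1000));   // very close to the exact sum
    }
}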
k-way merge is the algorithm that takes as input k sorted arrays, each of size n. It outputs a single sorted array of all the elements.
It does so by using the "merge" routine central to the merge sort algorithm to merge array 1 to array 2, and then array 3 to this merged array, and so on until all k arrays have merged.
I had thought that this algorithm is O(kn) because the algorithm traverses each of the k arrays (each of length n) once. Why is it O(nk^2)?
Because it doesn't traverse each of the k arrays once. The first array is traversed k-1 times: the first time as merge(array-1, array-2), the second time as merge(merge(array-1, array-2), array-3), and so on.
The result is k-1 merges whose outputs have sizes 2n, 3n, ..., kn, i.e. an average size of n*(k+2)/2, giving a total of O(n*(k^2+k-2)/2), which is O(nk^2).
The mistake you made was forgetting that the merges are done serially rather than in parallel, so the intermediate arrays are not all of size n.
Actually, in the worst case scenario there will be n comparisons for the first merge, 2n for the second, 3n for the third, and so on till (k - 1)n.
So now the complexity becomes simply
n + 2n + 3n + 4n + ... + (k - 1)n
= n(1 + 2 + 3 + 4 + ... + (k - 1))
= n((k - 1)*k) / 2
= n(k^2 - k) / 2
= O(nk ^ 2)
:-)
How about this:
Step 1:
Merge arrays (1 and 2), arrays (3 and 4), and so on (k/2 merges, each producing an array of size 2n; total work kn).
Step 2:
Merge arrays (1,2 and 3,4), arrays (5,6 and 7,8), and so on (k/4 merges, each producing an array of size 4n; total work kn).
Step 3:
Repeat...
There will be log(k) such "Steps", each with kn work. Hence the total work done is O(kn*log(k)).
Even otherwise, if we were to just sort all k*n elements directly, we could still finish in O(kn*log(kn)) time.
k-way merge is the algorithm that takes as input k sorted arrays, each of size n. It outputs a single sorted array of all the elements.
I had thought that this algorithm is O(kn)
We can disprove that by contradiction. Define a sorting algorithm for m items that uses your algorithm with k=m and n=1. By the hypothesis, the sorting algorithm succeeds in O(m) time. Contradiction: it's known that any comparison-based sorting algorithm has worst case at least O(m log m).
You don't have to compare items one by one each time.
Simply maintain the current head of each of the k arrays in a sorted set (or min-heap).
On each step, remove the smallest and replace it with the next element from the same array. This is O(nk*log(k)) overall.
Relevant article. Disclaimer: I participated in writing it
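Here is a sketch of that idea in Java, using a PriorityQueue of {value, array index, position} triples (names are mine); each of the k*n output steps costs O(log k):

import java.util.PriorityQueue;

// k-way merge with a size-k min-heap: the heap always holds the current head of
// each input array, so every one of the k*n output steps costs O(log k).
public class KWayMerge {
    static int[] merge(int[][] arrays) {
        int total = 0;
        for (int[] a : arrays) total += a.length;

        // heap entries: {value, array index, index within that array}
        PriorityQueue<int[]> heap = new PriorityQueue<>((x, y) -> Integer.compare(x[0], y[0]));
        for (int i = 0; i < arrays.length; i++)
            if (arrays[i].length > 0) heap.add(new int[]{arrays[i][0], i, 0});

        int[] out = new int[total];
        for (int q = 0; q < total; q++) {
            int[] top = heap.poll();              // smallest remaining head
            out[q] = top[0];
            int arr = top[1], idx = top[2] + 1;
            if (idx < arrays[arr].length)         // replace it with its successor
                heap.add(new int[]{arrays[arr][idx], arr, idx});
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] in = {{1, 5, 8}, {2, 2, 9}, {0, 7, 7}};
        System.out.println(java.util.Arrays.toString(merge(in)));   // [0, 1, 2, 2, 5, 7, 7, 8, 9]
    }
}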
1) You have k sorted arrays, each of size n. Therefore total number of elements = k * n
2) Take the first element of all k arrays and create a sequence. Then find the minimum of this sequence. This min value is stored in the output array. Number of comparisons to find the minimum of k elements is k - 1.
3) Therefore the total number of comparisons
= (comparisons/element) * number of elements
= (k - 1) * k * n
= k^2 * n // approximately
A common implementation keeps an array of indexes, one for each of the k sorted arrays: {i_1, i_2, ..., i_k}. On each iteration the algorithm finds the minimum next element across all k arrays and stores it in the output array. Since you are doing kn iterations and scanning k arrays per iteration, the total complexity is O(k^2 * n).
Here's some pseudo-code:
Input: A[j] j = 1..k : k sorted arrays each of length n
Output: B : Sorted array of length kn
// Initialize array of indexes
I[j] = 0 for j = 1..k
q = 0
while (q < kn):
    p = argmin({A[j][I[j]]}) over j = 1..k // Get the array for which the next unprocessed element is minimal (ignores arrays for which I[j] >= n)
    B[q] = A[p][I[p]]
    I[p] = I[p] + 1
    q = q + 1
You have k arrays, each with n elements. This means k*n elements in total.
Consider it a matrix of k*n. To add one element to the merged/final array, you need to compare the heads of the k arrays. This means that for each element of the final array you do k comparisons.
So from 1 and 2, for the k*n elements the total time taken is O(k*k*n).
For those who want to know the details or need some help with this, I'm going to expand on Recurse's answer and the follow-up comment.
We only need k-1 merges because the last array is not merged with anything
The formula for summing the terms of an arithmetic sequence is helpful: Sn = n(a1 + an)/2
Stepping through the first 4 merges of k arrays with n elements
+-------+-------------------+-------------+
| Merge | Size of new array | Note |
+-------+-------------------+-------------+
| 1 | n+n = 2n | first merge |
| 2 | 2n+n = 3n | |
| 3 | 3n+n = 4n | |
| 4 | 4n+n = 5n | |
| k-1 | (k-1)n+n = kn | last merge |
+-------+-------------------+-------------+
To find the average size, we need to sum all the sizes and divide by the number of merges (k-1). Using the formula for summing the first n terms, Sn = n(a1 + an)/2, we only need the first and last terms:
a1=2n (first term)
an=kn (last term)
We want to sum all the terms so n=k-1 (the number of terms we have). Plugging in the numbers we get a formula for the sum of all terms
Sn = ( (k-1)(2n+kn) )/2
However, to find the average size we must divide by the number of terms (k-1). This cancels out the k-1 in the numerator and we're left with an average of size of
(2n + kn)/2
Now we have the average size, we can multiply it by the number of merges, which is k-1. To make the multiplication easier, ignore the /2, and just multiply the numerators:
(k-1)(2n+kn)
= (k^2)n + kn - 2n
At this point you could reintroduce the /2, but there shouldn't be any need since it's clear the dominant term is (k^2)*n