Number of comparisons needed to sort n values?

I am working on a revised selection sort algorithm so that on each pass it finds both the largest and smallest values in the unsorted portion of the array. The sort then moves each of these values into its correct location by swapping array entries.
My question is - How many comparisons are necessary to sort n values?
In normal selection sort it is O(n) comparisons per pass, so I am not sure what it will be in this case.

Normal selection sort requires O(n^2) comparisons.
On every pass it makes K comparisons, where K is n-1, n-2, n-3, ..., 1, and the sum of this arithmetic progression is n*(n-1)/2.
Your approach (if you are using the optimized min/max selection scheme) uses 3/2*K comparisons per pass, where the pass length K is n, n-2, n-4, ..., 1.
The sum of the arithmetic progression with first term 1, last term n, and d = 2 (so n/2 terms), together with the 3/2 multiplier, is
3/2 * n/2 * (n+1)/2 = 3/8 * n*(n+1) = O(n^2)
So the complexity remains quadratic (and the constant factor is very close to the standard one).
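For reference, here is a minimal Python sketch (my own; the function name is made up) of the pairwise min/max scan that the 3/2-per-element figure refers to, instrumented to count comparisons:

def minmax_pairwise(a):
    """Find (min, max) of a non-empty list using about 3*len(a)/2 comparisons,
    by processing elements in pairs: compare the pair to each other, then the
    smaller one against the running min and the larger one against the running max."""
    comparisons = 0
    if len(a) % 2 == 0:
        if a[0] <= a[1]:
            lo, hi = a[0], a[1]
        else:
            lo, hi = a[1], a[0]
        comparisons += 1
        start = 2
    else:
        lo = hi = a[0]
        start = 1
    for i in range(start, len(a), 2):
        x, y = a[i], a[i + 1]
        comparisons += 1          # compare the pair
        if x > y:
            x, y = y, x
        if x < lo:
            lo = x
        if y > hi:
            hi = y
        comparisons += 2          # one comparison against min, one against max
    return lo, hi, comparisons

print(minmax_pairwise([4, 9, 1, 7, 3, 8]))  # (1, 9, 7): exactly 3*6/2 - 2 = 7 comparisons here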

In your version of selection sort, you first choose two elements as the current minimum and maximum, and in the worst case every remaining element in the unsorted portion gets compared with both of them.
Say k elements remain in the unsorted portion. You pick the first two elements and assign them to minimum and maximum accordingly (1 comparison), then iterate over the remaining k-2 elements, each of which can cost 2 comparisons. So the total for this pass is 1 + 2*(k-2) = 2*k - 3 comparisons.
Here k takes the values n, n-2, n-4, ..., since every pass puts two elements into their correct positions. The summation results in O(n^2) comparisons.
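A hedged Python sketch of this simpler variant (first two elements as initial min/max candidates, up to 2 comparisons per remaining element), with a comparison counter; the function name and structure are mine, not from the question:

def minmax_selection_sort(a):
    """Selection sort that places both the min and the max on each pass.
    Returns (sorted list, number of element comparisons). Sketch only."""
    a = list(a)
    comparisons = 0
    lo, hi = 0, len(a) - 1
    while lo < hi:
        # Pick the first two elements as initial min/max candidates (1 comparison).
        if a[lo] <= a[lo + 1]:
            mn, mx = lo, lo + 1
        else:
            mn, mx = lo + 1, lo
        comparisons += 1
        # Each remaining element costs at most 2 comparisons.
        for i in range(lo + 2, hi + 1):
            if a[i] < a[mn]:
                mn = i
                comparisons += 1
            elif a[i] > a[mx]:
                mx = i
                comparisons += 2
            else:
                comparisons += 2
        # Move the minimum to the front of the unsorted part ...
        a[lo], a[mn] = a[mn], a[lo]
        # ... taking care: the maximum may have just been moved to position mn ...
        if mx == lo:
            mx = mn
        # ... and move the maximum to the back.
        a[hi], a[mx] = a[mx], a[hi]
        lo += 1
        hi -= 1
    return a, comparisons

print(minmax_selection_sort([5, 2, 9, 1, 7, 3]))  # ([1, 2, 3, 5, 7, 9], 15): each pass over k items costs 2k - 3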

Related

Maximum number of permutations

Some sorting algorithms, like Insertion Sort, have a Θ(n) asymptotic runtime for some subset of the n! possible permutations of n elements, which means that for those permutations, the number of comparisons that Insertion Sort does is kn for some constant k. For a given constant k, what is the maximum number of permutations for which any given comparison sort could terminate within kn comparisons?
The number of operations in insertion sort depends on the number of inversions. So we need to evaluate the number of permutations of n values (1..n for simplicity) containing exactly k inversions.
We can see that Inv(n, 0) = 1 (the sorted array).
Also Inv(0, k) = 0 for k > 0 (the empty array).
We can get an array with n elements and k inversions by:
- adding value n to the end of an array with n-1 items and k inversions (so the number of inversions stays the same)
- inserting value n one position before the end of an array with n-1 items and k-1 inversions (adding one inversion)
- inserting value n two positions before the end of an array with n-1 items and k-2 inversions (adding two inversions)
- and so on
Using this approach, we can just fill a table Inv[n][k] row by row and cell by cell:
Inv[n][k] = Sum(Inv[n-1][k-j]) for j = 0..min(k, n-1)
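A small Python sketch of that table fill (row by row, cell by cell); the function name is mine and the bounds follow the recurrence above:

def count_permutations_by_inversions(n, max_k):
    """inv[m][k] = number of permutations of m values with exactly k inversions.
    Built row by row: inserting value m into a permutation of m-1 values
    adds between 0 and m-1 inversions."""
    inv = [[0] * (max_k + 1) for _ in range(n + 1)]
    inv[0][0] = 1  # the empty array is sorted and has no inversions
    for m in range(1, n + 1):
        for k in range(max_k + 1):
            # Inv[m][k] = Sum(Inv[m-1][k-j]) for j = 0..min(k, m-1)
            inv[m][k] = sum(inv[m - 1][k - j] for j in range(min(k, m - 1) + 1))
    return inv

table = count_permutations_by_inversions(4, 6)
print(table[4])  # [1, 3, 5, 6, 5, 3, 1] -> e.g. 6 permutations of 4 values have exactly 3 inversions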
Every comparison at most doubles the number of input permutations you can distinguish. Thus, with kn comparisons you can sort at most 2^(kn) permutations.

Runtime of triple nested for loops

I'm currently going through the "Cracking the Coding Interview" textbook and I'm reviewing Big-O and runtime. One of the examples was as follows:
Print all positive integer solutions to the equation a^3 + b^3 = c^3 + d^3 where a, b, c, d are integers between 1 and 1000.
The pseudo-code solution provided is:
n = 1000
for c from 1 to n
    for d from 1 to n
        result = c^3 + d^3
        append (c, d) to list at value map[result]
for each result, list in map
    for each pair1 in list
        for each pair2 in list
            print pair1, pair2
The runtime is O(N^2)
I'm not sure how O(N^2) is obtained, and after extensive googling and trying to figure out why, I still have no idea. My rationale is as follows:
Top half is O(N^2) because the outer loop goes to n and inner loop executes n times each.
The bottom half I'm not sure how to calculate, but I got O(size of map) * O(size of list) * O(size of list) = O(size of map) * O(size of list^2).
O(N^2) + O(size of map) * O(size of list^2)
The 2 for loops adding the pairs to the list of the map = O(N) * O(N) b/c it's 2 for loops running N times.
The outer for loop for iterating through the map = O(2N-1) = O(N) b/c the size of the map is 2N - 1 which is essentially N.
The 2 for loops for iterating through the pairs of each list = O(N) * O(N) b/c each list is <= N
Total runtime: O(N^2) + O(N) * O(N^2) = O(N^3)
Not sure what I'm missing here
Could someone help me figure out how O(N^2) is obtained or why my solution is incorrect. Sorry if my explanation is a bit confusing. Thanks
Based on the first part of the solution, sum(size of lists) == N. This means that the second part (the nested loops) cannot be more complex than O(N^2). As you said, the complexity is O(size of map)*O(size of list^2), but it should rather be:
O(size of map)*(O(size of list1^2) + O(size of list2^2) + ... )
This means, that in the worst-case scenario we will get a map of size 1, and one list of size N, and the resulting complexity of O(1)*O((N-1)^2) <==> O(N^2)
In other scenarios the complexity will be lower. For instance if we have map of 2 elements, then we will get 2 lists with the total size of N. So the result will then be:
O(2)*( O(size of list1^2) + O(size of list2^2)), where (size of list1)+(size of list2) == N
and we know from basic maths that X^2 + Y^2 <= (X+Y)^2 for positive numbers.
The complexity of the second part is O(sum over the map of (length of list)^2), since the lengths of the lists vary. We know that the sum of the lengths of the lists in the map is n^2, since we added exactly n^2 pairs in the first bit of the code. Since T(program) = O(n^2) + O(sum of lengths of lists in map) * O(sum of lengths of lists in map / size of map) = O(n^2) * O(sum of lengths of lists in map / size of map), it remains to show that (sum of lengths of lists in map) / (size of map) is O(1). Doing this requires quite a bit of number theory, and unfortunately I can't help you there. But do check out these links for more info on how you would go about it: https://en.wikipedia.org/wiki/Taxicab_number
https://math.stackexchange.com/questions/1274816/numbers-that-can-be-expressed-as-the-sum-of-two-cubes-in-exactly-two-different-w
http://oeis.org/A001235
This is a very interesting question! cdo256 made some good points; I will try to explain a bit more and complete the picture.
It is more or less obvious that the key questions are: how many integers exist that can be expressed as a sum of two positive cubes in k different ways (where k >= 2), and how large can k be? This determines the sizes of the lists which are the values of the map, which in turn determine the total complexity of the program. Our "search space" is from 2 to 2 * 10^9, because c and d both iterate from 1 to 1000, so the sum of their cubes is at most 2 * 10^9. If none of the numbers in the range [2, 2 * 10^9] could be expressed as a sum of two cubes in more than one way, then the complexity of our program would be O(n^2). Why? Well, the first part is obviously O(n^2), and the second part depends on the sizes of the lists which are the values of the map. But in this case all lists have size 1, and there are n^2 keys in the map, which gives O(n^2).
However, that is not the case; there is the famous example of the "taxicab number" 1729. So let us return to our main question: the number of different ways to express an integer as a sum of two cubes of positive integers. This is an active field of research in number theory, and a great summary is given in Joseph H. Silverman's article Taxicabs and Sums of Two Cubes. I recommend reading it thoroughly. Current records are given here. Some interesting facts:
smallest integer that can be expressed as a sum of two cubes of positive integers in three different ways is 87,539,319
smallest integer that can be expressed as a sum of two cubes of positive integers in four different ways is 6,963,472,309,248 (> 2*10^9)
smallest integer that can be expressed as a sum of two cubes of positive integers in six different ways is 24,153,319,581,254,312,065,344 (> 2*10^9)
As you can easily see, e.g. here, there are only 2184 integers in the range [2, 2 * 10^9] that are expressible as a sum of two positive cubes in two or three different ways, and for k = 4, 5, ... these numbers are out of our range. Therefore, the number of keys in the map is very close to n^2, and the sizes of the value lists are at most 3, which implies that the complexity of the code
for each pair1 in list
    for each pair2 in list
        print pair1, pair2
is constant, so the total complexity is again O(n^2).
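If you want to check this empirically, here is a rough Python sketch of the book's pseudo-code that also reports the largest list in the map. Note that the loops count (c, d) and (d, c) separately, so a number expressible in k ways yields a list of about 2k ordered pairs, which is still a small constant in this range:

from collections import defaultdict

def cube_pairs(n):
    """Group all pairs (c, d) with 1 <= c, d <= n by the value c^3 + d^3.
    This is the O(n^2) 'first half' of the book's pseudo-code."""
    groups = defaultdict(list)
    for c in range(1, n + 1):
        for d in range(1, n + 1):
            groups[c ** 3 + d ** 3].append((c, d))
    return groups

def print_matching_pairs(groups, limit=5):
    """The 'second half': for each sum, emit every (pair1, pair2) combination.
    Only the first few lines are printed here to keep the output small."""
    printed = 0
    for result, pairs in groups.items():
        for p1 in pairs:
            for p2 in pairs:
                print(p1, p2, "->", result)
                printed += 1
                if printed >= limit:
                    return

groups = cube_pairs(100)   # n = 100 keeps the demo fast; use 1000 for the original problem
print_matching_pairs(groups)
# The largest list stays a small constant, so the nested loops in the second half are O(n^2) overall.
print("largest list:", max(len(v) for v in groups.values()))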

Finding sub-array sum in an integer array

Given an array of n positive integers, it has n*(n+1)/2 sub-arrays, including single-element sub-arrays. Each sub-array has a sum S. Finding S for all sub-arrays is obviously O(n^2), as the number of sub-arrays is O(n^2). Many sums may be repeated as well. Is there any way to find the count of all distinct sums (not the exact values of the sums, only the count) in O(n log n)?
I tried an approach but got stuck on the way. I iterated the array from index 1 to n.
Say a[] is the given array. For each index i, a[i] adds to all the sums that involve a[i-1], and it is also included as an individual element. But a duplicate emerges if, among the sums ending at a[i-1], two sums differ by exactly a[i]: say sums Sp and Sq both end at a[i-1] and Sq - Sp = a[i]; then Sp + a[i] equals Sq, making Sq a duplicate.
Say C[i] is the count of distinct sums ending at a[i].
Then C[i] = C[i-1] + 1 - (number of pairs of sums ending at a[i-1] whose difference is a[i]).
But the problem is finding that number of pairs in O(log n). Please give me a hint about this, or, if I am on the wrong track and a completely different approach is required, please point that out.
When the total sum S is not too large, we can count the distinct sums with one (fast) polynomial multiplication. When S is larger, n is hopefully small enough to use a quadratic algorithm.
Let x_1, x_2, ..., x_n be the array elements. Let y_0 = 0 and y_i = x_1 + x_2 + ... + x_i. Let P(z) = z^{y_0} + z^{y_1} + ... + z^{y_n}. Compute the product of polynomials P(z) * P(z^{-1}); the coefficient of z^k with k > 0 is nonzero if and only if k is a sub-array sum, so we just have to read off the number of nonzero coefficients of positive powers. The powers of z, moreover, range from -S to S, so the multiplication takes time on the order of S log S.
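A small Python/NumPy illustration of that idea (my own sketch, using plain np.convolve rather than a tuned FFT, so it is quadratic in S as written): multiplying P(z) by P(z^{-1}) is just the autocorrelation of the prefix-sum indicator vector.

import numpy as np

def count_distinct_subarray_sums(a):
    """Count distinct non-empty sub-array sums of a positive-integer array.

    P(z) = sum of z^{y_i} over prefix sums y_0..y_n; the coefficient of z^k (k > 0)
    in P(z) * P(z^{-1}) is nonzero exactly when k is a sub-array sum."""
    prefix = np.concatenate(([0], np.cumsum(a)))      # y_0 = 0, y_i = x_1 + ... + x_i
    S = prefix[-1]
    indicator = np.zeros(S + 1, dtype=np.int64)
    indicator[prefix] = 1                              # coefficient vector of P(z)
    # P(z) * P(z^{-1}) as a convolution of the vector with its reverse;
    # index m of the result corresponds to exponent m - S.
    product = np.convolve(indicator, indicator[::-1])
    positive_powers = product[S + 1:]                  # exponents 1..S
    return int(np.count_nonzero(positive_powers))

print(count_distinct_subarray_sums([1, 2, 1]))   # distinct sums {1, 2, 3, 4}: prints 4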
You can look at the sub-arrays as a kind of tree, in the sense that sub-array [0,3] can be divided into [0,1] and [2,3].
So build up a tree, where nodes are identified by the length of the sub-array and its starting offset in the original array, and whenever you compute a sub-array sum, store the result in this tree.
When computing a sub-array, you can check this tree for existing pre-computed values.
Also, when dividing, parts of the array can be computed on different CPU cores, if that matters.
This solution assumes that you don't need all values at once, but rather ad hoc.
For the former, there could be a smarter solution.
Also, I assume that we're talking about element counts in the tens of thousands and more; otherwise such work is a nice exercise but has not much practical value.

Why is merge sort's worst case run time O(n log n)?

Can someone explain it to me in simple English or in an easy way?
Merge sort uses the divide-and-conquer approach to solve the sorting problem. First, it divides the input in half using recursion. After dividing, it sorts the halves and merges them into one sorted output.
This means it is better to sort each half of your problem first and then do a simple merge subroutine. So it is important to know the complexity of the merge subroutine and how many times it will be called in the recursion.
The pseudo-code for the merge step is really simple.
# C = output [length = N]
# A = 1st sorted half [N/2]
# B = 2nd sorted half [N/2]
i = j = 1
for k = 1 to N
    if A[i] < B[j]
        C[k] = A[i]
        i++
    else
        C[k] = B[j]
        j++
It is easy to see that every iteration of the loop does 4 operations: k++, i++ or j++, the if comparison, and the assignment C = A|B. So you will have at most 4N + 2 operations, giving O(N) complexity. For the sake of the proof, 4N + 2 will be treated as 6N, since 4N + 2 <= 6N holds for N >= 1.
Now assume you have an input with N elements and assume N is a power of 2. At every level you have twice as many subproblems, each with an input of half the elements of the previous input. This means that at level j = 0, 1, 2, ..., lg N there are 2^j subproblems, each with an input of length N / 2^j. The number of operations at each level j is at most
2^j * 6(N / 2^j) = 6N
Observe that no matter which level you are at, you always have at most 6N operations.
Since there are lg N + 1 levels, the complexity is
O(6N * (lg N + 1)) = O(6N*lg N + 6N) = O(N lg N)
References:
Coursera course Algorithms: Design and Analysis, Part 1
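For completeness, here is a minimal runnable merge sort in Python (my sketch, not from the course material); unlike the simplified pseudo-code above, it handles the case where one half runs out before the other:

def merge_sort(a):
    """Sort a list using top-down merge sort; O(n log n) comparisons."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    return merge(left, right)

def merge(A, B):
    """Merge two sorted lists in linear time."""
    C = []
    i = j = 0
    while i < len(A) and j < len(B):
        if A[i] < B[j]:
            C.append(A[i])
            i += 1
        else:
            C.append(B[j])
            j += 1
    C.extend(A[i:])   # at most one of these tails is non-empty
    C.extend(B[j:])
    return C

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]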
In a "traditional" merge sort, each pass through the data doubles the size of the sorted subsections. After the first pass, the file will be sorted into sections of length two. After the second pass, length four. Then eight, sixteen, etc. up to the size of the file.
It's necessary to keep doubling the size of the sorted sections until there's one section comprising the whole file. It will take lg(N) doublings of the section size to reach the file size, and each pass of the data will take time proportional to the number of records.
After splitting the array to the stage where you have single elements (call them sublists), at each stage we compare the elements of each sublist with its adjacent sublist. For example (reusing @Davi's image):
At Stage 1, each element is compared with its adjacent one, so n/2 comparisons.
At Stage 2, each element of a sublist is compared with its adjacent sublist. Since each sublist is sorted, the maximum number of comparisons made between two sublists is at most the length of a sublist, i.e. 2 at Stage 2, then 4 at Stage 3 and 8 at Stage 4, since the sublists keep doubling in length. This means the maximum number of comparisons at each stage = (length of sublist) * (number of sublists / 2) = n/2.
As you've observed, the total number of stages is log(n) base 2.
So the total complexity = (max number of comparisons at each stage) * (number of stages) = O((n/2)*log(n)) = O(n log(n))
Algorithm merge-sort sorts a sequence S of size n in O(n log n)
time, assuming two elements of S can be compared in O(1) time.
This is because, whether it is the worst case or the average case, merge sort just divides the array into two halves at each stage, which gives the lg(n) component; the other n component comes from the comparisons made at each stage. Combining them, it becomes roughly O(n lg n). No matter whether it is the average case or the worst case, the lg(n) factor is always present. The remaining n factor depends on the comparisons made in both cases. The worst case is the one in which n comparisons happen for an input of n at each stage, so it becomes O(n lg n).
Many of the other answers are great, but I didn't see any mention of height and depth related to the "merge-sort tree" examples. Here is another way of approaching the question with a lot of focus on the tree. Here's another image to help explain:
Just a recap: as other answers have pointed out we know that the work of merging two sorted slices of the sequence runs in linear time (the merge helper function that we call from the main sorting function).
Now looking at this tree, where we can think of each descendant of the root (other than the root) as a recursive call to the sorting function, let's try to assess how much time we spend on each node... Since the slicing of the sequence and merging (both together) take linear time, the running time of any node is linear with respect to the length of the sequence at that node.
Here's where tree depth comes in. If n is the total size of the original sequence, the size of the sequence at any node is n/2^i, where i is the depth. This is shown in the image above. Putting this together with the linear amount of work for each slice, we have a running time of O(n/2^i) for every node in the tree. Now we just have to sum that up over all the nodes. One way to do this is to recognize that there are 2^i nodes at each level of depth in the tree. So for any level, we have O(2^i * n/2^i), which is O(n) because we can cancel out the 2^i's! If each depth is O(n), we just have to multiply that by the height of this binary tree, which is log n. Answer: O(n log n)
reference: Data Structures and Algorithms in Python
The recursive tree will have depth log(N), and at each level in that tree you will do a combined N work to merge two sorted arrays.
Merging sorted arrays
To merge two sorted arrays A[1,5] and B[3,4] you simply iterate both starting at the beginning, picking the lowest element between the two arrays and incrementing the pointer for that array. You're done when both pointers reach the end of their respective arrays.
[1,5] [3,4] --> []
 ^     ^
[1,5] [3,4] --> [1]
   ^   ^
[1,5] [3,4] --> [1,3]
   ^     ^
[1,5] [3,4] --> [1,3,4]
   ^      x
[1,5] [3,4] --> [1,3,4,5]
    x     x
Runtime = O(A + B)
Merge sort illustration
Your recursive call stack will look like this. The work starts at the bottom leaf nodes and bubbles up.
beginning with [1,5,3,4], N = 4, depth k = log(4) = 2
[1,5] [3,4] depth = k-1 (2^1 nodes) * (N/2^1 values to merge per node) == N
[1] [5] [3] [4] depth = k (2^2 nodes) * (N/2^2 values to merge per node) == N
Thus you do N work at each of k levels in the tree, where k = log(N)
N * k = N * log(N)
The MergeSort algorithm takes three steps:
The divide step computes the mid position of the sub-array, and it takes constant time, O(1).
The conquer step recursively sorts two sub-arrays of approximately n/2 elements each.
The combine step merges a total of n elements at each pass, requiring at most n comparisons, so it takes O(n).
The algorithm requires approximately log n passes to sort an array of n elements, so the total time complexity is O(n log n).
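Written as a recurrence (my notation, with n a power of 2 and c the per-element merge cost), the same argument is:
T(n) = 2*T(n/2) + c*n
     = 4*T(n/4) + 2*c*n
     = ...
     = 2^k * T(n/2^k) + k*c*n
Taking k = log2(n) gives T(n) = n*T(1) + c*n*log2(n) = O(n log n).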
Let's take an example of 8 elements {1,2,3,4,5,6,7,8}. First divide it in half (n/2 = 4): {1,2,3,4} and {5,6,7,8}. Handling these two sections takes O(n/2) and O(n/2) time, so the first level takes O(n/2 + n/2) = O(n) time.
2. The next level divides into pieces of size n/4: (({1,2} {3,4}) ({5,6} {7,8})), which takes O(n/4) four times, i.e. O(n/4 + n/4 + n/4 + n/4) = O(n) time in total.
3. The next level, similarly, divides further into pieces of size n/8: ({1},{2},{3},{4},{5},{6},{7},{8}), whose cost is O(n/8) eight times, which is again O(n).
So every level takes O(n) time. If there are a levels, the time taken by merge sort is O(a*n), and a must be log(n) because each step divides by 2. So finally, the time complexity of merge sort is O(n log(n)).

Random merge sort

I was given the following question in an algorithms book:
Suppose a merge sort is implemented to split a file at a random position, rather than exactly in the middle. How many comparisons would be used by such a method to sort n elements on average?
Thanks.
To guide you to the answer, consider these more specific questions:
Assume the split is always at 10%, or 25%, or 75%, or 90%. In each case: what's the impact on the recursion depth? How many comparisons are needed per recursion level?
I partially agree with @Armen; they should be comparable.
But consider the case when the list is split in the middle: to merge two lists of length n each, we would need 2*n - 1 comparisons (sometimes fewer, but we'll consider it fixed for simplicity), each of them producing the next value. There are log2(n) levels of merges, which gives us approximately n*log2(n) comparisons.
Now consider the random-split case. The maximum number of comparisons needed to merge a list of length n1 with one of length n2 is n1 + n2 - 1. However, the average number will be close to it, because even for the most unbalanced split, 1 and n-1, we'll need an average of n/2 comparisons. So we can consider the cost of merging per level to be the same as in the even case.
The difference is that in the random case the number of levels will be larger, and we can consider that the n for the next level is max(n1, n2) instead of n/2. This max(n1, n2) tends to 3*n/4, which gives us the approximate formula
n * log_{4/3}(n)
that is,
n * log2(n) / log2(4/3) ~= 2.4 * n * log2(n)
This result is still larger than the correct one, because we ignored that the smaller list will have fewer levels, but it should be close enough. I suppose the correct answer is that the number of comparisons on average will roughly double.
You can get an upper bound of 2n * H_{n - 1} <= 2n ln n using the fact that merging two lists of total length n costs at most n comparisons. The analysis is similar to that of randomized quicksort (see http://www.cs.cmu.edu/afs/cs/academic/class/15451-s07/www/lecture_notes/lect0123.pdf).
First, suppose we split a list of length n into 2 lists L and R. We will charge the first element of R for a comparison against all of the elements of L, and the last element of L for a comparison against all elements of R. Even though these may not be the exact comparisons that are executed, the total number of comparisons we are charging for is n as required.
This handles one level of recursion, but what about the rest? We proceed by concentrating only on the "right-to-left" comparisons that occur between the first element of R and every element of L at all levels of recursion (by symmetry, this will be half the actual expected total). The probability that the jth element is compared to the ith element is 1/(j - i) where j > i. To see this, note that element j is compared with element i exactly when it is the first element chosen as a "splitting element" from among the set {i + 1,..., j}. That is, elements i and j are split into two lists as soon as the list they are in is split at some element from {i + 1,..., j}, and element j is charged for a comparison with i exactly when element j is the element that is chosen from this set.
Thus, the expected total number of comparisons involving j is at most H_n (i.e., 1 + 1/2 + 1/3..., where the number of terms is at most n - 1). Summing across all possible j gives n * H_{n - 1}. This only counted "right-to-left" comparisons, so the final upper bound is 2n * H_{n - 1}.
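To see how the two bounds compare in practice, here is a quick empirical sketch of my own (not from either answer): it counts the comparisons performed by the merges for a midpoint split versus a uniformly random split.

import random

def merge_sort_count(a, split_at):
    """Merge-sort list a; return (sorted list, number of comparisons).
    split_at(n) chooses where to cut a list of length n (a sketch for experiments)."""
    if len(a) <= 1:
        return list(a), 0
    s = split_at(len(a))
    left, c_left = merge_sort_count(a[:s], split_at)
    right, c_right = merge_sort_count(a[s:], split_at)
    merged, i, j = [], 0, 0
    comparisons = c_left + c_right
    while i < len(left) and j < len(right):
        comparisons += 1
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, comparisons

def mid_split(n):
    return n // 2

def random_split(n):
    return random.randint(1, n - 1)   # both parts stay non-empty

n, trials = 4096, 20
data = [random.random() for _ in range(n)]
print("midpoint split:", merge_sort_count(data, mid_split)[1])
avg = sum(merge_sort_count(data, random_split)[1] for _ in range(trials)) / trials
print("random split (avg):", avg)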
