Assume you have n integers in the range (0, n^2). These integers are all squares of other integers.
Indicate whether it is possible to sort these numbers in O(n) or not
I assume we would take the square root of each of the integers, but I'm not sure whether it's possible to sort them in O(n). Any help would be appreciated.
Yes, if you are willing to use O(n) space:
You have your n integers stored in an array A.
Allocate a bitmap B with 2n+1 bits, initialized to zero.
For each integer, set B[A[i]] = 1.
Create an empty list.
Iterate over B in order, so that if B[i] == 1 then i is added to the list.
This can be done in time O(2n) = O(n).
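A minimal sketch of this bitmap idea combined with the OP's square-root step, assuming the inputs are distinct perfect squares so each integer root fits in [0, n) (the function name is made up for illustration):

```python
import math

def bitmap_sort_squares(a):
    # Sketch only: assumes the values are distinct perfect squares
    # below n*n, so each integer square root fits in [0, n).
    n = len(a)
    seen = [False] * n              # the bitmap B
    for x in a:
        seen[math.isqrt(x)] = True  # B[sqrt(A[i])] = 1
    # Walk the bitmap in order and rebuild the sorted values.
    return [r * r for r in range(n) if seen[r]]

print(bitmap_sort_squares([9, 0, 4, 1]))  # [0, 1, 4, 9]
```

Note the bitmap loses duplicates, which is why distinctness is assumed here.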
I have an N-element array A that contains natural numbers.
I need an efficient algorithm that finds a pair of indices (i, j) such that the sum of the sub-array elements A[i..j] is divisible by N without remainder.
Any ideas?
The key observation is:
sum(A[i..j]) = sum(A[1..j]) − sum(A[1..(i−1)])
so N divides sum(A[i..j]) if and only if sum(A[1..(i−1)]) and sum(A[1..j]) are congruent modulo N
that is, if sum(A[1..(i-1)]) and sum(A[1..j]) have the same remainder when you divide both by N.
So if you just iterate over the array tallying the "sum so far", and keep track of the remainders you've already seen and the indexes where you saw them, then you can do this in O(N) time and O(N) extra space.
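The prefix-remainder bookkeeping described above can be sketched like this (1-based (i, j) as in the question; the function name is illustrative). By pigeonhole there are N+1 prefix sums but only N possible remainders, so a colliding pair, and hence an answer, always exists:

```python
def divisible_subarray(a):
    # Returns 1-based (i, j) with sum(a[i..j]) divisible by N = len(a).
    # N+1 prefix sums share only N possible remainders, so by the
    # pigeonhole principle two prefixes collide and an answer exists.
    n = len(a)
    first_seen = {0: 0}            # remainder -> earliest prefix index
    total = 0
    for j in range(1, n + 1):
        total = (total + a[j - 1]) % n
        if total in first_seen:
            return first_seen[total] + 1, j
        first_seen[total] = j

print(divisible_subarray([1, 2, 4]))  # (1, 2): 1 + 2 = 3 is divisible by 3
```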
I am working on a revised selection sort algorithm so that on each pass it finds both the largest and smallest values in the unsorted portion of the array. The sort then moves each of these values into its correct location by swapping array entries.
My question is - How many comparisons are necessary to sort n values?
In normal selection sort it is O(n) comparisons, so I am not sure what it will be in this case.
Normal selection sort requires O(n^2) comparisons.
At every pass it makes K comparisons, where K is n-1, n-2, n-3, ..., 1; the sum of this arithmetic progression is n*(n-1)/2.
Your approach (if you are using an optimized min/max comparison scheme) uses 3/2*K comparisons per pass, where the pass length K is n, n-2, n-4, ..., 1.
The sum of the arithmetic progression with a(1)=1, a(n/2)=n, d=2, together with the 3/2 multiplier, is
3/2 * 1/2 * (n+1) * n/2 = 3/8 * n*(n+1) = O(n^2)
So the complexity remains quadratic (and the constant factor is very close to the standard version's).
In your version of selection sort, first you would have to choose two elements as the minimum and maximum, and all of the remaining elements in the unsorted array can get compared with both of them in the worst case.
Say k elements remain in the unsorted part of the array. Assuming you pick the first two elements and assign them as minimum and maximum (1 comparison), then iterate over the remaining k-2 elements, each of which can cost 2 comparisons, the total for this pass is 1 + 2*(k-2) = 2k - 3 comparisons.
Here k takes the values n, n-2, n-4, ..., since every pass moves two elements into their correct positions. The summation again gives O(n^2) comparisons.
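A sketch of the double-ended pass with an explicit comparison counter. This version follows the second analysis (seed min/max with 1 comparison, then up to 2 comparisons per remaining element), not the optimized 3/2-per-element scheme from the first answer:

```python
def minmax_selection_sort(a):
    # Each pass selects both the minimum and the maximum of the
    # unsorted middle section and swaps them into place.
    a = list(a)
    comparisons = 0
    lo, hi = 0, len(a) - 1
    while lo < hi:
        comparisons += 1                 # seed min/max from two elements
        if a[lo] <= a[lo + 1]:
            mn, mx = lo, lo + 1
        else:
            mn, mx = lo + 1, lo
        for i in range(lo + 2, hi + 1):  # at most 2 comparisons each
            comparisons += 1
            if a[i] < a[mn]:
                mn = i
            else:
                comparisons += 1
                if a[i] > a[mx]:
                    mx = i
        a[lo], a[mn] = a[mn], a[lo]
        if mx == lo:                     # the max was moved by the swap
            mx = mn
        a[hi], a[mx] = a[mx], a[hi]
        lo += 1
        hi -= 1
    return a, comparisons

print(minmax_selection_sort([5, 3, 1, 4, 2])[0])  # [1, 2, 3, 4, 5]
```

The `if mx == lo` patch matters: swapping the minimum into position lo can displace the maximum, a classic bug in this variant.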
Some sorting algorithms, like Insertion Sort, have a Θ(n) asymptotic runtime for some subset of the n! possible permutations of n elements, which means that for those permutations, the number of comparisons that Insertion Sort does is kn for some constant k. For a given constant k, what is the maximum number of permutations for which any given comparison sort could terminate within kn comparisons?
The number of operations in insertion sort depends on the number of inversions, so we need to count the permutations of n values (1..n for simplicity) containing exactly k inversions.
We can see that Inv(n, 0) = 1 - the sorted array is the only one with no inversions
Also Inv(0, 0) = 1 and Inv(0, k) = 0 for k > 0 - the empty array
We can get an array with n elements and k inversions by:
-adding value n to the end of an array with n-1 items and k inversions (so the number of inversions remains the same)
-inserting value n just before the last element of an array with n-1 items and k-1 inversions (so adding one inversion)
-inserting value n before the last two elements of an array with n-1 items and k-2 inversions (so adding two inversions)
-and so on
Using this approach, we can just fill a table Inv[n][k] row-by-row and cell-by-cell
Inv[n][k] = Sum(Inv[n-1][j]) where j = max(0, k-(n-1))..k, since inserting value n can add at most n-1 inversions
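The table-filling can be sketched directly from the recurrence (the function name is illustrative):

```python
def inversion_table(n, k):
    # Inv[m][j]: permutations of m elements with exactly j inversions.
    inv = [[0] * (k + 1) for _ in range(n + 1)]
    inv[0][0] = 1                       # the empty array
    for m in range(1, n + 1):
        for j in range(k + 1):
            # Inserting value m into a permutation of m-1 elements
            # adds anywhere from 0 to m-1 inversions.
            inv[m][j] = sum(inv[m - 1][j - d]
                            for d in range(min(m - 1, j) + 1))
    return inv[n][k]

print(inversion_table(3, 1))  # 2: the permutations 132 and 213
```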
Every comparison at most doubles the number of input permutations you can distinguish. Thus, with kn comparisons you can sort at most 2^(kn) permutations.
Given an array A with N elements, I need to find a pair (i, j) such that i is not equal to j, and if we write out the sums A[i]+A[j] for all pairs (i, j), then A[i]+A[j] comes at the kth position.
Example: let N=4 and array A=[1 2 3 4]; if K=3 then the answer is 5, as we can see clearly once the sum array is written out: [3,4,5,5,6,7]
I can't try all pairs of i and j, as N can be up to 100000. Please help me solve this problem.
I mean something like this :
int len = N*(N-1)/2; // number of pairs with i < j
int sum[len];
int count = 0;
for (int i = 0; i < N; i++) {
    for (int j = i + 1; j < N; j++) {
        sum[count] = A[i] + A[j];
        count++;
    }
}
// Then just find the kth element.
We can't go with this approach
A solution based on the fact that K <= 50: let's take the first K + 1 elements of the array in sorted order. Now we can just try all their combinations. Proof of correctness: assume that a pair (i, j) with j > K + 1 is the answer. But there are K pairs with the same or smaller sum: (1, 2), (1, 3), ..., (1, K + 1). Thus, it cannot be the K-th pair.
It is possible to achieve an O(N + K^2) time complexity by choosing the K + 1 smallest numbers using a quickselect algorithm (it is possible to do even better, but it is not required). You can also just sort the array and get an O(N * log N + K^2 * log K) complexity.
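A sketch of the K + 1 smallest approach, using heap-based selection (heapq.nsmallest rather than a hand-rolled quickselect, so this runs in O(N log K + K^2 log K) rather than the optimal O(N + K^2)):

```python
import heapq
from itertools import combinations

def kth_pair_sum(a, k):
    # Only the k+1 smallest values can take part in the first k sums,
    # so it suffices to examine their O(k^2) pairwise sums.
    smallest = heapq.nsmallest(min(k + 1, len(a)), a)
    sums = sorted(x + y for x, y in combinations(smallest, 2))
    return sums[k - 1]          # k is 1-based, as in the question

print(kth_pair_sum([1, 2, 3, 4], 3))  # 5, matching the example
```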
I assume that you got this question from http://www.careercup.com/question?id=7457663.
If k is close to 0 then the accepted answer to How to find kth largest number in pairwise sums like setA + setB? can be adapted quite easily to this problem and be quite efficient. You need O(n log(n)) to sort the array, O(n) to set up a priority queue, and then O(k log(k)) to iterate through the elements. The reversed solution is also efficient if k is near n*n - n.
If k is close to n*n/2 then that won't be very good. But you can adapt the pivot approach of http://en.wikipedia.org/wiki/Quickselect to this problem. First, in time O(n log(n)), you can sort the array. In time O(n) you can set up a data structure representing the various contiguous ranges of columns. Then you'll need to select pivots O(log(n)) times. (Remember, log(n*n) = O(log(n)).) For each pivot, you can do a binary search in each column to figure out where the pivot splits it, in time O(log(n)) per column, for a total cost of O(n log(n)) across all columns.
The resulting algorithm will be O(n log(n) log(n)).
Update: I do not have time to do the finger exercise of supplying code. But I can outline some of the classes you might have in an implementation.
The implementation will be a bit verbose, but that is sometimes the cost of a good general-purpose algorithm.
ArrayRangeWithAddend. This represents a range of an array, summed with one value. It has an array (a reference or pointer, so the underlying data can be shared between objects), a start and an end for the range, and a shiftValue to add to every element in the range.
It should have a constructor, a method giving its size, a method partition(n) that splits it into the range less than n, the count equal to n, and the range greater than n, and value(i) giving the i'th value.
ArrayRangeCollection. This is a collection of ArrayRangeWithAddend objects. It should have methods to give its size, pick a random element, and a method to partition(n) it into an ArrayRangeCollection that is below n, count of those equal to n, and an ArrayRangeCollection that is larger than n. In the partition method it will be good to not include ArrayRangeWithAddend objects that have size 0.
Now your main program can sort the array, and create an ArrayRangeCollection covering all pairs of sums that you are interested in. Then the random and partition method can be used to implement the standard quickselect algorithm that you will find in the link I provided.
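For comparison, here is a compact alternative to the class design above (not that design itself): binary search on the sum value, counting the pairs at or below a candidate with a per-row binary search over the sorted array. It assumes integer inputs and 1-based k, and runs in O(n log(n) log(range)):

```python
import bisect

def kth_pair_sum_by_value(a, k):
    # Binary search on the answer: the k-th smallest pair sum is the
    # smallest value s such that at least k pairs sum to s or less.
    a = sorted(a)
    n = len(a)

    def pairs_at_most(s):
        # For each i, count the j > i with a[i] + a[j] <= s.
        return sum(bisect.bisect_right(a, s - a[i], i + 1, n) - (i + 1)
                   for i in range(n))

    lo, hi = a[0] + a[1], a[-2] + a[-1]
    while lo < hi:
        mid = (lo + hi) // 2
        if pairs_at_most(mid) >= k:
            hi = mid
        else:
            lo = mid + 1
    return lo

print(kth_pair_sum_by_value([1, 2, 3, 4], 3))  # 5
```

The search converges on the smallest s whose count reaches k, which is always an achievable pair sum.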
Here is how to do it (in pseudo-code). I have now confirmed that it works correctly.
//A is the original array, such as A=[1,2,3,4]
//k (an integer) is the element in the 'sum' array to find
N = A.length
//first we find i
i = -1
nl = N
k2 = k
while (k2 >= 0) {
i++
nl--
k2 -= nl
}
//then we find j
j = k2 + nl + i + 1
//now compute the sum at index position k
kSum = A[i] + A[j]
EDIT:
I have now tested that this works. I had to fix some parts... basically the k input argument uses 0-based indexing. (The OP seems to use 1-based indexing.)
EDIT 2:
I'll try to explain my theory then. I began with the concept that the sum array should be visualised as a 2D jagged array (diminishing in width as the height increases), with the coordinates (as mentioned in the OP) being i and j. So for an array such as [1,2,3,4,5] the sum array would be conceived as this:
3,4,5,6,
5,6,7,
7,8,
9.
The top row holds all values where i equals 0; the second row is where i equals 1. To find the value of j we count the same way, but in the column direction.
... Sorry I cannot explain this any better!
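The index arithmetic above can be checked with a direct translation (function name is illustrative). Note that it returns the sum at position k of the enumeration order (0,1), (0,2), ..., (0,N-1), (1,2), ..., which coincides with sorted order in the OP's example but need not for arbitrary arrays:

```python
def sum_at_index(a, k):
    # Direct translation of the pseudo-code: k (0-based) indexes the
    # pairs in the order (0,1), (0,2), ..., (0,N-1), (1,2), ...
    n = len(a)
    i, nl, k2 = -1, n, k
    while k2 >= 0:
        i += 1
        nl -= 1
        k2 -= nl
    j = k2 + nl + i + 1
    return a[i] + a[j]

print(sum_at_index([1, 2, 3, 4], 2))  # 5: the pair (0, 3)
```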
How do you go about justifying the correctness and runtime of an algorithm?
For example, say I'm asked to justify the correctness and runtime of an algorithm that is essentially counting sort. I know it runs in worst-case O(n), but I don't know how to justify its correctness or prove that the runtime is O(n).
Question: Describe an algorithm to sort n integers, each in the range [0..n^4 - 1], in O(n) time. Justify the correctness and the running time of your algorithm.
Work:
Algorithm to sort n integers in the range [0..(n^4)-1] in O(n)
Represent each int x in the list as its 4 digits in base n
Use countSort with respect to the least significant digit
Then countSort with respect to the next least significant digit, and so on for all 4 digits
let k = (n^4)-1 the max value in the range
Since values range from 0..k, create k+1 buckets
Iterate through the list and increment counter each time a value appears
Fill input list with the data from the buckets where each key is a value in the list
From smallest to largest key, add bucket index to the input array
Variables:
array: list of ints to be sorted
result: output array (indexes from 0..n-1)
n: length of the input
k: value such that all keys are in range 0..k-1
count: array of ints with indexes 0..k-1 (starts with all = 0)
x: single input value
total/oCount: control variables
total=0
for x in array
count[key of x] ++
for i in 0..k-1
oCount = count[i]
count[i] = total
total += oCount
for x in array
result[count[key of x]] = x
count[key of x] ++
return result
The algorithm uses simple loops without recursion. Initializing the count array and the middle for loop that computes prefix sums on the count array iterate at most k+1 times; the 1 is constant, so this takes O(k). The loops over the input and result arrays take O(n) time. This totals O(n+k) time. With each digit represented in base n, k = n for every pass, so each pass is O(n), and a constant number of passes keeps the final running time O(n).
I need some help to point me in the correct direction. Thanks!
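For direction, the base-n radix idea from the first lines of the work might be sketched like this (four stable counting-sort passes, one per base-n digit; the function name is illustrative):

```python
def radix_sort_base_n(a):
    # Four stable counting-sort passes over base-n digits, least
    # significant first; values are assumed to lie in [0, n^4).
    n = len(a)
    if n <= 1:
        return list(a)
    for d in range(4):
        shift = n ** d
        count = [0] * n
        for x in a:                       # tally each base-n digit
            count[(x // shift) % n] += 1
        total = 0
        for i in range(n):                # prefix sums: start positions
            count[i], total = total, total + count[i]
        result = [0] * n
        for x in a:                       # stable placement
            digit = (x // shift) % n
            result[count[digit]] = x
            count[digit] += 1
        a = result
    return a

print(radix_sort_base_n([255, 0, 16, 3]))  # [0, 3, 16, 255]
```

Each pass is O(n + n) = O(n) because a base-n digit needs only n buckets, and the stability of counting sort is what makes the digit-by-digit argument go through.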