I am trying to understand exactly what this method does. It says it is supposed to
"keep swapping the outer-most wrongly-positioned pairs". I put this into a program
and tried different arrays, but the results make no sense to me. What exactly does this do?
partition(A, p)
A: array of size n, p: integer s.t. 0 <= p < n
1.  swap(A[0], A[p])
2.  i <- 1, j <- n − 1
3.  while i < j do
4.      while A[i] <= A[0] and i < n do
5.          i <- i + 1
6.      while A[j] > A[0] and j > 0 do
7.          j <- j − 1
8.      if i < j then
9.          swap(A[i], A[j])
10. swap(A[0], A[j])
11. return j
This pseudocode implements the partitioning phase of the quicksort sorting algorithm. It rearranges the array so that all values smaller than or equal to the pivot A[p] end up on the left and all larger values on the right. It returns the index j at which the pivot value itself is stored after the rearrangement, which is the last index of the left part.
If you are not familiar with quicksort: the intent is to use this partition algorithm to split the array into "small" and "large" parts and recursively sort each part. Since the small values have been arranged to come before the large ones, the array ends up sorted. If p is picked appropriately, so that A[p] is close to the median of the values in A, this is a very fast sorting method.
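For reference, here is a direct Python transcription of the pseudocode (a sketch of my own; the sample array is made up, and the bounds checks are moved before the element comparisons so Python's short-circuiting keeps the indices in range):

    def partition(A, p):
        n = len(A)
        A[0], A[p] = A[p], A[0]              # step 1: move the pivot to the front
        i, j = 1, n - 1
        while i < j:
            while i < n and A[i] <= A[0]:    # scan right past "small" values
                i += 1
            while j > 0 and A[j] > A[0]:     # scan left past "large" values
                j -= 1
            if i < j:
                A[i], A[j] = A[j], A[i]      # swap the outer-most wrongly-positioned pair
        A[0], A[j] = A[j], A[0]              # step 10: put the pivot into its final place
        return j

    A = [7, 3, 9, 1, 5, 8, 2]
    print(partition(A, 4), A)                # 3 [1, 3, 2, 5, 7, 8, 9]; A[3] holds the pivot 5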
Given two large sets A and B of scalar (floating point) values, what algorithm would you use to find the (scalar) range [x0,x1] containing zero elements from B and the maximum number of elements from A?
Is sorting complexity (O(n log n)) unavoidable?
Create a single list with all values, where each value is marked with two counts: one relating to set A and one to set B. Initially these counts are 1 and 0 when the value comes from set A, and 0 and 1 when it comes from set B. Entries in this list could be tuples (value, countA, countB). This operation is O(n).
Sort these tuples: O(n log n).
Merge tuples with duplicate values into one tuple, accumulating the counts, so that each tuple tells us how many times its value occurs in set A and how many times in set B: O(n).
Traverse this list in sorted order, maintaining the largest sum of countA over a run of adjacent tuples whose countB is 0, together with the minimum and maximum value of that run: O(n).
The sorting step determines the overall time complexity: O(n log n).
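A sketch of these steps in Python (my own transcription; the function and variable names are not from the answer):

    from itertools import groupby

    def best_range(A, B):
        # Steps 1-2: tag each value with (countA, countB) and sort.
        entries = [(v, 1, 0) for v in A] + [(v, 0, 1) for v in B]
        entries.sort()
        # Step 3: merge duplicate values, accumulating both counters.
        merged = []
        for value, group in groupby(entries, key=lambda e: e[0]):
            g = list(group)
            merged.append((value, sum(e[1] for e in g), sum(e[2] for e in g)))
        # Step 4: scan for the B-free run covering the most elements of A.
        best_count, best, run_count, run_start = 0, None, 0, None
        for value, ca, cb in merged:
            if cb > 0:                    # a value from B ends the current run
                run_count, run_start = 0, None
            else:
                run_start = value if run_start is None else run_start
                run_count += ca
                if run_count > best_count:
                    best_count, best = run_count, (run_start, value)
        return best, best_count

    print(best_range([1, 2, 3, 11], [0, 10]))   # ((1, 3), 3)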
Sort both A and B in O(|A| log |A| + |B| log |B|). Then apply the following algorithm, which has complexity O(|A| + |B|):
i = j = k = 0
best_interval = (0, 1)
while i < len(B) - 1:
    lo = B[i]
    hi = B[i+1]
    j = k  # We can skip ahead from last iteration.
    while j < len(A) and A[j] <= lo:
        j += 1
    k = j  # We can skip ahead from the above loop.
    while k < len(A) and A[k] < hi:
        k += 1
    if k - j > best_interval[1] - best_interval[0]:
        best_interval = (j, k)
    i += 1
x0 = A[best_interval[0]]
x1 = A[best_interval[1]-1]
It may look quadratic at first inspection, but note that we never decrease j or k: it really is just a linear scan with three pointers.
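For example (my own test data, not from the answer): with A = [1, 2, 3, 11] and B = [0, 10], the only gap between consecutive B values is (0, 10); the scan sets best_interval = (0, 3), so x0 = A[0] = 1 and x1 = A[2] = 3, i.e. the range [1, 3] holds three elements of A and none of B. One caveat: the loop only inspects gaps between consecutive elements of B, so if the best range may lie below min(B) or above max(B), those two open-ended gaps need a separate check.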
Given an array of n integers in the locations A[1], A[2], …, A[n], describe an O(n^2) time algorithm to
compute the sum A[i] + A[i+1] + … + A[j] for all i, j, 1 ≤ i < j ≤ n.
I've tried multiple ways of solving this problem, but none of them run in O(n^2) time.
So for an array containing {1,2,3,4}
You would output:
1+2 = 3
1+2+3 = 6
1+2+3+4 = 10
2+3 = 5
2+3+4 = 9
3+4 = 7
The answer does not need to be in a specific language, pseudocode is preferred.
Good preparation is everything.
You could create an array of prefix sums (an "integral" array):
I[0..n] = (0, I[0] + A[1], I[1] + A[2], ..., I[n-1] + A[n])
This costs O(n) * O(1) (looping over all elements and doing one addition each).
Now you can calculate each Sum(A, i, j) with a single subtraction: I[j] - I[i-1],
so this is O(1).
Looping over all combinations of i and j with 1 <= i < j <= n is O(n^2).
So you end up with O(n) * O(1) + O(n^2) * O(1) = O(n^2).
Edit:
Your array A starts at index 1; I have adapted to this, which also resolves the little quirk with i-1:
the integral array I starts at index 0 and is one element larger than A.
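A sketch of the same idea in Python (my transcription; 0-based lists, so the indexing shifts by one relative to the 1-based description above):

    def all_range_sums(A):
        n = len(A)
        I = [0] * (n + 1)                # I[k] holds A[0] + ... + A[k-1]
        for k in range(n):
            I[k + 1] = I[k] + A[k]
        for i in range(n):               # O(n^2) pairs, O(1) work per pair
            for j in range(i + 1, n):
                print(I[j + 1] - I[i])   # sum of A[i..j] by a single subtraction

    all_range_sums([1, 2, 3, 4])         # 3, 6, 10, 5, 9, 7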
You may first have thought of the most naive idea:
Naive idea
Create a function that, for given values of i and j, returns the sum A[i] + ... + A[j].
function sumRange(A, i, j):
    sum = 0
    for k = i to j
        sum = sum + A[k]
    return sum
Then generate all pairs of i and j (with i < j) and call the above function for each pair:
for i = 1 to n
    for j = i+1 to n
        output sumRange(A, i, j)
This is not O(n²): the two loops on i and j alone already represent O(n²) iterations, and the function performs yet another loop, making the total O(n³).
Better idea
The above can be improved. Look at the repetition it performs: the sum calculated for given values of i and j can be reused when j increases by 1, instead of starting from scratch and summing the values between i and (now) j-1 all over again, only to add one more value to them.
We should just remember what the previous sum was and add A[j] to it.
So without a separate function:
for i = 1 to n
    sum = A[i]
    for j = i+1 to n
        sum = sum + A[j]
        output sum
Note how the sum is not reset to 0 once it is output. It is preserved, so that when j is incremented, only one value needs to be added to it.
Now it is O(n²). Note also how it does not require an extra array for storage. It only needs the memory for a few variables (i, j, sum), so its space complexity is O(1).
As the number of sums you need to output is O(n²), there is no way to improve this time complexity any further.
NB: I assume here that single array values do not constitute a "sum". As you stated in your question, i < j, and also in your example you only showed sums of at least two array values. The above can be easily adapted to also include single value "sums" if ever that were needed.
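The same running-sum idea as a runnable Python snippet (my transcription, 0-based):

    def print_all_range_sums(A):
        n = len(A)
        for i in range(n):
            total = A[i]              # running sum; never reset inside the j loop
            for j in range(i + 1, n):
                total += A[j]         # extend the range by one more element
                print(total)

    print_all_range_sums([1, 2, 3, 4])   # 3, 6, 10, 5, 9, 7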
I have an O(n^2) solution to the classic two-sum problem, where A[1...n] is a sorted array of positive integers and t is some positive integer.
I need to show whether A contains two distinct elements a and b such that a + b = t.
Here is my solution so far:
t = a number;
for (i=0; i<A.length; i++)
    for each A[j]
        if A[i] + A[j] == t
            return true
return false
How do I make this a linear, O(n), solution? I'm scratching my head trying to figure it out.
Here's the approach I have in mind so far: i will start at the beginning of A and j at the end of A; i will increment and j will decrement, so I'll have two counter variables in the loop, i and j.
There are a couple of ways to improve upon that.
You could extend your algorithm: instead of doing a simple linear search for each term, do a binary search for its complement t - A[i]:
t = a number
for (i = 0; i < A.length; i++)
    j = binarySearch(A, t - A[i], i + 1, A.length - 1)   // search strictly to the right of i, so the two elements are distinct
    if (j != null)
        return true
return false
Binary search takes O(log N) steps, and since you perform one binary search per element of the array, the complexity of the whole algorithm is O(N*log N).
This already is a tremendous improvement upon O(N^2), but you can do better.
Let's take the sum 11 and the array 1, 3, 4, 8, 9 for example.
You can already see that (3,8) satisfies the sum. To find that, imagine having two pointers: one pointing at the beginning of the array (1), which we'll call H, and another pointing at the end of the array (9), which we'll call T. We'll mark their positions under the array:

1 3 4 8 9
H       T

Right now the sum of the two pointed-to elements is 1 + 9 = 10.
10 is less than the desired sum (11), and there is no way to reach the desired sum by moving the T pointer, so we'll move the H pointer right:

1 3 4 8 9
  H     T

3 + 9 = 12, which is greater than the desired sum. There is no way to reach the desired sum by moving the H pointer: moving it right would increase the sum further, and moving it left brings us back to the initial state. So we'll move the T pointer left:

1 3 4 8 9
  H   T

3 + 8 = 11 <-- this is the desired sum, we're done.
So the rules of the algorithm are: move the H pointer right when the sum is too small, and move the T pointer left when it is too large. We're finished when the sum at the two pointers equals the desired sum, or when H and T cross (T becomes less than H).
t = a number
H = 0
T = A.length - 1
S = -1
while H < T && S != t
    S = A[H] + A[T]
    if S < t
        H++
    else if S > t
        T--
return S == t
It's easy to see that this algorithm runs in O(N), because we visit each element at most once.
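A runnable Python version of this two-pointer scan, with a usage example (my transcription):

    def has_pair_with_sum(A, t):
        H, T = 0, len(A) - 1          # A must be sorted ascending
        while H < T:
            s = A[H] + A[T]
            if s == t:
                return True
            if s < t:
                H += 1                # sum too small: take a larger left value
            else:
                T -= 1                # sum too large: take a smaller right value
        return False

    print(has_pair_with_sum([1, 3, 4, 8, 9], 11))   # True  (3 + 8)
    print(has_pair_with_sum([1, 3, 4, 8, 9], 20))   # False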
You make two new variables that contain index 0 and index n-1; let's call them i and j respectively.
Then you check the sum of A[i] and A[j]: if the sum is smaller than t, increment i (the lower index); if it is bigger, decrement j (the higher index). Continue until you either find i and j such that A[i] + A[j] = t, in which case you return true, or until j <= i, in which case you return false.
int i = 0, j = n-1;
while(i < j) {
    if(A[i] + A[j] == t)
        return true;
    if(A[i] + A[j] < t)
        i++;
    else
        j--;
}
return false;
Given that the values A[i] are relatively small (say, less than 10^6), you can create an array B of size 10^6 with each entry initialized to 0. Then apply the following algorithm:
for i in 1...N:
    B[A[i]] += 1
for i in 1...N:
    if t - A[i] > 0:
        if t - A[i] == A[i]:           # the partner equals A[i] itself: need two copies
            if B[A[i]] > 1: return True
        else if B[t - A[i]] > 0:
            return True
return False
Edit: well, now that we know that the array is sorted, it may be wiser to find another algorithm. I'll leave the answer here since it still applies to a certain class of related problems.
Given an array a and an integer k, someone uses the following algorithm to get the first k smallest elements:
cnt = 0
for i in [1, k]:
    for j in [i + 1, n]:
        if a[i] > a[j]:
            swap(a[i], a[j])
            cnt = cnt + 1
The problem is: how do we calculate the value of cnt (once we have the final k-sorted array), i.e. the number of swaps performed, in O(n log n) time or better?
Or simply put: calculate the number of swaps needed to get the first k smallest numbers sorted by the above algorithm, in O(n log n) time or better.
I am thinking about a binary search tree, but I get confused: how does the array change as i increases? How do I calculate the number of swaps for a fixed i?
This is a very good question: it involves Inverse Pairs, Stack and some proof techniques.
Note 1: All indices used below are 1-based, instead of the traditional 0-based.
Note 2: If you want to see the algorithm directly, please start reading from the bottom.
First we define Inverse Pairs as:
For a[i] and a[j], in which i < j holds, if we have a[i] > a[j], then a[i] and a[j] are called an Inverse Pair.
For example, In the following array:
3 2 1 5 4
a[1] and a[2] form an inverse pair, and a[2] and a[3] form another.
Before we start the analysis, let's fix a common language: in the rest of the post, "inverse pairs starting from i" means the number of inverse pairs whose left element is a[i].
For example, for a = {3, 1, 2}, the number of inverse pairs starting from 1 is 2, and the number starting from 2 is 0.
Now let's look at some facts:
If we have i < j < k, and a[i] > a[k], a[j] > a[k], then swapping a[i] and a[j] (if they form an inverse pair) won't affect the number of inverse pairs starting from j;
The number of inverse pairs starting from i may change after a swap (e.g. for a = {5, 3, 4}: before a[1] is swapped with a[2], the number of inverse pairs starting from 1 is 2, but after the swap the array becomes a = {3, 5, 4} and the number starting from 1 becomes 1);
Given an array A and two numbers a and b, each considered as the head element of A: if a can form more inverse pairs than b, then a > b;
Let's denote the number of inverse pairs starting from i as ip[i]. Then we have: if k is the smallest number satisfying ip[i] > ip[i + k], then a[i] > a[i + k] while a[i] <= a[i + 1 .. i + k - 1] must hold. In words, if ip[i + k] is the first entry smaller than ip[i], then a[i + k] is also the first element smaller than a[i];
Proof of point 1:
By the definition of inverse pairs, every a[k] with k > j that forms an inverse pair with a[j] satisfies a[k] < a[j]. Since a[i] and a[j] form an inverse pair with i < j, we have a[i] > a[j]. Therefore a[i] > a[j] > a[k], which means those inverse-pair relationships are not broken by the swap.
Proof of point 3:
Omitted, as it is quite obvious.
Proof of point 4:
First, it's easy to see that when i < j and a[i] > a[j], we have ip[i] >= ip[j] + 1 > ip[j]. Its contrapositive is then also true: when i < j and ip[i] <= ip[j], we have a[i] <= a[j].
Now back to the point. Since k is the smallest number satisfying ip[i] > ip[i + k], we have ip[i] <= ip[i + 1 .. i + k - 1], which by the lemma just proved indicates a[i] <= a[i + 1 .. i + k - 1]; so a[i] forms no inverse pairs within the region [i + 1, i + k - 1], and all ip[i] of its inverse pairs have their right element in [i + k, n]. Given ip[i + k] < ip[i], we know a[i + k] forms fewer inverse pairs than a[i] over the region [i + k + 1, n], which indicates a[i + k] < a[i] (by point 3).
You can write down some sequences and try out the 4 facts mentioned above and convince yourself or disprove them :P
Now it's about the algorithm.
A naive implementation will take O(nk) to compute the result, and the worst case will be O(n^2) when k = n.
But how about we make use of the facts above:
First we compute ip[i] using a Fenwick Tree (see Note 1 below), which takes O(n log n) to construct and O(n log n) to get all the ip[i] calculated.
Next, we make use of the facts. Since a swap of two numbers only affects the inverse-pair count at the current position, not the values after it (points 1 and 2), we don't need to worry about the values changing. Also, since the nearest smaller number to the right shares the same index in ip and in a (point 4), we only need to find the first ip[j] that is smaller than ip[i] within [i + 1, n]. If we denote by f[i] the number of swaps performed in round i of the outer loop, we have f[i] = f[j] + 1.
But how to find this "first smaller number" fast? Use stack! Here is a post which asks a highly similar problem: Given an array A,compute B s.t B[i] stores the nearest element to the left of A[i] which is smaller than A[i]
In short, we are able to do this in O(n).
But wait, that post says "to the left", while in our case it's "to the right". The solution is simple: we scan backwards in our case, and everything else stays the same :D
Therefore, in summary, the total time complexity of the algorithm is O(n log n) + O(n) = O(n log n).
Finally, let's walk through an example (a simplified version of #make_lover's example in the comments):
a = {2, 5, 3, 4, 1, 6}, k = 2
First, let's get the inverse pairs:
ip = {1, 3, 1, 1, 0, 0}
To calculate f[i], we work backwards (since we need to use the stack technique):
f[6] = 0, since it's the last one
f[5] = 0, since no entry to the right of it is smaller than ip[5] = 0
f[4] = f[5] + 1 = 1, since ip[5] is the first smaller number to the right
f[3] = f[5] + 1 = 1, since ip[5] is the first smaller number to the right
f[2] = f[3] + 1 = 2, since ip[3] is the first smaller number to the right
f[1] = f[5] + 1 = 1, since ip[5] is the first smaller number to the right
Therefore, ans = f[1] + f[2] = 3
Note 1: Using a Fenwick Tree (Binary Indexed Tree), the inverse pairs can be computed in O(N log N); here is a post on this topic, please have a look :)
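To make the whole pipeline concrete, here is a sketch of it in Python (my own code, not the answerer's; 0-based lists stand in for the 1-based indices above, and the names count_swaps, ip and f are mine):

    def count_swaps(a, k):
        n = len(a)
        # Step 1: ip[i] = number of j > i with a[j] < a[i], via a Fenwick tree.
        rank = {v: r + 1 for r, v in enumerate(sorted(set(a)))}
        tree = [0] * (len(rank) + 1)

        def add(pos):
            while pos < len(tree):
                tree[pos] += 1
                pos += pos & -pos

        def prefix_count(pos):            # how many seen values have rank <= pos
            s = 0
            while pos > 0:
                s += tree[pos]
                pos -= pos & -pos
            return s

        ip = [0] * n
        for i in range(n - 1, -1, -1):    # scan right to left
            ip[i] = prefix_count(rank[a[i]] - 1)
            add(rank[a[i]])

        # Step 2: f[i] = f[j] + 1, where j is the first index > i with ip[j] < ip[i];
        # the nearest smaller entry to the right is found with a stack, scanning backwards.
        f = [0] * n
        stack = []
        for i in range(n - 1, -1, -1):
            while stack and ip[stack[-1]] >= ip[i]:
                stack.pop()
            f[i] = f[stack[-1]] + 1 if stack else 0
            stack.append(i)
        return sum(f[:k])

    print(count_swaps([2, 5, 3, 4, 1, 6], 2))   # 3, matching the example above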
Update
Aug/20/2014: There was a critical error in my previous post (thanks to #make_lover), here is the latest update.
Given a sorted array of floats as input, I need to find the total number of pairs (i, j), with i < j, such that A[i]*A[j] >= A[i]+A[j].
I already know the naive solution, using a loop inside another loop, which gives an O(n^2) algorithm, but I was wondering if there is a more optimal solution.
Here's an O(n) algorithm.
Let's look at A * B >= A + B.
When A, B <= 0, it's always true.
When A, B >= 2, it's always true.
When A >= 1, B <= 1 (or B >= 1, A <= 1), it's always false.
When 0 < A < 1, B < 0 (or 0 < B < 1, A < 0), it can be either true or false.
When 1 < A < 2, B > 0 (or 1 < B < 2, A > 0), it can be either true or false.
Here's a visualization, courtesy of Wolfram Alpha and Geobits:
Now, onto the algorithm.
* To find the pairs where one number is between 0 and 1, or between 1 and 2, I do something similar to what is done for the 3SUM problem.
* "Pick 2" here refers to combinations.
Count all the pairs where both numbers are <= 0
Do a binary search to find the index of the first positive (> 0) number - O(log n).
That index tells us how many numbers are negative or zero; we simply need to pick 2 of them, so that's amountNonPositive * (amountNonPositive-1) / 2 - O(1).
Find all the pairs where one number is between 0 and 1
Do a binary search to find the index of the last number < 1 - O(log n).
Start from that index as the right index and the left-most element as the left index.
Repeat this until the value at the right index is <= 0: (runs in O(n))
While the product is greater than or equal to the sum, increase the left index
Count all the elements to the left of the left index
Decrease the right index
Find all the pairs where one number is between 1 and 2
Do a binary search to find the index of the first number > 1 - O(log n).
Start from that index as the left index and the right-most element as the right index.
Repeat this until the value at the left index is >= 2: (runs in O(n))
While the product is greater than or equal to the sum, decrease the right index
Count all the elements to the right of the right index
Increase the left index
Count all the pairs where both numbers are >= 2
At the end of the last step, we're at the first index whose value is >= 2.
Now, from there, we just need to pick 2 of all the remaining numbers,
so it's again amountGreaterEqual2 * (amountGreaterEqual2-1) / 2 - O(1).
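Here is one way those four steps could look in Python (my own sketch, not the answerer's code; it assumes A is sorted ascending and uses bisect for the binary searches):

    import bisect

    def count_pairs(A):
        n, count = len(A), 0
        # Both values <= 0: always qualifies.
        nonpos = bisect.bisect_right(A, 0.0)
        count += nonpos * (nonpos - 1) // 2
        # Both values >= 2: always qualifies.
        ge2 = n - bisect.bisect_left(A, 2.0)
        count += ge2 * (ge2 - 1) // 2
        # One value in (0, 1): the partner must be negative enough.
        right = bisect.bisect_left(A, 1.0) - 1       # last value < 1
        left = 0
        while right >= 0 and A[right] > 0:
            while left < right and A[left] * A[right] >= A[left] + A[right]:
                left += 1
            count += left                            # partners are A[0 .. left-1]
            right -= 1
        # One value in (1, 2): the partner must be large enough.
        left = bisect.bisect_right(A, 1.0)           # first value > 1
        right = n - 1
        while left < n and A[left] < 2:
            while right > left and A[left] * A[right] >= A[left] + A[right]:
                right -= 1
            count += n - 1 - right                   # partners are A[right+1 ..]
            left += 1
        return count

    print(count_pairs([-2.0, -1.0, 0.5, 3.0]))       # 3: (-2,-1), (-2,0.5), (-1,0.5)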
You can find and print the pairs (in a shorthand form) in O(n log n).
For each A[i] there is a minimum value k that satisfies the condition.
All values greater than k will also satisfy it.
Finding the lowest j such that A[j] >= k using binary search is O(log n).
So you can find and print the result like this:
(i, j)
(1, no match)
(2, no match)
(3, >=25)
(4, >=20)
(5, >=12)
(6, >6)
(7, >7)
...
(n-1, n)
If you want to print all combinations, then it is O(n^2), because the number of combinations are O(n^2).
(*) To handle negative numbers it actually needs to be a bit more complex, because the numbers that satisfy the equation can form more than one range.
I'm not absolutely sure how it behaves for small negative numbers, but if the number of ranges is not strictly limited, then my solution is no longer better than O(n^2).
Here's a binary search, O(n log n):
There's a breaking point for each number at A*B = A+B. You can reduce this to B = A / (A - 1). All numbers on one side or the other will fit it. It doesn't matter if there are negative numbers, etc.
If A < 1, then all numbers <= B fit.
If A > 1, then all numbers >= B fit.
If A == 1, then there is no match (divide by zero).
(Wolfram Alpha link)
So some pseudocode:
loop through i
    a = A[i]
    if(a == 1)
        continue                  // no partner can match (divide by zero)
    if(a >= 2)
        count += A.length - i     // every later element qualifies (1-based indices)
        continue
    j = binsearch(a / (a-1))      // boundary index for valid partners
    if(j <= i)
        continue
    if(a < 1)
        count += j - i            // partners are the later elements up to the break point
    if(a > 1)
        count += A.length - j     // partners are the elements from the break point on
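A Python rendering of this pseudocode (my sketch; it pins down the two different binary-search variants the pseudocode glosses over, using bisect, and switches to 0-based indices):

    import bisect

    def count_pairs_by_threshold(A):
        n, count = len(A), 0
        for i, a in enumerate(A):
            if a == 1:                        # a*b >= a+b becomes 0 >= 1: impossible
                continue
            if a >= 2:
                count += n - i - 1            # every later element qualifies
                continue
            b = a / (a - 1)                   # the break point for partners of a
            if a < 1:                         # partners must be <= b
                j = bisect.bisect_right(A, b)
                count += max(0, j - 1 - i)
            else:                             # 1 < a < 2: partners must be >= b, and b > 2
                j = bisect.bisect_left(A, b)
                count += n - j
        return count

    print(count_pairs_by_threshold([-2.0, -1.0, 0.5, 3.0]))   # 3, same pairs as above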
Here's a O(n) algorithm that solves the problem when the array's elements are positive.
When the elements are positive, we can say that:
If A[i]*A[j] >= A[i]+A[j] when j>i then A[k]*A[j] >= A[k]+A[j] for any k that satisfies k>i (because the array is sorted).
If A[i]*A[j] < A[i]+A[j] when j>i then A[i]*A[k] < A[i]+A[k] for any k that satisfies k<j.
(these facts don't hold when both numbers are fractions, but then the condition won't be satisfied anyway)
Thus we can perform the following algorithm:
int findNumOfPairs(float A[])
{
    start = 0;
    end = A.length - 1;
    numOfPairs = 0;
    while (start != end)
    {
        if (A[start]*A[end] >= A[start]+A[end])
        {
            numOfPairs += end - start;
            end--;
        }
        else
        {
            start++;
        }
    }
    return numOfPairs;
}
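A quick sanity check (my own example): for A = {2, 3, 4}, the first iteration has A[start]*A[end] = 8 >= 6, so it adds end - start = 2 pairs, (2,4) and (3,4), and moves end left; the next iteration adds the pair (2,3), giving 3 in total, which matches checking all three pairs by hand.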
How about excluding all floats less than 2.0 first? For any two numbers x, y >= 2 we always have x*y >= x+y, so every pair drawn from the remaining numbers qualifies; we then only need to count those n remaining numbers, and the combination formula gives the number of pairs: n(n-1)/2. (Pairs involving the excluded values would still need to be handled separately.)