Efficient Algorithm to Compute The Summation of the Function

We are given N points of the form (x, y) and we need to compute the following function:
F(i,j) = |X[i] - X[j]| * |Y[i] - Y[j]|
Compute the summation of F(i,j) over all ordered pairs (i,j).
N <= 300000
I am looking for an O(N log N) solution.
My initial thought was to sort the points by X and then use a BIT, but I haven't been able to formulate a clear solution.

I have a solution using O(N log(M)) time and O(M) memory, where M is the size of the range of Y. It's similar to what you are thinking.
First sort the points so that the X coordinates are increasing.
Let's write A for the sum of (X[i] - X[j]) * (Y[i] - Y[j]) for all pairs i > j such that Y[i] > Y[j], and B for the sum of the same expression for all pairs i > j such that Y[i] < Y[j].
The sum A + B can be calculated easily in O(N) time, and the final answer can be calculated from A - B. Thus it suffices to calculate A.
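(To see why A + B is easy: expanding (X[i] - X[j]) * (Y[i] - Y[j]) and summing over all pairs i > j gives A + B = N * sum(X[k] * Y[k]) - (sum of X[k]) * (sum of Y[k]), which is one pass over the data. Pairs with Y[i] = Y[j] contribute zero to this expression, so they don't disturb the identity.)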
Now create a binary indexed tree whose nodes are indexed by intervals of the form [a, b) with b = a + 2^k for some k. (Not a precise sentence, but you know what I mean, right?) The root node should cover the interval [Y_min, Y_max] of possible values of Y.
For any node indexed by [a, b) and for any i, let f(a, b, i) be the following polynomial:
f(a, b, i)(X, Y) = sum of (X - X[j]) * (Y - Y[j]) over all j such that j < i and Y[j] lies in [a, b)
It is of the form P * XY + Q * X + R * Y + S, so such a polynomial can be represented by the four numbers P, Q, R, S.
Now, beginning with i = 0, you can calculate the contribution of point i to A by summing f(a, b, i)(X[i], Y[i]) over the O(log M) nodes [a, b) that together cover [Y_min, Y[i]). To go from i to i + 1, you only need to update those intervals [a, b) containing Y[i]. When you reach i = N, the value of A has been calculated.
If you can afford O(M) memory, then this should work fine.
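For concreteness, here is a minimal Python sketch of this approach (my own code, not the answerer's). Instead of storing the polynomial (P, Q, R, S) per interval, it keeps four Fenwick arrays over compressed Y values holding the count, sum of X[j], sum of Y[j], and sum of X[j] * Y[j] of the processed points, which carries the same information; it then recovers B from the closed form for A + B noted above.

from bisect import bisect_left

def ordered_pair_sum(pts):
    # pts: list of (x, y); returns the sum of F(i, j) over all ordered pairs
    pts = sorted(pts)                       # increasing X
    ys = sorted(set(y for _, y in pts))
    m = len(ys)
    cnt, sx, sy, sxy = ([0] * (m + 1) for _ in range(4))
    def upd(t, i, v):                       # Fenwick point update
        while i <= m:
            t[i] += v
            i += i & (-i)
    def qry(t, i):                          # Fenwick prefix sum over [1, i]
        s = 0
        while i > 0:
            s += t[i]
            i -= i & (-i)
        return s
    A = totx = toty = totxy = 0
    for x, y in pts:
        r = bisect_left(ys, y)              # positions with Y[j] strictly below y
        # pairs (j, i) with j before i and Y[j] < Y[i]:
        # sum (x - X[j]) * (y - Y[j]) = x*y*cnt - y*sumX - x*sumY + sumXY
        A += x * y * qry(cnt, r) - y * qry(sx, r) - x * qry(sy, r) + qry(sxy, r)
        upd(cnt, r + 1, 1)
        upd(sx, r + 1, x)
        upd(sy, r + 1, y)
        upd(sxy, r + 1, x * y)
        totx += x
        toty += y
        totxy += x * y
    n = len(pts)
    a_plus_b = n * totxy - totx * toty      # sum over all unordered pairs
    return 2 * (2 * A - a_plus_b)           # ordered answer = 2 * (A - B)

print(ordered_pair_sum([(0, 0), (1, 2)]))   # 4 = F(1,2) + F(2,1)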

Related

Fast algorithm for sum of steps taken by the Euclidean algorithm over pairs of numbers under an upper bound

Note: This may involve a good deal of number theory, but the formula I found online is only an approximation, so I believe an exact solution requires some sort of iterative calculation by a computer.
My goal is to find an efficient algorithm (in terms of time complexity) to solve the following problem for large values of n:
Let R(a,b) be the number of steps that the Euclidean algorithm takes to find the GCD of nonnegative integers a and b. That is, R(a,b) = 1 + R(b, a % b), and R(a,0) = 0. Given a natural number n, find the sum of R(a,b) over all 1 <= a,b <= n.
For example, if n = 2, then the solution is R(1,1) + R(1,2) + R(2,1) + R(2,2) = 1 + 2 + 1 + 1 = 5.
Since there are n^2 pairs corresponding to the numbers to be added together, simply computing R(a,b) for every pair can do no better than O(n^2), regardless of the efficiency of R. Thus, to improve the efficiency of the algorithm, a faster method must somehow calculate the sum of R(a,b) over many values at once. There are a few properties that I suspect might be useful:
If a = b, then R(a,b) = 1
If a < b, then R(a,b) = 1 + R(b,a)
R(a,b) = R(ka,kb) where k is some natural number
If b <= a, then R(a,b) = R(a+b,b)
If b <= a < 2b, then R(a,b) = R(2a-b,a)
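These properties are easy to sanity-check against the definition with a small brute-force snippet (my own, included only for verification):

def R(a, b):
    return 1 + R(b, a % b) if b else 0

for a in range(1, 60):
    for b in range(1, a + 1):
        assert R(a, b) == R(a + b, b)                           # property 4
        if b <= a < 2 * b:
            assert R(a, b) == R(2 * a - b, a)                   # property 5
        assert all(R(a, b) == R(k * a, k * b) for k in (2, 3))  # property 3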
Because of the first two properties, it is only necessary to find the sum of R(a,b) over pairs where a > b. I tried using this in addition to the third property in a method that computes R(a,b) only for pairs where a and b are also coprime, in addition to a being greater than b. The total sum is then n plus the sum of (n / a) * (2 * R(a,b) + 1) over all such pairs (using integer division for n / a). This algorithm still had time complexity O(n^2), I discovered, because the number of coprime pairs itself grows quadratically (Euler's totient function is roughly linear on average).
I don't need any specific code solution, I just need to figure out the procedure for a more efficient algorithm. But if the programming language matters, my attempts to solve this problem have used C++.
Side note: a formula has been discovered that nearly solves this problem, but it is only an approximation. Note that the formula calculates the average rather than the sum, so it would just need to be multiplied by n^2. If the formula could be extended to reduce the error, it might work, but from what I can tell, I'm not sure that is possible.
Using the Stern-Brocot tree, due to symmetry, we can look at just one of the four subtrees rooted at 1/3, 2/3, 3/2 or 3/1. The time complexity is still O(n^2) but it obviously performs fewer calculations. The version below uses the subtree rooted at 2/3 (or at least that's the one I looked at to think it through :). Also note, we only care about the denominators there since the numerators are lower. The code relies on rules 2 and 3 as well.
C++ code (takes about a tenth of a second for n = 10,000):
#include <iostream>
using namespace std;
long g(int n, int l, int mid, int r, int fromL, int turns) {
    long right = 0;
    long left = 0;
    if (mid + r <= n)
        right = g(n, mid, mid + r, r, 1, turns + (1 ^ fromL));
    if (mid + l <= n)
        left = g(n, l, mid + l, mid, 0, turns + fromL);
    // Multiples
    int k = n / mid;
    // This subtree is rooted at 2/3
    return 4 * k * turns + left + right;
}

long f(int n) {
    // 1/1, 2/2, 3/3 etc.
    long total = n;
    // 1/2, 2/4, 3/6 etc.
    if (n > 1)
        total += 3 * (n >> 1);
    if (n > 2)
        // Technically 3 turns for 2/3 but we can avoid a subtraction per call
        // by starting with 2. (I guess that means it could be another subtree,
        // but I haven't thought it through.)
        total += g(n, 2, 3, 1, 1, 2);
    return total;
}

int main() {
    cout << f(10000);
    return 0;
}
I think this is a hard problem. We can at least avoid division and reduce the space usage to linear via the Stern-Brocot tree.
def f(n, a, b, r):
    return r if a + b > n else r + f(n, a + b, b, r) + f(n, a + b, a, r + 1)

def R_sum(n):
    return sum(f(n, d, d, 1) for d in range(1, n + 1))

def R(a, b):
    return 1 + R(b, a % b) if b else 0

def test(n):
    print(R_sum(n))
    print(sum(R(a, b) for a in range(1, n + 1) for b in range(1, n + 1)))

test(100)

Finding median in merged array of two sorted arrays

Assume we have 2 sorted arrays of integers with sizes n and m. What is the best way to find the median of all m + n numbers?
It's easy to do this with log(n) * log(m) complexity. But I want to solve this problem in log(n) + log(m) time. So is there any suggestion for solving this problem?
Explanation
The key point of this problem is to discard half of A or half of B at each step, recursively, by comparing the medians of the remaining parts of A and B:
if (aMid < bMid) Keep [aMid + 1 ... n] and [bLeft ... m]
else Keep [bMid + 1 ... m] and [aLeft ... n]
// where n and m are the lengths of arrays A and B
The time complexity is O(log(m + n)):
public double findMedianSortedArrays(int[] A, int[] B) {
    int m = A.length, n = B.length;
    int l = (m + n + 1) / 2;
    int r = (m + n + 2) / 2;
    return (getkth(A, 0, B, 0, l) + getkth(A, 0, B, 0, r)) / 2.0;
}

public double getkth(int[] A, int aStart, int[] B, int bStart, int k) {
    if (aStart > A.length - 1) return B[bStart + k - 1];
    if (bStart > B.length - 1) return A[aStart + k - 1];
    if (k == 1) return Math.min(A[aStart], B[bStart]);

    int aMid = Integer.MAX_VALUE, bMid = Integer.MAX_VALUE;
    if (aStart + k/2 - 1 < A.length) aMid = A[aStart + k/2 - 1];
    if (bStart + k/2 - 1 < B.length) bMid = B[bStart + k/2 - 1];

    if (aMid < bMid)
        return getkth(A, aStart + k/2, B, bStart, k - k/2); // Check: aRight + bLeft
    else
        return getkth(A, aStart, B, bStart + k/2, k - k/2); // Check: bRight + aLeft
}
Hope it helps! Let me know if you need more explanation on any part.
Here's a very good solution I found in Java on Stack Overflow. It's a method of finding the Kth and (K+1)th smallest items in the two arrays, where K is the center of the merged array.
If you have a function for finding the Kth item of two arrays, then finding the median of the two is easy:
Calculate the weighted average of the Kth and (K+1)th items of X and Y
But then you'll need a way to find the Kth item of two lists (remember, K is one-indexed here):
If X contains zero items, then the Kth smallest item of X and Y is the Kth smallest item of Y
Otherwise if K == 1, then the smallest item of X and Y is the smaller of the smallest items of X and Y (min(X[0], Y[0]))
Otherwise:
i. Let A be min(length(X), K / 2)
ii. Let B be min(length(Y), K / 2)
iii. If X[A] > Y[B], then recurse from step 1 with X, with Y' consisting of all elements of Y from B to the end of Y, and with K' = K - B; otherwise recurse with X' consisting of all elements of X from A to the end of X, with Y, and with K' = K - A
If I find the time tomorrow, I will verify that this algorithm works in Python as stated and provide example source code; it may have some off-by-one errors as-is.
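In the meantime, here is a hedged Python sketch of the steps above (my own code, unverified against the original answer; K is one-indexed, and list slicing keeps it simple at the cost of copying):

def kth_smallest(xs, ys, k):
    if not xs:
        return ys[k - 1]
    if not ys:
        return xs[k - 1]
    if k == 1:
        return min(xs[0], ys[0])
    a = min(len(xs), k // 2)
    b = min(len(ys), k // 2)
    if xs[a - 1] > ys[b - 1]:
        return kth_smallest(xs, ys[b:], k - b)  # drop the first b items of Y
    else:
        return kth_smallest(xs[a:], ys, k - a)  # drop the first a items of X

def median(xs, ys):
    n = len(xs) + len(ys)
    return (kth_smallest(xs, ys, (n + 1) // 2) +
            kth_smallest(xs, ys, (n + 2) // 2)) / 2.0

print(median([1, 3, 5], [2, 4]))  # 3.0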
Take the median element in list A and call it a. Compare a to the central elements in list B. Let's call them b1 and b2 (if B has odd length then exactly where you split B depends on your definition of the median of an even-length list, but the procedure is almost identical regardless). If b1 <= a <= b2 then a is the median of the merged array. This can be done in constant time since it requires exactly two comparisons.
If a is greater than b2 then we add the top half of A to the top of B and repeat. B will no longer be sorted, but it doesn't matter. If a is less than b1 then we add the bottom half of A to the bottom of B and repeat. These will iterate at most log(n) times (if the median is found sooner then stop, of course).
It is possible that this will not find the median. If this is the case then the median is in B, so perform the same algorithm with A and B swapped. This will require log(m) iterations. In total you will have performed at most 2 * (log(n) + log(m)) iterations of a constant-time operation, so you have solved the problem in order log(n) + log(m) time.
This is essentially the same answer as was given by iehrlich, but written out more explicitly.
Yes, this can be done. Given two arrays, A and B, in the worst case you have to first perform a binary search in A, and then, if it fails, a binary search in B looking for the median. On each step of a binary search, you check whether the current element is actually a median of the merged A+B array. Such a check takes constant time.
Let's see why such a check is constant. For simplicity, let's assume that |A| + |B| is an odd number, and that all numbers in both arrays are distinct. You can remove these restrictions later by applying the usual median-definition approach (i.e., how to calculate the median of an array containing duplicates, or of an array of even length). Given that, we know for sure that in the merged array there will be (|A| + |B| - 1) / 2 elements to the right and to the left of the actual median. During the binary search in A, we know the index of the current element x in array A (call it i). Now, if x satisfies the condition B[j] < x < B[j+1], where i + j == (|A| + |B| - 1) / 2, then x is your median.
The overall complexity is O(log(max(|A|, |B|))) time and O(1) memory.
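A hedged Python sketch of this binary search (my own code; it assumes, as above, that |A| + |B| is odd and all values are distinct):

def median_of_two(A, B):
    half = (len(A) + len(B) - 1) // 2       # elements on each side of the median
    for X, Y in ((A, B), (B, A)):           # search A first, then B
        lo, hi = 0, len(X) - 1
        while lo <= hi:
            i = (lo + hi) // 2
            j = half - i                    # elements of Y that must lie below X[i]
            if j < 0:
                hi = i - 1                  # X[i] has too many elements below it
            elif j > len(Y):
                lo = i + 1                  # X[i] has too few elements below it
            elif j > 0 and Y[j - 1] > X[i]:
                lo = i + 1                  # fewer than `half` elements below X[i]
            elif j < len(Y) and Y[j] < X[i]:
                hi = i - 1                  # more than `half` elements below X[i]
            else:
                return X[i]                 # constant-time check passed
    return None                             # unreachable for valid input

print(median_of_two([1, 2], [3, 4, 5]))  # 3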

Number of pairs with a given sum and product

I have an array A along with 3 variables k, x and y.
I have to find the number of unordered pairs (i,j) such that the sum of the two elements mod k equals x and the product of the same two elements mod k equals y. Pairs need not be distinct in value. In other words, the number of (i,j) such that
(A[i]+A[j]) % k == x and (A[i]*A[j]) % k == y, where 0 <= i < j < size of A.
For example, let A = {1,2,3,2,1}, k=2, x=1, y=0. Then the answer is 6, because the pairs (by value) are: (1,2), (1,2), (2,3), (2,1), (3,2), and (2,1).
I used a brute force approach, but obviously this is not acceptable.
Modulo-arithmetic has the following two rules:
((a mod k) * (b mod k)) mod k = (a * b) mod k
((a mod k) + (b mod k)) mod k = (a + b) mod k
Thus we can sort all values into a hashtable with separate chaining and k buckets.
Addition
Find m < k, such that for a given n < k: (n + m) mod k = x.
There is exactly one solution to this problem:
if n < x: m < x must hold. Thus m = x - n
if n == x: m = 0
if n > x: we need to find m such that n + m = x + k. Thus m = x + k - n
This way, for each congruence class we can easily determine the matching class such that for any pair (a, b) from the cross product of the two lists, (a + b) mod k = x holds.
Multiplication
Multiplication is a bit trickier. Luckily we've already determined the matching congruence class for addition (see above), which must also be the matching congruence class for multiplication, since both constraints need to hold. To verify that the given congruence class matches, we only need to check that (n * m) mod k = y (with n and m defined as above). If this holds, we can build pairs; otherwise no matching elements exist.
Implementation
This would be working Python code for the above example:
def modmuladd(ls, x, y, k):
    result = []
    # create tuples of (value, index)
    indices = zip(ls, range(0, len(ls)))
    # split up into congruence classes
    congruence_cls = [[] for i in range(0, k)]
    for p in indices:
        congruence_cls[p[0] % k].append(p)
    for n in range(0, k):
        # congruence class to match addition
        if n < x:
            m = x - n
        elif n == x:
            m = 0
        else:
            m = x + k - n
        # check if congruence class matches for multiplication
        if (n * m) % k != y or len(congruence_cls[m]) == 0:
            continue  # no matching congruence class
        # add matching tuples to result (the strict index comparison avoids
        # pairing an element with itself when n == m)
        result += [(a, b) for a in congruence_cls[n] for b in congruence_cls[m] if a[1] < b[1]]
        result += [(a, b) for a in congruence_cls[m] for b in congruence_cls[n] if a[1] < b[1]]
    # sort result according to indices of first and second element, remove duplicates
    sorted_res = sorted(sorted(set(result), key=lambda p: p[1][1]), key=lambda p: p[0][1])
    # strip indices from the result set
    return [(p[0][0], p[1][0]) for p in sorted_res]
Note that sorting and elimination of duplicates is only required because this code concentrates on demonstrating the use of congruence classes rather than on optimization. The example can easily be tweaked to produce the ordering without sorting, with minor modifications.
Test run
print(modmuladd([1, 2, 3, 2, 1], 1, 0, 2))
Output:
[(1, 2), (1, 2), (2, 3), (2, 1), (3, 2), (2, 1)]
EDIT:
Worst-case complexity of this algorithm is still O(n^2), because building all possible pairs of a list of size n is O(n^2). With this algorithm, however, the search for matching congruence classes is cut down to O(k) after O(n) preprocessing, so merely counting the resulting pairs can be done in O(n + k) (see the second edit below). Assuming the numbers are distributed evenly over the congruence classes, building the pairs for one matching pair of classes takes about O((n/k)^2), i.e. O(n^2/k) over all classes.
EDIT 2:
An implementation that only counts would work like this:
def modmuladdct(ls, x, y, k):
    result = 0
    # split up into congruence classes
    congruence_class = {}
    for v in ls:
        congruence_class.setdefault(v % k, []).append(v)
    for n in congruence_class:
        # congruence class to match addition
        m = (x - n + k) % k
        # check that the matching class exists and matches for multiplication
        if n * m % k != y or m not in congruence_class:
            continue  # no matching congruence class
        if n == m:
            # pairs inside a single class: cs * (cs - 1) ordered pairs
            cs = len(congruence_class[n])
            result += cs * (cs - 1)
        else:
            # total number of ordered pairs between the two classes
            result += len(congruence_class[n]) * len(congruence_class[m])
    # each unordered pair was counted twice (once in each order)
    return result // 2
Each unordered pair is counted exactly twice: once in order and once reversed (or, within a single class, once per ordering of its two elements). Dividing the result by two corrects this. Runtime is O(n + k) (assuming dictionary operations are O(1)).
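A quick check against the question's example (using the function as written above): modmuladdct([1, 2, 3, 2, 1], 1, 0, 2) returns 6, matching the six pairs listed in the question.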
The number of loops is C(n, 2) = 5! / (2! * (5 - 2)!) = 10 in your case, and there is nothing magic that would drastically reduce the number of loops.
In JS you can do:
A = [1, 2, 3, 2, 1];
k = 2;
x = 1;
y = 0;
for (i = 0; i < A.length; i++) {
    for (j = i + 1; j < A.length; j++) {
        if ((A[i] + A[j]) % k !== x) {
            continue;
        }
        if ((A[i] * A[j]) % k !== y) {
            continue;
        }
        console.log('(' + A[i] + ', ' + A[j] + ')');
    }
}
Ignoring A, we can find all solutions of n * (x - n) == y mod k for 0 <= n < k. That's a simple O(k) algorithm -- check each such n in turn.
We can count, for each n, how often A[i] mod k == n, and then reconstruct the counts of pairs. For if cs is an array of these counts, and n is a solution of n * (x - n) == y mod k, then there are cs[n] * cs[(x - n) % k] pairs of elements of A that solve our equations corresponding to this n. To avoid double counting we only count n such that n < (x - n) % k; when n == (x - n) % k, the pairs come from a single class and number C(cs[n], 2).
def count_pairs(A, k, x, y):
    cs = [0] * k
    for a in A:
        cs[a % k] += 1
    pairs = ((i, (x - i) % k) for i in range(k) if i * (x - i) % k == y)
    # i < j: cross-class pairs; i == j: unordered pairs within one class
    return sum(cs[i] * cs[j] if i < j else cs[i] * (cs[i] - 1) // 2
               for i, j in pairs if i <= j)

print(count_pairs([1, 2, 3, 2, 1], 2, 1, 0))
Overall, this constructs the counts in O(|A|) time, and the remaining code runs in O(k) time. It uses O(k) space.

Maximizing the overall sum of K disjoint and contiguous subsets of size L among N positive numbers

I'm trying to find an algorithm to find K disjoint, contiguous subsets of size L of an array x of real numbers that maximize the sum of the elements.
Spelling out the details, X is a set of N positive real numbers:
X={x[1],x[2],...x[N]} where x[j]>=0 for all j=1,...,N.
A contiguous subset of length L called S[i] is defined as L consecutive members of X starting at position n[i] and ending at position n[i]+L-1:
S[i] = {x[j] | j=n[i],n[i]+1,...,n[i]+L-1} = {x[n[i]],x[n[i]+1],...,x[n[i]+L-1]}.
Two such subsets S[i] and S[j] are called pairwise disjoint (non-overlapping) if |n[i]-n[j]| >= L. In other words, they don't contain any identical members of X.
Define the summation of the members of each subset:
SUM[i] = x[n[i]]+x[n[i]+1]+...+x[n[i]+L-1];
The goal is to find K contiguous and disjoint (non-overlapping) subsets S[1],S[2],...,S[K] of length L such that SUM[1]+SUM[2]+...+SUM[K] is maximized.
This is solved by dynamic programming. Let M[i] be the best solution only for the first i elements of x. Then:
M[i] = 0 for i < L
M[i] = max(M[i-1], M[i-L] + x[i-L+1] + x[i-L+2] + ... + x[i])
The solution to your problem is M[N].
When you code it, you can incrementally compute the sum (or simply pre-compute all the sums) leading to an O(N) solution in both space and time.
If you have to find exactly K subsets, you can extend this, by defining M[i, k] to be the optimal solution with k subsets on the first i elements. Then:
M[i, k] = 0 for i < k * L or k = 0.
M[i, k] = max(M[i-1, k], M[i-L, k-1] + x[i-L+1] + ... + x[i])
The solution to your problem is M[N, K].
This is a 2d dynamic programming solution, and has time and space complexity of O(NK) (assuming you use the same trick as above for avoiding re-computing the sum).
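A minimal Python sketch of the M[i, k] recurrence (my own code; the names are mine). It uses prefix sums for the window sums and marks infeasible states with -inf, which enforces exactly k subsets; with nonnegative inputs this agrees with the 0-initialization described above:

def max_k_windows(x, K, L):
    n = len(x)
    pre = [0] * (n + 1)                     # pre[i] = x[0] + ... + x[i-1]
    for i, v in enumerate(x):
        pre[i + 1] = pre[i] + v
    NEG = float("-inf")
    # M[k][i]: best total using exactly k windows within the first i elements
    M = [[NEG] * (n + 1) for _ in range(K + 1)]
    M[0] = [0] * (n + 1)
    for k in range(1, K + 1):
        for i in range(1, n + 1):
            best = M[k][i - 1]                      # element i stays unused
            if i >= L and M[k - 1][i - L] != NEG:   # a window ends at i
                best = max(best, M[k - 1][i - L] + pre[i] - pre[i - L])
            M[k][i] = best
    return M[K][n]

print(max_k_windows([4, 1, 0, 5, 9, 2], K=2, L=2))  # 19: windows [4,1] and [5,9]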

Counting number of points in lower left quadrant?

I am having trouble understanding a solution to an algorithmic problem.
In particular, I don't understand how or why this part of the code
s += a[i];
total += query(s);
update(s);
allows you to compute the total number of points in the lower left quadrant of each point.
Could someone please elaborate?
As an analogue for the plane problem, consider this:
For a point (a, b) to lie in the lower left quadrant of (x, y), we need a < x and b < y; thus, points of the form (i, P[i]) lie in the lower left quadrant of (j, P[j]) iff i < j and P[i] < P[j]
When iterating in ascending order, all points that were considered earlier lie on the left compared to the current (i, P[i])
So one only has to locate all P[j]s less than P[i] that have been considered so far
*current point refers to the point under consideration in the current iteration of the for loop that you quoted, i.e., (i, P[i])
Let's define another array, C[s]:
C[s] = Number of Prefix Sums of array A[1..(i - 1)] that amount to s
So the solution to point 3 above becomes the sum ... + C[-2] + C[-1] + C[0] + C[1] + C[2] + ... + C[P[i] - 1], i.e., the prefix sum of C up to P[i] - 1
Use the BIT to store the prefix sum of C, thus defining query(s) as:
query(s) = Number of Prefix Sums of array A[1..(i - 1)] that amount to a value < s
Using these definitions, s in the given code gives you the prefix sum up to the current index i (P[i]). total builds the answer, and update simply adds P[i] to the BIT.
We have to repeat this method for all i, hence the for loop.
PS: It uses a data structure called a Binary Indexed Tree (http://community.topcoder.com/tc?module=Static&d1=tutorials&d2=binaryIndexedTrees) for operations. If you aren't acquainted with it, I'd recommend that you check the link.
EDIT:
You are given an array S and a value X. You can split S into two disjoint groups: L, which has all elements of S less than X, and H, which has those greater than or equal to X.
A: All elements of L are less than all elements of H.
Any subsequence T of S will have some elements of L and some elements of H. Let's say it has p elements of L and q of H. When T is sorted to give T', all p elements of L appear before the q elements of H because of A.
The median, being the central value, is the value at location m = (p + q)/2.
It is intuitive to think that having q > p implies that the median lies in H (i.e., the median is >= X); as a proof:
Values in locations [1..p] in T' belong to L. Therefore, for the median to be in H, its position m must be greater than p:
m > p
(p + q)/2 > p
p + q > 2p
q > p
B: q - p > 0
To compute q - p, I replace all elements in T' with -1 if they belong to L (< X) and +1 if they belong to H (>= X).
T' then looks something like {-1, -1, -1, ..., 1, 1, 1}
It has -1 p times and 1 q times. The sum of T' will now give me:
Sum = p * (-1) + q * (1)
C: Sum = q - p
I can use this information to check condition B.
All subsequences considered are contiguous, of the form {A[i+1], A[i+2], ..., A[j]}. To compute their sums, I can use prefix sums: P[i] = A[1] + A[2] + ... + A[i] (with P[0] = 0).
The sum of the subsequence from A[i+1] to A[j] can then be computed as P[j] - P[i] (with j greater than i)
With C and B in mind, we conclude:
Sum = P[j] - P[i] = q - p (q - p > 0)
P[j] - P[i] > 0
P[j] > P[i]
j > i and P[j] > P[i] for each solution that gives you a median >= X
In summary:
Replace each A[i] with -1 if it is less than X and +1 otherwise
Compute the prefix sums P[i] of the transformed array
For each pair (i, P[i]), count the pairs which lie in its lower left quadrant, as in the sketch below.
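Putting it together, a hedged Python sketch of the whole procedure (my own code; it counts the subarrays of a whose median is >= x, offsetting prefix sums by n so they can index a Fenwick tree):

def count_median_at_least(a, x):
    n = len(a)
    size = 2 * n + 2                     # prefix sums lie in [-n, n]
    tree = [0] * (size + 1)
    def update(s):                       # record prefix sum s
        i = s + n + 1                    # shift to a 1-based positive index
        while i <= size:
            tree[i] += 1
            i += i & (-i)
    def query(s):                        # count recorded sums strictly < s
        i = s + n                        # tree prefix up to s - 1
        total = 0
        while i > 0:
            total += tree[i]
            i -= i & (-i)
        return total
    ans = 0
    s = 0
    update(0)                            # the empty prefix P[0] = 0
    for v in a:
        s += 1 if v >= x else -1         # step 1: replace with +1 / -1
        ans += query(s)                  # step 3: earlier prefixes below s
        update(s)                        # step 2 bookkeeping
    return ans

print(count_median_at_least([1, 3], 2))  # 1: only the subarray [3] qualifies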
