Counting inversions in an array of 2D pairs - algorithm

Problem Description:
Let there be an array of 2D pairs ((x_1, y_1), ..., (x_n, y_n)). With a fixed constant y', a pair (i, j) is called half-inverted if i < j, x_i > x_j, and y_i ≥ y' > y_j. Devise an algorithm that counts the number of half-inverted pairs. You will get full marks if your algorithm is correct and of complexity no more than O(n log n).
My idea is to treat this with a method similar to counting inversions in a normal array, but my problem is: how do we maintain the order during the Merge-and-Count step?

A simple modification of the familiar merge-sort inversion-counting algorithm can be used to solve this problem, so make sure you fully understand that algorithm as a prerequisite.
If we examine the merge step of this algorithm, we have two sorted halves and two pointers, one into each half. Let our left pointer be i and our right pointer be j. Using the traditional definition of an inversion, if i points to a value larger than the value pointed to by j, then, because both halves are sorted and every element of the left half comes before every element of the right half in the original array, all the elements from i to the end of the left half form an inversion with the value at j. So we increase our count by mid - i, where mid is the end of the left half.
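For reference, here is a minimal sketch of that standard inversion counter in Python (my own illustration, not code from the question or answer):

def count_inversions(a):
    # Merge-sort based inversion count on a plain array; returns the
    # sorted array together with the number of inversions.
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    left, inv_left = count_inversions(a[:mid])
    right, inv_right = count_inversions(a[mid:])
    merged, inv = [], inv_left + inv_right
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] > right[j]:
            # Every remaining element of the left half is > right[j],
            # so each one forms an inversion with it.
            inv += len(left) - i
            merged.append(right[j])
            j += 1
        else:
            merged.append(left[i])
            i += 1
    merged += left[i:] + right[j:]
    return merged, inv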
Switching back to your problem, we are dealing with pairs (x, y). If we keep the x values sorted then, using the approach described above, we can count the inversions considering only the x values. Looking at your definition of half-inversions, though, we would surely over-count if we only required x_i > x_j: we are missing the additional constraint y_i ≥ y' > y_j, which must be filtered out of our counting.
So, going back to the traditional algorithm: when our i pointer points to a value greater than the value at j, we also need to make sure that the y value at j is less than y'. If that is not true, then none of the pairs from i to mid match our definition of a half-inversion, so we cannot count any of them. Now assume the y at j is smaller than y'. If we simply counted all the pairs from i to mid, we would still over-count, because some of them may have y_i < y'.
One way to fix this is to keep track of the number of y values in the left half from i to mid which are ≥ y', and add that value to our count. We can keep track of how many y ≥ y' we have seen in the merge step up to the current i, and subtract that from the total number of y's ≥ y' in the left half. To keep track of that total, we can return it from our recursive function (total = left + right) and, when merging, use only the number that came from the left half. We also need to modify our base case, which is straightforward.
def count_half_inversions(l, y):
    return count_rec(l, 0, len(l), l.copy(), y)[0]

def count_rec(l, begin, end, copy, y):
    if end - begin <= 1:
        # we have only 1 pair
        return (0, 1 if l[begin][1] >= y else 0)
    mid = begin + ((end - begin) // 2)
    left = count_rec(copy, begin, mid, l, y)
    right = count_rec(copy, mid, end, l, y)
    between = merge_count(l, begin, mid, end, copy, left[1], y)
    # return (inversion count, number of pairs (x, y) with y >= y')
    return (left[0] + right[0] + between, left[1] + right[1])

def merge_count(l, begin, mid, end, copy, left_y_count, y):
    result = 0
    i, j = begin, mid
    k = begin
    while i < mid and j < end:
        if copy[i][0] > copy[j][0]:
            if y > copy[j][1]:
                result += left_y_count
            smaller = copy[j]
            j += 1
        else:
            if copy[i][1] >= y:
                left_y_count -= 1
            smaller = copy[i]
            i += 1
        l[k] = smaller
        k += 1
    while i < mid:
        l[k] = copy[i]
        i += 1
        k += 1
    while j < end:
        l[k] = copy[j]
        j += 1
        k += 1
    return result
test_case = [(1,1), (6,4), (6,3), (1,2), (1,2), (3,3), (6,2), (0,1)]
fixed_y = 2
print(count_half_inversions(test_case, fixed_y))
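As a sanity check, the merge-based count can be compared against a direct O(n^2) application of the definition (a sketch of my own, not part of the original answer). Note that count_half_inversions above sorts its input list in place by x, so the comparison is done on a fresh copy of the data:

def count_half_inversions_bruteforce(pairs, y_threshold):
    # Directly apply the definition: i < j, x_i > x_j, y_i >= y' > y_j.
    n = len(pairs)
    return sum(1
               for i in range(n)
               for j in range(i + 1, n)
               if pairs[i][0] > pairs[j][0]
               and pairs[i][1] >= y_threshold > pairs[j][1])

pairs = [(1,1), (6,4), (6,3), (1,2), (1,2), (3,3), (6,2), (0,1)]
print(count_half_inversions_bruteforce(pairs, fixed_y))   # 6
print(count_half_inversions(list(pairs), fixed_y))        # 6 as well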

Related

Algorithm Problem: Finding all cells that have distance of K from some specific cells in a 2D grid [duplicate]

I am attempting to solve a coding challenge, but my solution is not very performant. I'm looking for advice or suggestions on how I can improve my algorithm.
The puzzle is as follows:
You are given a grid of cells that represents an orchard; each cell can be either an empty spot (0) or a fruit tree (1). A farmer wishes to know how many empty spots within the orchard are within distance k of all fruit trees.
Distance is counted using taxicab geometry, for example:
k = 1
[1, 0]
[0, 0]
the answer is 2, as only the bottom-right spot is more than k distance from all trees.
My solution goes something like this:
loop over grid and store all tree positions
BFS from the first tree position and store all empty spots until we reach a neighbour that is beyond k distance
BFS from the next tree position and store the intersection of empty spots
Repeat step 3 until we have iterated over all tree positions
Return the number of empty spots remaining after all intersections
I have found that for large grids with large values of k, my algorithm becomes very slow as I end up checking every spot in the grid multiple times. After doing some research, I found some solutions for similar problems that suggest taking the two most extreme target nodes and then only comparing distance to them:
https://www.codingninjas.com/codestudio/problem-details/count-nodes-within-k-distance_992849
https://www.geeksforgeeks.org/count-nodes-within-k-distance-from-all-nodes-in-a-set/
However this does not work for my challenge given certain inputs like below:
k = 4
[0, 0, 0, 1]
[0, 1, 0, 0]
[0, 0, 0, 0]
[1, 0, 0, 0]
[0, 0, 0, 0]
Using the extreme-nodes approach, the bottom-right empty spot is counted even though it is at distance 5 from the middle tree.
Could anyone point me towards a more efficient approach? I am still very new to these types of problems so I am finding it hard to see the next step I should take.
There is a simple, linear time solution to this problem because of the grid and distance structure. Given a fruit tree with coordinates (a, b), consider the 4 diagonal lines bounding the box of distance k around it. The diagonals going down and to the right have a constant value of x + y, while the diagonals going down and to the left have a constant value of x - y.
A point (x, y) is inside the box (and therefore, within distance k of (a, b)) if and only if:
a + b - k <= x + y <= a + b + k, and
a - b - k <= x - y <= a - b + k
So we can iterate over our fruit trees (a, b) to find four numbers:
first_max = max(a + b - k); first_min = min(a + b + k);
second_max = max(a - b - k); second_min = min(a - b + k);
where min and max are taken over all fruit trees. Then, iterate over empty cells (or do some math and subtract fruit tree counts, if your grid is enormous), counting how many empty spots (x,y) satisfy
first_max <= x + y <= first_min, and
second_max <= x - y <= second_min.
This Python code (written in a procedural style) illustrates this idea. Each diagonal of each bounding box cuts off exactly half of the plane, so this is equivalent to intersection of parallel half planes:
from math import inf

# Assumes `grid` (a list of lists of 0/1) and `k` are already defined.
fruit_trees = [(a, b) for a in range(len(grid))
               for b in range(len(grid[0]))
               if grid[a][b] == 1]

northwest_half_plane = -inf
southeast_half_plane = inf
southwest_half_plane = -inf
northeast_half_plane = inf

for a, b in fruit_trees:
    northwest_half_plane = max(northwest_half_plane, a - b - k)
    southeast_half_plane = min(southeast_half_plane, a - b + k)
    southwest_half_plane = max(southwest_half_plane, a + b - k)
    northeast_half_plane = min(northeast_half_plane, a + b + k)

count = 0
for x in range(len(grid)):
    for y in range(len(grid[0])):
        if grid[x][y] == 0:
            if (northwest_half_plane <= x - y <= southeast_half_plane
                    and southwest_half_plane <= x + y <= northeast_half_plane):
                count += 1

print(count)
Some notes on the code: Technically the array coordinates are a quarter-turn rotated from the Cartesian coordinates of the picture, but that is immaterial here. The code is left deliberately bereft of certain 'optimizations' which may seem obvious, for two reasons: 1. The best optimization depends on the input format of fruit trees and the grid, and 2. The solution, while being simple in concept and simple to read, is not simple to get right while writing, and it's important that the code be 'obviously correct'. Things like 'exit early and return 0 if a lower bound exceeds an upper bound' can be added later if the performance is necessary.
As answered by @kcsquared, here is an implementation in Java:
public int solutionGrid(int K, int[][] A) {
    int m = A.length;
    int n = A[0].length;
    int k = K;
    // to store the house coordinates
    Set<String> houses = new HashSet<>();
    // Find the houses and store the coordinates
    for (int i = 0; i < m; i++) {
        for (int j = 0; j < n; j++) {
            if (A[i][j] == 1) {
                houses.add(i + "&" + j);
            }
        }
    }
    int northwest_half_plane = Integer.MIN_VALUE;
    int southeast_half_plane = Integer.MAX_VALUE;
    int southwest_half_plane = Integer.MIN_VALUE;
    int northeast_half_plane = Integer.MAX_VALUE;
    for (String ele : houses) {
        String[] arr = ele.split("&");
        int a = Integer.valueOf(arr[0]);
        int b = Integer.valueOf(arr[1]);
        northwest_half_plane = Math.max(northwest_half_plane, a - b - k);
        southeast_half_plane = Math.min(southeast_half_plane, a - b + k);
        southwest_half_plane = Math.max(southwest_half_plane, a + b - k);
        northeast_half_plane = Math.min(northeast_half_plane, a + b + k);
    }
    int count = 0;
    for (int x = 0; x < m; x++) {
        for (int y = 0; y < n; y++) {
            if (A[x][y] == 0) {
                if ((northwest_half_plane <= x - y && x - y <= southeast_half_plane)
                        && southwest_half_plane <= x + y && x + y <= northeast_half_plane) {
                    count += 1;
                }
            }
        }
    }
    return count;
}
This wouldn't be easy to implement, but it could be sublinear for many cases, and at most linear. Consider representing the perimeter of each tree as four corners (they mark a square rotated 45 degrees). For each tree, compute its perimeter's intersection with the current intersection. The difficulty comes with managing the corners of the intersection, which could include more than one point because of the diagonal alignments. Finally, run inside the final intersection to count how many empty spots are within it.
Since you are using taxicab distance, BFS is unnecessary. You can compute the distance between an empty spot and a tree directly.
This algorithm is based on a suggestion by https://stackoverflow.com/users/3080723/stef
// select tree near top left corner
SET flag false
LOOP r over rows
    LOOP c over columns
        IF tree at c, r
            SET t to tree at c, r
            SET flag true
            BREAK
    IF flag
        BREAK
LOOP s over empty spots
    Calculate distance between s and t
    IF distance <= k
        ADD s to spotlist
LOOP s over spotlist
    LOOP t over trees, starting at bottom right corner
        Calculate distance between s and t
        IF distance > k
            REMOVE s from spotlist
            BREAK
RETURN spotlist
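A direct Python rendering of that pseudocode might look like the sketch below (my own illustration; grid and k are assumed inputs, and the function name is made up):

def spots_within_k_of_all_trees(grid, k):
    # Taxicab distance needs no BFS; it can be computed directly per pair.
    trees = [(r, c) for r, row in enumerate(grid)
             for c, v in enumerate(row) if v == 1]
    empties = [(r, c) for r, row in enumerate(grid)
               for c, v in enumerate(row) if v == 0]
    if not trees:
        return empties
    # Seed the candidate list from one tree, then filter against all trees.
    first = trees[0]
    spotlist = [s for s in empties
                if abs(s[0] - first[0]) + abs(s[1] - first[1]) <= k]
    return [s for s in spotlist
            if all(abs(s[0] - t[0]) + abs(s[1] - t[1]) <= k for t in trees)]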

Maximize the difference of the sum of picked numbers by 2 players

I have two problems that derive from a simpler one. I'll explain the simple one, with the solution I found, and after that the modified problems.
Suppose there is a game with 2 players, A and B, and a list of positive integers. Player A starts by taking a number out of the list, player B does the same, and so on until there are no numbers left in the list. Both players sum up the numbers they picked. The goal for each player is to maximize the difference between his sum and his opponent's sum, which is the score. The question is: what is the maximum score player A can obtain if both players play in an optimal manner?
Now, for this I figured out that the optimal strategy for each player is to take the biggest number at each step; the pseudocode is the following:
sumA = 0
sumB = 0
list = [1, 5, 3, 7, 9]
while list IS NOT EMPTY:
    val = pop_max(list)
    sumA = sumA + val
    if list IS NOT EMPTY:
        val = pop_max(list)
        sumB = sumB + val
scoreA = sumA - sumB
print scoreA
This can run in O(n) or O(n*log(n)), depending on how the numbers in the list are sorted. A concrete Python sketch of this greedy strategy is shown below.
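For concreteness, here is a minimal Python version of the greedy play described above (the function name score_a is my own):

def score_a(numbers):
    # Sort descending: A takes the 1st, 3rd, 5th, ... largest, B takes the rest.
    ordered = sorted(numbers, reverse=True)
    return sum(ordered[0::2]) - sum(ordered[1::2])

print(score_a([1, 5, 3, 7, 9]))  # (9 + 5 + 1) - (7 + 3) = 5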
The following are the 2 modifications:
At the beginning of the game, player A must remove K numbers from the list. If he does this in an optimal manner, and after that the game proceeds as the initial one, what is the maximum score he can obtain?
and
At each step the players can pick either the left-most or the right-most number from the list. Again they play in an optimal manner. What is the maximum score player A can obtain?
For the second modification I can think of a brute-force approach, i.e. computing the tree of all possibilities, but this does not work for big input data. I believe that there is some kind of DP algorithm.
For the first modification I can't think of an idea.
Can someone help with some algorithm ideas for the 2 modifications?
[LATER EDIT]
The solution for the 2nd modification can be found here: https://www.geeksforgeeks.org/optimal-strategy-for-a-game-dp-31/. It uses DP.
Here is the answer for the 2nd modification, which is:
At each step the players can pick either the left-most or the right-most number from the list. Again they play in an optimal manner. What is the maximum score player A can obtain?
The solution is based on DP. For the sub-problem (i, j), i.e. v[i], v[i+1], ..., v[j], there are two choices:
The user chooses the i-th element with value v[i]: the opponent then chooses either the (i+1)-th or the j-th element, and intends to choose the one which leaves the user with the minimum value. So the user can collect v[i] + min(F(i+2, j), F(i+1, j-1)).
The user chooses the j-th element with value v[j]: the opponent then chooses either the i-th or the (j-1)-th element, again leaving the user with the minimum value. So the user can collect v[j] + min(F(i+1, j-1), F(i, j-2)).
The following recursive solution is based on these two choices; we take the maximum of the two.
F(i, j) represents the maximum value the user can collect from the i-th coin to the j-th coin:
F(i, j) = max(v[i] + min(F(i+2, j), F(i+1, j-1)), v[j] + min(F(i+1, j-1), F(i, j-2)))
Base cases:
F(i, j) = v[i] if j == i
F(i, j) = max(v[i], v[j]) if j == i+1
Here is a piece of Python code that solves it:
def optimalStrategyOfGame(arr, n):
    # Create a table to store solutions of subproblems
    table = [[0 for i in range(n)] for i in range(n)]
    # Fill the table using the above recursive formula. Note that the table is
    # filled in diagonal fashion, from the diagonal elements to table[0][n-1],
    # which is the result.
    for gap in range(n):
        for j in range(gap, n):
            i = j - gap
            # Here x is the value of F(i+2, j), y is F(i+1, j-1) and z is
            # F(i, j-2) in the above recursive formula
            x = 0
            if (i + 2) <= j:
                x = table[i + 2][j]
            y = 0
            if (i + 1) <= (j - 1):
                y = table[i + 1][j - 1]
            z = 0
            if i <= (j - 2):
                z = table[i][j - 2]
            table[i][j] = max(arr[i] + min(x, y), arr[j] + min(y, z))
    return table[0][n - 1]
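For example, on the coin row [8, 15, 3, 7] (an illustrative call, not from the original post), the user can collect 22 by taking 7, letting the opponent take 8, and then taking 15:

print(optimalStrategyOfGame([8, 15, 3, 7], 4))  # 22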
[SOURCE] https://www.geeksforgeeks.org/optimal-strategy-for-a-game-dp-31/

Finding median in merged array of two sorted arrays

Assume we have 2 sorted arrays of integers with sizes n and m. What is the best way to find the median of all m + n numbers?
It's easy to do this with O(log(n) * log(m)) complexity, but I want to solve this problem in O(log(n) + log(m)) time. Does anyone have a suggestion for how to do that?
Explanation
The key point of this problem is to recursively ignore half of the remaining elements of A or B at each step, by comparing the medians of what remains of A and B:
if (aMid < bMid) Keep [aMid + 1 ... n] and [bLeft ... m]
else Keep [bMid + 1 ... m] and [aLeft ... n]
// where n and m are the lengths of arrays A and B
With this approach, the time complexity is O(log(m + n)):
public double findMedianSortedArrays(int[] A, int[] B) {
    int m = A.length, n = B.length;
    int l = (m + n + 1) / 2;
    int r = (m + n + 2) / 2;
    return (getkth(A, 0, B, 0, l) + getkth(A, 0, B, 0, r)) / 2.0;
}

public double getkth(int[] A, int aStart, int[] B, int bStart, int k) {
    if (aStart > A.length - 1) return B[bStart + k - 1];
    if (bStart > B.length - 1) return A[aStart + k - 1];
    if (k == 1) return Math.min(A[aStart], B[bStart]);

    int aMid = Integer.MAX_VALUE, bMid = Integer.MAX_VALUE;
    if (aStart + k/2 - 1 < A.length) aMid = A[aStart + k/2 - 1];
    if (bStart + k/2 - 1 < B.length) bMid = B[bStart + k/2 - 1];

    if (aMid < bMid)
        return getkth(A, aStart + k / 2, B, bStart, k - k / 2); // Check: aRight + bLeft
    else
        return getkth(A, aStart, B, bStart + k / 2, k - k / 2); // Check: bRight + aLeft
}
Hope it helps! Let me know if you need more explanation on any part.
Here's a very good solution I found in Java on Stack Overflow. It's a method of finding the Kth and (K+1)th smallest items in the two arrays, where K is the center of the merged array.
If you have a function for finding the Kth item of two arrays, then finding the median of the two is easy:
Calculate the weighted average of the Kth and (K+1)th items of X and Y
But then you'll need a way to find the Kth item of two lists (remember, we're using one-based indexing now):
If X contains zero items then the Kth smallest item of X and Y is the Kth smallest item of Y
Otherwise if K == 2 then the second smallest item of X and Y is the smallest of the smallest items of X and Y (min(X[0], Y[0]))
Otherwise;
i. Let A be min(length(X), K / 2)
ii. Let B be min(length(Y), K / 2)
iii. If X[A] > Y[B], then recurse from step 1 with X, Y' consisting of all elements of Y from B to the end of Y, and K' = K - B; otherwise recurse with X' consisting of all elements of X from A to the end of X, Y, and K' = K - A
If I find the time tomorrow, I will verify that this algorithm works in Python as stated and provide example source code; it may have some off-by-one errors as-is.
Take the median element in list A and call it a. Compare a to the central elements in list B; let's call them b1 and b2 (if B has odd length then exactly where you split B depends on your definition of the median of an even-length list, but the procedure is almost identical regardless). If b1 ≤ a ≤ b2 then a is the median of the merged array. This can be done in constant time since it requires exactly two comparisons.
If a is greater than b2 then we add the top half of A to the top of B and repeat. B will no longer be sorted, but it doesn't matter. If a is less than b1 then we add the bottom half of A to the bottom of B and repeat. This will iterate at most log(n) times (if the median is found sooner then stop, of course).
It is possible that this will not find the median; in that case the median is in B. If so, perform the same algorithm with A and B reversed. This will require log(m) iterations. In total you will have performed at most 2*(log(n) + log(m)) iterations of a constant-time operation, so you have solved the problem in order log(n) + log(m) time.
This is essentially the same answer as was given by iehrlich, but written out more explicitly.
Yes, this can be done. Given two arrays, A and B, in the worst case you have to first perform a binary search in A and then, if it fails, a binary search in B looking for the median. On each step of a binary search, you check whether the current element is actually a median of the merged A+B array. Such a check takes constant time.
Let's see why such a check is constant. For simplicity, assume that |A| + |B| is an odd number and that all numbers in both arrays are distinct. You can remove these restrictions later by applying the usual median-definition approach (i.e., how to calculate the median of an array containing duplicates, or of an array with even length). Given that, we know for sure that in the merged array there will be (|A| + |B| - 1) / 2 elements to the right and to the left of the actual median. During the binary search in A, we know the index of the current element x in array A (let it be i). Now, if x satisfies the condition B[j] < x < B[j+1], where i + j == (|A| + |B| - 1) / 2, then x is your median.
The overall complexity is O(log(max(|A|, |B|))) time and O(1) memory.
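For illustration, a small Python sketch of that constant-time check (my own code, using 0-based indexing and the odd-total-length, distinct-values assumptions above):

def is_median(A, B, i):
    # Is A[i] the median of the merged arrays?  Assumes len(A) + len(B) is odd
    # and all values are distinct.
    half = (len(A) + len(B) - 1) // 2   # elements that must lie below the median
    j = half - i                        # how many of them must come from B
    if j < 0 or j > len(B):
        return False
    left_ok = (j == 0) or (B[j - 1] < A[i])
    right_ok = (j == len(B)) or (A[i] < B[j])
    return left_ok and right_ok

# Example: merging [1, 5, 9] and [2, 3, 8, 10] gives [1, 2, 3, 5, 8, 9, 10], median 5.
print(is_median([1, 5, 9], [2, 3, 8, 10], 1))  # True: A[1] == 5

The binary search over i in A then moves toward smaller or larger i depending on which side of the check fails; if no i works, the same search is run over B.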

Counting number of points in lower left quadrant?

I am having trouble understanding a solution to an algorithmic problem
In particular, I don't understand how or why this part of the code
s += a[i];
total += query(s);
update(s);
allows you to compute the total number of points in the lower left quadrant of each point.
Could someone please elaborate?
As an analogue for the plane problem, consider this:
For a point (a, b) to lie in the lower left quadrant of (x, y), we need a < x and b < y; thus, points of the form (i, P[i]) lie in the lower left quadrant of (j, P[j]) iff i < j and P[i] < P[j].
When iterating in ascending order, all points that were considered earlier lie to the left of the current point (i, P[i]).
So one only has to locate all P[j]'s less than P[i] that have been considered so far.
(*"current point" refers to the point under consideration in the current iteration of the for loop that you quoted, i.e., (i, P[i]))
Let's define another array, C[s]:
C[s] = number of prefix sums of array A[1..(i - 1)] that amount to s
So the solution to the last point above becomes the sum ... + C[-2] + C[-1] + C[0] + C[1] + C[2] + ... + C[P[i] - 1], i.e. the prefix sum of C up to P[i] - 1.
Use the BIT to store the prefix sum of C, thus defining query(s) as:
query(s) = Number of Prefix Sums of array A[1..(i - 1)] that amount to a value < s
Using these definitions, s in the given code gives you the prefix sum up to the current index i (P[i]). total builds the answer, and update simply adds P[i] to the BIT.
We have to repeat this method for all i, hence the for loop.
PS: It uses a data structure called a Binary Indexed Tree (http://community.topcoder.com/tc?module=Static&d1=tutorials&d2=binaryIndexedTrees) for operations. If you aren't acquainted with it, I'd recommend that you check the link.
EDIT:
You are given an array S and a value X. You can split S into two disjoint groups: L, which has all elements of S less than X, and H, which has those that are greater than or equal to X.
A: All elements of L are less than all elements of H.
Any subsequence T of S will have some elements of L and some elements of H. Let's say it has p elements of L and q of H. When T is sorted to give T', all p elements of L appear before the q elements of H because of A.
The median, being the central value, is the value at location m = (p + q)/2.
It is intuitive to think that having q > p implies that the median lies in H; as a proof:
Values in locations [1..p] in T' belong to L. Therefore, for the median to be in H, its position m should be greater than p:
m > p
(p + q)/2 > p
p + q > 2p
q > p
B: q - p > 0
To compute q - p, I replace all elements in T' with -1 if they belong to L (< X) and +1 if they belong to H (>= X).
T' now looks something like {-1, -1, -1, ..., 1, 1, 1}
It has p occurrences of -1 and q occurrences of 1. The sum of T' will now give me:
Sum = p * (-1) + q * (1)
C: Sum = q - p
I can use this information to find the value in B.
All subsequences considered are of the form {A[i], A[i + 1], A[i + 2], ..., A[j]}, since they are contiguous. To compute the sum of such a range I can use prefix sums, with P[i] = A[1] + A[2] + ... + A[i].
The sum of the subsequence from A[i + 1] to A[j] can then be computed as P[j] - P[i] (for j > i).
With C and B in mind, we conclude:
Sum = P[j] - P[i] = q - p (q - p > 0)
P[j] - P[i] > 0
P[j] > P[i]
j > i and P[j] > P[i] for each solution that gives you a median >= X
In summary:
Replace every A[i] with -1 if it is less than X and with +1 otherwise
Compute the prefix sums P[i] of the resulting array
For each point (i, P[i]), count the points which lie in its lower-left quadrant.
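Putting the summary together, here is a minimal Python sketch of the whole counting step (the BIT class and the value offset are my own illustration, not the code from the original solution):

class BIT:
    # Binary indexed tree over 1-based positions 1..size, storing counts.
    def __init__(self, size):
        self.tree = [0] * (size + 1)

    def update(self, idx):
        while idx < len(self.tree):
            self.tree[idx] += 1
            idx += idx & -idx

    def query(self, idx):
        # Number of values recorded at positions 1..idx.
        total = 0
        while idx > 0:
            total += self.tree[idx]
            idx -= idx & -idx
        return total

def count_lower_left_pairs(a):
    # a[i] is +1 or -1; count pairs i < j with P[i] < P[j], i.e. points
    # (i, P[i]) lying in the lower-left quadrant of each (j, P[j]).
    n = len(a)
    offset = n + 1                    # shift prefix sums from [-n, n] into [1, 2n+1]
    bit = BIT(2 * n + 1)
    total = 0
    s = 0
    bit.update(s + offset)            # record the empty prefix, P[0] = 0
    for x in a:
        s += x
        total += bit.query(s + offset - 1)   # prefix sums seen so far that are < s
        bit.update(s + offset)
    return total

print(count_lower_left_pairs([+1, -1, +1, +1]))  # 7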
