pairwise distinct left ends in all segments - algorithm

I am given M segments of the form [L, R] over an array of N elements. I need to change these segments in such a way that all segments have pairwise distinct left ends.
Example: suppose we have 5 elements in the array and 4 segments: [1,2], [1,3], [2,4] and [4,5]. After making all the left ends pairwise distinct we have [1,2], [3,3], [2,4] and [4,5]; here all segments have different left ends.

Let's see if I got this. I suggest the following:
You sort all segments according to their right end.
Then you fix the left ends, starting with the smallest right end and working towards larger right ends. Fixing means you replace the current left end with the next available value.
In Python it looks like this:
def fit_intervals(datalist):
    # process intervals in order of increasing right end
    d1 = sorted(datalist, key=lambda x: x[1])
    taken = set()

    def find_next_free(x):
        # linear scan for the next value not yet used as a left end
        while x in taken:
            x = x + 1
        taken.add(x)
        return x

    for interval in d1:
        interval[0] = find_next_free(interval[0])

data = [[4,5], [1,9], [1,2], [1,3], [2,4]]
fit_intervals(data)
print(data)
output: [[4, 5], [5, 9], [1, 2], [2, 3], [3, 4]]
The function find_next_free currently uses a simple linear scan; if necessary, this could certainly be improved.
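One possible improvement (my own sketch, not part of the answer above): replace the taken set with a dictionary of "next free" pointers with path compression, in the spirit of a disjoint-set structure, so that repeatedly probing the same crowded range stays cheap:

next_free = {}   # maps a taken value to the next candidate to try

def find_next_free(x):
    # follow pointers until we reach a value that was never taken
    root = x
    while root in next_free:
        root = next_free[root]
    # path compression: point every visited value directly at the result
    while x in next_free:
        nxt = next_free[x]
        next_free[x] = root
        x = nxt
    # mark the found value as taken by pointing it at its successor
    next_free[root] = root + 1
    return root

fit_intervals can call this find_next_free exactly as before; each call then costs far less than a linear scan over a long occupied run.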

query the number of intersected segments in a range

I have a large dataset of segments (ai, bi), where ai < bi, and many queries. Each query asks for the number of segments intersecting a given range (b, e). The number of queries can be very large. A naive algorithm is to check every segment per query, which takes O(N) time; is there a faster way to do this? I can imagine that sorting the segments in ascending order of ai may help, but I don't know what to do with the other direction.
segments: [1, 3], [2, 6], [4, 7], [7, 8]
query 1: [2, 5] => output [1, 3] [2, 6], [4, 7]
...
Make list B of sorted start points, as you wrote.
Make a list P of structures containing all points, both start and end points, together with a field SE = +1/-1 for a start and an end respectively. Sort it by point coordinate.
Set Active = 0. Walk through P, adding SE to Active, and build a new list A containing the point position and the Active count.
For every query, binary-search the position of the query start in A (the last entry not exceeding it) and read Active: the number of segments already open at that moment.
Then binary-search the indexes in B corresponding to the query start and the query end; the index difference is the number of segments starting inside the query interval.
The sum of these two values is the required number of intersected segments (you don't need the segments themselves, according to the problem statement).
Time per query is O(log N).
[1, 3], [2, 6], [4, 7], [7, 8] initial list
[1, 2, 4, 7] list B
(1,1),(2,1),(3,-1),(4,1),(6,-1),(7,-1),(7,1),(8,-1) list P
(1,1),(2,2),(3,1), (4,2),(6,1), (7,0), (7,1),(8,0) list A
query start 2: the lower-bound search in A lands on (2, 2), so active = 2 (two active intervals)
searching 2 in B gives index 1, searching 5 gives index 2,
difference is 1
result = 2 + 1 = 3
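In Python, a rough sketch of these steps might look like this (my own illustration, not part of the original answer; names like preprocess and count_intersecting are mine, and coincident start/end coordinates are ordered end-before-start, as in list P above):

import bisect

def preprocess(segments):
    starts = sorted(s for s, _ in segments)                    # list B
    # list P: (+1) for a start, (-1) for an end; at equal coordinates ends sort
    # before starts, matching the example (whether that is right for your data
    # depends on whether touching endpoints count as an intersection)
    events = sorted([(s, +1) for s, _ in segments] + [(e, -1) for _, e in segments])
    points, active = [], []                                    # list A
    count = 0
    for x, se in events:
        count += se
        points.append(x)
        active.append(count)
    return starts, points, active

def count_intersecting(starts, points, active, qs, qe):
    i = bisect.bisect_right(points, qs) - 1                    # last event at or before qs
    open_at_start = active[i] if i >= 0 else 0                 # segments already open at qs
    started_inside = bisect.bisect_right(starts, qe) - bisect.bisect_right(starts, qs)
    return open_at_start + started_inside                      # O(log N) per query

B, P, A = preprocess([(1, 3), (2, 6), (4, 7), (7, 8)])
print(count_intersecting(B, P, A, 2, 5))    # 3, as in the worked example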

Ruby count duplicates in diagonal rows of matrix

I'm implementing the gomoku game in Ruby. It is a variation of tic-tac-toe played on a 15x15 board, and the first player who places 5 O's or X's in a horizontal, vertical or diagonal row wins.
First, I assign a Matrix to a variable and fill it with numbers from 0 to 224, so there are no repetitions and I can count them later:
gomoku = Matrix.zero(15)
num = 0
15.times do |i|
  15.times do |j|
    gomoku[i, j] = num
    num += 1
  end
end
Then players take turns, and after every turn I check for a win with the method win?:
def win? matrix
  15.times do |i|
    return true if matrix.row_vectors[i].chunk{|e| e}.map{|_, v| v.length}.max > 4 # thanks to sawa for this way of counting adjacent duplicates
    return true if matrix.column_vectors[i].chunk{|e| e}.map{|_, v| v.length}.max > 4
  end
  return false
end
I know that I'm probably doing it wrong, but that isn't my problem here, though suggestions are welcome. The problem is with the diagonal rows: I don't know how to count duplicates in them.
diagonal_vectors = (-10 .. 10).flat_map do |x|
  i = x < 0 ? 0 : x
  j = x < 0 ? -x : 0
  d = 15 - x.abs
  [
    d.times.map { |k|
      gomoku[i + k, j + k]
    },
    d.times.map { |k|
      gomoku[i + k, 14 - j - k]
    }
  ]
end
With this, you can apply the same test sawa gave you.
EDIT: What this does
When looking at diagonals, there are two kinds: going down-left and going down-right. Let's focus on the down-right ones for now. In a 15x15 matrix, there are 29 down-right diagonals: one starting at each element of the first row and one starting at each element of the first column, taking care not to count the one starting at [0, 0] twice. But some diagonals are too short, so we only want those that start in the first eleven rows and columns (the others are shorter than 5 elements). This is what the first three lines do: [i, j] will be [10, 0], [9, 0] ... [0, 0], [0, 1], ... [0, 10], and d is the length of the diagonal starting at that position.
Then d.times.map { |k| gomoku[i + k, j + k] } collects all the elements in that diagonal. Say we're working on [10, 0]: d is 5, so we have [10, 0], [11, 1], [12, 2], [13, 3], [14, 4], and we collect the values at those coordinates into a list. Simultaneously, we also work on a down-left diagonal; that's the other map's job, which flips one coordinate.
Thus the inner block returns a two-element array holding two diagonals, one down-left and one down-right. flat_map takes care of iterating while squishing the two-element arrays, so that we get one big array of diagonals, not an array of two-element arrays of diagonals.

N-fold partition of an array with equal sum in each partition

Given an array of integers a and two numbers N and M, return N groups of integers from a such that each group sums to M.
For example, say:
a = [1,2,3,4,5]
N = 2
M = 5
Then the algorithm could return [2, 3], [1, 4] or [5], [2, 3] or possibly others.
What algorithms could I use here?
Edit:
I wasn't aware that this problem is NP-complete. So maybe it would help if I provided more details on my specific scenario:
So I'm trying to create a "match-up" application. Given the number of teams N and the number of players per team M, the application listens for client requests. Each client request will give a number of players that the client represents. So if I need 2 teams of 5 players and 5 clients send requests representing 1, 2, 3, 4 and 5 players respectively, then my application should generate a match-up between clients [1, 4] and clients [2, 3]. It could also generate a match-up between [1, 4] and [5]; I don't really care.
One implication is that any client representing more than M or less than 0 players is invalid. Hope this could simplify the problem.
This appears to be a variation of the subset sum problem. As that problem is NP-complete, there will be no efficient algorithm without further constraints.
Note that it is already hard to find a single subset of the original set whose elements sum up to M.
People give up too easily on NP-complete problems. Just because a problem is NP-complete doesn't mean that there aren't more and less efficient algorithms in the general case. That is, you can't guarantee that for all inputs there is an answer that can be computed faster than a brute force search, but for many problems you can certainly have methods that are faster than the full search for most inputs.
For this problem there are certainly 'perverse' sets of numbers that will result in worst-case search times, because there may be, say, a large vector of integers but only one solution, and you end up having to try a very large number of combinations.
But for non-perverse sets, there are probably many solutions, and an efficient way of 'tripping over' a good partitioning will run much faster than an exhaustive search.
How you solve this will depend a lot on what you expect to be the more common parameters. It also makes a difference if the integers are all positive, or if negatives are allowed.
In this case I'll assume that:
N is small relative to the length of the vector
All integers are positive.
Integers cannot be re-used.
Algorithm:
Sort the vector, v.
Eliminate elements bigger than M. They can't be part of any solution.
Add up all remaining numbers in v, divide by N. If the result is smaller than M, there is no solution.
Create a new array w, the same size as v. For each i, let w[i] be the sum of all the numbers in v[i+1..end].
So if v was 5 4 3 2 1, w would be 10, 6, 3, 1, 0.
While you have not found enough sets:
Choose the largest number, x; if it is equal to M, emit a solution set with just x, remove it from the vector, and remove the first element from w.
Still not enough sets? (Likely.) Then, again, while you have not found enough sets:
A solution theory is ([a,b,c], R), where [a,b,c] is a partial set of elements of v and R is a remainder, R = M - sum[a,b,c]. Extending a theory means adding a number to the partial set and subtracting that number from R. As you extend the theories, whenever R == 0 you have a possible solution.
Recursively create theories like so: loop over the elements of v, creating a theory ([v[i]], R) for each v[i]. Then recursively extend each theory from the remaining part of v: binary search into v to find the first element equal to or smaller than R, call it v[j]. Starting with v[j], extend the theory with the elements of v from j onward, stopping at the first index k where R > w[k].
The numbers from v[j] to v[k] are the only numbers that can be used to extend a theory and still get R to 0: numbers larger than v[j] would make R negative, and past v[k] there aren't enough numbers left in the array to bring R to 0 even if you used them all.
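A rough Python sketch of this theory-extension search (my own illustration of the idea above, not the answer author's code; it assumes positive integers used at most once, and uses a linear skip where a binary search into v could be used):

def find_group(v, w, M):
    # v is sorted in descending order; w[i] is the sum of v[i+1:].
    # Returns indices of a subset of v summing to M, or None.
    def extend(start, remaining, chosen):
        if remaining == 0:
            return chosen
        for j in range(start, len(v)):
            if v[j] > remaining:          # would drive R negative; skip it
                continue
            if remaining > v[j] + w[j]:   # even taking everything from j on cannot reach M
                return None
            result = extend(j + 1, remaining - v[j], chosen + [j])
            if result is not None:
                return result
        return None
    return extend(0, M, [])

def n_fold_partition(a, N, M):
    v = sorted((x for x in a if x <= M), reverse=True)   # drop elements bigger than M
    groups = []
    for _ in range(N):
        w = [sum(v[i + 1:]) for i in range(len(v))]      # suffix sums, the array w above
        idxs = find_group(v, w, M)
        if idxs is None:
            return None                                   # could not build another group
        groups.append([v[i] for i in idxs])
        chosen = set(idxs)
        v = [x for i, x in enumerate(v) if i not in chosen]
    return groups

print(n_fold_partition([1, 2, 3, 4, 5], 2, 5))   # e.g. [[5], [4, 1]]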
Here is my own Python solution that uses dynamic programming. The algorithm is given here.
def get_subset(lst, s):
    '''Given a list of integers `lst` and an integer s, returns
    a subset of lst that sums to s, as well as lst minus that subset
    '''
    q = {}
    for i in range(len(lst)):
        for j in range(1, s+1):
            if lst[i] == j:
                q[(i, j)] = (True, [j])
            elif i >= 1 and q[(i-1, j)][0]:
                q[(i, j)] = (True, q[(i-1, j)][1])
            elif i >= 1 and j >= lst[i] and q[(i-1, j-lst[i])][0]:
                q[(i, j)] = (True, q[(i-1, j-lst[i])][1] + [lst[i]])
            else:
                q[(i, j)] = (False, [])
        if q[(i, s)][0]:
            for k in q[(i, s)][1]:
                lst.remove(k)
            return q[(i, s)][1], lst
    return None, lst

def get_n_subset(n, lst, s):
    ''' Returns n subsets of lst, each of which sums to s'''
    solutions = []
    for i in range(n):
        sol, lst = get_subset(lst, s)
        solutions.append(sol)
    return solutions, lst
# print(get_n_subset(7, [1, 2, 3, 4, 5, 7, 8, 4, 1, 2, 3, 1, 1, 1, 2], 5))
# [stdout]: ([[2, 3], [1, 4], [5], [4, 1], [2, 3], [1, 1, 1, 2], None], [7, 8])

Partitioning a superset and getting the list of original sets for each partition

Introduction
While trying to do some categorization on nodes in a graph (which will be rendered differently), I find myself confronted with the following problem:
The Problem
Given a superset of elements S = {0, 1, ... M} and a number n of non-disjoint subsets T_i thereof, with 0 <= i < n, what is the best algorithm to find out the partition of the set S called P?
P partitions the original superset S into disjoint parts P_j, with 0 <= j < M, such that all elements x in a given P_j have the same list of "parents" among the "original" sets T_i.
Example
S = [1, 2, 3, 4, 5, 6, 8, 9]
T_1 = [1, 4]
T_2 = [2, 3]
T_3 = [1, 3, 4]
So all P_js would be:
P_1 = [1, 4] # all elements x have the same list of "parents": T_1, T_3
P_2 = [2] # all elements x have the same list of "parents": T_2
P_3 = [3] # all elements x have the same list of "parents": T_2, T_3
P_4 = [5, 6, 8, 9] # all elements x have the same list of "parents": only S (so they're not in any of the T_i)
Questions
What are good functions/classes in the Python packages to compute all P_js and the list of their "parents", ideally restricted to numpy and scipy? Perhaps there's already a function which does just that.
What is the best algorithm to find those partitions P_j and, for each one, the list of "parents"? Let's denote T_0 = S.
I think the brute force approach would be to generate all 2-combinations of T sets and split each pair into at most 3 disjoint sets, which would be added back to the pool of T sets, and then repeat the process until all resulting Ts are disjoint; at that point we've arrived at our answer, the set of P sets. Keeping track of all the "parents" along the way could be a little problematic.
I suspect a dynamic programming approach could be used to optimize the algorithm.
Note: I would have loved to write the math parts in latex (via MathJax), but unfortunately this is not activated :-(
The following should be linear time (in the number of elements in the Ts).
from collections import defaultdict

S = [1, 2, 3, 4, 5, 6, 8, 9]
T_1 = [1, 4]
T_2 = [2, 3]
T_3 = [1, 3, 4]
Ts = [S, T_1, T_2, T_3]

parents = defaultdict(int)
for i, T in enumerate(Ts):
    for elem in T:
        parents[elem] += 2 ** i

children = defaultdict(list)
for elem, p in parents.items():
    children[p].append(elem)

print(list(children.values()))
Result:
[[5, 6, 8, 9], [1, 4], [2], [3]]
The way I'd do this is to construct an M × n boolean array In where In(i, j) = (S_i ∈ T_j). You can construct that in O(Σ_j |T_j|), provided you can map an element of S onto its integer index in O(1), by scanning all of the sets T and marking the corresponding bit in In.
You can then read the "signature" of each element i directly from In by concatenating row i into a binary number of n bits. The signature is precisely the equivalence relationship of the partition you are seeking.
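For the numpy part of the question, a small sketch of this signature idea (my own illustration; the names inc, index and parts are mine):

import numpy as np

S = [1, 2, 3, 4, 5, 6, 8, 9]
Ts = [[1, 4], [2, 3], [1, 3, 4]]

index = {x: i for i, x in enumerate(S)}          # O(1) map from element to its row
inc = np.zeros((len(S), len(Ts)), dtype=bool)    # the M x n boolean array "In"
for j, T in enumerate(Ts):
    for x in T:
        inc[index[x], j] = True

# read each row as an n-bit signature; equal signatures mean the same "parents"
signatures = inc.astype(int).dot(1 << np.arange(len(Ts)))
parts = {}
for x, sig in zip(S, signatures):
    parts.setdefault(int(sig), []).append(x)
print(list(parts.values()))    # [[1, 4], [2], [3], [5, 6, 8, 9]]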
By the way, I'm in total agreement with you about Math markup. Perhaps it's time to mount a new campaign.

Find the middle element in merged arrays in O(logn)

We have two sorted arrays of the same size n. Let's call them a and b.
How do we find the middle element of the sorted array obtained by merging a and b?
Example:
n = 4
a = [1, 2, 3, 4]
b = [3, 4, 5, 6]
merged = [1, 2, 3, 3, 4, 4, 5, 6]
mid_element = merged[(0 + merged.length - 1) / 2] = merged[3] = 3
More complicated cases:
Case 1:
a = [1, 2, 3, 4]
b = [3, 4, 5, 6]
Case 2:
a = [1, 2, 3, 4, 8]
b = [3, 4, 5, 6, 7]
Case 3:
a = [1, 2, 3, 4, 8]
b = [0, 4, 5, 6, 7]
Case 4:
a = [1, 3, 5, 7]
b = [2, 4, 6, 8]
Time required: O(log n). Any ideas?
Look at the middle of both the arrays. Let's say one value is smaller and the other is bigger.
Discard the lower half of the array with the smaller value. Discard the upper half of the array with the higher value. Now we are left with half of what we started with.
Rinse and repeat until only one element is left in each array. Return the smaller of those two.
If the two middle values are the same, then pick arbitrarily.
Credits: Bill Li's blog
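One way to turn this idea into O(log n) code is to look for the n-th smallest element of the two arrays directly; a Python sketch (my own formulation, not code from the blog credited above; the discard step mirrors the halving described here):

def kth_smallest(a, b, k):
    # k is 1-indexed; a and b are sorted; elements before i / j are already discarded
    i = j = 0
    while True:
        if i == len(a):
            return b[j + k - 1]
        if j == len(b):
            return a[i + k - 1]
        if k == 1:
            return min(a[i], b[j])
        step = k // 2
        ni = min(i + step, len(a)) - 1    # probe position in a
        nj = min(j + step, len(b)) - 1    # probe position in b
        if a[ni] <= b[nj]:
            # a[i..ni] are all among the first k - 1 remaining elements, so discard them
            k -= ni - i + 1
            i = ni + 1
        else:
            k -= nj - j + 1
            j = nj + 1

def middle_element(a, b):
    n = len(a)
    return kth_smallest(a, b, n)          # merged[(2n - 1) // 2] is the n-th smallest

print(middle_element([1, 2, 3, 4], [3, 4, 5, 6]))   # 3
print(middle_element([1, 3, 5, 7], [2, 4, 6, 8]))   # 4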
Quite an interesting task. I'm not sure about O(log n), but an O((log n)^2) solution is obvious to me.
If you know the position of some element in the first array, then you can find how many elements in both arrays are smaller than this value (you already know how many smaller elements are in the first array, and you can count the smaller elements in the second array with a binary search - just sum up those two numbers). So if the number of smaller elements in both arrays is less than N, you should look into the upper half of the first array, otherwise move to the lower half. You get a general binary search with an internal binary search, so the overall complexity will be O((log n)^2).
Note: if you do not find the median in the first array, then start the initial search in the second array. This does not affect the complexity.
So, having
n = 4 and a = [1, 2, 3, 4] and b = [3, 4, 5, 6]
You know the k-th position to look for in the result array in advance, based on n: it is equal to n (counting from 1).
The resulting n-th element could be in the first array or in the second.
Let's first assume that the element is in the first array. Then:
do a binary search, taking the middle element of [l, r]; at the beginning l = 0, r = 3;
taking the middle element, you know how many elements in the same array are smaller, which is middle - 1;
knowing that middle - 1 elements are smaller and that you need the n-th element overall, look at the [n - (middle - 1)]-th element of the second array, which may be smaller or greater than your candidate. If it is greater and the previous element is smaller, the candidate is what you need; if it and the previous element are both greater, set l = middle; if it is smaller, set r = middle.
Then do the same for the second array in case you did not find the solution in the first.
In total, log(n) + log(n).
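A rough Python sketch of the nested binary search described at the start of this answer (my own illustration; to cope with duplicates, the rank check allows a range of merged positions rather than a single one):

import bisect

def middle_via_nested_search(a, b):
    n = len(a)
    t = n - 1                                # 0-indexed position of merged[(2n - 1) // 2]

    def search(first, second):
        lo, hi = 0, len(first) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            x = first[mid]
            # possible 0-indexed positions of first[mid] in the merged order;
            # ties with elements of `second` give a whole range of positions
            lo_rank = mid + bisect.bisect_left(second, x)
            hi_rank = mid + bisect.bisect_right(second, x)
            if lo_rank <= t <= hi_rank:
                return x
            elif hi_rank < t:
                lo = mid + 1
            else:
                hi = mid - 1
        return None                          # the middle element is not in `first`

    result = search(a, b)
    return result if result is not None else search(b, a)

print(middle_via_nested_search([1, 2, 3, 4], [3, 4, 5, 6]))   # 3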
