segment overlapping regions into disjoint regions - algorithm

Given a set of closed regions [a,b] where a and b are integers I need to find another set of regions that cover the same numbers but are disjoint.
I suppose it is possible to do naively by iterating through the set several times, but I am looking for a recommendation of a good algorithm for this. Please help.
EDIT:
To clarify: the resulting regions cannot be larger than the original ones; I have to come up with disjoint regions that are contained in the original ones. In other words, I need to split the original regions at the boundaries where they overlap.
example:
3,8
1,4
7,9
11,14
result:
1,3
3,4
4,7
7,8
8,9
11,14

Just sort all endpoints left to right (remember their type: start or end). Sweep left to right. Keep a counter starting at 0. Whenever you come across a start: increment the counter. When you come across an end: decrement it (the counter never drops below 0).
Keep track of the last two points. If the counter was greater than zero just before the current point - and the last two points are different (to prevent empty ranges) - add the interval between the last two points.
Pseudocode:
points = all interval endpoints
sort(points)
previous = points[0]
counter = 1
for (int i = 1; i < #points; i++) {
    current = points[i]
    if (counter > 0 and previous != current)
        add (previous, current) to output
    if (current was start point)
        counter++
    else
        counter--
    previous = current
}
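A runnable Python version of the same sweep (the function name is mine), for reference:

def split_regions(regions):
    if not regions:
        return []
    # one (value, is_end) pair per endpoint; ties put starts (0) before ends (1)
    points = sorted((v, is_end)
                    for a, b in regions
                    for v, is_end in ((a, 0), (b, 1)))
    result = []
    previous = points[0][0]
    counter = 1  # points[0] is always a start after sorting
    for value, is_end in points[1:]:
        if counter > 0 and previous != value:
            result.append((previous, value))
        counter += -1 if is_end else 1
        previous = value
    return result

print(split_regions([(3, 8), (1, 4), (7, 9), (11, 14)]))
# [(1, 3), (3, 4), (4, 7), (7, 8), (8, 9), (11, 14)]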

(This is a modification of an answer that I posted earlier today which I deleted after I discovered it had a logic error. I later realized that I could modify Vincent van der Weele's elegant idea of using parenthesis depth to fix the bug)
On Edit: Modified to be able to accept intervals of length 0
Call an interval [a,a] of length 0 essential if a doesn't also appear as an endpoint of any interval of length > 0. For example, in [1,3], [2,2], [3,3], [4,4] the 0-length intervals [2,2] and [4,4] are essential but [3,3] isn't.
Inessential 0-length intervals are redundant and thus need not appear in the final output. When the list of intervals is initially scanned (loading the basic data structures), points corresponding to 0-length intervals are recorded, as are the endpoints of intervals of length > 0. When the scan is completed, two instances of each point corresponding to an essential 0-length interval are added to the list of endpoints, which is then sorted. The resulting data structure is a multiset where the only repetitions correspond to essential 0-length intervals.
For every endpoint in the list define the pdelta (parentheses delta) of the endpoint as the number of times that point appears as a left endpoint minus the number of times it appears as a right endpoint. Store these in a dictionary keyed by the endpoints.
[a,b], where a,b are the first two elements of the list of endpoints, is the first interval in the list of disjoint intervals. Define the parentheses depth of b to be the sum of pdelta[a] and pdelta[b]. We loop through the rest of the endpoints as follows:
In each pass through the loop, look at the parentheses depth of b. If it is not 0 then b is still needed for one more interval. Let a = b and let the new b be the next value in the list. Adjust the parentheses depth by the pdelta of the new b and add [a,b] to the list of disjoint intervals. Otherwise (if the parentheses depth of b is 0) let the next [a,b] be the next two points in the list and adjust the parentheses depth accordingly.
Here is a Python implementation:
def disjointify(intervals):
    if len(intervals) == 0: return []
    pdelta = {}
    ends = set()
    disjoints = []
    onePoints = set()  #onePoint intervals
    for (a,b) in intervals:
        if a == b:
            onePoints.add(a)
            if not a in pdelta: pdelta[a] = 0
        else:
            ends.add(a)
            ends.add(b)
            pdelta[a] = pdelta.setdefault(a,0) + 1
            pdelta[b] = pdelta.setdefault(b,0) - 1
    onePoints.difference_update(ends)
    ends = list(ends)
    for a in onePoints:
        ends.extend([a,a])
    ends.sort()
    a = ends[0]
    b = ends[1]
    pdepth = pdelta[a] + pdelta[b]
    i = 1
    disjoints.append((a,b))
    while i < len(ends) - 1:
        if pdepth != 0:
            a = b
            b = ends[i+1]
            pdepth += pdelta[b]
            i += 1
        else:
            a = ends[i+1]
            b = ends[i+2]
            pdepth += (pdelta[a] + pdelta[b])
            i += 2
        disjoints.append((a,b))
    return disjoints
Sample output which illustrates various edge cases:
>>> example = [(1,1), (1,4), (2,2), (4,4),(5,5), (6,8), (7,9), (10,10)]
>>> disjointify(example)
[(1, 2), (2, 2), (2, 4), (5, 5), (6, 7), (7, 8), (8, 9), (10, 10)]
>>> disjointify([(1,1), (2,2)])
[(1, 1), (2, 2)]
(I am using Python tuples to represent the closed intervals even though it has the minor drawback of looking like the standard mathematical notation for open intervals).
A final remark: referring to the result as a collection of disjoint intervals might not be accurate, since some of these intervals have nonempty (albeit 1-point) intersections.


Find optimal points to cut a set of intervals

Given a set of intervals on the real line and some parameter d > 0, find a sequence of points with gaps between neighbors less than or equal to d, such that the number of intervals that contain any of the points is minimized.
To prevent trivial solutions we ask that the first point of the sequence is before the first interval and the last point is after the last interval. The intervals can be thought of as right-open.
Does this problem have a name? Maybe even an algorithm and a complexity bound?
Some background:
This is motivated by a question from topological data analysis, but it seems so general that it could be interesting for other topics, e.g. task scheduling (given a factory that has to shut down at least once a year and wants to minimize the number of tasks affected by the maintenance...).
We were thinking of integer programming and minimum cuts, but the d-parameter does not quite fit. We also implemented approximate greedy solutions in n^2 and n*logn time, but they can run into very bad local optima.
Show me a picture
I draw intervals by lines. The following diagram shows 7 intervals. d is such that you have to cut at least every fourth character. At the bottom of the diagram you see two solutions (marked with x and y) to the diagram. x cuts through the four intervals in the top, whereas y cuts through the three intervals at the bottom. y is optimal.
[ASCII diagram, alignment lost in extraction: two rows of two intervals at the top (the four intervals that the x points cut) and three identical intervals below them (the three intervals that the y points cut), with the x and y cut positions marked underneath.]
Show me some code:
How should we define fun in the following snippet?
intervals = [(0, 1), (0.5, 1.5), (0.5, 1.5)]
d = 1.1
fun(intervals, d)
>>> [-0.55, 0.45, 1.55] # Or something close to it
In this small example the optimal solution will cut the first interval, but not the second and third. Obviously, the algorithm should work with more complicated examples as well.
A tougher test can be the following: Given a uniform distribution of interval start times on [0, 100] and lengths uniform on [0, d], one can compute the expected number of cuts by a regular grid [0, d, 2d, 3d,..] to be slightly below 0.5*n. And the optimal solution should be better:
import numpy as np

n = 10000
delta = 1
starts = np.random.uniform(low=0., high=99, size=n)
lengths = np.random.uniform(low=0., high=1, size=n)
rand_intervals = np.array([starts, starts + lengths]).T
regular_grid = np.arange(0, 101, 1)
optimal_grid = fun(rand_intervals, delta)

# This computes the number of intervals being cut by one of the points
def cuts(intervals, grid):
    bins = np.digitize(intervals, grid)
    return sum(bins[:, 0] != bins[:, 1])

cuts(rand_intervals, regular_grid)
>>> 4987 # Expected to be slightly below 0.5*n
assert cuts(rand_intervals, optimal_grid) <= cuts(rand_intervals, regular_grid)
You can solve this optimally through dynamic programming by maintaining an array S[k], where S[k] is the best partial solution (the one covering the largest amount of space) among those having k intervals that contain a point. Then you can repeatedly remove your lowest S[k], extend it in all possible ways (limiting yourself to the relevant endpoints of intervals plus the last point in S[k] + delta), and update S with those new possible solutions.
When the lowest possible S[k] in your table covers the entire range, you are done.
A Python 3 solution using intervaltree from pip:
from intervaltree import Interval, IntervalTree

def optimal_points(intervals, d, epsilon=1e-9):
    intervals = [Interval(lr[0], lr[1]) for lr in intervals]
    tree = IntervalTree(intervals)
    start = min(iv.begin for iv in intervals)
    stop = max(iv.end for iv in intervals)
    # The best partial solution with k intervals containing a point.
    # We also store the intervals that these points are contained in as a set.
    sols = {0: ([start], set())}
    while True:
        lowest_k = min(sols.keys())
        s, contained = sols.pop(lowest_k)
        # print(lowest_k, s[-1]) # For tracking progress in slow instances.
        if s[-1] >= stop:
            return s
        relevant_intervals = tree[s[-1]:s[-1] + d]
        relevant_points = [iv.begin - epsilon for iv in relevant_intervals]
        relevant_points += [iv.end + epsilon for iv in relevant_intervals]
        extensions = {s[-1] + d} | {p for p in relevant_points if s[-1] < p < s[-1] + d}
        for ext in sorted(extensions, reverse=True):
            new_s = s + [ext]
            new_contained = set(tree[ext]) | contained
            new_k = len(new_contained)
            if new_k not in sols or new_s[-1] > sols[new_k][0][-1]:
                sols[new_k] = (new_s, new_contained)
If the range and precision are feasible to iterate over, we could first merge and count the intervals. For example,
[(0, 1), (0.5, 1.5), (0.5, 1.5)] ->
[(0, 0.5, 1), (0.5, 1, 3), (1, 1.5, 2)]
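A small sketch of this merge-and-count step (my own helper name; it assumes the interval endpoints themselves are the only breakpoints):

def merge_and_count(intervals):
    points = sorted({p for iv in intervals for p in iv})
    merged = []
    for a, b in zip(points, points[1:]):
        covering = sum(1 for lo, hi in intervals if lo <= a and b <= hi)
        merged.append((a, b, covering))
    return merged

print(merge_and_count([(0, 1), (0.5, 1.5), (0.5, 1.5)]))
# [(0, 0.5, 1), (0.5, 1, 3), (1, 1.5, 2)]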
Now let f(n, k) represent the optimal solution with k points up to n on the number line. Then:
f(n, k) = min over i of ( num_intervals(n) + f(n - i, k - 1) )
num_intervals(n) is known in O(1) from a pointer into the merged interval list.
n - i does not range over every precision point up to n. Rather, it ranges over every point, not more than d back, that marks a change from one merged interval to the next as we move back from our current pointer in the merged-interval list.
One issue to note is that we need to store the distance between the rightmost and previous point for any optimal f(n, k). This is to avoid joining f(n - i, k - 1) where the second to rightmost point would be less than d away from our current n, making the new middle point, n - i, superfluous and invalidating this solution. (I'm not sure I've thought this issue through enough. Perhaps someone could point out something that's amiss.)
How would we know k is high enough? Given that the optimal solution may be lower than the current k, we assume that the recurrence would prevent us from finding an instance based on the idea in the above paragraph:
[The question's diagram redrawn on a 0..8 scale, alignment lost: two rows of two intervals on top, three single intervals below, with the x and y cut points underneath.]
d = 4
merged list:
[(1, 3, 2), (3, 4, 5), (4, 5, 3), (5, 6, 5), (6, 8, 2)]
f(4, 2) = (3, 0) // (intersections, previous point)
f(8, 3) = (3, 4)
There are no valid solutions for f(8, 4) since the break point we may consider between interval changes in the merged list is before the second-to-last point in f(8, 3).

What is the most efficient algorithm/data structure for finding the smallest range containing a point?

Given a data set of a few millions of price ranges, we need to find the smallest range that contains a given price.
The following rules apply:
Ranges can be fully nested (ie, 1-10 and 5-10 is valid)
Ranges cannot be partially nested (ie, 1-10 and 5-15 is invalid)
Example:
Given the following price ranges:
1-100
50-100
100-120
5-10
5-20
The result for searching price 7 should be 5-10
The result for searching price 100 should be 100-120 (smallest range containing 100).
What's the most efficient algorithm/data structure to implement this?
Searching the web, I only found solutions for searching ranges within ranges.
I've been looking at Morton codes and the Hilbert curve, but can't wrap my head around how to use them for this case.
Thanks.
Because you did not mention this ad hoc algorithm, I'll propose it as a simple answer to your question:
This is a Python function, but it's fairly easy to understand and convert to another language.
def min_range(ranges, value):
    # ranges = [(1, 100), (50, 100), (100, 120), (5, 10), (5, 20)]
    # value = 100

    # INIT
    import math
    best_range = None
    best_range_len = math.inf

    # LOOP THROUGH ALL RANGES
    for b, e in ranges:
        # PICK THE SMALLEST
        if b <= value <= e and e - b < best_range_len:
            best_range = (b, e)
            best_range_len = e - b

    print(f'Minimal range containing {value} = {best_range}')
I believe there are more efficient and complicated solutions (if you can do some precomputation for example) but this is the first step you must take.
EDIT: Here is a better solution, probably O(log(n)), but it's not trivial. It is a tree where each node is an interval and has a child list of all strictly non-overlapping intervals that are contained inside it.
Preprocessing is done in O(n log(n)) time, and queries are O(n) in the worst case (when you can't find 2 ranges that don't overlap) and probably O(log(n)) on average.
2 classes: Tree that holds the tree and can query:
class tree:
    def __init__(self, ranges):
        # sort the ranges by lowest starting and then greatest ending
        ranges = sorted(ranges, key=lambda i: (i[0], -i[1]))
        # recursive building -> might want to optimize that in python
        self.node = node((-float('inf'), float('inf')), ranges)

    def __str__(self):
        return str(self.node)

    def query(self, value):
        # bisect is for binary search
        import bisect
        curr_sol = self.node.inter
        node_list = self.node.child_list
        while True:
            # which of the child ranges can include our value ?
            i = bisect.bisect_left(node_list, (value, float('inf'))) - 1
            # does it include it ?
            if i < 0 or i == len(node_list):
                return curr_sol
            if value > node_list[i].inter[1]:
                return curr_sol
            else:
                # if it does then go deeper
                curr_sol = node_list[i].inter
                node_list = node_list[i].child_list
Node that holds the structure and information:
class node:
    def __init__(self, inter, ranges):
        # all elements in ranges will be descendants of this node !
        import bisect
        self.inter = inter
        self.child_list = []

        for i, r in enumerate(ranges):
            if len(self.child_list) == 0:
                # append a new child when list is empty
                self.child_list.append(node(r, ranges[i + 1:bisect.bisect_left(ranges, (r[1], r[1] - 1))]))
            else:
                # the current range r is included in a previous range
                # r is not a child of self but a descendant !
                if r[0] < self.child_list[-1].inter[1]:
                    continue
                # else -> this is a new child
                self.child_list.append(node(r, ranges[i + 1:bisect.bisect_left(ranges, (r[1], r[1] - 1))]))

    def __str__(self):
        # fancy
        return f'{self.inter} : [{", ".join([str(n) for n in self.child_list])}]'

    def __lt__(self, other):
        # this is the '<' operator -> for bisect to compare our items
        return self.inter < other
and to test that:
ranges = [(1, 100), (50, 100), (100, 120), (5, 10), (5, 20), (50, 51)]
t = tree(ranges)
print(t)
print(t.query(10))
print(t.query(5))
print(t.query(40))
print(t.query(50))
Preprocessing that generates disjoint intervals
(I call the source segments "ranges" and the resulting segments "intervals".)
For every range border (both start and end) make a tuple (value, start/end field, range length, id) and put them in an array/list.
Sort these tuples by the first field. In case of a tie, put the longer range to the left for starts and to the right for ends.
Make a stack.
Make a StartValue variable.
Walk through the list:
    if current tuple contains start:
        if interval is opened: //we close it
            if current value > StartValue: //interval is not empty
                make interval with //note id remains in stack
                    (start = StartValue, end = current value, id = stack.peek)
                add interval to result list
        StartValue = current value //we open new interval
        push id from current tuple onto stack
    else: //end of range
        if current value > StartValue: //interval is not empty
            make interval with //note id is removed from stack
                (start = StartValue, end = current value, id = stack.pop)
            add interval to result list
        if stack is not empty:
            StartValue = current value //we open new interval
After that we have a sorted list of disjoint intervals containing start/end values and the id of the source range (note that many intervals might correspond to the same source range), so we can use binary search easily.
If we add source ranges one by one in nested order (each nested range after its parent), we can see that every new range generates at most two new intervals, so the overall number of intervals is M <= 2*N and the overall complexity is O(N log N + Q log N), where Q is the number of queries.
Edit:
Added if stack is not empty section
Result for your example 1-100, 50-100, 100-120, 5-10, 5-20 is
1-5(0), 5-10(3), 10-20(4), 20-50(0), 50-100(1), 100-120(2)
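As a sketch of the query step (my own code, not part of the answer above), binary-search the starts of the preprocessed intervals from the example; a boundary value is resolved to the interval that starts at it:

import bisect

# (start, end, source-range id) intervals, as in the example result above
disjoint = [(1, 5, 0), (5, 10, 3), (10, 20, 4), (20, 50, 0), (50, 100, 1), (100, 120, 2)]
starts = [iv[0] for iv in disjoint]

def query(price):
    i = bisect.bisect_right(starts, price) - 1   # last interval starting at or before price
    if i >= 0 and price <= disjoint[i][1]:
        return disjoint[i][2]                    # id of the smallest source range containing price
    return None

print(query(7))    # 3  -> source range 5-10
print(query(100))  # 2  -> source range 100-120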
Since pLOPeGG already covered the ad hoc case, I will answer the question under the premise that preprocessing is performed in order to support multiple queries efficiently.
General data structures for efficient queries on intervals are the Interval Tree and the Segment Tree
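Purely as an illustration (my own sketch, not from the answer above), the intervaltree package used in an earlier answer already gives stabbing queries out of the box; note that it treats intervals as half-open:

from intervaltree import IntervalTree

tree = IntervalTree.from_tuples([(1, 100), (50, 100), (100, 120), (5, 10), (5, 20)])

def smallest_containing(price):
    hits = tree[price]  # all stored ranges containing the price
    return min(hits, key=lambda iv: iv.end - iv.begin) if hits else None

print(smallest_containing(7))    # Interval(5, 10)
print(smallest_containing(100))  # Interval(100, 120)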
What about an approach like this? Since we only allow nesting and not partial overlap, this looks to be a doable approach.
Split segments into (left,val) and (right,val) pairs.
Order them with respect to their vals and left/right relation.
Search the list with binary search. We get two outcomes not found and found.
If found check if it is a left or right. If it is a left go right until you find a right without finding a left. If it is a right go left until you find a left without finding a right. Pick the smallest.
If not found stop when the high-low is 1 or 0. Then compare the queried value with the value of the node you are at and then according to that search right and left to it just like before.
As an example;
We would have (l,10) (l,20) (l,30) (r,45) (r,60) (r,100). When searching for, say, 65 you land on (r,100), so you go left and can't find an (l,x) such that x >= 65, so you keep going left until the lefts and rights balance; the first right and the last left give your interval. The preprocessing part will be long, but since you keep the result around, that is acceptable. Queries are still O(n) in the worst case, but that worst case requires everything to be nested inside each other and you searching for the outermost range.

How to generate a pseudo-random involution?

For generating a pseudo-random permutation, the Knuth shuffle can be used. An involution is a self-inverse permutation and I guess I could adapt the shuffle by forbidding touching an element multiple times. However, I'm not sure whether I could do it efficiently and whether it would generate every involution equiprobably.
I'm afraid an example is needed: On a set {0,1,2}, there are 6 permutations, out of which 4 are involutions. I'm looking for an algorithm generating one of them at random with the same probability.
A correct but very inefficient algorithm would be: Use Knuth shuffle, retry if it's no involution.
Let's here use a(n) as the number of involutions on a set of size n (as OEIS does). For a given set of size n and a given element in that set, the total number of involutions on that set is a(n). That element must either be unchanged by the involution or be swapped with another element. The number of involutions that leave our element fixed is a(n-1), since those are involutions on the other elements. Therefore a uniform distribution on the involutions must have a probability of a(n-1)/a(n) of keeping that element fixed. If it is to be fixed, just leave that element alone. Otherwise, choose another element that has not yet been examined by our algorithm to swap with our element. We have just decided what happens with one or two elements in the set: keep going and decide what happens with one or two elements at a time.
To do this, we need a list of the counts of involutions for each i <= n, but that is easily done with the recursion formula
a(i) = a(i-1) + (i-1) * a(i-2)
(Note that this formula from OEIS also comes from my algorithm: the first term counts the involutions keeping the first element where it is, and the second term is for the elements that are swapped with it.) If you are working with involutions, this is probably important enough to break out into another function, precompute some smaller values, and cache the function's results for greater speed, as in this code:
# Counts of involutions (self-inverse permutations) for each size
_invo_cnts = [1, 1, 2, 4, 10, 26, 76, 232, 764, 2620, 9496, 35696, 140152]
def invo_count(n):
    """Return the number of involutions of size n and cache the result."""
    for i in range(len(_invo_cnts), n+1):
        _invo_cnts.append(_invo_cnts[i-1] + (i-1) * _invo_cnts[i-2])
    return _invo_cnts[n]
We also need a way to keep track of the elements that have not yet been decided, so we can efficiently choose one of those elements with uniform probability and/or mark an element as decided. We can keep them in a shrinking list, with a marker to the current end of the list. When we decide an element, we move the current element at the end of the list to replace the decided element then reduce the list. With that efficiency, the complexity of this algorithm is O(n), with one random number calculation for each element except perhaps the last. No better order complexity is possible.
Here is code in Python 3.5.2. The code is somewhat complicated by the indirection involved through the list of undecided elements.
from random import randrange
def randinvolution(n):
    """Return a random (uniform) involution of size n."""
    # Set up main variables:
    # -- the result so far as a list
    involution = list(range(n))
    # -- the list of indices of unseen (not yet decided) elements.
    #    unseen[0:cntunseen] are unseen/undecided elements, in any order.
    unseen = list(range(n))
    cntunseen = n

    # Make an involution, progressing one or two elements at a time
    while cntunseen > 1:  # if only one element remains, it must be fixed
        # Decide whether current element (index cntunseen-1) is fixed
        if randrange(invo_count(cntunseen)) < invo_count(cntunseen - 1):
            # Leave the current element as fixed and mark it as seen
            cntunseen -= 1
        else:
            # In involution, swap current element with another not yet seen
            idxother = randrange(cntunseen - 1)
            other = unseen[idxother]
            current = unseen[cntunseen - 1]
            involution[current], involution[other] = (
                involution[other], involution[current])
            # Mark both elements as seen by removing from start of unseen[]
            unseen[idxother] = unseen[cntunseen - 2]
            cntunseen -= 2

    return involution
I did several tests. Here is the code I used to check for validity and uniform distribution:
def isinvolution(p):
    """Flag if a permutation is an involution."""
    return all(p[p[i]] == i for i in range(len(p)))

# test the validity and uniformness of randinvolution()
n = 4
cnt = 10 ** 6
distr = {}
for j in range(cnt):
    inv = tuple(randinvolution(n))
    assert isinvolution(inv)
    distr[inv] = distr.get(inv, 0) + 1

print('In {} attempts, there were {} random involutions produced,'
      ' with the distribution...'.format(cnt, len(distr)))
for x in sorted(distr):
    print(x, str(distr[x]).rjust(2 + len(str(cnt))))
And the results were
In 1000000 attempts, there were 10 random involutions produced, with the distribution...
(0, 1, 2, 3) 99874
(0, 1, 3, 2) 100239
(0, 2, 1, 3) 100118
(0, 3, 2, 1) 99192
(1, 0, 2, 3) 99919
(1, 0, 3, 2) 100304
(2, 1, 0, 3) 100098
(2, 3, 0, 1) 100211
(3, 1, 2, 0) 100091
(3, 2, 1, 0) 99954
That looks pretty uniform to me, as do other results I checked.
An involution is a one-to-one mapping that is its own inverse. Any cipher is a one-to-one mapping; it has to be in order for a ciphertext to be unambiguously decrypted.
For an involution you need a cipher that is its own inverse. Such ciphers exist, ROT13 is an example. See Reciprocal Cipher for some others.
For your question I would suggest an XOR cipher. Pick a random key at least as long as the longest piece of data in your initial data set. If you are using 32 bit numbers, then use a 32 bit key. To permute, XOR the key with each piece of data in turn. The reverse permutation (equivalent to decrypting) is exactly the same XOR operation and will get back to the original data.
This will solve the mathematical problem, but it is most definitely not cryptographically secure. Repeatedly using the same key will allow an attacker to discover the key. I assume that there is no security requirement over and above the need for a random-seeming involution with an even distribution.
ETA: This is a demo, in Java, of what I am talking about in my second comment. Being Java, I use indexes 0..12 for your 13 element set.
public static void Demo() {
    final int key = 0b1001;
    System.out.println("key = " + key);
    System.out.println();

    for (int i = 0; i < 13; ++i) {
        System.out.print(i + " -> ");
        int ctext = i ^ key;
        while (ctext >= 13) {
            System.out.print(ctext + " -> ");
            ctext = ctext ^ key;
        }
        System.out.println(ctext);
    }
} // end Demo()
The output from the demo is:
key = 9
0 -> 9
1 -> 8
2 -> 11
3 -> 10
4 -> 13 -> 4
5 -> 12
6 -> 15 -> 6
7 -> 14 -> 7
8 -> 1
9 -> 0
10 -> 3
11 -> 2
12 -> 5
Where a transformed value would fall off the end of the array, it is transformed again until it falls within the array. I am not sure whether such a while construction falls within the strict mathematical definition of a function.

Disperse Duplicates in an Array

Source : Google Interview Question
Write a routine to ensure that identical elements in the input are maximally spread in the output?
Basically, we need to place the identical elements in such a way that the TOTAL spread is as large as possible.
Example:
Input: {1,1,2,3,2,3}
Possible Output: {1,2,3,1,2,3}
Total dispersion = difference between positions of the 1's + 2's + 3's = (4-1) + (5-2) + (6-3) = 9.
I am NOT AT ALL sure if there's an optimal polynomial-time algorithm available for this. Also, no other detail is provided for the question other than this.
What I thought is: calculate the frequency of each element in the input, then arrange them in the output, one distinct element at a time, until all the frequencies are exhausted.
I am not sure of my approach.
Any approaches/ideas, people?
I believe this simple algorithm would work:
count the number of occurrences of each distinct element.
make a new list
add one instance of all elements that occur more than once to the list (order within each group does not matter)
add one instance of all unique elements to the list
add one instance of all elements that occur more than once to the list
add one instance of all elements that occur more than twice to the list
add one instance of all elements that occur more than thrice to the list
...
Now, this will intuitively not give a good spread:
for {1, 1, 1, 1, 2, 3, 4} ==> {1, 2, 3, 4, 1, 1, 1}
for {1, 1, 1, 2, 2, 2, 3, 4} ==> {1, 2, 3, 4, 1, 2, 1, 2}
However, I think this is the best spread you can get given the scoring function provided.
Since the dispersion score counts the sum of the distances instead of the squared sum of the distances, you can have several duplicates close together, as long as you have a large gap somewhere else to compensate.
for a sum-of-squared-distances score, the problem becomes harder.
Perhaps the interview question hinged on the candidate recognizing this weakness in the scoring function?
In perl
@a=(9,9,9,2,2,2,1,1,1);
then make a hash table of the counts of different numbers in the list, like a frequency table
map { $x{$_}++ } @a;
then repeatedly walk through all the keys found, with the keys in a known order and add the appropriate number of individual numbers to an output list until all the keys are exhausted
@r = ();
$g = 1;
while ($g == 1) {
    $g = 0;
    for my $n (sort keys %x) {
        if ($x{$n} > 0) {
            push @r, $n;
            $x{$n}--;
            $g = 1;
        }
    }
}
I'm sure that this could be adapted to any programming language that supports hash tables
Python code for the algorithm suggested by Vorsprung and HugoRune:
from collections import Counter, defaultdict

def max_spread(data):
    cnt = Counter()
    for i in data: cnt[i] += 1
    res, num = [], list(cnt)
    while len(cnt) > 0:
        for i in num:
            if cnt[i] > 0:
                res.append(i)
                cnt[i] -= 1
                if cnt[i] == 0: del cnt[i]
    return res

def calc_spread(data):
    d = defaultdict()
    for i, v in enumerate(data):
        d.setdefault(v, []).append(i)
    return sum([max(x) - min(x) for _, x in d.items()])
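For example, on the question's input (the output order depends on the order in which values are first seen):

data = [1, 1, 2, 3, 2, 3]
out = max_spread(data)
print(out, calc_spread(out))  # [1, 2, 3, 1, 2, 3] 9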
HugoRune's answer takes some advantage of the unusual scoring function but we can actually do even better: suppose there are d distinct non-unique values, then the only thing that is required for a solution to be optimal is that the first d values in the output must consist of these in any order, and likewise the last d values in the output must consist of these values in any (i.e. possibly a different) order. (This implies that all unique numbers appear between the first and last instance of every non-unique number.)
The relative order of the first copies of non-unique numbers doesn't matter, and likewise nor does the relative order of their last copies. Suppose the values 1 and 2 both appear multiple times in the input, and that we have built a candidate solution obeying the condition I gave in the first paragraph that has the first copy of 1 at position i and the first copy of 2 at position j > i. Now suppose we swap these two elements. Element 1 has been pushed j - i positions to the right, so its score contribution will drop by j - i. But element 2 has been pushed j - i positions to the left, so its score contribution will increase by j - i. These cancel out, leaving the total score unchanged.
Now, any permutation of elements can be achieved by swapping elements in the following way: swap the element in position 1 with the element that should be at position 1, then do the same for position 2, and so on. After the ith step, the first i elements of the permutation are correct. We know that every swap leaves the scoring function unchanged, and a permutation is just a sequence of swaps, so every permutation also leaves the scoring function unchanged! This is true for the d elements at both ends of the output array.
When 3 or more copies of a number exist, only the position of the first and last copy contribute to the distance for that number. It doesn't matter where the middle ones go. I'll call the elements between the 2 blocks of d elements at either end the "central" elements. They consist of the unique elements, as well as some number of copies of all those non-unique elements that appear at least 3 times. As before, it's easy to see that any permutation of these "central" elements corresponds to a sequence of swaps, and that any such swap will leave the overall score unchanged (in fact it's even simpler than before, since swapping two central elements does not even change the score contribution of either of these elements).
This leads to a simple O(n log n) algorithm (or O(n) if you use bucket sort for the first step) to generate a solution array Y from a length-n input array X; a small runnable sketch follows the steps below:
Sort the input array X.
Use a single pass through X to count the number of distinct non-unique elements. Call this d.
Set i, j and k to 0.
While i < n:
    If X[i+1] == X[i], we have a non-unique element:
        Set Y[j] = Y[n-j-1] = X[i].
        Increment i twice, and increment j once.
        While X[i] == X[i-1]:
            Set Y[d+k] = X[i].
            Increment i and k.
    Otherwise we have a unique element:
        Set Y[d+k] = X[i].
        Increment i and k.
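A runnable sketch of these steps (my own code and helper name):

from collections import Counter

def two_ends_spread(data):
    X = sorted(data)
    n = len(X)
    d = sum(1 for c in Counter(X).values() if c > 1)  # distinct non-unique elements
    Y = [None] * n
    i = j = k = 0
    while i < n:
        if i + 1 < n and X[i + 1] == X[i]:
            # non-unique element: first copy near the front, last copy near the back
            Y[j] = Y[n - j - 1] = X[i]
            i += 2
            j += 1
            # third and later copies go into the middle
            while i < n and X[i] == X[i - 1]:
                Y[d + k] = X[i]
                i += 1
                k += 1
        else:
            # unique element goes into the middle
            Y[d + k] = X[i]
            i += 1
            k += 1
    return Y

print(two_ends_spread([1, 1, 2, 3, 2, 3]))     # [1, 2, 3, 3, 2, 1], total dispersion 9
print(two_ends_spread([1, 1, 1, 1, 2, 3, 4]))  # [1, 1, 1, 2, 3, 4, 1]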

algorithm to find longest non-overlapping sequences

I am trying to find the best way to solve the following problem. By best way I mean less complex.
As an input a list of tuples (start,length) such:
[(0,5),(0,1),(1,9),(5,5),(5,7),(10,1)]
Each element represents a sequence by its start and length; for example (5,7) is equivalent to the sequence (5,6,7,8,9,10,11) - a list of 7 elements starting at 5. One can assume that the tuples are sorted by the start element.
The output should return a non-overlapping combination of tuples that represent the longest continuous sequence(s). This means that a solution is a subset of ranges with no overlaps and no gaps that is the longest possible - there could be more than one, though.
For example for the given input the solution is:
[(0,5),(5,7)] equivalent to (0,1,2,3,4,5,6,7,8,9,10,11)
Is backtracking the best approach to solve this problem?
I'm interested in any different approaches that people could suggest.
Also if anyone knows a formal reference of this problem or another one that is similar I'd like to get references.
BTW - this is not homework.
Edit
Just to avoid some mistakes this is another example of expected behaviour
for an input like [(0,1),(1,7),(3,20),(8,5)] the right answer is [(3,20)], equivalent to (3,4,5,...,22) with length 20. Some of the answers received would give [(0,1),(1,7),(8,5)], equivalent to (0,1,2,...,11,12), as the right answer. But this last answer is not correct, because it is shorter than [(3,20)].
Iterate over the list of tuples using the given ordering (by start element), while using a hashmap to keep track of the length of the longest continuous sequence ending on a certain index.
pseudo-code, skipping details like items not found in a hashmap (assume 0 returned if not found):
int bestEnd = 0;
hashmap<int,int> seq // seq[key] = length of the longest sequence ending on key-1, or 0 if not found

foreach (tuple in orderedTuples) {
    int seqLength = seq[tuple.start] + tuple.length
    int tupleEnd = tuple.start + tuple.length;
    seq[tupleEnd] = max(seq[tupleEnd], seqLength)
    if (seqLength > seq[bestEnd]) bestEnd = tupleEnd
}
return new tuple(bestEnd - seq[bestEnd], seq[bestEnd])
This is an O(N) algorithm.
If you need the actual tuples making up this sequence, you'd need to keep a linked list of tuples hashed by end index as well, updating this whenever the max length is updated for this end-point.
UPDATE: My knowledge of python is rather limited, but based on the python code you pasted, I created this code that returns the actual sequence instead of just the length:
def get_longest(arr):
    bestEnd = 0
    seqLengths = dict()  # seqLengths[key] = length of the longest sequence ending on key-1, or 0 if not found
    seqTuples = dict()   # seqTuples[key] = the last tuple used in this longest sequence
    for t in arr:
        seqLength = seqLengths.get(t[0], 0) + t[1]
        tupleEnd = t[0] + t[1]
        if (seqLength > seqLengths.get(tupleEnd, 0)):
            seqLengths[tupleEnd] = seqLength
            seqTuples[tupleEnd] = t
            if seqLength > seqLengths.get(bestEnd, 0):
                bestEnd = tupleEnd

    longestSeq = []
    while (bestEnd in seqTuples):
        longestSeq.append(seqTuples[bestEnd])
        bestEnd -= seqTuples[bestEnd][1]

    longestSeq.reverse()
    return longestSeq

if __name__ == "__main__":
    a = [(0,3),(1,4),(1,1),(1,8),(5,2),(5,5),(5,6),(10,2)]
    print(get_longest(a))
Revised algorithm:
create a hashtable of start -> list of tuples that start there
put all tuples in a queue of tupleSets
set the longestTupleSet to the first tuple
while the queue is not empty
    take a tupleSet from the queue
    if any tuples start where the tupleSet ends
        foreach tuple that starts where the tupleSet ends
            enqueue new tupleSet of tupleSet + tuple
        continue
    if tupleSet is longer than longestTupleSet
        replace longestTupleSet with tupleSet
return longestTupleSet
c# implementation
public static IList<Pair<int, int>> FindLongestNonOverlappingRangeSet(IList<Pair<int, int>> input)
{
    var rangeStarts = input.ToLookup(x => x.First, x => x);
    var adjacentTuples = new Queue<List<Pair<int, int>>>(
        input.Select(x => new List<Pair<int, int>>
        {
            x
        }));

    var longest = new List<Pair<int, int>>
    {
        input[0]
    };
    int longestLength = input[0].Second - input[0].First;

    while (adjacentTuples.Count > 0)
    {
        var tupleSet = adjacentTuples.Dequeue();
        var last = tupleSet.Last();
        int end = last.First + last.Second;
        var sameStart = rangeStarts[end];
        if (sameStart.Any())
        {
            foreach (var nextTuple in sameStart)
            {
                adjacentTuples.Enqueue(tupleSet.Concat(new[] { nextTuple }).ToList());
            }
            continue;
        }
        int length = end - tupleSet.First().First;
        if (length > longestLength)
        {
            longestLength = length;
            longest = tupleSet;
        }
    }
    return longest;
}
tests:
[Test]
public void Given_the_first_problem_sample()
{
    var input = new[]
    {
        new Pair<int, int>(0, 5),
        new Pair<int, int>(0, 1),
        new Pair<int, int>(1, 9),
        new Pair<int, int>(5, 5),
        new Pair<int, int>(5, 7),
        new Pair<int, int>(10, 1)
    };
    var result = FindLongestNonOverlappingRangeSet(input);
    result.Count.ShouldBeEqualTo(2);
    result.First().ShouldBeSameInstanceAs(input[0]);
    result.Last().ShouldBeSameInstanceAs(input[4]);
}

[Test]
public void Given_the_second_problem_sample()
{
    var input = new[]
    {
        new Pair<int, int>(0, 1),
        new Pair<int, int>(1, 7),
        new Pair<int, int>(3, 20),
        new Pair<int, int>(8, 5)
    };
    var result = FindLongestNonOverlappingRangeSet(input);
    result.Count.ShouldBeEqualTo(1);
    result.First().ShouldBeSameInstanceAs(input[2]);
}
This is a special case of the longest path problem for weighted directed acyclic graphs.
The nodes in the graph are the start points and the points after the last element in a sequence, where the next sequence could start.
The problem is special because the distance between two nodes must be the same independent of the path.
Just thinking about the algorithm in basic terms, would this work?
(apologies for horrible syntax but I'm trying to stay language-independent here)
First the simplest form: Find the longest contiguous pair.
Cycle through every member and compare it to every other member with a higher startpos. If the startpos of the second member is equal to the sum of the startpos and length of the first member, they are contiguous. If so, form a new member in a new set with the lower startpos and combined length to represent this.
Then, take each of these pairs and compare them to all of the single members with a higher startpos and repeat, forming a new set of contiguous triples (if any exist).
Continue this pattern until you have no new sets.
The tricky part then is you have to compare the length of every member of each of your sets to find the real longest chain.
I'm pretty sure this is not as efficient as other methods, but I believe this is a viable approach to brute forcing this solution.
I'd appreciate feedback on this and any errors I may have overlooked.
Edited to replace pseudocode with actual Python code
Edited AGAIN to change the code; the original algorithm was sound, but I misunderstood what the second value in the pairs was! Fortunately the basic algorithm is the same, and I was able to change it.
Here's an idea that solves the problem in O(N log N) and doesn't use a hash map (so no hidden times). For memory we're going to use N * 2 "things".
We're going to add two more values to each tuple: (BackCount, BackLink). In the successful combination BackLink will link from right to left from the right-most tuple to the left-most tuple. BackCount will be the value accumulated count for the given BackLink.
Here's some python code:
def FindTuplesStartingWith(tuples, frm):
    # The Log(N) algorithm is left as an exercise for the user
    ret = []
    for i in range(len(tuples)):
        if (tuples[i][0] == frm): ret.append(i)
    return ret

def FindLongestSequence(tuples):
    # Prepare (BackCount, BackLink) array
    bb = []  # (BackCount, BackLink)
    for OneTuple in tuples: bb.append((-1, -1))

    # Prepare
    LongestSequenceLen = -1
    LongestSequenceTail = -1

    # Algorithm
    for i in range(len(tuples)):
        if (bb[i][0] == -1): bb[i] = (0, bb[i][1])

        # Is this single pair the longest possible pair all by itself?
        if (tuples[i][1] + bb[i][0]) > LongestSequenceLen:
            LongestSequenceLen = tuples[i][1] + bb[i][0]
            LongestSequenceTail = i

        # Find next segment
        for j in FindTuplesStartingWith(tuples, tuples[i][0] + tuples[i][1]):
            if ((bb[j][0] == -1) or (bb[j][0] < (bb[i][0] + tuples[i][1]))):
                # THIS IS THE KEY: tuple j can be linked after tuple i
                bb[j] = (bb[i][0] + tuples[i][1], i)
                if ((bb[j][0] + tuples[j][1]) > LongestSequenceLen):
                    LongestSequenceLen = bb[j][0] + tuples[j][1]
                    LongestSequenceTail = j

    # Done! I'll now build up the solution
    ret = []
    while (LongestSequenceTail > -1):
        ret.insert(0, tuples[LongestSequenceTail])
        LongestSequenceTail = bb[LongestSequenceTail][1]
    return ret

# Call the algorithm
print(FindLongestSequence([(0,5), (0,1), (1,9), (5,5), (5,7), (10,1)]))
>>>>>> [(0, 5), (5, 7)]
print(FindLongestSequence([(0,1), (1,7), (3,20), (8,5)]))
>>>>>> [(3, 20)]
The key to the whole algorithm is where the "THIS IS THE KEY" comment is in the code. We know our current StartTuple can be linked to EndTuple. If a longer sequence that ends at EndTuple.To exists, it was found by the time we got to this point, because it had to start at a smaller StartTuple.From, and the array is sorted on "From"!
I removed the previous solution because it was not tested.
The problem is finding the longest path in a "weighted directed acyclic graph", it can be solved in linear time:
http://en.wikipedia.org/wiki/Longest_path_problem#Weighted_directed_acyclic_graphs
Put the set of {start positions} union {start position + length} as vertices. For your example it would be {0, 1, 5, 10, 11, 12}.
For vertices v0, v1: if there is a tuple with start v0 and length w such that v0 + w = v1, add a directed edge from v0 to v1 and put w as its weight.
Now follow the pseudocode in the Wikipedia page. Since the number of vertices is at most 2n (where n is the number of tuples), the problem can still be solved in linear time.
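A minimal sketch of this reduction (my own code; it returns only the length of the best chain, and increasing vertex order already is a topological order):

from collections import defaultdict

def longest_chain_length(tuples):
    # vertices: every start and every start + length
    vertices = sorted({s for s, l in tuples} | {s + l for s, l in tuples})
    # edges: s -> s + l with weight l for every tuple
    edges = defaultdict(list)
    for s, l in tuples:
        edges[s].append((s + l, l))
    best = {v: 0 for v in vertices}  # longest path ending at v
    answer = 0
    for v in vertices:  # increasing order = topological order
        for w, weight in edges[v]:
            best[w] = max(best[w], best[v] + weight)
            answer = max(answer, best[w])
    return answer

print(longest_chain_length([(0, 5), (0, 1), (1, 9), (5, 5), (5, 7), (10, 1)]))  # 12
print(longest_chain_length([(0, 1), (1, 7), (3, 20), (8, 5)]))                  # 20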
This is a simple reduce operation. Given a pair of consecutive tuples, they either can or can't be combined. So define the pairwise combination function:
def combo(first, second):
    if first[0] + first[1] == second[0]:
        return [(first[0], first[1] + second[1])]
    else:
        return [first, second]
This just returns a list of either one element combining the two arguments, or the original two elements.
Then define a function to iterate over the first list and combine pairs:
def collapse(tupleList):
    first = tupleList.pop(0)
    newList = []
    for item in tupleList:
        collapsed = combo(first, item)
        if len(collapsed) == 2:
            newList.append(collapsed[0])
        first = collapsed.pop()
    newList.append(first)
    return newList
This keeps a first element to compare with the current item in the list (starting at the second item), and when it can't combine them it drops the first into a new list and replaces first with the second of the two.
Then just call collapse with the list of tuples:
>>> collapse( [(5, 7), (12, 3), (0, 5), (0, 7), (7, 2), (9, 3)] )
[(5, 10), (0, 5), (0, 12)]
[Edit] Finally, iterate over the result to get the longest sequence.
def longest(seqs):
    collapsed = collapse(seqs)
    return max(collapsed, key=lambda x: x[1])
[/Edit]
Complexity O(N). For bonus marks, do it in reverse so that the initial pop(0) becomes a pop() and you don't have to reindex the array, or move the iterator instead. For top marks make it run as a pairwise reduce operation for multithreaded goodness.
This sounds like a perfect "dynamic programming" problem...
The simplest program would be to do it brute force (e.g. recursive), but this has exponential complexity.
With dynamic programming you can set up an array a of length n, where n is the maximum of all (start + length) values of your problem, and where a[i] denotes the length of the longest non-overlapping, gapless chain of ranges ending at position i. You can then step through all tuples, updating a. The complexity of this algorithm would be O(n*k), where k is the number of input values.
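A minimal sketch of that table (my own code; it returns only the best length and processes tuples in order of start):

def longest_by_table(tuples):
    n = max(s + l for s, l in tuples)
    a = [0] * (n + 1)  # a[i] = length of the longest gapless chain ending exactly at i
    for s, l in sorted(tuples):
        a[s + l] = max(a[s + l], a[s] + l)
    return max(a)

print(longest_by_table([(0, 5), (0, 1), (1, 9), (5, 5), (5, 7), (10, 1)]))  # 12
print(longest_by_table([(0, 1), (1, 7), (3, 20), (8, 5)]))                  # 20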
Create an ordered array of all start and end points and initialise all of them to one
For each tuple, compare its end points (start and end) to the ordered items in your array; if any array point lies strictly between them (e.g. a point in the array is 5 and you have start 2 with length 4), change its value to zero.
After finishing the loop, move across the ordered array: open a strip when you see a 1, keep extending the strip while you keep seeing 1s, and close the strip on any zero, and so on.
At the end check the length of strips
I think complexity is around O(4-5*N)
(SEE UPDATE)
with N being number of items in the tuple.
UPDATE
As you figured out, the complexity is not accurate but definitely very small since it is a function of number of line stretches (tuple items).
So if N is number of line stretches, sorting is O(2N * log2N). Comparison is O(2N). Finding line stretches is also O(2N). So all in all O(2N(log2N + 2)).
