Let us say there are two dataframes, df1 and df2. Each has one column (geometry_1 and geometry_2, respectively) containing geometries of LineString type.
df1
geometry_1
0 LINESTRING(37.00 59.00, 37.05 59.32)
....
df2
geometry_2
0 LINESTRING(37.89 59.55, 38.05 60.32 )
....
Both dataframes have more rows, but for now I want to focus on the following question: is there any way to evaluate whether two lines are similar? By similar I mean that if the distance between the respective points of the lines is no greater than some threshold (e.g. 100 m), the two lines are considered identical.
The test that you are after (i.e. comparing vertex by vertex) has a very important constraint: there must be exactly the same number of vertices in both LineStrings, which is not very likely to happen.
Since you obviously want a very basic, broad similarity check, I would start with comparing the main characteristics of your lines. You can achieve that by using shapely's geometry attributes, as in the following self-explanatory example:
def are_geometries_similar(geom1, geom2, MAX_ALLOWED_DISTANCE=100, MAX_ALLOWED_DIFFERENCE_RATIO=0.1):
    """
    Function compares two linestrings' number of vertices, length and basic position.
    If they pass all 3 tests within the specified margin of error, it returns True, otherwise it returns False.
    """
    # 1. Compare length:
    l1 = geom1.length
    l2 = geom2.length
    if not abs(float(l1) - l2) / max([l1, l2]) < MAX_ALLOWED_DIFFERENCE_RATIO:
        return False
    # 2. Compare number of vertices:
    vert_num1 = len(geom1.coords)
    vert_num2 = len(geom2.coords)
    if not abs(float(vert_num1) - vert_num2) / max([vert_num1, vert_num2]) < MAX_ALLOWED_DIFFERENCE_RATIO:
        return False
    # 3. Compare position by calculating the representative point
    rp1 = geom1.representative_point()
    rp2 = geom2.representative_point()
    if rp1.distance(rp2) > MAX_ALLOWED_DISTANCE:
        return False
    # If all tests passed, return True
    return True
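As a usage sketch (assuming df1 and df2 are GeoDataFrames with the same number of rows and that row i of df1 should be compared with row i of df2, which the question does not state explicitly):

# Hypothetical row-by-row application; adjust the pairing logic to your data.
df1['similar_to_df2'] = [
    are_geometries_similar(g1, g2)
    for g1, g2 in zip(df1['geometry_1'], df2['geometry_2'])
]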
This piece of code is part of a larger function. I already created a list of molecular weights and I also defined a list of all the fragments in my data.
I'm trying to figure out how I can go through the list of fragments, calculate their molecular weight and check if it matches the number in the other list. If it matches, the sequence is appended into an empty list.
combs = [397.47, 2267.58, 475.63, 647.68]
fragments = ['SKEPFKTRIDKKPCDHNTEPYMSGGNY', 'KMITKARPGCMHQMGEY', 'AINV', 'QIQD', 'YAINVMQCL', 'IEEATHMTPCYELHGLRWV', 'MQCL', 'HMTPCYELHGLRWV', 'DHTAQPCRSWPMDYPLT', 'IEEATHM', 'MVGKMDMLEQYA', 'GWPDII', 'QIQDY', 'TPCYELHGLRWVQIQDYA', 'HGLRWVQIQDYAINV', 'KKKNARKW', 'TPCYELHGLRWV']
frags = []
for c in combs:
    for f in fragments:
        if c == SeqUtils.molecular_weight(f, 'protein', circular=True):
            frags.append(f)
print(frags)
I'm guessing I don't fully understand how the SeqUtils.molecular_weight function works, but if there is another way, that would also be great.
You are comparing floating point values for equality. That is bound to fail. You always have to account for some degree of error when dealing with floating point values. In this particular case you also have to take into account the error margin of the input values.
So do not compare floats like this
x == y
but instead like this
abs(x - y) < epsilon
where epsilon is a small, carefully chosen tolerance.
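A minimal illustration with the numbers from this question (the epsilon here is an arbitrary choice for the example):

x = 397.47                    # reference value from combs
y = 397.46909999999997        # computed molecular weight of 'AINV'
epsilon = 0.001

print(x == y)                 # False: exact comparison of floats fails
print(abs(x - y) < epsilon)   # True: comparison within a tolerance succeeds

The standard library's math.isclose(x, y, rel_tol=..., abs_tol=...) expresses the same idea.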
I made two slight modifications to your code: I swapped the order of the f and c loops so the calculated value w can be stored once per fragment, and I append the value of w to the list frags as well, to make it easier to see what is happening.
Your modified code now looks like this:
from Bio import SeqUtils
combs = [397.47, 2267.58, 475.63, 647.68]
fragments = ['SKEPFKTRIDKKPCDHNTEPYMSGGNY', 'KMITKARPGCMHQMGEY', 'AINV', 'QIQD', 'YAINVMQCL', 'IEEATHMTPCYELHGLRWV',
'MQCL', 'HMTPCYELHGLRWV', 'DHTAQPCRSWPMDYPLT', 'IEEATHM', 'MVGKMDMLEQYA', 'GWPDII', 'QIQDY',
'TPCYELHGLRWVQIQDYA', 'HGLRWVQIQDYAINV', 'KKKNARKW', 'TPCYELHGLRWV']
frags = []
threshold = 0.5
for f in fragments:
    w = SeqUtils.molecular_weight(f, 'protein', circular=True)
    for c in combs:
        if abs(c - w) < threshold:
            frags.append((f, w))
print(frags)
This prints the result
[('AINV', 397.46909999999997), ('IEEATHMTPCYELHGLRWV', 2267.5843), ('MQCL', 475.6257), ('QIQDY', 647.6766)]
As you can see, the first value for the weight differs from the reference value by about 0.0009. That's why you did not catch it with your approach.
Given a data set of a few millions of price ranges, we need to find the smallest range that contains a given price.
The following rules apply:
Ranges can be fully nested (i.e., 1-10 and 5-10 is valid)
Ranges cannot be partially nested (i.e., 1-10 and 5-15 is invalid)
Example:
Given the following price ranges:
1-100
50-100
100-120
5-10
5-20
The result for searching price 7 should be 5-10
The result for searching price 100 should be 100-120 (smallest range containing 100).
What's the most efficient algorithm/data structure to implement this?
Searching the web, I only found solutions for searching ranges within ranges.
I've been looking at Morton codes and Hilbert curves, but can't wrap my head around how to use them for this case.
Thanks.
Since you did not mention the obvious brute-force approach, I'll propose it as a simple first answer to your question:
This is a Python function, but it's fairly easy to understand and convert to another language.
def min_range(ranges, value):
    # ranges = [(1, 100), (50, 100), (100, 120), (5, 10), (5, 20)]
    # value = 100
    # INIT
    import math
    best_range = None
    best_range_len = math.inf
    # LOOP THROUGH ALL RANGES
    for b, e in ranges:
        # PICK THE SMALLEST
        if b <= value <= e and e - b < best_range_len:
            best_range = (b, e)
            best_range_len = e - b
    print(f'Minimal range containing {value} = {best_range}')
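For instance, calling it with the ranges from the question:

ranges = [(1, 100), (50, 100), (100, 120), (5, 10), (5, 20)]
min_range(ranges, 7)    # prints: Minimal range containing 7 = (5, 10)
min_range(ranges, 100)  # prints: Minimal range containing 100 = (100, 120)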
I believe there are more efficient and complicated solutions (if you can do some precomputation for example) but this is the first step you must take.
EDIT: Here is a better solution, probably O(log(n)) per query, but it's not trivial. It is a tree where each node is an interval and has a child list of all strictly non-overlapping intervals that are contained inside it.
Preprocessing is done in O(n log(n)) time, and queries are O(n) in the worst case (when you can't find 2 ranges that don't overlap) and probably O(log(n)) on average.
Two classes: tree, which holds the structure and answers queries:
class tree:
    def __init__(self, ranges):
        # sort the ranges by lowest starting and then greatest ending
        ranges = sorted(ranges, key=lambda i: (i[0], -i[1]))
        # recursive building -> might want to optimize that in python
        self.node = node((-float('inf'), float('inf')), ranges)

    def __str__(self):
        return str(self.node)

    def query(self, value):
        # bisect is for binary search
        import bisect
        curr_sol = self.node.inter
        node_list = self.node.child_list
        while True:
            # which of the child ranges can include our value?
            i = bisect.bisect_left(node_list, (value, float('inf'))) - 1
            # does it include it?
            if i < 0 or i == len(node_list):
                return curr_sol
            if value > node_list[i].inter[1]:
                return curr_sol
            else:
                # if it does then go deeper
                curr_sol = node_list[i].inter
                node_list = node_list[i].child_list
Node that holds the structure and information:
class node:
    def __init__(self, inter, ranges):
        # all elements in ranges will be descendants of this node!
        import bisect
        self.inter = inter
        self.child_list = []
        for i, r in enumerate(ranges):
            if len(self.child_list) == 0:
                # append a new child when list is empty
                self.child_list.append(node(r, ranges[i + 1:bisect.bisect_left(ranges, (r[1], r[1] - 1))]))
            else:
                # the current range r is included in a previous range
                # r is not a child of self but a descendant!
                if r[0] < self.child_list[-1].inter[1]:
                    continue
                # else -> this is a new child
                self.child_list.append(node(r, ranges[i + 1:bisect.bisect_left(ranges, (r[1], r[1] - 1))]))

    def __str__(self):
        # fancy
        return f'{self.inter} : [{", ".join([str(n) for n in self.child_list])}]'

    def __lt__(self, other):
        # this is the '<' operator -> for bisect to compare our items
        return self.inter < other
and to test that:
ranges = [(1, 100), (50, 100), (100, 120), (5, 10), (5, 20), (50, 51)]
t = tree(ranges)
print(t)
print(t.query(10))
print(t.query(5))
print(t.query(40))
print(t.query(50))
Preprocessing that generates disjoint intervals
(I call the source segments ranges and the resulting segments intervals)
For every range border (both start and end) make a tuple (value, start/end field, range length, id) and put them in an array/list.
Sort these tuples by the first field. In case of a tie, put the longer range to the left for starts and to the right for ends.
Make a stack.
Make a StartValue variable.
Walk through the list:
if current tuple contains start:
    if interval is opened:                 //we close it
        if current value > StartValue:     //interval is not empty
            make interval with             //note id remains in stack
              (start = StartValue, end = current value, id = stack.peek)
            add interval to result list
    StartValue = current value             //we open new interval
    push id from current tuple onto stack
else:                                      //end of range
    if current value > StartValue:         //interval is not empty
        make interval with                 //note id is removed from stack
          (start = StartValue, end = current value, id = stack.pop)
        add interval to result list
    if stack is not empty:
        StartValue = current value         //we open new interval
After that we have a sorted list of disjoint intervals containing start/end values and the id of the source range (note that several intervals may correspond to the same source range), so we can use binary search easily.
If we add the source ranges one by one in nested order (each nested after its parent), we can see that every new range generates at most two new intervals, so the overall number of intervals is M <= 2*N and the overall complexity is O(N log N + Q log N), where Q is the number of queries.
Edit: added the "if stack is not empty" check.
Result for your example 1-100, 50-100, 100-120, 5-10, 5-20 is
1-5(0), 5-10(3), 10-20(4), 20-50(0), 50-100(1), 100-120(2)
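A rough Python sketch of this sweep (my own rendering of the steps above, with my own function names; as an extra tie-breaking choice, ends are processed before starts at equal values so that touching ranges such as 50-100 and 100-120 stay disjoint):

import bisect

def build_intervals(ranges):
    # One event per range border: (value, kind, tie-break on length, range id).
    # kind 0 = end, kind 1 = start; among starts the longer range comes first,
    # among ends it comes last, as described above.
    events = []
    for idx, (lo, hi) in enumerate(ranges):
        events.append((lo, 1, -(hi - lo), idx))   # start border
        events.append((hi, 0, hi - lo, idx))      # end border
    events.sort()

    intervals = []      # disjoint (start, end, source-range id) triples, sorted
    stack = []          # ids of currently open ranges, outermost at the bottom
    start_value = None
    for value, kind, _, idx in events:
        if kind == 1:                              # a range starts here
            if stack and value > start_value:      # close the current non-empty interval
                intervals.append((start_value, value, stack[-1]))
            start_value = value
            stack.append(idx)
        else:                                      # a range ends here
            top = stack.pop()                      # its id is on top (full nesting)
            if value > start_value:
                intervals.append((start_value, value, top))
            if stack:
                start_value = value
    return intervals

def smallest_range(intervals, ranges, price):
    starts = [iv[0] for iv in intervals]
    i = bisect.bisect_right(starts, price) - 1
    if i >= 0 and intervals[i][0] <= price <= intervals[i][1]:
        return ranges[intervals[i][2]]
    return None

ranges = [(1, 100), (50, 100), (100, 120), (5, 10), (5, 20)]
intervals = build_intervals(ranges)
print(intervals)  # [(1, 5, 0), (5, 10, 3), (10, 20, 4), (20, 50, 0), (50, 100, 1), (100, 120, 2)]
print(smallest_range(intervals, ranges, 7))    # (5, 10)
print(smallest_range(intervals, ranges, 100))  # (100, 120)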
Since pLOPeGG already covered the ad hoc case, I will answer the question under the premise that preprocessing is performed in order to support multiple queries efficiently.
General data structures for efficient queries on intervals are the interval tree and the segment tree.
What about an approach like this? Since we only allow full nesting and not partial nesting, it looks like a doable approach.
Split segments into (left,val) and (right,val) pairs.
Order them with respect to their vals and left/right relation.
Search the list with binary search. We get two outcomes: found and not found.
If found, check whether it is a left or a right. If it is a left, go right until you find a right without finding a left. If it is a right, go left until you find a left without finding a right. Pick the smallest.
If not found, stop when high - low is 1 or 0. Then compare the queried value with the value of the node you are at, and search to its right and left accordingly, just like before.
As an example: we would have (l,10) (l,20) (l,30) (r,45) (r,60) (r,100). When searching for, say, 65, you land on (r,100), so you go left and can't find an (l,x) such that x >= 65, so you keep going left until the lefts and rights balance out; the first right and the last left form your interval. The preprocessing part will be long, but you only do it once. A query is still O(n) in the worst case, but that worst case requires everything to be nested inside everything else and you searching for the outermost range.
Suppose I have two vectors of "starts" and "stops", sorted in ascending order.
Vector 1 = [start1 stop1;
start2 stop2;
start3 stop3];
Vector 2 = [start4 stop4;
start5 stop5;
start6 stop6];
What is the most efficient way of determining the intersection/overlap of these two vectors?
I've had to do this on a couple of occasions. It's a simple task, but the logic can get quite messy.
One thing you must decide upon, up-front, is whether intervals are closed or open. That is, do the intervals [1,3] and [3,5] have an intersection at [3,3], or no intersection? I strongly advise "no intersection" (closed intervals tend to be much more painful to reason about than open or half-open intervals), but your use case may require otherwise.
I think the cleanest way to do this is to maintain a "current partial interval" from each list. By "partial" I mean that each interval may be "eaten away" from the bottom as intersections with intervals from the other list are recognized and output. This simplifies the logic by only forcing you to consider two intervals at a time, rather than processing all the V2 intervals which are relevant to some interval in V1.
To simplify the code further, you can allow intervals to be temporarily invalid, and start with both current intervals invalid. This makes the code slightly more branchy than strictly necessary, but it means you only have to handle updating them in one place and with one rule.
So the pseudocode goes like this (I'm destructively reading from V1 and V2, and writing to VI):
v1a, v1b = 0, 0                  # Empty and hence invalid
v2a, v2b = 0, 0                  # intervals to start with.

while True:
    if v1a >= v1b:               # Handle an invalid V1 interval
        if V1.empty():           # If there's no more V1s,
            return               # no more intersections are possible.
        else:
            v1a, v1b = V1.pop()  # Grab the next full interval from V1
    if v2a >= v2b:               # Handle an invalid V2 interval likewise
        if V2.empty():
            return
        else:
            v2a, v2b = V2.pop()

    lower_bound = max(v1a, v2a)  # Determine the overlap, if any, between
    upper_bound = min(v1b, v2b)  # the current two intervals.
    if lower_bound < upper_bound:
        VI.push(lower_bound, upper_bound)  # Output the overlapping interval.

    v1a = max(v1a, upper_bound)  # Snip away the region which has now been
    v2a = max(v2a, upper_bound)  # handled. This may make one or both invalid.
The last two lines are the tricky bit. If there was an intersection, then upper_bound is its upper end: There are no remaining intersecting ranges below it, so they can be removed from either or both current intervals. If, however, the two current intervals did not overlap, then it has the effect of setting the lower interval's a to its own b, making it invalid and causing it to be replaced on the next iteration.
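For concreteness, here is a runnable Python rendering of the pseudocode, wrapped into a function of my own devising (half-open intervals; the input lists are consumed destructively with pop(0)):

def intersect_interval_lists(V1, V2):
    VI = []
    v1a = v1b = v2a = v2b = 0           # both current intervals start out invalid
    while True:
        if v1a >= v1b:                  # current V1 interval is invalid / used up
            if not V1:
                return VI
            v1a, v1b = V1.pop(0)
        if v2a >= v2b:                  # current V2 interval is invalid / used up
            if not V2:
                return VI
            v2a, v2b = V2.pop(0)
        lower_bound = max(v1a, v2a)     # overlap, if any, of the two current intervals
        upper_bound = min(v1b, v2b)
        if lower_bound < upper_bound:
            VI.append((lower_bound, upper_bound))
        v1a = max(v1a, upper_bound)     # snip away the handled region
        v2a = max(v2a, upper_bound)

print(intersect_interval_lists([(1, 5), (8, 12)], [(3, 9), (11, 20)]))
# [(3, 5), (8, 9), (11, 12)]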
I believe you can take advantage of the fact that the lists are sorted and then do the following (pseudo code)
Grab the first spans
Determine if the spans overlap
If the spans overlap, then max(starts) to min(ends) is an overlap
Increment the span with the smallest end
Merge results - you may need to pass over your results and merge overlapping elements
Here's some Python that implements this (note: a and b would be lists containing (start, end) tuples; overlaps collects the result, and a_span / b_span hold the current span popped from the front of each list):

overlaps = []
try:
    a_span = a.pop(0)
    b_span = b.pop(0)
    while True:
        # the spans overlap if each one starts before the other ends
        if b_span[1] > a_span[0] and b_span[0] < a_span[1]:
            overlaps.append((a_span[0] if a_span[0] > b_span[0] else b_span[0],
                             a_span[1] if a_span[1] < b_span[1] else b_span[1]))
        # advance the span with the smallest end
        if a_span[1] < b_span[1]:
            a_span = a.pop(0)
        else:
            b_span = b.pop(0)
except IndexError:
    pass
What is the most efficient way to determine whether a collection of sets is pairwise disjoint? -- i.e. verifying that the intersection between all pairs of sets is empty. How efficiently can this be done?
The sets from a collection are pairwise disjoint if, and only if, the size of their union equals the sum of their sizes (this statement applies to finite sets):
def pairwise_disjoint(sets) -> bool:
    union = set().union(*sets)
    return len(union) == sum(map(len, sets))
This could be a one-liner, but readability counts.
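For example, with some throwaway sample sets:

print(pairwise_disjoint([{1, 2}, {3, 4}, {5}]))   # True
print(pairwise_disjoint([{1, 2}, {2, 3}, {4}]))   # False: 2 appears in two sets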
Expected linear time O(total number of elements):
def all_disjoint(sets):
    union = set()
    for s in sets:
        for x in s:
            if x in union:
                return False
            union.add(x)
    return True
This is optimal under the assumption that your input is a collection of sets represented as some kind of unordered data structure (hash table?), because then you need to look at every element at least once.
You can do much better by using a different representation for your sets. For example, by maintaining a global hash table that stores for each element the number of sets it is stored in, you can do all the set operations optimally and also check for disjointness in O(1).
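A hedged sketch of that counting idea (the class and all names here are mine, not part of the answer): keep a map from each element to the number of sets containing it, plus a counter of how many elements are currently shared; disjointness is then a single comparison.

from collections import Counter

class DisjointnessTracker:
    """Illustrative sketch only; per-set bookkeeping is omitted."""

    def __init__(self):
        self.counts = Counter()   # element -> number of sets containing it
        self.shared = 0           # number of elements contained in >= 2 sets

    def add(self, element):
        self.counts[element] += 1
        if self.counts[element] == 2:
            self.shared += 1

    def remove(self, element):
        self.counts[element] -= 1
        if self.counts[element] == 1:
            self.shared -= 1

    def all_disjoint(self):
        # O(1): the family is pairwise disjoint iff no element is shared
        return self.shared == 0

tracker = DisjointnessTracker()
for s in [{1, 2}, {3, 4}]:
    for x in s:
        tracker.add(x)
print(tracker.all_disjoint())  # True
tracker.add(3)                 # a second set now also contains 3
print(tracker.all_disjoint())  # False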
Using Python as pseudo-code, the following tests the intersection of each pair of sets only once.
def all_disjoint(sets):
    S = list(sets)
    while S:
        s = S.pop()  # remove an element
        # loop over the remaining ones
        for t in S:
            # test for intersection
            if not s.isdisjoint(t):
                return False
    return True
The number of intersection tests is the same as the number of edges in a fully connected graph with the same number of vertices as there are sets. It also exits early if any pair is found not to be disjoint.
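For example:

print(all_disjoint([{1, 2}, {3, 4}, {5, 6}]))  # True
print(all_disjoint([{1, 2}, {2, 3}]))          # False, found on the first test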
I have a set of integers, represented as a Reduced Ordered Binary Decision Diagram (ROBDD) (interpreted as a function which evaluates to true iff the input is in the set) which I shall call Domain, and an integer function (which I shall call F) represented as an array of ROBDD's (one entry per bit of the result).
Now I want to calculate the image of the domain for F. It's definitely possible, because it could trivially be done by enumerating all items from the domain, applying F, and inserting the result in the image. But that's a horrible algorithm with exponential complexity (linear in the size of the domain), and my gut tells me it can be faster. I've been looking into the direction of:
apply Restrict(Domain) to all bits of F
do magic
But the second step proved difficult. The result of the first step contains the information I need (at least, I'm 90% sure of it), but not in the right form. Is there an efficient algorithm to turn it into a "set encoded as ROBDD"? Do I need another approach?
Define two set-valued functions:
N(d1...dn): The subset of the image where members start with a particular sequence of digits d1...dn.
D(d1...dn): The subset of the inputs that produce N(d1...dn).
Then when the sequences are empty, we have our full problem:
D(): The entire domain.
N(): The entire image.
From the full domain we can define two subsets:
D(0) = The subset of x in D() such that F(x)[1]==0.
D(1) = The subset of x in D() such that F(x)[1]==1.
This process can be applied recursively to generate D for every sequence.
D(d1...d[m+1]) = D(d1...dm) & {x | F(x)[m+1]==d[m+1]}
We can then determine N(x) for the full sequences:
N(d1...dn) = 0 if D(d1...dn) = {}
N(d1...dn) = 1 if D(d1...dn) != {}
The parent nodes can be produced from the two children, until we've produced N().
If at any point we determine that D(d1...dm) is empty, then we know
that N(d1...dm) is also empty, and we can avoid processing that branch.
This is the main optimization.
The following code (in Python) outlines the process:
def createImage(input_set_diagram, function_diagrams, index=0):
    if input_set_diagram == '0':
        # If the input set is empty, the output set is also empty
        return '0'
    if index == len(function_diagrams):
        # The set of inputs that produce this result is non-empty
        return '1'
    function_diagram = function_diagrams[index]
    # Determine the branch for zero
    set0 = intersect(input_set_diagram, complement(function_diagram))
    result0 = createImage(set0, function_diagrams, index + 1)
    # Determine the branch for one
    set1 = intersect(input_set_diagram, function_diagram)
    result1 = createImage(set1, function_diagrams, index + 1)
    # Merge if the same
    if result0 == result1:
        return result0
    # Otherwise create a new node
    return {'index': index, '0': result0, '1': result1}
Let S(x1, x2, x3...xn) be the indicator function for the set S, so that S(x1, x2...xn) = true if (x1, x2,...xn) is an element of S. Let F1(x1, x2, x3... xn), F2(),... Fn() be the individual functions that define F. Then I could ask if a particular bit pattern, with wild cards, is in the image of F by forming the equation e.g. S() & F1() & ~F2() for bit-pattern 10 and then solving this equation, which I presume that I can do since it is an ROBDD.
Of course you want a general indicator function, which tells you whether a given bit pattern abc is in the image. Extending the above, I think you get S() & (a&F1() | ~a&~F1()) & (b&F2() | ~b&~F2()) & ... If you then re-order the variables so that the original x1, x2, ... xn occur last in the ROBDD order, then you should be able to prune the tree to return true for the case where any setting of the x1, x2, ... xn leads to the value true, and to return false otherwise.
(Of course, you could run out of space, or patience, waiting for the re-ordering to work.)