Finding the neighbours of a segment (Bentley-Ottmann algorithm)

I'm implementing the Bentley-Ottmann algorithm
to find the set of segment intersection points,
but there are some things I don't understand.
For example:
how can I get the neighbours of the segment Sj in the image?
I'm using a balanced binary search tree for the sweep-line status, with the segments stored in the leaves. After reading this Wikipedia article I couldn't find an explanation for this operation.
From the reference book (de Berg et al., "Computational Geometry", ill. at p. 25):
Suppose we search in T for the segment immediately to the left of some point p that lies on the sweep line.
At each internal node v we test whether p lies left or right of the segment stored at v.
Depending on the outcome we descend to the left or right subtree of v,
eventually ending up in a leaf.
Either this leaf, or the leaf immediately to the left of it, stores the segment we are searching for.
In my example, if I follow this I will arrive at the leaf Sj, but I will only know the leaf to its left, i.e. Sk. How can I get Si?
Edit
I found this discussion that looks like my problem; unfortunately, there are no answers about how to implement some operations on such a data structure.
The operations are:
inserting a node into such a data structure,
deleting a node,
swapping two nodes,
searching for a node's neighbours.
I know how to implement these operations in a balanced binary search tree when data is also stored in the internal nodes, but with this type of AVL tree I don't know whether it works the same way.
Thank you

I stumbled upon the same problem when reading Computational Geometry by de Berg et al. (see p. 25 for the quote and the image). My understanding is the following:
say you need the right neighbour of a segment S which is in the tree. If you store data in the nodes, the pseudocode is:
locate node S
if S has a right subtree:
    return the left-most node of the right subtree of S
else if S is in the left subtree of some ancestor:
    return the lowest/nearest such ancestor
else:
    return not found
If you store the data in the leaves, the pseudocode becomes:
let p be the point of S currently on the sweep line
let n be the segment at the root of the tree
while n != null and n is not a leaf:
    if n = S:
        n = right child of n
    else:
        determine whether p is to the right or left of n
        update n accordingly (normal descent)
In the end, either n is null, meaning there is no right neighbour, or n points to the proper leaf.
The same logic applies for the left neighbor.
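The leaf-storing descent above can be sketched in Python. This is a hedged sketch: Node, is_leaf and x_at are hypothetical; x_at(segment) stands in for the book's "does p lie left or right of the segment at this node" test by returning the segment's x-coordinate at the current sweep line.

```python
class Node:
    def __init__(self, segment, left=None, right=None):
        self.segment = segment          # routing key (internal) or payload (leaf)
        self.left = left
        self.right = right

    def is_leaf(self):
        return self.left is None and self.right is None

def right_neighbor(root, s, x_at):
    n = root
    while n is not None and not n.is_leaf():
        if n.segment == s:
            n = n.right                 # the successor is the leftmost leaf here
        elif x_at(s) < x_at(n.segment):
            n = n.left
        else:
            n = n.right
    if n is None or n.segment == s:
        return None                     # s is the rightmost segment
    return n.segment
```

The left-neighbour search is the mirror image: take the left child on a match and flip the comparison.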

Same as you, I ran into this problem while reading de Berg et al.: "Computational Geometry". But the C++ Standard Template Library (STL) has a container called std::map which can do the job.
You just need to define classes for line segments and event points together with their comparison functions. Then use std::map to build the tree, call map.find() to get an iterator, and use that iterator to access the two neighbouring elements.
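The same neighbour lookup can be sketched in Python with a sorted list and the standard bisect module instead of std::map. The (key, segment) pairs are an assumption for illustration; in a real Bentley-Ottmann status structure the ordering changes at each event point, so the keys would need recomputation (or a comparison-based balanced tree would be used instead).

```python
import bisect

# Sorted status structure: (key, segment) pairs, where key is the segment's
# x-coordinate at the current sweep line (assumed precomputed here).
status = [(1.0, 'Sk'), (2.0, 'Sj'), (3.0, 'Si')]

def neighbors(key):
    # (key,) sorts just before any (key, segment), so this lands on the
    # segment itself; its neighbours sit at i-1 and i+1.
    i = bisect.bisect_left(status, (key,))
    left = status[i - 1][1] if i > 0 else None
    right = status[i + 1][1] if i + 1 < len(status) else None
    return left, right
```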

Related

Sweep Line Algorithm: AVL tree for neighboring line segments

I am currently trying to implement the sweep-line algorithm for line segment intersection in C# for Unity.
I am using the book "Computational Geometry: Algorithms and Applications, 3rd Edition" by Mark de Berg, Otfried Cheong, Marc van Kreveld and Mark Overmars.
PDF: https://people.inf.elte.hu/fekete/algoritmusok_msc/terinfo_geom/konyvek/Computational%20Geometry%20-%20Algorithms%20and%20Applications,%203rd%20Ed.pdf
At the moment I'm trying to implement the two data structures: an AVL tree for the events and an AVL tree for the line segments ordered along the x-axis.
I successfully implemented the event structure, but I am struggling with the line segment structure. On page 25 of the book (p. 35 in the PDF) it explains that we store the line segments only in the leaves of the tree, but how would one implement such a tree? Since the internal nodes are still "segments" but only serve as values to guide the search, all segments will appear in the tree twice. But doesn't an AVL tree hold each value only once? And if not, how would you order them?
Beyond that, I don't fully understand the sentence: "Suppose we search in T for the segment immediately to the left of some point p that lies on the sweep line. At each internal node ν we test whether p lies left or right of the segment stored at ν. Depending on the outcome we descend to the left or right subtree of ν, eventually ending up in a leaf. Either this leaf, or the leaf immediately to the left of it, stores the segment we are searching for." That sentence is on page 25 as well.
First of all, when would we descend to the left and when to the right? And why could the leaf immediately to the left also be the leaf we are looking for?
Hopefully I didn't write too much text. Thanks!

Search AVL tree in log(n)?

This question was taken from a bigger one in preparation for a job interview (I solved the rest of it successfully).
Question: Suggest a data structure to handle boxes, where a box has: a special ID, a weight and a size.
I want to save those boxes in an AVL tree and be able to solve the following problem:
From all boxes which have a maximum size of v (in other words: size <= v), I want to find the heaviest one.
How can I do this in log(n), where n is the total number of saved boxes?
I know that the solution involves saving some extra data in each node, but I'm not sure which data would be helpful (no need to explain how to maintain the data under rotations etc.).
Example of extra data saved in each node: the ID of the heaviest box in the right subtree.
It sounds like you're already on the right track: each node stores the weight of its heaviest descendant. The only missing piece is coming up with a set of O(log n) nodes such that the target node is a descendant of one of them.
In other words, you need to identify all the subtrees of the AVL tree which consist entirely of nodes whose size is less than (i.e. that are to the left of) your size=v node.
Which ones are those? Well, for one thing, the left child of your size=v node, of course. Then go from that node up to the root: for every ancestor of the size=v node that is a right child, consider its left sibling (and the ancestor itself). The set of subtrees whose roots you examine along the way covers exactly the nodes to the left of the size=v node.
As a simplification, you can combine the upper-bound size search with the query itself. The key observation: whenever a node satisfies the size constraint, its entire left subtree does too, so you can read that subtree's cached maximum weight in O(1) and keep descending to the right; when a node violates the constraint, descend to the left.
best = -infinity
x = root
while x is not null:
    if x.size <= v:
        # x and everything in its left subtree satisfy size <= v
        best = max(best, x.weight, maxWeight(x.left))   # maxWeight(null) = -infinity
        x = x.right
    else:
        x = x.left
return best
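A sketch of this augmented query in Python. BoxNode is a hypothetical node keyed by size, and maxw caches the maximum weight in the node's subtree; the query exploits the fact that when a node satisfies size <= v, its whole left subtree does too, so that subtree's cached maximum can be consumed in O(1).

```python
import math

class BoxNode:
    def __init__(self, size, weight, left=None, right=None):
        self.size, self.weight = size, weight
        self.left, self.right = left, right
        # cached maximum weight anywhere in this node's subtree
        self.maxw = max(weight,
                        left.maxw if left else -math.inf,
                        right.maxw if right else -math.inf)

def heaviest_at_most(root, v):
    best = -math.inf
    x = root
    while x is not None:
        if x.size <= v:
            # x and its entire left subtree satisfy size <= v
            best = max(best, x.weight,
                       x.left.maxw if x.left else -math.inf)
            x = x.right
        else:
            x = x.left
    return best
```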

In a leftist heap tree, is the distance variable calculated by going to null the fastest in ANY way possible?

So I'm studying the leftist heap tree, and I came across this property that intrigued me:
Heavier on left side : dist(right(i)) <= dist(left(i)). Here, dist(i) is the number of edges on the shortest path from node i to a leaf node in extended binary tree representation (In this representation, a null child is considered as external or leaf node). The shortest path to a descendant external node is through the right child. Every subtree is also a leftist tree and dist( i ) = 1 + dist( right( i ) ).
Now according to that property, the distance variable is calculated by finding the shortest route to a null. So let's say I have a tree which looks something like this:
I've read about leftist tree properties for more than an hour now, and they only state that the structure follows the min-heap property and that dist(left) >= dist(right). So according to that, the tree above should be legal (forgive me if I missed something). I've drawn on the tree two possible paths for that node to get to null: one goes backwards through the root, the other keeps descending. Now my question is: which one should I take? Since there's no rule that says I can't take the first, should I take it because it's shorter? Or when inserting elements into the heap, do we do it in some way such that this never happens? Again, I'm sorry if I missed something, I'm new to this; and even if the latter is true, why do we want it that way?
The distance of a node v is calculated as d(v) = -1 if v is null, otherwise d(v) = 1 + min(d(c_l), d(c_r)), where c_l and c_r are the left and right children of v. A leftist heap H is a binary heap with the additional property that d(c_l) >= d(c_r) for each node v in H.
Thus, in your case you should take the right path and claim that the distance of node 1 is 0.
(The purpose of the leftist heap is to enable fast union of two heaps. When the merge operation is defined, the insert operation is implemented by using the merge.)
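A minimal sketch of this definition in Python (Node is a hypothetical heap node; None plays the role of an external node in the extended representation):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def dist(v):
    # d(null) = -1; otherwise one more than the closer child
    if v is None:
        return -1
    return 1 + min(dist(v.left), dist(v.right))

def is_leftist(v):
    # the leftist property must hold at every node
    if v is None:
        return True
    return (dist(v.left) >= dist(v.right)
            and is_leftist(v.left) and is_leftist(v.right))
```

In a valid leftist tree, dist(v) then always equals 1 + dist(v.right), since the shortest path to an external node goes through the right child.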

Detect given graph is Tree or not in LogSpace

This is my code to check whether a given graph is a tree (BFS). The problem is that when I use the stack S, it is not in logspace. Going through some references, I think I have to replace the stack S by some counters, but how? Is there any way to modify this algorithm to run in logspace?
boolean isTree(G) {
    // Given: G = (V, E)
    pick first node s
    mark s as visited
    S is an empty stack          // (Problem-I)
    S.push(s)
    while (S is not empty) {
        v = S.pop()
        for all neighbours W of v in G {
            // check if already visited
            if (W is not the parent of v && W is visited)
                return false
            else {
                mark W as visited
                S.push(W)
            }
        }
    }
    return true
}
You didn't account for the space of the visited flags, so I am assuming you're using some data structure with a bit-masking property, like a BitSet, for visited.
The stack will take O(log n) space at any time, where n is the total number of nodes, only if the tree is balanced (the maximum height of the tree won't exceed log n nodes). In the worst case, when the tree is degenerate (growing only to the left or only to the right), the space complexity will be O(n). So there is no guarantee of logarithmic space unless the tree is known to be balanced.
Moreover, you're assuming the given graph is connected. If it is not connected, it can be a forest (many trees). So you need to check every node instead of picking just the first node s; if any node is still unvisited after the first round, the graph is not a tree but a forest or some other non-tree graph.
Hope it helps!
Edit
Is there any way to keep track of nodes without using stack?
Without a stack, you either need recursion/DFS, which takes O(n) space in the worst case on the function call stack, or a queue/BFS, which again takes O(n) space. So no, you need at least O(n) space with this approach when the tree is arbitrary (not guaranteed to be balanced).
You can check the implementation in Java and/or C++ here.
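For comparison, a straightforward O(n)-space check in Python can use the fact that a graph is a tree iff it has exactly n - 1 edges and is connected (the adjacency-dict representation is an assumption for illustration):

```python
from collections import deque

def is_tree(adj):
    """adj: dict mapping node -> list of neighbours (undirected graph)."""
    if not adj:
        return True
    n = len(adj)
    # each undirected edge appears twice in the adjacency lists
    edge_count = sum(len(nbrs) for nbrs in adj.values()) // 2
    if edge_count != n - 1:
        return False
    # with n - 1 edges, tree <=> connected; check with BFS
    start = next(iter(adj))
    seen = {start}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == n
```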

Find all subtrees of size N in an undirected graph

Given an undirected graph, I want to generate all subgraphs which are trees of size N, where size refers to the number of edges in the tree.
I am aware that there are a lot of them (exponentially many, at least for graphs with constant connectivity), but that's fine, as I believe the number of nodes and edges makes this tractable for at least smallish values of N (say, 10 or less).
The algorithm should be memory-efficient - that is, it shouldn't need to have all graphs or some large subset of them in memory at once, since this is likely to exceed available memory even for relatively small graphs. So something like DFS is desirable.
Here's what I'm thinking, in pseudo-code, given the starting graph graph and desired length N:
Pick any arbitrary node, root as a starting point and call alltrees(graph, N, root)
alltrees(graph, N, root):
    given that node root has degree M, find all M-tuples of non-negative
    integers whose values sum to N (for example, for 3 children and N=2, you
    have (0,0,2), (0,2,0), (2,0,0), (0,1,1), (1,0,1), (1,1,0), I think)
    for each tuple (X1, X2, ..., XM) above:
        create a subgraph "current", initially empty
        for each integer Xi in X1...XM (the current tuple):
            if Xi is nonzero:
                add edge i incident on root to the current tree
                add alltrees(graph with root removed, N-1, node adjacent to root along edge i)
        add the current tree to the set of all trees
    return the set of all trees
This finds only trees containing the chosen initial root, so now remove this node, call alltrees(graph with root removed, N, newly chosen arbitrary root), and repeat until the size of the remaining graph is less than N (since no trees of the required size will exist in it).
I also forgot that each visited node (each root for some call of alltrees) needs to be marked, and the set of children considered above should only be the adjacent unmarked children. I guess we also need to account for the case where no unmarked children exist yet depth > 0: this means that this "branch" failed to reach the required depth and cannot form part of the solution set (so the whole inner loop associated with that tuple can be aborted).
So, will this work? Any major flaws? Any simpler/known/canonical way to do this?
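The M-tuple enumeration in the first step of the pseudocode (all tuples of non-negative integers summing to N, i.e. weak compositions of N into M parts) can be sketched as a small Python generator:

```python
def compositions(n, m):
    """Yield every m-tuple of non-negative integers summing to n."""
    if m == 1:
        yield (n,)
        return
    for first in range(n + 1):
        # fix the first entry, recurse on the remainder
        for rest in compositions(n - first, m - 1):
            yield (first,) + rest
```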
One issue with the algorithm outlined above is that it doesn't satisfy the memory-efficient requirement, as the recursion will hold large sets of trees in memory.
This needs an amount of memory proportional to what is required to store the graph. It will return every subgraph that is a tree of the desired size exactly once.
Keep in mind that I just typed it in here; there could be bugs. But the idea is that you walk the nodes one at a time; for each node you search for all trees that include that node but none of the nodes that were searched previously (because those have already been exhausted). That inner search is done recursively by listing edges to nodes in the tree, and for each edge deciding whether or not to include it in your tree. (If it would create a cycle, or add an exhausted node, then you can't include that edge.) If you include it, the set of used nodes grows, and you have new possible edges to add to your search.
To reduce memory use, the set of edges left to look at is manipulated in place by all levels of the recursive call, rather than the more obvious approach of duplicating that data at each level. If that list were copied, your total memory usage would grow to the size of the tree times the number of edges in the graph.
def find_all_trees(graph, tree_length):
    # nodes(edge), neighbors(graph, node), build_tree(...) and all_nodes(graph)
    # are assumed helpers for the caller's graph representation
    exhausted_node = set()
    used_node = set()
    used_edge = set()
    current_edge_groups = []
    def finish_all_trees(remaining_length, edge_group, edge_position):
        while edge_group < len(current_edge_groups):
            edges = current_edge_groups[edge_group]
            while edge_position < len(edges):
                edge = edges[edge_position]
                edge_position += 1
                (node1, node2) = nodes(edge)
                if node1 in exhausted_node or node2 in exhausted_node:
                    continue
                node = node1
                if node1 in used_node:
                    if node2 in used_node:
                        continue  # both endpoints used: would make a cycle
                    else:
                        node = node2
                used_node.add(node)
                used_edge.add(edge)
                current_edge_groups.append(neighbors(graph, node))
                if 1 == remaining_length:
                    yield build_tree(graph, used_node, used_edge)
                else:
                    for tree in finish_all_trees(remaining_length - 1,
                                                 edge_group, edge_position):
                        yield tree
                current_edge_groups.pop()
                used_edge.remove(edge)
                used_node.remove(node)
            edge_position = 0
            edge_group += 1
    for node in all_nodes(graph):
        used_node.add(node)
        current_edge_groups.append(neighbors(graph, node))
        for tree in finish_all_trees(tree_length, 0, 0):
            yield tree
        current_edge_groups.pop()
        used_node.remove(node)
        exhausted_node.add(node)
Assuming you can destroy the original graph or make a destroyable copy, I came up with something that could work, although it could be painfully slow; I have not worked out its big-O complexity. It would probably work for small subtrees.
do it in steps, at each step:
sort the graph nodes so you get a list of nodes sorted by number of adjacent edges, ascending
process all nodes with the same number of edges as the first one
remove those nodes
As an example, for a graph of 6 nodes, finding all size-2 subgraphs (sorry for my total lack of artistic expression):
Well the same would go for a bigger graph, but it should be done in more steps.
Assuming:
Z = number of edges of the most ramified node
M = desired subtree size
S = number of steps
Ns = number of nodes in a step
and assuming quicksort for sorting the nodes:
Worst case:
S*(Ns^2 + M*Ns*Z)
Average case:
S*(Ns*log(Ns) + M*Ns*(Z/2))
The problem is that I cannot calculate the real big-O, because the number of nodes in each step decreases depending on the shape of the graph...
Solving the whole thing with this approach could be very time-consuming on a graph with highly connected nodes. However, it could be parallelized, and you could do one or two steps to remove dislocated nodes, extract all subgraphs, and then choose another approach for the remainder; having removed a lot of nodes from the graph, this could decrease the remaining run time...
Unfortunately, this approach would benefit the GPU rather than the CPU, since a LOT of nodes with the same number of edges would go into each step... and if parallelization is not used, this approach is probably bad...
Maybe the inverse would work better on the CPU: sort and proceed with the nodes with the maximum number of edges... there will probably be fewer of them at the start, but you will have more subgraphs to extract from each node...
Another possibility is to find the least frequently occurring edge count in the graph and start with the nodes that have it; that would reduce the memory usage and iteration count for extracting subgraphs...
Unless I'm reading the question wrong, people seem to be overcomplicating it.
This is just "all possible paths within N edges", and you're allowing cycles.
For two nodes A, B and one edge, your result would be:
AA, AB, BA, BB
For two nodes and two edges, your result would be:
AAA, AAB, ABA, ABB, BAA, BAB, BBA, BBB
I would recurse into a For Each and pass in a "template" tuple:
N = edge count
TupleTemplate = Tuple_of_N_Items ' (01,02,03,...,0n) (could also be an ordered list!)
ListOfTuple_of_N_Items ' paths (could also be an ordered list!)
edgeDepth = N
Method(Nodes, edgeDepth, TupleTemplate, ListOfTuples, EdgeTotal)
    edgeDepth -= 1
    For Each Node In Nodes
        If edgeDepth = 0 Then ' last edge
            ListOfTuples.Add New Tuple from TupleTemplate + Node ' (x,y,z,...,Node)
        Else
            NewTupleTemplate = TupleTemplate + Node ' (x,y,z,Node,...,0n)
            Method(Nodes, edgeDepth, NewTupleTemplate, ListOfTuples, EdgeTotal)
        End If
    Next
This will create every possible combination of vertices for a given edge count
What's missing is the factory to generate tuples given an edge count.
You end up with a list of possible paths, and the number of operations is Nodes^(N+1).
If you use ordered lists instead of tuples then you don't need to worry about a factory to create the objects.
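The enumeration this answer describes (every length-(N+1) sequence of nodes, i.e. every walk of N edges when any node can follow any other) is, in Python, just a Cartesian power, sketched here with single-character node names for readability:

```python
from itertools import product

def all_walks(nodes, n_edges):
    """Every sequence of n_edges + 1 node labels (one per visited node)."""
    return [''.join(p) for p in product(nodes, repeat=n_edges + 1)]
```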
If memory is the biggest problem, you can use an NP-ish solution with tools from formal verification. That is, guess a subset of nodes of size N and check whether it forms a tree. To save space you can use a BDD (http://en.wikipedia.org/wiki/Binary_decision_diagram) to represent the original graph's nodes and edges. Plus, you can use a symbolic algorithm to check whether the guessed subgraph is really a tree, so you never need to construct the original graph (nor the N-sized subgraphs) explicitly at any point. Your memory consumption should be (in big-O terms) log(n) (where n is the size of the original graph) to store the original graph, and another log(N) to store each "small graph" you want.
Another tool (which is supposed to be even better) is a SAT solver: construct a SAT formula that is true iff the subgraph is a tree, and hand it to the solver.
For a complete graph Kn there are approximately n! paths between any pair of vertices. I haven't gone through your code, but here is what I would do.
Select a pair of vertices.
Start from one vertex and try to reach the destination vertex recursively (something like DFS, but not exactly). I think this would output all the paths between the chosen vertices.
You could do the above for all possible pairs of vertices to get all simple paths.
It seems that the following solution will work.
Go over all partitions of the set of all vertices into two parts. Then count the number of edges whose endpoints lie in different parts (k); these edges correspond to edges of the tree that connect a subtree in the first part to a subtree in the second part. Calculate the answers for both parts recursively (p1, p2). The answer for the entire graph can then be calculated as the sum over all such partitions of k*p1*p2. But each tree will be counted N times, once for each of its edges, so the sum must be divided by N to get the answer.
Your solution as is doesn't work I think, although it can be made to work. The main problem is that the subproblems may produce overlapping trees so when you take the union of them you don't end up with a tree of size n. You can reject all solutions where there is an overlap, but you may end up doing a lot more work than needed.
Since you are OK with exponential runtime, and potentially writing out 2^n trees, having a V*2^V algorithm is not bad at all. So the simplest way of doing it would be to generate all possible subsets of n nodes, and then test each one to see whether it forms a tree. Since testing whether a subset of nodes forms a tree can take O(E*V) time, we are potentially talking about V^2*V^n time, unless you have a graph with O(1) degree. This can be improved slightly by enumerating subsets in an order in which two successive subsets differ in exactly one swapped node. In that case, you just have to check whether the new node is connected to any of the existing nodes, which can be done in time proportional to the number of outgoing edges of the new node by keeping a hash table of all existing nodes.
The next question is how to enumerate all the subsets of a given size such that no more than one element is swapped between successive subsets. I'll leave that as an exercise for you to figure out :)
I think there is a good algorithm (with Perl implementation) at this site (look for TGE), but if you want to use it commercially you'll need to contact the author. The algorithm is similar to yours in the question but avoids the recursion explosion by making the procedure include a current working subtree as a parameter (rather than a single node). That way each edge emanating from the subtree can be selectively included/excluded, and recurse on the expanded tree (with the new edge) and/or reduced graph (without the edge).
This sort of approach is typical of graph enumeration algorithms -- you usually need to keep track of a handful of building blocks that are themselves graphs; if you try to only deal with nodes and edges it becomes intractable.
This algorithm is a big one and not easy to post here. But here is a link to the reservation search algorithm, with which you can do what you want. The PDF file contains both algorithms. Also, if you understand Russian, you can take a look at this.
So you have a graph with edges e_1, e_2, ..., e_E.
If I understand correctly, you are looking to enumerate all subgraphs which are trees and contain N edges.
A simple solution is to generate each of the E choose N subgraphs and check if they are trees.
Have you considered this approach? Of course if E is too large then this is not viable.
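The brute-force idea above can be sketched in Python with itertools.combinations and a tiny union-find to test each E-choose-N edge subset (an n-edge subset is a tree iff it is acyclic and touches exactly n + 1 vertices):

```python
from itertools import combinations

def trees_of_size(edges, n):
    """Yield each n-edge subset of `edges` that forms a tree."""
    for subset in combinations(edges, n):
        parent = {}
        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x
        acyclic = True
        for (u, v) in subset:
            ru, rv = find(u), find(v)
            if ru == rv:
                acyclic = False                 # edge closes a cycle
                break
            parent[ru] = rv
        # n acyclic edges over exactly n + 1 vertices => connected, i.e. a tree
        if acyclic and len(parent) == n + 1:
            yield subset
```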
EDIT:
We can also use the fact that a tree is a combination of trees, i.e. that each tree of size N can be "grown" by adding an edge to a tree of size N-1. Let E be the set of edges in the graph. An algorithm could then go something like this.
T = E          (every single edge is a tree of size 1)
n = 1
while n < N:
    newT = empty set
    for each tree t in T:
        for each edge e in E:
            if t + e is a tree of size n+1 which is not yet in newT:
                add t + e to newT
    T = newT
    n = n + 1
At the end of this algorithm, T is the set of all subtrees of size N. If space is an issue, don't keep a full list of the trees, but use a compact representation, for instance implement T as a decision tree using ID3.
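A compact Python sketch of this growing scheme, representing each tree as a frozenset of edges so duplicates are deduplicated for free (the edge-list representation is an assumption for illustration):

```python
def grow_trees(edges, n_target):
    """Return all subtrees with n_target edges, grown one edge at a time."""
    edges = [tuple(sorted(e)) for e in edges]
    # size-1 trees: every single edge
    level = {frozenset([e]) for e in edges}
    for _ in range(n_target - 1):
        nxt = set()
        for t in level:
            verts = {v for e in t for v in e}
            for e in edges:
                if e in t:
                    continue
                u, v = e
                # adding e keeps t a tree iff exactly one endpoint is in t
                if (u in verts) != (v in verts):
                    nxt.add(t | {e})
        level = nxt
    return level
```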
I think the problem is under-specified. You mentioned that the graph is undirected and that the subgraph you are trying to find is of size N. What is missing is the number of edges, and whether the trees you are looking for are binary or may have arbitrary branching. Also, are you interested in mirrored reflections of the same tree? In other words, does the order in which siblings are listed matter at all?
A single node in the tree should presumably be allowed to have more than two children, given that you don't specify any restriction on the initial graph and you mentioned that the resulting subgraph should contain all nodes.
You can enumerate all subgraphs that have the form of a tree by performing a depth-first traversal. You need to repeat the traversal of the graph for every sibling encountered, and then repeat the whole operation with every node as the root.
Discarding symmetric trees, you will end up with
N^(N-2)
trees if your graph is a fully connected mesh; otherwise you need to apply Kirchhoff's matrix-tree theorem.
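For reference, Kirchhoff's matrix-tree theorem mentioned above can be sketched in pure Python: the number of spanning trees equals any cofactor of the graph Laplacian, computed here with exact Fraction arithmetic (a sketch; real code would use a linear-algebra library):

```python
from fractions import Fraction

def spanning_tree_count(n, edges):
    """Number of spanning trees of an undirected graph on nodes 0..n-1."""
    # build the Laplacian: degree on the diagonal, -1 per edge off-diagonal
    L = [[Fraction(0)] * n for _ in range(n)]
    for (u, v) in edges:
        L[u][u] += 1
        L[v][v] += 1
        L[u][v] -= 1
        L[v][u] -= 1
    # delete row 0 and column 0, then take the determinant of the minor
    M = [row[1:] for row in L[1:]]
    m = n - 1
    det = Fraction(1)
    for i in range(m):
        pivot = next((r for r in range(i, m) if M[r][i] != 0), None)
        if pivot is None:
            return 0
        if pivot != i:
            M[i], M[pivot] = M[pivot], M[i]
            det = -det                    # row swap flips the sign
        det *= M[i][i]
        for r in range(i + 1, m):
            factor = M[r][i] / M[i][i]
            for c in range(i, m):
                M[r][c] -= factor * M[i][c]
    return int(det)
```

For the complete graph K_n this reproduces Cayley's formula n^(n-2).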
