Elegant and efficient way to traverse through edges and resolve intersections - algorithm

I have an array which represents the adjacency matrix of a graph, like this:
connections:Array = [0,0,1,1,
                     0,0,1,1,
                     0,0,0,1,
                     0,0,0,0];
Every node of the graph has a 2D point assigned to it which represents its position on a plane.
I am trying to write a function which traverses all the edges and returns false if any two of the edges intersect. Here is my code:
function test():boolean
{
    for (i = 0; i < nodes.length; i++)
    {
        for (j = i; j < nodes.length; j++)
        {
            if (connections[i * nodes.length + j] == 1)
            {
                //we found an edge
                //This is the place where I am stuck; I can't figure out how
                //to take pairs of edges to test them for intersections
            }
        }
    }
}
You can give the answer in any language or even pseudo code.
Note: I don't need any code for the intersection algorithm itself.

While traversing the adjacency matrix, build an edge-centered data structure. The easiest option is a plain list of edges. Then you can iterate all previously visited edges, which will give you a complexity of O(V^2 + E^2), where V is the number of vertices and E the number of edges.
If you have potentially large graphs, you might want to use more efficient data structures, such as a BSP tree that you build on-the-fly. This will get your complexity down to O(V^2 + E log E).
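For illustration, here is a minimal Python sketch of the edge-list approach. The function name edges_intersect and the caller-supplied segments_intersect predicate are my own assumptions; the segment intersection test itself is out of scope, per the question. Note it returns True when a crossing is found.

def edges_intersect(nodes, connections, segments_intersect):
    # nodes: list of 2D points; connections: flattened n*n adjacency matrix
    n = len(nodes)
    edges = []  # edges discovered so far
    for i in range(n):
        for j in range(i + 1, n):
            if connections[i * n + j] == 1:
                # test the new edge (i, j) against every previously found edge
                for (a, b) in edges:
                    # skip pairs sharing an endpoint; they touch but don't cross
                    if len({a, b, i, j}) == 4 and \
                            segments_intersect(nodes[a], nodes[b], nodes[i], nodes[j]):
                        return True
                edges.append((i, j))
    return False  # no two edges intersect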

Related

Count triangles (cycles of length 3) in a graph

In an undirected graph with V vertices and E edges, how would you count the number of triangles in O(|V||E|)? I see the algorithm here, but I'm not exactly sure how it would be implemented to achieve that complexity. Here's the code presented in that post:
for each edge (u, v):
    for each vertex w:
        if (v, w) is an edge and (w, u) is an edge:
            return true
return false
Would you use an adjacency list representation of the graph to traverse all edges in the outer loop and then an adjacency matrix to check for the existence of the 2 edges in the inner loop?
Also, I saw another solution presented as O(|V||E|) which involves performing a depth-first search on the graph, and when you encounter a backedge (u, v) from the vertex u you're visiting, checking whether the grandparent of the vertex u is vertex v. If it is, then you have found a triangle. Is this algorithm correct? If so, wouldn't it be O(|V|+|E|)? In the post I linked to there is a counterexample for the breadth-first search solution offered up, but based on the examples I came up with, it seems like the depth-first search method I outlined above works.
Firstly, note that the algorithm does not so much count the number of triangles as return whether one exists at all.
For the first algorithm, the analysis becomes simple if we assume that we can look up whether (a, b) is an edge in constant time. (Since we loop over all vertices for all edges, and only do constant-time work inside, we get O(|V||E|).) Telling whether something is a member of a set in constant time can be done using, for example, a hashtable/set. We could also, as you said, use the adjacency matrix, which we could create beforehand by looping over all the edges, without changing our total complexity.
An adjacency list representation could perhaps be used for looping over the edges, but traversing it may be O(|V|+|E|), giving us the total complexity O(|V||V| + |V||E|), which may be more than we wanted. If that is the case, we should instead loop over it once first and add all our edges to a normal collection (like a list).
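As a concrete sketch of the first algorithm in Python, with the edges stored in a set for the constant-time lookup (the function name and input format here are my own, not from the post):

def has_triangle(vertices, edges):
    # a set of ordered pairs gives the O(1) "is an edge" test discussed above
    edge_set = set(edges) | {(v, u) for (u, v) in edges}
    for (u, v) in edges:          # O(|E|) iterations
        for w in vertices:        # O(|V|) iterations
            if (v, w) in edge_set and (w, u) in edge_set:
                return True       # triangle u-v-w found
    return False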
For your proposed DFS algorithm, the problem is that we cannot be sure to encounter a certain edge as a backedge at the correct moment, as is illustrated by the following counterexample:
A -- B --- C -- D
      \   /     |
       E ------ F
Here, if we go A-B-C-E and then find the backedge E-B, we correctly find the triangle; but if we instead go A-B-C-D-F-E, the backedges E-B and E-C no longer satisfy our condition.
This is a naive approach to counting the number of triangles.
We need the input in the form of an adjacency matrix.
public int countTricycles(int[][] adj) {
    int n = adj.length;
    int count = 0;
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            if (adj[i][j] != 0) {
                for (int k = 0; k < n; k++) {
                    // vertices i, j, k form a triangle
                    if (k != i && adj[j][k] != 0 && adj[i][k] != 0) {
                        count++;
                    }
                }
            }
        }
    }
    // each triangle is counted once per permutation of its 3 vertices
    return count / 6;
}
The complexity would be O(n^3).

Longest path in ordered graph

Let G = (V, E) be a directed graph with nodes v_1, v_2,..., v_n. We say that G is an ordered graph if it has the following properties.
Each edge goes from a node with lower index to a node with a higher index. That is, every directed edge has the form (v_i, v_j) with i < j.
Each node except v_n has at least one edge leaving it. That is, for every node v_i, there is at least one edge of the form (v_i, v_j).
Give an efficient algorithm that takes an ordered graph G and returns the length of the longest path that begins at v_1 and ends at v_n.
If you want to see the nice LaTeX version: here
My attempt:
Dynamic programming. Opt(i) = max{Opt(j)} + 1 for all j such that j is reachable from i.
Is there perhaps a better way to do this? I think even with memoization my algorithm will still be exponential. (This is just from an old midterm review I found online.)
Your approach is right, you will have to do
Opt(i) = max{Opt(j)} + 1 for all j such that j is reachable from i
However, this is exponential only if you run it without memoization. With memoization, you will have the memoized optimal value for every node j, j > i, when you are on node i.
For the worst-case complexity, let us assume that every two nodes that could be connected are connected. That is, v_1 is connected with (v_2, v_3, ..., v_n), and in general v_i is connected with (v_(i+1), v_(i+2), ..., v_n).
Number of vertices (V) = n
Hence, number of edges (E) = n*(n-1)/2 = O(V^2)
Let us focus our attention on a vertex v_k. For this vertex, we have to go through the already derived optimal values of (n-k) nodes.
Number of ways of reaching v_k directly = (k-1)
Hence the worst-case time complexity is sigma((k-1)*(n-k)) for k = 1 to n, which is a sum over a degree-2 polynomial and hence results in O(n^3) time complexity.
Simplistically, the worst-case time complexity is O(n^3) == O(V^3) == O(E) * O(V) == O(EV).
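For illustration, a short memoized Python sketch of this recurrence (the adjacency-list input format, adj[i] = list of j > i with an edge (v_i, v_j), is my assumption):

from functools import lru_cache

def longest_path_memo(n, adj):
    @lru_cache(maxsize=None)
    def opt(i):
        # length of the longest path from v_i to v_n
        if i == n:
            return 0
        return max((opt(j) + 1 for j in adj[i]), default=float('-inf'))
    return opt(1)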
Thanks to the first property, this problem can be solved in O(V^2), or even better in O(E), where V is the number of vertices and E is the number of edges. Indeed, it uses a dynamic programming approach which is quite similar to the one you give. Let opt[i] be the length of the longest path from v_1 to v_i. Then
opt[i] = max(opt[j]) + 1, where j < i and v_i and v_j are connected,
and using this equation it can be solved in O(V^2).
Even better, we can solve this in another order.
int LongestPath() {
    for (int v = 1; v <= V; ++v) opt[v] = -1;
    opt[1] = 0;
    for (int v = 1; v <= V; ++v) {
        if (opt[v] >= 0) {
            /* Each edge can be visited at most once,
               thus the runtime is bounded by |E|. */
            for_each( v' that can be reached from v )
                opt[v'] = max(opt[v]+1, opt[v']);
        }
    }
    return opt[V];
}
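The same order of evaluation in concrete Python, again assuming adj[v] lists the (higher-indexed) nodes reachable from v by one edge:

def longest_path(n, adj):
    opt = [-1] * (n + 1)   # -1 marks "not reachable from v_1"
    opt[1] = 0
    for v in range(1, n + 1):
        if opt[v] >= 0:
            for w in adj[v]:               # each edge relaxed once: O(E) total
                opt[w] = max(opt[w], opt[v] + 1)
    return opt[n]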

Floyd Warshall algorithm with maximum steps allowed

Is it possible to modify the Floyd-Warshall algorithm when solving the shortest path problem for a directed weighted graph with n nodes, such that each shortest path has no more than m steps? More precisely, for each pair of nodes i and j, you are about to find the shortest directed path from i to j that contains no more than m nodes. Does the time complexity still remain O(n^3)?
Meanwhile, I found an O(n^3 log m) algorithm for the all-pairs shortest paths (APSP) problem on a graph with n nodes such that no path contains more than m nodes.
Given two n x n matrices, say A = [a_ij] and B = [b_ij], their distance product is the n x n matrix C = [c_ij] = A x B, defined by c_ij = min over 1 <= k <= n of (a_ik + b_kj).
This is related to APSP in the following way. Given the weighted matrix E of distances in the graph, E^n (under the distance product) is the matrix of all-pairs shortest paths. If we add the constraint that no path contains more than m nodes, then the matrix E^m is the solution to APSP. Since the power can be computed with O(log m) distance products (by repeated squaring), this gives us an O(n^3 log m) algorithm.
Here one may find faster algorithms for calculating the distance product of matrices in some special cases, but the trivial O(n^3) one is enough for me, since the overall time is almost as fast as the Floyd-Warshall approach.
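A minimal Python sketch of the distance product and the repeated-squaring power (naive O(n^3) product; INF marks a missing edge; the function names are mine):

INF = float('inf')

def dist_product(A, B):
    # C[i][j] = min over k of A[i][k] + B[k][j]
    n = len(A)
    C = [[INF] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            if A[i][k] == INF:
                continue
            for j in range(n):
                if A[i][k] + B[k][j] < C[i][j]:
                    C[i][j] = A[i][k] + B[k][j]
    return C

def bounded_apsp(E, m):
    # E^m under the distance product, via repeated squaring:
    # O(log m) products of O(n^3) each => O(n^3 log m) overall
    n = len(E)
    result = [[0 if i == j else INF for j in range(n)] for i in range(n)]  # identity
    base = [row[:] for row in E]
    while m > 0:
        if m & 1:
            result = dist_product(result, base)
        base = dist_product(base, base)
        m >>= 1
    return result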
Yes and Yes.
Each iteration of the algorithm adds a single unit to the length of the paths you search for. So if you limit the iterations to m, then you find paths of length at most m.
The complexity will remain O(n^3) in the worst case of m -> n. However, a more precise estimate would be O(m * n^2).
I believe this could be done with a different data structure, one that allows you to keep track of the number of steps.
Since Floyd-Warshall is usually done with a connectivity matrix (where matrix[j][k] represents the distance between nodes j and k), instead of making that matrix an integer we can make it a struct that has two integers: the distance between two nodes and the number of steps between them.
I wrote up something in C++ to explain what I mean :
#include <iostream>
using namespace std;

const int INF = 9999999;

struct floydEdge
{
    int weight;
    int steps;
};

floydEdge matrix[10][10];

int main()
{
    //We initialize the matrix
    for(int j=0;j<10;j++)
    {
        for(int k=0;k<10;k++)
        {
            matrix[j][k].weight=INF;
            matrix[j][k].steps=0;
        }
    }
    //We input some weights
    for(int j=0;j<10;j++)
    {
        int a, b;
        cin>>a>>b;
        cin>>matrix[a][b].weight;
        matrix[b][a].weight=matrix[a][b].weight;
        matrix[a][b].steps=matrix[b][a].steps=1;
    }
    //We do the Floyd-Warshall, checking the step count as well as the weights.
    //Note: the intermediate node i must be in the outermost loop for
    //Floyd-Warshall to work correctly.
    for(int i=0;i<10;i++)
    {
        for(int j=0;j<10;j++)
        {
            for(int k=0;k<10;k++)
            {
                //We check if there is a shorter path between nodes j and k through
                //node i. We also check if that path has at most 4 steps.
                if((matrix[j][k].weight > matrix[j][i].weight + matrix[i][k].weight) && (matrix[j][i].steps+matrix[i][k].steps<=4))
                {
                    matrix[j][k].weight=matrix[k][j].weight=matrix[j][i].weight + matrix[i][k].weight;
                    matrix[j][k].steps=matrix[k][j].steps=matrix[j][i].steps+matrix[i][k].steps;
                }
                //If the path is not shorter, we can also check if it is equally
                //long BUT requires fewer steps than the path we currently have.
                else if((matrix[j][k].weight == matrix[j][i].weight + matrix[i][k].weight) && (matrix[j][i].steps+matrix[i][k].steps<matrix[j][k].steps))
                {
                    matrix[j][k].weight=matrix[k][j].weight=matrix[j][i].weight + matrix[i][k].weight;
                    matrix[j][k].steps=matrix[k][j].steps=matrix[j][i].steps+matrix[i][k].steps;
                }
            }
        }
    }
    return 0;
}
I believe (but I am not completely sure) that this works perfectly (always giving the shortest paths between all nodes). Give it a try and let me know!

Connecting a Set of Vertices into an optimally weighted graph

This is essentially the problem of connecting n destinations with the minimal amount of road possible.
The input is a set of vertices (a,b, ... , n)
The weight of an edge between two vertices is easily calculated (for example, the Cartesian distance between the two vertices)
I would like an algorithm that, given a set of vertices in Euclidean space, returns a set of edges that constitute a connected graph whose total edge weight is as small as possible.
In graph language, this is the Minimum Spanning Tree of a Connected Graph.
With brute force I would have:
Define all possible edges between all vertices: with n vertices, you have n(n-1)/2 edges in the complete graph
A possible edge can be on or off (2 states)
Go through all possible edge on/off combinations: 2^(n(n-1)/2)!
Ignore all those that would not connect the graph
From the remaining combinations, find the one whose sum of edge weights is the smallest of all
I understand this is an NP-Hard problem. However, realistically for my application, I will have a maximum of 11 vertices. I would like to be able to solve this on a typical modern smartphone, or at the very least on a small server.
As a second variation, I would like to achieve the same goal, with the restriction that each vertex is connected to a maximum of two other vertices. Essentially obtaining a single trace, starting from any point and finishing at any other point, as long as the graph is connected. There is no need to go back to where you started. In graph language, this is the Open Euclidean Traveling Salesman Problem.
Some pseudocode algorithms would be very helpful.
OK, for the first problem you have to build a Minimum Spanning Tree. There are several algorithms to do so, e.g. Prim's and Kruskal's. But also take a look, in the first link, at the treatment for complete graphs, which is your case.
For the second problem, it becomes a little more complicated. The problem becomes an open Traveling Salesman Problem (oTSP). Reading the previous link, maybe focus on the Euclidean and asymmetric variants.
Regards
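For example, an O(n^2) Prim's sketch in Python for the complete Euclidean graph (the function name and input format are my own):

import math

def prim_mst(points):
    # points: list of (x, y) tuples; returns the n-1 edges of a minimum spanning tree
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n   # cheapest known edge from each vertex into the tree
    parent = [-1] * n
    best[0] = 0.0
    edges = []
    for _ in range(n):
        # pick the cheapest vertex not yet in the tree
        v = min((u for u in range(n) if not in_tree[u]), key=lambda u: best[u])
        in_tree[v] = True
        if parent[v] != -1:
            edges.append((parent[v], v))
        for u in range(n):
            d = math.dist(points[v], points[u])
            if not in_tree[u] and d < best[u]:
                best[u] = d
                parent[u] = v
    return edges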
Maybe you could try a greedy algorithm:
1. Create a list sortedList that stores each pair of nodes i and j
   and is sorted by the weight w(i,j).
2. Create a HashSet connectedNodes that is empty at the beginning.
3. while (connectedNodes.size() < n)
       element := first element of sortedList
       if (connectedNodes.isEmpty())
           connectedNodes.put(element.nodeI);
           connectedNodes.put(element.nodeJ);
           delete element from sortedList
       else
           for (element in sortedList) // start again with the first
               if (connectedNodes.get(element.nodeI) || connectedNodes.get(element.nodeJ))
                   if (!(connectedNodes.get(element.nodeI) && connectedNodes.get(element.nodeJ)))
                       // so it does not already include both nodes
                       connectedNodes.put(element.nodeI);
                       connectedNodes.put(element.nodeJ);
                       delete element from sortedList
                       break;
               else
                   continue;
So let me explain step 3 a little bit:
You keep adding nodes until all nodes are connected to another one. The graph is guaranteed to be connected, because you only add a node if it has a connection to another node already in the connectedNodes set.
This algorithm is greedy, which means it does not guarantee that the solution is optimal. But it is a quite good approximation, because it always takes the shortest available edge (since sortedList is sorted by the weight of the edges).
You don't get duplicates in connectedNodes, because it is a HashSet, which also makes the runtime faster.
All in all, the runtime should be O(n^2 log n) for the sorting at the beginning, and after that around O(n^3), because in the worst case you run through the whole list (of size n^2) in every step, and you do this n times, since you add one element in each step.
But it is more likely that you find an element much faster than O(n^2); I think in most cases it is O(n).
You can solve the traveling salesman problem and the Hamiltonian path problem with the OptiMap TSP solver from Gebweb or with a linear program solver. But the first question seems to ask for a minimum spanning tree; maybe the question tag is wrong?
For the first problem, there is an O(n^2 * 2^n) time algorithm. Basically, you can use dynamic programming to reduce the search space. Let's say the set of all vertices is V, so the state space consists of all subsets of V, and the objective function f(S) is the minimum sum of weights of the edges connecting vertices in S. For each state S, you may enumerate over all edges (u, v) where u is in S and v is in V - S, and update f(S + {v}). After checking all possible states, the optimal answer is then f(V).
Below is the sample code to illustrate the idea, but it is implemented in a backward approach.
const int INF = 1000000000;
const int n = 11;
int weight[n][n];
int f[1 << n];

// base cases: a single vertex needs no edges
for (int i = 0; i < n; ++i)
    f[1 << i] = 0;

for (int state = 1; state < (1 << n); ++state)
{
    if ((state & (state - 1)) == 0) continue; // single-vertex states already set
    int res = INF;
    for (int i = 0; i < n; ++i)
    {
        if ((state & (1 << i)) == 0) continue;
        for (int j = 0; j < n; ++j)
        {
            if (j == i || (state & (1 << j)) == 0) continue;
            // j is the vertex removed from the state; connect it to i
            if (res > f[state - (1 << j)] + weight[i][j])
            {
                res = f[state - (1 << j)] + weight[i][j];
            }
        }
    }
    f[state] = res;
}
printf("%d\n", f[(1 << n) - 1]);
For the second problem, sorry I don't quite understand it. Maybe you should provide some examples?

Finding the heaviest length-constrained path in a weighted Binary Tree

UPDATE
I worked out an algorithm that I think runs in O(n*k) time. Below is the pseudo-code:
routine heaviestKPath( T, k )
    // create a 2D matrix with n rows and k+1 columns, each element = -∞
    // (the 0th column must be all 0s for a later function to work properly)
    matrix = new array[ T.getVertexCount() ][ k + 1 ] (-∞);
    // set all elements in the first column of this matrix = 0
    for each row n: matrix[ n ][ 0 ] = 0;
    // fill our matrix by traversing the tree
    traverseToFillMatrix( T.root, k );
    // consider a path that would arc over a node
    globalMaxWeight = -∞;
    findArcs( T.root, k );
    return globalMaxWeight
end routine

// node = the current node; k = the path length; node.lc = node's left child;
// node.rc = node's right child; node.idx = node's index (row) in the matrix;
// node.lc.wt/node.rc.wt = weight of the edge to the left/right child
routine traverseToFillMatrix( node, k )
    if (node == null) return;
    traverseToFillMatrix( node.lc, k ); // recurse left
    traverseToFillMatrix( node.rc, k ); // recurse right
    // in the case that a left/right child doesn't exist, or both,
    // let's assume the code is smart enough to handle these cases
    matrix[ node.idx ][ 1 ] = max( node.lc.wt, node.rc.wt );
    for i = 2 to k {
        // max returns the heavier of the 2 paths
        matrix[ node.idx ][ i ] = max( matrix[ node.lc.idx ][ i-1 ] + node.lc.wt,
                                       matrix[ node.rc.idx ][ i-1 ] + node.rc.wt );
    }
end routine

// node = the current node, k = the path length
routine findArcs( node, k )
    if (node == null) return;
    nodeMax = matrix[ node.idx ][ k ];
    i = 1;
    j = k-1;
    while ( i+j == k AND i < k ) {
        left  = node.lc.wt + matrix[ node.lc.idx ][ i-1 ];
        right = node.rc.wt + matrix[ node.rc.idx ][ j-1 ];
        if ( left + right > nodeMax ) {
            nodeMax = left + right;
        }
        i++; j--;
    }
    // if this node's max weight is larger than the global max weight, update
    if ( globalMaxWeight < nodeMax ) {
        globalMaxWeight = nodeMax;
    }
    findArcs( node.lc, k ); // recurse left
    findArcs( node.rc, k ); // recurse right
end routine
Let me know what you think. Feedback is welcome.
I think I have come up with two naive algorithms that find the heaviest length-constrained path in a weighted Binary Tree. Firstly, the description of the problem is as follows: given an n-vertex Binary Tree with weighted edges and some value k, find the heaviest path of length k.
For both algorithms, I'll need references to all vertices, so I'll just do a simple traversal of the Tree to collect them, with each vertex keeping a reference to its left, right, and parent nodes in the tree.
Algorithm 1
For this algorithm, I'm basically planning on running DFS from each node in the Tree, with consideration to the fixed path length. In addition, since the path I'm looking for has the potential of going from the left subtree to the root to the right subtree, I will have to consider 3 choices at each node. But this will result in an O(n*3^k) algorithm and I don't like that.
Algorithm 2
I'm essentially thinking about using a modified version of Dijkstra's Algorithm in order to consider a fixed path length. Since I'm looking for the heaviest path and Dijkstra's Algorithm finds the lightest, I'm planning on negating all edge weights before starting the traversal. Actually... this doesn't make sense, since I'd have to run Dijkstra's from each node, and that doesn't seem much more efficient than the above algorithm.
So I guess I have a few main questions. Firstly, do the algorithms I've described above solve the problem at hand? I'm not totally certain the Dijkstra's version will work, as Dijkstra's is meant for positive edge values.
Now, I am sure there exist more clever/efficient algorithms for this... what is a better algorithm? I've read about "Using spine decompositions to efficiently solve the length-constrained heaviest path problem for trees" but that is really complicated and I don't understand it at all. Are there other algorithms that tackle this problem, maybe not as efficiently as spine decomposition but easier to understand?
You could use a DFS downwards from each node that stops after k edges to search for paths, but notice that this will do 2^k work at each node for a total of O(n*2^k) work, since the number of paths doubles at each level you go down from the starting node.
As DasBoot says in a comment, there is no advantage to using Dijkstra's algorithm here, since its cleverness amounts to choosing the shortest (or longest) way to get between 2 points when multiple routes are possible. With a tree there is always exactly 1 way.
I have a dynamic programming algorithm in mind that will require O(nk) time. Here are some hints:
If you choose some leaf vertex to be the root r and direct all other vertices downwards, away from the root, notice that every path in this directed tree has a highest node -- that is, a unique node that is nearest to r.
You can calculate the heaviest length-k path overall by going through each node v and calculating the heaviest length-k path whose highest node is v, finally taking the maximum over all nodes.
A length-k path whose highest node is v must have a length-i path descending towards one child and a length-(k-i) path descending towards the other.
That should be enough to get you thinking in the right direction; let me know if you need further help.
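One possible Python reading of those hints, as a rough O(nk) sketch (the node fields left, right and edge weights lw, rw are my assumptions; this is my interpretation, not the answerer's code):

NEG = float('-inf')

def heaviest_k_path(root, k):
    best = NEG

    def down(node):
        # returns d where d[i] = heaviest weight of an i-edge path descending
        # from node (d[0] = 0; unreachable lengths stay -inf)
        nonlocal best
        if node is None:
            return None
        dl, dr = down(node.left), down(node.right)
        L = [0] + [NEG] * k
        R = [0] + [NEG] * k
        for i in range(1, k + 1):
            if dl is not None and dl[i - 1] != NEG:
                L[i] = node.lw + dl[i - 1]
            if dr is not None and dr[i - 1] != NEG:
                R[i] = node.rw + dr[i - 1]
        # a length-k path whose highest node is this one splits as i + (k - i)
        for i in range(k + 1):
            if L[i] != NEG and R[k - i] != NEG:
                best = max(best, L[i] + R[k - i])
        return [max(L[i], R[i]) for i in range(k + 1)]

    down(root)
    return best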
Here's my solution. Feedback is welcome.
Let's treat the binary tree as a directed graph, with edges going from parent to children. Let's define two concepts for each vertex v:
a) an arc: a directed path that starts from vertex v, where every vertex in the path is a descendant of the starting vertex v.
b) a child-path: a directed or undirected path containing v; that is, it could start anywhere, end anywhere, and go from a child of v to v, and then, say, to its other child. The set of arcs is a subset of the set of child-paths.
We also define a function HeaviestArc(v,j), which gives, for a vertex v, the heaviest arc of length j starting at v, on either the left or the right side. We also define LeftHeaviest(v,j) and RightHeaviest(v,j) as the heaviest left and right arcs of length j, respectively.
Given this, we can define the following recurrences for each vertex v, based on its children:
LeftHeaviest(v,j)  = weight(LeftEdge(v))  + HeaviestArc(LeftChild(v), j-1);
RightHeaviest(v,j) = weight(RightEdge(v)) + HeaviestArc(RightChild(v), j-1);
HeaviestArc(v,j)   = max(LeftHeaviest(v,j), RightHeaviest(v,j));
Here j goes from 1 to k, and HeaviestArc(v,0) = LeftHeaviest(v,0) = RightHeaviest(v,0) = 0 for all v. For leaf nodes, HeaviestArc(v,0) = 0, and HeaviestArc(v,j) = -inf for all other j (I need to think about corner cases more thoroughly).
And then HeaviestChildPath(v), the heaviest child-path containing v, can be calculated as:
HeaviestChildPath(v) = max{ LeftHeaviest(v,j) + RightHeaviest(v,k-j) for j = 0 to k }
The heaviest path should be the heaviest of all child paths.
The estimated runtime of the algorithm should be O(kn).
# Naive exhaustive search: from every node, explore all walks of length N,
# keeping the best running weight seen. Note that nothing prevents a walk
# from immediately reusing the edge it just came along.
def traverse(node, running_weight, level):
    global max_weight
    if node is None:
        return
    if level == 0:
        if max_weight < running_weight:
            max_weight = running_weight
        return
    traverse(node.left, running_weight + node.weight, level - 1)
    traverse(node.right, running_weight + node.weight, level - 1)
    traverse(node.parent, running_weight + node.weight, level - 1)

max_weight = 0
for node in tree:
    traverse(node, 0, N)
