I have an undirected, unweighted graph represented as an adjacency matrix. Each node of the graph represents a space partition (e.g. a state), while the edges represent the neighborhood relationship (i.e. neighboring states sharing a common boundary). My baseline algorithm uses DFS to traverse the graph and forms a subgraph after each step (i.e. adding the newly visited node, which yields a set of contiguous states). On that subgraph I perform a statistical significance test on certain patterns that exist in the nodes (i.e. within the states).
At this point I am essentially trying to make the traversal step faster.
I was wondering if you all could suggest any algorithm or resources (e.g. research paper) which performs graph traversal computationally faster than DFS.
Thanks for your suggestion and your time!
Most graph algorithms contain "for a given vertex u, list all its neighbors v" as a primitive. I'm not sure, but it sounds like you want to speed up this piece. Indeed, each state has only a few neighbors, typically far fewer than the total number of states. If this is the case, replace the adjacency matrix representation with adjacency lists.
Note that the algorithm itself (DFS or other) will likely remain the same, with just a few changes where it uses this primitive.
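For example, a minimal sketch of that change in Python (assuming the matrix is a list of 0/1 rows; the DFS shown is just a generic iterative traversal, not your exact algorithm):

    # Convert a 0/1 adjacency matrix into adjacency lists, so that enumerating
    # the neighbors of u costs O(deg(u)) instead of O(n).
    def matrix_to_adjacency_lists(matrix):
        n = len(matrix)
        adj = [[] for _ in range(n)]
        for u in range(n):
            for v in range(n):
                if matrix[u][v]:
                    adj[u].append(v)
        return adj

    # The traversal itself stays the same; only the neighbor enumeration gets cheaper.
    def dfs(adj, start, visit):
        seen = [False] * len(adj)
        stack = [start]
        while stack:
            u = stack.pop()
            if seen[u]:
                continue
            seen[u] = True
            visit(u)              # e.g. grow the current subgraph and run your test
            for v in adj[u]:
                if not seen[v]:
                    stack.append(v)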
I am trying to get a vertex cover for an "almost" tree with 50,000 vertices. The graph is generated as a tree, with random edges then added in, making it "almost" a tree.
I used the approximation method where you "marry" two vertices (take both endpoints of an uncovered edge), add them to the cover, remove them from the graph, and then move on to another pair of vertices. After that I tried to reduce the size of the cover by removing the vertices that have all of their neighbors inside the vertex cover.
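Roughly, a sketch of that approach (assuming the graph is a dict mapping each vertex to the set of its neighbors; simplified, not my exact code):

    # 2-approximation: repeatedly pick an uncovered edge (u, v) and put both
    # endpoints into the cover (the endpoints of a maximal matching).
    def approx_vertex_cover(adj):              # adj: {vertex: set of neighbors}
        cover = set()
        for u in adj:
            if u in cover:
                continue
            for v in adj[u]:
                if v not in cover:             # edge (u, v) is still uncovered
                    cover.add(u)
                    cover.add(v)
                    break
        return cover

    # Pruning pass: drop a cover vertex if all of its neighbors are (still) in the
    # cover, so every edge incident to it remains covered.
    def prune(adj, cover):
        cover = set(cover)
        for u in list(cover):
            if all(v in cover for v in adj[u]):
                cover.discard(u)
        return cover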
My question is how would I make the vertex cover even smaller? I'm trying to go as low as I can.
Here's an idea, but I have no idea if it is an improvement in practice:
From https://en.wikipedia.org/wiki/Biconnected_component "Any connected graph decomposes into a tree of biconnected components called the block-cut tree of the graph." Furthermore, you can compute such a decomposition in linear time.
I suggest that when you marry and remove two vertices you do this only for two vertices within the same biconnected component. When you have run out of vertices to merge you will have a set of trees not connected with each other. The vertex cover problem on trees is tractable via dynamic programming: for each node compute the cost of the best answer if that node is added to the cover and if that node is not added to the cover. You can compute the answers for a node given the best answers for its children.
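A minimal sketch of that DP (iterative rather than recursive, since at 50,000 vertices recursion depth can be a problem; assuming the trees are given as adjacency lists indexed 0..n-1):

    # Minimum vertex cover of a tree/forest by dynamic programming.
    # cost_in[u]  = best cover size of u's subtree if u IS in the cover
    # cost_out[u] = best cover size if u is NOT (then every child must be in)
    def tree_vertex_cover_size(adj):
        n = len(adj)
        seen = [False] * n
        cost_in = [0] * n
        cost_out = [0] * n
        total = 0
        for root in range(n):
            if seen[root]:
                continue
            seen[root] = True
            parent = {root: None}
            order, stack = [], [root]
            while stack:                       # DFS just to get a processing order
                u = stack.pop()
                order.append(u)
                for v in adj[u]:
                    if not seen[v]:
                        seen[v] = True
                        parent[v] = u
                        stack.append(v)
            for u in reversed(order):          # children before parents
                cost_in[u], cost_out[u] = 1, 0
                for v in adj[u]:
                    if v != parent[u]:
                        cost_in[u] += min(cost_in[v], cost_out[v])
                        cost_out[u] += cost_in[v]
            total += min(cost_in[root], cost_out[root])
        return total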
Another way - for all I know, a better one - would be to compute the minimum spanning tree of the graph, use dynamic programming to compute the best vertex cover for that tree (neglecting the links outside the tree), remove the covered links from the graph, and then continue by marrying vertices as before.
I think I prefer the minimum spanning tree one. In producing the minimum spanning tree you are deleting a small number of links. A tree with N nodes has N-1 links, so even if you don't get back the original tree you get back one with the same number of links. A vertex cover for the complete graph is also a vertex cover for the minimum spanning tree, so if the correct answer for the full graph has V vertices, there is an answer for the minimum spanning tree with at most V vertices. If there were k random edges added to the tree, there are k edges (not necessarily the same ones) that need to be added to turn the minimum spanning tree into the full graph. You can certainly make sure these new edges are covered with at most k vertices. So if the optimum answer has V vertices you will obtain an answer with at most V+k vertices.
Here's an attempt at an exact answer which is tractable when only a small number of links are added, or when they don't change the inter-node distances very much.
Find a minimum spanning tree, and divide edges into "tree edges" and "added edges", where the tree edges form a minimum spanning tree, and the added edges were not chosen for this. They may not be the edges actually added during construction but that doesn't matter. All trees on N nodes have N-1 edges so we have the same number of added edges as were used during creation, even if not the same edges.
Now pretend you can peek at the answer in the back of the book just enough to see, for one vertex from each added edge, whether that vertex was part of the best vertex cover. If it was, you can remove that vertex and its links from the problem. If not, the other vertex must be in the cover, so you can remove it and its links from the problem.
You now have to find a minimum vertex cover for a tree or a number of disconnected trees, and we know how to do this - see my other answer for a bit more handwaving.
If you can't peek at the back of the book for an answer, and there are k added edges, try all 2^k possible answers that might have been in the back of the book and find the best. If you are lucky then added link A is in a different subtree from added link B. In that case you can confine the two calculations needed for the two possibilities for added link A (or B) to the dynamic programming calculations for the relevant subtree so you have only doubled the work instead of quadrupled it. In general, if your k added edges are in k different subtrees that don't interfere with each other, the cost is multiplied by 2 instead of 2^k.
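To make the "peek at the back of the book" idea concrete, here is an unoptimized sketch of the plain 2^k enumeration (without the per-subtree trick described above): for each added edge, pick which endpoint goes into the cover, force those vertices in, and run the tree DP with the "not in cover" state of a forced vertex set to infinity. It assumes the tree edges are given as adjacency lists and the added edges as a list of (a, b) pairs:

    from itertools import product

    def exact_cover_size(tree_adj, added_edges):
        best = float("inf")
        # For each added edge choose which endpoint is forced into the cover.
        for choice in product((0, 1), repeat=len(added_edges)):
            forced = {edge[side] for edge, side in zip(added_edges, choice)}
            best = min(best, constrained_tree_cover(tree_adj, forced))
        return best

    # Tree vertex cover DP where every vertex in `forced` must be in the cover.
    def constrained_tree_cover(adj, forced):
        INF = float("inf")
        n = len(adj)
        seen = [False] * n
        cin, cout = [0] * n, [0] * n           # node in / not in the cover
        total = 0
        for root in range(n):
            if seen[root]:
                continue
            seen[root] = True
            parent = {root: None}
            order, stack = [], [root]
            while stack:
                u = stack.pop()
                order.append(u)
                for v in adj[u]:
                    if not seen[v]:
                        seen[v] = True
                        parent[v] = u
                        stack.append(v)
            for u in reversed(order):
                cin[u] = 1
                cout[u] = INF if u in forced else 0
                for v in adj[u]:
                    if v != parent[u]:
                        cin[u] += min(cin[v], cout[v])
                        cout[u] += cin[v]
            total += min(cin[root], cout[root])
        return total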
Minimum vertex cover is an NP-hard problem, which means that, in general, you cannot expect to solve it exactly in a reasonable time even for something like 100 vertices (not to mention 50k).
For a tree there is a polynomial time greedy algorithm which is based on DFS, but the fact that you have "random edges added" screws everything up and makes this algorithm useless.
Wikipedia has an article about the approximation algorithm; it claims that it achieves factor 2 and that no better constant-factor algorithm is known, which makes it quite unlikely that you will find one.
What is the exact difference between Dijkstra's and Prim's algorithms? I know Prim's will give an MST, but the tree generated by Dijkstra will also be an MST. Then what is the exact difference?
Prim's algorithm constructs a minimum spanning tree for the graph, which is a tree that connects all nodes in the graph and has the least total cost among all trees that connect all the nodes. However, the length of a path between any two nodes in the MST might not be the shortest path between those two nodes in the original graph. MSTs are useful, for example, if you wanted to physically wire up the nodes in the graph to provide electricity to them at the least total cost. It doesn't matter that the path length between two nodes might not be optimal, since all you care about is the fact that they're connected.
Dijkstra's algorithm constructs a shortest path tree starting from some source node. A shortest path tree is a tree that connects all nodes in the graph back to the source node and has the property that the length of any path from the source node to any other node in the graph is minimized. This is useful, for example, if you wanted to build a road network that made it as efficient as possible for everyone to get to some major important landmark. However, the shortest path tree is not guaranteed to be a minimum spanning tree, and the sum of the costs on the edges of a shortest-path tree can be much larger than the cost of an MST.
Another important difference concerns what types of graphs the algorithms work on. Prim's algorithm works on undirected graphs only, since the concept of an MST assumes that graphs are inherently undirected. (There is something called a "minimum spanning arborescence" for directed graphs, but algorithms to find them are much more complicated). Dijkstra's algorithm will work fine on directed graphs, since shortest path trees can indeed be directed. Additionally, Dijkstra's algorithm does not necessarily yield the correct solution in graphs containing negative edge weights, while Prim's algorithm can handle this.
Dijkstra's algorithm doesn't create an MST; it finds the shortest path.
Consider this graph
              5     5
        s *-----*-----* t
           \         /
            ---------
                9
The shortest path is 9, while the MST is a different 'path' at 10.
Prim and Dijkstra algorithms are almost the same, except for the "relax function".
Prim:
    MST-PRIM (G, w, r) {
        for each u ∈ G.V
            u.key = ∞
            u.parent = NIL
        r.key = 0
        Q = G.V
        while (Q ≠ ø)
            u = Extract-Min(Q)
            for each v ∈ G.Adj[u]
                if (v ∈ Q)
                    alt = w(u,v)                <== relax function, pay attention here
                    if alt < v.key
                        v.parent = u
                        v.key = alt
    }
Dijkstra:
    Dijkstra (G, w, r) {
        for each u ∈ G.V
            u.key = ∞
            u.parent = NIL
        r.key = 0
        Q = G.V
        while (Q ≠ ø)
            u = Extract-Min(Q)
            for each v ∈ G.Adj[u]
                if (v ∈ Q)
                    alt = w(u,v) + u.key        <== relax function, pay attention here
                    if alt < v.key
                        v.parent = u
                        v.key = alt
    }
The only difference is pointed out by the arrow, which is the relax function.
Prim, which searches for the minimum spanning tree, only cares about minimizing the total weight of the edges that cover all the vertices. The relax function is alt = w(u,v).
Dijkstra, which searches for the minimum path length, cares about the accumulated edge weights. The relax function is alt = w(u,v) + u.key.
Dijkstra's algorithm finds the minimum distance from node i to all nodes (you specify i). So in return you get the minimum-distance tree rooted at node i.
Prim's algorithm gets you the minimum spanning tree for a given graph: a tree that connects all nodes while the sum of all edge costs is the minimum possible.
So with Dijkstra you can go from the selected node to any other with minimum cost; you don't get this with Prim's.
The only difference I see is that Prim's algorithm stores a minimum cost edge whereas Dijkstra's algorithm stores the total cost from a source vertex to the current vertex.
Dijkstra gives you a path from the source node to a destination node such that the cost is minimum. Prim's algorithm, however, gives you a minimum spanning tree such that all nodes are connected and the total cost is minimum.
In simple words:
So, if you want to deploy a train line to connect several cities, you would use Prim's algorithm. But if you want to go from one city to another saving as much time as possible, you'd use Dijkstra's algorithm.
Both can be implemented using exactly the same generic algorithm, as follows:
Inputs:
    G: Graph
    s: Starting vertex (any for Prim, source for Dijkstra)
    f: a function that takes vertices u and v, returns a number

    Generic(G, s, f)
        Q = Enqueue all V with key = infinity, parent = null
        s.key = 0
        While Q is not empty
            u = dequeue Q
            For each v in adj(u)
                if v is in Q and v.key > f(u,v)
                    v.key = f(u,v)
                    v.parent = u
For Prim, pass f = w(u, v) and for Dijkstra pass f = u.key + w(u, v).
Another interesting thing is that the above Generic can also implement breadth-first search (BFS), although it would be overkill because an expensive priority queue is not really required. To turn the Generic algorithm into BFS, pass f = u.key + 1, which is the same as setting all edge weights to 1 (i.e. BFS gives the minimum number of edges required to travel from point A to B).
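For what it's worth, here is a runnable sketch of the Generic routine in Python. It uses a lazy-deletion binary heap instead of a decrease-key priority queue (which changes the bookkeeping, not the idea), and passes f the current key of u plus the edge weight rather than the vertices themselves, which amounts to the same thing for these three instantiations. The graph format ({vertex: {neighbor: weight}}) is just an assumption for the sketch:

    import heapq

    # Generic "grow a tree from s" algorithm; the key function f decides whether
    # it behaves like Prim, Dijkstra, or BFS.
    def generic(graph, s, f):
        key = {v: float("inf") for v in graph}
        parent = {v: None for v in graph}
        key[s] = 0
        done = set()
        heap = [(0, s)]                        # lazy priority queue of (key, vertex)
        while heap:
            k, u = heapq.heappop(heap)
            if u in done or k > key[u]:
                continue                       # stale entry, skip it
            done.add(u)
            for v, w in graph[u].items():
                if v not in done and f(key[u], w) < key[v]:
                    key[v] = f(key[u], w)
                    parent[v] = u
                    heapq.heappush(heap, (key[v], v))
        return key, parent

    prim     = lambda g, s: generic(g, s, lambda ukey, w: w)         # lightest edge into the tree
    dijkstra = lambda g, s: generic(g, s, lambda ukey, w: ukey + w)  # accumulated distance from s
    bfs      = lambda g, s: generic(g, s, lambda ukey, w: ukey + 1)  # every edge counts as 1

    # Example (undirected): g = {"a": {"b": 2, "c": 5}, "b": {"a": 2, "c": 1}, "c": {"a": 5, "b": 1}}
    # dijkstra(g, "a")[0] -> {"a": 0, "b": 2, "c": 3}; prim(g, "a")[1] gives the MST edges a-b, b-c.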
Intuition
Here's one good way to think about the above generic algorithm: we start with two buckets, A and B. Initially, put all your vertices in B, so bucket A is empty. Then move one vertex from B to A. Now look at all the edges from vertices in A that cross over to vertices in B. We choose one of these cross-over edges using some criterion and move the corresponding vertex from B to A. Repeat this process until B is empty.
A brute-force way to implement this idea would be to maintain a priority queue of the edges from the vertices in A that cross over to B. Obviously that would be troublesome if the graph were not sparse. So the question is: can we instead maintain a priority queue of vertices? We can, because our decision is ultimately which vertex to pick from B.
Historical Context
It's interesting that the generic version of the technique behind both algorithms is conceptually as old as 1930, even though electronic computers weren't around then.
The story starts with Otakar Borůvka, who needed an algorithm for a family friend trying to figure out how to connect cities in Moravia (now part of the Czech Republic) with minimal-cost electric lines. He published his algorithm in 1926 in a mathematics journal, as computer science didn't exist then. This came to the attention of Vojtěch Jarník, who thought of an improvement on Borůvka's algorithm and published it in 1930. He in fact discovered the same algorithm that we now know as Prim's algorithm; Prim rediscovered it in 1957.
Independently of all this, in 1956 Dijkstra needed to write a program to demonstrate the capabilities of a new computer his institute had developed. He thought it would be cool to have the computer find connections to travel between two cities of the Netherlands. He designed the algorithm in 20 minutes. He created a graph of 64 cities with some simplifications (because his computer was 6-bit) and wrote code for this 1956 computer. However he didn't publish his algorithm, primarily because there were no computer science journals and he thought it might not be very important. The next year he learned about the problem of connecting the terminals of new computers such that the length of the wires was minimized. He thought about this problem and rediscovered Jarník's/Prim's algorithm, which again uses the same technique as the shortest-path algorithm he had discovered a year before. He mentioned that both of his algorithms were designed without using pen or paper. In 1959 he published both algorithms in a paper that is just two and a half pages long.
Dijkstra's algorithm finds the shortest path between its starting node and every other node. So in return you get the minimum-distance tree from the starting node, i.e. you can reach every other node as efficiently as possible.
Prim's algorithm gets you the MST for a given graph, i.e. a tree that connects all nodes while the sum of all edge costs is the minimum possible.
To make a long story short, with a realistic example:
Dijkstra wants to know the shortest path to each destination point by saving traveling time and fuel.
Prim wants to know how to efficiently deploy a train rail system i.e. saving material costs.
Directly from Dijkstra's Algorithm's wikipedia article:
The process that underlies Dijkstra's algorithm is similar to the greedy process used in Prim's algorithm. Prim's purpose is to find a minimum spanning tree that connects all nodes in the graph; Dijkstra is concerned with only two nodes. Prim's does not evaluate the total weight of the path from the starting node, only the individual path.
Here's what clicked for me: think about which vertex the algorithm takes next:
Prim's algorithm takes next the vertex that's closest to the tree, i.e. closest to some vertex anywhere on the tree.
Dijkstra's algorithm takes next the vertex that is closest to the source.
Source: R. Sedgewick's lecture on Dijkstra's algorithm, Algorithms, Part II: https://coursera.org/share/a551af98e24292b6445c82a2a5f16b18
I was bothered by the same question lately, and I think I'll share my understanding...
I think the key difference between these two algorithms (Dijkstra's and Prim's) is rooted in the problems they are designed to solve, namely the shortest path between two nodes and the minimal spanning tree (MST). The former is to find the shortest path between, say, nodes s and t, and a reasonable requirement is to visit each edge of the graph at most once. However, it does NOT require us to visit all the nodes. The latter (MST) requires us to visit ALL the nodes (at most once), with the same reasonable requirement of visiting each edge at most once.
That being said, Dijkstra allows us to "take shortcuts" as long as we can get from s to t, without worrying about the consequences: once we get to t, we are done! Although there is also a path from s to t in the MST, that s-t path is created with all the remaining nodes in mind, so it can be longer than the s-t path found by Dijkstra's algorithm. Below is a quick example with 3 nodes:
                2       2
        (s) o ----- o ----- o (t)
            |               |
            -----------------
                    3
Let's say each of the top edges has a cost of 2, and the bottom edge has a cost of 3. Then Dijkstra will tell us to take the bottom path, since we don't care about the middle node. On the other hand, Prim will return an MST with the top 2 edges, discarding the bottom edge.
Such a difference is also reflected in the subtle difference between the implementations: in Dijkstra's algorithm, one needs a bookkeeping step (for every node) to update the shortest path from s after absorbing a new node, whereas in Prim's algorithm there is no such need.
The simplest explanation is that in Prim's you don't have to specify a particular starting node (any vertex will do), but in Dijkstra's you need a starting node, because you have to find the shortest paths from that given node to all other nodes.
The key difference between the basic algorithms lies in their different edge-selection criteria. Generally, they both use a priority queue for selecting the next node, but they have different criteria for selecting the adjacent nodes of the node currently being processed: Prim's algorithm requires that the next adjacent node still be in the queue, while Dijkstra's algorithm does not:
    def dijkstra(g, s):
        q <- make_priority_queue(VERTEX.distance)
        for each vertex v in g.vertex:
            v.distance <- infinite
            v.predecessor <- nil
            q.add(v)
        s.distance <- 0
        while not q.is_empty:
            u <- q.extract_min()
            for each adjacent vertex v of u:
                ...

    def prim(g, s):
        q <- make_priority_queue(VERTEX.distance)
        for each vertex v in g.vertex:
            v.distance <- infinite
            v.predecessor <- nil
            q.add(v)
        s.distance <- 0
        while not q.is_empty:
            u <- q.extract_min()
            for each adjacent vertex v of u:
                if v in q and weight(u, v) < v.distance:   // <------- selection --------
                    ...
The calculation of vertex.distance is the second point of difference.
Dijkstra's algorithm is used only to find shortest paths.
With a minimum spanning tree (Prim's or Kruskal's algorithm) you get a minimum set of edges with minimum total edge weight.
For example: consider a situation where you want to build a huge wired network, for which you will be requiring a large amount of wire. This wire count can be done using a minimum spanning tree (Prim's or Kruskal's algorithm), i.e. it will give you the minimum amount of wire needed to create the huge wired network at minimum cost.
Whereas Dijkstra's algorithm will be used to get the shortest path between two nodes while connecting any nodes with each other.
Dijkstra's algorithm solves the single-source shortest path problem (e.g. from node i to node j), while Prim's algorithm solves the minimal spanning tree problem. Both algorithms use the programming concept known as the 'greedy algorithm'.
If you want to look into these notions, please visit:
Greedy algorithm lecture note : http://jeffe.cs.illinois.edu/teaching/algorithms/notes/07-greedy.pdf
Minimum spanning tree : http://jeffe.cs.illinois.edu/teaching/algorithms/notes/20-mst.pdf
Single source shortest path : http://jeffe.cs.illinois.edu/teaching/algorithms/notes/21-sssp.pdf
@templatetypedef has covered the difference between MST and shortest path. I've covered the algorithm difference in another SO answer by demonstrating that both can be implemented using the same generic algorithm that takes one more parameter as input: a function f(u,v). The difference between Prim's and Dijkstra's algorithm is simply which f(u,v) you use.
At the code level, the other difference is the API.
You initialize Prim with a source vertex s, i.e. Prim.new(s); s can be any vertex, and regardless of s the end result, the set of edges of the minimum spanning tree (MST), is the same. To get the MST edges, we call the method edges().
You initialize Dijkstra with the source vertex s, i.e. Dijkstra.new(s), from which you want the shortest path/distance to all other vertices. The end results, which are the shortest paths/distances from s to all other vertices, differ depending on s. To get the shortest path/distance from s to any vertex v, we call the methods pathTo(v) and distanceTo(v) respectively.
They both create trees using the greedy method.
With Prim's algorithm we find a minimum-cost spanning tree. The goal is to find the minimum cost to cover all nodes.
With Dijkstra's we find the single-source shortest paths. The goal is to find the shortest path from the source to every other node.
Prim's algorithm works exactly like Dijkstra's, except that:
It does not keep track of the distance from the source.
It stores the edge that connects the frontier of visited vertices to the next closest vertex.
The vertex used as the "source" for Prim's algorithm is going to be the root of the MST.
I was reading about Graph algorithms and I came across these two algorithms:
Dijkstra's algorithm
Breadth-first search
What is the difference between Dijkstra's algorithm and BFS while looking for the shortest-path between nodes?
I searched a lot about this but didn't get any satisfactory answer!
The rules for BFS for finding shortest paths in a graph are:
We discover all the connected vertices,
add them to the queue, and also
store the distance (weight/length) from source u to each such vertex v.
We update the path from source u to vertex v whenever we find one with a shorter distance, and we have it!
This is exactly the same thing we do in Dijkstra's algorithm!
So why are the time complexities of these algorithms so different?
If anyone can explain it with the help of a pseudo code then I will be
very grateful!
I know I am missing something! Please help!
Breadth-first search is just Dijkstra's algorithm with all edge weights equal to 1.
Dijkstra's algorithm is conceptually breadth-first search that respects edge costs.
The process for exploring the graph is structurally the same in both cases.
When using BFS for finding the shortest path in a graph, we discover all the connected vertices, add them to the queue and also maintain the distance from the source to each vertex. Now, if we find a path from the source to a vertex with a shorter distance, we update it!
We do not maintain distances in plain BFS; it is for the discovery of nodes. So we put them in a regular queue and pop them. Unlike in Dijkstra, where we put the accumulated weight of a node (after relaxation) in a priority queue and pop the minimum-distance entry.
So BFS works like Dijkstra on an equal-weight graph. The complexity differs because of the use of a simple queue versus a priority queue.
Dijkstra and BFS are essentially the same algorithm. As said by other members, Dijkstra uses a priority queue whereas BFS uses a plain queue. The difference comes from the way the shortest path is calculated in the two algorithms.
In the BFS-style algorithm, for finding the shortest path we traverse in all directions and update the distance array accordingly. Basically, the pseudo-code is as follows:
    distance[src] = 0;
    q.push(src);
    while (queue not empty) {
        pop the node at front (say u)
        for all its adjacent (say v)
            if dist[u] + weight < dist[v]
                update distance of v
                push v into queue
    }
The above code will also give the shortest paths in a weighted graph. But the time complexity is not equal to that of normal BFS, i.e. O(E+V); it is more than O(E+V), because many of the edges are processed more than once.
[Graph diagram]
Consider the graph above. Dry-run the above pseudo-code on it and you will find that node 2 and node 3 are pushed into the queue twice, and furthermore the distances of later nodes get updated twice.
[BFS traversal working]
So, if there were many more nodes after 3, the distances calculated after the first push of 2 would be used for all later nodes, and those distances would then be updated again after the second push of node 2. The same scenario occurs with 3.
So you can see that nodes are repeated. Hence, not every node and edge is processed only once.
Dijkstra's algorithm does something smart here: rather than exploring in all directions, it only expands in the direction of the shortest known distance, so repeated distance updates are avoided.
So, to get the shortest distances we have to use a priority_queue in place of the normal queue.
[Dijkstra algorithm working]
If you dry-run the above graph again using Dijkstra's algorithm, you will find that nodes may still be pushed twice, but only the entry with the shorter distance is processed.
So all nodes are settled only once, yet the time complexity is higher than that of normal BFS because of the use of the priority_queue.
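To make that contrast concrete, here is a small sketch of the priority-queue version (lazy deletion rather than decrease-key; the graph is assumed to be adjacency lists of (neighbor, weight) pairs):

    import heapq

    # Dijkstra: always expand the unfinished node with the smallest tentative
    # distance, so every node is settled exactly once (stale heap entries are skipped).
    def dijkstra(adj, src):                    # adj[u] = list of (v, weight)
        dist = {u: float("inf") for u in adj}
        dist[src] = 0
        pq = [(0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist[u]:
                continue                       # stale entry from an earlier push
            for v, w in adj[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(pq, (dist[v], v))
        return dist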
With the SPFA algorithm, you can get shortest paths with a normal queue in a weighted graph.
It is a variant of the Bellman-Ford algorithm, and it can also handle negative weights.
But on the downside, it has a worse worst-case time complexity than Dijkstra's.
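A minimal SPFA sketch (queue-based Bellman-Ford; correct with negative edge weights as long as no negative cycle is reachable from the source; same adjacency-list assumption as the previous sketch):

    from collections import deque

    # SPFA: relax edges out of nodes taken from a FIFO queue; a node is re-queued
    # only when its distance improves and it is not already waiting in the queue.
    def spfa(adj, src):                        # adj[u] = list of (v, weight)
        dist = {u: float("inf") for u in adj}
        dist[src] = 0
        in_queue = {u: False for u in adj}
        q = deque([src])
        in_queue[src] = True
        while q:
            u = q.popleft()
            in_queue[u] = False
            for v, w in adj[u]:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
                    if not in_queue[v]:
                        q.append(v)
                        in_queue[v] = True
        return dist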
Since you asked for pseudocode, this website has visualizations with pseudocode: https://visualgo.net/en/sssp