I'm having difficulty understanding the running time of Dijkstra's algorithm. Below is the pseudocode I was analyzing for the array version. Consider that V is the set of vertices and E the set of edges. If Q is an array, its initialization takes O(V), and the minimum extraction plus the for loop inside the while would take O(V) iterations of the while times O(V + E), so the result would be O(V²). Am I correct?
 1  function Dijkstra(Graph, source):
 2      dist[source] ← 0                          // Initialization
 3
 4      create vertex set Q
 5
 6      for each vertex v in Graph:
 7          if v ≠ source
 8              dist[v] ← INFINITY                // Unknown distance from source to v
 9              prev[v] ← UNDEFINED               // Predecessor of v
10
11          Q.add_with_priority(v, dist[v])
12
13
14      while Q is not empty:                     // The main loop
15          u ← Q.extract_min()                   // Remove and return best vertex
16          for each neighbor v of u:             // only v that is still in Q
17              alt ← dist[u] + length(u, v)
18              if alt < dist[v]
19                  dist[v] ← alt
20                  prev[v] ← u
21                  Q.decrease_priority(v, alt)
22
23      return dist[], prev[]
Below is the pseudocode I was analyzing for the priority queue version. Now, if Q is a priority queue, its initialization takes O(V), and the minimum extraction plus the for loop inside the while would take O(V) iterations of the while times O(1 + ElogV), so the result would be O(VElogV). Am I correct? Considering that in the worst case E = (V − 1), wouldn't the result then be O(V²logV)? Why is the value I find on the web O(ElogV)? And why is the priority queue version faster than the array version?
 1  function Dijkstra(Graph, source):
 2
 3      create vertex set Q
 4
 5      for each vertex v in Graph:               // Initialization
 6          dist[v] ← INFINITY                    // Unknown distance from source to v
 7          prev[v] ← UNDEFINED                   // Previous node in optimal path from source
 8          add v to Q                            // All nodes initially in Q (unvisited nodes)
 9
10      dist[source] ← 0                          // Distance from source to source
11
12      while Q is not empty:
13          u ← vertex in Q with min dist[u]      // Node with the least distance
14                                                // will be selected first
15          remove u from Q
16
17          for each neighbor v of u:             // where v is still in Q.
18              alt ← dist[u] + length(u, v)
19              if alt < dist[v]:                 // A shorter path to v has been found
20                  dist[v] ← alt
21                  prev[v] ← u
22
23      return dist[], prev[]
If Q is an array, its initialization takes O(V), and the minimum extraction plus the for loop inside the while would take O(V) iterations of the while times O(V + E), so the result would be O(V²). Am I correct?
If that were correct, the time complexity would be O(V² + VE), and O(VE) would dominate (since V <= E on a 'useful' graph), giving you O(VE) for the whole algorithm. But your analysis is NOT correct, which is why the resulting time complexity is O(V²).
You are correct that the min() operation inside the WHILE loop costs O(V) and thus O(V²) over the whole algorithm, but the FOR loop costs O(E) over the WHOLE run of the algorithm, not O(E) per iteration. This is because you remove each vertex from Q exactly once, and you inspect the outgoing edges of each removed vertex only at that moment. In other words, you inspect every edge once, thus O(E). The total is therefore O(V² + E) = O(V²).
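To make that concrete, here is a rough C++ sketch of the array-based version (the names dijkstra_array, INF, done and the adjacency-list type are mine, not part of the pseudocode): the linear scan for the minimum runs once per vertex, contributing O(V²), while the relaxation loop touches every edge only once over the whole run, contributing O(E).

#include <vector>
#include <limits>
using namespace std;

const long long INF = numeric_limits<long long>::max();

// Array-based Dijkstra. The linear scan for the minimum costs O(V) per
// iteration, O(V^2) in total; the relaxation loop below it costs O(E)
// summed over the whole run, because each vertex leaves "Q" exactly once
// and only then are its outgoing edges inspected.
vector<long long> dijkstra_array(int V,
                                 const vector<vector<pair<int, long long>>>& adj,
                                 int source) {
    vector<long long> dist(V, INF);
    vector<bool> done(V, false);            // done[v] == true once v has left Q
    dist[source] = 0;

    for (int iter = 0; iter < V; ++iter) {
        int u = -1;
        for (int v = 0; v < V; ++v)         // extract-min by linear scan: O(V)
            if (!done[v] && (u == -1 || dist[v] < dist[u]))
                u = v;
        if (u == -1 || dist[u] == INF)      // remaining vertices are unreachable
            break;
        done[u] = true;

        for (auto [v, w] : adj[u])          // all relaxations together: O(E)
            if (!done[v] && dist[u] + w < dist[v])
                dist[v] = dist[u] + w;
    }
    return dist;
}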
Now, if Q is a priority queue, its initialization takes O(V), and the minimum extraction plus the for loop inside the while would take O(V) iterations of the while times O(1 + ElogV), so the result would be O(VElogV). Am I correct?
No. The time complexity of the min() operation and of the FOR loop depends on what data structure is used as the priority queue. Assuming you are using a binary min-heap, each extract_min() takes O(logV) (so O(VlogV) in total), and the FOR loop performs at most E decrease-priority operations at O(logV) each, i.e. a TOTAL of O(ElogV), which dominates and becomes the total time complexity of the algorithm.
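For comparison, here is a hedged C++ sketch of the binary-heap version. Since std::priority_queue has no decrease_priority, it uses the common "lazy deletion" trick instead: push a fresh entry on every successful relaxation and skip stale entries when they are popped. At most one entry is pushed per edge (plus one for the source), so each push/pop costs O(log E) = O(log V) up to a constant, giving O(E log V) overall. The function and variable names are mine.

#include <queue>
#include <vector>
#include <limits>
using namespace std;

const long long INF = numeric_limits<long long>::max();

// Heap-based Dijkstra with lazy deletion: instead of decrease_priority,
// push a fresh (distance, vertex) entry on every successful relaxation and
// skip stale entries when they are popped. At most one entry per edge is
// pushed, so the total cost is O(E log V).
vector<long long> dijkstra_heap(int V,
                                const vector<vector<pair<int, long long>>>& adj,
                                int source) {
    vector<long long> dist(V, INF);
    dist[source] = 0;
    priority_queue<pair<long long, int>,
                   vector<pair<long long, int>>,
                   greater<>> pq;           // min-heap ordered by distance
    pq.push({0, source});

    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;          // stale entry: u was already finalized
        for (auto [v, w] : adj[u])
            if (dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
                pq.push({dist[v], v});
            }
    }
    return dist;
}

This also touches the last part of the question: O(ElogV) beats O(V²) only on sparse graphs. If E is close to V², the heap version costs O(V²logV) and the plain array scan is actually the better choice.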
Here's a link to another answer that explains how to analyze the time complexity of Dijkstra's algorithm depending on which data structure you use to implement the priority queue:
Complexity Of Dijkstra's algorithm
Given a graph like this one:
        A
       ^ ^
      /   \
     3     4
    /       \
   B ---5---> C

E = {(B, A), (C, A), (B, C)}
What happens if we run Dijkstra on node A?
A is initialized to 0, and B and C to infinity, but A doesn't point anywhere (it has no outgoing edges).
So do we then choose randomly between B and C? Or does the algorithm not work in that case?
Thanks!
Dijkstra will still run and give you the right answer for this graph. If you so choose, you can initialize the queue with just the start node and add or update neighbors to/in the queue as you explore them. In this case the algorithm will simply terminate after one iteration: it extracts A from the queue, explores its zero neighbors, appropriately leaves the distances to B and C as infinity (with no prev nodes), and leaves A's distance at zero. If you think about it, this is the desired answer, as there are no paths from A to B or C.
Or, if you implement it as in Wikipedia, adding every node to the queue at the start, it will still produce the same results.
 1  function Dijkstra(Graph, source):
 2      dist[source] ← 0                          // Initialization
 3
 4      create vertex priority queue Q
 5
 6      for each vertex v in Graph:
 7          if v ≠ source
 8              dist[v] ← INFINITY                // Unknown distance from source to v
 9              prev[v] ← UNDEFINED               // Predecessor of v
10
11          Q.add_with_priority(v, dist[v])
12
13
14      while Q is not empty:                     // The main loop
15          u ← Q.extract_min()                   // Remove and return best vertex
16          for each neighbor v of u:             // only v that are still in Q
17              alt ← dist[u] + length(u, v)
18              if alt < dist[v]
19                  dist[v] ← alt
20                  prev[v] ← u
21                  Q.decrease_priority(v, alt)
22
23      return dist, prev
After extracting A and exploring its nonexistent neighbors, nothing is updated. It will then arbitrarily choose between B and C to extract next, as they have the same distance (not 'randomly' of course, just depending on how you initialize/extract from your queue).
When it checks B, it will see it can get to C in Infinity + 5, no better than the current distance to C of Infinity, so nothing updates; and to A in Infinity + 3, no better than A's current distance of 0.
When it checks C, it will see it can get to A in Infinity + 4, not better than the current distance to A of 0, so nothing updates.
Then the queue is empty and the same result of dist[A] = 0, dist[B] = dist[C] = Infinity is returned.
So a correct implementation of Dijkstra will be able to handle such a graph (as it should any directed graph with non-negative weights).
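If you want to see this behaviour concretely, here is a small self-contained C++ program (the 0/1/2 numbering of A/B/C and the lazy-deletion queue are my own choices, not from the question) that builds exactly this graph, runs Dijkstra from A, and prints the distances; B and C come out as infinity, as described above.

#include <iostream>
#include <limits>
#include <queue>
#include <vector>
using namespace std;

int main() {
    const long long INF = numeric_limits<long long>::max();
    // Vertices: 0 = A, 1 = B, 2 = C. Directed edges: B->A (3), C->A (4), B->C (5).
    vector<vector<pair<int, long long>>> adj(3);
    adj[1].push_back({0, 3});
    adj[2].push_back({0, 4});
    adj[1].push_back({2, 5});

    int source = 0;                         // run Dijkstra from A
    vector<long long> dist(3, INF);
    dist[source] = 0;
    priority_queue<pair<long long, int>,
                   vector<pair<long long, int>>, greater<>> pq;
    pq.push({0, source});
    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;          // skip stale queue entries
        for (auto [v, w] : adj[u])
            if (d + w < dist[v]) {
                dist[v] = d + w;
                pq.push({dist[v], v});
            }
    }

    // Prints dist[A] = 0, dist[B] = INF, dist[C] = INF: A has no outgoing edges.
    for (int v = 0; v < 3; ++v) {
        cout << "dist[" << "ABC"[v] << "] = ";
        if (dist[v] == INF) cout << "INF\n";
        else cout << dist[v] << "\n";
    }
    return 0;
}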
As I see it, Dijkstra's and Prim's algorithms are almost the same. Here is the pseudocode from Wikipedia; I'll explain the point of my confusion.
 1  function Dijkstra(Graph, source):
 2      dist[source] ← 0                          // Initialization
 3
 4      create vertex set Q
 5
 6      for each vertex v in Graph:
 7          if v ≠ source
 8              dist[v] ← INFINITY                // Unknown distance from source to v
 9              prev[v] ← UNDEFINED               // Predecessor of v
10
11          Q.add_with_priority(v, dist[v])
12
13
14      while Q is not empty:                     // The main loop
15          u ← Q.extract_min()                   // Remove and return best vertex
16          for each neighbor v of u:             // only v that is still in Q
17              alt ← dist[u] + length(u, v)
18              if alt < dist[v]
19                  dist[v] ← alt
20                  prev[v] ← u
21                  Q.decrease_priority(v, alt)
22
23      return dist[], prev[]
Prim's algorithm is almost the same; for convenience, I'll just show the changed loop, which starts on line 14.
14      while Q is not empty:
15          u ← Q.extract_min()
16          for each neighbor v of u:
17              if v ∈ Q and length(u, v) < cost[v]
18                  cost[v] ← length(u, v)
19                  prev[v] ← u
20                  Q.decrease_priority(v, length(u, v))
There are two changes. The first is replacing dist[] with cost[], and as I understand it, this is related to the fact that the algorithms solve different problems.
The second one is obscure to me, namely the absence of the condition if v ∈ Q in Dijkstra's algorithm. I don't really get why we CAN return to the set of visited vertices in Prim's algorithm, while this CANNOT happen in Dijkstra's algorithm.
In Dijkstra, we compute alt ← dist[u] + length(u, v) and set dist[v] to alt if alt is smaller than the current value of dist[v]. Here alt represents the distance from the start node to v if we go via u. However, u is the node that was just taken out of Q, so its distance from the start node is greater than or equal to that of every node previously taken out of Q. Because Dijkstra requires all edge weights to be nonnegative, alt = dist[u] + length(u, v) is guaranteed to be greater than or equal to dist[v] whenever v is not in Q, and so it won't pass the condition in the if. In other words, if v is not in Q, going via u would be a detour relative to the path we already have to v, so the explicit v ∈ Q check is unnecessary.
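A small side-by-side sketch may help (the inQ array and the function names are mine, just for illustration): in Prim's relaxation the candidate value is the weight of a single edge, so without the inQ check a cheap edge could "improve" a vertex that is already in the tree, whereas in Dijkstra's relaxation the candidate is dist[u] + w, which with nonnegative weights can never beat the final distance of a vertex that has already left Q.

#include <vector>
using namespace std;

// Dijkstra's relaxation: for a v that has already left Q we know
// dist[v] <= dist[u], and with w >= 0 we get dist[u] + w >= dist[v],
// so the 'if' fails on its own; no explicit "v in Q" check is needed.
void relax_dijkstra(int u, int v, long long w,
                    vector<long long>& dist, vector<int>& prev) {
    if (dist[u] + w < dist[v]) {
        dist[v] = dist[u] + w;
        prev[v] = u;
    }
}

// Prim's relaxation: the candidate value is the weight of a single edge,
// so a cheap edge (u, v) could "improve" a vertex already in the tree.
// The inQ check is what rules that out, and it cannot be dropped.
void relax_prim(int u, int v, long long w, const vector<bool>& inQ,
                vector<long long>& cost, vector<int>& prev) {
    if (inQ[v] && w < cost[v]) {
        cost[v] = w;
        prev[v] = u;
    }
}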
Not sure if I got your idea right. For both Dijkstra's and Prim's algorithms, we should only deal with the vertices that are still in Q.
For Dijkstra's algorithm, the pseudocode may not explicitly check whether the current vertex is still in Q, but it is commented as "only v that is still in Q":
for each neighbor v of u: // only v that is still in Q
I assume this means the same thing as x ∈ Q in
17 if x ∈ Q and length(u, v) < cost[v]
where the x here represents the v in line 16.
Dijkstra's and Prim's algorithms are very similar.
The difference is:
Prim's algorithm: Closest vertex to a minimum spanning tree via an undirected edge
Dijkstra's algorithm: Closest vertex to a source via a directed path
Source: Algorithms by Sedgewick & Wayne
I'm looking at Dijkstra's algorithm in pseudocode on Wikipedia
 1  function Dijkstra(Graph, source):
 2
 3      create vertex set Q
 4
 5      for each vertex v in Graph:               // Initialization
 6          dist[v] ← INFINITY                    // Unknown distance from source to v
 7          prev[v] ← UNDEFINED                   // Previous node in optimal path from source
 8          add v to Q                            // All nodes initially in Q (unvisited nodes)
 9
10      dist[source] ← 0                          // Distance from source to source
11
12      while Q is not empty:
13          u ← vertex in Q with min dist[u]      // Source node will be selected first
14          remove u from Q
15
16          for each neighbor v of u:             // where v is still in Q.
17              alt ← dist[u] + length(u, v)
18              if alt < dist[v]:                 // A shorter path to v has been found
19                  dist[v] ← alt
20                  prev[v] ← u
21
22      return dist[], prev[]
https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
and the part that's confusing me is line 16. It says for each neighbor, but shouldn't that be for each child (i.e. for each neighbor where neighbor != parent)? Otherwise I don't see the point of setting the parent in line 20.
The previous node is set on line 20:
prev[v] ← u
This can only happen if line 14 is executed:
remove u from Q
So, for any v, prev[v] cannot be in Q: it was previously removed, and it will never return to Q (inside the loop starting at line 12, items are never added back to Q). This is the same as saying that for any u, prev[u] cannot be in Q; aside from the name of the variable, it says the same thing.
In the question you say that about line 16:
it says for each neighbor
But, if you look at the pseudocode, it actually says
for each neighbor v of u: // where v is still in Q.
So, prev[u] will not be iterated over - it's not in Q.
For what it's worth, I think the pseudocode is a bit sloppy and confusing: // where v is still in Q should not be a comment. It doesn't clarify or explain the rest of the code; it alters the meaning, and should be part of the code. Perhaps that confused you.
Ultimately, Dijkstra's algorithm computes something called a shortest-path tree, a tree structure rooted at some starting node where the paths in the tree give the shortest paths to each node in the graph. The logic you're seeing that sets the parent of each node is the part of Dijkstra's algorithm that builds the tree one node at a time.
Although Dijkstra's algorithm builds the shortest-path tree, it doesn't walk over it. Rather, it works by processing the nodes of the original graph in a particular order, constantly updating candidate distances of nodes adjacent to already processed nodes. This means that in the pseudocode, the logic that says "loop over all the adjacent nodes" is correct, because it means "loop over all the adjacent nodes in the original graph." It wouldn't work to iterate over the child nodes in the generated shortest-path tree, because that tree hasn't been completely assembled at that point in the algorithm.
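For completeness, here is a short hedged C++ sketch of how the shortest-path tree stored in prev[] is typically used once Dijkstra has finished: an individual path is recovered by walking prev[] backwards from the target (representing UNDEFINED as -1 is my own convention).

#include <algorithm>
#include <vector>
using namespace std;

// Recover the path source -> target from the prev[] array produced by
// Dijkstra (prev[v] == -1 stands for UNDEFINED). Returns an empty vector
// if target is unreachable from source.
vector<int> build_path(const vector<int>& prev, int source, int target) {
    vector<int> path;
    for (int v = target; v != -1; v = prev[v])  // walk the tree towards the root
        path.push_back(v);
    if (path.back() != source)                  // never reached the root: no path
        return {};
    reverse(path.begin(), path.end());          // we collected it back-to-front
    return path;
}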
Consider an undirected graph containing N nodes and M edges. Each edge Mi has an integer cost, Ci, associated with it.
The penalty of a path is the bitwise OR of every edge cost in the path between a pair of nodes, A and B. In other words, if a path contains edges M1,M2,...,Mk then the penalty for this path is C1 OR C2 OR ... OR Ck.
Given a graph and two nodes, A and B, find the path between A and B having the minimal possible penalty and print its penalty; if no such path exists, print −1 to indicate that there is no path from A to B.
Note: Loops and multiple edges are allowed.
Constraints:
1 ≤ N ≤ 10^3
1 ≤ M ≤ 10^3
1 ≤ Ci < 1024
1 ≤ Ui, Vi ≤ N
1 ≤ A, B ≤ N
A ≠ B
This question was asked in a contest that is now over. I went through the tutorial but could not get it. Can anyone explain how to proceed, or give the answer?
It can be solved using dynamic programming, following the recursive formula:
D(s, 0) = true
D(v, i) = D(v, i) OR { D(u, j) | (u, v) is an edge and (j OR c(u, v)) = i }
where s is the source node and the OR inside the set condition is the bitwise OR.
The idea is that D(v, i) == true if and only if there is a path from s to v whose penalty (the bitwise OR of its edge costs) is exactly i.
Now, you iteratively update this table until it converges (which happens after at most n iterations).
This is basically a variant of the Bellman-Ford algorithm.
When you are done filling the DP table, the minimal penalty is min { x | D(t, x) = true } (where t is the target node).
Time complexity is O(m*n*log_2(R)), where R is the maximal weight allowed (1024 in your case).
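Here is a rough C++ sketch of that DP, written as a Bellman-Ford-style relaxation over (vertex, OR-value) states; the table has N × 1024 boolean entries. The function name, the edge representation and the 0-based vertex numbering are my own choices, not from the contest.

#include <array>
#include <vector>
using namespace std;

// DP over (vertex, OR-value) states. edges[i] = {u, v, c} with 0-based
// vertices; every cost is < 1024, so 1024 columns cover all OR values.
// Returns the minimal OR penalty from s to t, or -1 if t is unreachable.
int minOrPenalty(int n, const vector<array<int, 3>>& edges, int s, int t) {
    const int R = 1024;
    vector<vector<char>> D(n, vector<char>(R, 0)); // D[v][i]: path s->v with OR == i?
    D[s][0] = 1;

    bool changed = true;
    while (changed) {                              // relax until no new state appears
        changed = false;
        for (auto [u, v, c] : edges)
            for (int i = 0; i < R; ++i) {
                if (D[u][i] && !D[v][i | c]) { D[v][i | c] = 1; changed = true; }
                if (D[v][i] && !D[u][i | c]) { D[u][i | c] = 1; changed = true; } // undirected
            }
    }

    for (int i = 0; i < R; ++i)
        if (D[t][i]) return i;                     // smallest reachable OR value
    return -1;
}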
What you are looking for is Dijkstra's Algorithm. Rather than adding the weight for each node, you should be ORing it.
So, the pseudo-code would be as follows (modified from the wikipedia example):
 1  function Dijkstra(Graph, source):
 2
 3      create vertex set Q
 4
 5      for each vertex v in Graph:               // Initialization
 6          dist[v] ← INFINITY                    // Unknown distance from source to v
 7          prev[v] ← UNDEFINED                   // Previous node in optimal path from source
 8          add v to Q                            // All nodes initially in Q (unvisited nodes)
 9
10      dist[source] ← 0                          // Distance from source to source
11
12      while Q is not empty:
13          u ← vertex in Q with min dist[u]      // Source node will be selected first
14          remove u from Q
15
16          for each neighbor v of u:             // where v is still in Q.
17              alt ← dist[u] OR length(u, v)
18              if alt < dist[v]:                 // A shorter path to v has been found
19                  dist[v] ← alt
20                  prev[v] ← u
21
22      return dist[], prev[]
Note the OR on line 17.
// Another approach: BFS over (node, accumulated OR value) states.
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
typedef pair<ll, ll> pr;

vector<pr> adj[10005];
// visited[u][c] = true once node u has been reached with accumulated OR value c.
// Costs are < 1024, so only 1024 columns are needed.
bool visited[10005][1024];

int main() {
    ll n, m;
    scanf("%lld%lld", &n, &m);
    for (ll i = 1; i <= m; i++) {
        ll u, v, w;
        scanf("%lld%lld%lld", &u, &v, &w);
        adj[u].push_back(make_pair(v, w));
        adj[v].push_back(make_pair(u, w));   // undirected graph
    }
    ll source, destination;
    scanf("%lld%lld", &source, &destination);

    // BFS over (node, OR value) states; there are at most n * 1024 of them.
    queue<pr> bfsq;
    bfsq.push(make_pair(source, 0));         // start at the source with OR value 0
    visited[source][0] = true;
    while (!bfsq.empty()) {
        ll u = bfsq.front().first;
        ll cost = bfsq.front().second;
        bfsq.pop();
        for (size_t i = 0; i < adj[u].size(); i++) {
            ll v = adj[u][i].first;          // neighbor of u
            ll w2 = adj[u][i].second;        // cost of the edge (u, v)
            if (!visited[v][w2 | cost]) {    // new (node, OR value) state
                visited[v][w2 | cost] = true;
                bfsq.push(make_pair(v, w2 | cost));
            }
        }
    }

    // The answer is the smallest OR value with which the destination was reached.
    ll ans = -1LL;
    for (ll i = 0; i < 1024; i++) {
        if (visited[destination][i]) {
            ans = i;
            break;
        }
    }
    printf("%lld\n", ans);
    return 0;
}
On the Wikipedia page on Dijkstra, I am told that if the destination is known, I can terminate the search after line 13. I don't get this: how do I terminate the search after line 13?
 1  function Dijkstra(Graph, source):
 2
 3      dist[source] ← 0                          // Distance from source to source
 4      prev[source] ← undefined                  // Previous node in optimal path initialization
 5
 6      create vertex set Q
 7
 8      for each vertex v in Graph:               // Initialization
 9          if v ≠ source:                        // v has not yet been removed from Q (unvisited nodes)
10              dist[v] ← INFINITY                // Unknown distance from source to v
11              prev[v] ← UNDEFINED               // Previous node in optimal path from source
12          add v to Q                            // All nodes initially in Q (unvisited nodes)
13
14      while Q is not empty:
15          u ← vertex in Q with min dist[u]      // Source node in the first case
16          remove u from Q
17
18          for each neighbor v of u:             // where v is still in Q.
19              alt ← dist[u] + length(u, v)
20              if alt < dist[v]:                 // A shorter path to v has been found
21                  dist[v] ← alt
22                  prev[v] ← u
23
24      return dist[], prev[]
Wikipedia is wrong; the right line is 16 in that algorithm. An edit probably spoiled the line numbers or the paragraph below them.
What the paragraph means is that if you're only interested in the shortest path from the source vertex to a particular destination vertex, you can safely get out of the loop as soon as that destination is extracted, since any other path to it would have a higher cost. Pseudocode follows:
 8      for each vertex v in Graph:               // Initialization
 9          if v ≠ source:                        // v has not yet been removed from Q (unvisited nodes)
10              dist[v] ← INFINITY                // Unknown distance from source to v
11              prev[v] ← UNDEFINED               // Previous node in optimal path from source
12          add v to Q                            // All nodes initially in Q (unvisited nodes)
13
14      while Q is not empty:
15          u ← vertex in Q with min dist[u]      // Source node in the first case
16          remove u from Q
17
(extra)     if u == destination_point then break loop
When you extract the end point (destination_point above), you can safely skip the update step where adjacent vertices get shorter paths: you've already found your destination. To reconstruct the path, simply reverse-walk the prev array from destination_point back to the source.
More details and a C++ example here
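For reference, here is a hedged C++ sketch of that early exit (not the code behind the link, just an illustration): with a lazy-deletion heap, the loop can return as soon as the destination is popped, because its distance is final at that moment, and the path is then read off prev[] in reverse. All names are mine.

#include <algorithm>
#include <limits>
#include <queue>
#include <vector>
using namespace std;

const long long INF = numeric_limits<long long>::max();

// Single-pair Dijkstra with early exit: stop as soon as 'target' is popped,
// because at that moment dist[target] is final. Returns the path
// source -> target, or an empty vector if target is unreachable.
vector<int> shortestPath(int V,
                         const vector<vector<pair<int, long long>>>& adj,
                         int source, int target) {
    vector<long long> dist(V, INF);
    vector<int> prev(V, -1);
    dist[source] = 0;
    priority_queue<pair<long long, int>,
                   vector<pair<long long, int>>, greater<>> pq;
    pq.push({0, source});

    while (!pq.empty()) {
        auto [d, u] = pq.top();
        pq.pop();
        if (d > dist[u]) continue;          // stale entry
        if (u == target) break;             // early exit: dist[target] is final
        for (auto [v, w] : adj[u])
            if (d + w < dist[v]) {
                dist[v] = d + w;
                prev[v] = u;
                pq.push({dist[v], v});
            }
    }

    if (dist[target] == INF) return {};     // destination never reached
    vector<int> path;
    for (int v = target; v != -1; v = prev[v])
        path.push_back(v);
    reverse(path.begin(), path.end());      // reverse-walk prev[] back to source
    return path;
}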
You need to add some code after line 17:
 1  function Dijkstra(Graph, source, destination):
 2
 3      dist[source] ← 0                          // Distance from source to source
 4      prev[source] ← undefined                  // Previous node in optimal path initialization
 5
 6      create vertex set Q
 7
 8      for each vertex v in Graph:               // Initialization
 9          if v ≠ source:                        // v has not yet been removed from Q (unvisited nodes)
10              dist[v] ← INFINITY                // Unknown distance from source to v
11              prev[v] ← UNDEFINED               // Previous node in optimal path from source
12          add v to Q                            // All nodes initially in Q (unvisited nodes)
13
14      while Q is not empty:
15          u ← vertex in Q with min dist[u]      // Source node in the first case
16          remove u from Q
17
18          if u = destination:
19              return dist[], prev[]             // Valid only for the destination vertex and vertices with lower distance (already removed from Q)
20
21          for each neighbor v of u:             // where v is still in Q.
22              alt ← dist[u] + length(u, v)
23              if alt < dist[v]:                 // A shorter path to v has been found
24                  dist[v] ← alt
25                  prev[v] ← u
26
27      return dist[], prev[]