Consider an undirected graph containing N nodes and M edges. Each edge Mi has an integer cost, Ci, associated with it.
The penalty of a path is the bitwise OR of every edge cost in the path between a pair of nodes, A and B. In other words, if a path contains edges M1,M2,...,Mk then the penalty for this path is C1 OR C2 OR ... OR Ck.
Given a graph and two nodes, A and B, find the path between A and B having the minimal possible penalty and print its penalty; if no such path exists, print −1 to indicate that there is no path from A to B.
Note: Loops and multiple edges are allowed.
Constraints:
1 ≤ N ≤ 10^3
1 ≤ M ≤ 10^3
1 ≤ Ci < 1024
1 ≤ Ui, Vi ≤ N
1 ≤ A, B ≤ N
A ≠ B
This question was asked in a contest that is now over. I went through the tutorial but could not understand it. Can anyone explain the approach or show how to proceed?
It can be solved using dynamic programming with the following recurrence:
D(s, 0) = true
D(v, i) = D(v, i) OR { D(u, j) | (u, v) is an edge and (j OR c(u, v)) = i }
where s is the source node and the right-hand side is true if any term in the set is true.
The idea is that D(v, i) == true if and only if there is a path from s to v with penalty exactly i.
Now, you iteratively apply this update over the whole graph until the table stops changing, which takes at most n iterations.
This is basically a variant of the Bellman-Ford algorithm.
Once the DP table is complete, the minimal penalty is min { x | D(t, x) = true }, where t is the target node.
Time complexity is O(m*n*log_2(R)), where R is the maximal weight allowed (1024 in your case).
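For concreteness, here is a minimal C++ sketch of this recurrence (my own illustration, not the contest's reference solution; the input format and variable names are assumptions). It relaxes every edge in both directions until the table stops changing, in the Bellman-Ford spirit described above.

#include <bits/stdc++.h>
using namespace std;

// D[v][i] == true  <=>  there is a path from the source s to v whose
// bitwise OR of edge costs is exactly i (the D(v, i) of the recurrence).
int main() {
    int n, m;
    cin >> n >> m;
    vector<tuple<int,int,int>> edges(m);                 // (u, v, cost)
    for (auto &[u, v, c] : edges) cin >> u >> v >> c;
    int a, b;
    cin >> a >> b;

    const int R = 1024;                                  // costs are < 1024
    vector<vector<bool>> D(n + 1, vector<bool>(R, false));
    D[a][0] = true;                                      // base case: D(s, 0) = true

    bool changed = true;
    while (changed) {                                    // iterate until convergence
        changed = false;
        for (auto [u, v, c] : edges) {                   // relax both directions (undirected)
            for (int j = 0; j < R; j++) {
                if (D[u][j] && !D[v][j | c]) { D[v][j | c] = true; changed = true; }
                if (D[v][j] && !D[u][j | c]) { D[u][j | c] = true; changed = true; }
            }
        }
    }

    int ans = -1;
    for (int i = 0; i < R; i++)
        if (D[b][i]) { ans = i; break; }                 // smallest achievable penalty
    cout << ans << "\n";
    return 0;
}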
What you are looking for is Dijkstra's algorithm. Rather than adding each edge's weight to the running distance, you should be ORing it in.
So, the pseudo-code would be as follows (modified from the Wikipedia example):
 1  function Dijkstra(Graph, source):
 2
 3      create vertex set Q
 4
 5      for each vertex v in Graph:             // Initialization
 6          dist[v] ← INFINITY                  // Unknown distance from source to v
 7          prev[v] ← UNDEFINED                 // Previous node in optimal path from source
 8          add v to Q                          // All nodes initially in Q (unvisited nodes)
 9
10      dist[source] ← 0                        // Distance from source to source
11
12      while Q is not empty:
13          u ← vertex in Q with min dist[u]    // Source node will be selected first
14          remove u from Q
15
16          for each neighbor v of u:           // where v is still in Q.
17              alt ← dist[u] OR length(u, v)
18              if alt < dist[v]:               // A shorter path to v has been found
19                  dist[v] ← alt
20                  prev[v] ← u
21
22      return dist[], prev[]
Note the OR on line 17.
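For reference, a minimal C++ sketch of the modification described in this answer (my own code, assuming the same input format as in the question): it is the textbook priority-queue Dijkstra with dist[u] | w in place of dist[u] + w.

#include <bits/stdc++.h>
using namespace std;

// Dijkstra where the path "length" is the bitwise OR of edge costs:
// the relaxation on line 17 above becomes alt = dist[u] | w.
int main() {
    int n, m;
    cin >> n >> m;
    vector<vector<pair<int,int>>> adj(n + 1);            // adj[u] = {(v, cost), ...}
    for (int i = 0; i < m; i++) {
        int u, v, c;
        cin >> u >> v >> c;
        adj[u].push_back({v, c});
        adj[v].push_back({u, c});
    }
    int a, b;
    cin >> a >> b;

    const int INF = INT_MAX;
    vector<int> dist(n + 1, INF);
    priority_queue<pair<int,int>, vector<pair<int,int>>, greater<pair<int,int>>> pq;
    dist[a] = 0;
    pq.push({0, a});
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d != dist[u]) continue;                      // stale queue entry, skip
        for (auto [v, w] : adj[u]) {
            int alt = d | w;                             // OR instead of +
            if (alt < dist[v]) {
                dist[v] = alt;
                pq.push({alt, v});
            }
        }
    }
    cout << (dist[b] == INF ? -1 : dist[b]) << "\n";
    return 0;
}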
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;
typedef pair<ll, ll> pr;

vector<pr> adj[1005];          // adjacency list: adj[u] = {(v, cost), ...}, N <= 10^3
bool visited[1005][1024];      // visited[v][c] = reached v with OR-penalty c (costs < 1024)

int main() {
    ll n, m;
    scanf("%lld%lld", &n, &m);
    for (ll i = 1; i <= m; i++) {
        ll u, v, w;
        scanf("%lld%lld%lld", &u, &v, &w);
        adj[u].push_back(make_pair(v, w));
        adj[v].push_back(make_pair(u, w));
    }
    ll source, destination;
    scanf("%lld%lld", &source, &destination);

    // BFS over (node, accumulated OR) states
    queue<ll> bfsq;
    bfsq.push(source);                        // source into queue
    bfsq.push(0);                             // with penalty 0
    visited[source][0] = true;
    while (!bfsq.empty()) {
        ll u = bfsq.front(); bfsq.pop();      // current node
        ll cost = bfsq.front(); bfsq.pop();   // OR of edge costs on the path so far
        for (ll i = 0; i < (ll)adj[u].size(); i++) {
            ll v = adj[u][i].first;           // neighbor of u
            ll w2 = adj[u][i].second;         // cost of the edge u-v
            if (!visited[v][w2 | cost]) {
                visited[v][w2 | cost] = true;
                bfsq.push(v);
                bfsq.push(w2 | cost);
            }
        }
    }

    ll ans = -1LL;
    for (ll i = 0; i < 1024; i++) {           // smallest penalty that reaches the destination
        if (visited[destination][i]) {
            ans = i;
            break;
        }
    }
    printf("%lld\n", ans);
    return 0;
}
Given a graph like this one:
     A
    ^ ^
   /   \
  3     4
 /       \
B -- 5 -> C

E = {(B,A), (C,A), (B,C)}
What happens if we run Dijkstra on node A?
A is initialized to 0 and B and C to infinity, but A doesn't point anywhere.
So do we then choose randomly between B and C? Or does the algorithm not work in that case?
Thanks!
Dijkstra will still run and give you the right answer for this graph. If you so choose, you can initialize the queue with just the start node and add or update neighbors in the queue as you explore them. In this case the algorithm will terminate after one iteration: it extracts A from the queue, explores its zero neighbors, appropriately leaves the distances to B and C as infinity (with no prev nodes), and leaves A's own distance at zero. If you think about it, this is the desired answer, as there are no paths from A to B or C.
Or, if you implement it as in Wikipedia, adding every node to the queue at the start, it will still produce the same results.
function Dijkstra(Graph, source):
    dist[source] ← 0                       // Initialization

    create vertex priority queue Q

    for each vertex v in Graph:
        if v ≠ source
            dist[v] ← INFINITY             // Unknown distance from source to v
            prev[v] ← UNDEFINED            // Predecessor of v

        Q.add_with_priority(v, dist[v])

    while Q is not empty:                  // The main loop
        u ← Q.extract_min()                // Remove and return best vertex
        for each neighbor v of u:          // only v that are still in Q
            alt ← dist[u] + length(u, v)
            if alt < dist[v]
                dist[v] ← alt
                prev[v] ← u
                Q.decrease_priority(v, alt)

    return dist, prev
After extracting A and exploring its nonexistent neighbors, nothing is updated. It will then arbitrarily choose between B and C to extract next, as they have the same distance (not 'randomly' of course, just depending on how you initialize/extract from your queue).
When it checks B, it will see it can get to C in Infinity + 5, which is no better than the current distance to C of Infinity, so nothing updates; and to A in Infinity + 3, which is no better than A's current distance of 0.
When it checks C, it will see it can get to A in Infinity + 4, which is no better than the current distance to A of 0, so nothing updates.
Then the queue is empty and the same result of dist[A] = 0, dist[B] = dist[C] = Infinity is returned.
So a correct implementation of Dijkstra will be able to handle such a graph (as it should any directed graph with non-negative weights).
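To see this concretely, here is a small self-contained C++ sketch (my own illustration of the behaviour described above, with made-up node labels): it runs a lazily-initialized Dijkstra from A on the graph with directed edges B->A (3), C->A (4), B->C (5) and prints the resulting distances.

#include <bits/stdc++.h>
using namespace std;

// The example graph from the question: directed edges B->A (3), C->A (4), B->C (5).
// Running Dijkstra from A pops A, finds no outgoing edges, and stops.
int main() {
    enum { A, B, C, N };
    vector<vector<pair<int,int>>> adj(N);                // adj[u] = {(v, weight), ...}
    adj[B] = {{A, 3}, {C, 5}};
    adj[C] = {{A, 4}};

    const int INF = INT_MAX;
    vector<int> dist(N, INF);
    priority_queue<pair<int,int>, vector<pair<int,int>>, greater<pair<int,int>>> pq;
    dist[A] = 0;
    pq.push({0, A});                                     // only the source needs to be queued
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d != dist[u]) continue;                      // skip stale queue entries
        for (auto [v, w] : adj[u])
            if (d + w < dist[v]) {
                dist[v] = d + w;
                pq.push({dist[v], v});
            }
    }

    const char *name = "ABC";
    for (int v = 0; v < N; v++)
        cout << name[v] << ": "
             << (dist[v] == INF ? string("Infinity") : to_string(dist[v])) << "\n";
    // Prints: A: 0, B: Infinity, C: Infinity -- there is no path from A to B or C.
    return 0;
}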
As I see it, Dijkstra's and Prim's algorithms are almost the same. Here is the pseudocode from Wikipedia; I'll explain the point of my confusion.
 1  function Dijkstra(Graph, source):
 2      dist[source] ← 0                       // Initialization
 3
 4      create vertex set Q
 5
 6      for each vertex v in Graph:
 7          if v ≠ source
 8              dist[v] ← INFINITY             // Unknown distance from source to v
 9              prev[v] ← UNDEFINED            // Predecessor of v
10
11          Q.add_with_priority(v, dist[v])
12
13
14      while Q is not empty:                  // The main loop
15          u ← Q.extract_min()                // Remove and return best vertex
16          for each neighbor v of u:          // only v that is still in Q
17              alt ← dist[u] + length(u, v)
18              if alt < dist[v]
19                  dist[v] ← alt
20                  prev[v] ← u
21                  Q.decrease_priority(v, alt)
22
23      return dist[], prev[]
Prim's algorithm is almost the same; for convenience, I'll just show the loop that starts at line 14:
14      while Q is not empty:
15          u ← Q.extract_min()
16          for each neighbor v of u:
17              if v ∈ Q and length(u, v) < cost[v]
18                  cost[v] ← length(u, v)
19                  prev[v] ← u
20                  Q.decrease_priority(v, length(u, v))
There are two changes. The first is replacing dist[] with cost[], which, as I understand it, is related to the fact that the algorithms solve different problems.
The second one is obscure to me, namely the absence of the if v ∈ Q condition in Dijkstra's algorithm. I don't really get why we CAN return to the set of visited vertices in Prim's algorithm but this CANNOT happen in Dijkstra's algorithm.
In Dijkstra, we compute alt ← dist[u] + length(u, v) and set dist[v] to alt if alt is smaller than the current value of dist[v]. alt represents the distance from the start node to v if we go via u. However, u is the node that was just taken out of Q, so its distance from the start node is greater than or equal to that of every node previously taken out of Q. Because Dijkstra requires all edge weights to be non-negative, alt, being the sum of dist[u] and length(u, v), is guaranteed to be greater than or equal to dist[v] whenever v is not in Q, so it can never pass the if condition. In other words, if v is not in Q, going via u would be a detour relative to the path we already have to v.
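To make this concrete, here is a short lazy-deletion Dijkstra in C++ (my own sketch, not from either pseudocode): the relaxation has no explicit "v ∈ Q" test, and with non-negative weights the if (alt < dist[v]) check can never succeed for a vertex that has already been finalized.

#include <bits/stdc++.h>
using namespace std;

// Lazy-deletion Dijkstra. There is no explicit "v in Q" test: with non-negative
// edge weights, a finalized vertex v has dist[v] <= dist[u] <= dist[u] + w,
// so "alt < dist[v]" can never hold for it and no harm is done by trying.
vector<long long> dijkstra(int n, const vector<vector<pair<int,int>>> &adj, int source) {
    const long long INF = LLONG_MAX;
    vector<long long> dist(n, INF);
    priority_queue<pair<long long,int>, vector<pair<long long,int>>,
                   greater<pair<long long,int>>> pq;
    dist[source] = 0;
    pq.push({0, source});
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d != dist[u]) continue;              // u was already finalized with a better distance
        for (auto [v, w] : adj[u]) {
            long long alt = d + w;               // alt = dist[u] + length(u, v)
            if (alt < dist[v]) {                 // never true for an already-finalized v
                dist[v] = alt;
                pq.push({alt, v});
            }
        }
    }
    return dist;
}

int main() {
    // Tiny example: edges 0->1 (2), 0->2 (5), 1->2 (1).
    vector<vector<pair<int,int>>> adj(3);
    adj[0] = {{1, 2}, {2, 5}};
    adj[1] = {{2, 1}};
    for (long long d : dijkstra(3, adj, 0)) cout << d << " ";   // prints: 0 2 3
    cout << "\n";
    return 0;
}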
I'm not sure if I got your idea right. For both Dijkstra's and Prim's algorithms, we should only deal with the vertices that are still in Q.
For Dijkstra's algorithm, the pseudo-code may not explicitly check whether the current vertex is still in Q, but the loop is commented as "only v that is still in Q":
for each neighbor v of u:          // only v that is still in Q
I assume this comment means the same thing as the explicit check in Prim's line 17,
17              if v ∈ Q and length(u, v) < cost[v]
where v is the neighbor from line 16.
Dijkstra and Prim algorithms are very similar.
The difference is:
Prim's algorithm: Closest vertex to a minimum spanning tree via an undirected edge
Dijkstra's algorithm: Closest vertex to a source via a directed path
Source: Algorithms by Sedgewick & Wayne
I'm looking at Dijkstra's algorithm in pseudo-code on Wikipedia:
 1  function Dijkstra(Graph, source):
 2
 3      create vertex set Q
 4
 5      for each vertex v in Graph:             // Initialization
 6          dist[v] ← INFINITY                  // Unknown distance from source to v
 7          prev[v] ← UNDEFINED                 // Previous node in optimal path from source
 8          add v to Q                          // All nodes initially in Q (unvisited nodes)
 9
10      dist[source] ← 0                        // Distance from source to source
11
12      while Q is not empty:
13          u ← vertex in Q with min dist[u]    // Source node will be selected first
14          remove u from Q
15
16          for each neighbor v of u:           // where v is still in Q.
17              alt ← dist[u] + length(u, v)
18              if alt < dist[v]:               // A shorter path to v has been found
19                  dist[v] ← alt
20                  prev[v] ← u
21
22      return dist[], prev[]
https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm
and the part that's confusing me is line 16. It says "for each neighbor", but shouldn't that be "for each child" (i.e., for each neighbor where neighbor != parent)? Otherwise I don't see the point of setting the parent in line 20.
The previous node is set on line 20:
prev[v] ← u
This can only happen if line 14 is executed:
remove u from Q
So, for any v, prev[v] cannot be in Q - it was previously removed, and it will never return to Q (within the loop starting at line 12, items are never added back to Q). This is the same as saying that for any u, prev[u] cannot be in Q - aside from the name of the variable, it says the same thing.
In the question you say that about line 16:
it says for each neighbor
But, if you look at the pseudocode, it actually says
for each neighbor v of u: // where v is still in Q.
So, prev[u] will not be iterated over - it's not in Q.
For what it's worth, I think the pseudocode is a bit sloppy and confusing: "// where v is still in Q" should not be a comment. It doesn't clarify or explain the rest of the code - it alters the meaning, and should be part of the code. Perhaps that confused you.
Ultimately, Dijkstra's algorithm computes something called a shortest-path tree, a tree structure rooted at some starting node where the paths in the tree give the shortest paths to each node in the graph. The logic you're seeing that sets the parent of each node is the part of Dijkstra's algorithm that builds the tree one node at a time.
Although Dijkstra's algorithm builds the shortest-path tree, it doesn't walk over it. Rather, it works by processing the nodes of the original graph in a particular order, constantly updating candidate distances of nodes adjacent to already-processed nodes. This means that in the pseudocode, the logic that says "loop over all the adjacent nodes" is correct, because it means "loop over all the adjacent nodes in the original graph." It wouldn't work to iterate over only the child nodes in the generated shortest-path tree, because that tree hasn't been completely assembled at that point in the algorithm.
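To make the role of prev[] concrete, here is a short hypothetical C++ helper (my own sketch, not from the answer) that walks the prev[] array produced by Dijkstra to recover the shortest path to a given target, i.e. one root-to-node branch of the shortest-path tree.

#include <bits/stdc++.h>
using namespace std;

// Given the prev[] array filled in by Dijkstra (prev[v] = predecessor of v on the
// shortest path from the source, or -1 for UNDEFINED), recover the path
// source -> ... -> target by walking the parent pointers backwards.
vector<int> reconstruct_path(const vector<int> &prev, int source, int target) {
    vector<int> path;
    for (int v = target; v != -1; v = prev[v])   // climb the shortest-path tree toward the root
        path.push_back(v);
    reverse(path.begin(), path.end());
    if (path.front() != source) return {};       // target was never reached: no path
    return path;
}

int main() {
    // Hypothetical prev[] for a 5-node graph with source 0:
    // 1 and 2 hang off 0, 3 hangs off 2, and 4 was unreachable.
    vector<int> prev = {-1, 0, 0, 2, -1};
    for (int v : reconstruct_path(prev, 0, 3)) cout << v << " ";   // prints: 0 2 3
    cout << "\n";
    return 0;
}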
Given an undirected graph G = (V, E), is there an algorithm that computes the total number of shortest paths between two arbitrary vertices u and v? I think we can make use of Dijkstra's algorithm.
Yes, you can use Dijkstra. Create an array that stores the total number of shortest paths to each node; call it total. The initial value of every entry is 0, except total[s] = 1 where s is the source.
In the Dijkstra loop, when relaxing an edge to a node: if the new distance is strictly smaller, overwrite that node's total with the total of the current node; if it is equal, add the total of the current node to that node's total.
Pseudocode taken from wikipedia with some modification:
function Dijkstra(Graph, source):

    create vertex set Q

    for each vertex v in Graph:             // Initialization
        dist[v] ← INFINITY                  // Unknown distance from source to v
        total[v] ← 0                        // Total number of shortest paths to v
        add v to Q                          // All nodes initially in Q (unvisited nodes)

    dist[source] ← 0                        // Distance from source to source
    total[source] ← 1                       // Total number of shortest paths to the source is 1

    while Q is not empty:
        u ← vertex in Q with min dist[u]    // Source node will be selected first
        remove u from Q

        for each neighbor v of u:           // where v is still in Q.
            alt ← dist[u] + length(u, v)
            if alt < dist[v]:               // A shorter path to v has been found
                dist[v] ← alt
                total[v] ← total[u]         // All shortest paths to v now go through u
            else if alt = dist[v]:          // Another shortest path to v, through u
                total[v] ← total[v] + total[u]

    return dist[], total[]
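Here is a minimal C++ sketch of this counting variant (my own code; the edge-list input format and variable names are assumptions, and it assumes positive edge weights). The two branches of the relaxation mirror the if / else if above.

#include <bits/stdc++.h>
using namespace std;

// Dijkstra that also counts shortest paths:
// total[v] = number of distinct shortest paths from the source to v.
int main() {
    int n, m, s;
    cin >> n >> m >> s;                                   // nodes, edges, source
    vector<vector<pair<int,int>>> adj(n + 1);
    for (int i = 0; i < m; i++) {
        int u, v, w;
        cin >> u >> v >> w;
        adj[u].push_back({v, w});
        adj[v].push_back({u, w});                         // undirected graph
    }

    const long long INF = LLONG_MAX;
    vector<long long> dist(n + 1, INF), total(n + 1, 0);
    priority_queue<pair<long long,int>, vector<pair<long long,int>>,
                   greater<pair<long long,int>>> pq;
    dist[s] = 0;
    total[s] = 1;                                         // one (empty) path to the source
    pq.push({0, s});
    while (!pq.empty()) {
        auto [d, u] = pq.top(); pq.pop();
        if (d != dist[u]) continue;                       // stale queue entry
        for (auto [v, w] : adj[u]) {
            long long alt = d + w;
            if (alt < dist[v]) {                          // strictly better: replace the count
                dist[v] = alt;
                total[v] = total[u];
                pq.push({alt, v});
            } else if (alt == dist[v]) {                  // equally good: accumulate the count
                total[v] += total[u];
            }
        }
    }

    for (int v = 1; v <= n; v++)
        cout << v << ": dist=" << (dist[v] == INF ? string("INF") : to_string(dist[v]))
             << " paths=" << total[v] << "\n";
    return 0;
}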
I've been studying the three and I'm stating my inferences from them below. Could someone tell me if I have understood them accurately enough or not? Thank you.
1. Dijkstra's algorithm is used only when you have a single source and you want to know the smallest path from one node to another, but it fails in cases like negative edge weights.
2. Floyd-Warshall's algorithm is used when any of the nodes can be a source, so you want the shortest distance from any source node to any destination node. This only fails when there are negative cycles.
(This is the most important one. I mean, this is the one I'm least sure about:)
3. Bellman-Ford is used like Dijkstra's, when there is only one source. It can handle negative weights, and its working is the same as Floyd-Warshall's except for one source, right?
If you need to have a look, the corresponding algorithms are (courtesy Wikipedia):
Bellman-Ford:
procedure BellmanFord(list vertices, list edges, vertex source)
    // This implementation takes in a graph, represented as lists of vertices
    // and edges, and modifies the vertices so that their distance and
    // predecessor attributes store the shortest paths.

    // Step 1: initialize graph
    for each vertex v in vertices:
        if v is source then v.distance := 0
        else v.distance := infinity
        v.predecessor := null

    // Step 2: relax edges repeatedly
    for i from 1 to size(vertices)-1:
        for each edge uv in edges:          // uv is the edge from u to v
            u := uv.source
            v := uv.destination
            if u.distance + uv.weight < v.distance:
                v.distance := u.distance + uv.weight
                v.predecessor := u

    // Step 3: check for negative-weight cycles
    for each edge uv in edges:
        u := uv.source
        v := uv.destination
        if u.distance + uv.weight < v.distance:
            error "Graph contains a negative-weight cycle"
Dijkstra:
function Dijkstra(Graph, source):
    for each vertex v in Graph:            // Initializations
        dist[v] := infinity ;              // Unknown distance function from source to v
        previous[v] := undefined ;         // Previous node in optimal path from source

    dist[source] := 0 ;                    // Distance from source to source
    Q := the set of all nodes in Graph ;   // All nodes in the graph are unoptimized - thus are in Q
    while Q is not empty:                  // The main loop
        u := vertex in Q with smallest distance in dist[] ;   // Start node in first case
        if dist[u] = infinity:
            break ;                        // all remaining vertices are inaccessible from source

        remove u from Q ;
        for each neighbor v of u:          // where v has not yet been removed from Q
            alt := dist[u] + dist_between(u, v) ;
            if alt < dist[v]:              // Relax (u,v,a)
                dist[v] := alt ;
                previous[v] := u ;
                decrease-key v in Q ;      // Reorder v in the Queue
    return dist ;
Floyd-Warshall:
/* Assume a function edgeCost(i,j) which returns the cost of the edge from i to j
   (infinity if there is none).
   Also assume that n is the number of vertices and edgeCost(i,i) = 0
*/

int path[][];
/* A 2-dimensional matrix. At each step in the algorithm, path[i][j] is the shortest path
   from i to j using intermediate vertices (1..k−1). Each path[i][j] is initialized to
   edgeCost(i,j).
*/

procedure FloydWarshall ()
    for k := 1 to n
        for i := 1 to n
            for j := 1 to n
                path[i][j] = min ( path[i][j], path[i][k]+path[k][j] );
You are correct about the first two questions, and about the goal of Floyd-Warshall (finding the shortest paths between all pairs), but not about the relationship between Bellman-Ford and Floyd-Warshall: Both algorithms use dynamic programming to find the shortest path, but FW isn't the same as running BF from each starting node to every other node.
In BF, the question is: what is the shortest path from the source to the target using at most k edges? The running time is O(EV). If we were to run it from every starting node, the running time would be O(EV^2).
In FW, the question is: what is the shortest path from i to j using only the first k vertices as intermediates, for all nodes i, j, k? This leads to O(V^3) running time - better than running BF from each starting node (by a factor of up to |V| for dense graphs).
One more note about negative cycles/weights: Dijkstra may simply fail to give correct results. BF and FW won't fail - they will detect the negative cycle and correctly state that there is no minimum-weight path, since the weight can be decreased without bound.
Single-source shortest paths:
Dijkstra's algorithm - no negative weights allowed - O(E + V lg V)
Bellman-Ford algorithm - negative weights allowed; if a negative cycle is present, Bellman-Ford will detect it - O(VE)
DAG shortest paths - as the name suggests, it works only for directed acyclic graphs (relax edges in topological order) - O(V + E)
All-pairs shortest paths:
Dijkstra's algorithm run from every vertex - no negative weights allowed - O(VE + V^2 lg V)
Bellman-Ford algorithm run from every vertex - O(V^2 E)
Matrix multiplication method - complexity same as the Bellman-Ford approach
Floyd-Warshall algorithm - uses dynamic programming - O(V^3)