from this website's pseudocode:
Given a graph, G, with edges E of the form (v1, v2) and vertices V, and a
source vertex, s
dist : array of distances from the source to each vertex
prev : array of pointers to preceding vertices
i : loop index
F : list of finished vertices
U : list or heap of unfinished vertices
/* Initialization: set every distance to INFINITY until we discover a path */
for i = 0 to |V| - 1
dist[i] = INFINITY
prev[i] = NULL
end
/* The distance from the source to the source is defined to be zero */
dist[s] = 0
/* This loop corresponds to sending out the explorers walking the paths, where
* the step of picking "the vertex, v, with the shortest path to s" corresponds
* to an explorer arriving at an unexplored vertex */
while(F is missing a vertex)
pick the vertex, v, in U with the shortest path to s
add v to F
for each edge of v, (v1, v2)
/* The next step is sometimes given the confusing name "relaxation" */
if(dist[v1] + length(v1, v2) < dist[v2])
dist[v2] = dist[v1] + length(v1, v2)
prev[v2] = v1
possibly update U, depending on implementation
end if
end for
end while
What is meant by: if(dist[v1] + length(v1, v2) < dist[v2])?
In particular: length(v1, v2).
Shouldn't dist[v1] < dist[v2] be enough?
length(v1, v2) is the weight of the edge from node v1 to v2.
This condition checks whether the path to v2 can be improved by going to v1 first and then taking the edge (v1, v2). Comparing dist[v1] < dist[v2] alone ignores the cost of that last edge, so it cannot tell you whether the new route is actually shorter.
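For concreteness, here is a minimal C++ sketch of that relaxation step (the names dist, prev, adj and the weight field are my own assumptions, not part of the quoted pseudocode):

#include <limits>
#include <vector>

// Relax every outgoing edge of vertex v1: if going through v1 gives a
// shorter route to a neighbour v2, record the improved distance.
void relax_edges(int v1,
                 const std::vector<std::vector<std::pair<int, int>>>& adj, // adj[v] = {v2, weight}
                 std::vector<long long>& dist,
                 std::vector<int>& prev) {
    for (const auto& [v2, weight] : adj[v1]) {
        if (dist[v1] != std::numeric_limits<long long>::max() &&
            dist[v1] + weight < dist[v2]) {      // the condition in question
            dist[v2] = dist[v1] + weight;        // found a shorter path to v2
            prev[v2] = v1;                       // remember how we got there
        }
    }
}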
I would like to know what would be the most efficient way (w.r.t. space and time) to solve the following problem:
Given an undirected graph G = (V, E), a positive number N and a vertex S in V. Assume that every vertex in V has a cost value. Find the N highest-cost vertices that are connected to S.
For example:
G = (V, E)
V = {v1, v2, v3, v4},
E = {(v1, v2),
(v1, v3),
(v2, v4),
(v3, v4)}
v1 cost = 1
v2 cost = 2
v3 cost = 3
v4 cost = 4
N = 2, S = v1
result: {v3, v4}
This problem can be solved easily by a graph traversal algorithm (e.g., BFS or DFS). To find the vertices connected to S, we can run either BFS or DFS starting from S. As BFS and DFS have the same complexity (time: O(V+E), space: O(V+E)), here I am going to show the pseudocode using DFS:
Parameter Definition:
* G -> Graph
* S -> Starting node
* N -> Number of connected (highest cost) vertices to find
* Cost -> Array of size V, contains the vertex cost value
procedure DFS-traversal(G, S, N, Cost):
    let St be a stack
    let Q be a min-priority-queue containing pairs <cost, vertex-id>
    let discovered be an array (of size V) to mark already-visited vertices
    St.push(S)
    // Comment: if you do not want to consider the case "S is connected to S",
    // you can comment out the following line
    Q.push(make-pair(Cost[S], S))
    label S as discovered
    while St is not empty
        v = St.pop()
        for all edges from v to w in G.adjacentEdges(v) do
            if w is not labeled as discovered:
                label w as discovered
                St.push(w)
                Q.push(make-pair(Cost[w], w))
                if Q.size() == N + 1:
                    Q.pop()   // evict the cheapest of the N + 1 candidates
    let ret be an N-sized array
    while Q is not empty:
        ret.append(Q.top().second)   // .second is the vertex-id
        Q.pop()
    return ret
Let's first describe the process. Here, I run the iterative version of DFS to traverse the graph starting from S. During the traversal, I use a min-priority-queue to keep the N highest-cost vertices that are reachable from S. Instead of the priority-queue, we can use a simple array (or even reuse the discovered array) to keep a record of the reachable vertices with their costs.
Analysis of space-complexity:
To store the graph (adjacency list): O(V+E)
Priority-queue: O(N)
Stack: O(V)
For labeling discovered: O(V)
So, as the graph storage is the dominating term here, we can consider O(V+E) as the overall space complexity.
Analysis of time-complexity:
DFS-traversal: O(V+E)
To track N highest cost vertices:
By maintaining priority-queue: O(V*logN)
Or alternatively using array: O(V*logV)
The overall time-complexity would be: O(V*logN + E) or O(V*logV + E)
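A minimal C++ sketch of this approach, assuming the graph is given as an adjacency list over vertices 0..V-1 (the function and variable names here are my own, not part of the pseudocode above):

#include <functional>
#include <queue>
#include <stack>
#include <utility>
#include <vector>

// Sketch: return (up to) N highest-cost vertices reachable from s.
// adj is an adjacency list, cost[v] is the cost of vertex v.
std::vector<int> highestCostReachable(const std::vector<std::vector<int>>& adj,
                                      const std::vector<int>& cost,
                                      int s, int N) {
    int V = adj.size();
    std::vector<bool> discovered(V, false);
    // min-heap of <cost, vertex>: the cheapest kept candidate is on top
    std::priority_queue<std::pair<int, int>,
                        std::vector<std::pair<int, int>>,
                        std::greater<std::pair<int, int>>> best;
    std::stack<int> st;
    st.push(s);
    discovered[s] = true;
    best.push({cost[s], s});                // drop this line to exclude s itself
    while (!st.empty()) {
        int v = st.top(); st.pop();
        for (int w : adj[v]) {
            if (!discovered[w]) {
                discovered[w] = true;
                st.push(w);
                best.push({cost[w], w});
                if ((int)best.size() > N)   // keep at most N candidates
                    best.pop();             // evict the current cheapest
            }
        }
    }
    std::vector<int> result;
    while (!best.empty()) {
        result.push_back(best.top().second);
        best.pop();
    }
    return result;
}

Note that the vertices come off the min-heap in increasing cost order; reverse result at the end if you want them from highest to lowest.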
Given an undirected weighted graph G(V, E) and any three vertices u, v and w, find a vertex x of G such that dist(u,x) + dist(v,x) + dist(w,x) is minimum.
x could be any vertex in G (u, v and w included). Does there exist any particular algorithm for this problem?
You can do it with a stack-based algorithm like the pseudo-code below:
node FindNeigh(node node1, node node2, int graphsize)
{
    byte isGraphProcessed[graphsize];       // all zero initially
    stack nodes1, nodes2;                   // current frontiers of the two searches
    nodes1.push(node1);
    nodes2.push(node2);
    isGraphProcessed[node1.id] = 1;
    isGraphProcessed[node2.id] = 2;
    while(!nodes1.empty() && !nodes2.empty())
    {
        stack tmp1;                         // next frontier for node1's side
        for(node : nodes1)
            for(neigh : node.neighbors)
                if(!isGraphProcessed[neigh.id])
                {
                    tmp1.push(neigh);
                    isGraphProcessed[neigh.id] = 1;      // flag for node 1
                }
                else if(isGraphProcessed[neigh.id] == 2) // the flag of node 2 is set
                    return neigh;
        nodes1 = tmp1;
        stack tmp2;                         // next frontier for node2's side
        for(node : nodes2)
            for(neigh : node.neighbors)
                if(!isGraphProcessed[neigh.id])
                {
                    tmp2.push(neigh);
                    isGraphProcessed[neigh.id] = 2;      // flag for node 2
                }
                else if(isGraphProcessed[neigh.id] == 1) // the flag of node 1 is set
                    return neigh;
        nodes2 = tmp2;
    }
    return NULL;                            // the two nodes are not connected
}
How does it work
You start from both of the given nodes
You collect their neighbors in a stack
If a neighbor has already been added to the stack of the other node, that means it has already been reached from the other node --> it is the closest (meeting) node. We return it.
If nothing is found, we do the same thing with the neighbors of the neighbors (and so on) until something is found.
If node2 can't be reached from node1, it returns NULL.
Note: This algorithm finds the minimal distance between 2 nodes. If you want to do it for 3 nodes, you can add a 3rd stack and look for the first node that carries all 3 flags (e.g. using bit flags 1, 2 and 4).
Hope it helps :)
If k (here, the number of given vertices) is large and there are no negative-cost cycles, then the Floyd-Warshall algorithm can work. It runs in O(|V|^3) time, and after its completion we have the entire shortest-distance matrix, so we can get the shortest distance between any two vertices in O(1) time. Then just scan and look for the vertex x that gives the least sum of distances to the k vertices.
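As a rough illustration of that idea, here is a hedged C++ sketch (the adjacency-matrix representation, the INF sentinel and all names are my own assumptions):

#include <algorithm>
#include <vector>

const long long INF = 1e18;

// dist[i][j] starts as the weight of edge (i, j), 0 on the diagonal, INF if absent.
// Floyd-Warshall turns it into the all-pairs shortest-distance matrix; then we
// scan every candidate x and keep the one with the smallest total distance.
int bestMeetingVertex(std::vector<std::vector<long long>> dist,
                      const std::vector<int>& terminals) {   // e.g. {u, v, w}
    int V = dist.size();
    for (int k = 0; k < V; ++k)
        for (int i = 0; i < V; ++i)
            for (int j = 0; j < V; ++j)
                if (dist[i][k] < INF && dist[k][j] < INF)
                    dist[i][j] = std::min(dist[i][j], dist[i][k] + dist[k][j]);

    int best = -1;
    long long bestSum = INF;
    for (int x = 0; x < V; ++x) {           // scan every candidate meeting vertex
        long long sum = 0;
        bool reachable = true;
        for (int t : terminals) {
            if (dist[t][x] >= INF) { reachable = false; break; }
            sum += dist[t][x];
        }
        if (reachable && sum < bestSum) { bestSum = sum; best = x; }
    }
    return best;                            // -1 if no vertex reaches all terminals
}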
There is a directed graph G = (V, E) with edge weights w(u, v) for (u, v) ∈ E.
Suppose someone gives you values {d[v], π[v]} for v ∈ V and claims that these are the lengths of the shortest paths and the predecessor nodes on them for every v ∈ V. How could I verify whether this claim is true or false without solving the entire shortest-path problem from scratch? This is a problem I came across, and I don't have many ideas.
The problem is a bit unclear, but to clarify:
There's a node s in your graph, and for each vertex v:
for v != s, pi[v] is intended to be a node adjacent to v that's on a shortest path from v to s.
d[v] is intended to store the shortest distance from v to s.
The problem is to verify, given pi and d, that they legitimately contain back-edges and minimal distances.
An easily implemented condition that verifies this is as follows:
For each vertex v:
  Either:
    v = s and d[v] = 0
  Or:
    d[pi[v]] = d[v] - 1
    d[u] >= d[v] - 1 for each u adjacent to v
    pi[v] is adjacent to v
This check runs in O(V + E) time.
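Here is a minimal C++ sketch of that check, assuming unit-length edges (which is what the d[v] - 1 conditions above imply), an adjacency list adj over vertices 0..V-1, and pi[s] = -1; the names are mine, not from the question:

#include <algorithm>
#include <vector>

// Verifies the conditions above in O(V + E). adj[v] lists the vertices adjacent
// to v. Returns true iff every vertex passes either the "v == s" case or the
// three "Or" conditions.
bool verifyShortestPathTree(const std::vector<std::vector<int>>& adj,
                            const std::vector<long long>& d,
                            const std::vector<int>& pi,
                            int s) {
    int V = adj.size();
    for (int v = 0; v < V; ++v) {
        if (v == s) {
            if (d[v] != 0) return false;                 // d[s] must be 0
            continue;
        }
        int p = pi[v];
        // pi[v] must be a real vertex adjacent to v ...
        if (p < 0 || std::find(adj[v].begin(), adj[v].end(), p) == adj[v].end())
            return false;
        // ... lying exactly one step closer to s ...
        if (d[p] != d[v] - 1) return false;
        // ... and no neighbour may be more than one step closer than v is.
        for (int u : adj[v])
            if (d[u] < d[v] - 1) return false;
    }
    return true;
}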
The problem is
Given a graph and N sets of vertices, how to check whether the vertices
in each set lie on a path (which does not have to be simple) in the
graph.
For example, if a set of vertices is {A, B, C}
1) On a graph
A --> B --> C
The vertices A, B and C are on a path
2) On a graph
A ---> B ---> ...
       ^
C -----+
The vertices A, B and C are not on a path
3) On a graph
A <--- B <--- ...
       |
C <----+
The vertices A, B and C are not on a path
4) On a graph
A ---> X -----> B ---> C --> ...
|               ^
+---------------+
The vertices A, B and C are on a path
Here is a simple algorithm with complexity N * (V + E).
for each set S of vertices with more than one element {
    initialize M that maps a vertex to another vertex it reaches;
    mark all vertices of G unvisited;
    pick and remove a vertex v from S;
    while S is not empty {
        among all unvisited nodes, find one vertex v' in S that v can reach;
        if v' does not exist {
            pick and remove a vertex v from S;
            continue;
        }
        M[v] = v';
        remove v' from S;
        v = v';
    }
    // Check that the relations in M form a single path
    {
        if size(M) != (original size of S) - 1 {
            return false;
        }
        take the vertex v from the original S that does not appear as a key in M;
        while M is not empty {
            if there does not exist v' s.t. M[v'] = v {
                return false;
            }
            remove the entry M[v'] from M and set v = v';
        }
        // this set lies on a path; continue with the next set
    }
}
return true;
The for-loop takes N steps; the while-loop visits every node and edge in the worst case, with cost V + E.
Is there any known algorithm to solve the problem?
If the graph is a DAG, could we have a better solution?
Thank you
Acyclicity here is not a meaningful assumption, since we can merge each strong component into one vertex. The paths are a red herring too; with a bunch of two-node paths, we're essentially making a bunch of reachability queries in a DAG. Doing this faster than O(n^2) is thought to be a hard problem: https://cstheory.stackexchange.com/questions/25298/dag-reachability-with-on-log-n-space-and-olog-n-time-queries .
You should check out the Floyd-Warshall algorithm. It will give you the lengths of the shortest paths between all pairs of vertices in the graph in O(V^3). Once you have those results, you can do a brute-force depth-first traversal to determine whether you can go from one node to the next and the next, etc. That should happen in O(n^n), where n is the number of vertices in your current set. Total complexity, then, should be O(V^3 + N*n^n) (or something like that).
The n^n seems daunting, but if n is small compared to V it won't be a big deal in practice.
I can guess that one could improve on that given certain restrictions on the graph.
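A rough C++ sketch of this idea, under my own assumptions (a reachability matrix built with a Floyd-Warshall-style triple loop, and the brute force over orderings of the set done with next_permutation rather than an explicit depth-first search):

#include <algorithm>
#include <vector>

// reach[i][j] starts true iff there is an edge i -> j (or i == j). After the
// triple loop it is the transitive closure. The set lies on a (not necessarily
// simple) path iff some ordering of it has every vertex reachable from its
// predecessor.
bool setOnSomePath(std::vector<std::vector<bool>> reach,
                   std::vector<int> verts) {
    int V = reach.size();
    for (int k = 0; k < V; ++k)
        for (int i = 0; i < V; ++i)
            for (int j = 0; j < V; ++j)
                if (reach[i][k] && reach[k][j]) reach[i][j] = true;

    std::sort(verts.begin(), verts.end());
    do {
        bool ok = true;
        for (size_t i = 0; ok && i + 1 < verts.size(); ++i)
            ok = reach[verts[i]][verts[i + 1]];
        if (ok) return true;                 // found an ordering that chains up
    } while (std::next_permutation(verts.begin(), verts.end()));
    return false;
}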
(This is derived from a recently completed programming contest)
You are given G, a connected graph with N nodes and N-1 edges.
(Notice that this implies G forms a tree.)
Each edge of G is directed. (not necessarily upward to any root)
For each vertex v of G it is possible to invert zero or more edges such that there is a directed path from every other vertex w to v. Let the minimum possible number of edge inversions to achieve this be f(v).
By what linear or log-linear algorithm can we determine the subset of vertices that have the minimal overall f(v) (including the value of f(v) for those vertices)?
For example consider the 4 vertex graph with these edges:
A<--B
C<--B
D<--B
The value of f(A) = 2, f(B) = 3, f(C) = 2 and f(D) = 2...
...so the desired output is {A, C, D} and 2.
(Note: we only need to calculate f(v) for the vertices that have a minimal f(v), not all of them.)
Code:
For posterity, here is the code of the solution:
#include <algorithm>
#include <climits>
#include <cstdio>
#include <iostream>
#include <vector>
using namespace std;

int main()
{
    struct Edge
    {
        bool fwd;    // true: the underlying edge is directed from dest towards this vertex
        int dest;
    };
    int n;
    cin >> n;
    vector<vector<Edge>> V(n + 1);
    // Each input line "a b" apparently encodes the edge a <-- b (directed from b
    // to a), matching the A<--B example above.
    for (int i = 0; i < n - 1; i++)
    {
        int src, dest;
        scanf("%d %d", &src, &dest);
        V[src].push_back(Edge{true, dest});
        V[dest].push_back(Edge{false, src});
    }
    // First pass: root the tree at vertex 1 and count the edges that point away
    // from vertex 1; those are exactly the inversions needed for f(1).
    vector<int> F(n + 1, -1);
    vector<bool> done(n + 1, false);
    vector<int> todo;
    todo.push_back(1);
    done[1] = true;
    F[1] = 0;
    while (!todo.empty())
    {
        int next = todo.back();
        todo.pop_back();
        for (Edge e : V[next])
        {
            if (done[e.dest])
                continue;
            if (!e.fwd)
                F[1]++;      // edge points away from vertex 1: must be inverted
            done[e.dest] = true;
            todo.push_back(e.dest);
        }
    }
    // Second pass: propagate f to every other vertex; moving the target across
    // one edge changes the answer by exactly +1 or -1.
    todo.push_back(1);
    while (!todo.empty())
    {
        int next = todo.back();
        todo.pop_back();
        for (Edge e : V[next])
        {
            if (F[e.dest] != -1)
                continue;
            if (e.fwd)
                F[e.dest] = F[next] + 1;
            else
                F[e.dest] = F[next] - 1;
            todo.push_back(e.dest);
        }
    }
    int minf = INT_MAX;
    for (int i = 1; i <= n; i++)
        minf = min(minf, F[i]);
    cout << minf << endl;
    for (int i = 1; i <= n; i++)
        if (F[i] == minf)
            cout << i << " ";
    cout << endl;
}
I think that the following algorithm works correctly, and it certainly works in linear time.
The motivation for this algorithm is the following. Let's suppose that you already know the value of f(v) for some single node v. Now, consider any node u adjacent to v. If we want to compute the value of f(u), we can reuse some of the information from f(v) in order to compute it. Note that in order to get from any node w in the graph to u, one of two cases must happen:
That path passes through the edge connecting u and v. In that case, the way that we get from w to u is to go from w to v, then to follow the edge from v to u.
That path does not pass through the edge connecting u and v. In that case, the way that we get from w to u is the exact same way that we got from w to v, except that we stop as soon as we get to u.
The reason that this observation is important is that it means that if we know the number of edges we'd flip to get from any node to v, we can easily modify it to get the set of edges that we'd flip to get from any node to u. Specifically, it's going to be the same set of edges as before, except that we want to direct the edge connecting u and v so that it connects v to u rather than the other way around.
If the edge from u to v is initially directed (u, v), then we have to flip all the normal edges we flipped to get every node pointing at v, plus one more edge to get v pointed back at u. Thus f(u) = f(v) + 1. Otherwise, if the edge is originally directed (v, u), then the set of edges that we'd flip would be the same as before (pointing everything at v), except that we wouldn't flip the edge (v, u). Thus f(u) = f(v) - 1.
Consequently, once we know the value of f for a single node v, we can compute it for each adjacent node u as follows:
f(u) = f(v) + 1 if (u, v) is an edge.
f(u) = f(v) - 1 otherwise
This means that we can compute f(v) for all nodes v as follows:
Compute f(v) for some initial node v, chosen arbitrarily.
Do a DFS starting from v. When reaching a node u, compute its f score using the above logic.
All that's left to do is to compute f(v) for some initial node. To do this, we can run a DFS from v outward. Every time we see an edge pointed the wrong way, we have to flip it. Thus the initial value of f(v) is given by the number of wrong-pointing edges we find during the initial DFS.
We thus can compute the f score for each node in O(n) time by doing an initial DFS to compute f(v) for the initial node, then a secondary DFS to compute f(u) for each other node u. You can then for-loop over each of the n f-scores to find the minimum score, then do one more loop to find all values with that f-score. Each of these steps takes O(n) time, so the overall algorithm takes O(n) time as well.
Hope this helps! This was an awesome problem!