Consider a flow network G = (V, E). Let G' = (V, E') be the residual network, where E' = {(u, v) : (u, v) is in E or (v, u) is in E}.
The issue I am having is understanding the following text from CLRS:
The time to find a path in a residual network is therefore O(V + E') = O(E) if we use either depth-first search or breadth-first search.
According to this source, BFS/DFS takes O(V + E), so why is O(V + E') = O(E)?
I would like to know what would be the most efficient way (with respect to space and time) to solve the following problem:
Given an undirected graph G = (V, E), a positive number N and a vertex S in V. Assume that every vertex in V has a cost value. Find the N highest-cost vertices that are connected to S.
For example:
G = (V, E)
V = {v1, v2, v3, v4},
E = {(v1, v2),
(v1, v3),
(v2, v4),
(v3, v4)}
v1 cost = 1
v2 cost = 2
v3 cost = 3
v4 cost = 4
N = 2, S = v1
result: {v3, v4}
This problem can be solved easily by a graph traversal algorithm (e.g., BFS or DFS). To find the vertices connected to S, we can run either BFS or DFS starting from S. As the space and time complexity of BFS and DFS are the same (time complexity: O(V+E); space complexity: O(V+E), counting the graph itself), here I am going to show the pseudocode using DFS:
Parameter Definition:
* G -> Graph
* S -> Starting node
* N -> Number of connected (highest cost) vertices to find
* Cost -> Array of size V, contains the vertex cost value
procedure DFS-traversal(G, S, N, Cost):
    let St be a stack
    let Q be a min-priority-queue of <cost, vertex-id> pairs, ordered by cost
    let discovered be an array (of size V) to mark already-visited vertices
    St.push(S)
    // Comment: if you do not want to consider the case "S is connected to S",
    // you can comment out the following line
    Q.push(make-pair(Cost[S], S))
    label S as discovered
    while St is not empty:
        v = St.pop()
        for all edges from v to w in G.adjacentEdges(v) do:
            if w is not labeled as discovered:
                label w as discovered
                St.push(w)
                Q.push(make-pair(Cost[w], w))
                // keep only the N highest-cost vertices seen so far
                if Q.size() == N + 1:
                    Q.pop()    // evicts the current minimum-cost entry
    let ret be an N-sized array
    while Q is not empty:
        ret.append(Q.top().second)    // .second is the vertex id
        Q.pop()
    return ret
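For concreteness, here is a minimal runnable Python sketch of the pseudocode above (the dict-of-lists graph representation and the function name are my own choices; heapq plays the role of the min-priority-queue):

import heapq

def n_highest_cost_connected(graph, s, n, cost):
    # graph: dict mapping vertex -> list of adjacent vertices
    # cost:  dict mapping vertex -> cost value
    heap = [(cost[s], s)]      # min-heap of (cost, vertex), capped at n entries;
                               # drop this initial entry to exclude s itself
    discovered = {s}
    stack = [s]
    while stack:
        v = stack.pop()
        for w in graph[v]:
            if w not in discovered:
                discovered.add(w)
                stack.append(w)
                heapq.heappush(heap, (cost[w], w))
                if len(heap) == n + 1:
                    heapq.heappop(heap)   # evict the current minimum-cost vertex
    return [v for _, v in heap]

# The example from the question: N = 2, S = v1 -> {v3, v4}
graph = {'v1': ['v2', 'v3'], 'v2': ['v1', 'v4'],
         'v3': ['v1', 'v4'], 'v4': ['v2', 'v3']}
cost = {'v1': 1, 'v2': 2, 'v3': 3, 'v4': 4}
print(n_highest_cost_connected(graph, 'v1', 2, cost))   # ['v3', 'v4']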
Let me first describe the process. Here, I run the iterative version of DFS to traverse the graph starting from S. During the traversal, I use a priority queue to keep the N highest-cost vertices that are reachable from S. Instead of the priority queue, we could use a simple array (or even reuse the discovered array) to keep a record of the reachable vertices and their costs.
Analysis of space complexity:
To store the graph (adjacency list): O(V + E)
Priority queue: O(N)
Stack: O(V)
For labeling discovered: O(V)
So the overall space complexity is O(V + E), which is O(E) whenever E >= V.
Analysis of time complexity:
DFS traversal: O(V + E)
To track the N highest-cost vertices:
by maintaining the priority queue: O(V * log N)
or alternatively, by collecting the reachable vertices in an array and sorting it: O(V * log V)
The overall time complexity would therefore be O(V * log N + E) or O(V * log V + E).
I am confused about addition with the big O notation.
I'm to create an algorithm to find an MST for a graph, with some other requirements, for a school problem. Its time complexity is to be in O(E * log V), where E is the number of edges and V the number of vertices in the graph. I have arrived at a solution that is in O(E * log V) + O(V).
Does it hold that O(E * log V) + O(V) = O(E * log V)?
Thank you for all the answers! I am assuming this complexity on connected graphs; on graphs that are not connected, my algorithm works in O(E * log V).
For any x, you can make a graph with x edges and 2ˣ (mostly disconnected) vertices.
For such a graph, E log V = x · log(2ˣ) = x², so (V + E log V)/(E log V) = (2ˣ + x²)/x².
This grows without bound as x increases, so O(E log V) + O(V) is NOT the same as O(E log V), even for graphs.
HOWEVER, if you specify connected graphs, then E >= V - 1, so V <= E + 1 <= 2E. In that case, as long as V >= 2 (so that log V >= 1), you have V + E log V <= 2(E log V) + E log V = 3(E log V).
So O(E log V) = O(E log V) + O(V) for connected graphs.
O(E log V + V) is not the same as O(E log V). In general, V can be arbitrarily larger than E log V, which makes the two complexity classes different.
But, assuming you have an O(E log V + V) time algorithm for finding an MST if one exists, you can turn it into a guaranteed O(E log V) time algorithm, assuming the graph is represented in adjacency list form.
We can determine, in O(E) time, whether E >= V/2. Go through the vertices of the graph and check whether there are any edges adjacent to each one, as sketched below. If you find a vertex with no adjacent edges, the graph clearly has no MST, since that vertex is not connected to the rest of the graph. If you have gone through all vertices, you know that E >= V/2. If you find a vertex with no adjacent edges after n steps, you have already seen n - 1 vertices that each have at least one incident edge, and every edge is counted at most twice, so the graph has at least (n - 1)/2 edges; this procedure therefore takes O(E) time (even though naively it looks like O(V) time).
If E is less than V/2, the graph is disconnected (since in a connected graph, E>=V-1), and there's no MST.
So: check if E>=V/2 and only if so, run your MST algorithm.
This takes O(E + E log V + V) = O(E + E log V + 2E) = O(E log V) time, since the check guarantees V <= 2E.
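A minimal sketch of that check in Python, assuming the graph is given as a dict of adjacency lists (the function name and example graph are my own, not part of the original answer):

def no_isolated_vertex(adj):
    # Returns True iff every vertex has at least one incident edge.
    # If some vertex is isolated, the graph is disconnected and has no MST.
    # Otherwise every vertex has degree >= 1, so E >= V/2, i.e. V <= 2E.
    # all() short-circuits at the first isolated vertex: after n steps we
    # have seen n - 1 vertices of degree >= 1, hence at least (n - 1)/2
    # edges, so the scan costs O(E) rather than O(V).
    return all(neighbours for neighbours in adj.values())

# Example: vertex 'd' is isolated, so there is no MST and we can skip
# the O(E log V + V) algorithm entirely.
g = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b'], 'd': []}
print(no_isolated_vertex(g))   # False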
Let G = (V, E) be a directed graph, given in the adjacency list format. Define a directed graph G' = (V, E') where an edge (u, v) ∈ E' if and only if (v, u) ∈ E (namely, G' reverses the direction of each edge in G). Describe an algorithm to obtain an adjacency list representation of G' in O(|V| + |E|) time.
Is there a simple way to invert the adjacency list?
Say it was:
a-> b
b-> de
c-> c
d-> ab
e->
to:
a-> d
b-> ad
c-> c
d-> b
e-> b
Let's say you iterate the adjacency lists of all vertices in the graph as follows:
for each u in V:
    for each v in adjacency_list[u]:
        reverse_adjacency_list[v].append(u)
With this approach, you traverse the adjacency lists of all |V| vertices, which contributes O(|V|) to the overall time complexity of your algorithm. Also, as you traverse all of those adjacency lists, you effectively traverse all the edges in the graph. In other words, if you concatenated all the adjacency lists you traverse, the length of that resulting list would be |E|. Thus, another O(|E|) is contributed to the overall complexity.
Consequently, the time complexity will be O(|V| + |E|) with this pretty standard approach, and you do not need to devise any peculiar method to achieve this complexity.
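Here is the same approach as a small runnable Python sketch, using a dict of lists for the adjacency lists and the example graph from the question:

def reverse_graph(adj):
    # Build the adjacency list of G' from that of G in O(|V| + |E|).
    rev = {u: [] for u in adj}        # O(|V|): one empty list per vertex
    for u in adj:                     # every edge (u, v) is touched exactly once
        for v in adj[u]:
            rev[v].append(u)          # (u, v) in E becomes (v, u) in E'
    return rev

g = {'a': ['b'], 'b': ['d', 'e'], 'c': ['c'], 'd': ['a', 'b'], 'e': []}
print(reverse_graph(g))
# {'a': ['d'], 'b': ['a', 'd'], 'c': ['c'], 'd': ['b'], 'e': ['b']}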
There is a directed graph G = (V, E) with edge weights w(u, v) for (u, v) ∈ E. Suppose someone supplies the values {d[v], π[v]} for v ∈ V and claims that these are the length of the shortest path and the predecessor node on it for each v ∈ V. How could I verify whether this claim is true or false in a way that does not solve the entire shortest-path problem from scratch? This is a problem I met with not many ideas in my head.
The problem is a bit unclear, but to clarify:
There's a node s in your graph, and for each vertex v:
for v != s, pi[v] is intended to be a node adjacent to v that's on a shortest path from v to s;
d[v] is intended to store the shortest distance from v to s.
The problem is to verify, given pi and d, that they legitimately contain back-edges and minimal distances.
An easily implemented condition that verifies this (for unit edge weights) is as follows:
For each vertex v, either:
    v = s and d[v] = 0
or all of:
    d[pi[v]] = d[v] - 1
    d[u] >= d[v] - 1 for each u adjacent to v
    pi[v] is adjacent to v
This check runs in O(V + E) time.
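A small Python sketch of this check, assuming unit edge weights as the conditions above do (the function name and dict-based representation are my own choices):

def verify_shortest_to_s(adj, s, d, pi):
    # adj: dict mapping vertex -> list of adjacent vertices
    # d:   claimed shortest distance from each vertex to s
    # pi:  claimed next vertex on a shortest path toward s (for v != s)
    # Runs in O(V + E): each vertex and each adjacency entry is checked once.
    for v in adj:
        if v == s:
            if d[v] != 0:
                return False
        else:
            if pi[v] not in adj[v]:                    # pi[v] must be adjacent to v
                return False
            if d[pi[v]] != d[v] - 1:                   # the back-edge must make progress
                return False
            if any(d[u] < d[v] - 1 for u in adj[v]):   # no neighbour may be too close
                return False
    return True

# A path graph a - b - c with s = 'a':
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b']}
print(verify_shortest_to_s(adj, 'a', {'a': 0, 'b': 1, 'c': 2}, {'b': 'a', 'c': 'b'}))   # True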
Let G = (V, E) be a weighted, directed graph with weight function w : E → R. Give an O(VE)-time algorithm to find, for each vertex v ∈ V, the value δ*(v) = min_{u∈V} δ(u, v).
I don't understand the question. Could someone give me some ideas?
This basically means:
G = (V, E): a graph with V vertices and E edges.
Weighted, directed graph with weight function w : E → R: the graph is directed and weighted, where each edge has a real-valued weight.
O(VE)-time algorithm: find an algorithm that runs in a number of operations proportional to the number of vertices multiplied by the number of edges.
For each vertex v ∈ V, the value δ*(v) = min_{u∈V} δ(u, v): here they have not spelled out what δ(u, v) means, but it is the shortest-path weight from vertex u to v (the minimum total edge weight over all paths from u to v). So the exercise asks you to find, for each vertex v, the smallest shortest-path distance to v over all possible starting vertices u.
And the answer to your question is Bellman-Ford: add a new source vertex s with a zero-weight edge to every vertex of G and run Bellman-Ford from s. The resulting distance to v is exactly min over u of δ(u, v), and the run takes O(VE) time.
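A sketch of that construction in Python, assuming δ(u, v) denotes the shortest-path weight and no negative-weight cycles are reachable (the function name and edge-list representation are illustrative):

def min_over_all_sources(n, edges):
    # n:     number of vertices, labelled 0 .. n-1
    # edges: list of (u, v, w) triples, one per directed edge of weight w
    # Add a super-source s = n with a zero-weight edge to every vertex;
    # a shortest path from s to v then starts with the best choice of u,
    # so dist[v] = min over u of delta(u, v).
    INF = float('inf')
    s = n
    all_edges = edges + [(s, v, 0) for v in range(n)]
    dist = [INF] * (n + 1)
    dist[s] = 0
    for _ in range(n):                 # |V| - 1 = n relaxation rounds
        for u, v, w in all_edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist[:n]

# Example: edge 0 -> 1 with weight -2; delta*(0) = 0, delta*(1) = -2.
print(min_over_all_sources(2, [(0, 1, -2)]))   # [0, -2]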