Complete graph with only two possible costs. What's the shortest path's cost from 0 to N - 1 - algorithm

You are given a complete undirected graph with N vertices. All but K edges have a cost of A; those K edges have a cost of B, and you are given them as a list of pairs. What is the minimum cost of a path from node 0 to node N - 1?
2 <= N <= 500k
0 <= K <= 500k
1 <= A, B <= 500k
The problem arises, obviously, when those K edges cost more than the other ones and node 0 and node N - 1 are connected by a K-edge.
Dijkstra doesn't work. I've even tried something very similar with a BFS:
Step 1: Let G(0) be the set of "good" nodes adjacent to node 0.
Step 2: For each node in G(0):
    compute G(node)
    if G(node) contains N - 1:
        return step
    else:
        add node to some queue
    repeat Step 2 and increment step
The problem is that this takes a lot of time, because for every node you have to loop over all N vertices in order to find its "good" adjacent nodes.
Does anyone have any better ideas? Thank you.
Edit: Here is a link from the ACM contest: http://acm.ro/prob/probleme/B.pdf

This is laborious case work:
A < B and 0 and N-1 are joined by A -> trivial.
B < A and 0 and N-1 are joined by B -> trivial.
B < A and 0 and N-1 are joined by A ->
Do a BFS on the graph with only the K edges.
A < B and 0 and N-1 are joined by B ->
You can check in O(N) time if there is a path of cost 2*A (try every vertex as the middle one).
To check other path lengths, the following algorithm should do the trick:
Let X(d) be the set of nodes reachable from 0 using d short edges. You can find X(d) using the following algorithm: take each vertex v with unknown distance and iteratively check edges between v and vertices from X(d-1). If you find a short edge, then v is in X(d); otherwise you stepped on a long edge. Since there are at most K long edges, you can step on them at most K times in total, so you can find the distances of all vertices in at most O(N + K) time.
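A Python sketch of this layer-by-layer search (function and parameter names are my own; `long_edges` holds the K special pairs, and every other pair counts as a short edge):

```python
def cheap_bfs(n, long_edges):
    """BFS distances from 0 over the *short* edges of a complete graph,
    given only the list of long edges; O(N + K) overall."""
    is_long = set()
    for u, v in long_edges:
        is_long.add((u, v))
        is_long.add((v, u))
    dist = [-1] * n
    dist[0] = 0
    unvisited = set(range(1, n))
    frontier = [0]
    while frontier and unvisited:
        nxt = []
        for u in frontier:
            kept = []
            for v in unvisited:
                if (u, v) in is_long:
                    kept.append(v)       # stepped on a long edge: happens <= K times
                else:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
            unvisited = set(kept)
        frontier = nxt
    return dist
```

Each vertex is either visited (once) or kept because of a distinct long edge, which is where the O(N + K) bound comes from.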

I propose a solution to a somewhat more general problem where you might have more than two types of edges and the edge weights are not bounded. For your scenario the idea is probably a bit overkill, but the implementation is quite simple, so it might be a good way to go about the problem.
You can use a segment tree to make Dijkstra more efficient. You will need the operations
set an upper bound in a range: given U, L, R, for all x[i] with L <= i <= R, set x[i] = min(x[i], U)
find a global minimum
The upper bounds can be pushed down the tree lazily, so both operations can be implemented in O(log n).
When relaxing the outgoing edges of a vertex, look up its edges with cost B, sort them, and update all the ranges in between at once.
The runtime should be O(n log n + m log m) if you sort all the edges upfront (by outgoing vertex).
EDIT: Got accepted with this approach. The good thing about it is that it avoids any kind of special casing. It's still ~80 lines of code.
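For the concrete two-cost problem, a sketch of this idea might look as follows in Python (all names are mine; the tree supports lazy range chmin plus extraction of the global minimum, and the relaxation updates the ranges between a vertex's sorted B-neighbours at once; clarity is favoured over constant factors):

```python
from collections import defaultdict

INF = float('inf')

class ChminSegTree:
    """Min-tree over positions 0..n-1: lazy range chmin, global argmin pop."""
    def __init__(self, n):
        self.size = 1
        while self.size < n:
            self.size *= 2
        self.tree = [(INF, -1)] * (2 * self.size)
        for i in range(n):
            self.tree[self.size + i] = (INF, i)
        for i in range(self.size - 1, 0, -1):
            self.tree[i] = min(self.tree[2 * i], self.tree[2 * i + 1])
        self.lazy = [INF] * (2 * self.size)

    def _apply(self, node, val):
        if val < self.tree[node][0]:
            self.tree[node] = (val, self.tree[node][1])
        if node < self.size:
            self.lazy[node] = min(self.lazy[node], val)

    def _push(self, node):
        if self.lazy[node] < INF:
            self._apply(2 * node, self.lazy[node])
            self._apply(2 * node + 1, self.lazy[node])
            self.lazy[node] = INF

    def chmin(self, l, r, val, node=1, nl=0, nr=None):
        """x[i] = min(x[i], val) for l <= i < r."""
        if nr is None:
            nr = self.size
        if r <= nl or nr <= l:
            return
        if l <= nl and nr <= r:
            self._apply(node, val)
            return
        self._push(node)
        mid = (nl + nr) // 2
        self.chmin(l, r, val, 2 * node, nl, mid)
        self.chmin(l, r, val, 2 * node + 1, mid, nr)
        self.tree[node] = min(self.tree[2 * node], self.tree[2 * node + 1])

    def pop_min(self):
        """Return (value, position) of the global min and reset it to +inf."""
        val, pos = self.tree[1]
        if val == INF:
            return val, pos
        node, nl, nr = 1, 0, self.size
        while node < self.size:
            self._push(node)
            mid = (nl + nr) // 2
            if pos < mid:
                node, nr = 2 * node, mid
            else:
                node, nl = 2 * node + 1, mid
        self.tree[node] = (INF, pos)
        node //= 2
        while node:
            self.tree[node] = min(self.tree[2 * node], self.tree[2 * node + 1])
            node //= 2
        return val, pos

def two_cost_dijkstra(n, A, B, special):
    """special: pairs whose edge costs B; every other pair costs A."""
    adj = defaultdict(list)
    for u, v in special:
        adj[u].append(v)
        adj[v].append(u)
    st = ChminSegTree(n)
    st.chmin(0, 1, 0)                     # dist[0] = 0
    dist = [INF] * n
    done = [False] * n
    popped = 0
    while popped < n:
        d, u = st.pop_min()
        if d == INF:
            break
        if done[u]:
            continue                      # stale entry re-lowered by a range update
        done[u] = True
        dist[u] = d
        popped += 1
        for v in adj[u]:                  # the few B-cost edges, one by one
            st.chmin(v, v + 1, d + B)
        blocked = sorted(set(adj[u]) | {u})
        prev = 0                          # A-cost edges: ranges between blocked spots
        for v in blocked + [n]:
            st.chmin(prev, v, d + A)
            prev = v + 1
    return dist
```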

In the case when A < B, I would go with a kind of BFS, where you check where you can't reach instead of where you can. Here's the pseudocode:
G(k) is the set of nodes reachable by k cheap edges and no fewer. We start with G(0) = {v0}.
while G(k) isn't empty and G(k) doesn't contain vN-1 and k*A < B:
    A = array[N] of zeroes
    for every node n in G(k):
        for every expensive edge (n,m):
            A[m]++
    # now we have that A[m] == |G(k)| iff m can't be reached by a cheap edge from any node of G(k)
    set G(k+1) to {m; A[m] < |G(k)|} except {n; n is in G(0),...,G(k)}
    k++
This way you avoid iterating through the (many) cheap edges and only iterate through the relatively few expensive edges.
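A Python sketch of this pseudocode (my own naming; it assumes A < B, returns the cheapest 0-to-(n-1) cost, and falls back to the direct expensive edge once growing another cheap layer can no longer pay off):

```python
from collections import defaultdict

def cheap_cost(n, expensive_edges, A, B):
    """Cost from 0 to n-1 when A < B."""
    exp = defaultdict(set)
    for u, v in expensive_edges:
        exp[u].add(v)
        exp[v].add(u)
    if n - 1 not in exp[0]:
        return A                       # direct cheap edge exists
    seen = {0}
    frontier = {0}
    k = 0
    while frontier and (k + 1) * A < B:
        count = defaultdict(int)       # expensive edges seen from the frontier
        for u in frontier:
            for m in exp[u]:
                count[m] += 1
        # m is cheaply reachable from the frontier iff count[m] < |frontier|
        nxt = {m for m in range(n)
               if m not in seen and count[m] < len(frontier)}
        if n - 1 in nxt:
            return (k + 1) * A
        seen |= nxt
        frontier = nxt
        k += 1
    return B                           # fall back to the direct expensive edge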

As you have correctly noted, the problem comes when A > B and the edge from 0 to n-1 has a cost of A.
In this case you can simply delete all edges in the graph that have a cost of A. This is because an optimal route will only use edges with cost B: any route containing an A edge already costs at least A, which the direct A edge achieves on its own.
Then you can perform a simple BFS, since the costs of all remaining edges are the same. It will give you optimal performance, as pointed out by this link: Finding shortest path for equal weighted graph
Moreover, you can stop your BFS as soon as the accumulated cost exceeds A.
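A sketch of this case in Python (names are mine; it assumes B < A and that 0 and n-1 are joined by an A edge, so the answer is the better of that single edge and a pure-B path):

```python
from collections import deque

def cost_when_b_cheaper(n, b_edges, A, B):
    """B < A and the direct 0..(n-1) edge costs A: BFS over B edges only."""
    adj = [[] for _ in range(n)]
    for u, v in b_edges:
        adj[u].append(v)
        adj[v].append(u)
    dist = [-1] * n
    dist[0] = 0
    q = deque([0])
    while q:
        u = q.popleft()
        if (dist[u] + 1) * B >= A:     # any longer B path is no better than A
            break
        for v in adj[u]:
            if dist[v] == -1:
                dist[v] = dist[u] + 1
                q.append(v)
    if dist[n - 1] != -1:
        return min(A, dist[n - 1] * B)
    return A                           # fall back to the direct A edge
```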

Related

How to count all reachable nodes in a directed graph?

There is a directed graph (which might contain cycles), and each node has a value on it. How could we get the sum of reachable values for each node? For example, in the following graph:
the reachable sum for node 1 is: 2 + 3 + 4 + 5 + 6 + 7 = 27
the reachable sum for node 2 is: 4 + 5 + 6 + 7 = 22
.....
My solution: to get the sum for all nodes, I think the time complexity is O(n + m), where n is the number of nodes and m stands for the number of edges. DFS should be used: for each node, recursively visit its successors, and save the sum once it has been computed so that in the future we don't need to calculate it again. A set needs to be kept per node to avoid endless calculation caused by loops.
Does it work? I don't think it is elegant enough, especially since many sets have to be created. Is there any better solution? Thanks.
This can be done by first finding the Strongly Connected Components (SCCs), which takes O(|V|+|E|). Then build a new graph, G', for the SCCs (each SCC is a node in this graph), where each node has a value which is the sum of the values of the nodes in that SCC.
Formally,
G' = (V',E')
Where V' = {U1, U2, ..., Uk | U_i is a SCC of the graph G}
E' = {(U_i,U_j) | there is node u_i in U_i and u_j in U_j such that (u_i,u_j) is in E }
Then, this graph (G') is a DAG, and the question becomes simpler, and seems to be a variant of question linked in comments.
EDIT: the previous answer (struck out) is a mistake from this point on; editing with a new answer. Sorry about that.
Now, a DFS can be used from each node to find the sum of values (resetting the visited flags before each new starting node):
DFS(v):
    if v.visited:
        return 0
    v.visited = true
    total = v.value
    for u in v.children:
        total += DFS(u)
    return total
This is O(V^2 + VE) in the worst case, but since the condensed graph has fewer nodes, V and E are now significantly lower.
Some local optimizations can be made, for example, if a node has a single child, you can reuse the pre-calculated value and not apply DFS on the child again, since there is no fear of counting twice in this case.
A DP solution for this problem (DAG) can be:
D[i] = value(i) + sum {D[j] | (i,j) is an edge in G' }
This can be calculated in linear time (after a topological sort of the DAG). Beware, though: when a component is reachable through more than one successor (a diamond in the DAG), this sum counts it more than once, so the plain DP only works when that cannot happen; in general a traversal per node is still needed.
Pseudo code:
Find SCCs
Build G'
Topological sort G'
Find D[i] for each node in G'
apply value for all node u_i in U_i, for each U_i.
Total time for the SCC, condensation and topological-sort steps is O(|V|+|E|).
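A runnable sketch of the whole pipeline in Python (my own naming; it uses Kosaraju's algorithm, whose component ids conveniently come out in topological order, and it unions reachable component sets rather than summing successor results, since a plain sum would double-count components reachable through several successors; the returned sum excludes each node's own value, matching the example in the question):

```python
def reachable_sums(n, edges, value):
    """For each node, the sum of values of all other nodes it can reach
    (other members of its own SCC count; the node itself does not)."""
    g = [[] for _ in range(n)]
    gr = [[] for _ in range(n)]
    for u, v in edges:
        g[u].append(v)
        gr[v].append(u)

    # pass 1: vertices in order of finishing time (iterative DFS)
    order, seen = [], [False] * n
    for s in range(n):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, 0)]
        while stack:
            u, i = stack.pop()
            if i < len(g[u]):
                stack.append((u, i + 1))
                v = g[u][i]
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, 0))
            else:
                order.append(u)

    # pass 2: SCCs on the reversed graph; comp ids come out topologically sorted
    comp, c = [-1] * n, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            u = stack.pop()
            for v in gr[u]:
                if comp[v] == -1:
                    comp[v] = c
                    stack.append(v)
        c += 1

    compval = [0] * c
    for u in range(n):
        compval[comp[u]] += value[u]
    succ = [set() for _ in range(c)]
    for u, v in edges:
        if comp[u] != comp[v]:
            succ[comp[u]].add(comp[v])

    # union reachable component sets, sinks first, to avoid double counting
    reach = [None] * c
    for ci in range(c - 1, -1, -1):
        r = {ci}
        for cj in succ[ci]:
            r |= reach[cj]
        reach[ci] = r
    total = [sum(compval[cj] for cj in r) for r in reach]
    return [total[comp[u]] - value[u] for u in range(n)]
```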
You can use DFS or BFS algorithms for solving your problem.
Both have complexity O(V + E).
You don't have to count all values for all nodes, and you don't need recursion.
Just do something like this.
Typically DFS looks like this:
unmark all vertices
choose some starting vertex x
mark x
list L = x
while L nonempty:
    choose some vertex v from front of list
    visit v
    for each unmarked neighbor w:
        mark w
        add it to end of list
In your case you have to add some lines:
unmark all vertices
choose some starting vertex x
mark x
list L = x
float sum = 0
while L nonempty:
    choose some vertex v from front of list
    visit v
    sum += v->value
    for each unmarked neighbor w:
        mark w
        add it to end of list
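A runnable version of this marked-list BFS in Python, summing the values reachable from one starting vertex (the start's own value is included; names are mine):

```python
from collections import deque

def reachable_value_sum(start, adj, value):
    """Sum of values over every vertex reachable from `start` (incl. itself)."""
    seen = {start}
    q = deque([start])
    total = 0
    while q:
        v = q.popleft()
        total += value[v]
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                q.append(w)
    return total
```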

Graph Theory - Length of Cycle UnDirected Graph - Adjacency Matrix

Study review question for the comprehensive exam, algorithms part.
Let G be an undirected graph with n vertices that contains exactly one cycle plus isolated vertices (i.e. no leaves). That means the degree of a vertex is 0 (isolated) if it is not in the cycle and 2 if it is part of the cycle. Assume that the graph is represented by an adjacency matrix. Describe an efficient algorithm that finds the length of the cycle.
I am looking for help verifying my understanding, i.e. checking whether the algorithm and its analysis are correct.
My Answer (pseudo pythonic)
visited = [] // this will be a list of (u,v) pairs belonging to the cycle
for each u,v in G[][]:
    if G[u][v] == 1: // is an edge
        if (u,v) in visited:
            return len(visited) // the length of the cycle, since we've hit the beginning of the cycle
        else:
            visited.add((u,v))
English-based understanding
We know a cycle must exist by definition of the question, so the case where no cycle is found need not be accounted for.
For each pair of vertices, check if it is an edge.
If it is an edge, check if we've been there before. If we have, we've found the cycle, and we return the number of all visited edges (the size of the cycle).
If it is not a visited edge, add it to the visited list, and continue until we find the source edge (grow the cycle by 1 until we hit the source).
My analysis for it may be off. Since we visit each (u,v) pair at least once, and then check if it is an edge, with 2 comparisons per edge, I think it comes to O(|V|^2 + 2|E|):
the number of vertices squared (since we visit every pair in the matrix), plus 2 comparisons per edge.
Can someone please advise on efficiency and correctness? Also, maybe provide a more English-based understanding if there is a logical leap I may have made without acknowledging it?
Thanks for reading and thanks in advance for the assistance.
Given the conditions in the question (that is, the only edges in the graph are part of the cycle), the length of the cycle is the number of edges in the graph, which is half the number of 1s in the adjacency matrix (each edge (i, j) appears twice: A[i,j]=1 and A[j,i]=1).
The obvious algorithm therefore is to just sum the entries of the adjacency matrix and divide by 2. This is O(V^2) if there's V vertices.
One thing that looks like it might help is, once you've found the first 1 in the adjacency matrix, to follow edges until you get back to the start:
find i, j such that A[i, j] = 1
start = i
cycle_length = 1
repeat
    find k != i with A[j, k] = 1
    i, j = j, k
    cycle_length++
until j = start
After this process terminates, cycle_length is the length of the cycle. This is still worst-case O(V^2) though, although if you can find a single vertex on the cycle quickly, it's O(V*C) where C is the length of the cycle.
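A Python sketch of this edge-following procedure (my own naming; it assumes the matrix is symmetric and describes exactly one cycle plus isolated vertices):

```python
def cycle_length(A):
    """Follow the cycle from the first 1 found in the adjacency matrix."""
    n = len(A)
    start = None
    for i in range(n):
        for j in range(n):
            if A[i][j] == 1:
                start, cur = i, j
                break
        if start is not None:
            break
    prev, length = start, 1
    while cur != start:
        # the next cycle vertex: cur's neighbour that we didn't come from
        nxt = next(k for k in range(n) if A[cur][k] == 1 and k != prev)
        prev, cur = cur, nxt
        length += 1
    return length
```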
The code in your question doesn't work. You're iterating over (u, v) as indices in the matrix, so it's impossible to encounter the same (u, v) twice.
Since there's exactly one cycle, a vertex is part of the cycle iff it is connected to at least one other vertex. Since the graph is undirected, the following rule can be used:
if an edge exists between v1 and v2, the edge exists between v2 and v1 as well. In other words, the search for a starting cycle vertex only needs to scan the part of the matrix where v1 < v2, which reduces the number of matrix elements read for that step by more than 50%, even in the worst case. And since we're tracing a cycle, we can simply remember the previously visited node to ensure we don't walk back to it, and stop when the current node equals the starting node.
// search for any node that is part of the cycle
int firstNode = -1
for int i in 1 .. len(graph) - 1
    for int j in 0 .. i - 1
        if graph[i][j] == 1
            firstNode = i
            break out of both loops

// walk along the cycle, remembering the previous node
int node_c = firstNode
int node_p = -1
int count = 0
do
    // find the neighbour that isn't the previous node; here the full row
    // must be scanned, since the next cycle node may have a larger index
    int n
    for n in 0 .. len(graph) - 1
        if graph[node_c][n] == 1 AND n != node_p
            break
    // update the node variables for the next step
    node_p = node_c
    node_c = n
    ++count
while node_c != firstNode // continue until the algorithm reaches the start node
Apart from that, there isn't much to be optimized (at least I don't know any way to further reduce the runtime).

Find max subset of tree with max distance not greater than K

I ran into a dynamic programming problem on InterviewStreet named "Far Vertices".
The problem is like:
You are given a tree that has N vertices and N-1 edges. Your task is
to mark as small a number of vertices as possible so that the maximum
distance between two unmarked vertices be less than or equal to K. You
should write this value to the output. Distance between two vertices i
and j is defined as the minimum number of edges you have to pass in
order to reach vertex i from vertex j.
I was trying to do a DFS from every node of the tree, in order to find the maximum connected subset of nodes such that no pair in the subset had distance greater than K.
But I could not define the state, and transitions between states.
Is there anybody that could help me?
Thanks.
The problem consists essentially of finding the largest subtree of diameter <= k, and subtracting its size from n. You can solve it using DP.
Some useful observations:
The diameter of a tree rooted at node v (T(v)) is:
0 if v has no children,
max(diameter T(c), height T(c) + 1) if there is one child c,
max(max(diameter T(c)) for all children c of v, max(height T(c1) + height T(c2) + 2) for all children c1, c2 of v, c1 != c2) otherwise.
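A sketch of the height/diameter recurrences in Python, using edge counts throughout (so a lone node has height 0 and diameter 0; `tree` is assumed to be an adjacency mapping, and names are mine):

```python
def height_diameter(tree, v, parent=-1):
    """(height, diameter) of the subtree rooted at v, both in edge counts."""
    child_heights = []
    diam = 0
    for c in tree[v]:
        if c == parent:
            continue
        h, d = height_diameter(tree, c, v)
        child_heights.append(h + 1)      # height T(c) + 1
        diam = max(diam, d)              # best diameter inside a child
    child_heights.sort(reverse=True)
    height = child_heights[0] if child_heights else 0
    if len(child_heights) >= 2:
        # path through v joining the two deepest children
        diam = max(diam, child_heights[0] + child_heights[1])
    elif child_heights:
        diam = max(diam, child_heights[0])
    return height, diam
```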
Since we care about maximizing tree size and bounding tree diameter, we can flip the above around to suggest limits on each subtree:
For any tree rooted at v, the subtree of interest is at most k deep.
If n is a node in T(v) at depth exactly k from v (so its children are out of reach), its maximum size is 1.
If n has one child c, the maximum size of T(n) with diameter <= k is the maximum size of T(c) plus 1.
Now for the tricky bit. If n has more than one child, we have to find all the possible tree sizes resulting from allocating the available depth to each child. Say we are at depth 3 and k = 7, so we have 4 levels of depth left to play with. If we have three children, we could allocate all 4 to child 1; or 3 to child 1 and 1 to child 2; or 2 to child 1 and 1 each to children 2 and 3; etc. We have to do this carefully, making sure we don't exceed diameter k. You can do this with a local DP.
What we want for each node is to calculate maxSize(d), the maximum size of a tree rooted at that node that is at most d deep and has diameter <= k. Nodes with 0 and 1 children are easy to figure this for, as above (for example, for one child, v.maxSize(i) = c.maxSize(i - 1) + 1, and v.maxSize(0) = 1).
For nodes with 2 or more children, you compute dp[i][j], the maximum size of a k-diameter-bound tree using up to the i-th child and taking up to j depth. The recursion is dp[i][j] = max(child(i).maxSize(m - 1) + dp[i - 1][min(j, k - m)]) for m from 1 to j, with dp[i][0] = 1. This says: try giving the i-th child 1 to j depth, and give the rest of the available depth to the previous children. The "rest of the available depth" is the minimum of j (the depth we are working with) and k - m, because the depth given to child i plus the depth given to the rest cannot exceed k. Transfer the values of the last row of dp to the maxSize table for this node.
If you run the above using a depth-limited DFS, it will compute all the necessary maxSize entries in the correct order, and the answer for node v is v.maxSize(k). Then you do this once for every node in the tree, and the answer is the maximum value found.
Sorry for the muddled nature of the explanation. It was hard for me to think through, and difficult to describe. Working through a few simple examples should make it clearer. I haven't calculated the complexity, but n is small, and it went through all the test cases in .5 to 1s in Scala.
A few basic things I can notice (maybe very obvious to others):
1. There is only one possible route between two given vertices.
2. The farthest vertices will be ones with only a single incident edge (the leaves).
Now to solve the issue.
I would start with the set of vertices that have only one edge and call it EDGE[]; calculate the distances between the vertices in EDGE[]. This will give you (EDGE[i], EDGE[j], distance) triples.
For all the vertex pairs in EDGE[] that have a distance > K, do EDGE[i].occur++, EDGE[i].distance = MAX(EDGE[i].distance, distance) and
EDGE[j].occur++, EDGE[j].distance = MAX(EDGE[j].distance, distance).
Among the CANDIDATES in EDGE[] that have max(distance), mark the ones with max(occur).
Repeat until all edge-vertex pairs have distance less than or equal to K.

Time complexity of Prim's MST Algorithm

Can someone explain to me why Prim's algorithm using an adjacency matrix results in a time complexity of O(V^2)?
(Sorry in advance for the sloppy-looking ASCII math; I don't think we can use LaTeX to typeset answers.)
The traditional way to implement Prim's algorithm with O(V^2) complexity is to keep an array in addition to the adjacency matrix, let's call it distance, which holds the minimum distance from each vertex to the tree built so far.
This way, we only ever scan distance to find the next vertex to add, and since we do this V times and distance has V entries, this part is O(V^2).
This on its own wouldn't be enough, as the values in distance would quickly become out of date. To update this array, all we do at the end of each step is iterate through the new vertex's row of the adjacency matrix and update distance appropriately. This doesn't affect our time complexity, since it merely means that each step takes O(V + V) = O(2V) = O(V). Therefore our algorithm is O(V^2).
Without using distance we would have to iterate through all E edges every single time, and E can be as large as V^2, meaning our time complexity would be O(V^3).
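A sketch of this distance-array implementation in Python (names are mine; a 0 entry in the matrix means "no edge", and the function returns the total MST weight):

```python
INF = float('inf')

def prim_total_weight(adj):
    """MST total weight from a V x V adjacency matrix (0 = no edge)."""
    V = len(adj)
    in_tree = [False] * V
    distance = [INF] * V       # cheapest known edge from each vertex to the tree
    distance[0] = 0
    total = 0
    for _ in range(V):
        # find the next vertex: an O(V) scan instead of a priority queue
        u = min((v for v in range(V) if not in_tree[v]),
                key=lambda v: distance[v])
        in_tree[u] = True
        total += distance[u]
        # refresh distance using u's row of the matrix: another O(V)
        for v in range(V):
            if not in_tree[v] and adj[u][v] and adj[u][v] < distance[v]:
                distance[v] = adj[u][v]
    return total
```

Each of the V iterations does two O(V) scans, matching the O(V^2) analysis above.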
Proof:
To prove that without the distance array it is impossible to compute the MST in O(V^2) time, consider that on each iteration with a tree of size n, there are V-n vertices that could potentially be added.
To calculate which one to choose we must check each of these to find their minimum distance from the tree and then compare that to each other and find the minimum there.
In the worst case scenario, each of the nodes contains a connection to each node in the tree, resulting in n * (V-n) edges and a complexity of O(n(V-n)).
Since our total would be the sum of each of these steps as n goes from 1 to V, our final time complexity is:
(sum O(n(V-n)) as n = 1 to V) = O(1/6(V-1) V (V+1)) = O(V^3)
QED
Note: This answer just borrows jozefg's answer and tries to explain it more fully since I had to think a bit before I understood it.
Background
An Adjacency Matrix representation of a graph constructs a V x V matrix (where V is the number of vertices). The value of cell (a, b) is the weight of the edge linking vertices a and b, or zero if there is no edge.
Adjacency Matrix
A B C D E
--------------
A 0 1 0 3 2
B 1 0 0 0 2
C 0 0 0 4 3
D 3 0 4 0 1
E 2 2 3 1 0
Prim's Algorithm is an algorithm that takes a graph and a starting node, and finds a minimum spanning tree on the graph - that is, it finds a subset of the edges so that the result is a tree that contains all the nodes and the combined edge weights are minimized. It may be summarized as follows:
Place the starting node in the tree.
Repeat until all nodes are in the tree:
Find all edges that join nodes in the tree to nodes not in the tree.
Of those edges, choose one with the minimum weight.
Add that edge and the connected node to the tree.
Analysis
We can now start to analyse the algorithm like so:
At every iteration of the loop, we add one node to the tree. Since there are V nodes, it follows that there are O(V) iterations of this loop.
Within each iteration of the loop, we need to find and test edges in the tree. If there are E edges, the naive searching implementation uses O(E) to find the edge with minimum weight.
So in combination, we should expect the complexity to be O(VE), which may be O(V^3) in the worst case.
However, jozefg gave a good answer to show how to achieve O(V^2) complexity.
Distance to Tree
| A B C D E
|----------------
Iteration 0 | 0  1* #  3  2
          1 | 0  0  #  3  2*
          2 | 0  0  3  1* 0
          3 | 0  0  3* 0  0
          4 | 0  0  0  0  0
NB. # = infinity (not connected to tree)
* = minimum weight edge in this iteration
Here the distance vector represents the smallest weighted edge joining each node to the tree, and is used as follows:
Initialize with the edge weights to the starting node A with complexity O(V).
To find the next node to add, simply find the minimum element of distance (and remove it from the list). This is O(V).
After adding a new node, there are O(V) new edges connecting the tree to the remaining nodes; for each of these determine if the new edge has less weight than the existing distance. If so, update the distance vector. Again, O(V).
Using these three steps reduces the searching time from O(E) to O(V), and adds an extra O(V) step to update the distance vector at each iteration. Since each iteration is now O(V), the overall complexity is O(V^2).
First of all, it's obviously at least O(V^2), because that is the size of the adjacency matrix.
Looking at http://en.wikipedia.org/wiki/Prim%27s_algorithm, you need to execute the step "Repeat until Vnew = V" V times.
Inside that step, you need to work out the shortest link between any vertex in Vnew and any vertex outside Vnew. Maintain an array of size V holding, for each vertex, either infinity (if the vertex is already in Vnew) or the length of the shortest link between Vnew and that vertex (in the beginning this just comes from the lengths of the links between the starting vertex and every other vertex). To find the next vertex to add to Vnew, just search this array, at cost O(V). Once you have a new vertex, look at all the links from that vertex to every other vertex and see if any of them gives a shorter link from Vnew to that vertex. If they do, update the array. This also costs O(V).
So you have V steps (V vertices to add), each taking cost O(V), which gives you O(V^2).

Path finding algorithm on graph considering both nodes and edges

I have an undirected graph. For now, assume that the graph is complete. Each node has a certain value associated with it. All edges have a positive weight.
I want to find a path between any 2 given nodes such that the sum of the values associated with the path nodes is maximum while at the same time the path length is within a given threshold value.
The solution should be "global", meaning that the path obtained should be optimal among all possible paths. I tried a linear programming approach but was not able to formulate it correctly.
Any suggestions or a different method of solving would be of great help.
Thanks!
If you are looking for an algorithm on general graphs, your problem is NP-hard. Set the path length threshold to n-1 and give each vertex value 1; then a solution to your problem tells you whether the given graph has a Hamiltonian path, since the maximum-value path has value n exactly when a Hamiltonian path exists. I think you can use something like the Held-Karp relaxation to find a good solution.
This might not be perfect, but if the threshold value (T) is small enough, there's a simple algorithm that runs in O(n^3 T^2). It's a small modification of Floyd-Warshall.
d = int array with size n x n x (T + 1)
initialize all d[i][j][t] to -infty
for i in nodes:
    d[i][i][0] = value[i]
for e:(u, v) in edges with w(e) <= T:
    d[u][v][w(e)] = value[u] + value[v]
for t in 1 .. T:
    for k in nodes:
        for t' in 1 .. t-1:
            for i in nodes:
                for j in nodes:
                    d[i][j][t] = max(d[i][j][t],
                                     d[i][k][t'] + d[k][j][t-t'] - value[k])
The result is the pair (i, j) with the maximum d[i][j][t] for all t in 0..T
EDIT: this assumes that the paths are allowed to be not simple, they can contain cycles.
EDIT2: This also assumes that if a node appears more than once in a path, it will be counted more than once. This is apparently not what OP wanted!
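A runnable sketch of this DP in Python (names are mine; as the edits note, walks may revisit vertices and their values count once per visit):

```python
def best_pair(n, edges, value, T):
    """Max total node value over walks of length <= T (nodes counted per visit)."""
    NEG = float('-inf')
    d = [[[NEG] * (T + 1) for _ in range(n)] for _ in range(n)]
    for i in range(n):
        d[i][i][0] = value[i]
    for u, v, w in edges:                      # undirected base cases
        if w <= T:
            val = value[u] + value[v]
            d[u][v][w] = max(d[u][v][w], val)
            d[v][u][w] = max(d[v][u][w], val)
    for t in range(1, T + 1):
        for k in range(n):                     # split each walk at a middle vertex k
            for t1 in range(1, t):
                for i in range(n):
                    if d[i][k][t1] == NEG:
                        continue
                    for j in range(n):
                        cand = d[i][k][t1] + d[k][j][t - t1] - value[k]
                        if cand > d[i][j][t]:
                            d[i][j][t] = cand
    return max(d[i][j][t]
               for i in range(n) for j in range(n) for t in range(T + 1))
```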
Integer program (this may be a good idea or maybe not):
For each vertex v, let x_v be 1 if vertex v is visited and 0 otherwise. For each arc a, let y_a be the number of times arc a is used. Let s be the source and t be the destination. The objective is
maximize ∑_v value(v) x_v .
The constraints are
∑_a length(a) y_a ≤ threshold
∀v, ∑_{a has head v} y_a - ∑_{a has tail v} y_a = (-1 if v = s; 1 if v = t; 0 otherwise) (conserve flow)
∀v ≠ s, x_v ≤ ∑_{a has head v} y_a (must enter a vertex to visit it)
∀v, x_v ≤ 1 (visit each vertex at most once)
∀v ∉ {s, t}, ∀ cuts S that separate vertex v from {s, t}, x_v ≤ ∑_{a : tail(a) ∉ S ∧ head(a) ∈ S} y_a (benefit only from vertices not on isolated loops).
To solve, do branch and bound with the relaxation values. Unfortunately, the last group of constraints are exponential in number, so when you're solving the relaxed dual, you'll need to generate columns. Typically for connectivity problems, this means using a min-cut algorithm repeatedly to find a cut worth enforcing. Good luck!
If you just add the weight of a node to the weights of its outgoing edges, you can forget about the node weights (apart from the starting node's weight, which every path pays). Then you can use any of the standard algorithms for the shortest path problem.
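For the minimization reading of this trick, a sketch in Python (my own naming; here each node's weight is folded into its incoming edges instead of its outgoing ones, with the source's weight added separately, so both endpoints of a path are counted):

```python
import heapq

def dijkstra_node_weights(n, edges, node_cost, s):
    """Shortest s-to-v cost counting both edge weights and node values."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:                  # undirected; fold the head's node cost in
        adj[u].append((v, w + node_cost[v]))
        adj[v].append((u, w + node_cost[u]))
    dist = [float('inf')] * n
    dist[s] = node_cost[s]                 # the source's own cost, paid once
    pq = [(dist[s], s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue                       # stale queue entry
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist
```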
