I think with the A* algorithm the path should be SAEFG, but the answer is SBEFG. My professor is unavailable, so I can't ask him. Can someone explain why it's SBEFG?
The heuristic function used in the example is not consistent. For a heuristic to be consistent, the following inequality must hold:
For all nodes x, y: h(x) + w(x, y) >= h(y), where h(v) is the heuristic value of node v and w(x, y) is the real distance between nodes x and y.
In this example, h(B) = 13, h(E) = 4 and w(B, E) = 6. As you can see, h(E) + w(B, E) = 10 < h(B), so the heuristic is not consistent.
What does that mean? A* search with such a heuristic might not be optimal on graphs: it might not find the shortest path unless it is allowed to revisit some nodes.
In this example the author probably assumes revisiting the node E, so A* will first go to C, then A, E, F, D, B, and then it should revisit E from B, because SBE is shorter than SAE; then it revisits F and goes to G. The final path is SBEFG.
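To make the revisiting concrete, here is a minimal A* sketch in Python that reopens nodes instead of keeping a closed set; this is what an inconsistent (but still admissible) heuristic requires for optimality. The graph and h below are placeholders, since the question doesn't reproduce the full example:

import heapq

def a_star(graph, h, start, goal):
    # A* that re-expands ("reopens") a node whenever a shorter path to it
    # is found later; required for optimality when the heuristic is
    # admissible but not consistent.
    g = {start: 0}                    # best known cost from start
    parent = {start: None}
    open_heap = [(h[start], start)]   # entries are (g + h, node)
    while open_heap:
        f, u = heapq.heappop(open_heap)
        if u == goal:                 # goal popped => g[goal] is optimal
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return list(reversed(path))
        if f > g[u] + h[u]:           # stale heap entry, skip it
            continue
        for v, w in graph[u].items():
            if v not in g or g[u] + w < g[v]:   # this also reopens nodes
                g[v] = g[u] + w
                parent[v] = u
                heapq.heappush(open_heap, (g[v] + h[v], v))
    return None

With a consistent heuristic the "reopen" branch never fires for an already expanded node, which is why the textbook version can keep a closed set safely.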
Remarks: c' is the log of c with base 17, i.e., c' = log_17(c).
MST means minimum spanning tree.
It's easy to prove that the conclusion is correct when we use a linear function to transform the cost of every edge.
But the log function is not a linear function, and I could not understand why this conclusion is still correct.
Supplementary notes:
I did not consider specific algorithms, such as the greedy algorithm. I simply considered the relationship between the sums of the weights of the two trees after the transformation.
Numerically, if (a + b) > (c + d), it does not follow that (log a + log b) > (log c + log d).
Suppose one spanning tree of G has two edges with costs a and b, another spanning tree of G has edges with costs c and d, a + b < c + d, and the first tree is an MST. In the transformed graph G', the sum of the weights of the second tree's edges might then be smaller.
Because of this, I wanted to construct a counterexample based on "(a + b) > (c + d) does not imply (log a + log b) > (log c + log d)", but I failed.
One way to characterize when a spanning tree T is a minimum spanning tree is that, for every edge e not in T, the cycle formed by e and edges of T (the fundamental cycle of e with respect to T) has no edge more expensive than e. Using this characterization, I hope you see how to prove that transforming the costs with any increasing function preserves minimum spanning trees.
There's a one line proof that this condition is necessary. If the fundamental cycle contained a more expensive edge, we could replace it with e and get a spanning tree that costs less than T.
It's less obvious that this condition is sufficient, since at first glance it looks like we're trying to prove global optimality from a local optimality condition. To prove this statement, let T be a spanning tree that satisfies the condition, let T' be a minimum spanning tree, and let G' be the graph whose edges are the union of the edges of T and T'. Run Kruskal's algorithm on G', breaking ties by favoring edges in T over edges not in T. Let T'' be the resulting minimum spanning tree in G'. Since T' is a spanning tree in G', the cost of T'' is not greater than T', hence T'' is a minimum spanning tree in G as well as G'.
Suppose to the contrary that T'' ≠ T. Then there exists an edge in T but not in T''. Let e be the first such edge considered by Kruskal's algorithm. At the time that e was considered, it formed a cycle C in the edges that had been selected from T''. Since T is acyclic, C \ T is nonempty. By the tie breaking criterion, we know that every edge in C \ T costs less than e. Observing that some edge e' in C \ T must have one endpoint in each of the two connected components of T \ {e}, we infer that the fundamental cycle of e' with respect to T contains e, which violates the local optimality condition. In conclusion, T = T'', hence is a minimum spanning tree in G.
If you want a deeper dive, this logic gets abstracted out in the theory of matroids.
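If you'd rather see this concretely than via matroids, here is a small Python sketch (the graph is made up) showing that Kruskal's algorithm builds the same tree before and after a strictly increasing transform such as log base 17, because it only ever compares edge costs, never sums them:

import math

def kruskal(n, edges):
    # edges: list of (cost, u, v); returns the MST as a frozenset of (u, v).
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    mst = set()
    for cost, u, v in sorted(edges):   # only the ORDER of the costs matters
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.add((u, v))
    return frozenset(mst)

# A small made-up graph with distinct positive edge costs.
edges = [(4, 0, 1), (8, 0, 2), (2, 1, 2), (7, 1, 3), (9, 2, 3), (10, 2, 4), (5, 3, 4)]
transformed = [(math.log(c, 17), u, v) for c, u, v in edges]
assert kruskal(5, edges) == kruskal(5, transformed)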
Well, it's pretty easy to understand... let's see if I can break it down for you:
c' = log_17(c) // here 17 is the base
log may not be a linear function... but we can say that:
log_b(x) > log_b(y) if x > y and b > 1 (and of course x > 0 and y > 0)
I hope you get the inequality I've written... In words it means: for a base b with b > 1, log_b(x) is greater than log_b(y) whenever x > y.
So, if we apply this rule to your MST costs, we see that the edges that were selected for the MST of G would still be the cheapest choices for constructing the MST of G' when c' = log_17(c) // here 17 is the base.
UPDATE: As I can see you're having trouble understanding the proof, I'm elaborating a bit:
I guess you know that MST construction is greedy. We're going to use Kruskal's algorithm to prove why it is correct. (In case you don't know how Kruskal's algorithm works, you can read about it anywhere; you'll find plenty of resources.) Now, let me write some steps of Kruskal's edge selection for the MST of G:
// the following edges are sorted by cost, i.e. c_0 <= c_1 <= c_2 <= ...
c_0: A, F // the edge with cost c_0 connects A and F; we have to take it into the MST
c_1: A, B // also taken to construct the MST
c_2: B, R // also taken to construct the MST
c_3: A, R // not taken, because A and R are already connected through A -> B -> R
c_4: F, X // also taken to construct the MST
...
and so on...
Now, when constructing the MST of G', we have to select edges whose costs have the form c' = log_17(c) // where 17 is the base
Now, if we convert the costs using log base 17, then c_0 becomes c_0', c_1 becomes c_1', and so on...
But we know that:
log_b(x) > log_b(y) if x > y and b > 1 (and of course x > 0 and y > 0)
So we may say that
log_17(c_0) <= log_17(c_1), because c_0 <= c_1
and in general,
log_17(c_i) <= log_17(c_j), where i <= j
And now we may say:
c_0' <= c_1' <= c_2' <= c_3' <= ...
So, the edge selection process to construct the MST of G' would be:
// the following edges are sorted by cost, i.e. c_0' <= c_1' <= c_2' <= ...
c_0': A, F // the edge with cost c_0' connects A and F; we have to take it into the MST
c_1': A, B // also taken to construct the MST
c_2': B, R // also taken to construct the MST
c_3': A, R // not taken, because A and R are already connected through A -> B -> R
c_4': F, X // also taken to construct the MST
...
and so on...
This is exactly the same selection as for the MST of G...
That ultimately proves the theorem.
I hope you get it... if not, ask me in the comments what is unclear to you...
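A quick check of the key fact above, with hypothetical costs: taking log base 17 never changes the sorted order of the edge costs, and the sorted order is all that Kruskal's algorithm looks at:

import math

costs = [3.0, 12.5, 7.0, 29.0, 7.5]   # hypothetical edge costs
log17 = [math.log(c, 17) for c in costs]
order_before = sorted(range(len(costs)), key=costs.__getitem__)
order_after = sorted(range(len(costs)), key=log17.__getitem__)
assert order_before == order_after    # same order => same Kruskal run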
I have vertices V = {s, u, v, x} and edges E = {(s,u), (s,x), (s,v), (u,v), (v,x), (x,u)}, with the following weights:
W(s, u) = 1
W(v, x) = W(x, u) = W(s, v) = 2
W(u, v) = -3
W(s, x) = -1
Now I am executing Initialize(G, w, s), making s the starting point and initializing s.d = 0.
I need the shortest-path distances of u, v, x. Since they are all connected to s, I can just use the weights W(s, u), W(s, v), W(s, x). But x.d would be -1. Is that even valid? Could I now use this distance to correctly execute Relax(s, x, w) and get a correct output?
Thanks in advance!
As the discussion under "Dijkstra's algorithm with negative weights" says, as long as there are no negative cycles, Bellman-Ford will converge. If there is a negative cycle, it will detect that fact.
If there are negative cycles, then there is no solution except to pair the cost of getting from A to B with the set of negative edges that were visited along the way, so that you can trace them out and not revisit them again. This is theoretically correct but expensive in both memory and running time.
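For the concrete graph in the question, here is a minimal Bellman-Ford sketch (assuming the listed edges are directed in the order written, since the post doesn't say). It shows that x.d = -1 is a perfectly valid distance as long as no negative cycle is reachable:

INF = float('inf')

edges = [('s', 'u', 1), ('s', 'x', -1), ('s', 'v', 2),
         ('u', 'v', -3), ('v', 'x', 2), ('x', 'u', 2)]
vertices = ['s', 'u', 'v', 'x']

d = {v: INF for v in vertices}
d['s'] = 0                               # Initialize(G, w, s)

for _ in range(len(vertices) - 1):       # |V| - 1 passes
    for u, v, w in edges:                # Relax(u, v, w) for every edge
        if d[u] + w < d[v]:
            d[v] = d[u] + w

has_negative_cycle = any(d[u] + w < d[v] for u, v, w in edges)
# Final distances: s = 0, u = 1, v = -2, x = -1; no negative cycle.
print(d, has_negative_cycle)

Note that v ends up at -2, reached via s -> u -> v, which is shorter than the direct edge W(s, v) = 2: with negative weights you cannot just read off the one-edge distances.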
We consider only graphs that are undirected. The diameter of a graph is the maximum, over all choices of vertices s and t, of the shortest-path distance between s and t. (Recall the shortest-path distance between s and t is the fewest number of edges in an s-t path.) Next, for a vertex s, let l(s) denote the maximum, over all vertices t, of the shortest-path distance between s and t. The radius of a graph is the minimum of l(s) over all choices of the vertex s.
For a graph with radius r and diameter d, which of the following always holds? Choose the best answer.
1) r >= d/2
2) r <= d
We know that (1) and (2) both always hold; that is written in any reference book.
My problem is that this question appeared on an entrance exam where only one of (1) or (2) could be chosen. The question said to choose the best answer, and the answer sheet published after the exam said (1) is the best choice. Can someone explain to me why (1) is better than (2)?
They are both indeed true.
Don't let an exam with ambiguous questions weaken your concepts.
Well, as for the proof:
First of all, the 2nd inequality is quite trivial (it follows directly from the definitions).
Now the 1st one:
d <= 2*r
Let z be a central vertex; then:
e(z) = r [e(z) denotes the eccentricity of z, i.e., the maximum distance from z to any other vertex]
Now let x and y be vertices realizing the diameter:
diameter = d(x, y) [d(x, y) denotes the distance between vertices x and y]
d(x, y) <= d(x, z) + d(z, y) [triangle inequality]
d(x, y) <= d(z, x) + d(z, y) [the graph is undirected, so d(x, z) = d(z, x)]
d(x, y) <= e(z) + e(z) [this is an upper bound, since e(z) >= d(z, u) for all u]
diameter <= 2*r
They both hold.
2) should be clear.
1) holds using the triangle inequality. We can use this property because distances on graphs are a metric (http://en.wikipedia.org/wiki/Metric_%28mathematics%29). Let d(x, z) = diameter(G) and let y be a center of G, i.e., a vertex whose maximum distance to any other vertex equals radius(G). Then d(x, y) <= radius(G) and d(y, z) <= radius(G), so diameter(G) = d(x, z) <= d(x, y) + d(y, z) <= 2*radius(G).
The OP defined the shortest-path distance between s and t as "the fewest number of edges in an s-t path". This makes things simpler.
We may write the definitions in terms of some pseudocode:
def dist(s, t):
    return min([len(path) - 1 for each path that starts with s and ends with t])
r = min([max([dist(s, t) for t in V]) for s in V])
d = max([max([dist(s, t) for t in V]) for s in V])
where V is the set of all vertices.
Now (2) is obviously true. The definition itself tells this: max always >= min.
(1) is slightly less obvious. It requires at least a few steps to prove.
Suppose d = dist(A, B) and r = dist(C, D), where C is a vertex realizing the radius and D is a vertex farthest from C. We have
dist(C, A) + dist(C, B) >= dist(A, B),
since otherwise the walk A-C-B would be shorter than dist(A, B), contradicting the fact that dist(A, B) is the shortest-path distance.
From the definition of r, we know that
dist(C, D) >= dist(C, A)
dist(C, D) >= dist(C, B)
Hence 2 * dist(C, D) >= dist(A, B), i.e., 2 * r >= d.
So which one is better? This depends on how you define "better". If we consider something non-trivially correct (or not so obvious) to be better than something trivially correct, then we may agree that (1) is better than (2).
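For intuition, here is a short sketch on a made-up connected undirected graph that computes l(s) for every s by BFS and checks both inequalities at once:

from collections import deque

def eccentricities(adj):
    # BFS from every vertex; ecc[s] = l(s) = max over t of dist(s, t).
    ecc = {}
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        ecc[s] = max(dist.values())   # assumes the graph is connected
    return ecc

adj = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3]}
ecc = eccentricities(adj)
r, d = min(ecc.values()), max(ecc.values())
assert r <= d <= 2 * r                # here r = 2, d = 3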
In the DIJKSTRA pseudocode in Chapter 24, page 658 of CLRS (third edition), in the inner loop, while relaxing the edges out of the newly added vertex, why is relaxation allowed on edges to vertices that were already dequeued from the queue and added to the shortest-path tree?
while (Q not empty) {
    u = extractMin from Q;
    add u to the shortest-path tree;
    for each vertex v adjacent to u
        relax(u, v, w)
}
Why doesn't the inner loop check whether the vertex is already part of the shortest-path tree, like this:
while (Q not empty) {
    u = extractMin from Q;
    add u to the shortest-path tree;
    for each vertex v adjacent to u
        if v is in Q
            then relax(u, v, w)
}
Which is the correct approach?
The first thing relax does is to check
if v.d > u.d + w(u,v)
If v is already on the shortest path tree, the check will always fail and relax will not proceed. An if v is in Q check would be redundant.
However, if if v is in Q is a significantly faster operation than if v.d > u.d + w(u,v) in a concrete implementation of the algorithm, including it may be a useful optimization.
Both approaches are functionally correct. However, your version is less efficient than the CLRS version.
You don't want to do if v is in Q because that's an O(log n) operation in a typical priority-queue implementation, whereas if v.d > u.d + w(u, v) is O(1). At the beginning of the algorithm, Q contains all the vertices in the graph. So for, say, a very large sparsely-connected graph, your version would end up being much worse than CLRS.
Your question, however, is not entirely without merit. The explanation for Dijkstra's algorithm in CLRS is a bit confusing, which is what actually brought me to this discussion thread. Looking at the pseudo-code on page 658:
DIJKSTRA(G, w, s)
1  INITIALIZE-SINGLE-SOURCE(G, s)
2  S = Ø
3  Q = G.V
4  while Q not empty
5      u = EXTRACT-MIN(Q)
6      add u to S
7      for each vertex v in G.Adj[u]
8          RELAX(u, v, w)
one wonders what the point of maintaining S is at all. If we do away with it entirely by removing lines 2 and 6, the algorithm still works, and after it completes you can print the shortest path by following the predecessor pointers (already stored in each vertex) backwards through the graph (using PRINT-PATH(G, s, v) on page 601, as described on page 647). S seems to be used more as an explanatory tool here, to illustrate the fact that Dijkstra's is a greedy algorithm, but in an actual graph implementation it would not be needed.
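To back this up, here is a minimal Python sketch of DIJKSTRA with S dropped entirely, using a binary heap with lazy deletion in place of CLRS's DECREASE-KEY (the names are mine, not the book's):

import heapq

def dijkstra(adj, s):
    # adj maps each vertex u to a list of (v, w) pairs with w >= 0.
    d = {v: float('inf') for v in adj}
    pred = {v: None for v in adj}
    d[s] = 0
    pq = [(0, s)]
    while pq:
        du, u = heapq.heappop(pq)
        if du > d[u]:                 # stale entry: u was already finalized
            continue
        for v, w in adj[u]:
            if d[v] > d[u] + w:       # RELAX(u, v, w)
                d[v] = d[u] + w
                pred[v] = u
                heapq.heappush(pq, (d[v], v))
    return d, pred

The du > d[u] test skips stale heap entries; it plays the role that "v is no longer in Q" would play, and the shortest-path tree can be recovered from pred afterwards.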
I have an undirected graph. For now, assume that the graph is complete. Each node has a certain value associated with it. All edges have a positive weight.
I want to find a path between any 2 given nodes such that the sum of the values associated with the path nodes is maximum while at the same time the path length is within a given threshold value.
The solution should be "global", meaning that the path obtained should be optimal among all possible paths. I tried a linear programming approach but am not able to formulate it correctly.
Any suggestions or a different method of solving would be of great help.
Thanks!
If you are looking for an algorithm on general graphs, your problem is NP-complete. Assume the path-length threshold is n-1 and each vertex has value 1. If you could solve your problem, you could decide whether the given graph has a Hamiltonian path: the maximum-value path has value n exactly when a Hamiltonian path exists. I think you can use something like the Held-Karp relaxation to find a good solution.
This might not be perfect, but if the threshold value (T) is small enough, there's a simple algorithm that runs in O(n^3 T^2). It's a small modification of Floyd-Warshall.
d = int array of size n x n x (T + 1)
initialize all d[i][j][k] to -infinity
for i in nodes:
    d[i][i][0] = value[i]
for e = (u, v) in edges:
    d[u][v][w(e)] = value[u] + value[v]
for t in 1..T:
    for k in nodes:
        for t' in 1..t-1:
            for i in nodes:
                for j in nodes:
                    d[i][j][t] = max(d[i][j][t],
                                     d[i][k][t'] + d[k][j][t-t'] - value[k])
The result is the pair (i, j) with the maximum d[i][j][t] over all t in 0..T.
EDIT: this assumes that the paths are allowed to be not simple, they can contain cycles.
EDIT2: This also assumes that if a node appears more than once in a path, it will be counted more than once. This is apparently not what OP wanted!
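For reference, here is a direct, runnable transcription of the pseudocode above, assuming positive integer edge weights on an undirected graph; the node values and edges are made up for illustration:

NEG = float('-inf')

def best_pair(n, values, edges, T):
    # d[i][j][t] = max total node value over walks from i to j whose total
    # edge weight is exactly t (nodes counted once per visit, per EDIT2).
    d = [[[NEG] * (T + 1) for _ in range(n)] for _ in range(n)]
    for i in range(n):
        d[i][i][0] = values[i]
    for u, v, w in edges:                    # undirected: seed both directions
        if w <= T:
            d[u][v][w] = max(d[u][v][w], values[u] + values[v])
            d[v][u][w] = max(d[v][u][w], values[u] + values[v])
    for t in range(1, T + 1):
        for k in range(n):
            for tp in range(1, t):
                for i in range(n):
                    for j in range(n):
                        if d[i][k][tp] > NEG and d[k][j][t - tp] > NEG:
                            cand = d[i][k][tp] + d[k][j][t - tp] - values[k]
                            if cand > d[i][j][t]:
                                d[i][j][t] = cand
    # Best (value, i, j) over all budgets t in 0..T.
    return max((d[i][j][t], i, j)
               for i in range(n) for j in range(n) for t in range(T + 1))

values = [5, 1, 4, 2]                        # made-up node values
edges = [(0, 1, 1), (1, 2, 1), (2, 3, 2), (0, 3, 3)]
print(best_pair(4, values, edges, T=3))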
Integer program (this may be a good idea or maybe not):
For each vertex v, let x_v be 1 if vertex v is visited and 0 otherwise. For each arc a, let y_a be the number of times arc a is used. Let s be the source and t be the destination. The objective is
maximize ∑_v value(v) x_v.
The constraints are
∑_a weight(a) y_a ≤ threshold (the path length stays within the threshold)
∀v: ∑_{a : head(a) = v} y_a − ∑_{a : tail(a) = v} y_a = −1 if v = s; 1 if v = t; 0 otherwise (conserve flow)
∀v ≠ s: x_v ≤ ∑_{a : head(a) = v} y_a (must enter a vertex to visit it)
∀v: x_v ≤ 1 (visit each vertex at most once)
∀v ∉ {s, t}, ∀ cuts S that separate v from {s, t}: x_v ≤ ∑_{a : tail(a) ∉ S ∧ head(a) ∈ S} y_a (benefit only from vertices not on isolated loops).
To solve, do branch and bound with the relaxation values. Unfortunately, the last group of constraints is exponential in number, so when you're solving the relaxed dual, you'll need to generate columns. Typically for connectivity problems, this means using a min-cut algorithm repeatedly to find a cut worth enforcing. Good luck!
If you just add the weight of a node to the weights of its outgoing edges, you can forget about the node weights. Then you can use any of the standard algorithms for the shortest-path problem.