What do the absolute value bars mean in graph theory? - notation

I just wanted to know what the absolute value bars mean, for example on the Wikipedia page for Dijkstra's algorithm, in O(|E| + |V| log |V|).

The vertical bars indicate the cardinality (or size) of a set. In the case of Dijkstra's algorithm, |E| is the number of edges and |V| is the number of vertices.
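For instance, with a small made-up adjacency list, both quantities are just counts:

# Hypothetical undirected graph stored as an adjacency list.
graph = {
    "A": ["B", "C"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["C"],
}
num_vertices = len(graph)                                   # |V| = 4
num_edges = sum(len(nbrs) for nbrs in graph.values()) // 2  # |E| = 4 (each edge is listed twice)
print(num_vertices, num_edges)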

Related

Heuristic value in A* algorithm

I am learning the A* algorithm and Dijkstra's algorithm, and found that the only difference is the heuristic value used by A*. But how can I get these heuristic values for my graph? I found an example graph for the A* algorithm (from A to J). Can you help me understand how these heuristic values are calculated?
The RED numbers denote heuristic values.
My current problem is creating a maze escape.
In order to get a heuristic that estimates (lower bounds) the minimum path cost between two nodes there are two possibilities (that I know of):
Knowledge about the underlying space the graph is part of
As an example, assume the nodes are points in a plane (with x and y coordinates) and the cost of each edge is the Euclidean distance between the corresponding nodes. In this case you can estimate (lower bound) the path cost from node U to node V by calculating the Euclidean distance between U.position and V.position.
Another example would be a road network that you know lies on the Earth's surface, where the edge costs represent travel times in minutes. To estimate the path cost from node U to node V you can calculate the great-circle distance between the two and divide it by the maximum possible travel speed.
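As a rough sketch of both ideas (the attributes .x/.y and .lat/.lon and the parameter max_speed_km_per_min are illustrative assumptions, not part of the question's graph):

import math

def euclidean_heuristic(u, v):
    # Lower bound on the path cost when edge costs are Euclidean distances
    # between points in the plane.
    return math.hypot(u.x - v.x, u.y - v.y)

def travel_time_heuristic(u, v, max_speed_km_per_min):
    # Lower bound on travel time (in minutes) when edge costs are travel times:
    # great-circle (haversine) distance divided by the fastest possible speed.
    R = 6371.0  # mean Earth radius in km
    lat1, lon1, lat2, lon2 = map(math.radians, (u.lat, u.lon, v.lat, v.lon))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    distance_km = 2 * R * math.asin(math.sqrt(a))
    return distance_km / max_speed_km_per_min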
Graph Embedding
Another possibility is to embed your graph in a space where you can estimate the path distance between two nodes efficiently. This approach does not make any assumptions on the underlying space but requires precomputation.
For example, you could define a landmark L in your graph. Then you precompute the distance from each node of the graph to your landmark and save this distance at the node. During the A* search you can use the precomputed distances as follows: the path distance between nodes U and V is lower bounded by |dist(U, L) - dist(V, L)|. You can improve this heuristic by using more than one landmark.
For your graph you could use nodes A and H as landmarks, which gives you the graph embedding shown in the image below. You would have to precompute the shortest paths between the nodes A and H and all other nodes beforehand in order to compute this embedding. When you want to estimate, for example, the distance between two nodes B and J, you compute the distance in each of the two dimensions and use the maximum of the two as the estimate. This corresponds to the L-infinity norm.
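A minimal sketch of that landmark bound, assuming a table dist_to_landmark has been precomputed (for example by one Dijkstra run per landmark); all names here are placeholders:

def landmark_heuristic(u, v, dist_to_landmark, landmarks):
    # dist_to_landmark[L][x] holds the precomputed shortest-path distance
    # between landmark L and node x. For each landmark, |d(u,L) - d(v,L)| is
    # a lower bound on the u-v distance; taking the maximum over landmarks
    # tightens the bound while keeping it admissible.
    return max(abs(dist_to_landmark[L][u] - dist_to_landmark[L][v])
               for L in landmarks)

# e.g. with the two landmarks suggested above:
# h = landmark_heuristic("B", "J", dist_to_landmark, landmarks=["A", "H"])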
The heuristic is an estimate of the additional distance you would have to traverse to get to your destination.
It is problem specific and appears in different forms for different problems. For your graph, a good heuristic could be the actual distance from the node to the destination, measured with an inch tape or a centimeter ruler. Funny, right? But that's exactly how my college professor did it: he took a tape measure to the blackboard and came up with a very good heuristic.
So h(A) could be 10 units, meaning the length physically measured from A to J.
Of course, for your algorithm to work the heuristic must be admissible; otherwise it could give you a wrong answer.
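If you are unsure whether a hand-made heuristic is admissible, one rough way to sanity-check it on a small test graph is to compare it against exact shortest-path distances; a sketch, where shortest_distance stands for any exact routine (e.g. Dijkstra) and h is your heuristic:

def is_admissible(h, nodes, goal, shortest_distance):
    # h(n) must never overestimate the true cost from n to the goal.
    return all(h(n) <= shortest_distance(n, goal) for n in nodes)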

Why is the number of edges ignored in the big-O notation

I am having a hard time understanding Dijkstra's big O notation exactly. I have a question regarding Dijkstra with an unsorted array.
From Wikipedia:
The simplest implementation of the Dijkstra's algorithm stores
vertices of set Q in an ordinary linked list or array, and extract
minimum from Q is simply a linear search through all vertices in Q. In
this case, the running time is O(|E| + |V|^2) = O(|V|^2).
I have implemented the algorithm myself in my application; I know how it works.
I do not understand why O(|E| + |V|^2) = O(|V|^2) or why the number of edges |E| is ignored?
As pseudocode, my Dijkstra looks something like:
for each vertex u
    for each neighbor v of u
        .. do stuff
    end for
end for
This is how I explain the O(|V|^2) to myself, but I do not understand how they get the |E| and then remove it.
You can safely assume that |E| < |V|^2, so you can drop the more slowly growing parts, which are dominated (standard big-O stuff...).
Explanation:
If you had an edge between every pair of vertices, that would still be only E = V*(V-1)/2 edges.
When we say some characteristic (time, space, etc.) of an algorithm (let's call this characteristic T(N)) is "O(f(N))" for some function f, we are saying this:
For all values of N larger than some minimum (we don't care what that minimum is, just that it exists), we can be sure that T(N) < k·f(N) for some positive constant k.
Well, it turns out that any function f(N) with a squared term grows so fast that it eventually becomes larger than any function with only linear terms (i.e. we can always find a minimum N beyond which it stays larger), no matter what constants might magnify those linear terms.
This means that the number of edges in a graph, which can be as large as (|V|^2 + |V|)/2 (note the squared term), grows so much faster than the number of vertices that, for purposes of big-O, the lower-order terms don't matter: |E| is itself at most on the order of |V|^2, so O(|E| + |V|^2) collapses to O(|V|^2).
For intuition, a graph with 1000 vertices can have about half a million edges. So the number of vertices is only 0.2% the number of edges. The bigger the graph, the more stark this disparity.
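To make the two terms visible, here is a rough sketch of the array-based variant the Wikipedia passage describes (not the asker's actual code): the linear-search extract-min runs |V| times over up to |V| entries, giving the |V|^2 term, while the relaxations touch each edge a constant number of times in total, giving the |E| term.

import math

def dijkstra_array(graph, source):
    # graph: dict mapping every vertex to a list of (neighbor, weight) pairs.
    dist = {v: math.inf for v in graph}
    dist[source] = 0
    unvisited = set(graph)
    while unvisited:                                  # |V| iterations
        # Linear-search extract-min: O(|V|) per iteration -> O(|V|^2) total.
        u = min(unvisited, key=lambda v: dist[v])
        unvisited.remove(u)
        # Relaxation: summed over the whole run, each edge is examined a
        # constant number of times -> the O(|E|) term.
        for v, w in graph[u]:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist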

Does Dijkstra's algorithm apply even if there is only one negative weight edge?

Will Dijkstra's Algorithm work if the digraph has only one negative weight edge and does not contain negative weight cycles?
No. Dijkstra's algorithm is greedy; it assumes that path weights never decrease as a path is extended.
Consider the following graph: S→A→E is optimal, but Dijkstra's will return S→B→E.
Not necessarily. In this earlier answer, I gave an example of a graph with no negative cycles and one negative edge where Dijkstra's algorithm doesn't produce the correct answer. Therefore, Dijkstra's algorithm doesn't always work in this case.
Hope this helps!
No. Dijkstra's is a greedy algorithm: once it has settled a vertex, it never looks back.
No. Consider the following simple counterexample, with just 3 nodes, S (start), A, and B.
w(S, A) = 1
w(S, B) = 2
w(B, A) = -2
The algorithm will fix the distance for A first (cost 1), but it is cheaper to go there via B (cost 0).
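Running a textbook Dijkstra (with the usual rule that a settled vertex is never revisited) on that counterexample shows the failure; a small self-contained sketch, with names chosen only for illustration:

import heapq

def dijkstra(graph, source):
    # Standard Dijkstra with a priority queue; once a vertex is popped
    # (settled) its distance is considered final and never revised.
    dist = {source: 0}
    settled = set()
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u in settled:
            continue
        settled.add(u)
        for v, w in graph.get(u, []):
            if v not in settled and d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

graph = {"S": [("A", 1), ("B", 2)], "B": [("A", -2)]}
print(dijkstra(graph, "S"))  # {'S': 0, 'A': 1, 'B': 2}
# A is settled at distance 1 and never revisited, even though the true
# shortest distance is 0 via S -> B -> A (2 + (-2)).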
Since Dijkstra's algorithm is greedy, it won't work with negative weights. You need some other algorithm, such as the Bellman-Ford algorithm, for this purpose.
But if you still want to use Dijkstra's algorithm, there is a known way: you reassign the costs so that they all become non-negative.
Here it is:
Suppose there is an edge from u to v with cost cost(u,v), and let d(x) denote the shortest-path distance to node x (computed once beforehand, e.g. with Bellman-Ford).
u(d(u))------>v(d(v))
Define:
new_cost(u,v) = cost(u,v) + d(u) - d(v)
This is guaranteed to be non-negative, since
d(v) <= d(u) + cost(u,v)
Now we can apply Dijkstra's algorithm as usual; the only difference is in the cost of a path in the new graph, which for a path between s' and t' will be
= original cost of the same path + d(s') - d(t')
You can not apply Dijkstra's algorithm directly to a graph with a negative edge as some of the other answers have correctly noted.
There is a way to reweight the graph edges, given that there are no negative cycles in the original graph. It's the same technique used in Johnson's algorithm: first you run one instance of the Bellman-Ford algorithm to get a weight h(v) for each vertex v, then you modify each edge weight w(u,v) to w(u,v) + h(u) − h(v), which is guaranteed to be non-negative. You end up with a new graph with only non-negative edges on which you can run Dijkstra's algorithm.
Section XV. from the Coursera Algorithms class explains it much better than me.
The only issue with applying that technique to the single-source shortest path problem is that the reweighting with Bellman-Ford takes O(mn) time, which is slower than Dijkstra's O(m log n). So you are better off just running Bellman-Ford on your original graph.
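A rough sketch of that reweighting step, assuming the graph has no negative cycles and every vertex appears as a key of the adjacency dict; the Bellman-Ford pass (from an implicit virtual source with 0-weight edges to every vertex) plays the role of the h values, and any ordinary Dijkstra can then be run on the reweighted graph:

def johnson_reweight(graph):
    # graph: dict mapping every vertex to a list of (neighbor, weight) pairs;
    # weights may be negative but there must be no negative cycles.
    # Bellman-Ford from a virtual source: starting with h[v] = 0 for all v is
    # equivalent to having relaxed the virtual source's 0-weight edges once.
    h = {v: 0 for v in graph}
    for _ in range(len(graph) - 1):
        for u in graph:
            for v, w in graph[u]:
                if h[u] + w < h[v]:
                    h[v] = h[u] + w
    # Reweight: w'(u, v) = w(u, v) + h(u) - h(v) >= 0.
    reweighted = {u: [(v, w + h[u] - h[v]) for v, w in graph[u]] for u in graph}
    return reweighted, h

# Usage sketch: run any non-negative-weight Dijkstra on `reweighted`; a
# distance d' found between s and t converts back as d = d' - h[s] + h[t].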
Dijkstra's algorithm will work with a single negative edge as long as you start from the node which has that negative edge as an outgoing edge.
By starting with the smallest-valued edge of the graph, you can no longer decrease the total cost by considering other edge weights (which is how Dijkstra's algorithm works).
No, Dijkstra's algorithm is well known not to work with negative weights.
If you need negative weights, use the Bellman-Ford algorithm.
Why Dijkstra's can fail, simply put:
Dijkstra's relies on vertices being settled in order of non-decreasing distance from the source, i.e. distance(s, vi) <= distance(s, vk) whenever vi is settled before vk.
For example, take this graph:
A ---> B with cost 2 and B ---> C with cost -4. The condition is violated, because the distance from A to C (2 + (-4) = -2) is smaller than the distance from A to B (2).

Complexity for edge addition (planar graph)?

I created a program that adds edges between vertices. The goal is to add as many edges as possible without any of them crossing (i.e. a planar graph). What is the complexity?
Attempt: Since I used depth-first search, I think it is O(n + m), where n is the number of nodes and m the number of edges.
Also, if we plot the number of edges as a function of n what is it going to look like?
Your first question is impossible to answer, since you have not described the algorithm.
For your second question, any maximal planar graph with v ≥ 3 vertices has exactly 3v - 6 edges.
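A short derivation sketch, assuming the graph is simple and connected: Euler's formula gives v - e + f = 2, where f is the number of faces. In a maximal planar graph every face is bounded by exactly three edges and every edge borders exactly two faces, so 2e = 3f. Substituting f = 2e/3 into Euler's formula yields v - e + 2e/3 = 2, i.e. e = 3v - 6.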

Longest Simple Path

So, I understand the problem of finding the longest simple path in a graph is NP-hard, since you could then easily solve the Hamiltonian circuit problem by setting edge weights to 1 and seeing if the length of the longest simple path equals the number of edges.
My question is: What kind of path would you get if you took a graph, found the maximum edge weight, m, replaced each edge weight w with m - w, and ran a standard shortest path algorithm on that? It's clearly not the longest simple path, since if it were, then NP = P, and I think the proof for something like that would be a bit more complicated =P.
If you could solve shortest-path problems with negative weights, you could find a longest path; the two are equivalent. This could be done by using a weight of -w in place of each weight w.
The standard algorithm for negative weights is the Bellman-Ford algorithm. However, the algorithm will not work if the graph contains a cycle whose edge weights sum to a negative value, and in the negated graph every cycle that had positive total weight becomes such a negative cycle, so the algorithm won't work. Unless of course you have no cycles, in which case you have a tree (or a forest) and the problem is solvable via dynamic programming.
If we instead replace each weight w by m - w, all weights are guaranteed to be non-negative, and the shortest path can be found via standard algorithms. If the shortest path P in this graph has k edges, then its length is k·m - w(P), where w(P) is the length of the path under the original weights. This path is not necessarily the longest one; however, of all paths with k edges, P is the longest one.
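A tiny made-up illustration of that bias toward paths with fewer edges (the three-edge graph below is purely hypothetical):

# Longest simple path in the original graph is S -> A -> T with weight 12,
# but the single direct edge S -> T carries the maximum weight m = 10.
edges = {("S", "T"): 10, ("S", "A"): 6, ("A", "T"): 6}
m = max(edges.values())
transformed = {e: m - w for e, w in edges.items()}        # S->T: 0, S->A: 4, A->T: 4
print(transformed[("S", "T")])                            # 0
print(transformed[("S", "A")] + transformed[("A", "T")])  # 8
# The shortest path in the transformed graph is the direct edge S -> T,
# because every extra edge on a path adds another +m; the longest simple
# path of the original graph is not recovered.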
(image: http://dl.getdropbox.com/u/317805/path2.jpg)
The graph above is transformed into the one below using your scheme.
The longest path is the red line in the upper graph. Depending on how ties are broken and which algorithm you use, the shortest path in the transformed graph could be either the blue line or the red line. So transforming the edge weights with the constant you mention yields no significant result, which is why you cannot find the longest path with shortest-path algorithms, no matter how clever you are. A simpler transformation would be to negate all the edge weights and run the algorithm. I don't know if I have answered your question, but as far as path properties go, the transformed graph does not carry any useful information about distances.
However, this particular transformation is useful in other areas. For example, you could force the algorithm to select a particular edge in bipartite matching, if you have more than one constraint, by adding a huge constant to the weights.
Edit: I have been told to add this: the graphs above are not just about physical distance; they need not satisfy the triangle inequality. Thanks.
