Verify Dijkstra's algorithm in O(V + E) [closed]

I am working on solving this problem:
Professor Gaedel has written a program that he claims implements Dijkstra’s algorithm.
The program produces v.d and v.π for each vertex v in V. Give an
O(V + E)-time algorithm to check the output of the professor's program. It should
determine whether the d and π attributes match those of some shortest-paths tree.
You may assume that all edge weights are nonnegative.
v.d is the shortest distance from the starting node to v.
v.π is v's predecessor on the shortest path from the starting node to v.
My idea is:
For every vertex i, compare i.d with (i.π).d. If i's predecessor has a larger d value, then we cannot have a shortest-paths tree.
I believe this can catch some outputs that are not a shortest-paths tree, but I don't think it can confirm that the output is one. I cannot think of a way to do this without more information.
Am I on the right track?

I think this would work
Do a DFS, but instead of following the regular graph edges, follow only the π value for each vertex. You're doing this to produce a topological ordering, so that the first vertex to finish will be the first vertex in the topological ordering. Note that the first vertex in the topo sort you produce will be the "source" vertex that was given to Gaedel's algorithm.
Now that you have a topo ordering, you can relax edges in the most efficient order, just like how you would do it on a DAG.
for each v in topoSortedVerts
    if v.d_verify != v.d_Gaedel
        // fail
    for each u in v.adjacencies
        relax(v, u)
    if v.d_verify != v.d_Gaedel
        // fail
I think you may also need to make sure all V vertices are considered, and that the source vertex matches. Also, Gaedel's predecessor subgraph induced by the π values could in principle be malformed in all sorts of ways, but I'm assuming here that it isn't.
It is O(V + E) because the outer loop runs V times and, by aggregate analysis, the inner loop runs a total of E times across all iterations.
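For what it's worth, here is roughly how such an O(V + E) check can look in code. This is only a sketch, and not exactly the relax-in-topological-order procedure above: it verifies the equivalent certificate directly, i.e. d[s] = 0, no edge can be relaxed any further, every π edge is tight, and following π from any vertex that has a predecessor leads back to the source. The names adj, d and pi (a dict of weighted adjacency lists plus the reported attributes) are assumptions about the input format.
import math

def verify_sssp(adj, s, d, pi):
    """Check in O(V + E) whether d/pi could be the output of a correct
    single-source shortest-path run from s. adj maps u -> list of (v, w)
    pairs with nonnegative weights; d and pi map every vertex to the reported
    distance and predecessor (None for the source and unreachable vertices)."""
    if d.get(s) != 0 or pi.get(s) is not None:
        return False
    weight = {}
    # 1. No edge can be relaxed any further: d[v] <= d[u] + w for every edge.
    for u in adj:
        for v, w in adj[u]:
            weight[(u, v)] = w
            if d[v] > d[u] + w:
                return False
    # 2. Every claimed tree edge is tight, and vertices without a predecessor
    #    (other than s) must be reported as unreachable.
    for v in d:
        if v == s:
            continue
        p = pi.get(v)
        if p is None:
            if d[v] != math.inf:
                return False
        elif (p, v) not in weight or d[v] != d[p] + weight[(p, v)]:
            return False
    # 3. Following pi from any vertex with a predecessor must lead back to s,
    #    so the pi pointers really form a tree rooted at s (no pi cycles).
    reaches_s = {s}
    for v in d:
        if pi.get(v) is None:
            continue
        chain, on_chain = [], set()
        while v not in reaches_s:
            if v in on_chain or pi.get(v) is None:   # pi cycle or dead end
                return False
            chain.append(v)
            on_chain.add(v)
            v = pi[v]
        reaches_s.update(chain)                      # memoise for O(V) total
    return True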

Related

prove connected graph with degree = 2 has hamiltonian cycle [closed]

Excuse me if my question is a repeat, but I couldn't find a complete answer proving that a connected graph in which every vertex has degree 2 is a Hamiltonian graph.
I have read this and this
Let the given graph be G. Starting from a vertex v in the graph, trace an arbitrary walk P (a path in which repeated vertices are allowed) by repeatedly picking a vertex adjacent to the last vertex added to P, without repeating any edges. Terminate if you cannot add any more vertices or if you reach a vertex that was already visited before. This process eventually terminates since there are finitely many vertices, and because every vertex has degree two, the termination is caused by a vertex repeating. Let this termination vertex be t. What we have found is a cycle containing t; let C be the subgraph consisting of just this cycle, and let V(C) be the set of its vertices.
Since every vertex has degree 2 in both G and C, every edge of G involving a vertex of V(C) is already in C. Now suppose there is a vertex u of G not in V(C). There can be no path from u to any vertex of V(C), because such a path would have to use an edge going from V(C) to a vertex outside it, which we just saw is impossible. But G is connected, so no such vertex u exists. Thus G = C, hence G is a single cycle and is trivially Hamiltonian.
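The argument is constructive, so it can be turned into a small check. Here is a minimal Python sketch (the adjacency-list dict adj is an assumption about the input format) that traces the walk exactly as described: starting from any vertex, keep stepping to the neighbour you did not just come from; in a connected graph where every vertex has degree 2 this walk closes up into a cycle that covers every vertex.
def is_single_cycle(adj):
    """adj maps each vertex to a list of its neighbours (undirected graph).
    Returns True iff every vertex has degree 2 and the walk starting from an
    arbitrary vertex closes up after visiting every vertex, i.e. the graph is
    one Hamiltonian cycle."""
    if not adj or any(len(neighbours) != 2 for neighbours in adj.values()):
        return False
    start = next(iter(adj))
    prev, cur = None, start
    visited = 0
    while True:
        visited += 1
        # Step to the neighbour we did not just come from.
        a, b = adj[cur]
        prev, cur = cur, (b if a == prev else a)
        if cur == start:
            break
    # The walk closed up; it is Hamiltonian iff it covered every vertex,
    # which is exactly the connectivity assumption in the proof above.
    return visited == len(adj)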

Finding all vertices on negative cycles [closed]

I know that the problem of checking whether a given edge of a weighted digraph belongs to a negative cycle is NP-complete (Finding the minimal subgraph that contains all negative cycles), and that Bellman-Ford allows checking a vertex for the same thing in O(|V|*|E|) time. But what if I want to find all vertices belonging to negative cycles? I wonder if it could be done faster than Floyd-Warshall's O(|V|^3).
I don't think Floyd-Warshall does the job of finding these vertices. Using a similar approach as taken in the post you're referring to, it can be shown that finding the set of all vertices that lie on a negative cycle is NP-complete as well.
The related post shows that one could use an algorithm that finds the set of all edges lying on a negative cycle to solve the Hamiltonian cycle problem, which means that the former problem is NP-complete.
If we can reduce the problem of finding all edges that lie on a negative cycle to the problem of finding the set of all vertices that lie on a negative cycle, we've shown NP-completeness of the latter problem.
For each edge (u, w) in your weighted digraph, introduce a new auxiliary vertex v and split (u, w) into two edges (u, v) and (v, w). The weight of (u, w) can be assigned to either (u, v) or (v, w).
Now apply the magic polynomial-time algorithm to find all the vertices that lie on a negative cycle, and take the subset consisting of the auxiliary vertices. Since each auxiliary vertex is associated with exactly one edge, we've solved the problem of finding the minimal subgraph that contains all negative cycles, and we could thus also solve the Hamiltonian cycle problem in polynomial time, which would imply P = NP. Assuming P != NP, finding all vertices that lie on a negative cycle is NP-complete.
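The edge-splitting step of the reduction is mechanical. Here is a small Python sketch of it; the (u, v, w) edge-list format and the helper name split_edges are assumptions, and the "magic" algorithm itself is of course left abstract.
def split_edges(edges):
    """Given a weighted digraph as a list of (u, v, w) triples, return a new
    edge list in which every original edge (u, v, w) is replaced by
    (u, aux, w) and (aux, v, 0), plus a map from each auxiliary vertex back
    to the original edge it represents."""
    new_edges, aux_to_edge = [], {}
    for i, (u, v, w) in enumerate(edges):
        aux = ("aux", i)                  # a fresh vertex per original edge
        new_edges.append((u, aux, w))     # first half carries the original weight
        new_edges.append((aux, v, 0))     # zero-weight second half
        aux_to_edge[aux] = (u, v)
    return new_edges, aux_to_edge

# If some algorithm returned the set S of vertices lying on negative cycles of
# the split graph, the original edges on negative cycles would simply be
# {aux_to_edge[x] for x in S if x in aux_to_edge}.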

Using DFS on a Graph - Determine if a graph is a clique with specific SCC [closed]

I have a simple question on DFS and I'm trying to understand how to use it and not how to solve the whole problem. I'm really looking for an explanation and not a solution to my homework.
I'll write down the question first.
"Suppose you have an undirected graph G=(V,E) and let three of its
vertices to be called v1, v2 and v3. Find an algorithm which
determines if these three vertices are part of a clique
(complete graph) (k>=3)"
Now I suppose to use DFS in order to solve it. As far as I understand DFS will let me know if v1, v2 and v3 are in the same strongly connected component. If I'm correct I should also determine if G is also a clique(complete graph).
I read in the internet and I found out that asserting if a graph is clique or not is NP and cannot be solved easily. Am I correct? Am I missing anything? Is there any propery I can use to determine immediately if a graph is comeplete ?
To clarify the confusion about the NP-completeness: checking whether a graph is a clique is not NP-complete; just count the edges and see whether there are n(n-1)/2. What is NP-complete is to find a maximum clique (meaning the subgraph that has the biggest number of vertices and is a clique) or a clique of k vertices in a graph of n vertices (if k is part of the input instead of a fixed number); the latter case is called the clique decision problem.
EDIT: I just realized you asked something regarding strongly connected components as well; that term only applies to directed graphs (i.e. the edges have a direction, which means for two vertices v and w, the edge v->w is not the same as the edge w->v). Cliques are commonly defined on undirected graphs, for which there are only connected components.
If I understood it properly, all you have to do is check whether these three vertices are pairwise connected, i.e., whether the edges v1-v2, v2-v3 and v3-v1 exist. If they do, v1, v2 and v3 form a clique with k = 3. If at least one of them does not exist, these three vertices cannot all be in a clique of size k >= 3.
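In code, the whole check is just three adjacency lookups. A minimal Python sketch, assuming an adjacency-set representation adj for the undirected graph:
def forms_triangle(adj, v1, v2, v3):
    """adj maps each vertex to the set of its neighbours (undirected graph).
    Returns True iff v1, v2 and v3 are pairwise adjacent, i.e. they form a
    clique of size 3 and hence lie in a clique with k >= 3."""
    return (v2 in adj[v1]) and (v3 in adj[v2]) and (v1 in adj[v3])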

Difference between Dijkstra and Bellman-Ford algorithm [closed]

I am writing a thesis about shortest path algorithms, and I don't understand one thing.
I have made a visualisation of Dijkstra's algorithm.
1) Is it correct? Or am I doing something wrong?
2) What would the Bellman-Ford algorithm look like? As far as I have looked for the difference, I found: "Bellman-Ford: the basic idea is very similar to Dijkstra's, but instead of selecting the shortest-distance neighbour edges, it selects all the neighbour edges." But Dijkstra also checks all vertices and all edges, doesn't it?
Dijkstra assumes that the cost of paths is monotonically increasing. That, plus the ordered search (using the priority queue), means that when you first reach a node, you have arrived via the shortest path.
This is not true with negative weights. If you use Dijkstra with negative weights, you may find that a later path is better than an earlier one (because a negative weight improved the path on a later step).
So in Bellman-Ford, when you arrive at a node you test whether the new path is shorter; in contrast, with Dijkstra you can cull nodes.
In some (most) cases Dijkstra will not explore all complete paths. For example, if G linked only back to C, then any path through G would be of higher cost than any path through C. Bellman-Ford would still consider all paths through G to F (Dijkstra would never look at those, because they are of higher cost than going through C); if it did not do this, it could not guarantee finding negative loops.
Here's an example: the above never calculates the path AGEF, because E has already been marked as visited by the time you arrive from G.
I am also thinking along the same lines.
Dijkstra's algorithm solves the single-source shortest-path problem when all edges have non-negative weights. It is a greedy algorithm, similar to Prim's algorithm. The algorithm starts at the source vertex s and grows a tree T that ultimately spans all vertices reachable from s. Vertices are added to T in order of distance, i.e., first s, then the vertex closest to s, then the next closest, and so on.
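To make the contrast concrete, here is a minimal Bellman-Ford sketch in Python (the (u, v, w) edge-list format is an assumption): instead of repeatedly picking the closest unvisited vertex with a priority queue, it simply relaxes every edge |V| - 1 times, and one extra pass over the edges detects negative cycles.
import math

def bellman_ford(vertices, edges, source):
    """vertices: iterable of vertex ids; edges: list of (u, v, w) triples.
    Returns (dist, pred), or raises ValueError on a negative cycle."""
    dist = {v: math.inf for v in vertices}
    pred = {v: None for v in vertices}
    dist[source] = 0
    # Relax every edge |V| - 1 times; no ordering or priority queue is needed.
    for _ in range(len(dist) - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                pred[v] = u
    # One more pass: any further improvement means a negative-weight cycle.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            raise ValueError("graph contains a negative-weight cycle")
    return dist, pred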

efficient algorithm for loop identification in a directed graph? [duplicate]

Is there an efficient algorithm for detecting cycles within a directed graph?
I have a directed graph representing a schedule of jobs that need to be executed, a job being a node and a dependency being an edge. I need to detect the error case of a cycle within this graph leading to cyclic dependencies.
Tarjan's strongly connected components algorithm has O(|E| + |V|) time complexity.
For other algorithms, see Strongly connected components on Wikipedia.
Given that this is a schedule of jobs, I suspect that at some point you are going to sort them into a proposed order of execution.
If that's the case, then a topological sort implementation may in any case detect cycles. UNIX tsort certainly does. I think it is likely that it is therefore more efficient to detect cycles at the same time as tsorting, rather than in a separate step.
So the question might become, "how do I most efficiently tsort", rather than "how do I most efficiently detect loops". To which the answer is probably "use a library", but failing that the following Wikipedia article:
http://en.wikipedia.org/wiki/Topological_sorting
has the pseudo-code for one algorithm, and a brief description of another from Tarjan. Both have O(|V| + |E|) time complexity.
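Since "use a library" is a reasonable first answer: in Python, for example, the standard library's graphlib module (3.9+) does exactly this; TopologicalSorter raises CycleError when the dependency graph is cyclic. The job names below are a made-up example.
from graphlib import TopologicalSorter, CycleError  # Python 3.9+

# Each job maps to the set of jobs it depends on (a made-up example).
deps = {
    "deploy": {"build", "test"},
    "test": {"build"},
    "build": {"checkout"},
    # "checkout": {"deploy"},  # uncommenting this creates a cyclic dependency
}

try:
    order = list(TopologicalSorter(deps).static_order())
    print("execution order:", order)
except CycleError as exc:
    print("cyclic dependency detected:", exc.args[1])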
According to Lemma 22.11 of Cormen et al., Introduction to Algorithms (CLRS):
A directed graph G is acyclic if and only if a depth-first search of G yields no back edges.
This has been mentioned in several answers; here I'll also provide a code example based on chapter 22 of CLRS. The example graph, CLRS Figure 22.4, is the one constructed in the code below.
CLRS' pseudocode for depth-first search consists of two procedures, DFS and DFS-VISIT (not reproduced here).
In the example in CLRS Figure 22.4, the graph consists of two DFS trees: one consisting of nodes u, v, x, and y, and the other of nodes w and z. Each tree contains one back edge: one from x to v and another from z to z (a self-loop).
The key realization is that a back edge is encountered when, in the DFS-VISIT function, while iterating over the neighbors v of u, a node is encountered that is still GRAY, i.e., discovered but not yet finished (still on the current DFS path).
The following Python code is an adaptation of CLRS' pseudocode with an if clause added which detects cycles:
import collections

class Graph(object):
    def __init__(self, edges):
        self.edges = edges
        self.adj = Graph._build_adjacency_list(edges)

    @staticmethod
    def _build_adjacency_list(edges):
        adj = collections.defaultdict(list)
        for edge in edges:
            adj[edge[0]].append(edge[1])
            adj[edge[1]]  # touch the defaultdict so the target vertex gets a key (side effect only)
        return adj

def dfs(G):
    discovered = set()
    finished = set()
    for u in G.adj:
        if u not in discovered and u not in finished:
            discovered, finished = dfs_visit(G, u, discovered, finished)

def dfs_visit(G, u, discovered, finished):
    discovered.add(u)
    for v in G.adj[u]:
        # Detect cycles
        if v in discovered:
            print(f"Cycle detected: found a back edge from {u} to {v}.")
            break
        # Recurse into DFS tree
        if v not in finished:
            dfs_visit(G, v, discovered, finished)
    discovered.remove(u)
    finished.add(u)
    return discovered, finished

if __name__ == "__main__":
    G = Graph([
        ('u', 'v'),
        ('u', 'x'),
        ('v', 'y'),
        ('w', 'y'),
        ('w', 'z'),
        ('x', 'v'),
        ('y', 'x'),
        ('z', 'z')])
    dfs(G)
Note that in this example, the time in CLRS' pseudocode is not captured because we're only interested in detecting cycles. There is also some boilerplate code for building the adjacency list representation of a graph from a list of edges.
When this script is executed, it prints the following output:
Cycle detected: found a back edge from x to v.
Cycle detected: found a back edge from z to z.
These are exactly the back edges in the example in CLRS Figure 22.4.
The simplest way to do it is to do a depth-first traversal (DFT) of the graph.
If the graph has n vertices, this is an O(n) time complexity algorithm. Since you will possibly have to do a DFT starting from each vertex, the total complexity becomes O(n^2).
You have to maintain a stack containing all vertices in the current depth first traversal, with its first element being the root node. If you come across an element which is already in the stack during the DFT, then you have a cycle.
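A minimal iterative Python sketch of this stack-based approach (the adjacency-dict format adj is an assumption); it keeps the explicit DFS stack plus the set of vertices currently on the path:
def has_cycle_iterative(adj):
    """adj maps each vertex to a list of its successors (directed graph).
    Keeps an explicit stack of (vertex, iterator-of-children) frames plus the
    set of vertices currently on the DFS path. Returns True iff a cycle exists."""
    visited, on_path = set(), set()
    for root in adj:
        if root in visited:
            continue
        visited.add(root)
        on_path.add(root)
        stack = [(root, iter(adj[root]))]
        while stack:
            node, children = stack[-1]
            child = next(children, None)
            if child is None:                 # all children done: leave the path
                stack.pop()
                on_path.discard(node)
            elif child in on_path:            # already on the current path: cycle
                return True
            elif child not in visited:
                visited.add(child)
                on_path.add(child)
                stack.append((child, iter(adj.get(child, []))))
    return False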
In my opinion, the most understandable algorithm for detecting a cycle in a directed graph is the graph-coloring algorithm.
Basically, the graph-coloring algorithm walks the graph in a DFS manner (depth-first search, which means that it explores a path completely before exploring another path). When it finds a back edge, it marks the graph as containing a loop.
For an in-depth explanation of the graph-coloring algorithm, please read this article: http://www.geeksforgeeks.org/detect-cycle-direct-graph-using-colors/
Also, I provide an implementation of graph coloring in JavaScript: https://github.com/dexcodeinc/graph_algorithm.js/blob/master/graph_algorithm.js
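For reference, a compact Python sketch of the three-colour idea (not the linked JavaScript implementation); it is equivalent to the discovered/finished sets used in the CLRS-style code earlier, just phrased with explicit colours:
WHITE, GRAY, BLACK = 0, 1, 2

def has_cycle(adj):
    """adj maps each vertex to a list of its successors (directed graph).
    Returns True iff a back edge (an edge into a GRAY vertex) is found."""
    color = {v: WHITE for v in adj}
    def visit(u):
        color[u] = GRAY                       # u is on the current DFS path
        for v in adj.get(u, []):
            if color.get(v, WHITE) == GRAY:   # back edge -> cycle
                return True
            if color.get(v, WHITE) == WHITE and visit(v):
                return True
        color[u] = BLACK                      # u and its descendants are done
        return False
    return any(color[v] == WHITE and visit(v) for v in adj)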
Start with a DFS: a cycle exists if and only if a back edge is discovered during the DFS. This is proved as a consequence of the white-path theorem.
If you can't add a "visited" property to the nodes, use a set (or map) and just add all visited nodes to the set unless they are already in the set. Use a unique key or the address of the objects as the "key".
This also gives you the information about the "root" node of the cyclic dependency which will come in handy when a user has to fix the problem.
Another solution is to try to find the next dependency to execute. For this, you must have some stack where you can remember where you are now and what you need to do next. Check whether a dependency is already on this stack before you execute it; if it is, you've found a cycle.
While this might seem to have a complexity of O(N*M), you must remember that the stack has a very limited depth (so N is small) and that M becomes smaller with each dependency that you can check off as "executed"; plus you can stop the search when you have found a leaf, so you never have to check every node and M will be small, too.
In MetaMake, I created the graph as a list of lists and then deleted every node as I executed it, which naturally cut down the search volume. I never actually had to run an independent check; it all happened automatically during normal execution.
If you need a "test only" mode, just add a "dry-run" flag which disables the execution of the actual jobs.
There is no algorithm that can list all the cycles of a directed graph in polynomial time, simply because there can be exponentially many of them. Suppose the directed graph has n nodes and every pair of nodes is connected in both directions, i.e., you have a complete digraph. Then every subset of two or more of these n nodes induces at least one cycle, and there are on the order of 2^n such subsets, so no polynomial-time enumeration exists.
So, supposing you have an efficient (non-stupid) algorithm that can tell you the number of directed cycles in a graph, you can first find the strongly connected components and then apply your algorithm to each component, since cycles only exist within components and never between them.
I had implemented this problem in SML (imperative style). Here is the outline. Find all the nodes that have either an indegree or an outdegree of 0; such nodes cannot be part of a cycle, so remove them along with all their incoming and outgoing edges.
Recursively apply this process to the resulting graph. If at the end you are not left with any nodes or edges, the graph does not have any cycles; otherwise it has at least one.
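A rough Python rendering of the same pruning idea (the original was in SML; the (u, v) edge-list format is an assumption):
def has_cycle_by_pruning(edges):
    """edges: list of (u, v) pairs of a directed graph. Repeatedly removes
    edges whose source has indegree 0 or whose target has outdegree 0, which
    is the same as removing those vertices with all their edges. A cycle
    exists iff some edges survive the pruning."""
    edges = set(edges)
    while True:
        sources = {u for u, _ in edges}
        targets = {v for _, v in edges}
        # Keep only edges whose source still has an incoming edge and whose
        # target still has an outgoing edge.
        pruned = {(u, v) for u, v in edges if u in targets and v in sources}
        if pruned == edges:
            break
        edges = pruned
    return bool(edges)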
https://mathoverflow.net/questions/16393/finding-a-cycle-of-fixed-length I like this solution the best, especially for cycles of length 4. :)
Also, phys wizard says you have to do O(V^2). I believe that we need only O(V)/O(V+E).
If the graph is connected then DFS will visit all nodes. If the graph has several connected subgraphs, then each time we run a DFS from a vertex of one subgraph we find all of its connected vertices and won't have to consider them in later runs of the DFS. Therefore the claim that we must run a separate DFS for each vertex is incorrect.
The way I do it is to do a topological sort, counting the number of vertices visited. If that number is less than the total number of vertices in the graph, you have a cycle.
As you said, you have a set of jobs and they need to be executed in a certain order. A topological sort gives you the required order for scheduling the jobs (or, for dependency problems in general, whenever the graph is a directed acyclic graph). Run DFS and maintain a list, adding each node to the beginning of the list as it finishes; if during the DFS you encounter a node that is already on the current path, you have found a cycle in the given graph.
If DFS finds an edge that points to a vertex that is still on the current DFS path (i.e., discovered but not yet finished), you have a cycle there.
If an undirected graph satisfies the property |E| > |V| - 1, then the graph contains at least one cycle. (Note that this criterion does not carry over to directed cycles: a DAG can have up to |V|(|V| - 1)/2 edges.)
