I was searching for an algorithm that could find a very basic (i.e. the shortest) cycle that includes a given vertex.
In other words, if a vertex v1 participates in two cycles, say one through v1, v2, v3 and another through v1, v2, v4, v5, v6, I want the algorithm to give me the v1, v2, v3 cycle as the output.
Does anyone know which algorithm would do that?
Also, what would the complexity of this algorithm be?
Thanks in advance.
Start a BFS from the given vertex v0. Stop as soon as the BFS considers a vertex v1 that is adjacent to v0 and whose parent in the BFS tree is not v0. The path found from v0 to v1 plus the edge (v1, v0) is your shortest cycle. The complexity is O(n+m) due to the BFS.
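A minimal sketch of this BFS idea in Python, assuming the graph is a dict mapping each vertex to its neighbours. Instead of stopping at the first edge back to v0, it checks every non-tree edge whose endpoints' tree paths leave v0 through different neighbours, which also catches cycles in which both of v0's cycle neighbours were discovered directly from v0:

```python
from collections import deque

def shortest_cycle_through(adj, v0):
    """Length of the shortest cycle containing v0 in an undirected graph
    (adj: dict mapping vertex -> iterable of neighbours), or None if v0
    lies on no cycle."""
    dist = {v0: 0}
    branch = {v0: None}   # neighbour of v0 that starts each vertex's BFS-tree path
    queue = deque([v0])
    best = None
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:                        # tree edge: first visit of w
                dist[w] = dist[u] + 1
                branch[w] = w if u == v0 else branch[u]
                queue.append(w)
            elif u != v0 and w != v0 and branch[u] != branch[w]:
                # Non-tree edge joining two different branches: the two
                # BFS-tree paths plus the edge (u, w) form a cycle through v0.
                cand = dist[u] + dist[w] + 1
                if best is None or cand < best:
                    best = cand
    return best
```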
Given a directed graph.
Any two vertices are adjacent. The edge connecting a pair of vertices may be unidirectional or bidirectional.
How do I find a Hamilton path?
Side notes:
Wikipedia says "A strongly connected simple directed graph with n vertices is Hamiltonian if every vertex has a full degree greater than or equal to n." Therefore, a solution must exist in my problem.
I understand that the general Hamilton path problem is NP-Complete. But it feels like this specific version should have a polynomial solution.
Use a variant of insertion sort to construct a path in quadratic time. Given a path
v1 v2 ... vn-1
on a subset of the vertices, consider how to insert vn. If vn has an arc to v1, then prepend vn. If vn-1 has an arc to vn, then append vn. Otherwise, v1 has an arc to vn and vn has an arc to vn-1, so by the one-dimensional case of Sperner's lemma there exists an index i such that vn has an arc from vi and an arc to vi+1. Insert it there.
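A quadratic-time sketch of this insertion idea in Python, assuming the digraph is given by a hypothetical predicate has_arc(u, v) that is true iff there is an arc from u to v (the question guarantees at least one arc between every pair of vertices):

```python
def hamiltonian_path(n, has_arc):
    """Build a Hamiltonian path on vertices 0..n-1 of a digraph in which
    every pair of vertices has at least one arc between them."""
    path = [0]
    for v in range(1, n):
        if has_arc(v, path[0]):        # v -> first vertex: prepend
            path.insert(0, v)
        elif has_arc(path[-1], v):     # last vertex -> v: append
            path.append(v)
        else:
            # Here path[0] -> v and v -> path[-1] both hold, so somewhere
            # along the path the direction of the arcs to/from v flips:
            # find i with path[i] -> v and v -> path[i+1], insert v there.
            for i in range(len(path) - 1):
                if has_arc(path[i], v) and has_arc(v, path[i + 1]):
                    path.insert(i + 1, v)
                    break
    return path
```

Each of the n insertions costs O(n) arc queries and list operations, giving the quadratic bound.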
Given an undirected graph G=(V,E) and subsets V1, V2 of V, define
d(V1,V2) = min{ d(v1,v2) : v1 ∈ V1, v2 ∈ V2 }.
I need to figure out how to find d(V1,V2) in O(|V|+|E|) time.
If V1 ∩ V2 ≠ ∅, then d(V1,V2)=0.
Otherwise, I randomly pick v1' from V1 and run BFS(V1,v1'), saving the furthest vertex from v1' as v1''.
I do the same for some random vertex v2' from V2, obtaining v2''.
Then I return d(V1,V2) = min{ d(v1',v2'), d(v1',v2''), d(v1'',v2'), d(v1'',v2'') }.
Will that work? Since the runtime of BFS is O(|V|+|E|), the suggested algorithm will run in O(|V|+|E|).
IMO what you can do is as follows:
1. Scan the set V1 and note all edges that begin at nodes in V1 and end at nodes not in V1.
2. Combine all the nodes in V1 into one node; the edges noted in step 1 become the edges going out of this node.
3. Do the same for V2.
Now the problem reduces to finding the shortest path between the node V1 and the node V2. This can be solved with a simple BFS from node V1 (or V2) in O(V+E), where V and E are the numbers of nodes and edges in the graph.
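A sketch of this in Python: merging all of V1 into one node and doing BFS from it is the same as a multi-source BFS seeded with every vertex of V1, stopping at the first vertex that lies in V2 (the graph is assumed to be a dict mapping each vertex to its neighbours):

```python
from collections import deque

def set_distance(adj, V1, V2):
    """d(V1, V2) in an unweighted undirected graph, or None if no path exists."""
    V2 = set(V2)
    if any(v in V2 for v in V1):      # the sets intersect: distance is 0
        return 0
    dist = {v: 0 for v in V1}         # every vertex of V1 acts as a BFS source
    queue = deque(V1)
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                if w in V2:           # first vertex of V2 reached: shortest by BFS
                    return dist[w]
                queue.append(w)
    return None
```

Each vertex and edge is still processed at most once, so the O(|V|+|E|) bound holds.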
How do I design an algorithm that computes, in linear time, the diameter of an undirected (graph-theoretical) tree in which every edge has weight 1? The diameter of a tree is the length of the longest path between two vertices.
Any idea how to approach this problem?
Let v1 be any vertex in the tree.
Do a depth first search from v1 to get the distances of all other vertices from v1, choose v2 as the vertex with the highest distance.
Do a depth first search from v2 to get the distances of all other vertices from v2, choose v3 as the vertex with the highest distance.
d(v2, v3) is the tree diameter. The complexity is O(|V|), since DFS is linear on a tree.
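A sketch of the two-pass search in Python, assuming the tree is given as a dict mapping each vertex to its neighbours (an iterative traversal is used so deep trees don't hit the recursion limit):

```python
def tree_diameter(adj):
    """Diameter (number of edges on the longest path) of an unweighted tree."""
    def farthest_from(s):
        dist = {s: 0}
        stack = [s]
        far, far_d = s, 0
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    if dist[w] > far_d:
                        far, far_d = w, dist[w]
                    stack.append(w)
        return far, far_d

    v1 = next(iter(adj))              # any starting vertex
    v2, _ = farthest_from(v1)         # farthest vertex from v1
    _, diameter = farthest_from(v2)   # farthest distance from v2 is the diameter
    return diameter
```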
Say we have a strongly connected directed graph G=(V,E) with positive edge weights, and V0 belongs to V. Write an algorithm to find the shortest paths between all pairs of nodes that pass through V0.
An interview question. Clearly we could use Bellman-Ford, which takes O(VE).
However, there must be a better solution. Any help, please?
I think you could even use Dijkstra's algorithm. Run it once to find the shortest paths from V0 to all other vertices, and then once more to find the shortest paths from every other vertex to V0 (this is the same as running regular Dijkstra on the graph with its edges reversed). Then for any pair (V1, V2), concatenate the path from V1 to V0 with the path from V0 to V2.
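A sketch of this two-Dijkstra approach in Python, assuming the graph is a dict mapping each vertex u to a list of (v, weight) arcs (only the distances are returned here; the paths themselves can be recovered by also recording predecessors):

```python
import heapq

def dijkstra(adj, source):
    """Single-source shortest distances with positive edge weights."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                  # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def all_pairs_through(adj, v0):
    """d(u, v) over all pairs, where the path is forced to pass through v0."""
    radj = {u: [] for u in adj}       # graph with every edge reversed
    for u in adj:
        for v, w in adj[u]:
            radj[v].append((u, w))
    to_v0 = dijkstra(radj, v0)        # to_v0[u] = d(u, v0) in the original graph
    from_v0 = dijkstra(adj, v0)       # from_v0[v] = d(v0, v)
    return {(u, v): to_v0[u] + from_v0[v] for u in adj for v in adj}
```

Since the graph is strongly connected, both runs reach every vertex, so every pair gets a value.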
I'm just starting out learning the basics of graph theory, and my textbook is a little unclear about a simple concept: the term "adjacency". As far as I understand, in an undirected graph, if nodes A and B are connected, then A is adjacent to B and B is adjacent to A. I was wondering if this is still true in a directed graph where A points to B?
Thanks
It looks like it was pretty well explained already, but to spell it out: adjacent nodes are two nodes that are connected by an edge, and there are two basic setups:
In an undirected graph, two nodes A and B connected by an edge are adjacent to each other.
In a directed graph, an edge from A to B means that you can get to B from A (or, B is adjacent to A).
In a directed graph where A points to B, the pair (A,B) would be included in the graph's adjacency list and (B,A) would not be. That is, A is adjacent to B, but not vice versa.
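As a tiny illustration in Python (hypothetical names, dict-of-lists representation):

```python
# Adjacency list of a directed graph with a single arc A -> B.
adj = {
    'A': ['B'],   # A is adjacent to B: the arc (A, B) exists
    'B': [],      # no arc (B, A), so B is not adjacent to A
}
```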
In a digraph, if there is an edge from v1 to v2, then v2 is adjacent to v1. (From v1 to v2 as in v2 is the head and v1 is the tail.)
In an undirected graph this is symmetric - if v2 is adjacent to v1 then v1 is also adjacent to v2, and we say v1 ~ v2.
In a digraph, v1 may not necessarily also be adjacent to v2, so we say v1 → v2.
EDIT: also, you could try asking this sort of question on the CSTheory Stackexchange site in the future - you might get better answers.