Could you please help me find out the time complexity of Fleury's algorithm (which is used to find an Eulerian circuit)?
Fleury's algorithm isn't really complete until you specify how the bridge edges are identified. Tarjan gave a linear-time algorithm for identifying all bridges (see http://en.wikipedia.org/wiki/Bridge_(graph_theory) ), so a naive implementation that reran Tarjan's algorithm after each deleted edge would be O(E^2). There are probably better ways to recompute the set of bridges, but there is also a better O(E) algorithm. (See http://www.algorithmist.com/index.php/Euler_tour#Fleury.27s_algorithm ; not my site :))
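In case it is useful, here is a minimal sketch of that linear-time bridge-finding idea (DFS discovery times plus low-link values). The function name and the adjacency-list dict format are just illustrative assumptions, and a simple undirected graph is assumed:

    # Sketch of linear-time bridge finding via DFS low-link values.
    # `graph` is an adjacency list: {vertex: [neighbours, ...]} of a simple graph.
    def find_bridges(graph):
        disc = {}            # discovery time of each vertex
        low = {}             # lowest discovery time reachable from the subtree
        bridges = []
        timer = [0]

        def dfs(u, parent):
            disc[u] = low[u] = timer[0]
            timer[0] += 1
            for v in graph[u]:
                if v == parent:
                    continue
                if v in disc:                    # back edge
                    low[u] = min(low[u], disc[v])
                else:                            # tree edge
                    dfs(v, u)
                    low[u] = min(low[u], low[v])
                    if low[v] > disc[u]:         # subtree of v cannot reach above u
                        bridges.append((u, v))

        for u in graph:
            if u not in disc:
                dfs(u, None)
        return bridges

    # Example: a triangle 0-1-2 with a pendant vertex 3 hanging off 2.
    g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
    print(find_bridges(g))   # [(2, 3)]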
Here:
http://roticv.rantx.com/book/Eulerianpathandcircuit.pdf
you can read, among other things, that it is O(E), i.e. linear in the number of edges.
Fleury's algorithm involves the following steps:
Make sure the graph has either 0 or 2 odd vertices (vertices of odd degree).
If there are 0 odd vertices, start anywhere. If there are 2 odd vertices, start at one of them.
Follow edges one at a time. If you have a choice between a bridge and a non-bridge, always choose the non-bridge.
Stop when you run out of edges.
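To make those steps concrete, here is a naive Python sketch (the fleury helper and the edge-list input format are my own assumptions; the bridge test is the simple delete-the-edge-and-recount-reachability check, so this version runs in roughly O(E^2) rather than the O(E) variant discussed above, and it assumes the input graph is connected):

    from collections import defaultdict

    # Naive sketch of Fleury's algorithm on an undirected (multi)graph.
    # `edges` is a list of (u, v) pairs; vertex labels can be anything hashable.
    def fleury(edges):
        adj = defaultdict(list)
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)

        odd = [v for v in adj if len(adj[v]) % 2 == 1]
        if len(odd) not in (0, 2):
            raise ValueError("graph has no Eulerian path or circuit")
        start = odd[0] if odd else next(iter(adj))

        def reachable_count(src):
            seen, stack = {src}, [src]
            while stack:
                x = stack.pop()
                for y in adj[x]:
                    if y not in seen:
                        seen.add(y)
                        stack.append(y)
            return len(seen)

        def is_valid_next(u, v):
            if len(adj[u]) == 1:           # only remaining choice: take it
                return True
            before = reachable_count(u)    # reachable vertices with the edge
            adj[u].remove(v)
            adj[v].remove(u)
            after = reachable_count(u)     # reachable vertices without it
            adj[u].append(v)
            adj[v].append(u)
            return before == after         # unchanged => the edge is not a bridge

        tour, u = [start], start
        while adj[u]:
            for v in list(adj[u]):
                if is_valid_next(u, v):    # prefer non-bridges
                    adj[u].remove(v)
                    adj[v].remove(u)
                    tour.append(v)
                    u = v
                    break
        return tour

For example, fleury([(0, 1), (1, 2), (2, 0)]) returns the closed tour [0, 1, 2, 0].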
If the bridges are found with Tarjan's algorithm and stored in an adjacency matrix, then we need not rerun Tarjan's algorithm every time we ask whether an edge is a bridge; each subsequent bridge query takes O(1) time. Thus Fleury's algorithm's time complexity can be reduced to O(V+E) (as it is essentially a DFS), but this method needs O(V^2) extra space to store the bridges.
Related
I would like to know the difference between Boruvka's algorithm and Kruskal's algorithm.
What they have in common:
both find the minimum spanning tree (MST) in an undirected graph
both add the shortest edge to the existing tree until the MST is found
both look at the graph in its entirety, unlike e.g. Prim's algorithm, which adds one node after another to the MST
Both algorithms are greedy.
The only difference seems to be that Boruvka's perspective is each individual node (from which it looks for the cheapest edge), instead of looking at the entire graph (like Kruskal's does).
It therefore seems that Boruvka's should be relatively easy to parallelize (unlike Kruskal's). Is that true?
In the case of Kruskal's algorithm, first of all we sort all edges from the cheapest to the most expensive. Then in each step we take the minimum-weight remaining edge, and if it doesn't create a cycle in our forest (which initially consists of |V| separate vertices), we add it to the MST. Otherwise we just discard it.
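A minimal sketch of that procedure, using a union-find structure for the cycle test (the kruskal name and the (weight, u, v) edge format are illustrative assumptions):

    # Sketch of Kruskal's algorithm with a union-find (disjoint-set) structure.
    # `vertices` is an iterable of labels, `edges` a list of (weight, u, v) tuples.
    def kruskal(vertices, edges):
        parent = {v: v for v in vertices}

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        mst = []
        for w, u, v in sorted(edges, key=lambda e: e[0]):   # cheapest first
            ru, rv = find(u), find(v)
            if ru != rv:                 # the edge does not close a cycle
                parent[ru] = rv
                mst.append((w, u, v))
        return mst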
Boruvka's algorithm looks for the nearest neighbour of each component (initially a single vertex). It keeps selecting the cheapest edge leaving each component and adding it to our MST. When only one connected component remains, it's done.
Finding the cheapest outgoing edge from each node/component can easily be done in parallel. Then we just merge the newly obtained components and repeat the finding phase until the MST is complete. That's why this algorithm is a good example of parallelism (in the context of finding an MST).
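For comparison, here is a sketch of a sequential Boruvka pass structure (the names are illustrative, and distinct edge weights or a consistent tie-breaking rule are assumed). The scan that picks each component's cheapest outgoing edge is exactly the phase that can be distributed across workers:

    # Sketch of Boruvka's algorithm: in each pass every component selects its
    # cheapest outgoing edge, and all selected edges are added at once.
    # Assumes distinct edge weights (or break ties consistently, e.g. by edge id).
    def boruvka(vertices, edges):
        parent = {v: v for v in vertices}

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        mst, components = [], len(vertices)
        while components > 1:
            cheapest = {}                        # component root -> best edge
            for w, u, v in edges:                # parallelizable scan
                ru, rv = find(u), find(v)
                if ru == rv:
                    continue
                for root in (ru, rv):
                    if root not in cheapest or w < cheapest[root][0]:
                        cheapest[root] = (w, u, v)
            if not cheapest:                     # graph is disconnected
                break
            for w, u, v in cheapest.values():    # merge phase
                ru, rv = find(u), find(v)
                if ru != rv:
                    parent[ru] = rv
                    mst.append((w, u, v))
                    components -= 1
        return mst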
Regarding parallel processing with Kruskal's algorithm: we need to keep and check the edges in strict order, which is why it's hard to achieve explicit parallelism. It's inherently sequential and we can't do much about this (even if we may still consider e.g. parallel sorting). There have been a few approaches to implementing this method in a parallel way, though; those papers can easily be found if you want to check their results.
Your description is accurate, but one detail can be clarified: Boruvka's algorithm's perspective is each connected component rather than each individual node.
Your intuition about parallelization is also right -- this paper has more details. Excerpt from the abstract:
In this paper we design and implement four parallel MST algorithms (three variations of Boruvka plus our new approach) for arbitrary sparse graphs that for the first time give speedup when compared with the best sequential algorithm.
The important difference between Boruvka's algorithm and Kruskal's or Prim's is that with Boruvka's you don't need to presort the edges or maintain a priority queue.
Boruvka's still incurs the extra log N factor in the cost, but it does it by requiring O(log N) passes over the edges.
You can parallelize Boruvka's algorithm, but you can also parallelize sorting, so I don't know if Boruvka's has any real advantages over Kruskal's in practice.
Can we use Dijkstra's algorithm to find cycles?
Negative cycles
Positive cycles
If we can, what are the changes we have to make?
1) Dijkstra's doesn't work on graphs with negative edges because you can (possibly) find a minimum distance of negative infinity.
2) Um, you normally run it on graphs with cycles (otherwise, you might as well be traversing a tree), so it can handle them just fine.
If your real question is just about finding cycles, look at Finding all cycles in graph
No, we can't use Dijkstra's algorithm if negative cycles exist, as the algorithm computes shortest paths and for such graphs the shortest path is undefined. Once you get to a negative cycle, you can bring the cost of your "shortest path" as low as you wish by following the negative cycle multiple times.
This type of restriction applies to all sorts of shortest-path algorithms, and it is the same reason that negative edges are prohibited in Dijkstra's altogether.
You may modify Dijkstra's to find the cycles in the graph, but I do not think it is the best approach.
Rather, you could possibly use:
Tarjan's strongly connected components algorithm (time complexity O(|V| + |E|)), sketched after this list,
or Kosaraju's algorithm (which uses DFS and also runs in linear time),
or you may follow this link for a better idea:
https://en.wikipedia.org/wiki/Strongly_connected_component
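For what it's worth, here is a compact sketch of the SCC route (the adjacency-list dict format and the helper names are my assumptions): a directed graph contains a cycle exactly when some SCC has more than one vertex or some vertex has a self-loop.

    # Sketch of Tarjan's strongly connected components algorithm.
    # `graph` is an adjacency list {u: [v, ...]} of a directed graph.
    def tarjan_scc(graph):
        index, low = {}, {}
        on_stack, stack, sccs = set(), [], []
        counter = [0]

        def strongconnect(v):
            index[v] = low[v] = counter[0]
            counter[0] += 1
            stack.append(v)
            on_stack.add(v)
            for w in graph.get(v, []):
                if w not in index:
                    strongconnect(w)
                    low[v] = min(low[v], low[w])
                elif w in on_stack:
                    low[v] = min(low[v], index[w])
            if low[v] == index[v]:               # v is the root of an SCC
                comp = []
                while True:
                    w = stack.pop()
                    on_stack.remove(w)
                    comp.append(w)
                    if w == v:
                        break
                sccs.append(comp)

        for v in graph:
            if v not in index:
                strongconnect(v)
        return sccs

    def has_cycle(graph):
        self_loop = any(v in graph.get(v, []) for v in graph)
        return self_loop or any(len(c) > 1 for c in tarjan_scc(graph))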
Hope I answered your question.
This question has a great answer for detecting cycles in a directed graph. Unfortunately, it does not seem easy to make a Map Reduce version of it.
Specifically, I am interested in a Map Reduce algorithm for removing cycles from a directed graph.
I have evaluated using a breadth first search (BFS) algorithm but an issue I see is that two different edges may be removed simultaneously to cut off a cycle. The impact of this scenario is that too many edges could be removed. It is important that cycles are removed while minimizing the number of edges removed.
Solutions with proofs available are preferred!
Thanks.
You need an iterative map reduce to implement this algorithm. See http://www.iterativemapreduce.org/ for a map-reduce framework that centers around iterative map reduces. Or http://www.johnandcailin.com/blog/cailin/breadth-first-graph-search-using-iterative-map-reduce-algorithm for a worked example of how to do a breadth-first search through a graph with Hadoop using an iterative map reduce.
Well, if you want to remove all cycles, then you will end up with a tree. So no matter what algorithm you use, you will remove |E| - (n - 1) edges (assuming the graph is connected, of course).
However, the question is whether the deletion of edges will lead to a disconnected graph. To avoid this you will need an ordering of the edges (say, lexicographic order). You should then always remove the largest edge in a cycle. [I guess the proof of correctness is fairly direct: simply run Kruskal's algorithm under that ordering and observe that the kept edges are the same!]
Any spanning tree algorithm would solve the problem for you. Depending on what you want to optimize (time, message complexity, or any other performance metric), you will find different algorithms. BFS is the best for time. No algorithm can solve the problem with fewer than c(log n + m) messages, for some constant c > 0.
There is an algorithm I like using for DAGs called YO-YO. A description of the algorithm can be found at: http://www.site.uottawa.ca/~flocchin/CSI4509/8-yoyo11_fr.pdf
Which is the best (in time complexity) algorithm for finding a cycle in a directed graph?
Also, are there any randomized algorithms for that? I need to find a single cycle as fast as possible, not all cycles.
Tarjan's strongly connected components algorithm. The time complexity is O(|V| + |E|).
I'm not aware that this is possible to do in the general case, but if you know certain properties of the graph (such as its "distance from cycle-freeness" as described in the paper below), there exist randomized algorithms that with high probability will find a cycle quickly. Specifically, see the first algorithm in section 3 of the linked paper, with the corresponding analysis explaining how to extract the cycle.
As for deterministic algorithms, Mr. Saurav's answer is correct. In the worst case, you'll at least have to scan the entire input in order to correctly determine whether or not there is a cycle, which already requires O(|V| + |E|) time.
[1] http://arxiv.org/abs/1007.4230
The fastest one would be just a depth-first graph traversal.
This is because you are not specifying any particular topology, so any other approach could face a bad worst case. Asymptotically it is O(|E|).
What you do is label each node with the unique time at which you enter it as you recurse, and as soon as you find a node that already has a time label, there is your cycle and you halt.
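A minimal sketch of that idea, assuming a directed graph given as an adjacency-list dict (one detail worth making explicit: the already-labelled node you run into should still be on the current recursion stack, otherwise a cross edge to an already finished vertex would be mistaken for a cycle):

    # Sketch: find one cycle in a directed graph by detecting a back edge to a
    # vertex that is still on the current recursion stack ("in progress").
    def find_cycle(graph):
        WHITE, GRAY, BLACK = 0, 1, 2            # unvisited / in progress / done
        color = {v: WHITE for v in graph}
        parent = {}

        def dfs(u):
            color[u] = GRAY
            for v in graph.get(u, []):
                if color.get(v, WHITE) == GRAY:  # back edge: cycle found
                    cycle = [u]
                    while cycle[-1] != v:        # walk parent pointers back to v
                        cycle.append(parent[cycle[-1]])
                    cycle.reverse()
                    return cycle                 # e.g. [v, ..., u], closed by edge u -> v
                if color.get(v, WHITE) == WHITE:
                    parent[v] = u
                    found = dfs(v)
                    if found:
                        return found
            color[u] = BLACK
            return None

        for s in graph:
            if color[s] == WHITE:
                cycle = dfs(s)
                if cycle:
                    return cycle
        return None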
I need to find the complexity of finding all the cycles in an undirected graph of 50 nodes. Moreover, if the graph grows large, will the complexity change, and what will it be if the network grows considerably large? In addition, if I only need a few cycles, what is the complexity of finding just a few cycles in a graph?
Thanking you in Anticipation!
Using depth-first search and proactive marking of nodes, you can find cycles simply by noticing any time that you run into a marked node in your search.
This is an O(V+E) approach, I believe, where V is the number of vertices or nodes and E is the number of edges or connections.
If you put the nodes in a particular branch on a stack, you can also easily determine the cycle path. Just make sure to pop a node out each time you backtrack.
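A small sketch of that stack-based bookkeeping for a simple undirected graph given as an adjacency-list dict (names are illustrative):

    # Sketch: DFS over an undirected graph, keeping the current branch in `path`
    # so the cycle can be read off when a marked non-parent neighbour reappears.
    def find_cycle_undirected(graph):
        visited = set()

        def dfs(u, parent, path):
            visited.add(u)
            path.append(u)
            for v in graph.get(u, []):
                if v == parent:
                    continue                     # don't walk straight back
                if v in visited:
                    if v in path:                # cycle: from v along path to u
                        return path[path.index(v):] + [v]
                    continue
                cycle = dfs(v, u, path)
                if cycle:
                    return cycle
            path.pop()                           # backtrack: pop the node out
            return None

        for s in graph:
            if s not in visited:
                cycle = dfs(s, None, [])
                if cycle:
                    return cycle
        return None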
A given graph can have an exponential number of cycles (in the size of the graph). Consider a bipartite graph where v_i is connected to w_((i+1) mod n) and w_i is connected to v_((i+1) mod n).
So unless you restrict to specific kinds of graphs, there is no hope for a polynomial-time solution.
A solution that runs in exponential time is very easy to build: consider all orderings of every subset of vertices and check whether that ordering results in a cycle.
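A brute-force sketch of that idea, under my own assumptions (an undirected graph as an adjacency-list dict that lists both directions of every edge, with comparable vertex labels); it enumerates orderings of every vertex subset of size at least 3, so it is clearly exponential:

    from itertools import permutations

    # Try every ordered selection of k >= 3 distinct vertices and keep it if
    # consecutive vertices (wrapping around) are all adjacent.
    def all_cycles_bruteforce(graph):
        vertices = list(graph)
        edges = {(u, v) for u in graph for v in graph[u]}
        cycles = set()
        for k in range(3, len(vertices) + 1):
            for order in permutations(vertices, k):
                if all((order[i], order[(i + 1) % k]) in edges for i in range(k)):
                    # canonical form (start at smallest vertex, smaller direction)
                    # so that each cycle is stored exactly once
                    j = order.index(min(order))
                    rot = order[j:] + order[:j]
                    rev = (rot[0],) + tuple(reversed(rot[1:]))
                    cycles.add(min(rot, rev))
        return cycles

Even for the 50-node graph mentioned above this is hopeless, which is exactly the point.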
Of course, in practical terms you can come up with solutions that are much faster than that.