Map Reduce algorithm for removing cycles from a graph

This question has a great answer for detecting cycles in a directed graph. Unfortunately, it does not seem easy to make a Map Reduce version of it.
Specifically, I am interested in a Map Reduce algorithm for removing cycles from a directed graph.
I have evaluated using a breadth-first search (BFS) algorithm, but an issue I see is that two different edges may be removed simultaneously to cut the same cycle. The impact of this scenario is that too many edges could be removed. It is important that cycles are removed while minimizing the number of edges removed.
Solutions with proofs available are preferred!
Thanks.

You need an iterative map reduce to implement this algorithm. See http://www.iterativemapreduce.org/ for a map-reduce framework that centers around iterative map reduces. Or http://www.johnandcailin.com/blog/cailin/breadth-first-graph-search-using-iterative-map-reduce-algorithm for a worked example of how to do a breadth-first search through a graph with Hadoop using an iterative map reduce.
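To make one round of that concrete, here is a minimal sketch of a single BFS iteration in the Hadoop Streaming style (the record format and function names are my own assumptions, not taken from the linked posts): each node record carries its best-known distance and adjacency list, the mapper expands the frontier, and the reducer keeps the minimum distance per node.

```python
# One BFS iteration in the Hadoop Streaming style. Records are tab-separated
# lines: "node<TAB>distance<TAB>comma,separated,neighbours", where distance is
# the best distance from the source found so far, or "INF" if not yet reached.

def mapper(lines):
    for line in lines:
        node, dist, adj = line.rstrip("\n").split("\t")
        yield node, (dist, adj)                        # pass the node record through
        if dist != "INF":                              # expand the current frontier
            for nbr in filter(None, adj.split(",")):
                yield nbr, (str(int(dist) + 1), "")    # tentative distance, no adjacency

def reducer(grouped):
    # grouped: iterable of (node, values) pairs, as the shuffle phase would deliver them
    for node, values in grouped:
        best, adj = float("inf"), ""
        for dist, a in values:
            if dist != "INF":
                best = min(best, int(dist))
            adj = adj or a                             # keep the node's adjacency list
        yield "%s\t%s\t%s" % (node, "INF" if best == float("inf") else best, adj)
```

A driver reruns this job until no node's distance improves; that outer loop is the "iterative" part that the frameworks above manage for you.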

Well, if you want to remove all cycles, then you will end up with a tree. So no matter what algorithm you use, you will remove |E| - (n - 1) edges (assuming the algorithm is correct, of course).
However, the question is whether the deletion of edges will lead to a disconnected graph. For this you will need an ordering of the edges (say, lexicographic order). You should then always remove the largest edge in a cycle. [The proof of correctness is fairly direct: simply run Kruskal's algorithm and observe that the result is the same!]
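To make the Kruskal connection concrete, here is a minimal union-find sketch (identifiers are mine): when Kruskal considers an edge whose endpoints are already connected, that edge is the largest edge of the cycle it would close, so the edges Kruskal discards are exactly the ones the rule above removes (assuming distinct weights).

```python
def kruskal_kept_and_removed(n, edges):
    """edges: list of (weight, u, v) with vertices 0..n-1; returns (kept, removed)."""
    parent = list(range(n))

    def find(x):                        # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    kept, removed = [], []
    for w, u, v in sorted(edges):       # cheapest edges first
        ru, rv = find(u), find(v)
        if ru == rv:                    # u and v already connected, so this edge
            removed.append((w, u, v))   # is the largest edge of the cycle it closes
        else:
            parent[ru] = rv
            kept.append((w, u, v))
    return kept, removed
```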
Any spanning tree algorithm would solve the problem for you. Depending on what you want to optimize (time, message complexity, or some other performance metric), you will find different algorithms. BFS is the best for time. No algorithm can solve the problem with fewer than c(log n + m) messages for some constant c > 0.
There is an algorithm I like to use for DAGs called YO-YO. A description of the algorithm can be found at: http://www.site.uottawa.ca/~flocchin/CSI4509/8-yoyo11_fr.pdf

Related

Is there an efficient way to find shortest paths in a functional graph?

My task is to process Q shortest path queries in a functional graph with V nodes. Q and V are integers that can be up to 100000.
My first idea was to use the Floyd-Warshall algorithm to answer queries efficiently, but this algorithm takes O(V^3) time to calculate the shortest paths, which is way too slow.
My second idea runs in O(QV) time, because for every query I start at the starting node and traverse through the graph until I discover a cycle or reach the destination node.
However, this solution is still too slow; it has no chance of quickly processing queries when V and Q become large. I think that there is some pre-processing or another technique that I could use to solve this, but I haven't been able to find any online resources to help guide me. Can somebody please help me out?
A functional graph means that each node has only a single out-edge, so the maximum number of steps between A and B couldn't be more than the number of vertices without encountering a cycle. A single query should therefore be O(V).
Since there are no choices, you could readily build a CostMap[V][V] which records the distance between two nodes, and lazily fill it as you encounter queries; thus successive queries would approach constant time.
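A minimal sketch of that lazy idea (names and representation are my own assumptions): walk the single out-edge from the start, record the distance to every node met along the way, and cache answers so repeated queries approach constant time.

```python
def make_query(nxt):
    """nxt[v] is the single out-neighbour of node v (a functional graph)."""
    cache = {}                                    # (start, goal) -> distance or None

    def query(start, goal):
        if (start, goal) in cache:
            return cache[(start, goal)]
        dist, seen, v = 0, set(), start
        while v != goal and v not in seen:        # stop at the goal or when a cycle closes
            cache[(start, v)] = dist              # every node on the walk gets filled in
            seen.add(v)
            v = nxt[v]
            dist += 1
        answer = dist if v == goal else None      # None: goal unreachable from start
        cache[(start, goal)] = answer
        return answer

    return query
```

For example, with make_query([1, 2, 0, 2]) the call query(0, 2) returns 2, and query(0, 3) returns None because node 3 is unreachable from node 0.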
There are a lot of algorithms designed for this purpose; you can look up the depth-first search (DFS) or breadth-first search (BFS) algorithms, as well as Dijkstra's algorithm and the A* (A star) algorithm, the last of which is often used in pathfinding for video games. They all have their pros and cons, and the best choice depends on the structure of your network, but one of them should suit your needs.

Difference between Boruvka and Kruskal in finding MST

I would like to know the difference between Boruvka's algorithm and Kruskal's algorithm.
What they have in common:
both find the minimum spanning tree (MST) in an undirected graph
both add the shortest edge to the existing tree until the MST is found
both look at the graph in its entirety, unlike e.g. Prim's algorithm, which adds one node after another to the MST
Both algorithms are greedy
The only difference seems to be that Boruvka's perspective is each individual node (from where it looks for the cheapest edge), instead of looking at the entire graph (like Kruskal does).
It therefore seems that Boruvka should be relatively easy to do in parallel (unlike Kruskal). Is that true?
In the case of Kruskal's algorithm, first of all we sort all edges from the cheapest to the most expensive. Then in each step we take the minimum-weight remaining edge; if it doesn't create a cycle in our graph (which initially consists of |V| separate vertices), we add it to the MST. Otherwise we just discard it.
Boruvka's algorithm looks for the nearest neighbour of each component (initially each vertex). It keeps selecting the cheapest edge from each component and adds it to our MST. When we have only one connected component left, it's done.
Finding the cheapest outgoing edge from each node/component can be done easily in parallel. Then we can just merge the newly obtained components and repeat the finding phase till we find the MST. That's why this algorithm is a good example of parallelism (in the case of finding the MST).
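Here is a compact sequential sketch of that scheme (identifiers are my own); the edge scan that picks the cheapest outgoing edge per component is the part that parallelizes naturally.

```python
def boruvka_mst(n, edges):
    """edges: list of (weight, u, v) for a connected undirected graph on vertices 0..n-1."""
    comp = list(range(n))                          # component label of each vertex

    def find(x):                                   # union-find with path halving
        while comp[x] != x:
            comp[x] = comp[comp[x]]
            x = comp[x]
        return x

    mst, components = [], n
    while components > 1:
        cheapest = {}                              # component -> its cheapest outgoing edge
        for w, u, v in edges:                      # this scan is the parallel-friendly phase
            cu, cv = find(u), find(v)
            if cu != cv:
                for c in (cu, cv):
                    if c not in cheapest or (w, u, v) < cheapest[c]:
                        cheapest[c] = (w, u, v)
        for w, u, v in set(cheapest.values()):     # merge phase
            cu, cv = find(u), find(v)
            if cu != cv:
                comp[cu] = cv
                mst.append((w, u, v))
                components -= 1
    return mst
```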
Regarding parallel processing using Kruskal's algorithm, we need to keep and check edges in strict order; that's why it's hard to achieve explicit parallelism. It's rather sequential and we can't do much about this (even if we still may consider, e.g., parallel sorting). There have been a few approaches to implementing this method in a parallel way, and those papers can be found easily if you want to check their results.
Your description is accurate, but one detail can be clarified: Boruvka's algorithm's perspective is each connected component rather than each individual node.
Your intuition about parallelization is also right -- this paper has more details. Excerpt from the abstract:
In this paper we design and implement four parallel MST algorithms (three variations of Boruvka plus our new approach) for arbitrary sparse graphs that for the first time give speedup when compared with the best sequential algorithm.
The important difference between Boruvka's algorithm and Kruskal's or Prim's is that with Boruvka's you don't need to presort the edges or maintain a priority queue.
Boruvka's still incurs the extra log N factor in the cost, but it does it by requiring O(log N) passes over the edges.
You can parallelize Boruvka's algorithm, but you can also parallelize sorting, so I don't know if Boruvka's has any real advantages over Kruskal's in practice.

Most time efficient method of finding all simple paths between all nodes in an undirected graph

To expand on the title, I need all simple (non-cyclical) paths between all nodes in a very large undirected graph.
The most obvious optimization I can think of is that once I have calculated all the paths between a particular pair of nodes I can just reverse them all instead of recalculating when I need to go the other way.
I was looking into transitive closures and the Floyd–Warshall algorithm, but it looks like the best I could do if I went down that route would be to find only the shortest paths between all nodes.
Any ideas? Right now I'm looking at running a DFS on every node in the graph, which seems to me to be significantly less than optimal.
I don't understand the reasoning behind your idea that DFS is significantly less than optimal. In fact, DFS is clearly optimal here.
If you traverse the graph, limiting the branching only to vertices which haven't been visited in this branch so far, then the total number of nodes in the DFS tree will be equal to the number of simple paths from the starting vertex to all other vertices. As all of these paths are a part of your output, the algorithm cannot be meaningfully improved, as you can't reduce complexity below the size of the output.
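For concreteness, a minimal sketch of that DFS (naming is mine): it emits every simple path starting at src, and its running time is proportional to the total size of the output.

```python
def all_simple_paths(adj, src):
    """adj: dict mapping each node to an iterable of its neighbours."""
    path, on_path, out = [src], {src}, []

    def dfs(v):
        out.append(list(path))           # every prefix of the DFS path is a simple path
        for w in adj[v]:
            if w not in on_path:         # only branch into vertices not already on the path
                path.append(w)
                on_path.add(w)
                dfs(w)
                path.pop()
                on_path.remove(w)

    dfs(src)
    return out                           # includes the trivial path [src]
```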
There is simply no way to output a factorial amount of data in polynomial time, regardless of what the problem is or what algorithm you are using.

Finding all shortest paths between every pair of nodes in a graph

I have about 70k nodes, and 250k edges, and the graph isn't necessarily connected. Obviously using an efficient algorithm is crucial. What do you recommend?
As a side note, I would appreciate advice on how to divide the task up between several machines--is that even possible with this kind of problem?
Thanks
You could use the Floyd-Warshall algorithm. It solves exactly this problem.
The complexity is O(V^3).
There is also Johnson's algorithm with a complexity of O(V^2*log V + VE). The latter is also easy to distribute because it runs Dijkstra's algorithm V times, which can be done in parallel.
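For reference, a minimal Floyd-Warshall sketch (assuming a dense V x V distance matrix with 0 on the diagonal and float('inf') for missing edges); note that at 70k nodes both the triple loop and the V x V matrix are likely impractical, which is why the sparse-graph suggestions in the other answers matter.

```python
def floyd_warshall(dist):
    """dist: V x V matrix; dist[u][v] is the edge weight, float('inf') if absent,
    and dist[i][i] == 0. Updated in place to all-pairs shortest distances."""
    V = len(dist)
    for k in range(V):                   # allow vertex k as an intermediate stop
        dk = dist[k]
        for i in range(V):
            dik = dist[i][k]
            for j in range(V):
                if dik + dk[j] < dist[i][j]:
                    dist[i][j] = dik + dk[j]
    return dist
```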
MapReduce is a great distributed algorithm for this, though it might be a little too high-powered. If you're interested in that, take a look at this lecture or maybe this blog post for inspiration. (In fact, when I was taught MapReduce, this was one of the first examples.)
With 250k edges and 70k nodes, the graph is relatively sparse. Dijkstra's algorithm runs in O(E + V log V) per source node, for a full (all-sources) running time of O(VE + V^2 log V). This should be fast enough, but the usual caveats apply for Dijkstra's (no negative edges).
You can also take a look at Johnson's algorithm if your problem deals with negative weights, but not negative cycles. Specifically, it can also be distributed, as it takes the reweighted graph and runs Dijkstra's algorithm from each node.
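A minimal all-sources sketch along those lines (structure and names are mine; with Python's binary heap this is O(E log V) per source rather than the O(E + V log V) quoted above):

```python
import heapq

def dijkstra(adj, src):
    """adj: dict node -> list of (neighbour, weight); non-negative weights assumed."""
    dist, heap = {src: 0}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                                  # stale heap entry
        for v, w in adj.get(u, ()):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist                                       # unreachable nodes are simply absent

def all_sources(adj):
    return {u: dijkstra(adj, u) for u in adj}         # independent runs, easy to distribute
```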
There are two naive ways of parallelizing this problem:
1) Identify subcomponents and distribute these over different computers. The path length between two nodes from two different components is undefined.
2) Load the graph on different computers and give every computer a list of nodes for which to calculate all shortest paths. The results for one node do not depend on the results for another node, so you can parallelize the problem this way.
Upside: not too hard to implement, but I would only do it like this if you have to solve this once. If this is a recurring issue then you might want to look at distributed algorithms.
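A sketch of option 2 (my code, assuming unweighted edges so BFS can serve as the per-node routine; for weighted edges you would substitute a Dijkstra routine like the sketch in the earlier answer):

```python
from collections import deque
from multiprocessing import Pool

ADJ = {0: [1], 1: [2], 2: [0], 3: []}                 # toy unweighted adjacency list

def bfs_from(source):
    """Single-source shortest paths by BFS (unweighted edges assumed)."""
    dist, q = {source: 0}, deque([source])
    while q:
        u = q.popleft()
        for v in ADJ.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return source, dist

if __name__ == "__main__":
    with Pool(4) as pool:                             # each worker handles a slice of the nodes
        all_pairs = dict(pool.map(bfs_from, ADJ))
    print(all_pairs)
```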
Use igraph, it's written in C, pretty fast and you can use Python as a wrapper language.
Look at papers/publications which have the following keywords: distributed graph search algorithms. Here's one that may be of help.
There's this ACM account only paper as well: Distributed computation on graphs: shortest path algorithms

Is this minimum spanning tree algorithm correct?

The minimum spanning tree problem is to take a connected weighted graph and find the subset of its edges with the lowest total weight while keeping the graph connected (and as a consequence resulting in an acyclic graph).
The algorithm I am considering is:
Find all cycles.
Remove the largest edge from each cycle.
The impetus for this version is an environment that is restricted to "rule satisfaction" without any iterative constructs. It might also be applicable to insanely parallel hardware (i.e. a system where you expect to have several times more degrees of parallelism than cycles).
Edits:
The above is done in a stateless manner (all edges that are not the largest edge in any cycle are selected/kept/ignored, all others are removed).
What happens if two cycles overlap? Which one has its longest edge removed first? Does it matter if the longest edge of each is shared between the two cycles or not?
For example:
V = { a, b, c, d }
E = { (a,b,1), (b,c,2), (c,a,4), (b,d,9), (d,a,3) }
There's an a -> b -> c -> a cycle, and an a -> b -> d -> a cycle.
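For illustration, a brute-force sketch of the stateless rule (not from the original post), using the fact that with distinct weights an edge is the largest edge of some cycle exactly when its endpoints are connected by strictly lighter edges. On the example above it removes (c,a,4) and (b,d,9) and leaves a spanning tree.

```python
def cycle_rule_mst(edges):
    """edges: list of (u, v, w) with distinct weights. Keep each edge that is not the
    largest edge of any cycle; equivalently, drop an edge iff its endpoints are already
    connected using only strictly lighter edges."""
    def connected(u, v, allowed):
        seen, stack = {u}, [u]
        while stack:
            x = stack.pop()
            if x == v:
                return True
            for a, b, _ in allowed:
                if a == x and b not in seen:
                    seen.add(b)
                    stack.append(b)
                elif b == x and a not in seen:
                    seen.add(a)
                    stack.append(a)
        return False

    kept = []
    for u, v, w in edges:
        lighter = [e for e in edges if e[2] < w]
        if not connected(u, v, lighter):              # not the largest edge of any cycle
            kept.append((u, v, w))
    return kept

# The example above: keeps (a,b,1), (b,c,2), (d,a,3) and removes (c,a,4), (b,d,9).
print(cycle_rule_mst([("a","b",1), ("b","c",2), ("c","a",4), ("b","d",9), ("d","a",3)]))
```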
#shrughes.blogspot.com:
I don't know about removing all but two; I've been sketching out various runs of the algorithm, and assuming that parallel runs may remove an edge more than once, I can't find a situation where I'm left without a spanning tree. Whether or not it's minimal I don't know.
For this to work, you'd have to detail how you would want to find all cycles, apparently without any iterative constructs, because that is a non-trivial task. I'm not sure that's possible. If you really want to find a MST algorithm that doesn't use iterative constructs, take a look at Prim's or Kruskal's algorithm and see if you could modify those to suit your needs.
Also, is recursion barred in this theoretical architecture? If so, it might actually be impossible to find a MST on a graph, because you'd have no means whatsoever of inspecting every vertex/edge on the graph.
I dunno if it works, but no matter what, your algorithm is not even worth implementing. Finding all cycles will be the freaking huge bottleneck that will kill it. Also, doing that without iteration is impossible. Why don't you implement some standard algorithm, let's say Prim's?
Your algorithm isn't quite clearly defined. If you have a complete graph, your algorithm would seem to entail, in the first step, removing all but the two minimum elements. Also, listing all the cycles in a graph can take exponential time.
Elaboration:
In a graph with n nodes and an edge between every pair of nodes, there are, if I have my math right, n!/(2k(n-k)!) cycles of size k, if you're counting a cycle as some subgraph of k nodes and k edges with each node having degree 2.
#Tynan The system can be described (somewhat oversimplified) as a system of rules describing categorizations. "Things are in category A if they are in B but not in C", "Nodes connected to nodes in Z are also in Z", "Every category in M is connected to a node N and has 'child' categories, also in M, for every node connected to N". It's slightly more complicated than this. (I have shown that by creating unstable rules you can model a Turing machine, but that's beside the point.) It can't explicitly define iteration or recursion but can operate on recursive data with rules like the 2nd and 3rd ones.
#Marcin, Assume that there are an unlimited number of processors. It is trivial to show that the program can be run in O(n^2) for n being the longest cycle. With better data structures, this can be reduced to O(n * O(set lookup)). I can envision hardware (quantum computers?) that can evaluate all cycles in constant time, giving an O(1) solution to the MST problem.
The Reverse-Delete algorithm seems to provide a partial proof of correctness (that the proposed algorithm will not produce a non-minimal spanning tree); this is derived by arguing that my algorithm will remove every edge that the Reverse-Delete algorithm will. However, I'm not sure how to show that my algorithm won't delete more edges than that algorithm does.
Hhmm....
OK, this is an attempt to finish the proof of correctness. By analogy to the Reverse-Delete algorithm, we know that enough edges will be removed. What remains is to show that not too many edges will be removed.
Removing too many edges can be described as removing all the edges between the sides of a binary partition of the graph's nodes. However, only edges in a cycle are ever removed; therefore, for all edges between the partitions to be removed, there needs to be a return path to complete the cycle. If we only consider edges between the partitions, the algorithm can at most remove the larger of each pair of edges, so it can never remove the smallest bridging edge. Therefore, for any arbitrary binary partitioning, the algorithm can't sever all links between the sides.
What remains is to show that this extends to >2 way partitions.
