Sollin's Minimum Spanning Tree Algorithm

Yes this is homework. I was wondering if someone could explain the process of Sollin's (or Borůvka's) algorithm for determining a minimum spanning tree. Also if you could explain how to determine the number of iterations in the worst case, that would be great.

On a top level, the algorithm works as follows:
Maintain a collection of spanning trees of subgraphs. Initially, every vertex of the graph is its own spanning tree with no edges.
In each iteration, for each of your spanning trees, find a cheapest edge connecting it to another spanning tree. (This is a simplification.)
The worst case in terms of iterations is that you always merge pairs of trees. In that case, the number of trees you have will halve in each iteration, so the number of iterations is logarithmic in the number of nodes.
Also note that there is a special trick involved in choosing the edges to add: if you were not careful, you might introduce a cycle when tree A connects to tree B, tree B connects to tree C and tree C connects to tree A. (This can only happen if all three edges chosen have the same weight. The trick is to have an arbitrary but fixed tie-breaker, like a fixed order of the edges.)
So there, that's my back-of-index-card overview.
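For concreteness, here is a minimal Python sketch of that overview (the edge-list format and the function name are my own choices, nothing canonical), using the (weight, edge index) pair as the fixed tie-breaker mentioned above:

```python
def boruvka_mst(num_vertices, edges):
    """Borůvka/Sollin sketch. edges: list of (weight, u, v) tuples; vertices are 0..num_vertices-1."""
    parent = list(range(num_vertices))

    def find(x):
        # Union-find with path halving: which spanning tree does x currently belong to?
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    components = num_vertices
    while components > 1:
        # cheapest[c] = index of the cheapest edge leaving the tree whose root is c,
        # ties broken by edge index (the fixed tie-breaker).
        cheapest = [None] * num_vertices
        for i, (w, u, v) in enumerate(edges):
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            for r in (ru, rv):
                if cheapest[r] is None or (w, i) < (edges[cheapest[r]][0], cheapest[r]):
                    cheapest[r] = i
        chosen = {c for c in cheapest if c is not None}
        if not chosen:          # disconnected graph: no further merges are possible
            break
        for i in chosen:
            w, u, v = edges[i]
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                mst.append((u, v, w))
                components -= 1
    return mst
```

Each round at least halves the number of remaining trees, which is exactly where the logarithmic bound on the number of iterations comes from.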

I'm using layman's terminology.
First select a vertex.
Check all the edges from that vertex and select one with the minimum weight.
Do this for all the vertices (some edges may be selected more than once).
You will get connected components.
Now repeat the same step, treating each connected component as a single unit: from each component, select the minimum-weight edge leaving it.
Keep merging until one component remains; your spanning tree with minimum weight will be formed.

Related

Minimum vertex cover

I am trying to get a vertex cover for an "almost" tree with 50,000 vertices. The graph is generated as a tree with random edges added in making it "almost" a tree.
I used the approximation method where you marry two vertices, add them to the cover and remove them from the graph, then move on to another set of vertices. After that I tried to reduce the number of vertices by removing the vertices that have all of their neighbors inside the vertex cover.
My question is how would I make the vertex cover even smaller? I'm trying to go as low as I can.
Here's an idea, but I have no idea if it is an improvement in practice:
From https://en.wikipedia.org/wiki/Biconnected_component "Any connected graph decomposes into a tree of biconnected components called the block-cut tree of the graph." Furthermore, you can compute such a decomposition in linear time.
I suggest that when you marry and remove two vertices you do this only for two vertices within the same biconnected component. When you have run out of vertices to merge you will have a set of trees not connected with each other. The vertex cover problem on trees is tractable via dynamic programming: for each node compute the cost of the best answer if that node is added to the cover and if that node is not added to the cover. You can compute the answers for a node given the best answers for its children.
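In code, that dynamic program might look roughly like this (the adjacency-dict input format and the function name are my own assumptions; it only returns the size of the cover, though recovering the actual vertices is a standard extension):

```python
def tree_vertex_cover(adj):
    """adj: dict mapping node -> list of neighbours, describing a tree or forest.
    Returns the size of a minimum vertex cover.
    For every node v we keep two values:
      include[v] - best cover of v's subtree if v IS in the cover
      exclude[v] - best cover of v's subtree if v is NOT (then all its children must be)."""
    visited = set()
    total = 0
    for root in adj:
        if root in visited:
            continue
        # Iterative DFS that records parents, so deep trees don't hit the recursion limit.
        order, parent = [], {root: None}
        stack = [root]
        visited.add(root)
        while stack:
            v = stack.pop()
            order.append(v)
            for w in adj[v]:
                if w not in visited:
                    visited.add(w)
                    parent[w] = v
                    stack.append(w)
        include, exclude = {}, {}
        for v in reversed(order):               # children are processed before their parents
            children = [w for w in adj[v] if w != parent[v]]
            include[v] = 1 + sum(min(include[w], exclude[w]) for w in children)
            exclude[v] = sum(include[w] for w in children)
        total += min(include[root], exclude[root])
    return total
```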
Another way - for all I know better - would be to compute the minimum spanning tree of the graph and to use dynamic programming to compute the best vertex cover for that tree, neglecting the links outside the tree, remove the covered links from the graph, and then continue by marrying vertices as before.
I think I prefer the minimum spanning tree one. In producing the minimum spanning tree you are deleting a small number of links. A tree with N nodes has N-1 links, so even if you don't get back the original tree you get back one with just as many links. A vertex cover for the full graph is also a vertex cover for the minimum spanning tree, so if the correct answer for the full graph has V vertices there is an answer for the minimum spanning tree with at most V vertices. If there were k random edges added to the tree there are k edges (not necessarily the same) that need to be added to turn the minimum spanning tree into the full graph. You can certainly make sure these new edges are covered with at most k extra vertices. So if the optimum answer has V vertices you will obtain an answer with at most V+k vertices.
Here's an attempt at an exact answer which is tractable when only a small number of links are added, or when they don't change the inter-node distances very much.
Find a minimum spanning tree, and divide edges into "tree edges" and "added edges", where the tree edges form a minimum spanning tree, and the added edges were not chosen for this. They may not be the edges actually added during construction but that doesn't matter. All trees on N nodes have N-1 edges so we have the same number of added edges as were used during creation, even if not the same edges.
Now pretend you can peek at the answer in the back of the book just enough to see, for one vertex from each added edge, whether that vertex was part of the best vertex cover. If it was, you can remove that vertex and its links from the problem. If not, the other vertex must be so you can remove it and its links from the problem.
You now have to find a minimum vertex cover for a tree or a number of disconnected trees, and we know how to do this - see my other answer for a bit more handwaving.
If you can't peek at the back of the book for an answer, and there are k added edges, try all 2^k possible answers that might have been in the back of the book and find the best. If you are lucky then added link A is in a different subtree from added link B. In that case you can confine the two calculations needed for the two possibilities for added link A (or B) to the dynamic programming calculations for the relevant subtree so you have only doubled the work instead of quadrupled it. In general, if your k added edges are in k different subtrees that don't interfere with each other, the cost is multiplied by 2 instead of 2^k.
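If it helps to see the overall shape, here is a rough Python sketch of that procedure under my own assumptions about the input (vertices 0..n-1, an edge list, and an arbitrary spanning tree instead of a minimum one, since edge weights don't matter for vertex cover). It only demonstrates the 2^k structure of the search, reusing the include/exclude tree DP:

```python
from itertools import product

def exact_cover_almost_tree(n, edges):
    """n vertices 0..n-1, edges given as (u, v) pairs; assumes a connected graph.
    k = (#edges) - (n - 1) 'added' edges; exponential in k, so only practical for small k."""
    # 1. Build an arbitrary spanning tree with union-find; the leftovers are the 'added' edges.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree_adj = {v: [] for v in range(n)}
    added = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree_adj[u].append(v)
            tree_adj[v].append(u)
        else:
            added.append((u, v))

    def forest_cover(removed):
        # Minimum vertex cover of the spanning tree after deleting the 'removed' vertices,
        # via the usual include/exclude dynamic program on each remaining tree.
        visited, total = set(removed), 0
        for root in range(n):
            if root in visited:
                continue
            order, par = [], {root: None}
            stack = [root]
            visited.add(root)
            while stack:
                x = stack.pop()
                order.append(x)
                for y in tree_adj[x]:
                    if y not in visited:
                        visited.add(y)
                        par[y] = x
                        stack.append(y)
            inc, exc = {}, {}
            for x in reversed(order):
                kids = [y for y in tree_adj[x] if y != par[x] and y not in removed]
                inc[x] = 1 + sum(min(inc[y], exc[y]) for y in kids)
                exc[x] = sum(inc[y] for y in kids)
            total += min(inc[root], exc[root])
        return total

    # 2. 'Peek at the back of the book': for every added edge, guess which endpoint is in
    #    the optimal cover; that vertex covers the added edge and its tree edges, and the
    #    rest is a forest problem. Try all 2^k guesses and keep the best.
    best = float('inf')
    for choice in product((0, 1), repeat=len(added)):
        forced = {e[c] for e, c in zip(added, choice)}
        best = min(best, len(forced) + forest_cover(forced))
    return best
```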
Minimum vertex cover is an NP-complete problem, which means that you cannot expect to solve it exactly in reasonable time even for something like 100 vertices (not to mention 50,000).
For a tree there is a polynomial-time greedy algorithm based on DFS, but the fact that you have "random edges added" screws everything up and makes this algorithm useless.
Wikipedia has an article about the approximation algorithm; it achieves factor 2, and no better constant-factor approximation is known, which makes it quite unlikely that you will find one.

Recursive minimal spanning tree algorithms

Is this a correct algorithm for finding a minimal spanning tree?
Divide the graph into two connected parts of roughly equal size. Find their minimal spanning trees. Connect them using the smallest edge that connects them. I am trying to find a counterexample for this algorithm, but I can't.
Consider a four-node graph, connected in a square, with the left edge having cost 10 and all other edges having cost 1. If you divide the graph into left and right for your recursive step, you will end up with a spanning tree of cost 12, instead of cost 3.
MST is not well-adapted to "divide-and-conquer" algorithms. The closest thing is probably the Reverse-Delete algorithm; whenever you fail to remove an edge (because it would disconnect the graph), you can think of the remaining steps as executing recursively on the two sides of that edge.
You have described a divide and conquer algorithm which will not work when determining an MST. Sneftel provided a good counter-example and recursively dividing the graph into two connected parts would be extremely costly.
Instead, a good approach to finding an MST would be to use a greedy algorithm such as Prim's algorithm. We know a greedy algorithm will work because this problem exhibits optimal substructure. For this algorithm, you will want to represent your graph as an adjacency list. First, start at an arbitrary node and add it to a visited list. Add all edges from this node into a min-heap. Include the cheapest edge in your MST and add the connecting node to your visited list. From that node add all edges to your min-heap and then select the cheapest edge to a node that has yet to be visited. Continue doing so until all nodes are visited. Once that is done you have your MST.
You can use other data structures to store the graph and the visited vertices, but the ones outlined above give the best runtime. If we analyze the running time with these data structures we can see that it is O(E log V), which is the time to update costs and maintain the heap each time an edge is removed. More specifically, fixing the heap costs O(log V) and that is done E times.
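Roughly, in Python (the adjacency-list shape is my own assumption; this is the "lazy" heap variant, which is O(E log E), and log E = O(log V)):

```python
import heapq

def prim_mst(adj, start=0):
    """adj: dict mapping node -> list of (neighbour, weight) pairs; assumes a connected graph.
    Returns a list of MST edges (u, v, weight)."""
    visited = {start}
    heap = [(w, start, v) for v, w in adj[start]]   # edges leaving the start vertex
    heapq.heapify(heap)
    mst = []
    while heap and len(visited) < len(adj):
        w, u, v = heapq.heappop(heap)               # cheapest edge on the frontier
        if v in visited:
            continue                                # both ends already in the tree: skip
        visited.add(v)
        mst.append((u, v, w))
        for nxt, nw in adj[v]:                      # push edges from the newly added vertex
            if nxt not in visited:
                heapq.heappush(heap, (nw, v, nxt))
    return mst
```

For example, adj = {0: [(1, 2), (2, 1)], 1: [(0, 2), (2, 3)], 2: [(0, 1), (1, 3)]} yields two edges of total weight 3.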
I also found this quick 2-minute video that outlines Prim's algorithm with an example: Prim's Algorithm in 2 Minutes
I hope this information is helpful!

Sum of Vertices in Induced Graph - Dynamic Programming

This is a homework question so I'll be glad to get a hint.
I have a graph G, where each vertex v has a weight w(v).
S(G) is the sum of the weights of all the vertices in the graph.
I need to find an algorithm that determines whether there is a set of vertices A such that G[A] (the subgraph of G induced by A) is a tree and S(G[A]) = S(G[V\A]).
I know that I should go over all vertices, sum their weights, and then try to find a tree that reaches half of that sum, but I'm not sure exactly how. I'm pretty sure it involves dynamic programming.
Thank you very much,
Yaron.
This is not really a dynamic programming problem, it is a search problem, the key being that you are trying to find a tree.
If you think about it, you already know an algorithm or two that will tell you the minimum spanning tree. By the same logic, you can make a maximum spanning tree. For example, if you find the maximum spanning tree and the sum of its weights is less than 50% (or whatever the target value is), then you know the problem is impossible.
So, following this logic, you can go along as though you were making a spanning tree and reject any path that goes over the target amount. This strategy is known as "branch and bound". Let's imagine how we could do this with Kruskal's algorithm:
(1) you will have a set of trees; start with each vertex as a separate "tree"
(2) maintain a queue of edges you have not used yet, sorted from least to greatest
(3) maintain a stack of edges that you have used
(4) look for an edge that (a) connects two different trees, and (b) makes the sum of the two trees less than the target value (or equal to it, i.e. a solution)
(4a) if no such edge exists, then pop a value from the stack (remove the edge and separate the trees) and try the next value in the queue
(4b) if such an edge does exist, then add the edge (combine two of the trees), push it onto the stack and go back to step 4
Obviously there are different ways to do the same process. For example, you could use a variant of Prim's algorithm as well.
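Since it's homework I'll only sketch the idea in Python, and with simpler bookkeeping than the queue-and-stack version above: grow a connected vertex set from each start vertex, prune as soon as the weight exceeds the target or the induced subgraph picks up a cycle, and stop when you hit the target with a tree. Everything here (input format, names, integer and non-negative weights so the pruning is valid) is my own assumption, and the search is exponential in the worst case:

```python
def equal_split_induced_tree(adj, weight):
    """adj: dict node -> list/set of neighbours; weight: dict node -> non-negative integer weight.
    Searches for a vertex set A such that G[A] is a tree and S(A) equals the weight of the rest.
    Plain branch-and-bound sketch; returns such a set A or None."""
    total = sum(weight.values())
    if total % 2:                        # integer weights assumed: no exact half exists
        return None
    target = total // 2

    def induced_edges(A):
        return sum(1 for u in A for v in adj[u] if v in A) // 2

    seen = set()                         # vertex sets already explored, to avoid rework

    def search(A, s):
        if s > target:                   # bound: weights are non-negative, the sum only grows
            return None
        key = frozenset(A)
        if key in seen:
            return None
        seen.add(key)
        if induced_edges(A) > len(A) - 1:
            return None                  # G[A] has a cycle, so does every induced superset
        if s == target:                  # connected with |A|-1 induced edges: G[A] is a tree
            return set(A)
        for u in A:                      # branch: grow A by any vertex adjacent to it
            for v in adj[u]:
                if v not in A:
                    found = search(A | {v}, s + weight[v])
                    if found is not None:
                        return found
        return None

    for start in adj:                    # every candidate set contains some start vertex
        found = search({start}, weight[start])
        if found is not None:
            return found
    return None
```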

Time complexity of creating a minimal spanning tree if the number of edges is known

Suppose that the number of edges of a connected graph is known and the weight of each edge is distinct; would it be possible to create a minimal spanning tree in linear time?
To do this we must look at each edge, and during this loop we cannot afford any searching, otherwise it would take at least n log n time. I'm not sure how to do this without searching inside the loop. It would mean that, somehow, we must look at each edge only once and decide whether to include it or not based on some "static" previous values that do not involve a growing data structure.
So.. let's say we keep the endpoints of the edge in question, then look at the next edge; if the next edge has the same endpoints as the previous one, then compare the weights and keep the lower one. If the current edge's endpoints are not equal to the previous one's, then it is in a different component.. now I am stuck, because we cannot create a hash or array to keep track of the component nodes that have already been added while looking through each edge in linear time.
Another approach I thought of is to find the edge with the minimal weight; since the edge weights are distinct this edge will be part of any MST. Then.. I am stuck, since we cannot do this for n - 1 edges in linear time.
Any hints?
EDIT
What if we know the number of nodes, the number of edges and also that each edge weight is distinct? Say, for example, there are n nodes and n + 6 edges?
Then we would only have to find and remove the correct 7 edges, correct?
To the best of my knowledge there is no way to compute an MST faster by knowing how many edges there are in the graph and that they are distinct. In the worst case, you would have to look at every edge in the graph before finding the minimum-cost edge (which must be in the MST), which takes Ω(m) time in the worst case. Therefore, I'll claim that any MST algorithm must take Ω(m) time in the worst case.
However, if we're already doing Ω(m) work in the worst-case, we could do the following preprocessing step on any MST algorithm:
Scan over the edges and count up how many there are.
Add an epsilon value to each edge weight to ensure the edges are unique.
This can be done in time O(m) as well. Consequently, if there were a way to speed up MST computation knowing the number of edges and that the edge costs are distinct, we could just apply this preprocessing step to any current MST algorithm to try to get faster performance. Since, to the best of my knowledge, no MST algorithm actually does this for performance reasons, I would suspect that there isn't a (known) way to get a faster MST algorithm based on this extra knowledge.
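As a tiny illustration of those two steps in Python; instead of a literal epsilon I break ties with the original edge index, which achieves the same distinctness without floating-point worries (that substitution is mine):

```python
def preprocess(edges):
    """edges: list of (weight, u, v) tuples.
    Returns the edge count and a copy of the edges whose 'weights' are guaranteed distinct,
    because each weight is replaced by the comparable pair (weight, original index)."""
    m = len(edges)                                                     # step 1: count, O(m)
    distinct = [((w, i), u, v) for i, (w, u, v) in enumerate(edges)]   # step 2: tie-break, O(m)
    return m, distinct
```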
Hope this helps!
There's a famous randomised linear-time algorithm for minimum spanning trees whose complexity is linear in the number of edges. See "A randomized linear-time algorithm to find minimum spanning trees" by Karger, Klein, and Tarjan.
The key result in the paper is their "sampling lemma" -- that, if you independently randomly select a subset of the edges with probability p and find the minimum spanning forest of this subgraph, then in expectation there are only |V|/p edges that are better than the worst edge on the forest path connecting their ends.
As templatetypedef noted, you can't beat linear-time. That all edge weights are distinct is a common assumption that simplifies analysis; if anything, it makes MST algorithms run a little slower.
The fact that the number of edges (N) is known does not influence the complexity in any way. N is still a finite but unbounded variable, and each graph will have a different N. If you place an upper bound on N, say 1 million, then the complexity is O(1 million log 1 million) = O(1).
The fact that each edge has a distinct weight does not influence the algorithm either, because it does not say anything about the graph's structure. Therefore knowledge about the current case cannot influence further processing, as we cannot predict how the graph's structure will look in the next step.
If the number of edges is close to n, like in this case n + 6 (after the edit), we know that we only need to remove 7 edges, as every spanning tree has exactly n - 1 edges.
The cycle property says that the most expensive edge in a cycle does not belong to any minimum spanning tree (assuming all edge weights are distinct) and thus should be removed.
Now you can simply apply BFS or DFS to identify a cycle and remove its most expensive edge. So, overall, we need to run this 7 times; each run takes O(n) time, giving a time complexity of O(n). Again, this is only true if the number of edges is close to the number of nodes.
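A rough Python sketch of that idea, under my own assumptions about the input (vertices 0..n-1, an edge list, a connected graph): repeatedly find one cycle with DFS and delete its most expensive edge until only n - 1 edges remain.

```python
import sys

def almost_tree_mst(n, edges):
    """n vertices 0..n-1, edges given as (u, v, weight) triples; assumes a connected graph
    with distinct weights and only a few more edges than n - 1."""
    sys.setrecursionlimit(max(10000, 2 * n))
    edges = list(edges)
    while len(edges) > n - 1:                  # runs (#edges) - (n - 1) times, e.g. 7 times
        adj = {v: [] for v in range(n)}
        for idx, (u, v, w) in enumerate(edges):
            adj[u].append((v, idx))
            adj[v].append((u, idx))

        on_path = {}                           # vertex -> its position on the current DFS path
        path_edges = []                        # edge indices along the current DFS path

        def dfs(u, in_edge):
            on_path[u] = len(path_edges)
            for v, idx in adj[u]:
                if idx == in_edge:
                    continue                   # don't walk straight back along the same edge
                if v in on_path:               # back edge: it closes a cycle with the path below v
                    return path_edges[on_path[v]:] + [idx]
                path_edges.append(idx)
                cycle = dfs(v, idx)
                if cycle is not None:
                    return cycle
                path_edges.pop()
            del on_path[u]
            return None

        cycle = dfs(0, None)                   # a connected graph with > n-1 edges has a cycle
        heaviest = max(cycle, key=lambda i: edges[i][2])
        edges = [e for i, e in enumerate(edges) if i != heaviest]
    return edges                               # the n - 1 surviving edges form the MST
```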

How to find two disjoint spanning trees of an undirected graph

Is there any applicable approach to find two disjoint spanning trees of an undirected graph, or to check whether a certain graph has two disjoint spanning trees?
This is an example of matroid union. Consider the graphic matroid whose bases are given by the spanning trees. Now the union of this matroid with itself is again a matroid. Your question is about the size of a basis of this matroid (whether there exists a basis of size $2(|V|-1)$).
The canonical algorithm for this is the matroid partitioning algorithm. There exists an algorithm which does the following: it maintains a set of edges with a partitioning into two forests. At each step, given a new edge $e$, it decides whether there exists a reshuffling of the current partition into a new partition such that the new edge can be added to the set and the partition remains independent. If not, it provides a certificate that it cannot.
For details look at a course in combinatorial optimization or the book by Schrijver.
Not sure it helps much on the practical side, but Tutte [1961a] and Nash-Williams [1961] independently characterized graphs having k pairwise edge-disjoint spanning trees:
A graph G has k pairwise edge-disjoint spanning trees iff for every partition of the vertices of G into r sets, there are at least k(r-1) edges of G whose endpoints are in different sets of the partition.
Use k=2 and it may give you a lead for your needs.
According to A Note on Finding Minimum-Cost Edge-Disjoint Spanning Trees, this can be solved in $O(k^2 n^2)$, where k is the number of disjoint spanning trees and n is the number of vertices.
Unfortunately, all but the first page of the article is behind a paywall.
Assuming that the desire is to find spanning trees with disjoint edge sets, what about:
Given a graph G, determine the minimum spanning tree A of G.
Define B = G - A by deleting from G all edges that also lie in A.
Check if B is connected.
The nature of a minimum spanning tree somehow makes me intuitively believe that choosing it as one of the two spanning trees gives you maximum freedom in constructing the other (which hopefully turns out to be edge-disjoint).
What do you guys think?
edit
The above algorithm makes no sense as a spanning tree is a tree and therefore needs to be acyclic. But there is no guarantee that B = G - A is acyclic.
However, this observation (thx #Tormer) led me to another idea:
Given a graph G, determine the minimum spanning tree A of G.
Define B = (V[G], E[G] \ E[A]), where V[G] denotes the vertices of G and E[G] the edges of G (and similarly for A).
Determine if B has a spanning tree.
It could very well be that the above algorithm fails although G indeed has two edge-disjoint spanning trees - just neither of them is G's minimum spanning tree. I can't judge this (now), so I'm asking for your opinion on whether it's wise to always choose the minimum spanning tree as one of the two.
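For what it's worth, here is a small Python sketch of that heuristic, with the caveat already stated above: a negative answer does not prove that two edge-disjoint spanning trees don't exist, because fixing one of them to be the MST may simply be the wrong choice. The input format and names are my own assumptions.

```python
def mst_heuristic_two_disjoint_trees(n, edges):
    """n vertices 0..n-1, edges given as (u, v, weight) triples.
    Heuristic only: build a minimum spanning tree A with Kruskal, delete E[A] from G,
    and test whether B = (V[G], E[G] minus E[A]) is still connected, i.e. contains a
    spanning tree. A False result is NOT a proof that no two edge-disjoint trees exist."""

    def make_union_find():
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        def union(a, b):
            ra, rb = find(a), find(b)
            if ra == rb:
                return False
            parent[ra] = rb
            return True
        return union

    # Kruskal's algorithm for the minimum spanning tree A.
    union = make_union_find()
    mst_edge_ids = set()
    for i, (u, v, w) in sorted(enumerate(edges), key=lambda t: t[1][2]):
        if union(u, v):
            mst_edge_ids.add(i)
    if len(mst_edge_ids) != n - 1:
        return False                     # G itself is not even connected

    # Connectivity check for B = G minus the MST edges.
    union = make_union_find()
    components = n
    for i, (u, v, w) in enumerate(edges):
        if i not in mst_edge_ids and union(u, v):
            components -= 1
    return components == 1
```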
