Some questions on MST algorithms

I am learning about minimum spanning trees right now, and I understand most of it, but there are still some things I do not understand.
I am dealing with undirected weighted graphs.
First, I know that finding an MST costs O(E log V). Now I want to optimize it to linear time, O(V+E), when dealing with planar graphs.
Secondly, I saw an example with n points in the unit square, and I managed to show that an MST of weight O(sqrt n) exists. The problem is that I could not find an algorithm to find this MST.
Thanks all,
Or

Boruvka's algorithm runs in O(V) time on planar graphs. For details see
http://www.cs.princeton.edu/~wayne/kleinberg-tardos/pdf/04GreedyAlgorithmsII.pdf
Also, you can compute the Euclidean MST of n points in the plane in O(n log n) time by computing the MST of the edges of the Delaunay triangulation.
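In case a concrete sketch helps, here is a plain Python version of Boruvka's algorithm (the standard O(E log V) form; the linear bound on planar graphs additionally relies on contracting components and discarding parallel edges after every round, which this sketch skips). The edge-list representation and the names are my own:

    def boruvka_mst(n, edges):
        # n vertices labelled 0..n-1, edges given as (w, u, v) tuples;
        # assumes distinct edge weights, the usual Boruvka assumption.
        parent = list(range(n))

        def find(x):                      # union-find with path halving
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        mst, components = [], n
        while components > 1:
            cheapest = [None] * n         # cheapest edge leaving each component
            for w, u, v in edges:
                ru, rv = find(u), find(v)
                if ru == rv:
                    continue
                if cheapest[ru] is None or w < cheapest[ru][0]:
                    cheapest[ru] = (w, u, v)
                if cheapest[rv] is None or w < cheapest[rv][0]:
                    cheapest[rv] = (w, u, v)
            added = False
            for e in cheapest:
                if e is None:
                    continue
                w, u, v = e
                ru, rv = find(u), find(v)
                if ru != rv:              # merge the two components
                    parent[ru] = rv
                    mst.append((u, v, w))
                    components -= 1
                    added = True
            if not added:                 # disconnected graph: stop with a forest
                break
        return mst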

Related

MST in directed graph

There are Prim's and Kruskal's algorithms to find an MST in polynomial time.
I wonder, are there any algorithms to find an MST in a directed acyclic graph in linear time?
The equivalent of an MST in a directed graph is called an optimum branching or a minimum-cost arborescence, and there are several good algorithms for finding one. The most famous is probably the Chu-Liu/Edmonds algorithm, which can be implemented in O(mn) time in a straightforward way and in O(m + n log n) time using more clever data structures.
Hope this helps!
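If a concrete sketch helps, here is a compact Python version of the straightforward O(mn) contraction form of Chu-Liu/Edmonds (the representation and names are my own). One remark relevant to the question: on a DAG the contraction step never fires, so the very first pass, picking the cheapest incoming edge of every non-root vertex, already gives the answer in linear time.

    def min_arborescence(n, edges, root):
        # Directed edges as (u, v, w) tuples, vertices 0..n-1. Returns the
        # cost of a minimum-cost arborescence rooted at `root`, or None if
        # some vertex is unreachable from the root.
        INF = float("inf")
        total = 0
        while True:
            # 1. cheapest incoming edge of every vertex
            in_w, in_u = [INF] * n, [-1] * n
            for u, v, w in edges:
                if u != v and w < in_w[v]:
                    in_w[v], in_u[v] = w, u
            in_w[root] = 0
            if any(x == INF for x in in_w):
                return None
            total += sum(in_w)
            # 2. find cycles among the chosen edges
            comp, vis, cnt = [-1] * n, [-1] * n, 0
            for v in range(n):
                x = v
                while x != root and vis[x] != v and comp[x] == -1:
                    vis[x] = v
                    x = in_u[x]
                if x != root and comp[x] == -1:   # x closes a new cycle
                    comp[x] = cnt
                    y = in_u[x]
                    while y != x:
                        comp[y] = cnt
                        y = in_u[y]
                    cnt += 1
            if cnt == 0:                  # no cycle: the chosen edges are optimal
                return total
            # 3. contract cycles; every other vertex becomes its own node
            for v in range(n):
                if comp[v] == -1:
                    comp[v] = cnt
                    cnt += 1
            new_edges = []
            for u, v, w in edges:
                if comp[u] != comp[v]:
                    # entering a contracted node costs only the improvement
                    new_edges.append((comp[u], comp[v], w - in_w[v]))
            n, edges, root = cnt, new_edges, comp[root]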

Partitioning a graph into two clusters

I have a complete weighted graph G(V, E). I want to partition V into two clusters such that the maximum intra-cluster edge length is minimized. What is the fastest algorithm that solves this problem? I believe this can be solved in O(n^2) time, where |V| = n. One approach would be making the graph bipartite. I could not figure out the complete algorithm. Can anyone help me figure out the complete algorithm?
Two-color (depth-first search, O(n) time) a maximum spanning forest (Prim's algorithm, O(n^2) time). Proof of correctness left as an exercise.
For the record, for sparser graphs with only m edges, I'm pretty sure there's an O(m)-time algorithm.
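If a sketch helps, here is a rough Python version of that recipe, assuming the complete graph is given as a symmetric n x n weight matrix (the representation and names are mine): build a maximum spanning tree with the O(n^2) adjacency-matrix form of Prim's algorithm, then two-color it with a DFS; the two color classes are the clusters.

    def two_clusters(w):
        n = len(w)
        in_tree = [False] * n
        best = [float("-inf")] * n       # heaviest edge joining v to the tree
        parent = [-1] * n
        best[0] = float("inf")           # start the tree at vertex 0
        tree = [[] for _ in range(n)]
        for _ in range(n):
            # pick the vertex joined to the tree by the heaviest edge
            v = max((u for u in range(n) if not in_tree[u]),
                    key=lambda u: best[u])
            in_tree[v] = True
            if parent[v] != -1:
                tree[v].append(parent[v])
                tree[parent[v]].append(v)
            for u in range(n):
                if not in_tree[u] and w[v][u] > best[u]:
                    best[u], parent[u] = w[v][u], v
        # two-color the tree (iterative DFS to avoid recursion limits)
        color = [-1] * n
        stack = [(0, 0)]
        while stack:
            v, c = stack.pop()
            if color[v] != -1:
                continue
            color[v] = c
            for u in tree[v]:
                if color[u] == -1:
                    stack.append((u, 1 - c))
        return ([v for v in range(n) if color[v] == 0],
                [v for v in range(n) if color[v] == 1])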

Can we use BFS from each vertex in order to find the graph's diameter? If so, is this the best solution?

So I found an old topic:
Algorithm for diameter of graph?
in which they said the best solution for a non-sparse graph is O(V^3).
But can't we just use BFS from each vertex and then take the maximum?
That way the time complexity would be O(V*(V+E)) = O(V^2 + VE).
Am I wrong? Because if the number of edges is just a constant multiple of V, then this would work better, right?
So I guess my questions are:
What is the best time complexity for computing a graph's diameter as of now, in 2018?
Is my method wrong? What am I missing here?
The graph in question is non-sparse, so in the worst case it has E ~ V^2/2 edges. The solution you mention thus becomes O(V^2 + V*(V^2)) = O(V^3) for non-sparse graphs.
If the graph were sparse, then it would indeed be faster than O(V^3).
Also, given that the graph is non-sparse, it is usually represented with an adjacency matrix for faster lookups. A breadth-first search on an adjacency matrix takes O(V^2), and doing this from every node, as you mentioned, again leads to O(V^3) computational time complexity.
Finding the diameter can be done by computing all-pairs shortest paths first and taking the maximum length found. The Floyd-Warshall algorithm does this in O(V^3) time. Johnson's algorithm can be implemented to achieve O(V^2 log V + VE) time.
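For reference, the "BFS from every vertex" approach described in the question, on an unweighted graph stored as adjacency lists, would look roughly like this (a sketch; the names are mine). It runs in O(V*(V+E)) overall, which only beats O(V^3) when the graph is sparse.

    from collections import deque

    def diameter_bfs(adj):
        # adj[v] is the list of neighbours of v; unweighted, connected graph
        n = len(adj)
        diameter = 0
        for s in range(n):
            dist = [-1] * n
            dist[s] = 0
            q = deque([s])
            while q:
                v = q.popleft()
                for u in adj[v]:
                    if dist[u] == -1:
                        dist[u] = dist[v] + 1
                        q.append(u)
            diameter = max(diameter, max(dist))
        return diameter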

Building MST from a graph with "very few" edges in linear time

I was at an interview and the interviewer asked me a question:
We have a graph G(V, E); we can find an MST using Prim's or Kruskal's algorithm, but these algorithms do not take into account that there are "very few" edges in G. How can we use this information to improve the time complexity of finding an MST? Can we find the MST in linear time?
The only thing I could remember was that Kruskal's algorithm is faster on sparse graphs while Prim's algorithm is faster on really dense graphs, but I couldn't tell him how to use prior knowledge about the number of edges to build the MST in linear time.
Any insight or solution would be appreciated.
Kruskal's algorithm is pretty much linear after sorting the edges. If you use a union-find structure like a disjoint-set forest, the cost of processing a single edge is on the order of lg*(n), where n is the number of vertices; this function grows so slowly that for practical purposes it can be considered constant. The problem, however, is that sorting the edges still costs O(m * log(m)), where m is the number of edges.
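For illustration, a small Python sketch of Kruskal's algorithm with a disjoint-set forest (path halving plus union by size; the names are mine). After the O(m log m) sort, each edge is handled in near-constant amortized time:

    def kruskal(n, edges):
        # n vertices labelled 0..n-1, edges as (w, u, v) tuples
        parent = list(range(n))
        size = [1] * n

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        mst = []
        for w, u, v in sorted(edges):           # the O(m log m) part
            ru, rv = find(u), find(v)
            if ru == rv:
                continue                        # edge would close a cycle
            if size[ru] < size[rv]:
                ru, rv = rv, ru
            parent[rv] = ru                     # union by size
            size[ru] += size[rv]
            mst.append((u, v, w))
        return mst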
Prim's algorithm will not be able to take advantage of the fact that the edges are very few.
One approach you can use is a "reverse-delete" MST approach: start with all the edges and repeatedly remove the longest edge whose removal does not disconnect the graph, until only n - 1 edges are left. Still, note that this beats Kruskal only if the number k of edges to remove is small enough that k * n < m * log(m).
Let's say |E| = |V| + c, with c a small constant. You can run DFS on the graph and, every time you detect a cycle, remove its largest edge; you must do that at most c + 1 times, giving O((c + 1) * |E|) = O(E), i.e., linear time in theory.
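A possible sketch of that idea in Python (the representation and names are mine, and the sketch builds its spanning tree with BFS rather than literal DFS cycle detection, but the idea is the same): each pass builds a spanning tree of the remaining edges, uses any leftover non-tree edge to recover a cycle, and deletes that cycle's heaviest edge, which by the cycle property cannot belong to the MST. With k edges beyond a spanning tree this is k + 1 passes of O(V + E) each, assuming a connected simple graph.

    from collections import deque

    def sparse_mst(n, edges):
        # edges: list of (u, v, w); the graph is assumed connected and simple
        alive = [True] * len(edges)
        adj = [[] for _ in range(n)]
        for i, (u, v, _) in enumerate(edges):
            adj[u].append((v, i))
            adj[v].append((u, i))

        def find_cycle():
            # spanning tree of the alive edges, rooted at vertex 0
            parent_edge, depth = [-1] * n, [-1] * n
            depth[0] = 0
            q, tree = deque([0]), set()
            while q:
                v = q.popleft()
                for u, i in adj[v]:
                    if alive[i] and depth[u] == -1:
                        depth[u], parent_edge[u] = depth[v] + 1, i
                        tree.add(i)
                        q.append(u)
            # any alive non-tree edge closes a cycle with the tree paths
            for i, (u, v, _) in enumerate(edges):
                if alive[i] and i not in tree:
                    cycle = [i]
                    while u != v:                # climb both ends to the LCA
                        if depth[u] < depth[v]:
                            u, v = v, u
                        cycle.append(parent_edge[u])
                        e = edges[parent_edge[u]]
                        u = e[0] if e[1] == u else e[1]
                    return cycle
            return None

        cycle = find_cycle()
        while cycle is not None:
            heaviest = max(cycle, key=lambda i: edges[i][2])
            alive[heaviest] = False              # cycle property: drop it
            cycle = find_cycle()
        return [edges[i] for i in range(len(edges)) if alive[i]]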

Why do Kruskal and Prim MST algorithms have different runtimes for sparse and dense graphs?

I am trying to understand why Prim and Kruskal have different time complexities when it comes to sparse and dense graphs. After using a couple of applets that demonstrate how each works, I am still left a little confused about how the density of the graph affects the algorithms. I hope someone could give me a nudge in the right direction.
Wikipedia gives the complexity of these algorithms in terms of E, the number of edges, and V, the number of vertices, which is a good practice because it lets you do exactly this sort of analysis.
Kruskal's algorithm is O(E log V). Prim's complexity depends on which data structure you use for it. Using an adjacency matrix, it's O(V^2).
Now if you plug in V^2 for E, behold, you get the complexities that you cited in your comment for dense graphs, and if you plug in V for E, lo, you get the sparse ones.
Why do we plug in V^2 for a dense graph? Well, even in the densest possible graph you can't have as many as V^2 edges, so clearly E = O(V^2).
Why do we plug in V for a sparse graph? Well, you have to define what you mean by sparse, but suppose we call a graph sparse if each vertex has no more than five edges. I would say such graphs are pretty sparse: once you get up into the thousands of vertices, the adjacency matrix would be mostly empty space. That would mean that for sparse graphs, E ≤ 5 V, so E = O(V).
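In case it helps to see where the adjacency-matrix O(V^2) for Prim's algorithm comes from, here is a rough Python sketch (the representation and names are mine): V iterations, each scanning all V vertices, with no dependence on E at all.

    def prim_matrix(w):
        # w is an n x n matrix of edge weights, float('inf') where no edge;
        # the graph is assumed connected
        n = len(w)
        INF = float("inf")
        in_tree = [False] * n
        dist = [INF] * n                 # cheapest edge from v to the tree
        parent = [-1] * n
        dist[0] = 0
        total = 0
        for _ in range(n):               # V iterations ...
            v = min((u for u in range(n) if not in_tree[u]),
                    key=lambda u: dist[u])
            in_tree[v] = True
            total += dist[v]
            for u in range(n):           # ... each scanning all V vertices
                if not in_tree[u] and w[v][u] < dist[u]:
                    dist[u], parent[u] = w[v][u], v
        return total, parent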
Are these different complexities with respect to the number of vertices by any chance?
There is often a slightly handwavy argument that says that for a sparse graph the number of edges is E = O(V), where V is the number of vertices, and for a dense graph E = O(V^2). As both algorithms potentially have complexity that depends on E, when you convert this to a complexity that depends on V you get different complexities depending on whether the graph is dense or sparse.
edit:
Different data structures will also affect the complexity, of course; Wikipedia has a breakdown of this.
Algorithms by Cormen et al. does indeed give an analysis, in both cases using a sparse representation of the graph. With Kruskal's algorithm (link vertices in disjoint components until everything joins up), the first step is to sort the edges of the graph, which takes O(E lg E) time, and they simply establish that nothing else takes longer than this. With Prim's algorithm (extend the current tree by adding the closest vertex not already in it), they use a Fibonacci heap to store the queue of pending vertices and get O(E + V lg V), because with a Fibonacci heap decreasing the distance to a vertex in the queue is only O(1), and you do this at most once per edge.
