Worst-Case Graph for Prim's Algorithm

My Algorithms class is covering Prim's Algorithm as a method of finding Minimum Spanning Trees of weighted graphs. Our professor asked us to try to think of an example of a graph that Prim's Algorithm takes N^2 time to solve (N = number of vertices). No one in the class could think of one off the top of their head, so I'm asking you. I'm pretty sure Prim's Algorithm is O(N^2), so this would be the worst-case scenario for the algorithm.
What's a good example of a graph that takes N^2 time for Prim's Algorithm to solve?

If I understand your question correctly, the example is trivial.
If the graph is complete, there are O(N^2) edges (N(N-1)/2, to be exact), so just reading the graph is already O(N^2).
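To make that concrete, here is a minimal sketch (mine, not from the thread; the function name and matrix format are illustrative assumptions) of the textbook adjacency-matrix version of Prim's algorithm. It does N^2 work unconditionally: N rounds of picking the cheapest outside vertex, each scanning all N vertices. A complete graph, as suggested above, is the natural worst-case input, since any representation of it already has O(N^2) edges to read.

```python
import math
import random

def prim_mst_weight(w):
    """Adjacency-matrix Prim's algorithm: Theta(N^2) time on any input.
    w: N x N symmetric matrix, w[i][j] = weight of edge (i, j), math.inf if absent."""
    n = len(w)
    in_tree = [False] * n
    dist = [math.inf] * n      # cheapest known edge from each vertex to the tree
    dist[0] = 0                # start growing the tree from vertex 0
    total = 0.0
    for _ in range(n):                               # N rounds ...
        u = min((v for v in range(n) if not in_tree[v]), key=dist.__getitem__)
        in_tree[u] = True
        total += dist[u]
        for v in range(n):                           # ... of Theta(N) work each
            if not in_tree[v] and w[u][v] < dist[v]:
                dist[v] = w[u][v]
    return total

# Worst-case input: a complete graph on N vertices with random weights.
N = 200
w = [[math.inf] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        w[i][j] = w[j][i] = random.random()
print(prim_mst_weight(w))
```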

Related

MST in directed graph

There are Prim's and Kruskal's algorithms to find an MST in polynomial time.
I wonder: are there any algorithms to find an MST in a directed acyclic graph in linear time?
The equivalent of an MST in a directed graph is called an optimum branching or minimum-cost arborescence, and there are several good algorithms for finding one. The most famous is probably the Chu-Liu/Edmonds algorithm, which can be implemented in time O(mn) in a straightforward way and in time O(m + n log n) using more clever data structures.
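In case it's useful, here is a minimal sketch (my own, not from the answer; the names and the (u, v, w) arc format are assumptions) of the straightforward contraction version of Chu-Liu/Edmonds: pick the cheapest incoming arc per vertex, contract any cycles those arcs form, and repeat on the smaller graph.

```python
import math

def min_arborescence_weight(n, root, edges):
    """Straightforward Chu-Liu/Edmonds sketch. n: vertex count (0..n-1);
    edges: list of (u, v, w) arcs. Returns the weight of a minimum
    arborescence rooted at `root`, or None if some vertex is unreachable."""
    total = 0
    while True:
        # 1. cheapest incoming arc for every non-root vertex
        min_in = [math.inf] * n
        pre = [-1] * n
        for u, v, w in edges:
            if u != v and w < min_in[v]:
                min_in[v], pre[v] = w, u
        if any(v != root and min_in[v] == math.inf for v in range(n)):
            return None
        # 2. sum the chosen arcs, then look for cycles among them
        total += sum(min_in[v] for v in range(n) if v != root)
        comp = [-1] * n          # component id after contraction
        seen = [-1] * n
        cnt = 0
        for v in range(n):
            u = v                # walk backwards along chosen arcs
            while u != root and seen[u] != v and comp[u] == -1:
                seen[u] = v
                u = pre[u]
            if u != root and comp[u] == -1:   # new cycle through u
                x = u
                while True:
                    comp[x] = cnt
                    x = pre[x]
                    if x == u:
                        break
                cnt += 1
        if cnt == 0:
            return total         # chosen arcs already form an arborescence
        for v in range(n):       # every non-cycle vertex becomes its own node
            if comp[v] == -1:
                comp[v] = cnt
                cnt += 1
        # 3. contract cycles; entering a cycle "refunds" the arc it replaces
        edges = [(comp[u], comp[v], w - (min_in[v] if v != root else 0))
                 for u, v, w in edges if comp[u] != comp[v]]
        n, root = cnt, comp[root]
```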
Hope this helps!

Partitioning a graph into two clusters

I have a complete weighted graph G(V, E). I want to partition V into two clusters such that the maximum intra-cluster edge length is minimized. What is the fastest algorithm that solves this problem? I believe it can be solved in O(n^2) time, where |V| = n. One approach would be making the graph bipartite. I could not figure out the complete algorithm. Can anyone help me figure it out?
Two-color (depth-first search, O(n) time) a maximum spanning forest (Prim's algorithm, O(n^2) time). Proof of correctness left as an exercise.
For the record, for sparser graphs with only m edges, I'm pretty sure there's an O(m)-time algorithm.
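To make the two steps concrete, here is a minimal sketch (my own; the names and matrix format are assumptions) of that recipe: the O(n^2) adjacency-matrix Prim's algorithm run to maximize edge weights, followed by an O(n) traversal that alternates colors along the tree edges, so every tree edge ends up between the two clusters.

```python
import math

def two_cluster_partition(weights):
    """weights: n x n symmetric matrix of edge lengths for a complete graph.
    Returns a list of 0/1 cluster labels, one per vertex."""
    n = len(weights)
    # Step 1: MAXIMUM spanning tree via adjacency-matrix Prim's, O(n^2).
    in_tree = [False] * n
    best = [-math.inf] * n     # heaviest edge connecting v to the tree so far
    parent = [-1] * n
    best[0] = math.inf         # seed: vertex 0 enters first
    adj = [[] for _ in range(n)]                 # adjacency of the tree
    for _ in range(n):
        u = max((v for v in range(n) if not in_tree[v]), key=best.__getitem__)
        in_tree[u] = True
        if parent[u] != -1:
            adj[u].append(parent[u])
            adj[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and weights[u][v] > best[v]:
                best[v] = weights[u][v]
                parent[v] = u
    # Step 2: two-color the tree with a traversal, O(n).
    color = [-1] * n
    color[0] = 0
    stack = [0]
    while stack:
        u = stack.pop()
        for v in adj[u]:
            if color[v] == -1:
                color[v] = 1 - color[u]
                stack.append(v)
    return color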

Building MST from a graph with "very few" edges in linear time

I was at an interview and the interviewer asked me a question:
We have a graph G(V, E); we can find an MST using Prim's or Kruskal's algorithm. But these algorithms do not take into account that there are "very few" edges in G. How can we use this information to improve the time complexity of finding the MST? Can we find the MST in linear time?
The only thing I could remember was that Kruskal's algorithm is faster on sparse graphs while Prim's algorithm is faster on really dense graphs, but I couldn't tell him how to use prior knowledge about the number of edges to build the MST in linear time.
Any insight or solution would be appreciated.
Kruskal's algorithm is pretty much linear after sorting the edges. If you use a union-find structure like a disjoint-set forest, the complexity of processing a single edge is on the order of lg*(n), where n is the number of vertices, and this function grows so slowly that for this purpose it can be considered constant. The problem, however, is that sorting the edges still takes O(m log m), where m is the number of edges.
Prim's algorithm will not be able to take advantage of the fact that the edges are very few.
One approach you can use is a 'reversed' MST approach (reverse-delete), where you start off with all edges and repeatedly remove the longest edge whose removal does not disconnect the graph. You keep doing that until only n - 1 edges are left. Note that this beats Kruskal only if the number of edges to remove, k, is small enough that k * n < m * log(m).
Let's say |E| = |V| + c, with c a small constant. You can run DFS on the graph, and every time you detect a cycle, remove its largest edge. You must do that at most c + 1 times, so the total is O((c + 1) * |E|) = O(|E|): linear time in theory.
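Here is a minimal sketch of that last idea (my own code; names and the (u, v, w) edge format are assumptions). Each pass finds one cycle by DFS and deletes its heaviest edge, which is safe by the cycle property of MSTs; a graph with |E| = |V| + c needs at most c + 1 such passes before it becomes a forest.

```python
import sys

def one_cycle(n, edges):
    """One DFS pass, O(|V| + |E|): return the edge indices of some cycle,
    or None if the graph is a forest. edges: list of (u, v, w)."""
    sys.setrecursionlimit(2 * n + 1000)
    adj = [[] for _ in range(n)]
    for i, (u, v, _) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    state = [0] * n                 # 0 = new, 1 = on current path, 2 = done
    parent = [-1] * n
    parent_edge = [-1] * n

    def dfs(u):
        state[u] = 1
        for v, i in adj[u]:
            if i == parent_edge[u]:          # don't reuse the edge to the parent
                continue
            if state[v] == 1:                # back edge: v is an ancestor of u
                cyc, x = [i], u
                while x != v:                # climb the tree path back to v
                    cyc.append(parent_edge[x])
                    x = parent[x]
                return cyc
            if state[v] == 0:
                parent[v], parent_edge[v] = u, i
                found = dfs(v)
                if found is not None:
                    return found
        state[u] = 2
        return None

    for s in range(n):
        if state[s] == 0:
            cyc = dfs(s)
            if cyc is not None:
                return cyc
    return None

def almost_tree_mst(n, edges):
    """MST for |E| = |V| + c with small c: the heaviest edge on any cycle
    is never needed, so delete it and repeat. At most c + 1 passes."""
    edges = list(edges)
    while True:
        cyc = one_cycle(n, edges)
        if cyc is None:
            return edges                     # what's left is the MST
        worst = max(cyc, key=lambda i: edges[i][2])
        edges.pop(worst)
```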

Linear time algorithm for finding value of MST in a graph?

Is there a linear O(n + m)-time algorithm for finding just the value r of the minimum spanning tree of a given graph G(V, E)? We do not want to find the MST itself, just the sum of its edge weights.
I have searched for a solution to the problem, but Kruskal's and Prim's algorithms have higher complexity because of the comparison structures they use (union-find for Kruskal and a priority queue for Prim). Also, they find the MST itself, which is not needed, and maybe there is a faster way to find only r.
If your edge weights are integers, there is a linear algorithm from Fredman and Willard in the following publication:
http://www.sciencedirect.com/science/article/pii/S0022000005800649
There is also a randomized linear-time algorithm from Karger, Klein and Tarjan in the comparison model:
http://dl.acm.org/citation.cfm?doid=201019.201022
I believe that in the comparison model, Chazelle's algorithm using soft heaps is the fastest deterministic one, but it's not linear (you have an inverse Ackermann overhead).
No. There's no linear solution known.
You can optimize Kruskal with disjoint-set optimizations and a radix/counting sort so that the complexity is O(E alpha(V)), where alpha is the very slowly growing inverse Ackermann function. For most datasets this will be almost indistinguishable from linear. At that point you can probably gain more at run time by optimizing the code rather than the algorithm.
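As a sketch of that optimized Kruskal (my own illustration; it assumes non-negative integer weights bounded by a known max_w), a counting sort replaces the O(m log m) comparison sort, leaving the near-linear union-find as the bottleneck:

```python
def mst_value(n, edges, max_w):
    """Kruskal with counting sort + disjoint-set forest: O(m * alpha(n) + max_w).
    Returns only the total weight r of the MST (assumes a connected graph).
    edges: list of (u, v, w) with integer w in [0, max_w]."""
    buckets = [[] for _ in range(max_w + 1)]   # counting sort by weight
    for u, v, w in edges:
        buckets[w].append((u, v))
    parent = list(range(n))
    rank = [0] * n

    def find(x):                               # path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    r = taken = 0
    for w, bucket in enumerate(buckets):       # edges in nondecreasing weight
        for u, v in bucket:
            ru, rv = find(u), find(v)
            if ru == rv:
                continue                       # would close a cycle
            if rank[ru] < rank[rv]:            # union by rank
                ru, rv = rv, ru
            parent[rv] = ru
            if rank[ru] == rank[rv]:
                rank[ru] += 1
            r += w
            taken += 1
            if taken == n - 1:
                return r
    return r
```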

Why does A* run faster than Dijkstra's algorithm?

Wikipedia says A* runs in O(|E|) where |E| is the number of edges in the graph. But my friend says A* is just a general case of Dijkstra's algorithm, and Dijkstra's algorithm runs in O(|E| + |V| log |V|). So I am confused about why A* runs faster than Dijkstra's algorithm.
I think the time complexity of A* listed on Wikipedia is incorrect (or at least, it's misleading). That time complexity only seems to count the number of states expanded in the search, rather than the time required to determine which states to explore.
To be efficient, A* search needs to store a priority queue containing the fringe nodes that still need to be explored, and it has to be able to call decrease-key on those priorities. The runtime for this is, in the worst case, O(n log n + m) if implemented using a good priority queue. Therefore, in the worst case, you'd expect A* to degrade to Dijkstra's algorithm. Given a good heuristic, A* will not expand all the nodes and edges that Dijkstra's algorithm would, which is the main reason why A* is faster.
Of course, the time complexity of A* search needs to factor in the cost of computing the heuristic. Some complex heuristics might not be computable in time O(1), in which case the runtime for A* could actually be worse than Dijkstra's algorithm.
Hope this helps!
Essentially, A* is faster because it can use heuristics to make more educated guesses about which route is the best to take, something which Dijkstra's algorithm does not do.
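To illustrate the relationship both answers describe, here is a minimal sketch (my own; the names and the dict-based graph format are assumptions) of A* with a binary heap. With the zero heuristic it processes nodes in exactly Dijkstra order; a good heuristic just lets it stop exploring sooner.

```python
import heapq

def a_star(adj, start, goal, h):
    """adj: dict mapping node -> list of (neighbor, edge_cost).
    h(node) must never overestimate the true distance to goal.
    With h = lambda _: 0 this is exactly Dijkstra's algorithm."""
    dist = {start: 0}
    pq = [(h(start), start)]            # priority f(n) = g(n) + h(n)
    while pq:
        f, u = heapq.heappop(pq)
        if u == goal:
            return dist[u]
        if f > dist[u] + h(u):          # stale entry for an already-improved node
            continue
        for v, w in adj.get(u, ()):
            g = dist[u] + w
            if g < dist.get(v, float("inf")):
                dist[v] = g
                heapq.heappush(pq, (g + h(v), v))
    return None                         # goal unreachable

# With the zero heuristic, this runs as plain Dijkstra:
graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 1)], "c": []}
print(a_star(graph, "a", "c", lambda _: 0))   # -> 2
```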
