Graph:: Deletion Contraction Complexity? - algorithm

I am applying the classic deletion-contraction algorithm to a graph G with n vertices and m edges.
Z(G) = Z(G-e) + Z(G/e)
On Wikipedia,
http://en.wikipedia.org/wiki/Chromatic_polynomial#Deletion.E2.80.93contraction
they say that the complexity is O(1.6180^(n+m)).
My main question is: why did they include the number of vertices in the complexity, when it is clear that the recursion only depends on the number of edges?
The closest reference to deletion-contraction is the Fibonacci sequence, whose computational complexity is worked out in Herbert S. Wilf's Algorithms and Complexity book,
http://www.math.upenn.edu/~wilf/AlgComp3.html
pages 18-19.
All help is welcome.

Look at page 46 of the PDF version. Deletion and contraction each reduce the number of edges by 1, so a recurrence in edges alone only shows that Z(G) is O(2^m), which is worse than O(Fib(n + m)) for all but the sparsest graphs. Measuring the recursion in n + m helps because contraction removes a vertex as well as an edge: one branch reduces n + m by 2 while the other reduces it by 1, which is exactly the Fibonacci recurrence. A further benefit of tracking vertices as well as edges is that, when a self-loop is formed, we know immediately that the chromatic polynomial is zero.
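For concreteness, here is a minimal, unoptimized Python sketch of the deletion-contraction recursion for the chromatic polynomial evaluated at k colours. The chromatic-polynomial form uses a minus sign, P(G) = P(G - e) - P(G / e), but the branching structure, and hence the Fibonacci-style bound, is the same as for the Z recurrence above. The representation and names are illustrative, not from the question:

def chromatic(vertices, edges, k):
    # Deletion-contraction for the chromatic polynomial, evaluated at k colours.
    # vertices: a set; edges: a list of (u, v) pairs (parallel edges allowed).
    if not edges:
        # No edges left: every assignment of k colours is proper.
        return k ** len(vertices)
    (u, v), rest = edges[0], edges[1:]
    if u == v:
        # A self-loop can never be properly coloured.
        return 0
    # Deletion: m drops by 1, n is unchanged.
    deleted = chromatic(vertices, rest, k)
    # Contraction: merge v into u, so both n and m drop by 1.
    contracted = chromatic(vertices - {v},
                           [(u if a == v else a, u if b == v else b) for a, b in rest],
                           k)
    return deleted - contracted

# A triangle with 3 colours has 3! = 6 proper colourings.
print(chromatic({1, 2, 3}, [(1, 2), (2, 3), (1, 3)], 3))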

Related

Can we find a Hamiltonian Cycle in a dense graph with better than O(n^2)?

Given a dense graph (according to Ore's theorem, dense means that the sum of the degrees of any 2 non-adjacent nodes is at least N, where N is the total number of nodes), we can find a Hamiltonian cycle in such a graph using Palmer's algorithm with a time complexity of O(n^2). My question is: can we do better than this, in terms of time complexity?

Can we use BFS from each vertex to find the graph's diameter? If so, is this the best solution?

So I found an old topic:
Algorithm for diameter of graph?
in which it was said that the best solution for a non-sparse graph is O(V^3).
But can't we just run BFS from each vertex and then take the maximum?
That way the time complexity would be O(V*(V+E)) = O(V^2 + VE).
Am I wrong? Because if the number of edges is just a constant multiple of V, this would be faster, right?
So I guess my question is:
what is the best time complexity for computing a graph's diameter as of now, in 2018?
Is my method wrong? What am I missing here?
The graph in question is non-sparse, so in the worst case it has E ~ V^2/2 edges. The solution you mention thus becomes O(V^2 + V*(V^2)) = O(V^3) for non-sparse graphs.
If the graph were sparse, then it would indeed be faster than O(V^3).
Also, since the graph is non-sparse, it is usually represented with an adjacency matrix for faster lookup times. A breadth-first search then takes O(V^2), and doing this from every vertex, as you suggested, again leads to O(V^3) time.
Finding the diameter can be done by finding all-pairs shortest paths first and taking the maximum length found. The Floyd-Warshall algorithm does this in O(V^3) time. Johnson's algorithm can be implemented to achieve O(V^2 log V + VE) time.
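For reference, the BFS-from-every-vertex approach described in the question looks roughly like the following sketch, assuming an unweighted, connected graph given as an adjacency list (the function names are illustrative). With adjacency lists it runs in O(V*(V+E)), which only beats O(V^3) when the graph is sparse:

from collections import deque

def diameter(adj):
    # adj: dict mapping each vertex to a list of neighbours (adjacency list).
    # Runs one BFS per vertex: O(V * (V + E)) overall with adjacency lists.
    def eccentricity(src):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return max(dist.values())  # assumes the graph is connected

    return max(eccentricity(v) for v in adj)

# A path 0-1-2-3 has diameter 3.
print(diameter({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}))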

Building MST from a graph with "very few" edges in linear time

I was at an interview and interviewer asked me a question:
We have a graph G(V,E); we can find its MST using Prim's or Kruskal's algorithm. But these algorithms do not take into account that there are "very few" edges in G. How can we use this information to improve the time complexity of finding the MST? Can we find the MST in linear time?
The only thing I could remember was that Kruskal's algorithm is faster on sparse graphs, while Prim's algorithm is faster on really dense graphs. But I couldn't tell him how to use prior knowledge about the number of edges to find the MST in linear time.
Any insight or solution would be appreciated.
Kruskal's algorithm is pretty much linear after sorting the edges. If you use a union-find structure such as a disjoint-set forest, the cost of processing a single edge is on the order of lg*(n), where n is the number of vertices, and this function grows so slowly that it can be treated as constant here. The problem, however, is that sorting the edges still costs O(m * log(m)), where m is the number of edges.
Prim's algorithm will not be able to take advantage of the fact that there are very few edges.
One approach you can use is a 'reversed' MST approach: start with all edges and repeatedly remove the longest edge whose removal does not disconnect the graph, until only n - 1 edges are left. Note that this beats Kruskal only if the number k of edges to remove is small enough that k * n < m * log(m).
Let's say |E| = |V| + c, with c a small constant. You can run DFS on the graph and, every time you detect a cycle, remove its largest edge. You must do that c + 1 times, giving O((c + 1) * |E|) = O(|E|), linear time in theory.
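A rough Python sketch of that cycle-removal idea, assuming a connected graph given as a list of (u, v, weight) tuples (the representation and helper names are made up for illustration). Each pass finds one cycle by DFS and deletes its heaviest edge, which is always safe to discard when looking for a minimum spanning tree:

def mst_by_cycle_removal(vertices, edges):
    # While more than |V| - 1 edges remain, find some cycle with DFS and
    # delete its heaviest edge. Assumes the graph is connected.
    edges = list(edges)
    while len(edges) > len(vertices) - 1:
        cycle = find_cycle(vertices, edges)               # list of edge indices
        heaviest = max(cycle, key=lambda i: edges[i][2])
        edges.pop(heaviest)
    return edges

def find_cycle(vertices, edges):
    # Return the indices (into `edges`) of the edges on some cycle, or None.
    adj = {v: [] for v in vertices}
    for i, (u, v, _) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    parent = {}                       # vertex -> (parent vertex, tree-edge index)
    visited, on_stack, used = set(), set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nbr, ei in adj[node]:
            if ei in used:
                continue
            used.add(ei)
            if nbr not in visited:
                parent[nbr] = (node, ei)
                cycle = dfs(nbr)
                if cycle is not None:
                    return cycle
            elif nbr in on_stack:
                # Back edge to an ancestor: walk parent pointers to close the cycle.
                cycle, cur = [ei], node
                while cur != nbr:
                    cur, tree_edge = parent[cur]
                    cycle.append(tree_edge)
                return cycle
        on_stack.discard(node)
        return None

    for start in vertices:
        if start not in visited:
            cycle = dfs(start)
            if cycle is not None:
                return cycle
    return None

# Example: a triangle plus one pendant edge; the weight-4 edge gets removed.
print(mst_by_cycle_removal([0, 1, 2, 3], [(0, 1, 4), (1, 2, 1), (2, 0, 3), (2, 3, 2)]))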

Design an algorithm which finds a minimum spanning tree of this graph in linear time

I am working on a problem in which I am given an undirected graph G on n vertices and with m edges, such that each edge e has a weight w(e) ∈ {1, 2, 3}. The task is to design an algorithm which finds a minimum spanning tree of G in linear time (O(n + m)).
These are my thoughts so far:
In the Algorithmic Graph Theory course which I am currently studying, we have covered Kruskal's and Prim's MST Algorithms. Perhaps I can modify these in some way, in order to gain linear time.
Sorting of edges generally takes log-linear (O(m log(m))) time; however, since all edge weights are either 1, 2 or 3, Bucket Sort can be used to sort the edges in time linear in the number of edges (O(m)).
I am using the following version of Kruskal's algorithm:
Kruskal(G)
    for each vertex v ∈ V do MAKE-SET(v)
    sort all edges in non-decreasing order
    for each edge (u, v) ∈ E (in the non-decreasing order) do
        if FIND(u) ≠ FIND(v) then
            colour (u, v) blue
            UNION(u, v)
    od
    return the tree formed by blue edges
Also, MAKE-SET(x), UNION(x, y) and FIND(x) are defined as follows:
MAKE-SET(x)
    create a new tree rooted at x
    PARENT(x) := x
UNION(x, y)
    PARENT(FIND(x)) := FIND(y)
FIND(x)
    y := x
    while y ≠ PARENT(y) do
        y := PARENT(y)
    return y
The issue I have at the moment is that, although I can implement the first two lines of Kruskal's in linear time, I have not managed to do the same for the next four lines of the algorithm (from 'for edge u, ...' until 'UNION (u, v)').
I would appreciate hints as to how to implement the rest of the algorithm in linear time, or how to find a modification of Kruskal's (or some other minimum spanning tree algorithm) in linear time.
Thank you.
If you use the disjoint-set data structure with both path compression and union by rank, the amortized cost of each operation grows extremely slowly - it is on the order of the inverse Ackermann function, which stays tiny even for inputs as large as the estimated number of atoms in the universe. Effectively, then, each operation can be treated as constant time, and so the rest of the algorithm runs in linear time as well.
From the same Wikipedia article:
Since α(n) is the inverse of this function, α(n) is less than 5 for all remotely practical values of n. Thus, the amortized running time per operation is effectively a small constant.
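Putting the pieces together, a minimal sketch under the assumptions of the question (vertices labelled 0..n-1, edges given as (u, v, w) tuples with w in {1, 2, 3}; the names are illustrative): bucket-sort the edges by weight in O(m), then run Kruskal's algorithm with a disjoint-set forest using union by rank and path compression, so the whole thing is O(n + m) up to the inverse-Ackermann factor:

def mst_small_weights(n, edges):
    # n vertices labelled 0..n-1; edges: list of (u, v, w) with w in {1, 2, 3}.
    # Bucket sort by weight: O(m), since there are only 3 possible weights.
    buckets = {1: [], 2: [], 3: []}
    for e in edges:
        buckets[e[2]].append(e)
    ordered = buckets[1] + buckets[2] + buckets[3]

    # Disjoint-set forest with union by rank and path compression (halving).
    parent = list(range(n))
    rank = [0] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(x, y):
        rx, ry = find(x), find(y)
        if rx == ry:
            return False
        if rank[rx] < rank[ry]:
            rx, ry = ry, rx
        parent[ry] = rx
        if rank[rx] == rank[ry]:
            rank[rx] += 1
        return True

    tree = []
    for u, v, w in ordered:
        if union(u, v):
            tree.append((u, v, w))
    return tree

# Example: a square with one weight-3 diagonal.
print(mst_small_weights(4, [(0, 1, 1), (1, 2, 2), (2, 3, 1), (3, 0, 2), (0, 2, 3)]))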

Why do Kruskal and Prim MST algorithms have different runtimes for sparse and dense graphs?

I am trying to understand why Prim and Kruskal have different time complexities when it comes to sparse and dense graphs. After using a couple of applets that demonstrate how each works, I am still left a little confused about how the density of the graph affects the algorithms. I hope someone could give me a nudge in the right direction.
Wikipedia gives the complexity of these algorithms in terms of E, the number of edges, and V, the number of vertices, which is a good practice because it lets you do exactly this sort of analysis.
Kruskal's algorithm is O(E log V). Prim's complexity depends on which data structure you use for it. Using an adjacency matrix, it's O(V^2).
Now if you plug in V^2 for E, behold you get the complexities that you cited in your comment for dense graphs, and if you plug in V for E, lo you get the sparse ones.
Why do we plug in V^2 for a dense graph? Well, even in the densest possible graph you can't have as many as V^2 edges, so clearly E = O(V^2).
Why do we plug in V for a sparse graph? Well, you have to define what you mean by sparse, but suppose we call a graph sparse if each vertex has no more than five edges. I would say such graphs are pretty sparse: once you get up into the thousands of vertices, the adjacency matrix would be mostly empty space. That would mean that for sparse graphs, E ≤ 5V, so E = O(V).
Are these different complexities with respect to the number of vertices by any chance?
There is often a slightly handwavy argument that says that for a sparse graph the number of edges is E = O(V), where V is the number of vertices, while for a dense graph E = O(V^2). As both algorithms potentially have a complexity that depends on E, when you convert this to a complexity that depends on V you get different complexities depending on whether the graph is dense or sparse.
edit:
Different data structures will also affect the complexity, of course; Wikipedia has a breakdown of this.
Algorithms by Cormen et al. does indeed give an analysis, in both cases using a sparse representation of the graph. With Kruskal's algorithm (link vertices in disjoint components until everything joins up) the first step is to sort the edges of the graph, which takes O(E lg E) time, and they simply establish that nothing else takes longer than this. With Prim's algorithm (extend the current tree by adding the closest vertex not already on it) they use a Fibonacci heap to store the queue of pending vertices and get O(E + V lg V), because with a Fibonacci heap decreasing the distance to a vertex in the queue is only O(1), and you do this at most once per edge.
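To make the data-structure dependence concrete, here is a rough lazy Prim sketch using Python's heapq. A binary heap has no decrease-key, so stale entries are simply skipped and the bound is O(E log V); a Fibonacci heap (not in the standard library) is what gives the O(E + V lg V) figure above. The graph format and names are assumptions for illustration:

import heapq

def prim(adj, start):
    # adj: dict vertex -> list of (neighbour, weight); assumes a connected graph.
    # Lazy Prim with a binary heap: push candidate edges, skip stale ones.
    visited = {start}
    heap = [(w, start, v) for v, w in adj[start]]
    heapq.heapify(heap)
    tree = []
    while heap and len(visited) < len(adj):
        w, u, v = heapq.heappop(heap)
        if v in visited:
            continue          # stale entry; a cheaper edge already reached v
        visited.add(v)
        tree.append((u, v, w))
        for nbr, wt in adj[v]:
            if nbr not in visited:
                heapq.heappush(heap, (wt, v, nbr))
    return tree

# Example: a 4-cycle with one heavy chord.
graph = {0: [(1, 1), (3, 2), (2, 5)],
         1: [(0, 1), (2, 1)],
         2: [(1, 1), (3, 2), (0, 5)],
         3: [(0, 2), (2, 2)]}
print(prim(graph, 0))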

Resources