A deterministic algorithm for minimum cut of an undirected graph? - algorithm

Could someone name a few deterministic algorithms for the minimum cut of an undirected graph, along with their complexities, please?
(By the way, I learnt that there is an undirected version of the Ford-Fulkerson algorithm obtained by adding an opposing parallel edge for each directed edge; could someone tell me the time complexity of this one and maybe give me a bit more reference to read?)
Thanks.

Solving the global minimum cut by computing multiple maximum flows is possible but suboptimal. Using the fastest known algorithms (Orlin's for sparse graphs and King-Rao-Tarjan for dense graphs), a single maxflow can be computed in O(mn). By fixing a source vertex and computing maxflows to all n − 1 other vertices, we get (by max-flow/min-cut duality) the global mincut in O(mn²).
There exist several algorithms specifically for global mincuts. For algorithms independent of graph structure, the most commonly used are
Nagamochi & Ibaraki, 1992, O(nm + n² log n). Does not use flows; works by gradually shrinking (contracting) the graph.
Stoer & Wagner, 1997, also O(nm + n² log n). Easier to implement; an implementation ships with the Boost Graph Library (BGL).
Hao & Orlin's algorithm can also run very fast in practice, especially when some of the known heuristics are applied.
There are also many algorithms that exploit structural properties of the input graph. I'd suggest the recent algorithm of Brinkmeier, 2007, which runs in "O(n² max(log n, min(m/n, δ/ε)))", where ε is the minimal edge weight and δ is the minimal weighted degree. In particular, when we ignore the weights, we get O(n² log n) for inputs with m in o(n log n) and O(nm) for denser graphs, so its time complexity is never worse than that of N-I or S-W regardless of input.
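For concreteness, here is a minimal array-based sketch of the Stoer-Wagner algorithm in Python. This is the simple O(n³) version (the O(nm + n² log n) bound requires a heap-backed ordering step); the function name and matrix representation are my own choices:

```python
def stoer_wagner(w):
    """Weight of a global minimum cut of an undirected weighted graph.
    w: symmetric n x n matrix; w[i][j] == 0 means no edge.
    This simple array-based version runs in O(n^3)."""
    w = [row[:] for row in w]            # working copy: vertices get merged
    vertices = list(range(len(w)))
    best = float('inf')
    while len(vertices) > 1:
        # build a maximum-adjacency ordering starting from vertices[0]
        a = [vertices[0]]
        rest = vertices[1:]
        conn = {v: w[a[0]][v] for v in rest}
        cut_of_phase = 0
        while rest:
            t = max(rest, key=lambda v: conn[v])  # most tightly connected
            cut_of_phase = conn[t]
            rest.remove(t)
            a.append(t)
            for v in rest:
                conn[v] += w[t][v]
        best = min(best, cut_of_phase)   # cut separating the last-added vertex
        s, t = a[-2], a[-1]              # merge t into s for the next phase
        for v in vertices:
            w[s][v] += w[t][v]
            w[v][s] = w[s][v]
        vertices.remove(t)
    return best
```

For example, on a 4-cycle with two heavy edges (0-1 and 2-3, weight 10) and two light ones (1-2 and 3-0, weight 1), the algorithm returns 2, the weight of cutting the two light edges.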

Related

Dijkstra's algorithm vs relaxing edges in topologically sorted graph for DAG

I was reading Introduction to Algorithms, 3rd Edition. There are 3 methods given to solve the problem. My inquiry is about two of them.
The one with no name
The algorithm starts by topologically sorting the dag (see Section 22.4) to impose a linear ordering on the vertices. If the dag contains a path from vertex u to vertex v, then u precedes v in the topological sort. We make just one pass over the vertices in the topologically sorted order. As we process each vertex, we relax each edge that leaves the vertex.
Dijkstra's Algorithm
This is quite well known
As far as the book shows, the time complexity of the unnamed one is O(V+E), but Dijkstra's is O(E log V). We cannot use Dijkstra's with negative weights, but we can use the other one. What are the advantages of using Dijkstra's algorithm, other than that it can be used on cyclic graphs?
Because the first algorithm you give only works on acyclic graphs, whereas Dijkstra runs on any graph with non-negative weights.
The limitations are not the same.
In the real world, many applications can be modelled as graphs with non-negative weights, which is why Dijkstra is so widely used. Plus, it is very simple to implement. The complexity of Dijkstra is higher because it relies on a priority queue, but that does not necessarily mean it takes more time to execute. (n log n time is not that bad, because log n is a relatively small number: log₂(10^80) ≈ 266.)
However, this holds for sparse graphs (low edge density). For dense graphs, other algorithms may be more efficient.
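A minimal sketch of the unnamed algorithm from the book (topological sort followed by one relaxation pass), assuming an edge-list input; the function and variable names are mine:

```python
from collections import defaultdict

def dag_shortest_paths(n, edges, source):
    """Single-source shortest paths in a DAG in O(V + E).
    edges: list of (u, v, weight); negative weights are fine since the
    graph is acyclic. Returns distances (inf for unreachable vertices)."""
    adj = defaultdict(list)
    indeg = [0] * n
    for u, v, w in edges:
        adj[u].append((v, w))
        indeg[v] += 1
    # Kahn's algorithm for a topological order
    order, stack = [], [v for v in range(n) if indeg[v] == 0]
    while stack:
        u = stack.pop()
        order.append(u)
        for v, _ in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    dist = [float('inf')] * n
    dist[source] = 0
    for u in order:                      # one pass in topological order
        if dist[u] == float('inf'):
            continue
        for v, w in adj[u]:              # relax every edge leaving u
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist
```

Note the negative edge (1, 2, −4) in a graph like 0→1→2→3 poses no problem here, precisely because each vertex is finalized before any of its successors is processed.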

Finding fully connected components?

I'm not sure if I'm using the right term here, but by "fully connected component" I mean a component with an (undirected) edge between every pair of its vertices, such that no additional vertex can be included without breaking this property.
There are a number of algorithms for finding strongly connected components in a graph (for example Tarjan's algorithm), but is there an algorithm for finding such "fully connected components"?
What you are looking for is the list of all maximal cliques of the graph; this is known as the clique problem. No polynomial-time solution is known for a generic undirected graph.
Most versions of the clique problem are hard. The clique decision problem is NP-complete (one of Karp's 21 NP-complete problems). The problem of finding the maximum clique is both fixed-parameter intractable and hard to approximate. And, listing all maximal cliques may require exponential time as there exist graphs with exponentially many maximal cliques. Therefore, much of the theory about the clique problem is devoted to identifying special types of graph that admit more efficient algorithms, or to establishing the computational difficulty of the general problem in various models of computation.
-https://en.wikipedia.org/wiki/Clique_problem
I was also looking at the same question.
https://en.wikipedia.org/wiki/Bron-Kerbosch_algorithm — this is an algorithm that lists them, though it is not fast in the worst case. If your graph is sparse, you may want to use the vertex-ordering version of the algorithm:
For sparse graphs, tighter bounds are possible. In particular the vertex-ordering version of the Bron–Kerbosch algorithm can be made to run in time O(dn·3^(d/3)), where d is the degeneracy of the graph, a measure of its sparseness. There exist d-degenerate graphs for which the total number of maximal cliques is (n − d)·3^(d/3), so this bound is close to tight.
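A compact sketch of Bron-Kerbosch with pivoting (the basic recursive version, not the vertex-ordering variant quoted above), assuming the graph is given as a dict of neighbor sets:

```python
def maximal_cliques(adj):
    """List all maximal cliques via Bron-Kerbosch with pivoting.
    adj: dict mapping each vertex to the set of its neighbors (undirected).
    R = current clique, P = candidates that extend R, X = already explored."""
    cliques = []
    def expand(R, P, X):
        if not P and not X:
            cliques.append(R)           # nothing can extend R: it is maximal
            return
        # pivot on a vertex covering many candidates to cut the branching
        pivot = max(P | X, key=lambda u: len(adj[u] & P))
        for v in P - adj[pivot]:        # only branch on non-neighbors of pivot
            expand(R | {v}, P & adj[v], X & adj[v])
            P = P - {v}
            X = X | {v}
    expand(set(), set(adj), set())
    return cliques
```

On a triangle {0, 1, 2} with a pendant edge 2-3, this yields exactly the two maximal cliques {0, 1, 2} and {2, 3}.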

Linear time algorithm for finding value of MST in a graph?

Is there a linear O(n+m)-time algorithm for finding just the value r of the minimum spanning tree of a given graph G(V,E)? We do not want to find the MST itself, just the sum of its edge weights.
I have searched for a solution to this problem, but Kruskal's and Prim's algorithms have higher complexity because of the data structures they use (union-find for Kruskal, a priority queue for Prim). They also construct the MST, which is not needed, and maybe there is a faster way to find only r.
If your edges are integer-weighted, there is a linear algorithm by Fredman and Willard in the following publication:
http://www.sciencedirect.com/science/article/pii/S0022000005800649
There is also a randomized linear-time algorithm by Karger, Klein and Tarjan in the comparison model:
http://dl.acm.org/citation.cfm?doid=201019.201022
I believe that in the comparison model Chazelle's algorithm using soft heaps is the fastest known deterministic one, but it is not linear (there is an inverse-Ackermann overhead).
No deterministic linear-time solution is known in the comparison model.
You can optimize Kruskal with disjoint-set optimizations and radix/counting sort so that the complexity is O(E α(V)), where α is the very slowly growing inverse Ackermann function. For most datasets this will be almost indistinguishable from linear. At that point you can probably gain more at run time by optimizing the code rather than the algorithm.
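For reference, a sketch of Kruskal's algorithm trimmed down to return only the MST weight r, using path-halving union-find. Sorting dominates here at O(E log E); the O(E α(V)) claim assumes the weights are integers sorted with radix/counting sort instead:

```python
def mst_weight(n, edges):
    """Weight of a minimum spanning tree via Kruskal's algorithm.
    edges: list of (weight, u, v) tuples; vertices are 0..n-1.
    Returns only the sum of MST edge weights, never the tree itself."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    total = taken = 0
    for w, u, v in sorted(edges):          # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                       # edge joins two components: keep it
            parent[ru] = rv
            total += w
            taken += 1
            if taken == n - 1:             # tree is complete, stop early
                break
    return total
```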

Dependence of complexity of graph algorithms on weight of edges?

This might be a silly question, but why doesn't the complexity of graph algorithms depend on the weights of the edges in the graph?
There are many different graph algorithms and in some cases the complexities do depend on the edge weights. For example, the Ford-Fulkerson max-flow algorithm has runtime O(mF), where F is the maximum possible flow, which depends on the maximum capacity of the edges. Other algorithms like Dijkstra's algorithm have runtimes that are independent of the edge lengths because it's assumed in the computational model that operations on those weights always take time O(1).
Generally speaking, algorithms with runtimes that depend on the weights/capacities/lengths of the edges in the graph gain their dependency by iterating a number of times based on the capacities/weights/lengths of those edges. If the algorithm only does numeric computations on the weights etc., there typically isn't a dependency because arithmetic operations typically are only considered to take time O(1) unless there's a reason to believe otherwise.
Hope this helps!
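To make the contrast above concrete: DFS-based Ford-Fulkerson has the capacity-dependent O(mF) bound, while choosing shortest augmenting paths (Edmonds-Karp) caps the number of augmentations at O(VE) regardless of capacities. A minimal sketch of the latter on a capacity matrix (the matrix is mutated in place to hold residual capacities):

```python
from collections import deque

def max_flow(n, cap, s, t):
    """Edmonds-Karp: Ford-Fulkerson with BFS (shortest) augmenting paths.
    cap: n x n capacity matrix, modified in place as the residual graph.
    Runs in O(V * E^2) independently of the capacity values."""
    flow = 0
    while True:
        parent = [-1] * n                # BFS for a shortest augmenting path
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if cap[u][v] > 0 and parent[v] == -1:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow                  # no augmenting path left
        bottleneck, v = float('inf'), t  # minimum residual capacity on path
        while v != s:
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t                            # push flow, update residual edges
        while v != s:
            cap[parent[v]][v] -= bottleneck
            cap[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck
```

On the classic "diamond" network (two capacity-1000 paths plus a capacity-1 cross edge), BFS finds the two short paths directly and finishes in two augmentations, whereas an unlucky DFS-based variant can take on the order of 2000.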

Back Tracking Vs. Greedy Algorithm Maximum Independent Set

I implemented a maximum independent set solver using both a greedy algorithm and a backtracking algorithm.
The backtracking algorithm is as follows:
MIS(G = (V, E): a graph): returns a largest set of independent vertices
  if |V| = 0 then return ∅
  if |V| = 1 then return V
  pick u ∈ V
  Gout ← G − {u}            { remove u from V and E }
  Gin  ← G − {u} − N(u)     { N(u) are the neighbors of u }
  Sout ← MIS(Gout)
  Sin  ← MIS(Gin) ∪ {u}
  return maxsize(Sout, Sin) { return Sin if there's a tie — there's a reason for this }
The greedy algorithm is to iteratively pick the node with the smallest degree, place it in the MIS and then remove it and its neighbors from G.
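A minimal Python sketch of both approaches, assuming the graph is a dict mapping each vertex to its neighbor set (the function names `mis_exact` and `mis_greedy` are mine):

```python
def mis_exact(vertices, adj):
    """Exact maximum independent set by branching on a vertex u:
    either exclude u, or include u and drop all of its neighbors."""
    if not vertices:
        return set()
    u = next(iter(vertices))
    s_out = mis_exact(vertices - {u}, adj)                # u not in the set
    s_in = {u} | mis_exact(vertices - {u} - adj[u], adj)  # u in the set
    return s_in if len(s_in) >= len(s_out) else s_out     # prefer s_in on ties

def mis_greedy(vertices, adj):
    """Greedy heuristic: repeatedly take a minimum-degree vertex,
    then remove it and its neighbors from the graph."""
    remaining, result = set(vertices), set()
    while remaining:
        u = min(remaining, key=lambda v: len(adj[v] & remaining))
        result.add(u)
        remaining -= {u} | adj[u]
    return result
```

On any input, the exact branching version returns a set at least as large as the greedy one; on the path 0-1-2-3, for instance, both find an independent set of size 2.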
After running the algorithms on graphs of varying sizes where the probability of an edge existing is 0.5, I have empirically found that the backtracking algorithm always finds a smaller maximum independent set than the greedy algorithm. Is this expected?
Your solution is strange. Backtracking is usually used for yes/no (decision) problems, not optimization, and the algorithm you wrote depends heavily on how you pick u. It also is not really backtracking, because you never backtrack.
Such a problem can be solved in a number of ways, e.g.:
genetic programming,
exhaustive search,
solving the maximum clique problem on the complement graph.
According to Wikipedia, this is an NP-hard problem:
A maximum independent set is an independent set of the largest possible size for a given graph G.
This size is called the independence number of G, and denoted α(G).
The problem of finding such a set is called the maximum independent set problem and is an NP-hard optimization problem.
As such, it is unlikely that there exists an efficient algorithm for finding a maximum independent set of a graph.
So, to find the maximum independent set of a graph, you have to examine all candidate states (with an algorithm whose time complexity is exponential). The faster algorithms (greedy, genetic, or randomized ones) cannot guarantee the exact answer: they can guarantee to find a maximal independent set, but not a maximum one.
In conclusion, your backtracking approach is slower but exact, whereas the greedy approach is only an approximation algorithm.
