Consider the problem of cycle covering: given a graph G, we look for a set C of cycles such that every vertex of V(G) lies on at least one cycle of C and the number of cycles in C is minimum.
My task is to show that this problem does not admit an absolute approximation, i.e., there cannot be an algorithm H such that for all instances I of the problem, H(I) <= OPT(I) + k, where OPT(I) is the optimal value for I and k is a constant greater than or equal to 1. The usual technique is to show that if such an algorithm existed, we could solve some NP-hard problem in polynomial time.
Does anyone know which problem could be used for that?
Suppose there is an algorithm H and a positive integer k such that for every graph G, H(G) <= OPT(G) + k holds, where OPT(G) denotes the minimum number of cycles necessary to cover all nodes of G, and the runtime of H is polynomially bounded in n, where n is the number of nodes of G.
Given any graph G, create a graph G' which consists of k+1 isomorphic copies of G; note that the number of nodes in G' is (k+1)n, which is polynomially bounded in n. The following two cases can occur:
If G contains a Hamiltonian cycle, then each copy can be covered by a single cycle, so OPT(G')=k+1 and H(G')<=OPT(G')+k=k+1+k=2k+1.
If G does not contain a Hamiltonian cycle, then each copy needs at least two cycles, so OPT(G')>=2(k+1)=2k+2>2k+1, hence H(G')>=OPT(G')>2k+1.
In total, H can be used to decide, in time polynomially bounded in n, whether G contains a Hamiltonian cycle; however, as deciding whether G has a Hamiltonian cycle is an NP-complete problem, this is impossible unless P=NP holds.
Note: This approach is called 'gap creation', as instances are transformed in such a way that there is a gap between the objective values of (a) optimal solutions of yes-instances and (b) suboptimal solutions of yes-instances and feasible solutions of no-instances.
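To make the argument concrete, here is a minimal sketch of the resulting decision procedure, assuming a hypothetical black box approx_cycle_cover that plays the role of H with additive error K; networkx is used here only to build the disjoint copies.

    import networkx as nx

    K = 3  # the (hypothetical) additive error of H

    def has_hamiltonian_cycle(G, approx_cycle_cover):
        """Decide Hamiltonicity of G using the absolute approximation H."""
        # G' = K+1 disjoint copies of G; still polynomial in the size of G.
        G_prime = nx.disjoint_union_all([G.copy() for _ in range(K + 1)])
        # If G is Hamiltonian:      H(G') <= (K+1) + K = 2K+1.
        # If G is not Hamiltonian:  H(G') >= OPT(G') >= 2(K+1) > 2K+1.
        return approx_cycle_cover(G_prime) <= 2 * K + 1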
I've encountered the following problem while studying for my algorithms test, with no answer published for it:
Maximum double matching problem: given a bipartite graph G=(V=(L∪R),E), describe an algorithm that returns a maximum-size set of edges M ⊆ E such that each vertex v in V is included in at most 2 edges of M.
Definition: a "Strong double matching" is a double matching s.t for each vertice v in V there is at least one edge in M that includes v. Given a bipartite graph G=(V=(LUR),E) and strong double matching M, describe an algorithm that returns a strong double matching M' of maximum size. Prove your answer.
So far I've managed to solve
1) using a reduction to max-flow: add vertices s and t, edges from s to each vertex of L and from each vertex of R to t, each with capacity 2, and give every edge between L and R capacity 1 (so that each edge is picked at most once). Find a max flow using Dinic's algorithm and return all edges between L and R that carry positive flow.
For 2) I thought about somehow manipulating the network so that there is positive flow through each vertex, and then using the algorithm from 1) to construct a maximum solution. Any thoughts? The runtime restriction is O(V^2 E) (Dinic's runtime).
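For concreteness, here is a minimal sketch of the reduction in 1), using networkx's maximum_flow as a stand-in for Dinic's algorithm; the vertex labels and the helper name are illustrative only.

    import networkx as nx

    def max_double_matching(L, R, edges):
        """edges: iterable of (u, v) pairs with u in L and v in R."""
        G = nx.DiGraph()
        for u in L:
            G.add_edge('s', ('L', u), capacity=2)        # each left vertex used at most twice
        for v in R:
            G.add_edge(('R', v), 't', capacity=2)        # each right vertex used at most twice
        for u, v in edges:
            G.add_edge(('L', u), ('R', v), capacity=1)   # each edge of E picked at most once
        _, flow = nx.maximum_flow(G, 's', 't')
        return [(u, v) for u, v in edges if flow[('L', u)][('R', v)] > 0]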
Here is a solution in O(n^3) using minimum cost flow.
Recall how we make a network for a standard bipartite matching.
For each vertex u from L, add a unit-capacity edge from S to u;
For each edge u-v, where u is from L and v is from R, add an edge from u to v. Note that its capacity does not matter as long as it is at least one;
For each vertex v from R, add a unit-capacity edge from v to T.
Now we keep the central part the same and change left and right parts a bit.
For each vertex u from L, add two unit-capacity edges from S to u, one having cost -1 and the other having cost 0;
Same for the edges from the vertices of R to T.
Ignoring costs, this is the same network you built yourself (give the edges between L and R capacity 1, so that each edge of E is used at most once). The maximum flow here corresponds to the maximum double matching.
Now let's find the minimum cost flow of size k. It corresponds to some double matching of size k, and among those it is one that touches the maximum possible number of vertices, because touching a vertex (that is, pushing at least one unit of flow through it) decreases the cost by 1. Moreover, touching a vertex a second time doesn't decrease the cost further, because the second edge has cost 0.
Now we have the solution: for each k = 1, ..., 2n, find the min-cost flow of size k; the sizes whose minimum cost equals -2n (every vertex touched) are exactly the sizes of strong double matchings, so return the flow for the largest such k.
Using Johnson's algorithm (also called Dijkstra's with potentials) gives O(n^2) per iteration, which is O(n^3) overall.
P.S. The runtime of Dinic's algorithm on unit-capacity networks is better, reaching O(E sqrt(V)) on bipartite graphs.
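For reference, here is a hedged sketch of this construction using networkx's min_cost_flow as a stand-in solver (it does not achieve the O(n^3) bound; successive shortest paths with potentials would). Since nx.DiGraph has no parallel edges, each cost -1 / cost 0 pair of edges is emulated with an auxiliary node; all labels and the helper name are illustrative.

    import networkx as nx

    def max_strong_double_matching(L, R, edges):
        """edges: iterable of (u, v) pairs, u in L, v in R."""
        G = nx.DiGraph()
        for u in L:
            G.add_edge('S', ('auxL', u), capacity=1, weight=-1)      # first touch of u: cost -1
            G.add_edge(('auxL', u), ('L', u), capacity=1, weight=0)
            G.add_edge('S', ('L', u), capacity=1, weight=0)           # second touch of u: cost 0
        for v in R:
            G.add_edge(('R', v), ('auxR', v), capacity=1, weight=-1)  # first touch of v: cost -1
            G.add_edge(('auxR', v), 'T', capacity=1, weight=0)
            G.add_edge(('R', v), 'T', capacity=1, weight=0)            # second touch of v: cost 0
        for u, v in edges:
            G.add_edge(('L', u), ('R', v), capacity=1, weight=0)      # each edge of E used at most once

        best = None
        for k in range(1, 2 * min(len(L), len(R)) + 1):   # try every flow value k
            H = G.copy()
            H.nodes['S']['demand'] = -k                   # send exactly k units of flow
            H.nodes['T']['demand'] = k
            try:
                flow = nx.min_cost_flow(H)
            except nx.NetworkXUnfeasible:
                break                                     # no double matching of size k exists
            if nx.cost_of_flow(H, flow) == -(len(L) + len(R)):
                # every vertex is touched, i.e. this is a strong double matching of size k
                best = [(u, v) for u, v in edges if flow[('L', u)][('R', v)] > 0]
        return best                                       # the largest feasible k wins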
Problem description:
Given a graph G (as an adjacency matrix and an adjacency list) together with a source vertex s and a destination vertex d, find the shortest path from s to d subject to a constraint: the path cost c must be at least a given lower bound N, and among all paths of cost at least N it must have the smallest cost.
I understand that with this constraint a conventional SSSP algorithm like Bellman-Ford cannot work directly. How can I find an efficient algorithm for this problem?
I suppose you meant a walk, since a path cannot contain cycles.
Unfortunately, the change-making problem, which is NP-complete, can easily be modeled as an instance of this problem, so your problem is NP-hard.
Change-making problem: given N types of coins with values c_1, ..., c_N (each type available in unlimited supply), can the amount X be made up exactly from these coins?
Modelling: assume all the c_i are even (if not, double every c_i and also X). Create N+2 vertices, where the i-th vertex represents the i-th coin for 1 <= i <= N. The (N+1)-th and (N+2)-th vertices each have an edge to every coin vertex, the edge to the i-th coin vertex having cost c_i/2. The problem is then equivalent to finding a shortest walk from vertex N+1 to vertex N+2 of cost at least X: the change-making instance is a yes-instance exactly when that shortest walk has cost exactly X, so the walk problem is NP-hard. The reduction should be obvious, but if further explanation is needed I can edit my answer.
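For illustration, a small sketch of that modelling step (plain Python; the hub labels 'A' and 'B' stand for the (N+1)-th and (N+2)-th vertices and are illustrative only):

    def change_making_to_graph(coins, X):
        """Build the undirected weighted graph for a change-making instance."""
        if any(c % 2 for c in coins):          # make every coin value even
            coins = [2 * c for c in coins]
            X = 2 * X
        edges = []
        for i, c in enumerate(coins):          # coin vertices are 0..N-1
            edges.append(('A', i, c // 2))     # hub A -- coin i, cost c_i / 2
            edges.append(('B', i, c // 2))     # hub B -- coin i, cost c_i / 2
        # A walk A -> coin -> hub -> coin -> ... -> B pays c_i per coin visit,
        # so the shortest walk from A to B of cost >= X has cost exactly X
        # iff X can be made up from the coins.
        return edges, 'A', 'B', X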
Given a DAG where all edges have positive weights, and given a value N:
Is there an algorithm to compute a simple path (no cycles or repeated nodes) with total weight exactly N?
I am aware of the algorithm for finding a path with a given number of edges, but I am somewhat confused about the case of a given total path weight.
Can Dijkstra's algorithm be modified for this case? Or is there another approach?
This is NP-complete, so don't expect any reasonably fast (polynomial-time) algorithm. Here's a reduction from the NP-complete Subset Sum problem, where we are given a multiset of n integers X = {x_1, x_2, ..., x_n} and a number k, and asked if there is any submultiset of the n numbers that sum to exactly k:
Create a graph G with n+1 vertices v_1, v_2, ..., v_{n+1}. For each vertex v_i, add edges to every higher-numbered vertex v_j, and give all these edges weight x_i. This graph has O(n^2) edges and can be constructed in O(n^2) time. Clearly it contains no cycles.
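Here is a minimal sketch of that construction in Python (names are illustrative; it just emits the edge list and the target weight):

    def subset_sum_to_dag(X, k):
        """X: list of n integers; returns (edge list, target weight) for the DAG instance."""
        n = len(X)
        edges = []
        for i in range(1, n + 1):
            for j in range(i + 1, n + 2):          # edges only go to higher-numbered vertices
                edges.append((i, j, X[i - 1]))     # weight x_i on every edge leaving v_i
        return edges, k                            # ask: simple path of total weight exactly k?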
Suppose the answer to the Subset Sum problem is YES: that is, there exists a submultiset Y of X whose numbers total exactly k. Specifically, let y_1 < y_2 < ... < y_m (with m <= n) be the indices of the selected elements of X. Then there is a corresponding path in the graph G with exactly the same weight -- namely the path that starts at v_{y_1}, takes the edge to v_{y_2} (which has weight x_{y_1}), then takes the edge to v_{y_3}, and so on, finally arriving at v_{y_m} and taking a final edge (of weight x_{y_m}) to the terminal vertex v_{n+1}.
In the other direction, suppose that there is a simple path in G of total weight exactly k. Since the path is simple, each vertex appears at most once. Thus each edge in the path leaves a unique vertex. For each vertex v_i in the path except the last, add x_i to a set of chosen numbers: these numbers correspond to the edge weights in the path, so clearly they sum to exactly k, implying that the solution to the Subset Sum problem is also YES. (Notice that the position of the final vertex in the path doesn't matter, since we only care about the vertex that it leaves, and all edges leaving a vertex have the same weight.)
A YES answer to either problem implies a YES answer to the other problem, so a NO answer to either problem implies a NO answer to the other problem. Thus the answer to any Subset Sum problem instance can be found by first constructing the specified instance of your problem in polynomial time, and then using any algorithm for your problem to solve that instance -- so if an algorithm exists that can solve any instance of your problem in polynomial time, the NP-hard Subset Sum problem can also be solved in polynomial time.
Below is an algorithm that finds the minimum spanning tree:
MSTNew(G, w)
    Z ← empty set of edges
    for each edge e in E, taken in random order do
        Z ← Z ∪ {e}
        if Z has a cycle c then
            let f be a maximum-weight edge on c
            Z ← Z − {f}
    return Z
Does this algorithm always return the optimal MST solution?
I would say yes. It sort of looks like Kruskal's algorithm in disguise.
Being fairly new to graph theory, I really don't have much of an idea other than that. Would someone have any ideas or advice?
Yes, IMO the algorithm outputs a Minimum Spanning Tree.
Informal Proof:
At every iteration, we remove only an edge that is a maximum-weight edge on a cycle. By the cycle property (an exchange argument), there is always a minimum spanning tree that avoids such an edge, so we only ever discard edges that can safely be excluded.
Also, the output of the algorithm is always a spanning tree (assuming G is connected): we delete an edge only when it lies on a cycle, which never disconnects the graph, and at the end no cycle remains.
However, note that this algorithm will be quite inefficient as written, since at each iteration you are not only checking for cycles (as in Kruskal's) but also searching for a maximum-weight edge on the cycle.
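For experimentation, here is a naive, hedged Python implementation of MSTNew along the lines above (a plain DFS finds the cycle, so it makes no attempt at efficiency; it assumes a simple graph with no self-loops or parallel edges, and the helper names are illustrative):

    import random
    from collections import defaultdict

    def mst_new(edges):
        """edges: list of (u, v, weight) tuples of a connected simple graph."""
        adj = defaultdict(dict)                     # current forest Z as an adjacency map

        def path(a, b):                             # DFS: edges on the a..b path in Z, or None
            stack, prev = [a], {a: None}
            while stack:
                x = stack.pop()
                if x == b:
                    out = []
                    while prev[x] is not None:
                        out.append((prev[x], x, adj[prev[x]][x]))
                        x = prev[x]
                    return out
                for y in adj[x]:
                    if y not in prev:
                        prev[y] = x
                        stack.append(y)
            return None

        order = list(edges)
        random.shuffle(order)                       # "taken in random order"
        for u, v, w in order:
            cycle_path = path(u, v)                 # path in Z before adding the new edge
            adj[u][v] = adj[v][u] = w               # Z <- Z + e (may create a cycle)
            if cycle_path is not None:              # the cycle is cycle_path plus the new edge
                a, b, _ = max(cycle_path + [(u, v, w)], key=lambda e: e[2])
                del adj[a][b], adj[b][a]            # remove a maximum-weight edge on the cycle

        result, seen = [], set()                    # collect each undirected edge once
        for u in adj:
            for v, w in adj[u].items():
                if (v, u) not in seen:
                    seen.add((u, v))
                    result.append((u, v, w))
        return result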
"Prove that it is NP-Complete to determine given input G and k whether G has both a clique of size k and an independent set of size k. Note that this is 1 problem, not 2; the answer is yes if and only if G has both of these subsets."
We were given this problem in my algorithms course and a large group of students could not figure it out. Here is what we have so far...
We know that both the clique and independent set problems are NP-complete in and of themselves. We also know that this problem is in NP, since a proposed "certificate" (a candidate clique and independent set of size k) can be verified in polynomial time.
The difficulty is performing a reduction between this problem (which involves both independent sets and cliques) and a problem consisting entirely of cliques or entirely of independent sets (at least, that's what we think we need to do). We don't know how to perform such a reduction without losing the information needed to map an answer back to the original instance.
Hint: Reduce CLIQUE to this problem, by adding some vertices.
Thanks to "Moron" and "Rafal Dowgird" for the hints! Based on that I think I've got a solution. Please correct me if I am incorrect:
Since we already know that the clique and independent-set problems are NP-complete, we can use that as a foundation for proving our problem NP-complete. Let's call our problem the Combination Clique Independent Set problem (CCIS).
Given an instance (G, k) of CLIQUE (we may assume k >= 2, since smaller k is trivial), build G' by adding k new isolated vertices to G. This takes polynomial time, and the new vertices always form an independent set of size k.
If G has a clique C of size k, then G' has both a clique of size k (namely C) and an independent set of size k (the new vertices), so G' is a yes-instance of CCIS.
Conversely, if G' has both a clique of size k and an independent set of size k, then since the new vertices are isolated, none of them can lie in a clique of size k >= 2, so that clique lies entirely inside G; hence G is a yes-instance of CLIQUE. Therefore the reduction is correct, and together with membership in NP, CCIS is NP-complete.
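For concreteness, a tiny sketch of that reduction (plain adjacency-dict graphs; the labels of the new vertices are illustrative):

    def clique_to_ccis(adj, k):
        """adj: dict mapping each vertex to a set of neighbours; returns (adj', k)."""
        new_adj = {v: set(nbrs) for v, nbrs in adj.items()}
        for i in range(k):
            new_adj[('iso', i)] = set()   # isolated vertex: joins no clique of size >= 2,
                                          # but the k of them form an independent set of size k
        return new_adj, k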