I'm having a bit of trouble understanding how one would verify if there is no solution to a given instance of the subset sum problem in polynomial time.
Of course you can easily verify the positive case: simply provide the list of integers that add up to the target sum and check that they all appear in the original set. (O(N))
How do you verify that the answer "false" is the correct one in polynomial time?
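For concreteness, the "yes"-certificate check described above can be sketched like this (function and variable names are mine; a multiset comparison handles duplicate elements):

```python
from collections import Counter

def verify_yes_certificate(S, target, certificate):
    """Check a claimed subset-sum solution: every certified element must
    appear in S (respecting multiplicities) and the certificate must sum
    to the target."""
    available = Counter(S)
    used = Counter(certificate)
    if any(used[x] > available[x] for x in used):
        return False  # certificate uses an element more often than S contains it
    return sum(certificate) == target
```

For example, `verify_yes_certificate([3, 1, 5, 9], 8, [3, 5])` accepts, while a certificate that reuses an element more times than it occurs in S is rejected.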
It’s actually not known how to do this - and indeed it’s conjectured that it’s not possible to do so!
The class NP consists of problems where “yes” instances can be verified in polynomial time. The subset sum problem is a canonical example of a problem in NP. Importantly, notice that the definition of NP says nothing about what happens if the answer is “no” - maybe it’ll be easy to show this, or maybe there isn’t an efficient algorithm for doing so.
A counterpart to NP is the class co-NP, which consists of problems where “no” instances can be verified in polynomial time. A canonical example of such a problem is the tautology problem - given a propositional logic formula, is it always true regardless of what values the variables are given? If the formula isn’t a tautology, it’s easy to verify this by having someone tell you how to assign the values to the variables such that the formula is false. But if the formula is always true, it’s unclear how you’d show this efficiently.
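Concretely, the "no"-certificate for the tautology problem is just a falsifying assignment, and checking it is a single evaluation. A minimal sketch (representing a formula as a Python function; the example formula is mine):

```python
def verify_not_tautology(formula, assignment):
    """A falsifying assignment certifies that a formula is NOT a
    tautology: evaluate the formula once and check the result is False."""
    return formula(**assignment) is False

# Example: p or (not q) is falsified by p = False, q = True.
f = lambda p, q: p or not q
```

Note that a tautology like `p or not p` has no falsifying assignment at all, which is exactly why the "no" side is the easy one here.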
Just as the P = NP problem hasn’t been solved, the NP = co-NP problem is also open. We don’t know whether problems where “yes” answers have fast verification are the same as problems where “no” answers have fast verification.
What we do know is that if any NP-complete problem is in co-NP, then NP = co-NP. And since subset sum is NP-complete, a polynomial-time way of verifying “no” answers to subset sum instances would imply NP = co-NP - which is why no such algorithm is known.
I have a list L of lists l[i] of elements e. I am looking for an algorithm that finds a minimum set S_min of elements such that at least one member of S_min occurs in each l[i].
I am not only curious to find a simple algorithm that does this for me, but also to learn what problems of this sort are actually called. I am sure there is something out there.
I have implemented brute-force algorithms that start by adding to S_min all elements that occur in lists with len(l[i]) == 1. The rest is simple trial and error.
The problem you describe is the vertex cover problem in hypergraphs (equivalently, the hitting set problem), an optimization problem which is NP-hard in the general case but admits approximation algorithms for suitably bounded instances.
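For small instances, a brute-force search over candidate sets of increasing size finds a minimum hitting set directly (a sketch; exponential in the worst case, and the names are mine):

```python
from itertools import combinations

def minimum_hitting_set(lists):
    """Return a smallest set of elements that intersects every list,
    by trying all candidate sets in order of increasing size."""
    universe = sorted({e for l in lists for e in l})
    for size in range(len(universe) + 1):
        for candidate in combinations(universe, size):
            if all(set(candidate) & set(l) for l in lists):
                return set(candidate)
    return None  # only reachable if some list is empty and cannot be hit
```

This also matches the observation in the question: any list of length 1 forces its element into every hitting set, so seeding the solution with those elements is a sound first step.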
I have to answer this question as a homework assignment, but I am finding very little material to work with. I understand what an NP-complete problem is and what a restriction is. In my opinion, this statement is true, because you can always restrict the problem in order to "make the problem easier". But I'm looking at it from a bird's-eye view... Can anyone help me make some progress finding the answer to this question?
Any help will be much appreciated.
Converting my comment into an answer - consider the "empty problem," a problem whose instance set is empty. Since the empty set is a subset of every set, this problem technically counts as a restriction of any language (including languages not in NP). It's also a problem in P; you can build a polynomial-time TM that always rejects its input. Therefore, every problem in NP has a polynomial-time restriction.
What I'm still curious about, though, is whether every NP problem whose instance set is infinite has a polynomial-time restriction whose instance set is also infinite. That's a more interesting question, IMHO, and I don't currently have an answer.
Hope this helps!
As I understand it, there are two steps to proving that a problem is NP-complete:
Give an algorithm that can verify a solution to the problem in polynomial time. That is, an algorithm whose input is a proposed solution to the problem and whose output is either "yes" or "no" based on whether the input is a valid solution to the problem.
Prove the problem is NP-hard - e.g., assume you have an oracle that can compute another known NP-complete problem in one step. Using that, write an algorithm that solves this problem in polynomial time.
For example, suppose we want to prove that the following problem is NP-complete:
Given a set of integers, S, is it possible to isolate a subset of elements, S', such that the sum of the elements in S' is exactly equal to the sum of the remaining elements in S that are not included in S'?
Step 1: Verification algorithm
Verify_HalfSubset(Set S, Solution sol):
    accum = 0
    for each element i in sol:
        accum += i
        linear search for an element with the same value as i in S
        if found, delete it from S; if not found, return false
    end for
    accum2 = 0
    for each element i in S:
        accum2 += i
    end for
    if accum == accum2 return true, else return false
Clearly this runs in polynomial time: the first for loop runs in O(nm), where n = |S| and m = |sol|, and the second runs in O(n).
Step 2: Reduction
Assume we have an oracle O(Set S, int I) that computes the subset sum problem in a single step (that is, is there a subset of elements in S that sum up to I)?
Then, we can write a polynomial time algorithm that computes our half-subset problem:
HalfSubset(Set S):
    accum = 0
    for each s in S:
        accum += s
    end for
    if (accum % 2 == 1)
        // this question forbids "splitting" values into non-integral parts
        return NO_ANSWER
    end if
    half1 = O(S, accum / 2)
    if (half1 == NO_ANSWER)
        return NO_ANSWER
    end if
    for each i in half1:
        linear search for an element with the same value as half1[i] in S
        delete it from S
    end for
    half2 = S
    return (half1 and half2)
Can someone please tell me if I've made any mistakes in this process? This is the one question on my final exam review that I'm not entirely sure I understand completely.
The second portion of your answer is a bit off. What you are saying in step two is that you can reduce this problem to a known NP-complete problem in polynomial time. That is, you are saying that this problem is at most as hard as the NP-complete problem.
What you want to say is that the NP-complete problem can be reduced to your example problem in polynomial time. This would show that, if you could solve this problem in polynomial time, then you could also solve the NP-complete problem in polynomial time, proving that your example problem is NP-complete.
No, this is incorrect. You could set up a situation where you use the NP-Complete oracle to solve the problem, yet still have the problem itself be in P.
What you have to do is show that you can reduce another NP-Complete problem to your problem. That is, provide a polynomial-time algorithm to transform any instance of a particular NP-Complete problem to an instance of your problem such that a solution of your (transformed) problem is also a solution to the given NP-Complete problem. This shows that if you can solve your problem, then you can also solve any NP-Complete problem, meaning your problem is at least as hard as any other NP-Complete problem.
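For this particular pair of problems, the reduction in the correct direction is standard: subset sum reduces to the equal-halves (partition) problem by padding the input. A sketch, with a brute-force partition check standing in for an actual solver:

```python
from itertools import combinations

def reduce_subset_sum_to_partition(S, t):
    """S has a subset summing to t (positive integers, 0 <= t <= sum(S))
    iff the padded list splits into two equal-sum halves: the two padding
    elements sum to 3*T, more than half the new total of 4*T, so they
    must land on opposite sides of any equal split."""
    T = sum(S)
    return S + [T + t, 2 * T - t]

def has_equal_split(S):
    """Brute-force stand-in for a partition solver."""
    total = sum(S)
    if total % 2:
        return False
    return any(sum(c) == total // 2
               for k in range(len(S) + 1)
               for c in combinations(S, k))
```

Because the padding is computable in polynomial time, a polynomial-time partition solver would yield a polynomial-time subset-sum solver, which is the direction the proof needs.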
I know that, by computing strongly connected components of a digraph, we can check in polynomial time whether a 2-SAT boolean formula is satisfiable.
Let's assume the problem is satisfiable.
The question: is there a general algorithm to compute that satisfying assignment, relying on the 2-SAT decision procedure?
Assuming I understand your question correctly, yes, there is a general algorithm to find a solution (i.e. a satisfying assignment) by using the algorithm for the satisfiability problem.
Suppose I assign a variable xi the value "true" (so that the literal xi is true and ~xi is false), producing a new 2-CNF from the original. If I were to run the satisfiability check on it (i.e. use the SCC construction to decide whether the new CNF is satisfiable):
What does it tell you about the satisfying assignments in the original CNF if the resulting CNF is now not satisfiable?
What does it tell you about the satisfying assignments in the original CNF if the resulting CNF is still satisfiable?
The idea behind the algorithm is to make the variable "false" if you hit case 1, and "true" if you hit case 2, and loop through every variable (keeping the assignments that you made to variables you've already looked at). Once you can answer the questions I've posed, you'll understand the concept behind it.
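The loop described above can be sketched as follows. Here `satisfiable` is a brute-force stand-in for the polynomial-time SCC-based check; literals are nonzero integers, with -i meaning the negation of variable i (this encoding is my choice):

```python
from itertools import product

def satisfiable(clauses, n):
    """Brute-force 2-SAT decision over n variables (a stand-in for the
    SCC-based polynomial-time algorithm)."""
    for values in product([False, True], repeat=n):
        def holds(lit):
            v = values[abs(lit) - 1]
            return v if lit > 0 else not v
        if all(holds(a) or holds(b) for a, b in clauses):
            return True
    return False

def find_assignment(clauses, n):
    """Self-reduction: fix variables one at a time, keeping whichever
    value leaves the remaining formula satisfiable."""
    if not satisfiable(clauses, n):
        return None
    fixed = list(clauses)
    assignment = []
    for i in range(1, n + 1):
        if satisfiable(fixed + [(i, i)], n):   # case 2: x_i = true still works
            assignment.append(True)
            fixed.append((i, i))
        else:                                  # case 1: must make x_i false
            assignment.append(False)
            fixed.append((-i, -i))
    return assignment
```

Each variable costs one extra call to the decision procedure, so the whole search stays polynomial when the decision step is polynomial.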
I read the article on wikipedia but could not understand what exactly are NP problems. Can anyone tell me about them and also what is relation of them with P Problems?
NP problems are problems for which, given a proposed solution, you can verify the solution in polynomial time. For example, if you have a list of university courses and need to create a schedule so that courses won't conflict, it would be a really difficult task (complexity-wise). However, given a proposed schedule, you can easily verify its correctness.
Another important example from the field of encryption: given a number which is the result of multiplying two very large prime numbers, it's very difficult to find those primes based only on the result. However, given two numbers, it's very easy to check the solution (multiply them, compare).
I have intentionally chosen examples that are believed to be in NP but not in P (i.e. problems for which it is hard to find a solution) so you can understand the difference. All problems that are easy to solve are also easy to verify - just solve and compare. That is, P is a subset of NP.
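The asymmetry in the scheduling example can be made concrete: checking a proposed schedule is a simple pairwise scan, even though producing a good schedule is hard. A toy sketch (the data model here is mine):

```python
def verify_schedule(schedule, conflicts):
    """schedule maps course -> time slot; conflicts lists pairs of
    courses that must not share a slot (e.g. courses with common
    students). Verification is one pass over the conflict pairs."""
    return all(schedule[a] != schedule[b] for a, b in conflicts)
```

The verifier runs in time linear in the number of conflict pairs, while finding a valid schedule is essentially graph coloring, an NP-hard search.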
Not really an answer, because Piccolo's link is more useful, but an HP researcher claims to have proven P != NP; here is the paper:
www.hpl.hp.com/personal/Vinay_Deolalikar/Papers/pnp12pt.pdf
It has not been accepted yet, but I wish him good luck with the $1M prize.