I'm studying about the NP class and one of the slides mentions:
It seems that verifying that something is not present is more difficult than verifying that it is present.
Hence, CLIQUE (complement) and SubsetSUM (complement) are not obviously members of NP.
Was it ever proved whether the complement of CLIQUE is an element of NP?
Also, do you have the proof?
This is an open problem, actually! The complexity class co-NP consists of the complements of all problems in NP. It's unknown whether NP = co-NP right now, and many people suspect the answer is no.
Just as CLIQUE is NP-complete, the complement of CLIQUE is co-NP-complete. (More generally, the complement of any NP-complete problem is co-NP-complete.) There's a theorem that if any co-NP-complete problem is in NP, then co-NP = NP, which would be a huge theoretical breakthrough.
If you're interested in learning more about this, check out the Wikipedia article on co-NP and look around online for more resources.
The general intuition here: it's easy to prove that a graph contains an N-clique: just show me the clique. It's hard to prove that a graph without an N-clique in fact has no N-clique. What property of the graph are you going to exploit to do that?
Sure, for some families of graphs you can -- for example, graphs with sufficiently few edges can't possibly have such a clique. It's entirely possible that all graphs can have similar proofs built around them, although it'd be surprising -- almost as surprising as P=NP.
This is why the complements of languages in NP are not, in general, obviously in NP -- in fact, we have the term "co-NP" (as in "the complement is in NP") to refer to languages like !CLIQUE.
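The asymmetry is easy to see concretely. Checking a claimed k-clique certificate is a short polynomial-time loop over vertex pairs; a minimal sketch in Python, with a made-up toy graph:

```python
from itertools import combinations

def verify_clique(adj, certificate, k):
    """Polynomial-time check that `certificate` is a k-clique of the
    graph given as an adjacency-set dict `adj`."""
    if len(certificate) != k:
        return False
    # Every pair of certificate vertices must be joined by an edge.
    return all(v in adj[u] for u, v in combinations(certificate, 2))

# Toy graph: a triangle 0-1-2 plus a pendant vertex 3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(verify_clique(adj, [0, 1, 2], 3))  # True
print(verify_clique(adj, [1, 2, 3], 3))  # False: 1 and 3 are not adjacent
```

No comparably short certificate is known for the *absence* of a k-clique, which is exactly why !CLIQUE is not obviously in NP.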
One common approach to making progress in complexity theory, when we can't crack the hard questions directly, is to show that some specific hard-to-prove result would imply something surprising. Showing that NP = co-NP is a common target of these proofs. For example, a hard problem that lies in both NP and co-NP probably isn't complete for either class: if it were, it would be complete for both, the two classes would be equal, and somehow you would have those crazy general graph proofs.
This even generalizes -- you can start talking about what happens if your NTMs (or certificate checkers) have an oracle for an NP-complete language like CLIQUE. Obviously both CLIQUE and !CLIQUE are in P^CLIQUE, but now there are (probably) new languages in NP^CLIQUE and co-NP^CLIQUE, and you can keep going further until you have an entire hierarchy of complexity classes -- the "polynomial hierarchy". This hierarchy intuitively goes on forever, but may well collapse at some point, or even not exist at all (if P=NP).
The polynomial hierarchy makes this general argument technique more powerful -- showing that some result would make the polynomial hierarchy collapse to the 2nd or 3rd level would make that result pretty surprising. Even showing it collapses at all would be somewhat surprising.
Related
Among P, NP, NP-hard and NP-complete, only the NP-complete set is restricted to decision problems (those that have a yes/no answer). What is the reason for this? Why not define it simply as the intersection of NP and NP-hard? And this leads to another question - there must be problems that are not necessarily decision problems and also have the property that any problem in NP can be reduced to them in polynomial time. Is this then a set encompassing NP-complete? Is there already a name for this set?
EDIT: Per the comment by Matt and also the post What are the differences between NP, NP-Complete and NP-Hard?, it seems P and NP are defined only for decision problems. That would resolve this question, apart from why they would be defined this way. But this seems to be in contradiction to the book Introduction to Algorithms by Cormen et al. In chapter 34, in the section titled "NP-completeness and the classes P and NP", they simply say: "P consists of those problems that are solvable in polynomial time". They even say later, "NP-completeness applies directly not to optimization problems, but to decision problems", but say no such thing about P and NP.
The classes P and NP are indeed classes of decision problems. I don’t have my copy of CLRS handy, but if they’re claiming that P and NP are all problems solvable in polynomial time or nondeterministic polynomial time, respectively, they are incorrect.
There are some good theoretical reasons why it’s helpful to define P and NP as sets of decision problems. Working with decision problems makes reducibility much easier (reducing one problem to another doesn’t require transforming output), simplifies the model by eliminating issues of how big the answer to a question must be, makes it easier to define nondeterministic computation, etc.
But none of those are “dealbreakers” in the sense that you can define analogous complexity classes that involve computing functions rather than just coming up with yes or no answers. For example, the classes FP and FNP are the function problem versions of P and NP, and the question of whether FP = FNP is similarly open.
I've seen references to cut-and-paste proofs in certain texts on algorithm analysis and design. They are often mentioned within the context of dynamic programming, when proving optimal substructure for an optimization problem (see Chapter 15.3 of CLRS). They also show up in graph manipulation.
What is the main idea of such proofs? How do I go about using them to prove the correctness of an algorithm or the convenience of a particular approach?
The term "cut and paste" shows up in algorithms sometimes when doing dynamic programming (and other things too, but that is where I first saw it). The idea is that in order to use dynamic programming, the problem you are trying to solve probably has some kind of underlying redundancy. You use a table or similar technique to avoid solving the same optimization problems over and over again. Of course, before you start trying to use dynamic programming, it would be nice to prove that the problem has this redundancy in it, otherwise you won't gain anything by using a table. This is often called the "optimal substructure" property (e.g., in CLRS).
The "cut and paste" technique is a way to prove that a problem has this property. In particular, you want to show that when you come up with an optimal solution to a problem, you have necessarily used optimal solutions to the constituent subproblems. The proof is by contradiction. Suppose you came up with an optimal solution to a problem by using suboptimal solutions to subproblems. Then, if you were to replace ("cut") those suboptimal subproblem solutions with optimal subproblem solutions (by "pasting" them in), you would improve your optimal solution. But, since your solution was optimal by assumption, you have a contradiction. There are some other steps involved in such a proof, but that is the "cut and paste" part.
The 'cut-and-paste' technique can be used to prove both the correctness of greedy algorithms (both the optimal-substructure and greedy-choice properties) and the correctness of dynamic programming algorithms.
Greedy Correctness
These lecture notes on the correctness of MST from the MIT 2005 undergraduate algorithms class exhibit the 'cut-and-paste' technique to prove both the optimal-substructure and greedy-choice properties.
These lecture notes on the correctness of MST from MIT 6.046J / 18.410J Spring 2015 use the 'cut-and-paste' technique to prove the greedy-choice property.
Dynamic Programming Correctness
A more authentic explanation of 'cut-and-paste' is given in CLRS (3rd edition), Chapter 15.3, "Elements of dynamic programming", at page 379:
"4. You show that the solutions to the subproblems used within the optimal solution to the problem must themselves be optimal by using a “cut-and-paste” technique. You do so by supposing that each of the subproblem solutions is not optimal and then deriving a contradiction. In particular, by “cutting out” the nonoptimal subproblem solution and “pasting in” the optimal one, you show that you can get a better solution to the original problem, thus contradicting your supposition that you already had an optimal solution. If there is more than one subproblem, they are typically so similar that the cut-and-paste argument for one can be modified for the others with little effort."
Cut-and-paste is a technique used in proving graph theory concepts. The idea is this: assume you have a solution to problem A, and you want to show that some edge/node must be part of the solution. You assume you have a solution without the specified edge/node, and then, by cutting an edge/node and pasting in the specified one, you reconstruct a solution and show that the new solution is at least as good as the previous one.
One of the most important examples is proving MST properties (proving that the greedy choice is good enough); see the presentation on MST from the CLRS book.
Proof by contradiction
P is assumed to be false, that is !P is true.
It is shown that !P implies two mutually contradictory assertions, Q and !Q.
Since Q and !Q cannot both be true, the assumption that P is false must be wrong, and P must be true.
It is not a new proof technique per se. It is just a fun way to articulate a proof by contradiction.
An example of cut-and-paste:
Suppose you are solving the shortest path problem in a graph with vertices x_1, x_2, ..., x_n. Suppose you find a shortest path from x_1 to x_n and it goes through x_i and x_j (in that order). Then, clearly, the subpath from x_i to x_j must be a shortest path between x_i and x_j as well. Why? Because Math.
Proof using Cut and Paste:
Suppose there exists a strictly shorter path from x_i to x_j. "Cut" that path, and "Paste" into the overall path from x_1 to x_n. Then, you will have another path from x_1 to x_n that is (strictly) shorter than the previously known path, and contradicts the "shortest" nature of that path.
Plain Old Proof by Contradiction:
Suppose P: The presented path from x_1 to x_n is a shortest path.
Q: The subpath from x_i to x_j is a shortest path.
Then, not Q => not P (using the argument presented above).
Hence, P => Q.
So, "Cut" and "Paste" makes it a bit more visual and easy to explain.
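The splice step itself is mechanical enough to show in code. A minimal sketch in Python, with made-up edge weights: cutting the detour 2 -> 9 -> 3 out of a path and pasting in the direct subpath 2 -> 3 yields a strictly shorter path, which is exactly the contradiction the proof needs.

```python
def path_length(weights, path):
    """Sum of edge weights along a path, given a dict keyed by (u, v)."""
    return sum(weights[(u, v)] for u, v in zip(path, path[1:]))

def cut_and_paste(path, i, j, subpath):
    """Replace path[i..j] with `subpath`, which must share its endpoints."""
    assert subpath[0] == path[i] and subpath[-1] == path[j]
    return path[:i] + subpath + path[j + 1:]

# Made-up weighted graph: the direct hop 2->3 is cheaper than 2->9->3.
weights = {(1, 2): 1, (2, 9): 5, (9, 3): 5, (2, 3): 2, (3, 4): 1}
p = [1, 2, 9, 3, 4]                    # a path from 1 to 4 with a bad subpath
better = cut_and_paste(p, 1, 3, [2, 3])  # -> [1, 2, 3, 4]
print(path_length(weights, p), path_length(weights, better))  # 12 4
```

If p had really been a shortest path, no such strictly better splice could exist; that is the cut-and-paste contradiction in executable form.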
I am reading about the theory of P, NP, and NP-complete problems. Here is a text snippet.
The class NP includes all problems that have polynomial-time
solutions, since obviously the solution provides a check. One would
expect that since it is so much easier to check an answer than to
come up with one from scratch, there would be problems in NP that do
not have polynomial-time solutions. To date no such problem has been
found, so it is entirely possible, though not considered likely by
experts, that nondeterminism is not such an important improvement. The
problem is that proving exponential lower bounds is an extremely
difficult task. The information theory bound technique, which we used
to show that sorting requires Ω(n log n) comparisons, does not seem to
be adequate for the task, because the decision trees are not nearly
large enough.
My question is: what does the author mean by the statement "To date no such problem has been found, so it is entirely possible, though not considered likely by experts, that nondeterminism is not such an important improvement"?
Another question: what does the author mean in the last statement by "because the decision trees are not nearly large enough"?
Thanks!
(1) I think the author means that no NP problem has been found, for which it is proven that it is not in P. Certainly there are problems in NP for which no polynomial solution is known, but that's not the same as knowing that none exists.
If in fact P = NP (that is to say, if in fact there are no NP problems that don't have a polynomial solution), then in some sense a nondeterministic machine is no "more powerful" than a deterministic machine, since they solve the same problems in polynomial time. Then we'd say "nondeterminism is not such an important improvement".
(2) The way that the n log n proof works is that there are n! possible outputs from a sorting function, any one of which might be the correct one according to what order the input was in. Each comparison adds a two-legged branch to the tree of all possible states that a given comparison sort algorithm can get into. In order to sort any input, this "decision tree" must have enough branches to produce any of the n! possible re-orderings of the input, and hence there must be at least log(n!) comparisons. So, the lower bound on runtime comes from the size of the tree.
The author is saying that there are no known NP problems for which we've proved they require a tree so large that it implies a lower bound that is super-polynomial. Any such proof would prove P != NP.
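The information-theoretic bound is easy to check numerically: any comparison sort's decision tree has n! leaves, so its depth (worst-case number of comparisons) is at least ceil(log2(n!)), which grows like n log n. A quick sketch in Python:

```python
import math

def comparison_lower_bound(n):
    """A comparison sort's decision tree must have at least n! leaves
    (one per possible input ordering), so its depth -- the worst-case
    number of comparisons -- is at least ceil(log2(n!))."""
    return math.ceil(math.log2(math.factorial(n)))

for n in (3, 4, 16, 64):
    print(n, comparison_lower_bound(n))
# e.g. sorting 3 items provably needs at least 3 comparisons
```

These depths are only polynomial in n, which illustrates the author's point: decision trees of this kind are far too small to yield the super-polynomial lower bounds that separating P from NP would require.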
The author is raising the possibility that someone may come up with a solution to NP-complete problems that does not take exponential time.
The second part is a little vague. He seems to be saying that the lower bound for comparison sorting, which we all agree to be Ω(n log n), comes from an information-theoretic argument over decision trees, and that this technique does not seem able to prove the much larger, super-polynomial lower bounds that would be needed here. This is really vague.
BTW, of all the introductions to NP-related buzzword explanations, I find this one super confusing. Which book/chapter is this from?
A good text is Michael Sipser's Theory of Computation; or listen to Shai Simonson's lectures.
Would it be a polynomial-time algorithm for a specific NP-complete problem, or just abstract reasoning demonstrating that solutions to NP-complete problems exist?
It seems that a specific algorithm would be much more helpful. With it, all we'd have to do to solve an NP problem in polynomial time is convert it into the specific NP-complete problem for which the proof provides a solution, and we'd be done.
P = NP: "The 3SAT problem is a classic NP-complete problem. In this proof, we demonstrate an algorithm to solve it that has an asymptotic bound of O(n^99 log log n). First we ..."
P != NP: "Assume there was a polynomial algorithm for the 3SAT problem. This would imply that .... which by ..... implies we can do .... and then ... and then ... which is impossible. This was all predicated on a polynomial time algorithm for 3SAT. Thus P != NP."
UPDATE: Perhaps something like this paper (for P != NP).
UPDATE 2: Here's a video of Michael Sipser sketching out a proof for P != NP
Call me pessimistic, but it will be like this:
...
∴, P ≠ NP
QED
There are some meta-results about what a P=NP or P≠NP proof cannot look like. The details are quite technical, but it is known that the proof cannot be:
- relativizing, which roughly means that the proof must make use of the exact definition of Turing machine used, because with some modifications ("oracles", like very powerful CISC instructions added to the instruction set) P=NP, and with some other modifications, P≠NP. See also this blog post for a nice explanation of relativization.
- natural, a property of several classic circuit complexity proofs, or
- algebrizing, a generalization of relativizing.
It could take the form of demonstrating that assuming P ≠ NP leads to a contradiction.
It might not be connected to P and NP in a straightforward way... Many theorems now assume P != NP, so proving one assumed fact untrue would make a big difference. Even proving something like a constant-ratio approximation for TSP should be enough, IIRC. I think the existence of NPI (e.g., graph isomorphism) and other such sets is also based on P != NP, so making any of them equal to P or NP might change the situation completely.
IMHO everything happens now on a very abstract level. If someone proves anything about P=/!=NP, it doesn't have to mention any of those sets or even a specific problem.
Probably it would take the form of a reduction from an NP-complete problem to a problem in P. See the Wikipedia page on reductions.
OR
Like this proof proposed by Vinay Deolalikar.
The most straightforward way is to prove that there is a polynomial-time solution to the problems in the class NP-complete. These are problems that are in NP and to which any known NP-complete problem can be reduced. That means you could give a faster algorithm for the original problem posed by Stephen Cook, or for any of the many others that have also been shown to be NP-complete. See Richard Karp's seminal paper and this book for more interesting problems. It has been shown that if you solve one of these problems, the entire complexity class collapses. edit: I have to add that I was talking to my friend who is studying quantum computation. Although I had no clue what it means, he said that a certain proof/experiment? in the quantum world could make the entire complexity class, I mean the whole thing, moot. If anyone here knows more about this, please reply.
There have also been numerous attempts at the problem without giving a formal algorithm. You could try to count the set. There's the Robertson/Seymour proof. People have also tried to solve it using the tried and tested diagonalization proof (also used to show that there are problems that you can never solve). Razborov also showed that if certain one-way functions exist, then any such proof cannot give a resolution. That means that new techniques will be required in order to solve this question.
It's been 38 years since the original paper was published, and there is still no sign of a proof. Not only that, but many problems that mathematicians had posed before the notion of complexity classes came along have been shown to be NP. Therefore many mathematicians and computer scientists believe that some of these problems are so fundamental that a new kind of math may be needed to solve them. You have to keep in mind that the best minds the human race has to offer have tackled this problem without success. I think it will be at least decades before somebody cracks the puzzle. But even if there is a polynomial-time solution, the constants or the exponent could be so large that it would be useless for our problems.
There is an excellent survey available which should answer most of your questions: http://www.scottaaronson.com/papers/pnp.pdf.
Certainly a descriptive proof is the most useful, but there are other categories of proof: it is possible, for example, to provide 'existence proofs' that demonstrate that it is possible to find an answer without finding (or, sometimes, even suggesting how to find) that answer.
Set N equal to the multiplicative identity. Then NP = P. QED. ;-)
It would likely look almost precisely like one of these
Good question; it could take either form. Obviously, the specific algorithm would be more helpful, yes, but there's no determining that that would be the way a theoretical P=NP proof would occur. Given the nature of NP-complete problems and how common they are, it would seem that more effort has been put into solving those problems than into the theoretical-reasoning side of the equation, but that's just supposition.
Any nonconstructive proof that P=NP really is not nonconstructive, because it would imply that the following explicit 3-SAT algorithm runs in polynomial time:
Enumerate all programs. On round i, run all programs numbered less than i for one step. If a program terminates with a satisfying input to the formula, return true. If a program terminates with a formal proof that no such input exists, return false.
If P=NP, then there exists a program which runs in O(poly(N)) and outputs a satisfying input to the formula, if such an input exists.
If P=coNP, there exists a program which runs in O(poly(N)) and outputs a formal proof that no satisfying input exists, if none exists.
If P=NP, then since P is closed under complement NP=coNP. So, there exists a program which runs in O(poly(N)) and does both. That program is the k'th program in the enumeration. k is O(1)! Since it runs in O(poly(N)) our brute force simulation only requires
k*O(poly(N))+O(poly(N))^2
rounds once it reaches the program in question. As such, the brute force simulation runs in polynomial time!
(Note that k is exponential in the size of the program; this approach is not really feasible, but it suggests that it would be hard to do a nonconstructive proof that P=NP, even if it were the case.)
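The enumeration above can be sketched as a toy dovetailing scheduler. Here the "programs" are Python generators that yield None on each simulated step and yield an answer when they halt; the looper/solver programs below are invented stand-ins for the real enumeration of all programs:

```python
def dovetail(programs):
    """Round-robin ("dovetailing") simulation: on round i, admit program i
    and run every admitted program for one more step. Each program is a
    generator yielding None while still working, and its answer on halting."""
    admitted = []
    i = 0
    while True:
        if i < len(programs):      # toy stand-in for "enumerate all programs"
            admitted.append(programs[i]())
        i += 1
        for prog in admitted:
            result = next(prog, None)
            if result is not None:  # this program halted with an answer
                return result

# Hypothetical "programs": most loop forever; one finds a satisfying assignment.
def looper():
    while True:
        yield None                  # simulates a program that never halts

def solver():
    for _ in range(5):              # pretend to compute for five steps
        yield None
    yield ('sat', {'x': True})      # halts with a satisfying assignment

programs = [looper, looper, looper, solver]
print(dovetail(programs))           # ('sat', {'x': True})
```

The point of the argument is that if the k'th program is a correct poly-time solver, the scheduler's overhead before reaching and finishing it is itself polynomial (for fixed k), so the whole brute-force simulation runs in polynomial time.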
An interesting read that is somewhat related to this
To some extent, the form such a proof needs to have depends on your philosophical point of view (= the axioms you deem to be true) - e.g., as a constructivist you would demand the construction of an actual algorithm that requires polynomial time to solve an NP-complete problem. This could be done using a reduction, but not with an indirect proof. Anyhow, it really seems to be very unlikely :)
The proof would deduce a contradiction from the assumption that at least one element (problem) of NP isn't also an element of P.
Assuming some background in mathematics, how would you give a general overview of computational complexity theory to the naive?
I am looking for an explanation of the P = NP question. What is P? What is NP? What is NP-hard?
Sometimes Wikipedia is written as if the reader already understands all concepts involved.
Hoooo, doctoral comp flashback. Okay, here goes.
We start with the idea of a decision problem, a problem for which an algorithm can always answer "yes" or "no." We also need the idea of two models of computer (Turing machine, really): deterministic and non-deterministic. A deterministic computer is the regular computer we always think of; a non-deterministic computer is one that is just like the ones we're used to, except that it has unlimited parallelism, so that any time you come to a branch, you spawn a new "process" and examine both sides. Like Yogi Berra said, when you come to a fork in the road, you should take it.
A decision problem is in P if there is a known polynomial-time algorithm to get that answer. A decision problem is in NP if there is a known polynomial-time algorithm for a non-deterministic machine to get the answer.
Problems known to be in P are trivially in NP --- the nondeterministic machine just never troubles itself to fork another process, and acts just like a deterministic one. There are problems that are known to be in neither P nor NP; a simple example is to enumerate all the bit vectors of length n. No matter what, that takes 2^n steps.
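For instance, a minimal sketch of that enumeration in Python; the output alone has 2^n entries, so no algorithm (deterministic or not) can produce it in polynomial time:

```python
from itertools import product

def all_bit_vectors(n):
    """Enumerate every bit vector of length n. The output itself contains
    2**n strings, so producing it must take at least 2**n steps."""
    return [''.join(bits) for bits in product('01', repeat=n)]

print(all_bit_vectors(2))       # ['00', '01', '10', '11']
print(len(all_bit_vectors(4)))  # 16, i.e. 2**4
```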
(Strictly, a decision problem is in NP if a nondeterministic machine can arrive at an answer in poly-time, and a deterministic machine can verify that the solution is correct in poly-time.)
But there are some problems which are known to be in NP for which no poly-time deterministic algorithm is known; in other words, we know they're in NP, but don't know if they're in P. The traditional example is the decision-problem version of the Traveling Salesman Problem (decision-TSP): given the cities and distances, is there a route that covers all the cities, returning to the starting point, in less than x distance? It's easy in a nondeterministic machine, because every time the nondeterministic traveling salesman comes to a fork in the road, he takes it: his clones head on to the next city they haven't visited, and at the end they compare notes and see if any of the clones took less than x distance.
(Then, the exponentially many clones get to fight it out for which ones must be killed.)
It's not known whether decision-TSP is in P: there's no known poly-time solution, but there's no proof such a solution doesn't exist.
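For contrast, the deterministic machine has no clones to spawn. A brute-force sketch of decision-TSP in Python, with a made-up distance table, tries all (n-1)! orderings and so takes exponential time:

```python
from itertools import permutations

def decision_tsp(dist, x):
    """Deterministic brute force for decision-TSP: is there a tour visiting
    every city and returning to the start with total length < x?
    Tries all (n-1)! orderings, hence exponential time."""
    cities = list(dist)
    start, rest = cities[0], cities[1:]
    for order in permutations(rest):
        tour = [start, *order, start]
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if length < x:
            return True
    return False

# Made-up symmetric distance table for four cities.
dist = {
    'A': {'B': 1, 'C': 4, 'D': 3},
    'B': {'A': 1, 'C': 2, 'D': 5},
    'C': {'A': 4, 'B': 2, 'D': 1},
    'D': {'A': 3, 'B': 5, 'C': 1},
}
print(decision_tsp(dist, 8))  # True  (A-B-C-D-A has length 1+2+1+3 = 7)
print(decision_tsp(dist, 7))  # False (no tour is strictly shorter than 7)
```

The nondeterministic salesman gets the same answer in polynomial time by exploring all orderings "at once"; deterministically, no known algorithm avoids the exponential blowup.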
Now, one more concept: given decision problems P and Q, if an algorithm can transform a solution for P into a solution for Q in polynomial time, it's said that Q is poly-time reducible (or just reducible) to P.
A problem is NP-complete if you can prove that (1) it's in NP, and (2) show that it's poly-time reducible to a problem already known to be NP-complete. (The hard part of that was proving the first example of an NP-complete problem: that was done by Steve Cook in Cook's Theorem.)
So really, what it says is that if anyone ever finds a poly-time solution to one NP-complete problem, they've automatically got one for all the NP-complete problems; that will also mean that P=NP.
A problem is NP-hard if and only if it's "at least as" hard as an NP-complete problem. The more conventional Traveling Salesman Problem of finding the shortest route is NP-hard, not strictly NP-complete.
Michael Sipser's Introduction to the Theory of Computation is a great book, and is very readable. Another great resource is Scott Aaronson's Great Ideas in Theoretical Computer Science course.
The formalism that is used is to look at decision problems (problems with a Yes/No answer, e.g. "does this graph have a Hamiltonian cycle") as "languages" -- sets of strings -- inputs for which the answer is Yes. There is a formal notion of what a "computer" is (Turing machine), and a problem is in P if there is a polynomial time algorithm for deciding that problem (given an input string, say Yes or No) on a Turing machine.
A problem is in NP if it is checkable in polynomial time, i.e. if, for inputs where the answer is Yes, there is a (polynomial-size) certificate given which you can check that the answer is Yes in polynomial time. [E.g. given a Hamiltonian cycle as certificate, you can obviously check that it is one.]
It doesn't say anything about how to find that certificate. Obviously, you can try "all possible certificates" but that can take exponential time; it is not clear whether you will always have to take more than polynomial time to decide Yes or No; this is the P vs NP question.
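A certificate check like the Hamiltonian-cycle one above is easy to sketch in Python (the small 4-cycle graph here is invented for illustration):

```python
def verify_hamiltonian_cycle(adj, cycle):
    """Polynomial-time certificate check: does `cycle` visit every vertex
    exactly once and travel only along existing edges, including the
    closing edge back to the start?"""
    if set(cycle) != set(adj) or len(cycle) != len(adj):
        return False
    closed = cycle + [cycle[0]]  # append start to check the closing edge
    return all(v in adj[u] for u, v in zip(closed, closed[1:]))

# A 4-cycle graph 0-1-2-3-0.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(verify_hamiltonian_cycle(adj, [0, 1, 2, 3]))  # True
print(verify_hamiltonian_cycle(adj, [0, 2, 1, 3]))  # False: 0-2 is not an edge
```

Checking the certificate is trivial; finding one, as the text says, is the whole difficulty.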
A problem is NP-hard if being able to solve that problem means being able to solve all problems in NP.
Also see this question:
What is an NP-complete in computer science?
But really, all these are probably only vague to you; it is worth taking the time to read e.g. Sipser's book. It is a beautiful theory.
This is a comment on Charlie's post.
A problem is NP-complete if you can prove that (1) it's in NP, and
(2) show that it's poly-time reducible to a problem already known to
be NP-complete.
There is a subtle error with the second condition. Actually, what you need to prove is that a known NP-complete problem (say Y) is polynomial-time reducible to this problem (let's call it problem X).
The reasoning behind this manner of proof is that if you could reduce an NP-complete problem to this problem and somehow manage to solve this problem in poly-time, then you've also succeeded in finding a poly-time solution to the NP-complete problem, which would be a remarkable (if not impossible) thing, since then you'll have succeeded in resolving the long-standing P = NP problem.
Another way to look at this proof is to consider it as using the contrapositive proof technique, which essentially states that P → Q is equivalent to ~Q → ~P. Here, Y reducing to X means that solving X in poly-time would let you solve Y in poly-time; contrapositively, if Y cannot be solved in polynomial time, then X cannot be solved in poly-time either. On the other hand, if you could solve X in poly-time, then you could solve Y in poly-time as well. Further, by transitivity, you could solve in poly-time all problems that reduce to Y.
I hope my explanation above is clear enough. A good source is Chapter 8 of Algorithm Design by Kleinberg and Tardos or Chapter 34 of Cormen et al.
Unfortunately, the best two books I am aware of (Garey and Johnson and Hopcroft and Ullman) both start at the level of graduate proof-oriented mathematics. This is almost certainly necessary, as the whole issue is very easy to misunderstand or mischaracterize. Jeff nearly got his ears chewed off when he attempted to approach the matter in too folksy/jokey a tone.
Perhaps the best way is to simply do a lot of hands-on work with big-O notation using lots of examples and exercises. See also this answer. Note, however, that this is not quite the same thing: individual algorithms can be described by asymptotes, but saying that a problem is of a certain complexity is a statement about every possible algorithm for it. This is why the proofs are so complicated!
I remember "Computational Complexity" by Papadimitriou (I hope I spelled the name right) as a good book.
Very much simplified: a problem is NP-hard if the only known way to solve it is by enumerating all possible answers and checking each one.
Here are a few links on the subject:
Clay Mathematics statement of the P vs NP problem
P vs NP Page
P, NP, and Mathematics
If you are familiar with the idea of set cardinality, that is, the number of elements in a set, then one could view the question like this: P represents the cardinality of the integers, while NP is a mystery: is it the same, or is it larger, like the cardinality of the reals?
My simplified answer would be: "Computational complexity is the analysis of how much harder a problem becomes when you add more elements."
In that sentence, the word "harder" is deliberately vague because it could refer either to processing time or to memory usage.
In computer science it is not enough to be able to solve a problem. It has to be solvable in a reasonable amount of time. So while in pure mathematics you come up with an equation, in CS you have to refine that equation so you can solve a problem in reasonable time.
That is the simplest way I can think to put it, that may be too simple for your purposes.
Depending on how long you have, maybe it would be best to start with DFAs and NDFAs, and then show that they are equivalent. Then they'll understand ND vs. D, and will understand regular expressions a lot better as a nice side effect.