Why co-NP is not a subset of NP - complexity-theory

Someone asked me this question and I found I could not answer it even after spending some time re-reading my college textbooks. Specifically, here is the definition of co-NP in many textbooks:
Definition 1
"a problem A is in co-NP if and only if there is a polynomial time procedure V (·, ·) and a polynomial bound p() such that x ∈ A if and only if ∀y : |y| ≤ p(|x|), V (x, y) = 1"
Doesn't this mean that if A is in co-NP, then it MUST have a certificate (because every y would be a certificate) and therefore, A is also in NP?
On further thought, I am not sure the above definition is correct. Given the following definition of NP:
Definition 2
"a decision problem A is in NP if and only if there is a polynomial time
procedure V (·, ·) and a polynomial time bound p() such that x ∈ A if and only if ∃y.|y| ≤ p(|x|) ∧ V (x, y) = 1"
The straightforward definition for co-NP seems to be:
Definition 3
"a decision problem A is in co-NP if and only if there is a polynomial time bound p() such that x ∈ A if and only if ∀y : |y| ≤ p(|x|) there does NOT exist a polynomial time procedure V(.,.) such that V (x, y) = 1"
However, Definition 3 is not equivalent to Definition 1, because V(.,.) can be undecidable. Am I missing anything? Thanks!

Doesn't this mean that if A is in co-NP, then it MUST have a certificate (because every y would be a certificate) and therefore, A is also in NP?
No. V is not a verifier for problem A in the sense of the definition of NP. For V to be a verifier in that sense, we would need to be able to determine x ∈ A by finding a single y such that V(x, y) = 1. With this V, we need to check all possible values of y.
Your proposed "straightforward definition" of co-NP is wrong. For any problem A, we could pick V to be the procedure that ignores its arguments and immediately returns 1. With that V, the condition "there does not exist a polynomial time procedure V such that V(x, y) = 1" is never satisfied, so by your definition essentially nothing would be in co-NP (only the empty language could satisfy it).
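To make the quantifier difference concrete, here is a minimal brute-force sketch (my own illustration, not from the answer; the verifier V, the bound p, and the binary alphabet are assumptions). It shows that the same pair (V, p) defines one language when read with "there exists a certificate" and a possibly different one when read with "every candidate certificate":

```python
from itertools import product

def all_strings(max_len, alphabet="01"):
    """Every string over `alphabet` of length at most max_len."""
    for n in range(max_len + 1):
        for chars in product(alphabet, repeat=n):
            yield "".join(chars)

def np_style_accept(x, V, p):
    # Definition 2: x ∈ A  iff  SOME y with |y| <= p(|x|) has V(x, y) = 1
    return any(V(x, y) == 1 for y in all_strings(p(len(x))))

def conp_style_accept(x, V, p):
    # Definition 1: x ∈ A  iff  EVERY y with |y| <= p(|x|) has V(x, y) = 1
    return all(V(x, y) == 1 for y in all_strings(p(len(x))))
```

Neither loop runs in polynomial time, of course; the point is only that finding a single accepting y and checking all y are different acceptance conditions, so the same V is not automatically an NP-style verifier for a co-NP language.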

Related

Why is the set of variables in lambda calculus typically defined as countably infinite?

When reading formal descriptions of the lambda calculus, the set of variables seems to always be defined as countably infinite. Why this set cannot be finite seems clear; defining the set of variables as finite would restrict term constructions in unacceptable ways. However, why not allow the set to be uncountably infinite?
Currently, the most sensible answer I have received is that choosing a countably infinite set of variables lets us enumerate the variables, which makes the description of how to choose fresh variables (say, for an alpha-rewrite) natural.
I am looking for a definitive answer to this question.
Most definitions and constructs in maths and logic include only the minimal apparatus that is required to achieve the desired end. As you note, more than a finite number of variables may be required. But since no more than a countable infinity is required, why allow more?
The reason that this set is required to be countable is quite simple. Imagine that you had a bag full of the variables. There would be no way to count the number of variables in this bag unless the set was denumerable.
Note that bags are isomorphic to sacks.
Uncountable collections of things must contain uncomputable elements: there are only countably many programs (finite descriptions), so only countably many objects can be computed, or even named by a finite description.
As a result, you could never write out the names of most of those elements in any reasonable way. For example, unlike a number like π, you cannot have a program that writes out the digits of Chaitin's constant past a certain finite number of digits. The set of computable real numbers is countably infinite, so the "additional" reals you get are uncomputable.
I also don't believe you gain anything from the set being uncountably infinite. So you would introduce uncomputable names without benefit (as far as I can see).
Having a countable number of variables, and a computable bijection between them and ℕ, lets us define a computable, injective encoding # from Λ into ℕ:
#v = ⟨0, f(v)⟩, where f is the computable bijection between 𝕍 and ℕ (it exists because 𝕍 is countable) and ⟨m, n⟩ is a computable pairing bijection between ℕ² and ℕ.
#(L M) = ⟨1, ⟨#L, #M⟩⟩
#(λv. L) = ⟨2, ⟨#v, #L⟩⟩
The notation ⌜L⌝ represents c_{#L}, the Church numeral representing the encoding of L. For all sets S, #S represents the set {#L | L ∈ S}.
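As a concrete illustration, here is a small Python sketch of this numbering (my own; the Var/App/Abs representation and the use of the Cantor pairing function as ⟨m, n⟩ are assumptions, and variables are named directly by natural numbers so that f is the identity):

```python
from dataclasses import dataclass

@dataclass
class Var:
    index: int      # the variable v_index; f maps it to index

@dataclass
class App:
    left: object    # L in (L M)
    right: object   # M in (L M)

@dataclass
class Abs:
    var: int        # index of the bound variable v
    body: object    # L in (λv. L)

def pair(m, n):
    """Cantor pairing: a computable bijection ℕ² → ℕ, playing the role of ⟨m, n⟩."""
    return (m + n) * (m + n + 1) // 2 + n

def godel(term):
    """#term, following the three clauses above."""
    if isinstance(term, Var):
        return pair(0, term.index)                                  # #v = ⟨0, f(v)⟩
    if isinstance(term, App):
        return pair(1, pair(godel(term.left), godel(term.right)))   # #(L M) = ⟨1, ⟨#L, #M⟩⟩
    if isinstance(term, Abs):
        return pair(2, pair(pair(0, term.var), godel(term.body)))   # #(λv. L) = ⟨2, ⟨#v, #L⟩⟩
    raise TypeError("not a λ-term")

print(godel(Abs(0, Var(0))))   # the number of λv_0. v_0
```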
This allows us to prove that lambda calculus is not decidable:
Let A be a non-trivial set (not ∅ or Λ) closed under α- and β-equality (if L ∈ A and L β= M, then M ∈ A). Let B be the set {L | L⌜L⌝ ∈ A}. Assume that #A is recursive. Then f, defined by f(x) = 1 if x ∈ #A and f(x) = 0 otherwise, is a μ-recursive function. All μ-recursive functions are λ-definable*, so there must be an F for which:
F⌜L⌝ = c_1 ⇔ L ∈ A
F⌜L⌝ = c_0 ⇔ L ∉ A
Now let G ≡ λn. iszero (F ⟨1, ⟨n, #n⟩⟩) M_0 M_1, where M_0 is any λ-term in A and M_1 is any λ-term not in A (both exist because A is non-trivial), and ⟨1, ⟨n, #n⟩⟩ stands for a λ-term that, from the numeral for n = #L, computes the numeral for #(L⌜L⌝); here #n means the number of the numeral c_n, and the map n ↦ ⟨1, ⟨n, #n⟩⟩ is computable and therefore λ-definable. Since iszero c_0 selects the first alternative and iszero c_1 the second, G⌜L⌝ β= M_1 if L ∈ B and G⌜L⌝ β= M_0 if L ∉ B.
Now just ask the question "Is G in B?", i.e., "Is G⌜G⌝ in A?". If yes, then G⌜G⌝ β= M_1, and since A is closed under β=, M_1 ∈ A, contradicting the choice of M_1. If no, then G⌜G⌝ β= M_0 ∈ A, so by closure G⌜G⌝ ∈ A, which means G ∈ B after all.
Either way we get a contradiction, so #A could not have been recursive; therefore no non-trivial set that is closed under β= is recursive.
Note that {L | L β= true} is closed under β= and non-trivial, so it is not recursive. This means the lambda calculus is not decidable.
* The proof that all computable functions are λ-definable (we can have a λ-term F such that F c_{n1} c_{n2} ... = c_{f(n1, n2, ...)}), as well as the proof in this answer, can be found in "Lambda Calculi With Types" by Henk Barendregt (section 2.2).

What does this symbol mean in graph theory ≤P?

In this case the p is supposed to be a subscript. Is it supposed to mean less than or equal polynomial time?
A ≤p B means that there is a polynomial-time many-one reduction from A to B, i.e., there exists a polynomial-time computable function f such that, for every string x, we have x ∈ A if and only if f(x) ∈ B.
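For a concrete example of such an f (my own illustration, not from the answer), here is the classic reduction VERTEX-COVER ≤p INDEPENDENT-SET: a graph on n vertices has a vertex cover of size k if and only if it has an independent set of size n − k, so the map below is a polynomial-time many-one reduction.

```python
def reduce_vc_to_is(graph, k):
    """f(x): map a VERTEX-COVER instance (graph, k) to an INDEPENDENT-SET
    instance (graph, n - k). `graph` is a dict {vertex: set of neighbours}.
    S is a vertex cover exactly when the remaining vertices form an
    independent set, so membership is preserved both ways."""
    return graph, len(graph) - k

# A triangle has a vertex cover of size 2 and an independent set of size 1.
triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
print(reduce_vc_to_is(triangle, 2))
```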

Finding integral solution of an equation

This is part of a bigger question; it's actually a mathematical problem. It would be really great if someone could direct me to an algorithm for this problem, or some pseudocode would help.
The question: given an equation, check whether it has an integral solution.
For example:
(26a+5)/32=b
Here a is an integer. Is there an algorithm to determine whether b can be an integer? I need a general method, not one specific to this equation; the equation can vary. Thanks
Your problem is an example of a linear Diophantine equation. About that, Wikipedia says:
This Diophantine equation [i.e., a x + b y = c] has a solution (where x and y are integers) if and only if c is a multiple of the greatest common divisor of a and b. Moreover, if (x, y) is a solution, then the other solutions have the form (x + k v, y - k u), where k is an arbitrary integer, and u and v are the quotients of a and b (respectively) by the greatest common divisor of a and b.
In this case, (26 a + 5)/32 = b is equivalent to 26 a - 32 b = -5. The gcd of the coefficients of the unknowns is gcd(26, -32) = 2. Since -5 is not a multiple of 2, there is no solution.
A general Diophantine equation is a polynomial in the unknowns, and can only be solved (if at all) by more complex methods. A web search might turn up specialized software for that problem.
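The divisibility test described above is easy to automate. A minimal sketch (my own, using Python's math.gcd), applied to the example 26a − 32b = −5:

```python
import math

def has_integral_solution(a, b, c):
    """Does a*x + b*y = c have integer solutions x, y?
    True exactly when gcd(a, b) divides c."""
    return c % math.gcd(abs(a), abs(b)) == 0

# The question's equation, rewritten as 26*a + (-32)*b = -5:
print(has_integral_solution(26, -32, -5))   # False: gcd(26, 32) = 2 does not divide 5
```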
Linear Diophantine equations take the form ax + by = c. In the special case where c is the greatest common divisor of a and b, so that a = z'c and b = z''c, dividing through by c turns the equation into Bézout's identity
z'x + z''y = 1 = gcd(z', z''),
which always has solutions, and in fact infinitely many of them. More generally, ax + by = c has solutions exactly when c is a multiple of the greatest common divisor (GCD) of a and b, so instead of a trial-search method you can simply check that divisibility condition.
If it holds, x and y can be computed using the extended Euclidean algorithm, which finds integers x and y (one of which is typically negative) satisfying Bézout's identity ax + by = gcd(a, b); scaling that solution by c / gcd(a, b) solves the original equation. (As a side note, this also works in any other Euclidean domain, e.g. a polynomial ring, and every Euclidean domain is a unique factorization domain.) You can use the iterative method to find these solutions:
Integral solution to equation `a + bx = c + dy`
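Here is a short sketch of that iterative extended Euclidean algorithm (my own implementation; the function names are made up). It returns g, x, y with a·x + b·y = g = ±gcd(a, b), and scaling by c // g gives one particular solution when one exists:

```python
def extended_gcd(a, b):
    """Iterative extended Euclid: returns (g, x, y) with a*x + b*y = g,
    where g is gcd(a, b) up to sign (it inherits the sign of the inputs)."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

def solve_linear_diophantine(a, b, c):
    """One integer solution (x, y) of a*x + b*y = c, or None if none exists."""
    g, x, y = extended_gcd(a, b)
    if c % g != 0:
        return None
    k = c // g
    return x * k, y * k

print(solve_linear_diophantine(26, -32, -5))   # None: 2 does not divide 5
print(solve_linear_diophantine(26, -32, -4))   # (-10, -8), since 26*(-10) - 32*(-8) = -4
```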

Why is P ⊆ co-NP?

I've seen several places that have simply stated that it's known that P is a subset of the intersection of NP and co-NP. Proofs that show that P is a subset of NP are not hard to find. So to show that it's a subset of the intersection, all that's left to be done is show that P is a subset of co-NP. What might a proof of this be like? Thank you much!
The class P is closed under complementation: if L is a language in P, then the complement of L is also in P. You can see this by taking any polynomial-time decider for L and switching the accept and reject states; this new machine now decides the complement of L and does so in polynomial time.
A language L is in co-NP iff its complement is in NP. So consider any language L ∈ P. The complement of L is also in P, so the complement of L is therefore in NP (because P ⊆ NP). Therefore, L is in co-NP. Consequently, P ⊆ co-NP.
Hope this helps!
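As a toy illustration of the "swap the accept and reject states" step (my own example; the language of strings with an even number of 1s stands in for an arbitrary L ∈ P):

```python
def decides_L(x: str) -> bool:
    """A stand-in polynomial-time decider for L:
    here, L = strings with an even number of '1' characters."""
    return x.count("1") % 2 == 0

def decides_complement_of_L(x: str) -> bool:
    # The same decider with accept and reject swapped: it runs in the
    # same time bound and decides the complement of L.
    return not decides_L(x)

print(decides_L("1011"), decides_complement_of_L("1011"))   # False True
```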
Think of it this way. Consider the class co-P. Since P is closed under complement, P = co-P.
It should also be clear that co-P is a subset of co-NP because P is contained in NP. Since P = co-P, it follows that P is contained in co-NP.

How do we know that an NFA has the minimum number of states?

Is there some kind of proof for this? How can we know that the current NFA has the minimum number of states?
As opposed to DFA minimization, where efficient methods exist not only to determine the size of, but actually to compute, the smallest DFA (in terms of number of states) recognizing a given regular language, no such general method is known for determining the size of a smallest NFA. Moreover, unless P = PSPACE, no polynomial-time algorithm exists to compute a minimal NFA for a language, as the following decision problem is PSPACE-complete:
Given a DFA M that accepts the regular language L, and an integer k, is there an NFA with ≤ k states accepting L?
(Jiang & Ravikumar 1993).
There is, however, a simple theorem from Glaister and Shallit that can be used to determine lower bounds on the number of states of a minimal NFA:
Let L ⊆ Σ* be a regular language and suppose that there exist n pairs P = { (x_i, w_i) | 1 ≤ i ≤ n } such that:
x_i w_i ∈ L for 1 ≤ i ≤ n
x_j w_i ∉ L for 1 ≤ i, j ≤ n and j ≠ i
Then any NFA accepting L has at least n states.
See: Ian Glaister and Jeffrey Shallit (1996). "A lower bound technique for the size of nondeterministic finite automata". Information Processing Letters 59 (2), pp. 75–77. DOI:10.1016/0020-0190(96)00095-6.
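Checking the two conditions for a concrete language and a concrete candidate set of pairs is mechanical. A small sketch (my own; the finite example language is an assumption chosen just to keep membership testing trivial):

```python
def fooling_set_lower_bound(pairs, in_L):
    """If `pairs` = [(x_1, w_1), ..., (x_n, w_n)] satisfies the two
    Glaister-Shallit conditions for the membership predicate `in_L`,
    return n; any NFA for L then has at least n states."""
    for i, (x_i, w_i) in enumerate(pairs):
        if not in_L(x_i + w_i):                 # condition 1: x_i w_i ∈ L
            raise ValueError(f"pair {i} violates condition 1")
        for j, (x_j, _) in enumerate(pairs):
            if j != i and in_L(x_j + w_i):      # condition 2: x_j w_i ∉ L
                raise ValueError(f"pairs ({j}, {i}) violate condition 2")
    return len(pairs)

# Example: L = { a^k b^k | 1 <= k <= 4 }. The pairs (a^k, b^k) show that
# any NFA accepting L needs at least 4 states.
L = {"a" * k + "b" * k for k in range(1, 5)}
pairs = [("a" * k, "b" * k) for k in range(1, 5)]
print(fooling_set_lower_bound(pairs, lambda s: s in L))   # 4
```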
