I know that you cannot give a verification certificate. But, I was just thinking, why can't we give the input to an NDTM deciding SAT, and then reverse the answer? Where is the flaw?
It's actually not known whether the complement of SAT is in NP. If P = NP, then, since P is closed under complementation, the complement of SAT is in P and therefore in NP. Taking the contrapositive: if the complement of SAT is not in NP, then P ≠ NP.
It's suspected that the complement of SAT is not in NP because the complement of SAT consists (ignoring garbage malformed strings) of propositional formulas that are unsatisfiable. It's unclear what information you could nondeterministically guess that would help you determine whether a formula never evaluates to true, whereas in the case of SAT it's easy to nondeterministically guess a satisfying assignment to check whether a formula is indeed satisfiable.
As for the error in your reasoning: an NTM accepts iff there is some accepting branch of the computation. If you flip all "accepts" to "rejects," you don't flip the overall result of the computation. To flip the result, the complemented NTM would have to accept iff every branch of the original accepts, rather than iff at least one branch accepts.
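As a toy illustration (my own sketch, modeling one nondeterministic computation as just the list of its branches' outcomes):

branches = [False, True, False]          # two rejecting branches, one accepting

accepts = any(branches)                  # the NTM accepts: True
flipped = any(not b for b in branches)   # flip each branch's answer: still True!
complement = not accepts                 # what complementing requires: False

# Flipping every branch computes "some branch rejects" = not all(branches),
# whereas complementing needs "every branch rejects" = not any(branches).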
Hope this helps!
The complement of a context-free language is not always context-free, so you cannot simply swap the final and non-final states of an NPDA and assume the result accepts the complement of the language. Could someone give an example where this goes wrong?
And why does the procedure described above work for regular languages, given a DFA? Maybe because DFA and NFA are equivalent, while DPDA and NPDA are not?
Well, swapping the final vs non-final states of an NFA doesn't even guarantee you'll get the complement of the language. Consider this rather curious NFA:
-----> q0 --a--> [q1]
        |
        a
        |
        V
        q2
This NFA accepts the language {a}. After swapping the final and non-final states, the accepted language becomes {ε, a}. These languages are not complementary, since both contain the string a.
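You can check this with a tiny NFA simulator (a sketch; the state names are the ones from the diagram):

delta = {('q0', 'a'): {'q1', 'q2'}}      # transitions of the NFA above

def accepts(word, finals):
    states = {'q0'}
    for ch in word:
        states = set().union(*(delta.get((s, ch), set()) for s in states))
    return bool(states & finals)

print(accepts('a', {'q1'}))        # True: the original NFA accepts "a"
print(accepts('a', {'q0', 'q2'}))  # True: the swapped NFA also accepts "a"
print(accepts('', {'q0', 'q2'}))   # True: the swapped NFA accepts the empty word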
In exactly the same way, swapping the states of an NPDA is not guaranteed to work either. The difference, as you point out, is that for any NFA there is some equivalent DFA (indeed, there are lots), and toggling the finality of states does work for a DFA, so the regular languages are guaranteed to be closed under complementation.
For NPDAs, though, we do not necessarily have equivalent DPDAs (for which swapping finality would work fine). Thus, it is possible that the complements of some languages accepted only by NPDAs are not context-free.
Indeed, the context-free language {a^i b^j c^k | i != j or j != k} is accepted only by NPDAs, and its complement (the strings not of the form a^i b^j c^k, together with the strings a^n b^n c^n) is not context-free.
A nondeterministic grammar is one that does not specify a unique move for at least one element of Sigma. From some state, on a given input, we cannot determine which step the machine will take next, because more than one move may be possible; a grammar that generates this kind of situation is called a nondeterministic grammar.
[image] https://i.stack.imgur.com/U6vaJ.jpg
For a particular input, the machine may follow different computation paths, and so give different outputs, on different executions.
Problems naturally solved this way are not known to be solvable deterministically in polynomial time.
The next step of the execution cannot be determined in advance, because the algorithm may have more than one path it can take.
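As a minimal illustration (my own), compare the transition tables of a deterministic and a nondeterministic automaton in Python:

# Deterministic: exactly one next state for each (state, symbol) pair.
dfa_delta = {('q0', 'a'): 'q1'}

# Nondeterministic: a (state, symbol) pair may map to several next states,
# so the next step of the execution is not uniquely determined.
nfa_delta = {('q0', 'a'): {'q1', 'q2'}}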
I'm having a bit of trouble understanding how one would verify if there is no solution to a given instance of the subset sum problem in polynomial time.
Of course, you could easily verify the positive case: simply provide the list of integers that add up to the target sum and check that they all come from the original set (O(n)).
How do you verify that the answer "false" is the correct one in polynomial time?
It’s actually not known how to do this - and indeed it’s conjectured that it’s not possible to do so!
The class NP consists of problems where “yes” instances can be verified in polynomial time. The subset sum problem is a canonical example of a problem in NP. Importantly, notice that the definition of NP says nothing about what happens if the answer is “no” - maybe it’ll be easy to show this, or maybe there isn’t an efficient algorithm for doing so.
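For concreteness, a polynomial-time verifier for "yes" certificates of subset sum might look like this (a sketch; the function name and the multiset check are my own):

from collections import Counter

def verify_subset_sum(numbers, target, certificate):
    # The certificate for a "yes" instance is the chosen sublist itself.
    if sum(certificate) != target:
        return False
    available = Counter(numbers)
    # Every chosen value must occur at least as often in the input.
    return all(available[v] >= c for v, c in Counter(certificate).items())

print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [4, 5]))  # True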
A counterpart to NP is the class co-NP, which consists of problems where “no” instances can be verified in polynomial time. A canonical example of such a problem is the tautology problem - given a propositional logic formula, is it always true regardless of what values the variables are given? If the formula isn’t a tautology, it’s easy to verify this by having someone tell you how to assign the values to the variables such that the formula is false. But if the formula is always true, it’s unclear how you’d show this efficiently.
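For instance (a sketch, with the formula represented as a Python function of my own choosing), checking a "no" certificate for the tautology problem takes a single evaluation:

formula = lambda p, q: (p and q) or not p     # not a tautology

def verify_not_tautology(f, assignment):
    return f(*assignment) is False            # one evaluation: polynomial time

print(verify_not_tautology(formula, (True, False)))  # True: certificate accepted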
Just as the P = NP problem hasn’t been solved, the NP = co-NP problem is also open. We don’t know whether problems where “yes” answers have fast verification are the same as problems where “no” answers have fast verification.
What we do know is that if any NP-complete problem is in co-NP, then NP = co-NP. And since subset sum is NP-complete, there’s no known polynomial time algorithm to verify if the answer to a subset sum instance is “no.”
I am studying the quantum circuit realization of Shor's algorithm for factoring 15 into a product of primes, using the Python package Qiskit. See this website for details.
My question concerns the realization of the U-gate on that website, which is given in the form
from qiskit import QuantumCircuit

def c_amod15(a, power):
    """Controlled multiplication by a mod 15"""
    if a not in [2, 7, 8, 11, 13]:
        raise ValueError("'a' must be 2,7,8,11 or 13")
    U = QuantumCircuit(4)
    for iteration in range(power):
        if a in [2, 13]:
            U.swap(0, 1)
            U.swap(1, 2)
            U.swap(2, 3)
        if a in [7, 8]:
            U.swap(2, 3)
            U.swap(1, 2)
            U.swap(0, 1)
        if a == 11:
            U.swap(1, 3)
            U.swap(0, 2)
        if a in [7, 11, 13]:
            for q in range(4):
                U.x(q)
    U = U.to_gate()
    U.name = "%i^%i mod 15" % (a, power)
    c_U = U.control()
    return c_U
My question is why this U-gate is engineered in such a way by swapping qubits. How exactly does the value of 'a' affect the swapping scheme? What if I want to factor 33? How should I change the swapping scheme to factor 33?
The value of a enters in the phase-estimation part of Shor's algorithm, where the operation
|y> -> |ay mod N>
is applied. So a determines the arithmetic operation, and, depending on how you implement the modular multiplication, it influences the final circuit differently.
The Qiskit textbook implementation seems to only support special values of a, but the software package itself has the general code for all values of a: https://github.com/Qiskit/qiskit-terra/blob/main/qiskit/algorithms/factorizers/shor.py
That code uses the Fourier transform to do the multiplication, so a will influence the phase shifts applied after the Fourier transform. Qiskit's implementation is based on this paper where you can find more information.
To try and answer your question in the comments:
I guess my question is why swapping register bits can give us a gate realizing order finding algorithms.
One way to think of Shor's algorithm is it takes as input:
A circuit* U, and
a starting state |ψ⟩
Shor's algorithm tells us the period of that circuit, i.e. the number of times you need to repeat U to get back to your initial input. We then use a classical algorithm to map factoring to this problem by setting U|y⟩≡|ay mod N⟩ and |ψ⟩=|1⟩.
You can confirm through simulations that the circuit in the Qiskit Textbook has that property, although it doesn't give a method of generating that circuit (I imagine it was educated guessing similar to this answer but you'll need to read that paper in the other answer for a general method).
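For instance, since the textbook circuit consists only of SWAP and X gates, it just permutes the computational basis states, so you can even trace it classically. Here is a sketch of that check (it assumes the textbook's convention that qubit 0 is the most significant bit of the 4-bit register):

def apply_circuit(a, y):
    # b[i] is the value carried by qubit i; qubit 0 is the most significant bit.
    b = [(y >> (3 - i)) & 1 for i in range(4)]
    if a in [2, 13]:
        b = [b[1], b[2], b[3], b[0]]   # swap(0,1); swap(1,2); swap(2,3)
    if a in [7, 8]:
        b = [b[3], b[0], b[1], b[2]]   # swap(2,3); swap(1,2); swap(0,1)
    if a == 11:
        b = [b[2], b[3], b[0], b[1]]   # swap(1,3); swap(0,2)
    if a in [7, 11, 13]:
        b = [1 - x for x in b]         # X on every qubit
    return sum(x << (3 - i) for i, x in enumerate(b))

# The circuit really does map |y> to |a*y mod 15> on the residues 1..14,
# so repeating it has exactly the period of a modulo 15.
for a in [2, 7, 8, 11, 13]:
    for y in range(1, 15):
        assert apply_circuit(a, y) == (a * y) % 15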
If we already know the answer before running the algorithm, then we could just find any old circuit with the correct period and plug that in. E.g. a single swap gate acting on |1⟩ has period 2. Although this doesn't really count as "doing Shor's algorithm", it's often used to demonstrate that the algorithm works [1], [2].
*To make the algorithm efficient, the input is really "an efficient way to make circuits for U^(2^x)". Fortunately, we know how to do this for the circuits needed for factoring, but the Qiskit textbook just repeats U inefficiently for the sake of demonstration.
Why is the term 'reduce' used when B is at least as hard?
In the context of NP-completeness, we say that A is polynomial-time reducible to B, written A ≤ B, where A is a known hard problem and we try to show that it can be reduced to B, a problem of unknown hardness.
Suppose we prove this successfully; that means B is at least as hard as A. Then what exactly is 'reduced'? This does not seem to be in line with the meaning of 'reduce' when B is a problem that is harder and less general.
Reduce comes from the latin "reducere", composed of "re" (back) and "ducere" (to lead). In this context, it literally means "bring back, convert", since the problem of deciding if an element x is in A is converted to the problem of deciding if a suitably transformed input f(x) is in B.
Let me observe that the notion of reducibility is used in many different contexts apart from (NP) complexity. In particular, it originated in computability theory.
Hello, I am having difficulty understanding the topic of P, NP, and polynomial-time reduction.
I have tried searching the web and asking some of my friends, but I haven't gotten a good answer.
I wish to ask a general question about this topic:
Let A, B be languages (or sets of problems) in P.
Let C, D be languages in NP.
Which of the following are necessarily true? (There may be more than one.)
(1) there is a polynomial-time reduction from A to B
(2) there is a polynomial-time reduction from A to C
(3) there is a polynomial-time reduction from C to A
(4) there is a polynomial-time reduction from C to D
thanks in advance for your answer.
(1) is true (with the exception of B={} and B={all words}), with the following polynomial reduction:
Let w_t be some word such that B(w_t)=true, and w_f be some word such that B(w_f)=false.
Given a word w: Run A(w). If A(w)=true, return w_t - otherwise, return w_f
The above is a polynomial reduction, because all operations are polynomial, and the output of the reduction satisfies B(f(w))=true if and only if A(w)=true.
(2) is true again, with the same reduction (again, assuming there exist words w_t and w_f for C as described).
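As a sketch in Python (decide_A, w_t and w_f being the assumed polynomial-time decider and the two fixed words):

def f(w, decide_A, w_t, w_f):
    # Decide w's membership in A in polynomial time, then output a fixed
    # word that B classifies the same way, so B(f(w)) = A(w).
    return w_t if decide_A(w) else w_f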
(3) This is wrong, assuming P!=NP.
Assume such a reduction exists, and let it be f: Sigma* -> Sigma*.
Take C = SAT, let A be some problem in P, and let M_A be a polynomial-time algorithm that solves A.
We will show that SAT can be solved in polynomial time (but since we assumed P != NP, that is impossible - a contradiction, so f does not exist).
Given an instance w of SAT, compute w' = f(w). Run M_A(w') and answer the same.
The above is clearly polynomial, and it always returns the correct answer - by the definition of a polynomial reduction, f(w) is in A if and only if w is in C.
Thus - the above algorithm solves SAT in polynomial time - contradiction.
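Written as code, the contradiction is just function composition (a sketch; f and decide_A stand for the assumed reduction and M_A):

def solve_sat(w, f, decide_A):
    return decide_A(f(w))   # polynomial-time f followed by polynomial-time M_A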
(4) is also wrong, since case (3) is a special case of it: P is contained in NP, so taking D = A turns (4) into exactly case (3).