It is easy to see that {} and {a,b}* are not P-complete: no other problem in P can be reduced to them, because {} accepts nothing and {a,b}* rejects nothing, so a reduction function has no string to map "yes" instances (respectively "no" instances) to.
But I'm stuck on proving that every other problem in P is P-complete.
You have to be careful when talking about P-completeness, because it means different things to different people depending on what type of reductions you allow. I'm going to assume that you're talking about polynomial-time reductions. In that case, choose any language L ∈ P other than ∅ or {a, b}*. Now pick any language M in P that you like. Here's a silly reduction from M to L:
Given an input string w, decide whether w ∈ M in polynomial time (this is possible because M ∈ P).
If w ∈ M, output any fixed string x ∈ L that you'd like (at least one exists, because L is nonempty).
Otherwise, w ∉ M, so output any fixed string x ∉ L that you'd like (at least one exists, because L isn't {a, b}*).
This reduction takes polynomial time because each step takes polynomial time, so it's a polynomial-time reduction from an arbitrary P language to L. Therefore, L is P-complete with respect to polynomial-time reductions.
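To make this concrete, here is a short Python sketch of that reduction. The names decide_M, yes_instance and no_instance are my own placeholders: any polynomial-time decider for M, any fixed string in L, and any fixed string outside L will do.

# Sketch of the trivial polynomial-time reduction from M to L described above.
# decide_M(w): a polynomial-time decider for M (exists because M is in P).
# yes_instance: any fixed string known to lie in L (L is nonempty).
# no_instance: any fixed string known to lie outside L (L is not {a, b}*).
def reduce_M_to_L(w, decide_M, yes_instance, no_instance):
    if decide_M(w):          # runs in polynomial time
        return yes_instance  # members of M map to a member of L
    return no_instance       # non-members of M map to a non-member of L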
Generally speaking, when you talk about notions of completeness, you have to make sure that your reductions are given fewer computational resources than the class of solvers that you're using, or you can do weird things like what's described here that make reductions essentially useless.
I have been asked to prove whether the following set is decidable, semi-decidable or not semi-decidable:

L = { p | there exists a natural y such that the Turing machine encoded by y, on input p, returns p }

In other words, it is the set of inputs p for which there exists a Turing machine, encoded by the natural number y, that on input p returns its input.
Consider the set K of naturals x such that the Turing machine encoded by x halts on input x. This is known to be undecidable.
I think that what I need is a reduction from K to L, but I don't know how to prove whether L is decidable, semi-decidable or not semi-decidable.
L may not look decidable at first glance, because of the nasty unbounded quantifier, which seems to require a possibly infinite search for a y satisfying the condition for a given p.
However, the answer is much simpler: there is a Turing machine M which always returns its input, i.e. M(p) = p holds for every input p. Let y be a code of M. Then this same y witnesses the condition for every p, showing that L contains all words. Hence L is of course decidable.
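To spell this out, here is a tiny Python sketch (the function names are my own, purely illustrative): take y to be the code of an identity machine, and the decider for L then simply accepts every input.

# Purely illustrative sketch with my own names.
def identity_machine(p):
    return p          # a machine M with M(p) = p for every input p

def decide_L(p):
    # The code of identity_machine serves as the witness y for every p,
    # so L contains every word and the decider just accepts.
    return True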
In fact, this is an example demonstrating the principle of extensionality (if two sets have the same elements and one is decidable, then the other is decidable too, even if it doesn't look that way).
How is it that the language of Turing Machines which accept nothing is not Recursively Enumerable?
We will use an indirect argument to show that the language of encodings of Turing Machines that accept nothing cannot be recursively enumerable.
Lemma 1: If L and its complement are recursively enumerable, then L is recursive.
Proof: let M be a TM that enumerates L and M' be a TM that enumerates the complement of L. Given any string s, we can decide whether s is in L as follows. Begin running M and M', interleaving their executions so that each one eventually gets an arbitrary amount of runtime. If s is in L, M will eventually list it, at which point we know s is in L and we halt-accept. If s is not in L, M' will eventually list it, at which point we know s is not in L and we halt-reject. Thus, for any s, we can halt-accept if s is in L or halt-reject otherwise. Therefore, L and its complement are recursive.
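As a concrete illustration of this dovetailing, here is a Python sketch; enum_L and enum_L_complement are assumed to be generators that enumerate L and its complement (these names are mine, not part of the lemma).

from itertools import chain, repeat

# Decide membership in L by interleaving an enumerator for L with an
# enumerator for its complement, one output of each per round.
def decide(s, enum_L, enum_L_complement):
    gen_in = chain(enum_L(), repeat(None))          # pad a finite enumeration forever
    gen_out = chain(enum_L_complement(), repeat(None))
    while True:
        if next(gen_in) == s:    # s was listed by the enumerator for L
            return True          # halt-accept
        if next(gen_out) == s:   # s was listed by the enumerator for the complement
            return False         # halt-reject

Since every string lies in exactly one of the two languages, one of the enumerators must eventually list s, so the loop always terminates.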
Lemma 2: The language of encodings of Turing Machines that accept something is recursively enumerable.
Proof: The set of all Turing Machine encodings is countable, and so is the set of all possible tape inputs. Thus, the set of pairs (M, s) of machines and inputs is countable. We may therefore assume some ordering of these pairs p1, p2, ..., pk, ... For each pair pk = (M, s), begin executing machine M on input s, interleaving the executions of p1, p2, ..., pk, ... so that each eventually gets an arbitrary amount of runtime. If the machine of some pk enters the halt-accept state, we may immediately list M as a TM that accepts something (namely, the corresponding s), and we can even terminate all other running instances that check the same M (and forgo starting any new ones). Any machine M that accepts some input will eventually be started and will eventually halt-accept on some input, so all such machines are eventually enumerated.
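Here is a Python sketch of this dovetailing; all names are mine: machines(i) and inputs(j) stand for fixed enumerations of the TM encodings and the tape inputs, and accepts_within(M, s, k) stands for simulating M on s for k steps and reporting whether it halt-accepted.

# Enumerate the encodings of TMs that accept at least one input.
def enumerate_accepting_machines(machines, inputs, accepts_within):
    printed = set()
    k = 1
    while True:                      # stage k of the dovetailing
        for i in range(k):
            for j in range(k):
                M, s = machines(i), inputs(j)
                if M not in printed and accepts_within(M, s, k):
                    printed.add(M)   # M halt-accepts s, so list M exactly once
                    yield M
        k += 1

Any machine that halt-accepts some input s within t steps is caught at every sufficiently large stage k (once k exceeds its index, the index of s, and t), so it is eventually listed.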
Lemma 3: The language of encodings of Turing Machines that accept nothing is not recursive.
Proof: This is a direct result of Rice's Theorem. The property "accepts nothing" is a nontrivial semantic property of the recognized language itself: it is true for some, but not all, recursively enumerable languages; therefore, no TM can decide whether another TM accepts a language with the property or not.
Theorem: The language of encodings of Turing Machines that accept nothing is not recursively enumerable.
Proof: Assume this language is recursively enumerable. We have already proven in Lemma 2 that its complement is recursively enumerable. By Lemma 1, then, both languages are recursive. However, Lemma 3 proves that the language is not recursive. This is a contradiction. The only assumption was that the language is recursively enumerable, so that assumption must have been false: so the language is not recursively enumerable.
More specifically, why is there a TM that accepts and halts for the complement of any language in P?
I understand that there is a TM that decides a language L from P, but why must there be a TM that accepts the complement of L?
Simple solution: Let L be the original language, with a Turing Machine M that decides L (halting on every input, since L ∈ P). To compute the complement of L, create a new machine M' that is the same as M, except that we redirect all transitions into the accept state of M to a reject state, and all transitions into a reject state (or any missing "malformed" transition) to the accept state.
The running time for M' is the same as the running time for M. It will accept/reject exactly when M rejects/accepts.
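At the level of functions rather than transition tables, the same idea looks like this (a minimal sketch, with decide_L as a stand-in for the decider M):

# Complementing a language in P by negating a decider's answer.
# decide_L is assumed to halt on every input in polynomial time, since L is in P.
def decide_L_complement(w, decide_L):
    return not decide_L(w)   # accept exactly when M would reject, and vice versa

This is only correct because decide_L halts on every input; for a machine that may loop forever, flipping the answer does not yield a recognizer of the complement.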
A commenter asked if I could provide intuition for why this does not work for NP vs co-NP. It helps here to start with the Cook-Levin definition of a language L being in NP, which allows a clear definition of a language L' being in co-NP. (Using the definition based on Non-deterministic Turing machines makes the definition of co-NP a bit harder)
In the Cook-Levin definition, a language L is in NP if we have a "verifying" Turing Machine V such that for every string S in L there is a polynomially-length-bounded certificate string C for which V accepts the pair (S, C), and for every string S not in L, V rejects (S, C) for every certificate C (think of V either as a two-tape input machine, or else as accepting the encoding of the pair of inputs). In addition, of course, we have the requirement that V complete the verification in polynomial time.
As an example, for the 3SAT language, the strings S would be 3SAT problem instances, and the certificate C would be a truth assignment to the variables. The verifier V would look at the truth assignment and check whether each clause of the 3SAT instance is satisfied by that assignment.
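For instance, a verifier of this kind might look like the following Python sketch (the encoding of formulas and certificates is my own choice, purely for illustration):

# Illustrative polynomial-time 3SAT verifier (encoding is my own choice).
# formula: list of clauses, each clause a list of up to 3 literals; literal +i
# means variable i appears positively, -i means it appears negated.
# certificate: dict mapping each variable index to a boolean truth value.
def verify_3sat(formula, certificate):
    for clause in formula:
        if not any(certificate[abs(lit)] == (lit > 0) for lit in clause):
            return False      # this clause is not satisfied by the assignment
    return True               # every clause is satisfied: accept (S, C)

# Example: the single clause (x1 OR x2 OR NOT x3) is satisfied by x2 = True.
print(verify_3sat([[1, 2, -3]], {1: False, 2: True, 3: True}))   # True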
So, put succinctly, a language L in NP is described by its verifying Turing machine V, and we say that:

S ∈ L  ⟺  there exists a polynomially-length-bounded certificate C such that V accepts (S, C).

So to describe the complement language L', we have:

S ∈ L'  ⟺  for every polynomially-length-bounded certificate C, V rejects (S, C).
If we wanted to 'try the same trick' for NP vs co-NP as we did for P vs co-P, the opportunity does not really present itself well. We either need to try this for a deterministic Turing machine that completely solves the language for every instance (and will probably not have a polynomial-time running bound), or we need to see if we can make it work by applying the trick to V. If we simply swap around the results for the verifying machine V, we still need to check every possible certificate C to see if a given string S is truly not accepted by V.
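As a tiny illustration of that point (my own sketch, not part of the original argument):

# Flipping a verifier's answer does not give a verifier for the complement.
def flipped(verifier, S, C):
    return not verifier(S, C)   # accepts whenever this particular C fails to certify S

# S belongs to the complement of L only if *every* certificate C fails, so one
# accepting run of flipped proves nothing; being sure would mean checking
# exponentially many certificates, which is exactly what we cannot afford.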
I need help understanding this proof.
"First we show that if we have an enumerator E that enumerates a
language A, a TM M recognizes A. The TM M works in the following way.
PROOF M = "On input w:
1. Run E. Every time that E outputs a string, compare it with w.
If w ever appears in the output of E, accept."
Clearly, M accepts those strings that appear on E's list. "
If w doesn't appear in the output of E, it doesn't appear in E's list.
What is he trying to say?
You have to prove both directions, since the statement is an "if and only if".
First, you should show that if there exists an enumerator E which enumerates all strings in the language L, then we can construct a recognizer for this language L.
This recognizer takes an input w (a string) and runs E inside. E is an enumerator which generates all strings in L one by one. If the input string is equal to one of these generated strings, then ACCEPT. If w is not in L and E never stops, the recognizer never halts, which is not a problem for a recognizer, since it is not a decider. A sketch of this construction is given below.
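Here is a small Python sketch of that recognizer (enumerate_L is my own stand-in for the enumerator E):

# Recognizer built from an enumerator.  If w is not in L and E runs forever,
# this loops forever, which is fine: a recognizer need not halt outside L.
def recognize(w, enumerate_L):
    for s in enumerate_L():   # E generates the strings of L one by one
        if s == w:
            return True       # ACCEPT as soon as w appears in E's output
    return False              # reached only if E produces finitely many strings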
The second direction: if L is Turing-recognizable, then there must be a Turing Machine M that recognizes L. An enumerator can be constructed as follows:

for k = 1, 2, 3, ...
    run M for k steps on each of w1, w2, ..., wk in parallel
    if M accepts some wi within those k steps, print wi

The reason why we run them in parallel with a step limit is the same reason why we prefer depth-limited search over depth-first search: without the limit, the simulation could get stuck forever on a single non-halting computation, just as depth-first search can go down an infinite path in the search graph. A Python sketch of this construction follows.
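Here is that sketch in Python; all names are mine: strings(j) stands for the j-th string in a fixed ordering w1, w2, w3, ..., and accepts_within(w, k) stands for simulating the recognizer M on w for k steps and reporting whether it halt-accepted within that budget.

# Enumerator built from a recognizer M, by dovetailing with a step limit.
def enumerate_from_recognizer(strings, accepts_within):
    printed = set()
    k = 1
    while True:                  # stage k
        for j in range(k):       # only the first k strings are tried at stage k
            w = strings(j)
            if w not in printed and accepts_within(w, k):
                printed.add(w)
                yield w          # "print wi on the printer"
        k += 1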
Your theorem has two directions: the "if" and the "only if". The quoted proof is for the "if" direction.
Assuming you have an enumerator E for a language L, can you construct a Turing machine M that recognizes L? Yes, you can. Just define a Turing machine M that, on input string w, runs E and checks whether w ever appears in its output (which may be infinite). If it does, accept; if E halts without ever listing w, reject.
Since E is an enumerator for L, for any w in L, E eventually outputs w (before halting, if it ever halts). Thus, M halts and accepts every string in L. If w is not in L, either M never halts, or M rejects w.
Also, for M to be a decider, not just a recognizer for L, M must always halt.
I've been confused for hours and I cannot figure out how to prove
forall n:nat, ~n<n
in Coq. I really need your help. Any suggestions?
This lemma is in the standard library:
Require Import Arith.
Lemma not_lt_refl : forall n:nat, ~n<n.
Print Hint.
Amongst the results is lt_irrefl. A more direct way of realizing that is
info auto with arith.
which proves the goal and shows how:
intro n; simple apply lt_irrefl.
Since you know where to find a proof, I'll just give a hint on how to do it from first principles (which I suppose is the point of your homework).
First, you need to prove a negation. This pretty much means you push n<n as a hypothesis and prove that you can deduce a contradiction. Then, to reason on n<n, expand it to its definition.
intros n H.
red in H. (* or `unfold lt in H` *)
Now you need to prove that S n <= n cannot happen. To do this from first principles, you have two choices at that point: you can try to induct on n, or you can try to induct on <=. The <= predicate is defined by induction, and often in these cases you need to induct on it — that is, to reason by induction on the proof of your hypothesis. Here, though, you'll ultimately need to reason on n, to show that n cannot be an mth successor of S n, and you can start inducting on n straight away.
After induction n, you need to prove the base case: you have the hypothesis 1 <= 0, and you need to prove that this is impossible (the goal is False). Usually, to break down an inductive hypothesis into cases, you use the inversion tactic or one of its variants. This tactic constructs a fairly complex dependent case analysis on the hypothesis. One way to see what's going on is to call simple inversion, which leaves you with two subgoals: either the proof of the hypothesis 1 <= 0 uses the le_n constructor, which requires that 1 = 0, or that proof uses the le_S constructor, which requires that S m = 0. In both cases, the requirement is clearly contradictory with the definition of S, so the tactic discriminate proves the subgoal. Instead of simple inversion H, you can use inversion H, which in this particular case directly proves the goal (the impossible hypothesis case is very common, and it's baked into the full-fledged inversion tactic).
Now, we turn to the induction case, where we quickly come to the point where we would like to prove S n <= n from S (S n) <= S n. I recommend that you state this as a separate lemma (to be proved first), which can be generalized: forall n m, S n <= S m -> n <= m.
Require Import Arith.
Lemma not_lt_refl : forall n : nat, ~ n < n.
Proof. auto with arith. Qed.