What approach can I take to solve these questions:
Prove or disprove the following statements. The universe of discourse is N = {1,2,3,4,...}.
(a) ∀x∃y,y = x·x
(b) ∀y∃x,y = x·x
(c) ∃y∀x,y = x·x.
The best way to solve such problems is first to think about them until you're confident about whether each statement is true or false.
If a statement is false, then all you have to do to disprove it is provide a counterexample. For instance, for (b), I can think of the counterexample y = 2: there is no number x in N for which x·x = 2. Thus there is a counterexample, and the statement is false.
If the statement appears to be true, it may be necessary to use some axioms or tautologies to prove the statement. For instance, it is known that two natural numbers multiplied together will always produce another natural number.
Hopefully this is enough of an approach to get you going.
To prove something exists, find one example for which it is true.
To prove ∀x F(x), take an arbitrary constant a and prove F(a) is true.
Counterexamples can be used to disprove ∀ statements, but not ∃ statements. To disprove ∃x F(x), prove that ∀x !F(x). So, take an arbitrary constant a and show that F(a) is false.
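To make this concrete, here is one possible sketch for (a) and (c) along exactly these lines. For (a): let x be an arbitrary element of N and choose the witness y = x·x; since N is closed under multiplication, y is in N, and y = x·x holds by construction, so (a) is true. For (c): the statement claims there is a single y equal to x·x for every x, so to disprove it, show ∀y∃x, y ≠ x·x. Given any y, the squares 1·1 = 1 and 2·2 = 4 are different, so y cannot equal both; pick x = 1 if y ≠ 1 and x = 2 otherwise, and then y ≠ x·x. Hence (c) is false.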
In our class the following exercise/example was given:
Compute n_0 and c from the formal definition of each Landau symbol to show that:
2^100·n belongs to O(n^2).
Then in the Solution the following was done:
n_0=2^100 and c=1.
Show for each n > n_0: 2^100·n <= n^2.
It is true that n_0^2 = 2^100·n_0, and for all n > 2^100: n^2 - 2^100·n > n^2 - n·n = n^2 - n^2 = 0.
I have some questions:
We are looking for n_0 and c, but somehow we just give values to them? And why those values in particular? Why can't n_0 = 2 and c = 34? Is there a logic behind all of this?
In the last part, I don't see how that expression proves anything, it looks redundant
If you read the definition of big-O notation, it says precisely this: if you can find one such pair n0 and c for which the inequality holds for all numbers greater than n0, then the big-O relation holds.
Of course you can choose another n0, for example a bigger one. As long as you find one, you have proven the relation.
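For example, here is a quick sketch with a different pair of constants: take c = 2^100 and n_0 = 1. Then for every n >= 1 we have 2^100·n <= 2^100·n^2 = c·n^2, so the definition is satisfied with these values as well. What you cannot do is pick the constants independently of the inequality: n_0 = 2 and c = 34 would require 2^100·n <= 34·n^2 for all n > 2, which already fails at n = 3.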
For better help with such questions, though, I recommend Math.StackExchange or CS.StackExchange.
I am supposed to prove that 92675*2^n = O(2^n), using the mathematical definition of O(f(n)). I came up with the following answer, but I'm not sure if this is the right way to approach it.
Answer: Since 92875 is a constant, we can replace it with K and F(n)=K+2n therefore O(f(n)=O(K+2n) and since K is a constant it can be taken away from the formula and we are therefore left with O(f(n)=O(2n)
Can someone please confirm if this is right or not?
Thanks in advance
Edit: Just realized that I wrote + instead of * and forgot a couple of ^ signs
Answer: Since 92675 is a constant, we can replace it with K and f(n) = K*2^n, therefore O(f(n)) = O(K*2^n), and since K is a constant it can be taken away from the formula, so we are left with O(f(n)) = O(2^n).
You are supposed to prove exactly that proposition (O(f(n))=O(K*2^n)). You can't use it to prove itself.
The definition of "f(x) is O(g(x))" is that, for some constant real numbers k and x_0, |f(x)| <= |k*g(x)| for all x >= x_0.
That's why, if f(x) = k*g(x), we can say that f(x) is O(g(x)) (|k*g(x)| <= |k*g(x)| for any x). In particular, it is also true for g(x) = 2^x and k = 92675.
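Concretely, one possible instantiation: take g(x) = 2^x, k = 92675 and x_0 = 1. Then for every x >= 1, |92675*2^x| <= |k*2^x| holds trivially (the two sides are equal), so 92675*2^x is O(2^x) directly from the definition.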
I have been trying to acclimate to Prolog and Horn clauses, but the transition from formal logic still feels awkward and forced. I understand there are advantages to having everything in a standard form, but:
What is the best way to define the material conditional operator --> in Prolog, where A --> B succeeds when either A = true and B = true, OR A = false? That is, an if->then statement that doesn't fail when the if is false, without an else.
Also, what exactly are the non-obvious advantages of Horn clauses?
What is the best way to define the material conditional operator --> in Prolog
When A and B are just variables to be bound to the atoms true and false, this is easy:
cond(false, _).
cond(_, true).
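For example, these two clauses give the expected truth table; sample queries:
?- cond(true, true).    % succeeds
?- cond(true, false).   % fails
?- cond(false, true).   % succeeds
?- cond(false, false).  % succeeds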
But in general, there is no best way because Prolog doesn't offer proper negation, only negation as failure, which is non-monotonic. The closest you can come with actual propositions A and B is often
(\+ A ; B)
which succeeds if A cannot be proven (which does not mean that A is false, due to the closed-world assumption), or failing that, if B can be proven.
Negation, however, should be used with care in Prolog.
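As a small hedged sketch of how that looks in practice (implies/2 and the bird/flies facts are made-up names for illustration; call/1 lets A and B be arbitrary goals):
% Material-conditional-like wrapper using negation as failure.
implies(A, B) :- ( \+ call(A) ; call(B) ).

bird(tweety).
bird(pingu).
flies(tweety).

% ?- implies(flies(tweety), bird(tweety)).  succeeds: both goals are provable
% ?- implies(flies(pingu), bird(pingu)).    succeeds: flies(pingu) is merely unprovable, not proven false
% ?- implies(flies(tweety), bird(opus)).    fails: the antecedent holds but bird(opus) cannot be proven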
Also, what exactly are the non-obvious advantages of Horn clauses?
That they have a straightforward procedural reading. Prolog is a programming language, not a theorem prover. It's possible to write programs that have a clear logical meaning, but they're still programs.
To see the difference, consider the classical problem of sorting. If L is a list of numbers without duplicates, then
sort(L, S) :-
permutation(L, S),
sorted(S).
sorted([]).
sorted([_]).
sorted([X,Y|L]) :-
X < Y,
sorted([Y|L]).
is a logical specification of what it means for S to contain the elements of L in sorted order. However, it also has a procedural meaning, which is: try all the permutations of L until you find one that is sorted. This procedure, in the worst case, runs through all n! permutations, even though sorting can be done in O(n lg n) time, making it a very poor sorting program.
I've been confused for hours and I cannot figure out how to prove
forall n:nat, ~n<n
in Coq. I really need your help. Any suggestions?
This lemma is in the standard library:
Require Import Arith.
Lemma not_lt_refl : forall n:nat, ~n<n.
Print Hint.
Amongst the results is lt_irrefl. A more direct way of realizing that is
info auto with arith.
which proves the goal and shows how:
intro n; simple apply lt_irrefl.
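Put together, a minimal complete script along these lines (a sketch; lt_irrefl is the library lemma found above):
Require Import Arith.

Lemma not_lt_refl : forall n : nat, ~ n < n.
Proof.
  intro n. apply lt_irrefl.
Qed.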
Since you know where to find a proof, I'll just give a hint on how to do it from first principles (which I suppose is the point of your homework).
First, you need to prove a negation. This pretty much means you push n<n as a hypothesis and prove that you can deduce a contradiction. Then, to reason on n<n, expand it to its definition.
intros n H.
red in H. (* or `unfold lt in H` *)
Now you need to prove that S n <= n cannot happen. To do this from first principles, you have two choices at that point: you can try to induct on n, or you can try to induct on <=. The <= predicate is defined by induction, and often in these cases you need to induct on it — that is, to reason by induction on the proof of your hypothesis. Here, though, you'll ultimately need to reason on n, to show that n cannot be an mth successor of S n, and you can start inducting on n straight away.
After induction n, you need to prove the base case: you have the hypothesis 1 <= 0, and you need to prove that this is impossible (the goal is False). Usually, to break down an inductive hypothesis into cases, you use the inversion tactic or one of its variants. This tactic constructs a fairly complex dependent case analysis on the hypothesis. One way to see what's going on is to call simple inversion, which leaves you with two subgoals: either the proof of the hypothesis 1 <= 0 uses the le_n constructor, which requires that 1 = 0, or that proof uses the le_S constructor, which requires that S m = 0. In both cases, the requirement is clearly contradictory with the definition of S, so the tactic discriminate proves the subgoal. Instead of simple inversion H, you can use inversion H, which in this particular case directly proves the goal (the impossible-hypothesis case is very common, and it's baked into the full-fledged inversion tactic).
Now, we turn to the induction case, where we quickly come to the point where we would like to prove S n <= n from S (S n) <= S n. I recommend that you state this as a separate lemma (to be proved first), which can be generalized: forall n m, S n <= S m -> n <= m.
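For reference, here is one way the finished script can look. This is a hedged sketch: the auxiliary lemma suggested above is taken from the standard library as le_S_n instead of being re-proved, and the lemma name is made up.
Require Import Arith.

Lemma lt_irrefl_from_scratch : forall n : nat, ~ n < n.
Proof.
  intros n H.
  red in H.                          (* H : S n <= n *)
  revert H.
  induction n as [| n IH]; intro H.
  - inversion H.                     (* no constructor of <= can build 1 <= 0 *)
  - apply IH.                        (* goal becomes S n <= n *)
    apply le_S_n.                    (* le_S_n : S p <= S q -> p <= q *)
    exact H.
Qed.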
Require Import Arith.
auto with arith.
I want to use the destruct tactic to prove a statement by cases. I have read a couple of examples online, but I'm still confused. Could someone explain it better?
Here is a small example (there are other ways to solve it but try using destruct):
Inductive three := zero
| one
| two.
Lemma has2b2: forall a:three, a<>zero /\ a<>one -> a=two.
Now some examples online suggest doing the following:
intros. destruct a.
In which case I get:
3 subgoals
H : zero <> zero /\ zero <> one
______________________________________(1/3)
zero = two
______________________________________(2/3)
one = two
______________________________________(3/3)
two = two
So, I want to prove that the first two cases are impossible. But the machine lists them as subgoals and wants me to PROVE them... which is impossible.
Summary:
How exactly do I discard the impossible cases?
I have seen some examples using inversion but I don't understand the procedure.
Finally, what happens if my lemma depends on several inductive types and I still want to cover ALL cases?
How do you discard the impossible cases? Well, it's true that the conclusions of the first two obligations are impossible to prove on their own, but note that they have contradictory assumptions (zero <> zero and one <> one, respectively). So you will be able to prove those goals with tauto (there are also more primitive tactics that will do the trick, if you are interested).
inversion is a more advanced version of destruct. In addition to 'destructing' the inductive, it will sometimes generate some equalities (that you may need). It is in turn a simpler version of induction, which will additionally generate an induction hypothesis for you.
If you have several inductive types in your goal, you can destruct/invert them one by one.
More detailed walk-through:
Inductive three := zero | one | two.
Lemma test : forall a, a <> zero /\ a <> one -> a = two.
Proof.
intros a H.
destruct H. (* to get two parts of conjunction *)
destruct a. (* case analysis on 'a' *)
(* low-level proof *)
compute in H. (* to see through the '<>' notation *)
elimtype False. (* meaning: assumptions are contradictory, I can prove False from them *)
apply H.
reflexivity.
(* can as well be handled with more high-level tactics *)
firstorder.
(* the "proper" case *)
reflexivity.
Qed.
If you see an impossible goal, there are two possibilities: either you made a mistake in your proof strategy (perhaps your lemma is wrong), or the hypotheses are contradictory.
If you think the hypotheses are contradictory, you can set the goal to False, to get a little complexity out of the way. elimtype False achieves this. Often, you prove False by proving a proposition P and its negation ~P; the tactic absurd P deduces any goal from P and ~P. If there's a particular hypothesis which is contradictory, contradict H will set the goal to ~H, or if the hypothesis is a negation ~A then the goal will be A (stronger than ~ ~A but usually more convenient). If one particular hypothesis is obviously contradictory, contradiction H or just contradiction will prove any goal.
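For instance, here is a minimal hedged sketch of that pattern (P, HP and HnP are made-up names; the arbitrary goal 1 = 2 just stands for "anything"):
Goal forall P : Prop, P -> ~ P -> 1 = 2.
Proof.
  intros P HP HnP.
  (* absurd P generates the subgoals ~ P and P; both are assumptions here.
     Plain contradiction should also close this goal, since P and ~ P coexist. *)
  absurd P; assumption.
Qed.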
There are many tactics involving hypotheses of inductive types. Figuring out which one to use is mostly a matter of experience. Here are the main ones (but you will run into cases not covered here soon):
destruct simply breaks down the hypothesis into several parts. It loses information about dependencies and recursion. A typical example is destruct H where H is a conjunction H : A /\ B, which splits H into two independent hypotheses of types A and B; or dually destruct H where H is a disjunction H : A \/ B, which splits the proof into two different subproofs, one with the hypothesis A and one with the hypothesis B.
case_eq is similar to destruct, but retains the connection between the destructed term and the other hypotheses. For example, destruct n where n : nat breaks the proof into two subproofs, one for n = 0 and one for n = S m. If n is used in other hypotheses (e.g. you have a hypothesis H : P n), you may need to remember that the n you've destructed is the same n used in those hypotheses: case_eq n does this.
inversion performs a case analysis on the type of a hypothesis. It is particularly useful when there are dependencies in the type of the hypothesis that destruct would forget (see the short sketch after this list). You would typically use case_eq on terms whose type is in Set (where equality is relevant) and inversion on hypotheses whose type is in Prop (which tend to have very dependent types). The inversion tactic leaves a lot of equalities behind, so it's often followed by subst to simplify the hypotheses. The inversion_clear tactic is a simpler alternative to inversion followed by subst, but it loses a little information.
induction means that you are going to prove the goal by induction (= recursion) on the given hypothesis. For example, induction n where n : nat means that you'll perform integer induction and prove the base case (n replaced by 0) and the inductive case (n replaced by m+1).
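As a tiny illustration of the destruct/inversion difference on an indexed Prop, here is a hedged sketch built around the impossible hypothesis 1 <= 0 discussed earlier:
Goal 1 <= 0 -> False.
Proof.
  intros H.
  (* destruct H would forget that the right-hand index is 0 and leave
     unprovable subgoals; inversion uses that index and closes the goal,
     since no constructor of <= can produce a proof of 1 <= 0. *)
  inversion H.
Qed.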
Your example is simple enough that you can prove it as “obvious by case analysis on a”.
Lemma has2b2: forall a:three, a<>zero /\ a<>one -> a=two.
Proof. destruct a; tauto. Qed.
But let's look at the cases generated by the destruct tactic, i.e. after just intros; destruct a.. (The case where a is one is symmetric; the last case, where a is two, is obvious by reflexivity.)
H : zero <> zero /\ zero <> one
============================
zero = two
The goal looks impossible. We can tell this to Coq, and here it can spot the contradiction automatically (zero=zero is obvious, and the rest is a first-order tautology handled by the tauto tactic).
elimtype False. tauto.
In fact, tauto works even if you don't start by telling Coq not to worry about the goal, i.e. if you write tauto without the elimtype False first (IIRC it didn't in older versions of Coq). You can see what Coq is doing with the tauto tactic by writing info tauto: Coq will tell you what proof script the tauto tactic generated. It's not very easy to follow, so let's look at a manual proof of this case. First, let's split the hypothesis (which is a conjunction) into two.
destruct H as [H0 H1].
We now have two hypotheses, one of which is zero <> zero. This is clearly false, because it's the negation of zero = zero which is clearly true.
contradiction H0. reflexivity.
We can look in even more detail at what the contradiction tactic does. (info contradiction would reveal what happens behind the scenes, but again it's not novice-friendly.) We claim that the goal is true because the hypotheses are contradictory, so we can prove anything. So let's set the intermediate goal to False.
assert (F : False).
Run red in H0. to see that zero <> zero is really notation for ~(zero=zero) which in turn is defined as meaning zero=zero -> False. So False is the conclusion of H0:
apply H0.
And now we need to prove that zero=zero, which is
reflexivity.
Now we've proved our assertion of False. What remains is to prove that False implies our goal. Well, False implies any goal; that's its definition (False is defined as an inductive type with no constructors).
destruct F.
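Putting the manual steps together, one possible complete script looks like this (a hedged sketch; exfalso plays the same role as elimtype False, and has2b2_manual is a made-up name to avoid clashing with the lemma above):
Lemma has2b2_manual : forall a : three, a <> zero /\ a <> one -> a = two.
Proof.
  intros a H.
  destruct H as [H0 H1].              (* split the conjunction *)
  destruct a.                         (* case analysis on a *)
  - exfalso. apply H0. reflexivity.   (* zero <> zero is contradictory *)
  - exfalso. apply H1. reflexivity.   (* one <> one is contradictory *)
  - reflexivity.                      (* the genuine case: two = two *)
Qed.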