Working on logic - Fitch system

Struggling with logic and the Fitch system,
I am trying, given (p ⇒ ¬q) and (¬q ∧ p ⇒ r) and p, to use the Fitch System in order to prove r.
Any ideas on how I should proceed?

1. p ⇒ ¬q          Premise
2. ¬q ∧ p ⇒ r      Premise
3. p                Premise
4. ¬q               Implication Elimination: 1, 3
5. ¬q ∧ p           And Introduction: 4, 3
6. r                Implication Elimination: 2, 5

You may also try other formal proof systems that are available as computer-implemented proof checkers. Using the structured proof language of Isabelle you can write your proof like this:
theory Scratch
  imports Main
begin

notepad
begin
  assume 1: "p ⟶ ¬ q"
    and 2: "¬ q ∧ p ⟶ r"
    and 3: p
  have "¬ q" using 1 and 3 ..
  then have "¬ q ∧ p" using 3 ..
  with 2 have r ..
end

end

The same proof can be carried out in Klement's Fitch-style natural deduction proof checker; an explanation of the rules is available in forallx.
The first three lines are the premises. Line 4 results from conditional elimination (→E), line 5 from conjunction introduction (∧I), and the final line from conditional elimination again.
References
Kevin Klement's JavaScript/PHP Fitch-style natural deduction proof editor and checker, http://proofs.openlogicproject.org/
P. D. Magnus and Tim Button, with additions by J. Robert Loftis, remixed and revised by Aaron Thomas-Bolduc and Richard Zach, forallx Calgary Remix: An Introduction to Formal Logic, Winter 2018. http://forallx.openlogicproject.org/


How can I subtract a multiset from a set with a given multiset?

So I'm trying to define a function apply_C :: "('a multiset ⇒ 'a option) ⇒ 'a multiset ⇒ 'a multiset"
It takes in a function C that may convert an 'a multiset into a single element of type 'a. Here we assume that the elements in the domain of C are pairwise disjoint and that none of them is the empty multiset (I already have another function that checks these things). apply_C also takes another multiset inp. What I'd like the function to do is check whether there is at least one element in the domain of C that is completely contained in inp. If this is the case, then perform the multiset difference inp - s, where s is that element of the domain of C, and add the element (the (C s)) to the resulting multiset. Afterwards, keep running the function until no element of the domain of C is completely contained in the given inp multiset.
What I tried was the following:
fun apply_C :: "('a multiset ⇒ 'a option) ⇒ 'a multiset ⇒ 'a multiset" where
"apply_C C inp = (if ∃s ∈ (domain C). s ⊆# inp then apply_C C (add_mset (the (C s)) (inp - s)) else inp)"
However, I get this error:
Variable "s" occurs on right hand side only:
⋀C inp s.
apply_C C inp =
(if ∃s∈domain C. s ⊆# inp
then apply_C C
(add_mset (the (C s)) (inp - s))
else inp)
I have been thinking about this problem for days now, and I haven't been able to find a way to implement this functionality in Isabelle. Could I please have some help?
After thinking more about it, I don't believe there is a simple solution for that in Isabelle.
Do you need that?
You have not said why you want that. Maybe you can reduce your requirements? Do you really need a function to calculate the result?
How to express the definition?
I would use an inductive predicate that expresses one step of rewriting and prove that the result is unique. Something along the lines of:
context
  fixes C :: ‹'a multiset ⇒ 'a option›
begin

inductive apply_CI where
  ‹apply_CI (M + M') (add_mset (the (C M)) M')›
  if ‹M ∈ dom C›

context
  assumes
    distinct: ‹⋀a b. a ∈ dom C ⟹ b ∈ dom C ⟹ a ≠ b ⟹ a ∩# b = {#}› and
    strictly_smaller: ‹⋀a b. a ∈ dom C ⟹ size a > 1›
begin

lemma apply_CI_determ:
  assumes
    ‹apply_CI⇧*⇧* M M⇩1› and
    ‹apply_CI⇧*⇧* M M⇩2› and
    ‹⋀M⇩3. ¬apply_CI M⇩1 M⇩3›
    ‹⋀M⇩3. ¬apply_CI M⇩2 M⇩3›
  shows ‹M⇩1 = M⇩2›
  sorry

lemma apply_CI_smaller:
  ‹apply_CI M M' ⟹ size M' ≤ size M›
  apply (induction rule: apply_CI.induct)
  subgoal for M M'
    using strictly_smaller[of M]
    by auto
  done

lemma wf_apply_CI:
  ‹wf {(x, y). apply_CI y x}›
  (*trivial but very annoying because not enough useful lemmas on wf*)
  sorry

end

end
I have no clue how to prove apply_CI_determ (no idea if the conditions I wrote down are sufficient or not), but I did spend quite some time thinking about it.
After that, you can define your function with:
definition apply_C where
‹apply_C M = (SOME M'. apply_CI⇧*⇧* M M' ∧ (∀M⇩3. ¬apply_CI M' M⇩3))›
and prove the desired properties of your definition.
How to execute it
I don't see how to write an executable function on multisets directly. The problem you face is that one step of apply_C is nondeterministic.
If you can use lists instead of multisets, you get an order on the elements for free, and you can use subseqs, which gives you all possible subsequences (hence all possible sub-multisets). Rewrite using the first element of subseqs that is in the domain of C, and iterate as long as any rewriting is possible (see the sketch at the end of this answer).
Link that to the inductive predicate to prove termination and that it calculates the right thing.
Remark that in general you cannot extract a list out of a multiset, but it is possible to do so in some cases (e.g., if you have a linorder over 'a).
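For illustration, here is a rough executable sketch of the list-based idea. It is only a sketch under extra assumptions: it works on lists rather than multisets, the name apply_C_list and the fuel argument n are made up for this sketch, and the fuel makes termination trivial instead of deriving it from the inductive predicate above. It simply rewrites with the first matching subsequence, which is exactly where the list order resolves the nondeterminism mentioned earlier.
(* Sketch only: lists instead of multisets, with a fuel argument n.
   subseqs inp enumerates all subsequences of inp, i.e. all sub-multisets;
   fold remove1 s inp removes the elements of s from inp once each, like inp - s. *)
fun apply_C_list :: "nat ⇒ ('a list ⇒ 'a option) ⇒ 'a list ⇒ 'a list" where
  "apply_C_list 0 C inp = inp"
| "apply_C_list (Suc n) C inp =
    (case filter (λs. s ≠ [] ∧ C s ≠ None) (subseqs inp) of
       [] ⇒ inp
     | s # _ ⇒ apply_C_list n C (the (C s) # fold remove1 s inp))"
This is exponential in the length of inp and only meant to make the rewriting strategy concrete; relating its result to the apply_CI predicate is still the part that requires actual proof work.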

Expanding all definitions in Isabelle lemma

How can I tell Isabelle to expand all my definitions? That way, the proof would be trivial. Unfortunately, no expansion or simplification happens by default, and I basically get back the original expression as the subgoal.
Example:
theory Test
imports Main
begin
definition b0 :: "nat⇒nat"
where "b0 n ≡ (n mod 2)"
definition b1 :: "nat⇒nat"
where "b1 n ≡ (n div 2)"
lemma "(a::nat)≤3 ∧ (b::nat)≤3 ⟶
2*(b1 a)+(b0 a)+2*(b1 b)+(b0 b) = a+b"
apply auto
oops
end
Response before oops:
proof (prove)
goal (1 subgoal):
1. a ≤ 3 ⟹
b ≤ 3 ⟹ 2 * b1 a + b0 a + 2 * b1 b + b0 b = a + b
My recommendation: unfolding
There is a special keyword unfolding for unpacking definitions at the start of proofs. For your example this would read:
unfolding b0_def b1_def by simp
I consider unfolding the most elegant way. It also helps while writing the proofs. Internally, this is (mostly?) equivalent to using the unfold-method:
apply (unfold b0_def b1_def) by simp
This will recursively (!) use the set of equalities you supply to rewrite the proof goal. (Due to the recursion, you should rather not supply a set of equalities that could generate cycles...)
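Putting the pieces together for the lemma from the question (this is just the two snippets above combined, nothing new):
lemma "(a::nat)≤3 ∧ (b::nat)≤3 ⟶
  2*(b1 a)+(b0 a)+2*(b1 b)+(b0 b) = a+b"
  unfolding b0_def b1_def by simp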
Alternative: Using the simplifier
In cases with possible loops, the simplifier might be able to reach a nice unfolding without running into these cycles, maybe by interleaving with other simplifications. In such cases, by (simp add: b0_def b1_def), as you've suggested, is great!
Alternative definition: Maybe it's just an abbreviation (and no definition)?
If you find yourself unfolding a definition in every single instance, you could consider using abbreviation instead of definition. Then some Isabelle magic will do the packing/unpacking for you without further hints. abbreviation only affects how the user communicates with Isabelle; it does not introduce new symbols at the object level, and consequently there would be no b1_def facts and the like.
abbreviation b0 :: "nat⇒nat"
where "b0 n ≡ (n mod 2)"
Usually not recommended: Building something like an abbreviation using the simplifier
If you (for whatever reason) want to have a defined name at the object level, but unfold it in almost every instance, you can also feed the defining equality directly into the simplifier.
definition b0 :: "nat⇒nat"
where [simp]: "b0 n ≡ (n mod 2)"
(Usually there should be little reason for the last option.)
Yes, I keep forgetting that definitions are not used in simplifications by default.
Adding the definitions explicitly to the simplification rules solves this problem:
lemma "(a::nat)≤3 ∧ (b::nat)≤3 ⟶
2*(b1 a)+(b0 a)+2*(b1 b)+(b0 b) = a+b"
by (simp add: b0_def b1_def)
This way the definitions (b0, b1) are correctly used.

Can you prove Excluded Middle is wrong in Coq if I do not import classical logic

I know the excluded middle does not hold in constructive logic. However, I am stuck when I try to show it in Coq.
Theorem em: forall P : Prop, ~P \/ P -> False.
My approach is:
intros P H.
unfold not in H.
intuition.
The system says following:
2 subgoals
P : Prop
H0 : P -> False
______________________________________(1/2)
False
______________________________________(2/2)
False
How should I proceed?
Thanks
What you are trying to construct is not the negation of LEM, which would say "there exists some P such that EM doesn't hold", but the claim that no proposition is decidable, which of course leads to a trivial inconsistency:
Axiom not_lem : forall (P : Prop), ~ (P \/ ~ P).
Goal False.
now apply (not_lem True); left.
No need to use the fancy double-negation lemma, as this is obviously inconsistent [imagine it held!].
The "classical" negation of LEM is indeed:
Axiom not_lem : exists (P : Prop), ~ (P \/ ~ P).
and it is not provable [otherwise EM wouldn't be admissible], but you can assume it safely; however it won't be of much utility for you.
One cannot refute the law of excluded middle (LEM) in Coq.
Let's suppose you proved your refutation of LEM. We model this kind of situation by postulating it as an axiom:
Axiom not_lem : forall (P : Prop), ~ (P \/ ~ P).
But then we also have a weaker version (double-negated) of LEM:
Lemma not_not_lem (P : Prop) :
~ ~ (P \/ ~ P).
Proof.
intros nlem. apply nlem.
right. intros p. apply nlem.
left. exact p.
Qed.
These two facts together would make Coq's logic inconsistent:
Lemma Coq_would_be_inconsistent :
False.
Proof.
apply (not_not_lem True).
apply not_lem.
Qed.
I'm coming from MathOverflow, but I don't have permission to comment on Anton Trunov's answer. I think his answer is unfair, or at least incomplete: it hides the following piece of "folklore":
Coq + Impredicative Set + Weak Excluded-middle -> False
This folklore is a variation of the following fact:
proof irrelevance + large elimination -> False
Coq + impredicative Set enjoys canonicity, soundness, and strong normalization, so it is consistent. (Coq with impredicative Set is how old versions of Coq were set up.) I think this at least shows that the defense of LEM based on the double-negation translation is not that convincing.
If you want more information about the solutions, you can find it here: https://github.com/FStarLang/FStar/issues/360
On the other hand, you may be interested in the story of how Coq-HoTT plus univalence (UA) goes against LEM∞...
=====================================================
OK, let's have some solutions.
Use the command-line flag -impredicative-set, or install an old version (< 8.0) of Coq. Then:
excluded middle -> proof irrelevance
proof irrelevance -> False
Or you can work with standard Coq + coq-hott:
install coq-hott
Univalence + global excluded middle (LEM∞) -> False
I do not recommend just running the code behind the links above without grasping the underlying concepts.
I have skipped a lot of metatheoretic detail, such as univalence being computable in Agda's cubical type theory but not in Coq-HoTT, or the consistency proofs for Coq + impredicative Set and for Coq-HoTT.
However, metatheoretical considerations are important. If we just want an anti-LEM model and don't care about the metatheory, then we can use "Boolean-valued forcing" in Coq to knock out the things that only LEM can give you, ending up with statements such as "every function on the reals is continuous", questions of Dedekind infiniteness, and so on...
But this answer ends here.

Intro rule for "∀r>0" in Isabelle

When I have a goal such as "∀x. P x" in Isabelle, I know that I can write
show "∀x. P x"
proof (rule allI)
However, when the goal is "∀x>0. P x", I cannot do that. Is there a similar rule/method that I can use after proof in order to simplify my goal? I would also be interested in one for the situation where you have a goal of the form "∃x>0. P x".
I'm looking for an Isar proof that uses the proof (rule something) style.
Universal quantifier
To expand on Lars's answer: ∀x>0. P x is just syntactic sugar for ∀x. x > 0 ⟶ P x. As a consequence, if you want to prove a statement like this, you first have to strip away the universal quantifier with allI and then strip away the implication with impI. You can do something like this:
lemma "∀x>0. P x"
proof (rule allI, rule impI)
Or using intro, which is more or less the same as applying rule until it is not possible anymore:
lemma "∀x>0. P x"
proof (intro allI impI)
Or you can use safe, which eagerly applies all introduction rules that are declared as ‘safe’, such as allI and impI:
lemma "∀x>0. P x"
proof safe
In any case, your new proof state is then
proof (state)
goal (1 subgoal):
1. ⋀x. 0 < x ⟹ P x
And you can proceed like this:
lemma "∀x>0. P (x :: nat)"
proof safe
fix x :: nat assume "x > 0"
show "P x"
Note that I added an annotation; I didn't know what type your P has, so I just used nat. When you fix a variable in Isar and the type is not clear from the assumptions, you will get a warning that a new free type variable was introduced, which is not what you want. When you get that warning, you should add a type annotation to the fix like I did above.
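For completeness, here is the same skeleton with a concrete property in place of the abstract P, so that the whole proof goes through (the property x + 1 > 1 is of course just a stand-in chosen for this illustration):
lemma "∀x>0. x + 1 > (1 :: nat)"
proof (intro allI impI)
  fix x :: nat assume "x > 0"
  then show "x + 1 > 1" by simp
qed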
Existential quantifier
For an existential quantifier, safe will not work because the intro rule exI is not always safe due to technical reasons. The typical proof pattern for an ∃x>0. P x would be something like:
lemma "∃x>0. P (x :: nat)"
proof -
have "42 > (0 :: nat)" by simp
moreover have "P 42" sorry
ultimately show ?thesis by blast
qed
Or a little more explicitly:
lemma "∃x>0. P (x :: nat)"
proof -
have "42 > 0 ∧ P 42" sorry
thus ?thesis by (rule exI)
qed
In cases when the existential witness (i.e. the 42 in this example) does not depend on any variables that you got out of an obtain command, you can also do it more directly:
lemma "∃x>0. P (x :: nat)"
proof (intro exI conjI)
This leaves you with the goals ?x > 0 and P ?x. Note that ?x is a schematic variable which you can instantiate with anything. So you can complete the proof like this:
lemma "∃x>0. P (x :: nat)"
proof (intro exI conjI)
show "42 > (0::nat)" by simp
show "P 42" sorry
qed
As I said, this does not work if your existential witness depends on some variable that you got from obtain due to technical restrictions. In that case, you have to fall back to the other solution I mentioned.
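Again for completeness, a fully concrete instance of the intro exI conjI pattern, with a made-up witness and property so that nothing is left sorry:
lemma "∃x>0. x + x = (84 :: nat)"
proof (intro exI conjI)
  show "42 > (0 :: nat)" by simp
  show "42 + 42 = (84 :: nat)" by simp
qed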
The following works in Isabelle2016-1-RC2:
lemma "∀ x>0. P x"
apply (rule allI)
In general, you can also just use apply rule, which will select the default introduction rule. The same is true for the existential quantifier.
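A minimal illustration for the universal case (with the abstract P the lemma cannot actually be finished, hence the oops):
lemma "∀x>0. P (x :: nat)"
  apply rule  (* selects allI *)
  apply rule  (* selects impI *)
  oops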

Godel, Escher, Bach Typographical Number Theory (TNT) puzzles and solutions

In chapter 8 of Gödel, Escher, Bach by Douglas Hofstadter, the reader is challenged to translate these two statements into TNT:
"b is a power of 2"
and
"b is a power of 10"
Are the following answers correct?
(Assuming '∃' to mean 'there exists a number'):
∃x:(x.x = b)
i.e. "there exists a number 'x' such that x multiplied x equals b"
If that is correct, then the next one is equally trivial:
∃x:(x.x.x.x.x.x.x.x.x.x = b)
I'm confused because the author indicates that they are tricky and that the second one should take hours to solve; I must have missed something obvious here, but I can't see it!
In general, I would say "b is a power of 2" is equivalent to "every divisor of b except 1 is a multiple of 2". That is:
∀x((∃y(y*x=b & ¬(x=S0))) → ∃z(SS0*z=x))
EDIT: This doesn't work for 10 (thanks for the comments): for example, 2 divides 10 but is not a multiple of 10, so the analogous formula would wrongly reject 10 itself. But at least it works for all primes. Sorry. I think you have to use some sort of sequence encoding after all. I suggest "Gödel's Incompleteness Theorems" by Raymond Smullyan if you want a detailed and more general approach to this.
Or you can encode sequences of numbers using the Chinese Remainder Theorem, and then encode recursive definitions, so that you can define exponentiation. In fact, that is basically how you can prove that Peano Arithmetic is Turing complete.
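To make the Chinese-Remainder idea concrete (this is standard number theory, not something spelled out in this thread): Gödel's β function, β(c,d,i) = c mod (1 + (i+1)·d), lets a single pair (c,d) encode any finite sequence a_0, …, a_n. Taking d = s! with s = max(n, a_0, …, a_n) makes the moduli 1 + (i+1)·d pairwise coprime and larger than the a_i, so the Chinese Remainder Theorem provides a c with β(c,d,i) = a_i for all i ≤ n. "b is a power of 10" then becomes: there exist c, d, n such that β(c,d,0) = 1, β(c,d,i+1) = 10·β(c,d,i) for all i < n, and β(c,d,n) = b; the mod and the bounded quantifiers are all expressible with plus, times and ordinary quantifiers.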
Try this:
D(x,y) = ∃a(a*x=y)
Prime(x) = ¬(x=1) & ∀y(D(y,x) → y=x | y=1)
a=b mod c = ∃k(a=c*k+b)
Then
∃y ∃k(
∀x(D(x,y)&Prime(x)→¬D(x*x,y)) &
∀x(D(x,y)&Prime(x)&∀z(Prime(z)&z<x→¬D(z,y))→(k=1 mod x)) &
∀x∀z(D(x,y)&Prime(x)&D(z,y)&Prime(z)&z<x&∀t(z<t<x→¬(Prime(t)&D(t,y)))→
∀a<x ∀c<z ((k=a mod x)&(k=c mod z)-> a=c*10))&
∀x(D(x,y)&Prime(x)&∀z(Prime(z)&z>x→¬D(z,y))→(b<x & (k=b mod x))))
should state "b is Power of 10", actually saying "there is a number y and a number k such that y is product of distinct primes, and the sequence encoded by k throug these primes begins with 1, has the property that the following element c of a is 10*a, and ends with b"
Your expressions are equivalent to the statements "b is a square number" and "b is the 10th power of a number" respectively. Converting "power of" statements into TNT is considerably trickier.
There's a solution to the "b is a power of 10" problem behind the spoiler button in skeptical scientist's post here. It depends on the Chinese Remainder Theorem from number theory, and the existence of arbitrarily long arithmetic sequences of primes. As Hofstadter indicated, it's not easy to come up with, even if you know the appropriate theorems.
In expressing "b is a power of 10", you actually do not need the Chinese Remainder Theorem and/nor coding of finite sequences. You can alternatively work as follows (we use the usual symbols as |, >, c-d, as shortcuts for formulas/terms with obvious meaning):
For a prime number p, let us denote EXP(p,a) some formula in TNT saying that "p is a prime and a is a power of p". We already know, how to build one. (For technical reasons, we do not consider S0 to be a power of p, so ~EXP(p,S0).)
If p is a prime, we define EXPp(c,a) ≖ 〈EXP(p,a) ∧ (c-1)|(a-1)〉. Here, the symbol | is a shortcut for "divides" which can be easily defined in TNT using one existencial quantifier and multiplication; the same holds for c-1 (a-1, resp.) which means "the d such that Sd=c" (Sd=a, resp.).
If EXP(p,c) holds (i.e. c is a power of p), the formula EXPp(c,a) says that "a is a power of c" since a ≡ 1 (mod c-1) then.
Having a property P of numbers (i.e. nonnegative integers), there is a way how to refer, in TNT, to the smallest number with this property: 〈P(a) ∧ ∀c:〈a>c → ~P(a)〉〉.
We can state the formula expressing "b is a power of 10" (for better readability, we omit the symbols 〈 and 〉, and we write 2 and 5 instead of SS0 and SSSSS0):
∃a:∃c:∃d: (EXP(2,a) ∧ EXP(5,c) ∧ EXP(5,d) ∧ d > b ∧ a⋅c=b ∧ ∀e:(e>5 ∧ e|c ∧ EXP_5(e,c) → ~EXP_5(e,d)) ∧ ∀e:("e is the smallest such that EXP_5(c,e) ∧ EXP_5(d,e)" → (d-2)|(e-a))).
Explanation: We write b = a⋅c = 2^x⋅5^y (x,y>0) and choose d = 5^z > b in such a way that z and y are coprime (e.g. z may be a prime). Then "the smallest e..." is equal to (5^z)^y = d^y ≡ 2^y (mod d-2), and (d-2)|(e-a) implies a = 2^x ≡ e ≡ 2^y (mod d-2) (we have d-2 > 2^y and d-2 > a, too), and so x = y.
Remark: This approach can easily be adapted to define "b is a power of n" for any number n with a fixed decomposition a_1⋅a_2⋅…⋅a_k, where each a_i is a power of a prime p_i and p_i = p_j → i = j.
how about:
∀x: ∀y: (SSx∙y = b → ∃z: z∙SS0 = SSx)
(in English: any factor of b that is ≥ 2 must itself be divisible by 2; literally: for all natural numbers x and y, if (2+x) * y = b then this implies that there's a natural number z such that z * 2 = (2+x). )
I'm not 100% sure that this is allowed in the syntax of TNT and propositional calculus, it's been a while since I've perused GEB.
(edit: this is for the b = 2^n problem at least; I can see why the 10^n one would be more difficult, as 10 is not prime. But 11^n would be the same thing, except replacing the one term "SS0" with "SSSSSSSSSSS0".)
Here's what I came up with:
∀c:∃d:<(c*d=b)→<(c=SO)v∃e:(d=e*SSO)>>
Which translates to:
For all numbers c, there exists a number d, such that if c times d equals b then either c is 1 or there exists a number e such that d equals e times 2.
Or
For all numbers c, there exists a number d, such that if c times d equals b, then either c is 1 or 2 is a factor of d
Or
If the product of two numbers is b then one of them is 1 or one of them is divisible by 2
Or
All divisors of b are either 1 or are divisible by 2
Or
b is a power of 2
For the open expression meaning that b is a power of 2, I have ∀a:~∃c:(S(Sa ∙ SS0) ∙ Sc) = b
This effectively says that for all a, S(Sa ∙ SS0) is not a factor of b. But in normal terms, S(Sa ∙ SS0) is 1 + ((a + 1) * 2) or 3 + 2a. We can now reword the statement as "no odd number that is at least 3 is a factor of b". This is true if and only if b is a power of 2.
I'm still working on the b is a power of 10 problem.
I think that most of the above have only shown that b must be a multiple of 4. How about this: ∃b:∀c:<<∀e:(c∙e) = b & ~∃c':∃c'':(ssc'∙ssc'') = c> → c = 2>
I don't think the formatting is perfect, but it reads:
There exists b, such that for all c, if c is a factor of b and c is prime, then c equal 2.
Here is what I came up with for the statement "b is a power of 2"
∃b: ∀a: ~∃c: ((a * ss0) + sss0) * c = b
I think this says "There exists a number b, such that for all numbers a, there does not exist a number c such that (a * 2) + 3 (in other words, an odd number greater than 2) multiplied by c, gives you b." So, if b exists, and can't be zero, and it has no odd divisors greater than 2, then wouldn't b necessarily be 1, 2, or another power of 2?
My solution for "b is a power of two" is:
∀x:<<∃y:(x.y=b) ∧ isprime(x)> → x=SS0>
(i.e. every prime factor of b equals 2). isprime() should not be hard to write.
