I have a lemma stating that addition commutes:
Lemma commute : forall x y, add x y = add y x.
Now in my goal, I am trying to prove that:
add (add x (S y)) z = add x (S (add y z))
I would like to use my lemma to rewrite the inner add on the left-hand side, turning
add x (S y) into add (S y) x.
However, the command rewrite commute instead rewrites the outer add:
add (add x (S y)) z to add z (add x (S y)).
Question: how to use commute for rewriting inner subexpressions?
You can specify which arguments you want to instantiate the lemma with:
rewrite commute with (x := x) (y := S y).
But it is even more common to apply it like a function:
rewrite (commute x (S y)).
If one of the specified arguments is obvious, you can avoid mentioning it in the first case, or put an underscore in the second, which would give here:
rewrite commute with (y := S y).
and
rewrite (commute _ (S y)).
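To see both forms in action, here is a minimal self-contained sketch, substituting Peano addition (+) and the standard library's Nat.add_comm for your add and commute:

Require Import Arith.

Lemma commute : forall x y, x + y = y + x.
Proof. intros; apply Nat.add_comm. Qed.

Goal forall x y z, x + S y + z = x + S (y + z).
Proof.
  intros x y z.
  (* instantiating the first argument pins the rewrite to the inner addition *)
  rewrite (commute x (S y)).
  (* the goal is now: S y + x + z = x + S (y + z) *)
Abort.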
I have started playing around with Cubical Agda. The last thing I tried was building the type of integers (assuming the type of naturals is already defined) in a way similar to how it is done in classical mathematics (see the construction of the integers on Wikipedia). This is
data dInt : Set where
  _⊝_ : ℕ → ℕ → dInt
  canc : ∀ a b c d → a + d ≡ b + c → a ⊝ b ≡ c ⊝ d
  trunc : isSet dInt
After doing that, I wanted to define addition:
_++_ : dInt → dInt → dInt
(x ⊝ z) ++ (u ⊝ v) = (x + u) ⊝ (z + v)
(x ⊝ z) ++ canc a b c d u i = canc (x + a) (z + b) (x + c) (z + d) {! !} i
...
I am now stuck on the hole between the braces ({! !}). A term of type x + a + (z + d) ≡ z + b + (x + c) is expected. Not wanting to prove this by hand, I wanted to use the ring solver made in Cubical Agda, but I could never manage to make it work, even when setting it up for simple ring equalities like x + x + x ≡ 3 * x.
How can I make it work? Is there a minimal example of using it for naturals? There is a file NatExamples.agda in the library, but it requires rewriting your equalities in a convoluted way.
You can see how the solver for natural numbers is supposed to be used in this file in the cubical library:
Cubical/Tactics/NatSolver/Examples.agda
Note that this solver is different from the solver for commutative rings, which is designed for proving equations in abstract rings and is explained here:
Cubical/Tactics/CommRingSolver/Examples.agda
However, if I read your problem correctly, the equality you want to prove requires the use of other propositional equalities in Nat. This is not supported by any solver in the cubical library (and as far as I know, the standard library doesn't support it either). But of course, you can use the solver for all the steps that don't use other equalities.
Just in case you didn't spot this: here is a math-style definition of the integers using the SetQuotients of the cubical library. SetQuotients let you avoid the work related to your third constructor, trunc: you basically just need to show that your constructions are well defined, as you would in 'normal' math.
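In case it helps, a minimal sketch of that approach (the names rel and ℤ' are mine; check the module names against your version of the cubical library):

open import Cubical.Foundations.Prelude
open import Cubical.Data.Nat
open import Cubical.Data.Sigma
open import Cubical.HITs.SetQuotients

-- the same relation as in canc: (a , b) ~ (c , d) iff a + d ≡ b + c
rel : ℕ × ℕ → ℕ × ℕ → Set
rel (a , b) (c , d) = a + d ≡ b + c

-- the integers as a set quotient; the truncation comes for free from squash/
ℤ' : Set
ℤ' = (ℕ × ℕ) / rel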
I've successfully used the ring solver for exactly the same problem: defining Int as a quotient of ℕ ⨯ ℕ. You can find the complete file here; the relevant parts are the following:
Non-cubical propositional equality to path equality:
open import Cubical.Core.Prelude renaming (_+_ to _+̂_)
open import Relation.Binary.PropositionalEquality renaming (refl to prefl; _≡_ to _=̂_) using ()
fromPropEq : ∀ {ℓ A} {x y : A} → _=̂_ {ℓ} {A} x y → x ≡ y
fromPropEq prefl = refl
An example of using the ring solver:
open import Function using (_$_)
import Data.Nat.Solver
open Data.Nat.Solver.+-*-Solver
using (prove; solve; _:=_; con; var; _:+_; _:*_; :-_; _:-_)
reorder : ∀ x y a b → (x +̂ a) +̂ (y +̂ b) ≡ (x +̂ y) +̂ (a +̂ b)
reorder x y a b = fromPropEq $ solve 4 (λ x y a b → (x :+ a) :+ (y :+ b) := (x :+ y) :+ (a :+ b)) prefl x y a b
So here, even though the ring solver gives us a proof of _=̂_, we can use _=̂_'s K and _≡_'s reflexivity to turn that into a path equality which can be used further downstream to e.g. prove that Int addition is representative-invariant.
I just learned about lambda calculus and I'm having issues trying to reduce
(λx. (λy. y x) (λz. x z)) (λy. y y)
to its normal form. I get to (λy. y (λy. y y)) (λz. (λy. y y) z) and then get kind of lost. I don't know where to go from here, or if it's even correct.
(λx. (λy. y x) (λz. x z)) (λy. y y)
As @ymonad notes, one of the y parameters needs to be renamed to avoid capture (conflating different variables that only coincidentally share the same name). Here I rename the latter instance (using α-equivalence):
(λx. (λy. y x) (λz. x z)) (λm. m m)
Next step is to β-reduce. In this expression we can do so in one of two places: we can either reduce the outermost application (λx) or the inner application (λy). I'm going to do the latter, mostly on arbitrary whim / because I thought ahead a little bit and think it will result in shorter intermediate expressions:
(λx. (λz. x z) x) (λm. m m)
Still more β-reduction to do. Again I'm going to choose the inner expression because I can see where this is going, but it doesn't actually matter in this case, I'll get the same final answer regardless:
(λx. x x) (λm. m m)
Side note: these two lambda expressions (which are also known as the "Mockingbird" (as per Raymond Smullyan)) are actually α-equivalent, and the entire expression is the (in)famous Ω-combinator. If we ignore all that however, and apply another β-reduction:
(λm. m m) (λm. m m)
Ah, that's still β-reducible. Or is it? This expression is α-equivalent to the previous. Oh dear, we appear to have found ourselves stuck in an infinite loop, as is always possible in Turing-complete (or should we say Lambda-complete?) languages. One might denote this as our original expression equalling "bottom" (in Haskell parlance), denoted ⊥:
(λx. (λy. y x) (λz. x z)) (λy. y y) = ⊥
Is this a mistake? Well, some good LC theory to know is:
if an expression has a β-normal form, then it will be the same β-normal form no matter what order of reductions was used to reach it, and
if an expression has a β-normal form, then normal order evaluation is guaranteed to find it.
So what is normal order? In short, it is β-reducing the outermost expression at every step. Let's take this expression for another spin!
(λx. (λy. y x) (λz. x z)) (λm. m m)
(λy. y (λm. m m)) (λz. (λm. m m) z)
(λz. (λm. m m) z) (λm. m m)
(λm. m m) (λm. m m)
Darn. Looks like this expression has no normal form – it diverges (doesn't terminate).
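As an aside, you can reproduce this divergence in Haskell (a sketch; the newtype U is needed because typed Haskell cannot express self-application directly):

newtype U = U (U -> U)

-- the "mockingbird" λm. m m
selfApply :: U -> U
selfApply (U f) = f (U f)

-- Ω = (λm. m m) (λm. m m); forcing omega loops forever
omega :: U
omega = selfApply (U selfApply)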
I am trying to understand the right steps to perform the following reduction in normal order. I cannot figure out the correct order in which the reductions should be performed, and why, in this expression:
(λn.λs.λz.n s (s z)) (λs.λz.s z)
could you please help me out?
Note: this reduction can also be seen as the function successor
(λn.λs.λz.n s (s z))
applied to the Church numeral 1
(λs.λz.s z)
knowing that the number zero is represented as:
(λs.λz.z)
The normal, AKA leftmost-outermost, reduction order attempts to reduce the leftmost outermost subterms first.
Since you are looking for the outermost terms, you need to determine the main building blocks of your term, remembering that every term is a variable, an abstraction over a term or an application of terms:
(λn.λs.λz.n s (s z)) (λs.λz.s z)
---------LHS-------- ----RHS----
----------APPLICATION-----------
The left-hand side (LHS) of the main term is the leftmost outermost one, so it is the starting point of the reduction. Its outermost abstraction is λn and there is a bound variable n in that term, so it will be substituted with the right-hand term:
λn.λs.λz.n s (s z)
--       -
However, since both LHS and RHS contain s and z variables, you need to rename them in one of them first; I chose to rename the ones in RHS:
λs.λz.s z -> λa.λb.a b
Now you can drop the λn abstraction and substitute the n variable with λa.λb.a b:
λn.λs.λz.n s (s z) -> λs.λz.(λa.λb.a b) s (s z)
--       -                  -----n-----
It's time to look for the next reduction spot:
λs.λz.(λa.λb.a b) s (s z)
Since application in the lambda calculus is left-associative, this is the same as:
λs.λz.(((λa.λb.a b) s) (s z))
The next leftmost outermost reducible term is (λa.λb.a b) s which reduces to (λb.s b):
λs.λz.(((λa.λb.a b) s) (s z)) -> λs.λz.((λb.s b) (s z))
         --    -    -                       -
And the last reducible term is (λb.s b) (s z), where b is substituted with (s z):
λs.λz.((λb.s b) (s z)) -> λs.λz.(s (s z))
        --   -  -----              -----
Which leads to the final term in normal form (the Church numeral 2, as expected for the successor of 1):
λs.λz.s (s z)
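As a sanity check, the whole computation can be transcribed into Haskell, where Church numerals are polymorphic functions (a sketch; the names one, suc, and toInt are mine):

{-# LANGUAGE RankNTypes #-}

type Church = forall a. (a -> a) -> a -> a

-- the Church numeral 1: λs.λz.s z
one :: Church
one = \s z -> s z

-- the successor function from the question: λn.λs.λz.n s (s z)
suc :: Church -> Church
suc n = \s z -> n s (s z)

toInt :: Church -> Int
toInt n = n (+ 1) 0

-- toInt (suc one) == 2, matching λs.λz.s (s z)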
I want to prove this lemma in Coq:
a : Type
b : Type
f : a -> b
g : a -> b
h : a -> b
______________________________________(1/1)
(forall x : a, f x = g x) ->
(forall x : a, g x = h x) -> forall x : a, f x = h x
I know that Coq.Relations.Relation_Definitions defines transitivity for relations:
Definition transitive : Prop := forall x y z:A, R x y -> R y z -> R x z.
Simply using the proof tactic apply transitivity obviously fails. How can I apply the transitivity lemma to the goal above?
The transitivity tactic requires an argument, which is the intermediate term that you want to introduce into the equality. First call intros (that's almost always the first thing to do in a proof) to have the hypotheses nicely in the environment. Then you can say transitivity (g x) and you're left with two immediate applications of an assumption.
intros.
transitivity (g x); auto.
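Putting it together, stating the goal from the question as a lemma:

Lemma fun_trans (a b : Type) (f g h : a -> b) :
  (forall x : a, f x = g x) ->
  (forall x : a, g x = h x) ->
  forall x : a, f x = h x.
Proof.
  intros.
  transitivity (g x); auto.
Qed.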
You can also make Coq guess which intermediate term to use. This doesn't always work, because sometimes Coq finds a candidate that doesn't work out in the end, but this case is simple enough and works immediately. The lemma that transitivity applies is eq_trans; use eapply eq_trans to leave the intermediate term as an existential variable (?y). The first eauto chooses a term that works for the first branch of the proof, and here it also works in the second branch.
intros.
eapply eq_trans.
eauto.
eauto.
This can be abbreviated as intros; eapply eq_trans; eauto. It can even be abbreviated further to
eauto using eq_trans.
eq_trans isn't in the default hint database because it often leads down an unsuccessful branch.
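So the shortest complete proof along these lines is:

Lemma fun_trans' (a b : Type) (f g h : a -> b) :
  (forall x : a, f x = g x) ->
  (forall x : a, g x = h x) ->
  forall x : a, f x = h x.
Proof. eauto using eq_trans. Qed.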
Ok, I was on the wrong track. Here is the proof of the lemma:
Lemma fun_trans : forall (a b:Type) (f g h:a->b),
(forall (x:a), f x = g x) ->
(forall (x:a), g x = h x) ->
(forall (x:a), f x = h x).
Proof.
intros a b f g h f_g g_h x.
rewrite f_g.
rewrite g_h.
trivial.
Qed.
Is there an extensible, efficient way to write existential statements in Haskell without implementing an embedded logic programming language? Oftentimes when I'm implementing algorithms, I want to express existentially quantified first-order statements like
∃x.∃y.x,y ∈ xs ∧ x ≠ y ∧ p x y
where ∈ is overloaded on lists. If I'm in a hurry, I might write perspicuous code that looks like
find p [] = False
find p (x:xs) = any (\y -> x /= y && (p x y || p y x)) xs || find p xs
or
find p xs = or [ x /= y && (p x y || p y x) | x <- xs, y <- xs]
But this approach doesn't generalize well to queries returning values or predicates or functions of multiple arities. For instance, even a simple statement like
∃x.∃y.∃z. x,y,z ∈ xs ∧ x ≠ y ≠ z ∧ f x y z = g x y z
requires writing another search procedure. And this means a considerable amount of boilerplate code. Of course, languages like Curry or Prolog that implement narrowing or a resolution engine allow the programmer to write statements like:
find(p,xs,z) = x ∈ xs & y ∈ xs & x =/= y & f x y =:= g x y =:= z
to abuse the notation considerably, which performs both a search and returns a value. This problem arises often when implementing formally specified algorithms, and is often solved by combinations of functions like fmap, foldr, and mapAccum, but mostly explicit recursion. Is there a more general and efficient, or just general and expressive, way to write code like this in Haskell?
There's a standard transformation that allows you to convert
∃x ∈ xs : P
to
exists (\x -> P) xs
(In the Prelude, exists is spelled any.) If you need to produce a witness, you can use Data.List.find instead of exists.
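For the two-variable query from the question, the transformation might look like this (a sketch; findPair is my name):

import Data.List (find, tails)

-- witness-producing version of: ∃x.∃y. x,y ∈ xs ∧ x ≠ y ∧ p x y
findPair :: Eq a => (a -> a -> Bool) -> [a] -> Maybe (a, a)
findPair p xs =
  find (\(x, y) -> x /= y && (p x y || p y x))
       [ (x, y) | (x : ys) <- tails xs, y <- ys ]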
The real nuisance of doing this kind of abstraction in Haskell as opposed to a logic language is that you really must pass the "universe" set xs as a parameter. I believe this is what brings in the "fuss" to which you refer in your title.
Of course you can, if you prefer, stuff the universal set (through which you are searching) into a monad. Then you can define your own versions of exists or find to work with the monadic state. To make it efficient, you can try Control.Monad.Logic, but it may involve breaking your head against Oleg's papers.
Anyway, the classic encoding is to replace all binding constructs, including existential and universal quantifiers, with lambdas, and proceed with appropriate function calls. My experience is that this encoding works even for complex nested queries with a lot of structure, but that it always feels clunky.
Maybe I don't understand something, but what's wrong with list comprehensions? Your second example becomes:
[ (x, y, z) | x <- xs, y <- xs, z <- xs
            , x /= y && y /= z && x /= z
            , f x y z == g x y z ]
This also lets you return values; to check whether the formula is satisfied, test the result with null (the formula holds iff the list is non-empty), and laziness ensures no more is evaluated than needed.
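For example, wrapped up as reusable helpers (a sketch; the names witnesses and holds are mine):

witnesses :: (Eq a, Eq b)
          => (a -> a -> a -> b) -> (a -> a -> a -> b) -> [a] -> [(a, a, a)]
witnesses f g xs =
  [ (x, y, z) | x <- xs, y <- xs, z <- xs
              , x /= y && y /= z && x /= z
              , f x y z == g x y z ]

-- the existential statement holds iff a witness exists
holds :: (Eq a, Eq b) => (a -> a -> a -> b) -> (a -> a -> a -> b) -> [a] -> Bool
holds f g xs = not (null (witnesses f g xs))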