canonical form of (M B) (M B) - lambda-calculus

Are there lambda terms M and B with M =/= B, so that M B and (M B) (M B) have the same canonical form?
This is a problem I encountered while I am still new to lambda calculus.
I approached this by taking M = λx.x and B = λy.y:
M B = (λx.x) (λy.y) ->(β) λy.y
(M B) (M B) = ((λx.x) (λy.y)) ((λx.x) (λy.y)) ->(β) (λy.y) ((λx.x) (λy.y)) ->(β) (λx.x) (λy.y) ->(β) λy.y
and thus getting the same canonical form, but I am not confident that I am correct about (M B) (M B).

The reductions you have posted are correct.
One thing to know is that you can use any name for the bound variable in a lambda term, so the terms \x.x and \y.y are not really different; they are the same term. This is called alpha equivalence.
Another thing to know is that the function \x.x that just returns its argument is called the identity function, and is very useful to know.
Here is an answer to your question where M and B are different. M can be a function that ignores its argument and returns the identity function: \x.\y.y. B can be any term different from M.
Then,
(M B) -> \y.y
And also
((M B) (M B)) -> ((\y.y) (\y.y)) -> \y.y
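If it helps to see this concretely, here is a small OCaml sketch of the same construction (the names id, m and b are illustrative; ordinary OCaml functions stand in for the lambda terms):

let id = fun y -> y        (* \y.y, the identity *)
let m = fun _ -> id        (* \x.\y.y: ignores its argument and returns the identity *)
let b = fun x -> x         (* any term different from m *)

(* both M B and (M B) (M B) behave like the identity *)
let _ = assert ((m b) 42 = 42)
let _ = assert (((m b) (m b)) 42 = 42)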

Related

How do you use the Ring solver in Cubical Agda?

I have started playing around with Cubical Agda. The last thing I tried was building the type of integers (assuming the type of naturals is already defined) in a way similar to how it is done in classical mathematics (see the construction of integers on Wikipedia). This is what I have:
data dInt : Set where
  _⊝_ : ℕ → ℕ → dInt
  canc : ∀ a b c d → a + d ≡ b + c → a ⊝ b ≡ c ⊝ d
  trunc : isSet dInt
After doing that, I wanted to define addition
_++_ : dInt → dInt → dInt
(x ⊝ z) ++ (u ⊝ v) = (x + u) ⊝ (z + v)
(x ⊝ z) ++ canc a b c d u i = canc (x + a) (z + b) (x + c) (z + d) {! !} i
...
I am now stuck on the part between the two braces. A term of type x + a + (z + d) ≡ z + b + (x + c) is asked for. Not wanting to prove this by hand, I wanted to use the ring solver of Cubical Agda. But I could never manage to make it work, even when trying to set it up for simple ring equalities like x + x + x ≡ 3 * x.
How can I make it work? Is there a minimal example of using it for naturals? There is a file NatExamples.agda in the library, but it forces you to rewrite your equalities in a convoluted way.
You can see how the solver for natural numbers is supposed to be used in this file in the cubical library:
Cubical/Tactics/NatSolver/Examples.agda
Note that this solver is different from the solver for commutative rings, which is designed for proving equations in abstract rings and is explained here:
Cubical/Tactics/CommRingSolver/Examples.agda
However, if I read your problem correctly, the equality you want to prove requires the use of other propositional equalities in Nat. This is not supported by any solver in the cubical library (as far as I know, the standard library doesn't support it either). But of course, you can use the solver for all the steps that don't use other equalities.
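Concretely, the hole asks for a term of type x + a + (z + d) ≡ z + b + (x + c), and the hypothesis u : a + d ≡ b + c has to enter in the middle of an otherwise purely ring-style rearrangement, roughly:

x + a + (z + d) ≡ (x + z) + (a + d)   -- pure reordering, solver territory
                ≡ (x + z) + (b + c)   -- uses the hypothesis a + d ≡ b + c
                ≡ z + b + (x + c)     -- pure reordering again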
Just in case you didn't spot this: here is a definition of the integers in math-style using the SetQuotients of the cubical library. SetQuotients help you to avoid the work related to your third constructor trunc. This means you basically just need to show some constructions are well defined as you would in 'normal' math.
I've successfully used the ring solver for exactly the same problem: defining Int as a quotient of ℕ ⨯ ℕ. You can find the complete file here; the relevant parts are the following:
Non-cubical propositional equality to path equality:
open import Cubical.Core.Prelude renaming (_+_ to _+̂_)
open import Relation.Binary.PropositionalEquality renaming (refl to prefl; _≡_ to _=̂_) using ()
fromPropEq : ∀ {ℓ A} {x y : A} → _=̂_ {ℓ} {A} x y → x ≡ y
fromPropEq prefl = refl
An example of using the ring solver:
open import Function using (_$_)
import Data.Nat.Solver
open Data.Nat.Solver.+-*-Solver
using (prove; solve; _:=_; con; var; _:+_; _:*_; :-_; _:-_)
reorder : ∀ x y a b → (x +̂ a) +̂ (y +̂ b) ≡ (x +̂ y) +̂ (a +̂ b)
reorder x y a b = fromPropEq $ solve 4 (λ x y a b → (x :+ a) :+ (y :+ b) := (x :+ y) :+ (a :+ b)) prefl x y a b
So here, even though the ring solver gives us a proof of _=̂_, we can use _=̂_'s K and _≡_'s reflexivity to turn that into a path equality which can be used further downstream to e.g. prove that Int addition is representative-invariant.

How can I subtract a multiset from a set with a given multiset?

So I'm trying to define a function apply_C :: "('a multiset ⇒ 'a option) ⇒ 'a multiset ⇒ 'a multiset"
It takes in a function C that may convert an 'a multiset into a single element of type 'a. Here we assume that the elements in the domain of C are pairwise disjoint and not the empty multiset (I already have another function that checks these things). apply_C will also take another multiset inp. What I'd like the function to do is check whether there is at least one element in the domain of C that is completely contained in inp. If this is the case, then perform the multiset difference inp - s, where s is that element of the domain of C, and add the element the (C s) to the resulting multiset. Afterwards, keep running the function until there are no more elements in the domain of C that are completely contained in the given inp multiset.
What I tried was the following:
fun apply_C :: "('a multiset ⇒ 'a option) ⇒ 'a multiset ⇒ 'a multiset" where
"apply_C C inp = (if ∃s ∈ (domain C). s ⊆# inp then apply_C C (add_mset (the (C s)) (inp - s)) else inp)"
However, I get this error:
Variable "s" occurs on right hand side only:
⋀C inp s.
apply_C C inp =
(if ∃s∈domain C. s ⊆# inp
then apply_C C
(add_mset (the (C s)) (inp - s))
else inp)
I have been thinking about this problem for days now, and I haven't been able to find a way to implement this functionality in Isabelle. Could I please have some help?
After thinking more about it, I don't believe there is a simple solution for that in Isabelle.
Do you need that?
You have not said why you want that. Maybe you can reduce your assumptions? Do you really need a function to calculate the result?
How to express the definition?
I would use an inductive predicate that expresses one step of rewriting and prove that the solution is unique. Something along these lines:
context
  fixes C :: ‹'a multiset ⇒ 'a option›
begin

inductive apply_CI where
  ‹apply_CI (M + M') (add_mset (the (C M)) M')›
  if ‹M ∈ dom C›

context
  assumes
    distinct: ‹⋀a b. a ∈ dom C ⟹ b ∈ dom C ⟹ a ≠ b ⟹ a ∩# b = {#}› and
    strictly_smaller: ‹⋀a b. a ∈ dom C ⟹ size a > 1›
begin

lemma apply_CI_determ:
  assumes
    ‹apply_CI⇧*⇧* M M⇩1› and
    ‹apply_CI⇧*⇧* M M⇩2› and
    ‹⋀M⇩3. ¬apply_CI M⇩1 M⇩3›
    ‹⋀M⇩3. ¬apply_CI M⇩2 M⇩3›
  shows ‹M⇩1 = M⇩2›
  sorry

lemma apply_CI_smaller:
  ‹apply_CI M M' ⟹ size M' ≤ size M›
  apply (induction rule: apply_CI.induct)
  subgoal for M M'
    using strictly_smaller[of M]
    by auto
  done

lemma wf_apply_CI:
  ‹wf {(x, y). apply_CI y x}›
  (*trivial but very annoying because not enough useful lemmas on wf*)
  sorry

end

end
I have no clue how to prove apply_CI_determ (no idea if the conditions I wrote down are sufficient or not), but I did spend quite some time thinking about it.
After that you can write your definition as:
definition apply_C where
  ‹apply_C M = (SOME M'. apply_CI⇧*⇧* M M' ∧ (∀M⇩3. ¬apply_CI M' M⇩3))›
and prove the property in your definition.
How to execute it
I don't see how to write an executable function on multisets directly. The problem you face is that one step of apply_C is nondeterministic.
If you can use lists instead of multisets, you get an order on the elements for free, and you can use subseqs, which gives you all possible subsequences. Rewrite using the first element of subseqs that is in the domain of C. Iterate as long as any rewriting is possible, as in the sketch below.
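Just to make that loop concrete, here is a rough sketch in OCaml rather than Isabelle; C is modelled as a function from lists to options, and all names here (subseqs, remove_subseq, apply_c) are illustrative. Termination relies on every element of C's domain having more than one element, as in the strictly_smaller assumption above.

(* all subsequences (sub-lists) of a list; exponential, fine for a sketch *)
let rec subseqs = function
  | [] -> [[]]
  | x :: xs ->
      let rest = subseqs xs in
      List.map (fun s -> x :: s) rest @ rest

(* remove one occurrence of each element of s from inp,
   assuming s is a subsequence of inp *)
let rec remove_subseq s inp =
  match s, inp with
  | [], _ | _, [] -> inp
  | y :: ys, x :: xs ->
      if x = y then remove_subseq ys xs else x :: remove_subseq s xs

(* rewrite with the first non-empty subsequence in the domain of c, then iterate *)
let rec apply_c c inp =
  match List.find_opt (fun s -> s <> [] && c s <> None) (subseqs inp) with
  | None -> inp
  | Some s ->
      (match c s with
       | Some a -> apply_c c (a :: remove_subseq s inp)
       | None -> inp)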
Link that to the inductive predicate to prove termination and that it calculates the right thing.
Remark that in general you cannot extract a list out of a multiset, but it is possible to do so in some cases (e.g., if you have a linorder over 'a).

Functional programming with OCaml

I'm new to functional programming and I'm trying to implement a basic algorithm in OCaml for a course that I'm currently following.
I'm trying to implement the following algorithm:
Inputs:
- E : a non-empty set of integers
- s : an integer
- d : a positive float different from 0
Output:
- T : a set of integers included in E
m <- min(E)
T <- {m}
FOR EACH e ∈ sort_ascending(E \ {m}) DO
    IF e > (1+d)m AND e <= s THEN
        T <- T U {e}
        m <- e
RETURN T
let f = fun (l: int list) (s: int) (d: float) ->
  List.fold_left (fun acc x -> if ... then (list_union acc [x]) else acc)
    [(list_min l)] (list_sort_ascending l) ;;
So far, this is what I have, but I don't know how to handle the modification of the "m" variable mentioned in the algorithm... So I need help understanding the best way to implement the algorithm; maybe I haven't gone in the right direction.
Thanks in advance to anyone who will take the time to help me!
The basic trick of functional programming is that although you can't modify the values of any variables, you can call a function with different arguments. In the initial stages of switching away from imperative ways of thinking, you can imagine making every variable you want to modify into the parameters of your function. To modify the variables, you call the function recursively with the desired new values.
This technique will work for "modifying" the variable m. Think of m as a function parameter instead.
You are already using this technique with acc. Each call inside the fold gets the old value of acc and returns the new value, which is then passed to the function again. You might imagine having both acc and m as parameters of this inner function.
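For instance (a sketch only, reusing the hypothetical helpers list_min and list_sort_ascending from the question), the fold's accumulator can carry the pair (acc, m):

let f (l : int list) (s : int) (d : float) =
  let m0 = list_min l in
  let (t, _) =
    List.fold_left
      (fun (acc, m) e ->
         if float e > (1. +. d) *. float m && e <= s
         then (e :: acc, e)   (* "modify" m by passing the new value along *)
         else (acc, m))
      ([m0], m0)
      (list_sort_ascending l)
  in
  List.rev t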
Assuming list_min is defined, you should think about the problem methodically. Let's say you represent a set with a list. Your function takes this set and some arguments and returns a subset of the original set whose elements meet certain conditions.
Now, when I read this for the first time, List.filter automatically came to my mind.
List.filter : ('a -> bool) -> 'a list -> 'a list
But you wanted to modify the m so this wouldn't be useful. It's important to know when you can use library functions and when you really need to create your own functions from scratch. You could clearly use filter while handling m as a reference but it wouldn't be the functional way.
First let's focus on your predicate:
fun s d m e -> (float e) > (1. +. d)*.(float m) && (e <= s)
Note that +. and *. are the plus and product functions for floats, and float is a function that casts an int to float.
Let's call that function predicate.
Now, this is also a matter of opinion, but in my experience I wouldn't use fold_left here, simply because it's more complicated than necessary.
So let's begin with my idea of the code:
let m = list_min l;;
This is the initial m.
Then I will define an auxiliary function that takes m as an argument, with l as your original set and s, d and m the variables you used in your original imperative code.
let rec f' l s d m =
  match l with
  | [] -> []
  | x :: xs ->
      if predicate s d m x
      then x :: f' xs s d x
      else f' xs s d m
Then for each element of your set, you check if it satisfies the predicate, and if it does, you call the function again but you replace the value of m with x.
Finally you could just call f' from a function f:
let f (l: int list) (s: int) (d: float) =
  let m = list_min l in
  f' l s d m
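Putting the pieces together, here is a self-contained, runnable sketch (it repeats the parts above, with list_min written inline for illustration). It follows the original pseudocode a little more literally than the f above: it keeps the initial m in the result and iterates over the sorted remainder of the list.

let list_min l = List.fold_left min (List.hd l) l   (* assumes a non-empty list *)

let predicate s d m e = float e > (1. +. d) *. float m && e <= s

let rec f' l s d m =
  match l with
  | [] -> []
  | x :: xs ->
      if predicate s d m x
      then x :: f' xs s d x
      else f' xs s d m

let f (l : int list) (s : int) (d : float) =
  let m = list_min l in
  m :: f' (List.sort compare (List.filter (fun e -> e <> m) l)) s d m

(* Example: E = {4; 1; 7; 3; 9}, s = 8, d = 0.5 gives [1; 3; 7] *)
let () =
  List.iter (Printf.printf "%d ") (f [4; 1; 7; 3; 9] 8 0.5)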
Be careful when creating a function like your list_min: what would happen if the list was empty? Normally you would use the option type to handle those cases, but you assumed you're dealing with a non-empty set, so that's fine.
When doing functional programming it's important to think functionally. Pattern matching is highly recommended, while pointers/references should be kept to a minimum. I hope this is useful. Contact me if you have any other doubts or suggestions.

Church numerals in lambda calculus

I need to find a function P such that (using beta-reduction)
P(g, h, i) ->* (h, i, i+1).
I am allowed to use the successor function succ. From Wikipedia I got
succ = λn.λf.λx.f(n f x)
My answer is P = λx.λy.λz.yz(λz.λf.λu.f(z f u))z
but I'm not quite sure about it. My logic was the λx would effectively get rid of the g term, then the λy.λz would bring in the h and i via the yz. Then the succ function would bring in i+1 last. I just don't know if my function actually replicates this.
Any help given is appreciated
@melpomene points out that this question is unanswerable without a specific implementation in mind (e.g. for tuples). I am going to presume that your tuple is implemented as:
T = λabcf.f a b c
Or, if you prefer the non-shorthand:
T = (λa.(λb.(λc.(λf.f a b c))))
That is, a function which closes over a, b, and c, and waits for a function f to pass those variables.
If that is the implementation in mind, and assuming normal Church numerals, then the function you spec:
P(g, h, i) ->* (h, i, i+1)
Needs to:
- take in a triple (with a, b, and c already applied)
- construct a new triple, with
  - the second value of the old triple
  - the third value of the old triple
  - the succ of the third value of the old triple
Here is such a function P:
P = λt.t (λghi.T h i (succ i))
Or again, if you prefer non-shorthand:
P = (λt.t(λg.(λh.(λi.T h i (succ i)))))
This can be partially cleaned up with some helper functions:
SND = λt.t (λabc.b)
TRD = λt.t (λabc.c)
In which case we can write P as:
P = λt.T (SND t) (TRD t) (succ (TRD t))
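To sanity-check this, here is the construction sketched in OCaml, with the Church numerals decoded to machine integers at the end (the names triple, fst3, snd3, trd3 and to_int are illustrative):

(* Church numerals and their successor *)
let zero = fun f x -> x
let succ n = fun f x -> f (n f x)
let to_int n = n (fun k -> k + 1) 0          (* decode a Church numeral *)

(* T = λabcf.f a b c and the three projections *)
let triple a b c = fun f -> f a b c
let fst3 t = t (fun a _ _ -> a)
let snd3 t = t (fun _ b _ -> b)
let trd3 t = t (fun _ _ c -> c)

(* P = λt.t (λghi.T h i (succ i)) *)
let p t = t (fun _ h i -> triple h i (succ i))

(* (0, 1, 2) as a Church triple; eta-expanded so it stays polymorphic *)
let t012 f = triple zero (succ zero) (succ (succ zero)) f

let () =
  let t = p t012 in
  Printf.printf "(%d, %d, %d)\n"               (* prints (1, 2, 3) *)
    (to_int (fst3 t)) (to_int (snd3 t)) (to_int (trd3 t))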

reduction steps for successor of 1 with Church numerals

I am trying to understand the right steps to perform the following reduction using normal-order reduction. I cannot work out the correct order in which I should perform the reductions, and why, in this expression:
(λn.λs.λz.n s (s z)) (λs.λz.s z)
Could you please help me out?
Note: this reduction can also be seen as the function successor
(λn.λs.λz.n s (s z))
applied to the Church numeral 1
(λs.λz.s z)
knowing that the number zero is represented as:
(λs.λz.z)
The normal, AKA leftmost-outermost, reduction order attempts to reduce the leftmost outermost subterms first.
Since you are looking for the outermost terms, you need to determine the main building blocks of your term, remembering that every term is a variable, an abstraction over a term or an application of terms:
(λn.λs.λz.n s (s z)) (λs.λz.s z)
---------LHS-------- ----RHS----
----------APPLICATION-----------
The left-hand side (LHS) of the main term is the leftmost outermost one, so it is the starting point of the reduction. Its outermost abstraction is λn, and the variable n is bound in that term, so n will be substituted with the right-hand term:
λn.λs.λz.n s (s z)
--       -
However, since both LHS and RHS contain s and z variables, you need to rename them in one of them first; I chose to rename the ones in RHS:
λs.λz.s z -> λa.λb.a b
Now you can drop the λn abstraction and substitute the n variable with λa.λb.a b:
λn.λs.λz.n s (s z) -> λs.λz.(λa.λb.a b) s (s z)
--       -                  -----n-----
It's time to look for the next reduction spot:
λs.λz.(λa.λb.a b) s (s z)
Since lambda calculus is left-associative, this is the same as:
λs.λz.(((λa.λb.a b) s) (s z))
The next leftmost outermost reducible term is (λa.λb.a b) s which reduces to (λb.s b):
λs.λz.(((λa.λb.a b) s) (s z)) -> λs.λz.((λb.s b) (s z))
         --    -    -                       -
And the last reducible term is (λb.s b) (s z), where b is substituted with (s z):
λs.λz.((λb.s b) (s z)) -> λs.λz.(s (s z))
        --   -  -----              -----
Which leads to the final state in normal form:
λs.λz.s (s z)
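As a quick sanity check, here is the same reduction mirrored in OCaml (a sketch; to_int is an illustrative decoder): applying this successor to the Church numeral 1 indeed yields 2.

let one = fun s z -> s z                 (* λs.λz.s z *)
let succ' n = fun s z -> n s (s z)       (* λn.λs.λz.n s (s z) *)
let to_int n = n (fun k -> k + 1) 0      (* decode a Church numeral *)

let () = Printf.printf "%d\n" (to_int (succ' one))   (* prints 2, i.e. λs.λz.s (s z) *)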

Resources