lambda calculus, expanded and compressed forms have different beta-reductions? [closed] - lambda-calculus

Given
S=\x.\y.\z.x z (y z)
and
K=\x.\y.x
I cannot understand why two beta-equivalent forms of the same expression (S K K) yield different reductions in the untyped lambda calculus, depending on whether I start from the (S K K) form or from the equivalent expanded form:
(S K K) = ((S K) K) -> ((\y.(\z.((K z) (y z)))) K) -> (\z.((K z) (K z))) ->
(\z.((\y.z) (K z))) -> (\z.z) -> 4 reductions!
(S K K) = \x.\y.\z.x z (y z) \x.\y.x \x.\y.x -> 0 reductions!
It seems the compressed and the expanded form have different parenthesizations; indeed, the first one is parenthesized as:
(S K K) = ((S K) K)
while the second as:
\x.\y.\z.x z (y z) \x.\y.x \x.\y.x =
(\x.(\y.(\z.(((x z) (y z)) (\x.(\y.(x (\x.(\y.x)))))))))
Does anyone have any insight into this?
Thank you

Check out the formal definition of lambda calculus on Wikipedia. An abstraction and an application always have a set of enclosing parentheses.
This means the correct definitions of S and K are:
S = (\x.\y.\z.x z (y z))
and
K = (\x.\y.x)
Substituting these in (S K K) gives the correct result.

In (S K K), some parentheses are implicit. This form is an abbreviation for ((S K) K) since function application is always binary and is considered left-associative.
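
A quick way to see this concretely (an illustrative Python check, not part of the original answers) is to encode S and K as curried lambdas; parsed left-associatively as ((S K) K), the term behaves as the identity:

# Illustrative check: S and K as curried Python lambdas.
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
# (S K K) parses as ((S K) K) and acts as the identity combinator:
for v in (0, "a", [1, 2]):
    assert S(K)(K)(v) == v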

Related

2D DP with large dimensions

Given an (n+1)-tuple (a0, a1, ..., an).
We need to calculate F(m, n).
Given:
a0 <= a1 <= ... <= an
F(x, y) = ay * F(x - 1, y) + F(x, y - 1)
F(0, y) = 1 for all y
F(x, 0) = a0^x
I was thinking of a DP approach, but the problem I face is that m is too large: it can be greater than a billion.
Is there any way to solve this?
I feel this can be converted to a matrix exponentiation problem, but I am not able to figure out how.
I am new to Stack Overflow and to programming. Any suggested edits to the question, and any approach or solution to the problem, will be appreciated.
Your "matrix exponentiation" idea is correct.
Write F(x, _) as a vertical vector. (That is, one entry for each value of y.)
There is a matrix A such that F(x+1, _) = A * F(x, _). Once found, it turns out that F(x+k, _) = A^k * F(x, _).
Now you know F(0, _). You can find A. Then with repeated squaring you can find A^m and now you can answer your question.
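
Here is an illustrative sketch of that plan in Python (the function names and the modulus are assumptions, not from the original answer). Unrolling F(x, y) = ay * F(x-1, y) + F(x, y-1) down to y = 0, together with F(x, 0) = a0^x, gives F(x, y) = sum of aj * F(x-1, j) over j = 0..y, so A is lower-triangular with A[y][j] = aj for j <= y, and the whole computation costs O(n^3 log m) multiplications:

# Sketch under the assumptions above; mod is hypothetical (such problems
# usually ask for the answer modulo a large prime).
def mat_mul(A, B, mod):
    # Dense square matrix product modulo mod.
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            if A[i][k]:
                aik = A[i][k]
                for j in range(n):
                    C[i][j] = (C[i][j] + aik * B[k][j]) % mod
    return C

def mat_pow(A, e, mod):
    # Repeated squaring.
    n = len(A)
    R = [[int(i == j) for j in range(n)] for i in range(n)]  # identity
    while e:
        if e & 1:
            R = mat_mul(R, A, mod)
        A = mat_mul(A, A, mod)
        e >>= 1
    return R

def solve(m, a, mod=10**9 + 7):
    # F(m, n) for a = (a0, ..., an): F(0, _) is the all-ones vector and
    # F(x, _) = A * F(x-1, _) with A[y][j] = a[j] for j <= y.
    size = len(a)
    A = [[a[j] if j <= y else 0 for j in range(size)] for y in range(size)]
    Am = mat_pow(A, m, mod)
    return sum(Am[size - 1]) % mod  # row n of A^m times the all-ones vector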

Write a loop invariant for partial correctness of Hoare Triple

I am new to the world of logic. I am learning Hoare logic and the partial and total correctness of programs. I tried a lot to solve the question below but failed.
Write a loop invariant P to show partial correctness for the Hoare triple
{x = x₀ ∧ y = y₀ ∧ x >= 0} mult {z = x₀ * y₀}
where x₀ and y₀ are auxiliary variables recording the initial values of x and y, and mult is the following program that computes the product of x and y and stores it in z:
mult:
z := 0;
while (y > 0) do
z := z + x;
y := y - 1;
One of my friends gave me the following answer, but I don't know whether it is correct:
The invariant P is (x₀ * y₀ = z + x₀ * y) ∧ x = x₀. Intuitively, z holds the part of the result that has already been computed, and x₀ * y is what remains to be computed.
Please teach me step by step how to solve this question.
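
A way to build intuition before writing the proof (an illustrative check, not a proof; the harness below is hypothetical): run mult in Python and assert the proposed invariant at every loop test. To discharge the Hoare triple one then shows the invariant holds on entry (z = 0 and y = y₀), is preserved by one iteration, and, together with the negated guard y <= 0 (hence y = 0 when y₀ >= 0), implies the postcondition z = x₀ * y₀.

def mult_checked(x0, y0):
    # Hypothetical harness: x0, y0 play the role of the auxiliary
    # variables recording the initial values of x and y.
    x, y, z = x0, y0, 0
    while True:
        # proposed invariant P: x0*y0 == z + x0*y and x == x0
        assert x0 * y0 == z + x0 * y and x == x0
        if not (y > 0):
            break
        z = z + x
        y = y - 1
    assert z == x0 * y0  # postcondition
    return z

for a in range(4):
    for b in range(4):  # nonnegative y0, so the loop exits with y == 0
        mult_checked(a, b)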

Proving equivalence of well-founded recursion

In the answer to the question Assisting Agda's termination checker, the recursion is proven to be well-founded.
Given the function defined like so (and everything else like in Vitus's answer there):
f : ℕ → ℕ
f n = go _ (<-wf n)
where
go : ∀ n → Acc n → ℕ
go zero _ = 0
go (suc n) (acc a) = go ⌊ n /2⌋ (a _ (s≤s (/2-less _)))
I cannot see how to prove f n == f ⌊ n /2⌋. (My actual problem involves a different function, but it seems to boil down to the same thing.)
My understanding is that go receives an Acc n computed in different ways: f n can be shown to pass the Acc ⌊ n /2⌋ computed by a _ (s≤s (/2-less _)), while f ⌊ n /2⌋ passes the Acc ⌊ n /2⌋ computed by <-wf ⌊ n /2⌋, so the two cannot be seen to be identical.
It seems to me that proof irrelevance must be used somehow, to say that having any instance of Acc n is enough, no matter how it was computed; but every way I try to use it seems to contaminate everything with restrictions (e.g. pattern matching no longer works, or an irrelevant function cannot be applied, etc.).
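
As a side note (not from the original thread, and in Lean rather than Agda): in a system with definitional proof irrelevance this obstacle disappears, because any two accessibility proofs of the same element are equal, and the well-founded fixpoint comes with an unfolding lemma (WellFounded.fix_eq in Lean 4) that yields equations like f n == f ⌊ n /2⌋ directly. A minimal Lean 4 illustration of the proof-irrelevance point:

-- Acc lives in Prop, and Lean has definitional proof irrelevance,
-- so any two accessibility proofs of the same element are equal by rfl.
example {r : Nat → Nat → Prop} {n : Nat} (h1 h2 : Acc r n) : h1 = h2 := rfl

In Agda a common workaround is instead to prove directly, by induction, that go returns the same answer for any two Acc arguments.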

Fixed point of K combinator

The K combinator is K := (λxy.x) and the fixed point combinator is Y := λf.(λx.f x x) (λx.f x x). I tried to calculate YK:
YK = (λx.Kxx)(λx.Kxx) = (λx.x)(λx.x) = (λx.x) = I
So because YK is the fixed point of K:
K(YK) = YK
KI = I
KIe = Ie = e
for any e. But KIe should be equal to I!
You're not starting with the correct definition of the Y combinator. It should be Y := λf.(λx.f (x x)) (λx.f (x x)) (note the parentheses around x x).
Since application in the lambda calculus is left-associative, f x x is equal to (f x) x, which is not the intended f (x x).
Using the correct definition, we get
Y K := (λf.(λx.f (x x)) (λx.f (x x))) K
-> (λx.K (x x)) (λx.K (x x))
-> K ((λx.K (x x)) (λx.K (x x)))
= K (Y K)
Since Y K doesn't reduce to I, the following substitution is not allowed.
K (Y K) = Y K
K I = I
So, K I e is simply
K I e := (K I) e
= ((λx.λy.x) I) e
-> (λy.I) e
-> I
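
To see the conclusion concretely (an illustrative Python check, not part of the original answer), encode K and I as curried lambdas:

K = lambda x: lambda y: x
I = lambda x: x
e = "anything"
# Left associativity: K I e is (K I) e, which discards e and returns I.
assert K(I)(e) is I
assert K(I)(e)("whatever") == "whatever"  # the result really is I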

Lambda calculus help

So I'm totally stuck on one part of a problem. It would be awesome if someone could help.
Show that the term ZZ where Z is λz.λx. x(z z x) satisfies
the requirement for fixed point combinators that ZZM =β M(ZZM).
This is completely trivial.
You just apply the definition of β-reduction two times:
Z Z M = (λz.λx. x(z z x)) Z M > (λx. x(Z Z x)) M > M (Z Z M)
where > is the β-reduction.
Therefore Z Z M β-reduces to M (Z Z M) in two steps, hence Z Z M =β M (Z Z M).
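
As an aside (an illustration beyond the original answer): Z Z is Turing's fixed-point combinator Θ. Python evaluates strictly, so the inner z z x must be delayed by an eta-expansion, or Θ M would loop forever before ever calling M; with that caveat, the fixed-point behaviour can be exercised directly:

# Z = λz.λx. x (z z x), with z z x eta-expanded for strict evaluation
# (the eta-expansion is an assumption forced by Python, not part of the term).
Z = lambda z: lambda x: x(lambda v: z(z)(x)(v))
theta = Z(Z)  # theta(M) behaves as M(theta(M)), up to the eta-expansion

# Usage: tie the recursive knot for factorial.
fact = theta(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))
assert fact(5) == 120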
