So, I've spent a lot of time reading and re-reading the ending of chapter 9 in The Little Schemer, where the applicative Y combinator is developed for the length function. I think my confusion boils down to a single statement that contrasts two versions of length (before the combinator is factored out):
A:
((lambda (mk-length)
(mk-length mk-length))
(lambda (mk-length)
(lambda (l)
(cond
((null? l) 0 )
(else (add1
((mk-length mk-length)
(cdr l))))))))
B:
((lambda (mk-length)
(mk-length mk-length))
(lambda (mk-length)
((lambda (length)
(lambda (l)
(cond
((null? l) 0)
(else (add1 (length (cdr l)))))))
(mk-length mk-length))))
Page 170 (4th ed.) states that A
returns a function when we applied it to an argument
while B
does not return a function
thereby producing an infinite regress of self-applications. I'm stumped by this. If B is plagued by this problem, I don't see how A avoids it.
Great question. For the benefit of those without a functioning DrRacket installation (myself included) I'll try to answer it.
First, let's use some sane (short) variable names, easily trackable by a human eye/mind:
((lambda (h) ; A.
(h h)) ; apply h to h
(lambda (g)
(lambda (lst)
(if (null? lst) 0
(add1
((g g) (cdr lst)))))))
The first lambda term is what's known as little omega, or U combinator. When applied to something, it causes that term's self-application. Thus the above is equivalent to
(let ((h (lambda (g)
(lambda (lst)
(if (null? lst) 0
(add1 ((g g) (cdr lst))))))))
(h h))
When h is applied to h, a new binding is formed:
(let ((h (lambda (g)
(lambda (lst)
(if (null? lst) 0
(add1 ((g g) (cdr lst))))))))
(let ((g h))
(lambda (lst)
(if (null? lst) 0
(add1 ((g g) (cdr lst)))))))
Now there's nothing to apply anymore, so the inner lambda form is returned — along with the hidden linkages to the environment frames (i.e. those let bindings) up above it.
This pairing of a lambda expression with its defining environment is known as a closure. To the outside world it is just another function of one parameter, lst. No more reduction steps are left to perform there at the moment.
Now, when that closure — our list-length function — is called, execution will eventually get to the point of the (g g) self-application, and the same reduction steps as outlined above will be performed again (recalculating the same closure). But not earlier.
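To check this concretely, here is a quick sketch (assuming a Racket-like Scheme where add1 is available, as in the book; length-A is just a name I give to expression A. for testing):

(define length-A
  ((lambda (h) (h h))
   (lambda (g)
     (lambda (lst)
       (if (null? lst) 0
           (add1 ((g g) (cdr lst))))))))

(length-A '(a b c)) ; ==> 3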
Now, the authors of that book want to arrive at the Y combinator, so they apply some code transformations to the first expression, to somehow arrange for that self-application (g g) to be performed automatically — so we may write the recursive function application in the normal manner, (f x), instead of having to write it as ((g g) x) for all recursive calls:
((lambda (h) ; B.
(h h)) ; apply h to h
(lambda (g)
((lambda (f) ; 'f' to become bound to '(g g)',
(lambda (lst)
(if (null? lst) 0
(add1 (f (cdr lst)))))) ; here: (f x) instead of ((g g) x)!
(g g)))) ; (this is not quite right)
Now, after a few reduction steps, we arrive at
(let ((h (lambda (g)
((lambda (f)
(lambda (lst)
(if (null? lst) 0
(add1 (f (cdr lst))))))
(g g)))))
(let ((g h))
((lambda (f)
(lambda (lst)
(if (null? lst) 0
(add1 (f (cdr lst))))))
(g g))))
which is equivalent to
(let ((h (lambda (g)
((lambda (f)
(lambda (lst)
(if (null? lst) 0
(add1 (f (cdr lst))))))
(g g)))))
(let ((g h))
(let ((f (g g))) ; problem! (under applicative-order evaluation)
(lambda (lst)
(if (null? lst) 0
(add1 (f (cdr lst))))))))
And here comes trouble: the self-application of (g g) is performed too early, before that inner lambda can even be returned, as a closure, to the run-time system. We only want it to be reduced when the execution gets to that point inside the lambda expression, after the closure has been called. To have it reduced before the closure is even created is ridiculous. A subtle error. :)
Of course, since g is bound to h, (g g) is reduced to (h h) and we're back again where we started, applying h to h. Looping.
Of course the authors are aware of this. They want us to understand it too.
So the culprit is simple — it is the applicative order of evaluation: the argument is evaluated before the binding of the function's formal parameter to that argument's value is formed.
That code transformation wasn't quite right, then. It would've worked under normal order where arguments aren't evaluated in advance.
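A smaller illustration of the same trap (my own example, not from the book): under applicative order an argument is evaluated even when the body does not need its value yet, and wrapping it in a lambda is what postpones that evaluation.

(define (forever) (forever))

; ((lambda (x) 42) (forever))            ; never returns: the argument is evaluated first
((lambda (x) 42) (lambda () (forever)))  ; ==> 42, the lambda wrapper delays the evaluation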
This is remedied easily enough by "eta-expansion", which delays the application until the actual call point: (lambda (x) ((g g) x)) actually says: "will call ((g g) x) when called upon with an argument of x".
And this is actually what that code transformation should have been in the first place:
((lambda (h) ; C.
(h h)) ; apply h to h
(lambda (g)
((lambda (f) ; 'f' to become bound to '(lambda (x) ((g g) x))',
(lambda (lst)
(if (null? lst) 0
(add1 (f (cdr lst)))))) ; here: (f x) instead of ((g g) x)
(lambda (x) ((g g) x)))))
Now that next reduction step can be performed:
(let ((h (lambda (g)
((lambda (f)
(lambda (lst)
(if (null? lst) 0
(add1 (f (cdr lst))))))
(lambda (x) ((g g) x))))))
(let ((g h))
(let ((f (lambda (x) ((g g) x)))) ; here it's OK
(lambda (lst)
(if (null? lst) 0
(add1 (f (cdr lst))))))))
and the closure (lambda (lst) ...) is formed and returned without a problem, and when (f (cdr lst)) is called (inside the closure) it is reduced to ((g g) (cdr lst)) just as we wanted it to be.
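As a quick sanity check of version C. (again a sketch, assuming a Racket-like Scheme with add1; length-C is my name for it):

(define length-C
  ((lambda (h) (h h))
   (lambda (g)
     ((lambda (f)
        (lambda (lst)
          (if (null? lst) 0
              (add1 (f (cdr lst))))))
      (lambda (x) ((g g) x))))))

(length-C '(1 2 3 4)) ; ==> 4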
Lastly, we notice that the (lambda (f) (lambda (lst) ...)) expression in C. doesn't depend on either h or g. So we can take it out, make it an argument, and be left with ... the Y combinator:
( ( (lambda (rec) ; D.
( (lambda (h) (h h))
(lambda (g)
(rec (lambda (x) ((g g) x)))))) ; applicative-order Y combinator
(lambda (f)
(lambda (lst)
(if (null? lst) 0
(add1 (f (cdr lst)))))) )
(list 1 2 3) ) ; ==> 3
So now, calling Y on a function is equivalent to making a recursive definition out of it:
( Y (lambda (f) (lambda (x) .... (f x) .... )) )
=== define f = (lambda (x) .... (f x) .... )
... but using letrec (or named let) is better — more efficient, defining the closure in a self-referential environment frame. The whole Y thing is a theoretical exercise for systems where that is not possible — i.e. where it is not possible to name things, to create bindings with names "pointing" to things, referring to things.
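To make that comparison concrete, here is combinator D. pulled out under a name, next to the plain letrec version (a sketch; the names Y, length-Y, length-rec and len are mine):

(define Y                        ; combinator D., given a name
  (lambda (rec)
    ((lambda (h) (h h))
     (lambda (g)
       (rec (lambda (x) ((g g) x)))))))

(define length-Y
  (Y (lambda (f)
       (lambda (lst)
         (if (null? lst) 0
             (add1 (f (cdr lst))))))))

(define length-rec               ; the letrec version, with a direct self-reference
  (letrec ((len (lambda (lst)
                  (if (null? lst) 0
                      (add1 (len (cdr lst)))))))
    len))

(length-Y   '(1 2 3)) ; ==> 3
(length-rec '(1 2 3)) ; ==> 3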
Incidentally, the ability to point to things is what distinguishes the higher primates from the rest of the animal kingdom / living creatures, or so I hear. :)
To see what happens, use the stepper in DrRacket.
The stepper allows you to see all intermediary steps (and to go back and forth).
Paste the following into DrRacket:
(((lambda (mk-length)
(mk-length mk-length))
(lambda (mk-length)
(lambda (l)
(cond
((null? l) 0 )
(else (add1
((mk-length mk-length)
(cdr l))))))))
'(a b c))
Then choose the teaching language "Intermediate Student with lambda".
Then click the stepper button (the green triangle followed by a bar).
Then make an example for the second function and see what goes wrong.
Related
When a continuation is called as a procedure, does some jump occur?
Are the continuation created by call/cc and the continuation used for making a call to a function in CPS the same concept, and do both involve a jump?
I have seen two use cases of continuations:
When a continuation captured by a call/cc call is called, the program execution flow jumps back to the call/cc call.
When a function in CPS is called with a continuation, does a similar jump also occur? (I am not sure.)
For example,
(letrec ([f (lambda (x) (cons 'a x))]
[g (lambda (x) (cons 'b (f x)))]
[h (lambda (x) (g (cons 'c x)))])
(cons 'd (h '())))  ; ==> (d b a c)
can be written in CPS as
(letrec ([f (lambda (x k) (k (cons 'a x)))]
[g (lambda (x k)
(f x (lambda (v) (k (cons 'b v)))))]
[h (lambda (x k) (g (cons 'c x) k))])
(h '() (lambda (v) (cons 'd v))))
Here (lambda (v) (cons 'd v)) is a continuation passed to h, then to g and f, before it is eventually called. But when it is called, does any jump also occur?
I'm new to Scheme and Lisp in general, and while learning I've stumbled upon a cryptic syntax used in local procedure binding:
(define mock
(lambda (s)
;; this is what I don't understand
(let splice ([l '()] [m (car s)] [r (cdr s)])
(append
(map (lambda (x) (cons m x)) r)
(if (null? r) '()
(splice (cons m l) (car r) (cdr r)))))))
It took me a while to grasp that splice is a locally scoped procedure that takes three arguments. Rewriting this in an ML-esque style seems to produce similar output:
(define mock2
(lambda (s)
;; define `splice` first
(define splice
(lambda (la lb lc)
(append
(map (lambda (x) (cons lb x)) lc)
(if (null? lc) '()
(splice (cons lb la) (car lc) (cdr lc))))))
;; bind `splice` and its arguments together and call it with them
(let ([sp splice] [l '()] [m (car s)] [r (cdr s)])
(sp l m r))))
The second version is a bit longer and looks somewhat more imperative, but defining splice as a normal procedure inside the scope, then binding it in parallel with the arguments (or just passing them in as-is) and calling it, looks saner to me.
My question is: are these two versions interchangeable? If yes, could you help explain the first version's syntax of binding the local variables (l, m, and r) within the splice binding form?
Calling splice is like re-entering a loop, which is what it is there for. A tail call is a goto anyway. It is often named loop, instead of thinking up some special name for it.
"looks saner" is debatable, and actually with Schemers you'll lose this one, because this is a very popular Scheme construct, called "named let". It is usually re-written with letrec btw, if/when one wants to rewrite it, to understand it better. Internal define can be used as well, but then, why not use (define (mock s) ... in the first place.
So, the usual way to re-write this
(define mock ; or: (define (mock s) ...
(lambda (s)
(let splice ([l '()] [m (car s)] [r (cdr s)])
(append
(map (lambda (x) (cons m x)) r)
(if (null? r) '()
(splice (cons m l) (car r) (cdr r)))))))
is this:
(define mock
(lambda (s)
(letrec ([splice (lambda (l m r) ; or: (define (splice l m r) ...
(append
(map (lambda (x) (cons m x)) r)
(if (null? r) '()
(splice (cons m l) (car r) (cdr r)))))])
(splice '() (car s) (cdr s)))))
and writing it in the named let way saves one from having it defined in one place and called in another, potentially far away. A call enters its body from the start anyway, and named let better reflects that.
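And the internal-define variant mentioned above would look like this (a sketch, equivalent to the two versions shown; the expected result below is my own calculation):

(define (mock s)
  (define (splice l m r)
    (append
     (map (lambda (x) (cons m x)) r)
     (if (null? r) '()
         (splice (cons m l) (car r) (cdr r)))))
  (splice '() (car s) (cdr s)))

(mock '(1 2 3)) ; ==> ((1 . 2) (1 . 3) (2 . 3))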
This is pretty self-explanatory. The transformation from one form to the other is purely syntactical, and both can be used interchangeably.
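For reference, the general rewrite rule behind named let (roughly as given in the Scheme reports) is:

(let name ([x a] ...) body ...)
= ((letrec ([name (lambda (x ...) body ...)]) name) a ...)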
I am really new to Scheme and functional programming. I recently came across the Y combinator in lambda calculus, something like this: Y ≡ (λy.(λx.y(xx))(λx.y(xx))). I wanted to implement it in Scheme; I searched a lot but didn't find any implementation that exactly matches the structure given above. Some of the ones I found are given below:
(define Y
(lambda (X)
((lambda (procedure)
(X (lambda (arg) ((procedure procedure) arg))))
(lambda (procedure)
(X (lambda (arg) ((procedure procedure) arg)))))))
and
(define Y
(lambda (r)
((lambda (f) (f f))
(lambda (y)
(r (lambda (x) ((y y) x)))))))
As you can see, they don't match the structure of the Y ≡ (λy.(λx.y(xx))(λx.y(xx))) combinator. How can I implement it in Scheme in exactly the same way?
In a lazy language like Lazy Racket you can use the normal-order version, but not in any of the applicative-order programming languages like Scheme; there it will just go into an infinite loop.
The applicative version of Y is often called a Z combinator:
(define Z
(lambda (f)
((lambda (g) (g g))
(lambda (g)
(f (lambda args (apply (g g) args)))))))
Now the first thing that happens when this is applied is (g g), and since you can always substitute an application with the expansion of its body, the body of the function can be rewritten to:
(define Z
(lambda (f)
((lambda (g)
(f (lambda args (apply (g g) args))))
(lambda (g)
(f (lambda args (apply (g g) args)))))))
I haven't really changed anything; it's just a little more code that does exactly the same thing. Notice this version uses apply to support functions of multiple arguments. Imagine the Ackermann function:
(define ackermann
(lambda (m n)
(cond
((= m 0) (+ n 1))
((= n 0) (ackermann (- m 1) 1))
(else (ackermann (- m 1) (ackermann m (- n 1)))))))
(ackermann 3 6) ; ==> 509
This can be done with Z like this:
((Z (lambda (ackermann)
(lambda (m n)
(cond
((= m 0) (+ n 1))
((= n 0) (ackermann (- m 1) 1))
(else (ackermann (- m 1) (ackermann m (- n 1))))))))
3
6) ; ==> 509
Notice the implementation is exactly the same; the only difference is how the reference to itself is handled.
EDIT
So you are asking how the evaluation gets delayed. Well, the normal-order version looks like this:
(define Y
(lambda (f)
((lambda (g) (g g))
(lambda (g) (f (g g))))))
If you look at how this would be applied to an argument, you'll notice that Y never returns: before it can apply f in (f (g g)), it needs to evaluate (g g), which in turn evaluates (f (g g)), and so on. To salvage that, we don't apply (g g) right away. We know (g g) becomes a function, so we just give f a function that, when applied, will generate the actual function and apply it. If you have a function add1, you can make a wrapper (lambda (x) (add1 x)) that you can use instead, and it will work. In the same manner, (lambda args (apply (g g) args)) is the same as (g g), and you can see that by just applying the substitution rules. The key here is that this effectively stops the computation at each step until it is actually put into use.
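As a small usage sketch (my example, not part of the question), the same Z also handles a one-argument function like factorial; the (lambda args (apply (g g) args)) wrapper is only entered when fact is actually called, which is what keeps the recursion from unrolling eagerly:

(define Z                        ; the Z from above, repeated so this snippet runs on its own
  (lambda (f)
    ((lambda (g) (g g))
     (lambda (g)
       (f (lambda args (apply (g g) args)))))))

(define fact
  (Z (lambda (fact)
       (lambda (n)
         (if (zero? n)
             1
             (* n (fact (- n 1))))))))

(fact 5) ; ==> 120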
On page 66 of "The Seasoned Schemer" it says that (let ...) is an abbreviation for:
(let ((x1 a1) ... (xn an)) b ...) = ((lambda (x1 ... xn) b ...) a1 ... an)
It's used, for example, on page 70:
(define depth*
(lambda (l)
(let ((a (add1 (depth* (car l))))
(d (depth* (cdr l))))
(cond
((null? l) 1)
((atom? (car l)) d)
(else (cond
((> d a) d)
(else a)))))))
But that definition of let above would suggest that (add1 (depth* (car l))) and (depth* (cdr l)) are evaluated and passed into the lambda represented by (lambda (x1 ... xn) b ...). But this would mean that the list l, which could potentially be empty, would be passed to car and cdr before the null check in ((null? l) 1) is ever done.
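To make the problem visible, here is depth* with that let expanded by hand according to the abbreviation quoted above (my own expansion; atom? is the book's usual helper):

(define depth*
  (lambda (l)
    ((lambda (a d)
       (cond
         ((null? l) 1)
         ((atom? (car l)) d)
         (else (cond ((> d a) d)
                     (else a)))))
     (add1 (depth* (car l)))   ; both arguments are evaluated first,
     (depth* (cdr l)))))       ; even when l is '()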
You're right in stating that (car l) and (cdr l) will get executed before testing whether l is null, therefore raising an error if l is indeed null. Keep reading the book: in the following two pages this is explained, and a correct version of depth* is shown.
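For reference, here is a sketch of one such fix (not quoting the book verbatim): do the null/atom checks first, and only compute the recursive calls in the branch that needs both of them. atom? is the book's usual helper.

(define (atom? x)
  (and (not (pair? x)) (not (null? x))))

(define depth*
  (lambda (l)
    (cond
      ((null? l) 1)
      ((atom? (car l)) (depth* (cdr l)))
      (else
       (let ((a (add1 (depth* (car l))))
             (d (depth* (cdr l))))
         (if (> d a) d a))))))

(depth* '(1 (2 (3)) 4)) ; ==> 3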
The let syntactic keyword accepts the following form (ignore 'named-let'):
(define-syntax let
(syntax-rules ()
((let ((identifier expression) ...) body ...)
;; ...)))
At the point where let is used, each of the expression ... forms is evaluated. The expressions are evaluated in an unspecified order.
In your case, the expressions for a and d involving depth* will be evaluated before the body. Thus, as you've concluded, l could be '() when car and cdr are invoked.
I have been reading The Seasoned Schemer and I came across this definition of the length function:
(define length
(let ((h (lambda (l) 0)))
(set! h (L (lambda (arg) (h arg))))
h))
Later they say:
What is the value of (L (lambda (arg) (h arg)))? It is the function
(lambda (l)
(cond ((null? l) 0)
(else (add1 ((lambda (arg) (h arg)) (cdr l))))))
I don't think I comprehend this fully. I guess we are supposed to define L ourselves as an exercise. I wrote a definition of L within the definition of length using letrec. Here is what I wrote:
(define length
(let ((h (lambda (l) 0)))
(letrec ((L
(lambda (f)
(letrec ((LR
(lambda (l)
(cond ((null? l) 0)
(else
(+ 1 (LR (cdr l))))))))
LR))))
(set! h (L (lambda (arg) (h arg))))
h)))
So, L takes a function as its argument and returns another function that takes a list as its argument and performs a recursion on that list. Am I correct, or hopelessly wrong in my interpretation? Anyway, the definition works:
(length (list 1 2 3 4)) => 4
In "The Seasoned Schemer" length is initially defined like this:
(define length
(let ((h (lambda (l) 0)))
(set! h (lambda (l)
(if (null? l)
0
(add1 (h (cdr l))))))
h))
Later on in the book, the previous result is generalized and length is redefined in terms of Y! (the applicative-order, imperative Y combinator) like this:
(define Y!
(lambda (L)
(let ((h (lambda (l) 0)))
(set! h (L (lambda (arg) (h arg))))
h)))
(define L
(lambda (length)
(lambda (l)
(if (null? l)
0
(add1 (length (cdr l)))))))
(define length (Y! L))
The first definition of length shown in the question is just an intermediate step - with the L procedure exactly as defined above, you're not supposed to redefine it. The aim of this part of the chapter is to reach the second definition shown in my answer.
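A quick check of the Y! version (a small sketch, assuming a Scheme with add1, such as Racket):

((Y! L) '(1 2 3 4)) ; ==> 4
(length '(1 2 3 4)) ; ==> 4, the same thing, since length is (Y! L)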