From the R5RS standard:
Values might be defined as follows:
(define (values . things)
  (call-with-current-continuation
    (lambda (cont) (apply cont things))))
My first interpretation of this was that an expression like (+ (values 1 2)) is equivalent to (apply + '(1 2)) and would yield the result 3. But, according to my tests, this interpretation is not correct. Here is my interpretation of the code above: values is a function taking any number of arguments, bundled into a list called things. Then, the current continuation (the place where values is used) is called with the list things "unbundled".
What am I missing? The example above (+ (values 1 2)) gives an error or 1 depending on the interpreter I used.
See, when you type
(+ (values 1 2))
the continuation of the call to values is the position of a single argument to +. So the result is either treated as 1 (the first element of the list, the first value produced by the procedure), or it is an error. R5RS says in this regard:
Except for continuations created by the call-with-values procedure, all continuations take exactly one value. The effect of passing no value or more than one value to continuations that were not created by call-with-values is unspecified.
On the other hand, call-with-values would correctly bind your list's elements to its consumer argument's formal arguments:
Calls its producer argument with no values and a continuation that, when passed some values, calls the consumer procedure with those values as arguments.
In order to understand what this definition of values means, you need to also understand the definition of call-with-current-continuation, which it is defined in terms of. And helpfully, the documentation for values mentions call-with-values, as an example of how to use the result of values.
So, you could use (values 1 2) in a context like:
(call-with-values (lambda () (values 1 2))
  (lambda (x y) (+ x y)))
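If your implementation supports it, let-values (standard in R7RS, and provided by SRFI 11 in many other Schemes) offers a more compact way to bind the produced values; a minimal sketch:
(let-values (((x y) (values 1 2)))
  (+ x y))   ; => 3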
(+ (values 1) (values 2)) returns 3. What is (+ 1 (values 2 3)) supposed to return? In R7RS-small, is the second value in a (values ...) automatically ignored when only one value is needed? In Guile 3.0.7, (+ 1 (values 2 3)) returns 3, but it gives an error in MIT Scheme 11.2 and Chibi 0.10.0.
(+ (values 1) (values 2)) is fine, and the result should be 3. But it is not the case that unneeded values are automatically ignored in a values expression; in fact the behavior of (+ 1 (values 2 3)) is unspecified in both R6RS and R7RS Scheme.
From the entry for values from R6RS 11.15 [emphasis mine]:
Delivers all of its arguments to its continuation....
The continuations of all non-final expressions within a sequence of
expressions, such as in lambda, begin, let, let*, letrec,
letrec*, let-values, let*-values, case, and cond forms,
usually take an arbitrary number of values.
Except for these and the continuations created by call-with-values,
let-values, and let*-values, continuations implicitly accepting a
single value, such as the continuations of <operator> and
<operand>s of procedure calls or the <test> expressions in
conditionals, take exactly one value. The effect of passing an
inappropriate number of values to such a continuation is undefined.
R7RS has similar language in 6.10:
The effect of passing no values or more than one value to continuations that were not created in one of these ways is unspecified.
The reason that (+ (values 1) (values 2)) is ok is that continuations of operands in procedure calls take exactly one value. (values 1) and (values 2) each provide exactly one value for their respective continuations.
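A minimal illustration of the distinction (the second form is shown commented out because its behavior varies by implementation):
;; Each operand's continuation accepts exactly one value, and each
;; (values ...) here produces exactly one value, so this is well-defined:
(+ (values 1) (values 2))   ; => 3

;; Here the operand's continuation is handed two values; per R6RS/R7RS the
;; effect is unspecified. Guile returns 3, MIT Scheme and Chibi signal errors:
;; (+ 1 (values 2 3))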
call-with-values is meant for connecting producers of multiple values with procedures which consume those values. The first argument to call-with-values should be a procedure which takes no arguments and produces values to be consumed. The second argument should be a procedure which accepts the number of values produced by the first procedure:
> (call-with-values (lambda () (values 2 3))
    (lambda (x y) (+ 1 x y)))
6
Note that the above use of call-with-values requires the consumer to accept the number of values produced by the producer since Scheme requires that implementations raise an error when a procedure does not accept the number of arguments passed to it:
> (call-with-values (lambda () (values 2 3 4))
    (lambda (x y) (+ 1 x y)))
Exception: incorrect argument count in call (call-with-values (lambda () (values 2 3 4)) (lambda (x y) (+ 1 x y)))
If it is desirable for extra arguments to be ignored, the consumer must be designed towards that goal. Here the consumer is designed to ignore all but its first argument:
> (call-with-values (lambda () (values 2 3 4))
    (lambda (x . xs) (+ 1 x)))
3
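Conversely, a consumer declared with only a rest parameter accepts however many values the producer delivers; a small sketch in the same style:
> (call-with-values (lambda () (values 2 3 4))
    (lambda xs (apply + 1 xs)))   ; xs collects all produced values
10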
(define while
  (lambda (rounds)
    (if (> rounds 0)
        ((define compMove (random 3)) (display compMove))
        (while (- rounds 1)))))
Above is a program written in Scheme; however, I get an error saying "invalid context for definition". This is around (define compMove (random 3)).
Your code is not Scheme.
According to the report, a define is only allowed at the top level and at the beginning of the body of a lambda, let, let*, let-values, let*-values, letrec, or letrec* expression.
In addition, you have wrapped two expressions in parentheses as if that would make a block. It doesn't. What it will actually do is evaluate the first expression as an operator and then call its result as a procedure. Since define is special, it fails; with any other form you would get an error like "Application: not a procedure".
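For reference, here is one way the loop might be written instead; a minimal sketch, assuming the intent is to display one random move per round (random as used in the original, when from R7RS):
(define (while rounds)
  (when (> rounds 0)
    (let ((compMove (random 3)))   ; bind with let instead of a misplaced define
      (display compMove)
      (newline))
    (while (- rounds 1))))         ; recurse until rounds reaches 0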
Footnote #28 of SICP says the following:
Embedded definitions must come first in a procedure body. The management is not responsible for the consequences of running programs that intertwine definition and use.
What exactly does this mean? I understand:
"definitions" to be referring to both procedure creations and value assignments.
"a procedure body" to be as defined in section 1.1.4. It's the code that comes after the list of parameters in a procedure's definition.
"Embedded definitions must come first in a procedure body" to mean 'any definitions that are made and used in a procedure's body must come before anything else'.
"intertwine definition and use" to mean 'calling a procedure or assigned value before it has been defined;'
However, this understanding seems to contradict the answers to this question, which has answers that I can summarise as 'the error that your quote is referring to is about using definitions at the start of a procedure's body that rely on definitions that are also at the start of that body'. This has me triply confused:
That interpretation clearly contradicts what I've said above, but seems to have strong evidence - compiler rules - behind it.
SICP seems happy to put definition in a body with other definitions that use them. Just look at the sqrt procedure just above the footnote!
At a glance, it looks to me that the linked question's author's real error was treating num-prod like a value in their definition of num rather than as a procedure. However, the author clearly got it working, so I'm probably wrong.
So what's really going on? Where is the misunderstanding?
In a given procedure's definition / code,
"internal definitions" are forms starting with define.
"a procedure body" is all the other forms after the forms which start with define.
"embedded definitions must come first in a procedure body" means all the internal define forms must go first, then all the other internal forms. Once a non-define internal form appears, there can not appear an internal define form after that.
"no intertwined use" means no use of any name before it's defined. Imagine all internal define forms are gathered into one equivalent letrec, and follow its rules.
Namely, we have,
(define (proc args ...)
  ;; internal, or "embedded", definitions
  (define a1 ...init1...)
  (define a2 ...init2...)
  ......
  (define an ...initn...)
  ;; procedure body
  exp1 exp2 .... )
Any ai can be used in any of the initj expressions, but only inside a lambda expression. (*) Otherwise it would refer to the value of ai while aj is being defined, and that is forbidden, because all the ai names are considered not yet defined while any of the initj expressions is being evaluated.
(*) Remember that (define (foo x) ...x...) is the same as (define foo (lambda (x) ...x...)). That's why the definitions in that sqrt procedure in the book you mention are OK -- they are all lambda expressions, and any name's use inside a lambda expression will only actually refer to that name's value when the lambda expression's value -- a lambda function -- will be called, not when that lambda expression is being evaluated, producing that lambda function which is its value.
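A small illustration of that point, with hypothetical names f and g: referring to a name inside a lambda is fine even before that name is defined, because the reference is only resolved when the resulting function is called.
(define (f) (g 1))       ; g is mentioned here, but not called yet
(define (g x) (+ x 41))
(f)                      ; => 42 ; by now g is defined, so the call succeeds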
The book is a bit vague at first with the precise semantics of its language but in Scheme the above code is equivalent to
(define proc
  (lambda (args ...)
    ;; internal, or "embedded", definitions
    (letrec ((a1 ...init1...)
             (a2 ...init2...)
             ......
             (an ...initn...))
      ;; procedure body
      exp1 exp2 ....)))
For example,
(define (my-proc x)
  (define (foo) a)
  (define a x)
  ;; my-proc's body
  (foo))

;; or, equivalently,

(define my-proc
  (lambda (x)
    (letrec ((foo (lambda () a))
             (a x))
      ;; letrec's body
      (foo))))
This first evaluates the lambda expression (lambda () a) and binds the name foo to the result, a function; that function will refer to the value of a when (foo) is called, so it is OK that there is a reference to a inside that lambda expression. When that lambda expression is evaluated, no value of a is needed yet; only a reference to its future value, under the name a, is present. That is, the value a will have after all the names in that letrec are initialized and the body of the letrec is entered; or, in other words, after all the internal defines are completed and the body proper of the procedure my-proc is entered.
So we see that foo is defined, but is not used during the initializations; a is defined but is not used during the initializations; thus all is well. But if we had e.g.
(define (my-proc x)
  (define (foo) x)   ; `foo` is defined as a function
  (define a (foo))   ; `foo` is used by being called as a function
  a)
then here foo is both defined and used during the initializations of the internal, or "embedded", defines; this is forbidden in Scheme. This is what the book is warning about: the internal definitions are only allowed to define things, but their use should be delayed until later, when we're done with the internal defines and enter the full procedure's body.
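A sketch of how that last example could be made legal, by wrapping the use of foo in a lambda so that it is not called until the body proper (the names are just the ones from the example above):
(define (my-proc x)
  (define (foo) x)    ; foo is defined as a function
  (define (a) (foo))  ; a is now a thunk, so foo is not called during the defines
  (a))                ; foo is only actually used here, in the body proper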
This is subtle, and as the footnote and the question you reference imply, the subtleties can vary depending on a particular language's implementation.
These issues will be covered in far more detail later in the book (Chapters 3 and 4) and, generally, the text avoids using internal definitions so that these issues can be avoided until they are examined in detail.
A key difference between the code above the footnote:
(define (sqrt x)
  (define (good-enough? guess)
    (< (abs (- (square guess) x)) 0.001))
  (define (improve guess)
    (average guess (/ x guess)))
  (define (sqrt-iter guess)
    (if (good-enough? guess)
        guess
        (sqrt-iter (improve guess))))
  (sqrt-iter 1.0))
and the code in the other question:
(define (pi-approx n)
  (define (square x) (* x x))
  (define (num-prod ind) (* (* 2 ind) (* 2 (+ ind 1)))) ; calculates the product in the numerator for a certain term
  (define (denom-prod ind) (square (+ (* ind 2) 1)))    ; denominator product at index ind
  (define num (product num-prod 1 inc n))
  (define denom (product denom-prod 1 inc n))
is that all the definitions in the former are procedure definitions, whereas num and denom are value definitions. The body of a procedure is not evaluated until that procedure is called, but a value definition is evaluated when the value is assigned.
With a value definition:
(define sum (add 2 2))
(add 2 2) will be evaluated when the definition is evaluated, so add must already be defined at that point. But with a procedure definition:
(define (sum n m) (add n m))
a procedure object will be assigned to sum, but the procedure body is not yet evaluated, so add does not need to be defined when sum is defined; it must be defined by the time sum is called:
(sum 2 2)
As I say, there is a lot of subtlety and a lot of variation, so I'm not sure if the following is always true for every variant of Scheme, but within 'SICP Scheme' you can say:
Valid (order of evaluation of defines not significant):
;procedure body
(define (sum m n) (add m n))
(define (add m n) (+ m n))
(sum 2 2)
Also valid:
;procedure body
(define (sum) (add 2 2))
(define (add m n) (+ m n))
(sum)
Usually invalid (order of evaluation of defines is significant):
;procedure body
(define sum (add 2 2))
(define (add m n) (+ m n))
Whether the following is valid depends on the implementation:
;procedure body
(define (add m n) (+ m n))
(define sum (add 2 2))
And finally, an example of intertwining definition and use; whether this works also depends on the implementation. IIRC, this would work with the Scheme interpreter described in Chapter 4 of the book if the scanning out of definitions has been implemented.
;procedure body
(sum)
(define (sum) (add 2 2))
(define (add m n) (+ m n))
It is complicated and subtle, but the key points are:
value definitions are evaluated differently from procedure definitions,
behaviour inside blocks may be different from outside blocks,
there is variation between scheme implementations,
the book is designed so you don't have to worry much about this until Chapter 3,
the book will cover this in detail in Chapter 4.
You have discovered one of the difficulties of Scheme, and of Lisp in general. Due to this kind of chicken-and-egg issue there is no single Lisp; instead, lots of Lisps appeared.
If the binding forms do not follow letrec logic (in R5RS) or letrec* logic (in R6RS), the semantics is undefined, which means everything depends on the choices of the implementor of your Scheme.
See the paper Fixing Letrec: A Faithful Yet Efficient Implementation of Scheme’s Recursive Binding Construct.
Also, you can read the discussions on the Scheme mailing list from 1986, when there was no general consensus among the different implementors of Scheme.
Also, two versions of Scheme were developed at MIT, the student version and the researchers' development version, and they behaved differently concerning the order of define forms.
I'm trying to understand the Scheme code of the make-counter procedure. It's a higher-order procedure (a procedure that outputs another procedure) and I'm stuck on it.
(define make-counter
  (lambda (n)
    (lambda ()
      (set! n (+ n 1))
      n)))
(define ca (make-counter 0))
(ca)
(ca)
This outputs 1 and 2 respectively as expected. Why do we need 2 nested procedures here? What are their functions individually?
I'd appreciate it if someone could explain it in detail. Thanks in advance.
Indented properly, this is:
(define make-counter
  (lambda (n)
    (lambda ()
      (set! n (+ n 1))
      n)))
By the way, you can use a different syntax:
(define (make-counter n)
  (lambda ()
    (set! n (+ n 1))
    n))
make-counter is a function that accepts a number n and returns an object called a closure, which acts like a function but carries state. Different invocations of make-counter will produce different closures, even when given the same n as an argument. A closure can be called using the ordinary function-call syntax, as you saw in your experiments.
When you call the closure, the code contained within it is executed. In your example, the closure accepts zero arguments and mutates the variable named n. Again, the binding from n to a value is local to the closure and different for every counter instance, but inside a particular counter, n always refers to the same memory location.
A call to set! (a special form, not an ordinary function) changes what n evaluates to, replacing the previous value with (+ n 1) and thereby incrementing the local counter variable.
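To see that each closure carries its own n, you can make two counters and advance them independently; a small sketch continuing the example above, with a hypothetical second counter cb:
(define ca (make-counter 0))
(define cb (make-counter 100))

(ca)  ; => 1
(ca)  ; => 2
(cb)  ; => 101  ; cb's n lives in a separate location, unaffected by ca
(ca)  ; => 3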
I have been told that the following expression is intended to evaluate to 0, but that many implementations of Scheme evaluate it as 1:
(let ((cont #f))
  (letrec ((x (call-with-current-continuation (lambda (c) (set! cont c) 0)))
           (y (call-with-current-continuation (lambda (c) (set! cont c) 0))))
    (if cont
        (let ((c cont))
          (set! cont #f)
          (set! x 1)
          (set! y 1)
          (c 0))
        (+ x y))))
I must admit that I cannot tell where even to start with this. I understand the basics of continuations and call/cc, but can I get a detailed explanation of this expression?
This is an interesting fragment. I came across this question because I was searching for discussions of the exact differences between letrec and letrec*, and how these varied between different versions of the Scheme reports, and different Scheme implementations. While experimenting with this fragment, I did some research and will report the results here.
If you mentally walk through the execution of this fragment, two questions should be salient to you:
Q1. In what order are the initialization clauses for x and y evaluated?
Q2. Are all the initialization clauses evaluated first, and their results cached, and then all the assignments to x and y performed afterwards? Or are some of the assignments made before some of the initialization clauses have been evaluated?
For letrec, the Scheme reports say that the answer to Q1 is "unspecified." Most implementations will in fact evaluate the clauses in left-to-right order; but you shouldn't rely on that behavior.
Scheme R6RS and R7RS introduce a new binding construction letrec* that does specify left-to-right evaluation order. It also differs in some other ways from letrec, as we'll see below.
Returning to letrec, the Scheme reports going back at least as far as R5RS seem to specify that the answer to Q2 is "evaluate all the initialization clauses before making any of the assignments." I say "seem to specify" because the language isn't as explicit about this being required as it might be. As a matter of fact, many Scheme implementations don't conform to this requirement. And this is what's responsible for the difference between the "intended" and "observed" behavior wrt your fragment.
Let's walk through your fragment, with Q2 in mind. First we set aside two "locations" (reference cells) for x and y to be bound to. Then we evaluate one of the initialization clauses. Let's say it's the clause for x, though as I said, with letrec it could be either one. We save the continuation of this evaluation into cont. The result of this evaluation is 0. Now, depending on the answer to Q2, we either assign that result immediately to x or we cache it to make the assignment later. Next we evaluate the other initialization clause. We save its continuation into cont, overwriting the previous one. The result of this evaluation is 0. Now all of the initialization clauses have been evaluated. Depending on the answer to Q2, we might at this point assign the cached result 0 to x; or the assignment to x may have already occurred. In either case, the assignment to y takes place now.
Then we begin evaluating the main body of the (letrec (...) ...) expression (for the first time). There is a continuation stored in cont, so we retrieve it into c, then clear cont and set! each of x and y to 1. Then we invoke the retrieved continuation with the value 0. This goes back to the last-evaluated initialization clause---which we've assumed to be y's. The argument we supply to the continuation is then used in place of the (call-with-current-continuation (lambda (c) (set! cont c) 0)), and will be assigned to y. Depending on the answer to Q2, the assignment of 0 to x may or may not take place (again) at this point.
Then we begin evaluating the main body of the (letrec (...) ...) expression (for the second time). Now cont is #f, so we get (+ x y). Which will be either (+ 1 0) or (+ 0 0), depending on whether 0 was re-assigned to x when we invoked the saved continuation.
You can trace this behavior by decorating your fragment with some display calls, for example like this:
(let ((cont #f))
  (letrec ((x (begin (display (list 'xinit x y cont))
                     (call-with-current-continuation (lambda (c) (set! cont c) 0))))
           (y (begin (display (list 'yinit x y cont))
                     (call-with-current-continuation (lambda (c) (set! cont c) 0)))))
    (display (list 'body x y cont))
    (if cont
        (let ((c cont))
          (set! cont #f)
          (set! x 1)
          (set! y 1)
          (c 'new))
        (cons x y))))
I also replaced (+ x y) with (cons x y), and invoked the continuation with the argument 'new instead of 0.
I ran that fragment in Racket 5.2 using a couple of different "language modes", and also in Chicken 4.7. Here are the results. Both implementations evaluated the x init clause first and the y clause second, though as I said this behavior is unspecified.
Racket with #lang r5rs and #lang r6rs conforms to the spec for Q2, and so we get the "intended" result of re-assigning 0 to the other variable when the continuation is invoked. (When experimenting with r6rs, I needed to wrap the final result in a display to see what it would be.)
Here is the trace output:
(xinit #<undefined> #<undefined> #f)
(yinit #<undefined> #<undefined> #<continuation>)
(body 0 0 #<continuation>)
(body 0 new #f)
(0 . new)
Racket with #lang racket and Chicken don't conform to that. Instead, after each initialization clause is evaluated, its result gets assigned to the corresponding variable. So when the continuation is invoked, it only ends up re-assigning a value to the last-bound variable (here, y).
Here is the trace output, with some added comments:
(xinit #<undefined> #<undefined> #f)
(yinit 0 #<undefined> #<continuation>) ; note that x has already been assigned
(body 0 0 #<continuation>)
(body 1 new #f) ; so now x is not re-assigned
(1 . new)
Now, as to what the Scheme reports really do require. Here is the relevant section from R5RS:
library syntax: (letrec <bindings> <body>)
Syntax: <Bindings> should have the form
((<variable1> <init1>) ...),
and <body> should be a sequence of one or more expressions. It is an error
for a <variable> to appear more than once in the list of variables being bound.
Semantics: The <variable>s are bound to fresh locations holding undefined
values, the <init>s are evaluated in the resulting environment (in some
unspecified order), each <variable> is assigned to the result of the
corresponding <init>, the <body> is evaluated in the resulting environment, and
the value(s) of the last expression in <body> is(are) returned. Each binding of
a <variable> has the entire letrec expression as its region, making it possible
to define mutually recursive procedures.
(letrec ((even?
          (lambda (n)
            (if (zero? n)
                #t
                (odd? (- n 1)))))
         (odd?
          (lambda (n)
            (if (zero? n)
                #f
                (even? (- n 1))))))
  (even? 88))
===> #t
One restriction on letrec is very important: it must be possible to evaluate
each <init> without assigning or referring to the value of any <variable>. If
this restriction is violated, then it is an error. The restriction is necessary
because Scheme passes arguments by value rather than by name. In the most
common uses of letrec, all the <init>s are lambda expressions and the
restriction is satisfied automatically.
The first sentence of the "Semantics" sections sounds like it requires all the assignments to happen after all the initialization clauses have been evaluated; though, as I said earlier, this isn't as explicit as it might be.
In R6RS and R7RS, the only substantial changes to this part of the specification is the addition of a requirement that:
the continuation of each <init> should not be invoked more than once.
R6RS and R7RS also add another binding construction, though: letrec*. This differs from letrec in two ways. First, it does specify a left-to-right evaluation order. Correlatively, the "restriction" noted above can be relaxed somewhat. It's now okay to reference the values of variables that have already been assigned their initial values:
It must be possible to evaluate each <init> without assigning or
referring to the value of the corresponding <variable> or the
<variable> of any of the bindings that follow it in <bindings>.
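A small sketch of what this relaxed restriction permits (with plain letrec the report says such a reference is an error, even though many implementations accept it):
(letrec* ((a 1)
          (b (+ a 1)))  ; legal in letrec*: a's value is already in place
  b)                    ; => 2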
The second difference is with respect to our Q2. With letrec*, the specification now requires that the assignments take place after each initialization clause is evaluated. Here is the first paragraph of the "Semantics" from R7RS (draft 6):
Semantics: The <variable>s are bound to fresh locations, each
<variable> is assigned in left-to-right order to the result of evaluating
the corresponding <init>, the <body> is evaluated in the resulting
environment, and the values of the last expression in <body> are
returned. Despite the left-to-right evaluation and assignment order, each
binding of a <variable> has the entire letrec* expression as its region,
making it possible to define mutually recursive procedures.
So Chicken, and Racket using #lang racket---and many other implementations---seem in fact to implement their letrecs as letrec*s.
The reason this evaluates to 1 is the (set! x 1). If instead of 1 you set x to 0, the result will be zero. This is because the variable cont, which stores the continuation, actually ends up holding the continuation for y and not for x, since it is overwritten with y's continuation after x's.