apply, apply-primitive-procedure and apply-in-underlying-scheme - scheme

apply is defined in SICP section 4.1.1, "The Core of the Evaluator", as:
(define (apply procedure arguments)
  (cond ((primitive-procedure? procedure)
         (apply-primitive-procedure procedure arguments))
        ((compound-procedure? procedure)
         .....)
        (else
         ....)))
I looked up the definition of apply-primitive-procedure in section 4.1.4, "Running the Evaluator as a Program":
(define (apply-primitive-procedure proc args)
  (apply-in-underlying-scheme
   (primitive-implementation proc) args))
So apply-primitive-procedure is implemented in terms of apply-in-underlying-scheme. However, the footnote says:
Apply-in-underlying-scheme is the apply procedure we have used in earlier chapters. The metacircular evaluator's apply procedure (4.1.1) models the working of this primitive. Having two different things called apply leads to a technical problem in running the metacircular evaluator, because defining the metacircular evaluator's apply will mask the definition of the primitive. One way around this is to rename the metacircular apply to avoid conflict with the name of the primitive procedure. We have assumed instead that we have saved a reference to the underlying apply by doing

(define apply-in-underlying-scheme apply)

before defining the metacircular apply. This allows us to access the original version of apply under a different name.
So the footnote says that apply-in-underlying-scheme is the apply used in earlier chapters, and that the apply of 4.1.1 models it.
In summary:
,-> apply -> apply-primitive-procedure -> apply-in-underlying-scheme --.
'----------------------------------------------------------------------'
I guess this is not supposed to be a recursion.
What's wrong with my understanding?

apply handles function application for everything that is not a special form (the special forms are handled in eval). apply is a recursive function, but one that always terminates.
apply is subdivided into two cases of procedure application:
-- a procedure internal to the system that implements the language.
This is where the transition is made between the target language and the language used to implement the target language (the source language).
Here you need to evaluate each argument (through eval) and convert the resulting object into a similar object in the source language before calling the application function of the source language. For some arguments the eval->apply recursion may happen.
-- a combination created in the target language by using the means of combination that the target language provides.
In this case you also need to recursively call eval for each argument, and then use function application in the target language. Here you do not need to convert the result of eval into an object in the source language.
So for combinations there is recursion inside apply as well, but it is a kind of recursion that terminates (function application is a primitive-recursive function), because each time you evaluate a smaller piece (the operator and the operands versus the full initial expression).
I think you did not notice that apply is a primitive-recursive operator, and you were afraid it would never terminate.
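Putting the SICP definitions side by side makes the non-circularity visible (the code below is as in sections 4.1.1 and 4.1.4; the comments are mine):

(define apply-in-underlying-scheme apply) ; save the host Scheme's apply first

(define (apply procedure arguments)       ; the metacircular apply
  (cond ((primitive-procedure? procedure)
         ;; delegate to the host Scheme -- no call back into this apply
         (apply-primitive-procedure procedure arguments))
        ((compound-procedure? procedure)
         ;; re-enters eval on the (smaller) procedure body -- this is the
         ;; eval/apply recursion, and it terminates
         (eval-sequence
          (procedure-body procedure)
          (extend-environment (procedure-parameters procedure)
                              arguments
                              (procedure-environment procedure))))
        (else
         (error "Unknown procedure type -- APPLY" procedure))))

(define (apply-primitive-procedure proc args)
  (apply-in-underlying-scheme
   (primitive-implementation proc) args))

apply-in-underlying-scheme is the host's apply, not the apply defined above, so the arrow leading back from apply-in-underlying-scheme to apply in the diagram does not exist.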

Related

Is the function argument to `call/cc` written in CPS?

The parameter of call/cc is a procedure taking as its argument a continuation. Is the procedure written in CPS?
No.
CPS-styled functions expect other normal functions as their argument(s), and may call them in tail position. These functions are confusingly called "continuations" in Scheme vernacular. I prefer "contingencies", to disambiguate.
The argument function to call/cc expects an actual undelimited continuation as its argument. That actual continuation is not a function. Calling it with a value returns that value into that continuation's return context which is thus saved along with the continuation -- a feat unheard of w.r.t. simple functions.
A tail-called function returns its result into its calling function's caller's context.
A continuation which is called returns the supplied value to its creating call/cc call's context. It is therefore not a function, and a function using it is not written in CPS.
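A tiny illustration of that last point (my own example, not from the original answer): calling the captured continuation returns the value straight into the saved call/cc context, abandoning the rest of the capturing function.

(+ 1 (call/cc
      (lambda (k)
        (k 10)    ; returns 10 into the saved (+ 1 _) context immediately
        999)))    ; never reached
;; => 11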
Firstly, what is CPS? Continuation-Passing Style is a way of compiling a program which relies on continuations (i.e. uses the call/cc operator) into a target language which doesn't support continuations, but does support lexical closures and tail calls (or some facsimile of tail calls, like the ability to roll back the stack when it gets too deep with actual call frames).
A program transformed with CPS is itself not written in the continuation-passing style. It doesn't have to be; it has a call/cc operator provided by the CPS translator, which gives it access to the current continuation wherever it is needed.
Because CPS is mostly a source-to-source transformation, it is illustrated and taught using hand-written, explicit CPS. Quite often, the Scheme language is used for this. The Scheme language already has call/cc, but when experimenting with explicit, hand-written CPS, we have to pretend that call/cc doesn't exist. Under the CPS paradigm, we can, of course, provide an operator called my-call/cc which is built on our CPS, and has nothing to do with the underlying Scheme's call/cc.
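A minimal sketch of such an operator, under the convention used below that every CPS procedure receives its continuation as an extra argument (the names here are mine):

;; my-call/cc for hand-written CPS code: f is a CPS procedure expecting a
;; reified continuation and its own continuation. The reified continuation
;; ignores whatever continuation it is called with and resumes the captured
;; k instead.
(define (my-call/cc f k)
  (f (lambda (v ignored-k) (k v))
     k))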
In a CPS-compiled language implementation, every procedure has a continuation argument (with the necessary exception of procedures in the host language's library). The functional argument of call/cc, the function which receives the continuation, is no exception. Being a procedure in the CPS world, it has to have a continuation parameter, so that it is compatible with procedure calls, which pass that argument.
The argument procedure of call/cc can in fact use that continuation argument, and the very first example in the Wikipedia article demonstrates this. Here is a copy of that example, in which I renamed the return parameter to c, to reduce confusion:
(define (f c) ;; function used for capturing the continuation
  (c 2)       ;; invoke the continuation
  3)          ;; if the continuation returns, return 3

(display (f (lambda (x) x))) ; displays 3
(display (call/cc f))        ; displays 2
The procedure f's c argument doesn't have to be a continuation; in the first call it's just the dummy function (lambda (x) x). f calls it, and that function returns, and so then control falls through to the 3.
Under CPS, f has a hidden argument, which we can reveal with hand-written CPS:
(define (f c k)  ; k is the hidden continuation added by the CPS transform
  (c 2 k)        ; call the explicit continuation argument
  (k 3 k))       ; "return" 3 by invoking f's own hidden continuation
When 3 is being returned, it's because the hidden continuation is invoked. Since the program can do that, f has to have one.
Note that c and k are different continuations! Maybe that's where the confusion comes in. The c continuation is the caller's current one that is passed by call/cc, and is part of the explicit semantics of call/cc. The k is the hidden one added by the CPS transformer. It's f's own continuation. Every function has one in the CPS-transformed world. Under the CPS paradigm, if f wishes to return, it calls k (which is why I renamed return to c). If f wishes to continue the suspended call/cc, it calls c.
By the way, under automatic CPS, k is not going to be literally called k, because that wouldn't be hygienic: programs can freely bind the k symbol. It will have to be a machine-generated symbol (gensym), or receive some other form of hygienic treatment.
The only functions that have to be treated specially under CPS, so that they don't have continuation arguments, are library functions in the host language/VM that are to be made available to the CPS-translated language. When the CPS-translated language calls (display obj) to print an object, that display function either has to be a renamed wrapper that accepts the continuation argument (and then drops it and calls the real display function without it), or else the call to display has to be handled specially by the CPS translator so that the continuation argument is omitted.
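A sketch of the first option, the wrapper (cps-display is a made-up name, not part of any library):

;; Accept the continuation argument demanded by the CPS calling convention,
;; call the host's display without it, then hand the result to the
;; continuation.
(define (cps-display obj k)
  (k (display obj)))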
Lastly, why can CPS implement continuations in a language/VM that doesn't natively provide them? The reason is that all function calls in CPS-transformed programs are tail calls, and so never return. The tricky part in implementing continuations is capturing the entire call stack so that when a continuation is resumed, it can return. Such a feature can only be added at the language implementation level. In the 1970s, InterLisp used "spaghetti stacks" to implement this: stack frames are garbage-collected heap objects, pointing to parent frames. But what if functions don't do such a thing as returning? Then the need to add a spaghetti stack to the implementation goes away.
Note that the spaghetti stack has not exactly gone away: we have something equivalent under CPS, namely chains of captured lexical environments. A continuation is a lambda which captures the surrounding procedure's k parameter, which is itself a lambda that has captured its parent's k parameter, and so on: it's a chain of environments, similar to stack frames, in which the hidden k parameters are frame pointers. But the host language already gave us lexically capturing lambdas; we have just leveraged these lambdas to de facto represent the continuation spaghetti stack, and so we didn't have to go down to the implementation level to do anything.
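To make that chain of closures concrete, here is a small hand-written CPS example (my own illustration): each recursive call closes over the current k, so the pending continuations form a linked chain of lambdas playing the role of stack frames.

(define (fact-cps n k)
  (if (= n 0)
      (k 1)
      (fact-cps (- n 1)
                (lambda (r) (k (* n r)))))) ; closes over this call's n and k

(fact-cps 5 (lambda (r) (display r))) ; displays 120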

How to determine if a variable exists in Chicken Scheme?

Is there a way in Chicken Scheme to determine at run-time if a variable is currently defined?
(let ((var 1))
  (print (is-defined? var)))  ; #t

(print (is-defined? var))     ; #f
EDIT: XY problem.
I'm writing a macro that generates code. This generated code must call the macro in mutual recursion - having the macro simply call itself won't work. When the macro is recursively called, I need it to behave differently than when it is called initially. I would use a nested function, but uh....it's a macro.
Rough example:
(defmacro m (nested)
  (if nested
      `(print "is nested")
      `(m #t)))
(yes, I know scheme doesn't use defmacro, but I'm coming from Common Lisp. Also I can't seem to put backquotes in here without it all going to hell.)
I don't want the INITIAL call of the macro to take an extra argument that only has meaning when called recursively. I want it to know by some other means.
Can I get the generated code to call a macro that is nested within the first macro and doesn't exist at the call site, maybe? For example, generating code that calls (,other-macro) instead of (macro)?
But that shouldn't work, because a macro isn't a first-class object like a function is...
When you write recursive macros, I get the impression that you have a macro expansion (m a b ...) that turns into (m-helper a (b ...)), which might turn into (let (a ...) (m b ...)). That is not directly recursive, since you are turning code into code that just happens to contain a macro.
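For instance (a small illustration of my own, not from the question), a syntax-rules macro can expand into code that again contains the macro, each time on a strictly smaller form:

(define-syntax my-list*           ; made-up example macro
  (syntax-rules ()
    ((_ x) x)
    ((_ x y ...) (cons x (my-list* y ...)))))

(my-list* 1 2 '(3)) ; => (1 2 3)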
With destructuring-bind you really only need to keep track of two variables: one for the car and one for the cdr. With an implicit-renaming macro, everything not coming from the input form is renamed and is thus hygienic:
(define-syntax destructuring-bind
  (ir-macro-transformer
   (lambda (form inject compare?)
     (define (parse-structure structure expression optional? body)
       ;; the actual magic happens here: returns list structure with a mix
       ;; of parts from structure as well as introduced variables and globals
       )
     (match form
       [(_ structure expression . body)
        `(let ((tmp ,expression))
           ,(parse-structure structure 'tmp #f body))]))))
To check whether something from the input is a particular symbol, you use the supplied compare? procedure, e.g. (compare? expression '&optional).
There's no way to do that in general, because Scheme is lexically scoped. It doesn't make much sense to ask whether a variable is defined when referencing an undefined variable is an error.
For toplevel/global variables, you can use the symbol-utils egg but it is probably not going to work as you expect, considering that global variables inside modules are also rewritten to be something else.
Perhaps if you can say what you're really trying to do, I can help you with an alternate solution.

How to call a method object with standard functions

How does one call a method object as a function?
The closer-mop and clos packages both provide method-function for turning a method object into a function. However, is there a way to do it without including another package? And if not, which package? (I'm using SBCL.) And if a package is needed, how does the discriminating function manage to do it?
Here is an example of using find-method to get a method object. The question is then how to call method-to-be-called.
(defclass a () ((x :accessor x :initform 0)))
(defgeneric inc (i))
(defmethod inc ((i a)) (incf (x i)))
(defvar r (make-instance 'a))
;; ... in a land far far away:
(defvar method-to-be-called (find-method #'inc '() '(a)))
(funcall method-to-be-called r) ;; crashes and burns
As a secondary question, the docs say that the discriminating function first tries compute-applicable-methods-using-classes to find the applicable methods, and if that fails, it uses compute-applicable-methods. Why do this two-layer approach? Is it correct to assume that find-method is doing this two-layer approach, so it is better to use find-method?
-- Appendix --
In the comments below Rainer Joswig pointed out that this find-method form is implementation dependent:
(find-method #'inc '() '(a)) ; works on sbcl 1.3.1
He says the specifier list should be classes and suggests instead:
(find-method #'inc '() (list (find-class 'a)))
So I thought to just put my class in there:
(find-method #'inc '() (list a)) ; crashes and burns
Apparently (defclass a ... ) does not set a to a class. In fact it doesn't set it to anything!
* (defclass a () ((x :accessor x :initform 0)))
#<STANDARD-CLASS COMMON-LISP-USER::A>
* a
...
The variable A is unbound.
However, this works:
* (defvar ca (defclass a () ((x :accessor x :initform 0))))
CA
* (defmethod inc ((i a)) (incf (x i)))
WARNING: Implicitly creating new generic function COMMON-LISP-USER::INC.
#<STANDARD-METHOD COMMON-LISP-USER::INC (A) {1005EE8263}>
* (find-method #'inc '() (list ca))
#<STANDARD-METHOD COMMON-LISP-USER::INC (A) {1005EE8263}>
*
So a class is the return value from the defclass, not the value of the symbol that is provided to defclass.
(find-method #'inc '() '(a))
Above does not work. We need a list of classes, not a list of symbols.
(funcall (method-function (find-method #'inc
                                        '()
                                        (list (find-class 'a))))
         r)
Since the function method-function belongs to the MOP, many implementations provide it, in some implementation-specific package. CLOSER-MOP makes it available, too.
But usually, if you find yourself extracting method functions, you are probably using CLOS the wrong way, or else you really know what you are doing...
How does one call a method object as a function?
Honest question: why do you want to do that? Did you specify how the method's function is built in the first place, or not?
Even with closer-mop, I believe that the function returned by closer-mop:method-function is, at most, consistent with closer-mop:make-method-lambda in terms of its lambda-list, so perhaps you can use a package to know what you can count on portably.
A method's function does not have to be a function with the same lambda-list as the generic function, and usually it isn't due to next-method-p and call-next-method. Some implementations might use dynamic bindings for the next method list, so these might have a method lambda-list congruent with the generic function. Just don't count on it, generically.
I believe SBCL is not one of these implementations; the next-method list is passed to the method's function to support next-method-p and call-next-method.
Why do this two layer approach?
Because it allows memoizing (or caching) based on the list of classes, when possible. If the generic function is called again with arguments of the same classes, and the generic function has not been updated (see "Dependent Maintenance Protocol" in the MOP), it can reuse the last result without further processing, for instance by keeping the results in a hash table whose keys are lists of classes.
However, if compute-applicable-methods-using-classes returns a false second value, then compute-applicable-methods is used. The reason is that no method could be found using classes alone, and this means some method has a non-class specializer.
This is different from saying there are no applicable methods. For instance, if all methods are specialized on classes and there are no applicable methods, compute-applicable-methods-using-classes should return an empty list and a true second value. There's no point in calling compute-applicable-methods; it won't (or rather, it shouldn't, if well implemented) find anything further.
It's still possible to perform memoization when compute-applicable-methods is used, but the memoization is no longer as trivial as, for instance, using a list of classes as a key in a hash table. Perhaps you could use a tree structure where you'd try to look up a method for each specializer (instance, then class) sequentially for each argument, until a tree node matched the whole specializable parameter list.
With non-standard specializers, you'd have to change the search order for each node. And if such a specializer's priority is not strictly before, between, or after eql specializers and classes, you're in uncharted territory.
Actually, you'll have to change compute-applicable-methods-using-classes to recognize the non-standard specializers and return false early, and you'll have to change compute-applicable-methods to process these specializers anyway, so you'll probably have a good idea of how (and whether) to memoize the results of compute-applicable-methods as well.
Is it correct to assume the find-method is doing this two layer approach, so it is better to use find-method ?
No, the purpose of find-method is to find a specific method, not an applicable method. It does not use compute-applicable-methods-using-classes or compute-applicable-methods at all. In fact, it couldn't use the latter ever, as it takes actual arguments instead of specializers.
For the particular case of method-function, closer-mop for SBCL simply reexports the existing symbol from sb-pcl, as seen in closer-mop-packages.lisp. The whole file makes use of read-time conditionals (see 1.5.2.1 Use of Implementation-Defined Language Features).
That means that if you are working with SBCL, you might call sb-pcl:method-function (PCL means Portable Common Loops).
The generic function compute-applicable-methods-using-classes allows you to know which methods are applicable given classes alone. This is useful if you don't have actual instances on which you can operate.
It also seems that compute-applicable-methods-using-classes allows the implementation to memoize the applicable methods when the second return value is true. This generic function does not allow you to find applicable methods specialized with eql specializers.
I am speculating here, but it makes sense to fall back on compute-applicable-methods to allow for example eql-specializers or because it is slightly easier to define a method for compute-applicable-methods.
Note the paragraph about consistency:
The following consistency relationship between compute-applicable-methods-using-classes and compute-applicable-methods must be maintained: for any given generic function and set of arguments, if compute-applicable-methods-using-classes returns a second value of true, the first value must be equal to the value that would be returned by a corresponding call to compute-applicable-methods. The results are undefined if a portable method on either of these generic functions causes this consistency to be violated.
I don't think there is a find-method-using-classes generic function specified anywhere.

Identifiers and Binding in Scheme - how to interpret the function?

I am reading DrRacket document http://docs.racket-lang.org/guide/binding.html
There is a function
(define f
  (lambda (append)
    (define cons (append "ugly" "confusing"))
    (let ([append 'this-was])
      (list append cons))))
> (f list)
'(this-was ("ugly" "confusing"))
I see that we define the function f; inside, there is a lambda that takes append as its parameter. Why?
The body of the lambda is another definition, cons, which appends the two strings.
I don't understand this function at all.
Thanks !
The section that you're referring to demonstrates lexical scope in Racket. As in other Scheme implementations, the main point is that you can "shadow" every binding in the language. Unlike most "mainstream" languages, there are no real keywords that are "sacred" in the sense that they can never be shadowed by a local binding.
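For instance (a small example of my own, not from the original answer), even if can be shadowed by an ordinary local binding:

(let ([if list])  ; `if' now names the ordinary list procedure
  (if 1 2 3))     ; => '(1 2 3), a plain procedure call, not the special form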
Note that a really good tool to visualize what is bound where is DrRacket's "check syntax" button: click it, and you'll see your code with highlights that shows which parts are bindings, which are special forms -- and if you hover the mouse over a specific name, you'll see an arrow that tells you where it came from.
Scheme takes some getting used to :)
1. f is assigned the function returned by the lambda.
2. The lambda defines a function that takes one parameter, called append.
3. (define cons (append "ugly" "confusing")) is not a function definition per se; it calls append with the two strings as parameters and assigns the result to cons.
4. Inside the let block, append is re-bound to a different value, the symbol this-was.
5. The let block creates a list of append (which now contains 'this-was) and cons (which contains '("ugly" "confusing") from step 3 above).
6. Since step 5 is the last expression, its value is returned by the whole function, which is called f.
7. f is called with the argument list (the list function), which gets passed as the parameter append. That is why step 3 above creates the list '("ugly" "confusing"), which gets assigned to cons.
Hope that cleared up things a bit.
Cheers!

Specifics of call/cc

This is related to What is call/cc?, but I didn't want to hijack this question for my own purposes, and some of its arguments like the analogy to setjmp/longjmp evade me.
I think I have a sufficient idea about what a continuation is, I think of it as a snapshot of the current call stack. I don't want to go into the discussion why this might be interesting or what you can do with continuations. My question is more specifically, why do I have to provide a function argument to call/cc? Why doesn't call/cc just return the current continuation, so I could do whatever I please with it (store it, call it, you name it)? In a link from this other question (http://community.schemewiki.org/?call-with-current-continuation-for-C-programmers), it talks about "Essentially it's just a clean way to get the continuation to you and keep out of the way of subsequent jumps back to the saved point.", but I'm not getting it. It seems unnecessarily complicated.
If you use a construct like Jay shows, then you can grab the continuation, but in a way, the value that is grabbed is already spoiled because you're already inside that continuation. In contrast, call/cc can be used to grab the continuation that is still pending outside of the current expression. For example, one of the simplest uses of continuations is to implement a kind of an abort:
(call/cc (lambda (abort)
           (+ 1 2 (abort 9))))
You cannot do that with the operation you describe. If you try it:
(define (get-cc) (call/cc values))

(let ([abort (get-cc)])
  (+ 1 2 (abort 9)))
then you get an error about applying 9 as a procedure. This happens because abort jumps back to the let with the new value of 9 -- which means that you're now doing a second round of the same addition expression, except that now abort is bound to 9...
Two additional related notes:
For a nice and practical introduction to continuations, see PLAI.
call/cc is a little complex in that it takes in a function -- a conceptually easier construct is let/cc, which you can find in some implementations such as PLT Scheme. The above example becomes (let/cc abort (+ 1 2 (abort 9))).
That would be less versatile. If you want that behavior, you can just do:
(call/cc (lambda (x) x))
You could take a look at the example usages of continuations in "Darrell Ferguson and Dwight Deugo. "Call with Current Continuation Patterns". 8th Conference on Pattern Languages of Programs. September 2001." (http://library.readscheme.org/page6.html) and try to rewrite them using a call/cc-return, defined as above.
I suggest starting by asking yourself: what does it mean to be a first-class continuation?
The continuation of an expression essentially consists of two pieces of data: first, the closure (i.e., environment) of that expression; and second, a representation of what should be done with the result of the expression. A language with first-class continuations, then, is one which has data structures encapsulating these parts, and which treats these data structures just as it would any other.
call/cc is a particularly elegant way to realise this idea: the current continuation is packaged up as a procedure which encapsulates what-is-to-be-done-with-the-expression as what the procedure does when applied to the expression; to represent the continuation this way simply means that the closure of this procedure contains the environment at the site it was invoked.
You could imagine realising the idea of first-class continuations in other ways. They wouldn't be call/cc, and it's hard for me to imagine how such a representation could be simpler.
On a parting note, consider the implementation of let/cc that Eli mentioned, which I prefer to call bind/cc:
(define-syntax bind/cc
  (syntax-rules ()
    ((bind/cc var . body)
     (call/cc (lambda (var) . body)))))
And as an exercise, how would you implement call/cc based on bind/cc?
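(One possible solution, as a sketch of my own, named call/cc* here to avoid clashing with the built-in:)

(define (call/cc* f)
  (bind/cc k (f k)))  ; capture the continuation as k, then hand it to f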
Against common SO netiquette I'm answering my own question, but more as the editor than the provider of the answer.
After a while I started a similar question over at LtU. After all, these are the guys that ponder language design all day long, aren't they? And one of the answers finally made it click for me. Now the things mentioned here, e.g. by Eli or in the original question, make much more sense to me. It's all about what gets included in the continuation, and where the applied continuation sets in.
One of the posters at LtU wrote:
"You can see exactly how call/cc allows you to "keep out of the way." With em or get/cc you need to do some kind of test to determine if you have a back-jump or just the initial call. Basically, call/cc keeps the use of the continuation out of the continuation, whereas with get/cc or em, the continuation contains its use and so (usually) you need to add a test to the beginning of the continuation (i.e. immediately following get/cc / em) to separate the "using the continuation parts" from the "rest of the continuation" parts."
That drove it home for me.
Thank you guys anyway!

Resources