Lazy evaluation in WebAssembly

Would it be a kind of lazy evaluation if const values were returned by functions instead of being created in place?
Simple example:
(module
  (func $let3.5 (result f64) f64.const 3.5)
  (func $let2.5 (result f64) f64.const 2.5)
  (func $addLazyNumbers (result f64)
    (call $let3.5)
    (call $let2.5)
    f64.add
  )
  (export "addLazyNumbers" (func $addLazyNumbers))
)
And then call addLazyNumbers() on the instantiated module's exports in JS (it returns 6).
In a more complex example, there would be blocks with br_ifs and some of the values would not be needed.
Or is this just overhead, because the values 3.5 and 2.5 are already in memory after compilation?

Or is this just overhead, because the values 3.5 and 2.5 are already in memory after compilation?
It is just overhead, yes.
Engines compile constants directly into the executable code, and adding functions in between only means that the engine now has to do more work, both during compilation (to encode the new functions) and during execution (to actually invoke them and get the results back).
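For comparison, here is a sketch of the direct version (the name $addNumbers is chosen here for illustration), with the constants written in place; an engine encodes these straight into the function body and is even free to fold them at compile time:

(module
  ;; The constants are part of the function body itself; an engine
  ;; may fold them into a single f64.const 6 during compilation.
  (func $addNumbers (result f64)
    f64.const 3.5
    f64.const 2.5
    f64.add
  )
  (export "addNumbers" (func $addNumbers))
)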

Related

apply, apply-primitive-procedure and apply-in-underlying-scheme

Apply is defined in 4.1.1, "The Core of the Evaluator", of SICP as:
(define (apply procedure arguments)
  (cond ((primitive-procedure? procedure)
         (apply-primitive-procedure
          procedure
          arguments))
        ((compound-procedure? procedure)
         ...)
        (else
         ...)))
I referenced the definition of apply-primitive-procedure in 4.1.4, "Running the Evaluator as a Program", as:
(define (apply-primitive-procedure proc args)
  (apply-in-underlying-scheme
   (primitive-implementation proc) args))
So apply-primitive-procedure is implemented via apply-in-underlying-scheme.
However, the footnote reads:
Apply-in-underlying-scheme is the apply procedure we have used in earlier chapters. The metacircular evaluator's apply procedure (4.1.1) models the working of this primitive. Having two different things called apply leads to a technical problem in running the metacircular evaluator, because defining the metacircular evaluator's apply will mask the definition of the primitive. One way around this is to rename the metacircular apply to avoid conflict with the name of the primitive procedure. We have assumed instead that we have saved a reference to the underlying apply by doing

(define apply-in-underlying-scheme apply)

before defining the metacircular apply. This allows us to access the original version of apply under a different name.
It states that apply-in-underlying-scheme is the underlying apply that the apply of 4.1.1 models.
In summary:
,-> apply -> apply-primitive-procedure -> apply-in-underlying-scheme --.
'----------------------------------------------------------------------'
I guess it is not a recursion.
What's wrong with my understanding?
Apply means function application for everything that is not a special form (the special forms are handled in eval). Apply is a recursive function, but one that always finishes.
Apply is subdivided into two cases of procedure application:
-- procedures internal to the system that implements the language
This is where the transition is made between the target language and the language that is used to implement the target language (the source language).
Here you need to evaluate each argument (through eval) and convert the resulting object to a similar object in the source language before calling the source language's application function. For some arguments the eval->apply recursion may happen.
-- combinations created in the target language by using the means of combination that the target language provides
In this case you also need to call eval recursively for each argument and then use function application in the target language. Here you do not need to convert the result of eval to an object in the source language.
So in the case of combinations there is also recursion in apply, but it is a kind of recursion that terminates (the function-application function is primitive recursive), because you evaluate a smaller piece each time (the operator and the operands, versus the full initial expression).
I think you did not notice that apply is a primitive-recursive operator, and you were afraid it would not terminate.
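To make the non-circularity concrete, here is a sketch (using the SICP names; the trace is illustrative, not real interpreter output) of how a primitive call bottoms out in the host Scheme:

;; Saved before the metacircular apply shadows the host's apply:
(define apply-in-underlying-scheme apply)

;; Illustrative trace for evaluating (+ 1 2) in the metacircular evaluator:
;;   eval (+ 1 2)                              ; metacircular eval
;;    -> apply #[primitive +] (1 2)            ; metacircular apply
;;     -> apply-primitive-procedure ...        ; primitive branch of the cond
;;      -> apply-in-underlying-scheme + (1 2)  ; the host's apply returns 3

The host's apply never calls back into the metacircular apply, so the closing arrow in the diagram above does not exist: the chain is a straight line that ends in the host language.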

Is the function argument to `call/cc` written in CPS?

The parameter of call/cc is a procedure taking as its argument a continuation. Is the procedure written in CPS?
No.
CPS-styled functions expect other normal functions as their argument(s), and may call them in tail position. These functions are confusingly called "continuations" in Scheme vernacular. I prefer "contingencies", to disambiguate.
The argument function to call/cc expects an actual undelimited continuation as its argument. That actual continuation is not a function. Calling it with a value returns that value into that continuation's return context which is thus saved along with the continuation -- a feat unheard of w.r.t. simple functions.
A tail-called function returns its result into its calling function's caller's context.
A continuation which is called returns the supplied value to its creating call/cc call's context. It is thus not a function. Thus a function using it is not written in CPS.
Firstly, what is CPS? Continuation-Passing Style is a way of compiling a program which relies on continuations (i.e. uses the call/cc operator) into a target language which doesn't support continuations, but does support lexical closures and tail calls (or some facsimile of tail calls, like the ability to roll back the stack when it gets too deep with actual call frames).
A program transformed with CPS is itself not written in the continuation-passing style. It doesn't have to be; it has a call/cc operator provided by the CPS translator, which gives it access to the current continuation wherever it is needed.
Because CPS is mostly a source-to-source transformation, it is illustrated and taught using hand-written, explicit CPS. Quite often, the Scheme language is used for this. The Scheme language already has call/cc, but when experimenting with explicit, hand-written CPS, we have to pretend that call/cc doesn't exist. Under the CPS paradigm, we can, of course, provide an operator called my-call/cc which is built on our CPS, and has nothing to do with the underlying Scheme's call/cc.
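Such a my-call/cc is a one-liner in the CPS world. A minimal sketch (the name my-call/cc comes from the paragraph above; the body is one common textbook rendering, in which transformer-generated continuations take a single value, not a quote from any particular implementation):

;; In the CPS world every procedure, including my-call/cc itself, takes a
;; final continuation argument k. The reified continuation handed to f
;; ignores whatever continuation it is later called with and always
;; resumes k -- that is precisely what makes it a continuation.
(define (my-call/cc f k)
  (f (lambda (v ignored-k) (k v)) k))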
In a CPS-compiled language implementation, every procedure has a continuation argument (with the necessary exception of procedures in the host language's library). The functional argument of call/cc, the function which receives the continuation, is no exception. Being a procedure in a CPS world, it has to have a continuation parameter, so that it is compatible with procedure calls, which pass that argument.
The argument procedure of call/cc can in fact use that continuation argument, and the very first example on Wikipedia demonstrates this. Here is a copy of that example, in which I renamed the return parameter to c, to reduce confusion:
(define (f c)                ;; function used for capturing the continuation
  (c 2)                      ;; invoke the continuation
  3)                         ;; if the continuation returns, return 3

(display (f (lambda (x) x))) ; displays 3
(display (call/cc f))        ; displays 2
The procedure f's c argument doesn't have to be a continuation; in the first call it's just the dummy function (lambda (x) x). f calls it, that function returns, and control falls through to the 3.
Under CPS, f has a hidden argument, which we can reveal with hand-written CPS:
(define (f c k)
  (c 2 k)
  (k 3 k))
When 3 is being returned, it's because the hidden continuation is invoked. Since the program can do that, f has to have one.
Note that c and k are different continuations! Maybe that's where the confusion comes in. The c continuation is the caller's current one that is passed by call/cc, and is part of the explicit semantics of call/cc. The k is the hidden one added by the CPS transformer. It's f's own continuation. Every function has one in the CPS-transformed world. Under the CPS paradigm, if f wishes to return, it calls k (which is why I renamed return to c). If f wishes to continue the suspended call/cc, it calls c.
By the way, under automatic CPS, k is not going to be literally called k, because that wouldn't be hygienic: programs can freely bind the k symbol. It will have to be a machine-generated symbol (gensym), or receive some other form of hygienic treatment.
The only functions that have to be treated specially under CPS, so that they don't have continuation arguments, are library functions in the host language/VM that are to be available to the CPS-translated language. When the CPS-translated language calls (display obj) to print an object, that display function either has to be a renamed wrapper that takes the continuation argument (and then ignores it and calls the real display function without it), or else the call to display has to be specially handled by the CPS translator to omit the continuation argument.
Lastly, why can CPS implement continuations in a language/VM that doesn't natively provide them? The reason is that all function calls in CPS-transformed programs are tail calls, and so never return. The tricky part in implementing continuations is capturing the entire call stack so that when a continuation is resumed, it can return. Such a feature can only be added at the language implementation level. In the 1970s, InterLisp used "spaghetti stacks" to implement this: stack frames are garbage-collected heap objects, pointing to parent frames. But if functions never do such a thing as returning, the need to add a spaghetti stack to the implementation goes away.

Note that the spaghetti stack has not exactly gone away: we have something equivalent under CPS, namely chains of captured lexical environments. A continuation is a lambda, which captures the surrounding procedure's k parameter, which is itself a lambda that has captured its parent's k parameter, and so on: it's a chain of environments, similar to stack frames, in which the hidden k parameters act as frame pointers. But the host language already gave us lexically capturing lambdas; we have just leveraged these lambdas to de facto represent the continuation spaghetti stack, and so we didn't have to go down to the implementation level to do anything.
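To see "all calls are tail calls" concretely, here is a hand-CPS-transformed factorial (a sketch; fact-cps is a name invented here):

;; Every call is a tail call; the "stack" lives in the chain of lambdas,
;; each capturing n and the previous k -- the spaghetti stack as closures.
(define (fact-cps n k)
  (if (= n 0)
      (k 1)
      (fact-cps (- n 1)
                (lambda (r) (k (* n r))))))

(display (fact-cps 5 (lambda (x) x))) ; displays 120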

How to call a method object with standard functions

How does one call a method object as a function?
The closer-mop and clos packages both provide method-function for turning a method object into a function. However, is there a way to do it without pulling in another package, and if not, which package? (I am using SBCL.) And if a package is needed, how does the discriminating function manage it?
Here is an example of using find-method to get a method object. The question is then how to call method-to-be-called.
(defclass a () ((x :accessor x :initform 0)))
(defgeneric inc (i))
(defmethod inc ((i a)) (incf (x i)))
(defvar r (make-instance 'a))
;; ... in a land far far away:
(defvar method-to-be-called (find-method #'inc '() '(a)))
(funcall method-to-be-called r) ;; crashes and burns
As a secondary question, the docs say that the discriminating function first tries compute-applicable-methods-using-classes to find a method object, and if that fails, it uses compute-applicable-methods. Why do this two-layer approach? Is it correct to assume that find-method does this two-layer approach, so it is better to use find-method?
-- Appendix --
In the comments below Rainer Joswig pointed out that this find-method form is implementation-dependent:
(find-method #'inc '() '(a)) ; works on SBCL 1.3.1
He says the specializer list should contain classes and suggests instead:
(find-method #'inc '() (list (find-class 'a)))
So I thought to just put my class in there:
(find-method #'inc '() (list a)) ; crashes and burns
Apparently (defclass a ... ) does not set a to a class. In fact it doesn't set it to anything!
* (defclass a () ((x :accessor x :initform 0)))
#<STANDARD-CLASS COMMON-LISP-USER::A>
* a
...
The variable A is unbound.
However, this works:
* (defvar ca (defclass a () ((x :accessor x :initform 0))))
CA
* (defmethod inc ((i a)) (incf (x i)))
WARNING: Implicitly creating new generic function COMMON-LISP-USER::INC.
#<STANDARD-METHOD COMMON-LISP-USER::INC (A) {1005EE8263}>
* (find-method #'inc '() (list ca))
#<STANDARD-METHOD COMMON-LISP-USER::INC (A) {1005EE8263}>
*
So the class is the return value of defclass, not the value of the symbol that is given to defclass.
(find-method #'inc '() '(a))
Above does not work. We need a list of classes, not a list of symbols.
(funcall (method-function (find-method #'inc
                                       '()
                                       (list (find-class 'a))))
         r)
Since the function method-function belongs to the MOP, many implementations provide it, in some implementation-specific package. CLOSER-MOP makes it available, too.
But usually, if you are already trying to extract method functions, then either you are using CLOS the wrong way or you really know what you are doing...
How does one call a method object as a function?
Honest question: why do you want to do that? Did you specify how the method's function is built in the first place, or not?
Even with closer-mop, I believe that the function returned by closer-mop:method-function is, at most, consistent with closer-mop:make-method-lambda in terms of its lambda list, so perhaps that is all you can count on portably.
A method's function does not have to be a function with the same lambda list as the generic function, and usually it isn't, due to next-method-p and call-next-method. Some implementations might use dynamic bindings for the next-method list, and these might have a method lambda list congruent with the generic function's. Just don't count on it, generally.
I believe SBCL is not one of these implementations; the next-method list is passed to the method's function to support next-method-p and call-next-method.
Why do this two-layer approach?
Because it allows memoizing (or caching) based on the list of classes, when possible. If the generic function is called again with arguments of the same classes, and the generic function has not been updated (see "Dependent Maintenance Protocol" in the MOP), it can reuse the last result without further processing, for instance by keeping the results in a hash table whose keys are lists of classes.
However, if compute-applicable-methods-using-classes returns a false second value, then compute-applicable-methods is used. The reason is that applicability could not be determined using classes alone, which means some method has a non-class specializer.
This is different from saying there are no applicable methods. For instance, if all methods are specialized on classes and there are no applicable methods, compute-applicable-methods-using-classes should return an empty list and a true second value. There is no point in calling compute-applicable-methods then; it won't (or rather shouldn't, if well implemented) find anything further.
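As a sketch of the memoization idea (hypothetical helper names; this is not what any particular implementation does, and it assumes closer-mop is loaded for the MOP generic function):

;; Cache applicable methods per list of argument classes.
(defvar *applicable-cache* (make-hash-table :test #'equal))

(defun applicable-methods-cached (gf args)
  (let ((classes (mapcar #'class-of args)))
    (multiple-value-bind (cached hitp) (gethash classes *applicable-cache*)
      (if hitp
          cached
          (multiple-value-bind (methods class-based-p)
              (c2mop:compute-applicable-methods-using-classes gf classes)
            (if class-based-p
                ;; Applicability depends only on classes: safe to memoize.
                (setf (gethash classes *applicable-cache*) methods)
                ;; Some method has a non-class (e.g. EQL) specializer:
                ;; fall back, and don't cache by class list alone.
                (compute-applicable-methods gf args)))))))

A real implementation would also invalidate such a cache whenever the generic function is updated, which is what the Dependent Maintenance Protocol mentioned above is for.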
It's still possible to perform memoization when compute-applicable-methods is used, but the memoization is no longer as trivial as, for instance, using a list of classes as a key in a hash table. Perhaps you could use a tree structure where you'd try to look up a method for each specializer (instance, then class) sequentially for each argument, until a tree node matched the whole specializable parameter list.
With non-standard specializers, you'd have to change the search order for each node. And if such a specializer's priority is not strictly before, between, or after eql specializers and classes, you're in uncharted territory.
Actually, you'd have to change compute-applicable-methods-using-classes to recognize the non-standard specializers and return a false second value early, and you'd have to change compute-applicable-methods to process these specializers anyway, so you'll presumably know enough to decide how, if at all, to memoize the results of compute-applicable-methods.
Is it correct to assume that find-method is doing this two-layer approach, so it is better to use find-method?
No, the purpose of find-method is to find a specific method, not an applicable method. It does not use compute-applicable-methods-using-classes or compute-applicable-methods at all. In fact, it could never use the latter, as that takes actual arguments instead of specializers.
For the particular case of method-function, closer-mop for SBCL simply reexports the existing symbol from sb-pcl, as seen in closer-mop-packages.lisp. The whole file makes use of read-time conditionals (see 1.5.2.1 Use of Implementation-Defined Language Features).
That means that if you are working with SBCL, you can call sb-pcl:method-function directly (PCL stands for Portable Common Loops).
The generic function compute-applicable-methods-using-classes lets you know which methods are applicable given a list of classes. This is useful if you don't have actual instances to operate on.
It also seems that compute-applicable-methods-using-classes allows the implementation to memoize the applicable methods when the second return value is true. This generic function cannot find applicable methods specialized with eql specializers.
I am speculating here, but it makes sense to fall back on compute-applicable-methods to allow, for example, eql specializers, or because it is slightly easier to define a method for compute-applicable-methods.
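For instance, with an eql specializer the argument's class alone cannot determine applicability. A small sketch using the standard compute-applicable-methods (speak is a name invented here):

(defgeneric speak (x))
(defmethod speak ((x integer)) :integer)
(defmethod speak ((x (eql 42))) :the-answer)

;; 41 and 42 are instances of the same class, yet different methods apply:
(compute-applicable-methods #'speak (list 41)) ; => one method
(compute-applicable-methods #'speak (list 42)) ; => two methods, EQL first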
Note the paragraph about consistency:
The following consistency relationship between compute-applicable-methods-using-classes and compute-applicable-methods must be maintained: for any given generic function and set of arguments, if compute-applicable-methods-using-classes returns a second value of true, the first value must be equal to the value that would be returned by a corresponding call to compute-applicable-methods. The results are undefined if a portable method on either of these generic functions causes this consistency to be violated.
I don't think there is a find-method-using-classes generic function specified anywhere.

Use of 'optimize' proclamation

To enhance efficiency of my Lisp program, I want to insert this line into my code:
(optimize (speed 3) (safety 0) (debug 0) (space 0))
Currently I think I should put it at the top of each file. Is that a good idea, or should I put this line in one specific place? (I use ASDF for system definition.)
Another part of my question: is (safety 0) safe? Few of my functions use explicit declarations of variable types; what will happen to the others? Should I omit (safety 0) to avoid problems that might occur due to missing type checking?
I would avoid setting compilation policies globally, since "as with other defining macros, it is unspecified whether or not the compile-time side-effects of a declaim persist after the file has been compiled". If you really want to use a global policy per file, you can use the locally special form at top-level (subforms of a top-level locally remain top-level forms)
(locally (declare (optimize speed #| ... whatever ... |#))
  (defun compute-foo (x)
    (1+ x))
  (defun compute-bar (y)
    (* (compute-foo y) y)))
instead of
(declaim (optimize speed #| ... whatever ... |#))
...
or even
(proclaim '(optimize speed #| ... whatever ... |#))
I tend to use declarations sparingly, usually only locally within a function, e.g.,

(defun compute-foo (x)
  (declare (fixnum x))
  (1+ x))
Many modern Lisp compilers (like SBCL) have become pretty good at figuring types out. Further, I'd never use (safety 0) globally, since it may be dangerous, in particular during development, when things haven't really settled down and mistakes are common.
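If the motivation is speed in a few hot spots, one option (a sketch, with a hypothetical function) is to confine the aggressive settings to exactly those functions and keep default safety everywhere else:

(defun sum-doubles (v)
  ;; The aggressive settings apply only inside this function.
  (declare (optimize (speed 3) (safety 0) (debug 0))
           (type (simple-array double-float (*)) v))
  (let ((sum 0d0))
    (declare (double-float sum))
    (dotimes (i (length v) sum)
      (incf sum (aref v i)))))

;; (sum-doubles (make-array 3 :element-type 'double-float
;;                            :initial-element 1d0)) ; => 3.0d0

With (safety 0) in effect here, the type declarations become promises the compiler trusts without checking, so passing anything but the declared array type is undefined behavior.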

In Scheme, the purpose of (let ((cdr cdr)) ...)

I've been studying Scheme recently and came across a function that is defined in the following way:
(define remove!
  (let ((null? null?)
        (cdr cdr)
        (eq? eq?))
    (lambda ...)))  ; function that uses null?, cdr, eq?
What is the purpose of binding null? to null? or cdr to cdr, when these are built in functions that are available in a function definition without a let block?
In plain R5RS Scheme, there is no module system -- only the toplevel. Furthermore, the mentality is that everything can be modified, so you can "customize" the language any way you want. But without a module system this does not work well. For example, I write
(define (sub1 x) (- x 1))
in a library which you load -- and now you can redefine -:
(define - +) ; either this
(set! - +) ; or this
and now you have unintentionally broken my library, which relied on sub1 decrementing its input by one, and as a result your windows go up when you drag them down, or whatever.
The only way around this, which is used by several libraries, is to "grab" the relevant definition of the subtraction function, before someone can modify it:
(define sub1 (let ((- -)) (lambda (x) (- x 1))))
Now things will work "more fine", since you cannot modify the meaning of my sub1 function by changing -. (Except... if you modify it before you load my library...)
Anyway, as a result of this (and if you know that - is the original one when the library is loaded), some compilers will detect that the - call is always going to be the actual subtraction function, and will therefore inline calls to it (and inlining a call to - can eventually result in assembly code for subtracting two numbers, so this is a big speed boost). But as I said, this is more a coincidental benefit than the actual reason above.
Finally, R6RS (and several Scheme implementations before that) fixed this by adding a library system, so there is no more use for this trick: the sub1 code is safe as long as other code in its library is not redefining - in some way, and the compiler can safely optimize code based on this. No need for clever tricks.
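To see the trick's effect concretely (a sketch; sub1-unprotected is a name invented here):

(define sub1-unprotected (lambda (x) (- x 1)))
(define sub1 (let ((- -)) (lambda (x) (- x 1)))) ; grabs the original -

(set! - +)                     ; someone "customizes" subtraction
(display (sub1-unprotected 5)) ; 6 -- broken: looks up - at call time
(newline)
(display (sub1 5))             ; 4 -- still correct: uses the captured -
(newline)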
That's a speed optimization: local variable access is usually faster than global variable access.
