Can R7RS-small implementations allow only one define-library per file? - scheme

Are compliant R7RS-small implementations allowed to impose a restriction on the number of define-library forms per file? Some R7RS-small implementations, such as Guile 3.0.7, allow only one define-library form per file. Is this a deviation from the standard, or is it allowed by R7RS-small?

In R7RS, define-library is just a form, similar to library in R6RS. I don't see any allowance in either standard for conforming implementations to constrain a file to contain only one such form.
But the Guile documentation has something to say on the matter. In 7.7 R7RS Support:
Happily, the syntax for R7RS modules was chosen to be compatible with R6RS, and so Guile’s documentation there applies.
In 7.7.1 Incompatibilities with the R7RS:
As the R7RS is a much less ambitious standard than the R6RS (see Guile and Scheme), it is very easy for Guile to support. As such, Guile is a fully conforming implementation of R7RS, with the exception of the occasional bug and a couple of unimplemented features....
Then in 7.6.1 Incompatibilities with the R6RS
Multiple library forms in one file are not yet supported. This is because the expansion of library sets the current module, but does not restore it. This is a bug.

Yes, I think they can (and, perhaps, should).
If you look at the formal syntax & semantics in r7rs.pdf, then:
A program is one or more import declarations followed by one or more commands or definitions. Commands and definitions don't include define-library.
A library is exactly one define-library form.
So from that you can conclude that a program doesn't include define-library forms, and a library includes exactly one such form.
Now, that document doesn't say how any of this maps onto files at all, so it's up to the implementation to define that. I think it would be perfectly possible for an implementation to say that the mapping of libraries to library files should be 1-1, so any given library file contains exactly one library. It would also be possible to have files which contain mixtures of a program and one or more libraries, of course.
In the case where libraries are in their own files (which is obviously the more interesting case in terms of allowing reuse) something has to turn a library name into a file. And that would make it reasonably natural to put exactly one library in each file.
If it were me, I'd allow files which contain a mixture of a program and one or more libraries directly present, but for files which are just libraries I'd allow only one per file.
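To make that concrete, here is a minimal sketch of the one-library-per-file layout (the library name and its contents are invented for illustration):

(define-library (my utils)
  (import (scheme base))
  (export add-twice)
  (begin
    (define (add-twice x) (+ x x))))

An implementation with a 1-1 mapping would expect to find this library in a file derived from its name, something like my/utils.sld, with nothing else in that file.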

Related

Including ComplexExpr and ComplexFunc classes in Halide API

The ComplexExpr and ComplexFunc classes in the links below seem very convenient for working with complex numbers. Is there a plan to include them in the official Halide API? Or is there a reason why they are not included?
https://github.com/halide/Halide/blob/master/apps/fft/complex.h
https://github.com/halide/Halide/blob/be1269b15f4ba8b83df5fa0ef1ae507017fe1a69/apps/fft/funct.h
Speaking as a Halide developer...
Or is there a reason why they are not included?
We haven't included these historically since we didn't want to bless a particular representation for complex numbers. There are a few valid ways of dealing with them and the headers in question are just one.
Is there a plan to include them into the official Halide API?
We've started talking about packaging some of this type of code into a set of header-only "Halide tools" libraries, so named to avoid the normative implication of calling it something like "stdlib". So as of right now, there is no concrete plan, but the odds are nonzero.
In the meantime, the code is MIT licensed, so you should feel free to use those files, regardless.

Will Go compilers ignore unused functions

If there is a function from an external package that is not used at all in my project, will the compiler remove the function from the generated machine code?
This question could be targeted at any language compiler in general. But I think the behaviour may vary from language to language, so I am interested in knowing what Go compilers do.
I would appreciate any help on understanding this.
The language spec does not mention this anywhere, and from a correctness point of view this is irrelevant.
But know that the current version does remove certain constructs that the compiler can prove are not used and whose removal will not change the runtime behaviour of the app.
Quoting from The Go Blog: Smaller Go 1.7 binaries:
The second change is method pruning. Until 1.6, all methods on all used types were kept, even if some of the methods were never called. This is because they might be called through an interface, or called dynamically using the reflect package. Now the compiler discards any unexported methods that do not match an interface. Similarly the linker can discard other exported methods, those that are only accessible through reflection, if the corresponding reflection features are not used anywhere in the program. That change shrinks binaries by 5–20%.
Methods are a "harder" case than functions because methods can be listed and called with reflection (unlike functions), but the Go tools do what they can to remove unused methods too.
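As a hedged illustration (the function names below are invented), consider a program where one function is never referenced; the linker is free to drop it from the executable:

package main

import "fmt"

// used is called from main, so it must be kept.
func used() int { return 42 }

// unused is never referenced anywhere; the Go linker
// can omit it from the final binary.
func unused() int { return 7 }

func main() {
    fmt.Println(used())
}

After go build, one way to check is to search the output of go tool nm on the resulting binary for the symbol (e.g. main.unused) and see whether it survived linking.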
You can see examples and proof of removed / unlinked code in this answer:
How to remove unused code at compile time?
Also see other relevant questions:
Splitting client/server code
Call all functions with special prefix or suffix in Golang

#ifndef in Common Lisp

In C, to make sure we don't include headers more than once, we use the following structure:
#ifndef UTILS
#define UTILS
#include "my_utils.h"
#endif
I've broken my Lisp program into separate files; multiple files sometimes use the same file (e.g., my_utilities is used by multiple files). When I run the program, I get warnings that I am redefining things (from calling load on the same file multiple times).
This would be fixed by doing something similar to #ifndef in C. What is the Common Lisp equivalent or standard method of doing this?
I am fairly new to Lisp. Let me know if there are best practices (perhaps, a different method of structuring my programs?) that I am missing.
The question you asked
The direct analogue of preprocessor conditionals like #if in C is the #+ read-time conditionalization facility.
The question you wanted to ask
To avoid loading a file multiple times, you can either use the standard (but deprecated) provide/require facility, or an add-on system like ASDF.
For Common Lisp applications and libraries it is preferred to use a system management tool, like ASDF or whatever your implementation may provide. A system is a collection of files with dependencies and various actions (load, compile, ...).
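As a hedged sketch, an ASDF system definition for the situation in the question might look like this (the system and file names are invented); ASDF loads each file at most once, in dependency order, so no include guards are needed:

;; A minimal system: main.lisp depends on my-utilities.lisp,
;; and ASDF will compile/load my-utilities exactly once.
(asdf:defsystem "my-app"
  :components ((:file "my-utilities")
               (:file "main" :depends-on ("my-utilities"))))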
You can always check the state of the runtime and do something.
Example:
(unless (fboundp 'foobar)
  (require "foo")
  (load "bar"))

(unless (find-package 'foobar)
  (require "foo")
  (load "bar"))
PROVIDE and REQUIRE are built-in functions for exactly that. If you require a module it will be loaded, unless it has already been provided. Unfortunately this machinery is underspecified in the standard, but implementations may extend it in useful ways.
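A hedged sketch of that protocol (the module name is invented; how REQUIRE locates a not-yet-provided module is implementation-dependent, though the standard allows an explicit pathname as a second argument):

;; At the top of my-utilities.lisp:
(provide "my-utilities")

;; In every file that needs it; the load happens at most once:
(require "my-utilities")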
Common Lisp runtimes keep a list of features in the variable *features*. You can use it to advertise and to check for functionality.
Example:
In your library:
(push :cool-new-graphics-library cl:*features*)
In your application code:
(when (member :cool-new-graphics-library cl:*features*)
  (funcall (find-symbol "DRAW-SPACE-SHIP" "CNGL")
           :death-star))
Common Lisp provides a way to conditionalize code at read time. The following code will only be read when the :cool-new-graphics-library feature is present, and thus only then will it be executed later:
#+cool-new-graphics-library(cngl:draw-space-ship :death-star)
Common Lisp also allows you to express some logic:
#+(and lispworks cool-new-graphics-library)
(cngl:draw-space-ship :enterprise)
#-cool-new-graphics-library(warn "no cool graphics library available")
Note that you can force Lisp to execute code at compile-time:
(eval-when (:load-toplevel :compile-toplevel :execute)
  #+(and lispworks cool-new-graphics-library)
  (cngl:draw-space-ship :enterprise)
  #-cool-new-graphics-library
  (warn "no cool graphics library available"))
For this to work the EVAL-WHEN has to be at the toplevel in a file. That means it will not work deep inside nested forms. It does work inside a toplevel PROGN, LOCALLY, MACROLET and SYMBOL-MACROLET, though.
Thus EVAL-WHEN allows you to run code which is part of the file currently being compiled. This code can then look for loaded systems, provided modules, available functions, and more.

How many times does a Common Lisp compiler recompile?

While not all Common Lisp implementations do compilation to machine code, some of them do, including SBCL and CCL.
In C/C++, if the source files don't change, the binary output of a C/C++ compiler will also not change, assuming the underlying system remains the same.
In a Common Lisp compiler, the compilation is not under the user's direct control, unlike C/C++. My question is: if the Lisp source files haven't changed, under what circumstances will a CL compiler compile the code more than once, and why? If possible, a simple illustrative example would be helpful.
I think that the question is based on some misconceptions. The compiler doesn't compile files, and it's not something that the user has no control over. The compiler is quite readily available through the compile function. The compiler operates on code, not on files. E.g., you can type at the REPL
CL-USER> (compile nil (list 'lambda (list 'x) (list '+ 'x 'x)))
#<FUNCTION (LAMBDA (X)) {100460E24B}>
NIL
NIL
There's no file involved at all. However, there is also a compile-file function, but notice that its description is:
compile-file transforms the contents of the file specified by input-file into implementation-dependent binary data which are placed in the file specified by output-file.
The contents of the file are compiled. Then that compiled file can be loaded. (You can also load uncompiled source files.) I think your question might boil down to asking under what circumstances compile-file would generate a file with different contents. I think that's really implementation-dependent, and it's not really predictable. I don't know that your characterization of compilers for other languages necessarily holds either:
In C/C++, if the source files don't change, the binary output of a C/C++ compiler will also not change, assuming the underlying system remains the same.
What if the compiler happens to include a timestamp into the output in some data segment? Then you'd get different binary output every time. It's true that some common scripted compilation/build systems (e.g., make and similar) will check whether previous output can be reused based on whether the input files have changed in the meantime. That doesn't really say what the compiler does, though.
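For concreteness, the typical compile-then-load flow looks like this (the file name is invented; the fasl's location and format are implementation-dependent):

;; compile-file returns the truename of the output file,
;; which load accepts directly.
(load (compile-file "my-utilities.lisp"))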
The rules are pretty much the same, but in Common Lisp, it's not a practice to separate declarations from implementation, so usually you must recompile every dependency to be sure. This is a shared practical consequence of dynamic environments.
Imagining there were such separation in place, the following are blatant examples (clearly not exhaustive) of changes that require recompiling specific dependent files, as the output may be different:
A changed package definition
A changed macro character or a change in its code
A changed macro (see the sketch after this list)
Adding or removing an inline or notinline declaration
A change in a global type or function type declaration
A changed function used in #., defvar, defparameter, defconstant, load-time-value, eql specializer, make-load-form generated code, defmacro et al (e.g. setf expanders)...
A change in the Lisp compiler, or in the base image
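As a hedged illustration of the macro case (the file and symbol names are invented):

;; In utils.lisp:
(defmacro twice (x) `(* 2 ,x))

;; In app.lisp, compiled after loading utils.lisp;
;; the expansion (* 2 N) is baked into app's fasl:
(defun f (n) (twice n))

;; If TWICE is later changed to expand to (+ ,x ,x), the stale
;; expansion stays in app's fasl until app.lisp is recompiled.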
I mean, you can see it's not trivial to determine which files need to be recompiled. Sometimes, the answer is "all subsequent files", e.g. after changing the " (double-quote) macro-character, which might affect every literal string, or when the compiler evolved in a non-backwards-compatible way. In essence, we end where we started: you can only be sure with a full recompile, not reusing fasls across compilations. And sometimes that's faster than determining the minimum set of files that need to be recompiled.
In practice, you end up compiling single definitions a lot during development (e.g. with Slime) and not recompiling files when there's a fasl at least as new as the source file. Many times, you reuse files from e.g. Quicklisp. But for testing and deployment, I advise clearing all fasls and recompiling everything.
There have been efforts to automate minimum-dependency compilation with SBCL, but I think it's too slow when, as is more often than not the case, you keep changing the intermediate projects (it involves a lot of forking, so on Windows it's either infeasible or very slow). However, it may be a time saver for base libraries that rarely change, if at all.
Another approach is to make custom base images with base libraries built-in, i.e. those you always load. It'll save both compilation and load times.

Suggestions for using attributes beyond [[noreturn]]?

Coming from the discussions about the use of vendor-specific attributes in another question, I asked myself: what rules should we tell people for using attributes that are not listed in the standard?
The two attributes that are defined are [[noreturn]] and [[carries_dependency]]. The standard leaves open how compilers should react to unknown attributes -- thus, by the standard, they may stop with an error message. This is not what e.g. GCC does: it emits a warning and continues. This is probably the behavior to be expected of the most common compilers. For this reason I would have liked to read a "should" in the standard, but we don't have it.
The paper N2553 brings up flexible attributes. It lists further attributes used by GCC (unused, weak) and MSVC (dllimport). For OpenMP, the widely supported parallelizing framework, scoped attributes are suggested, e.g. omp::for(clause, clause), omp::parallel(clause, clause). So it is very likely that we will see some vendor-specific attributes very soon after compilers support the attribute syntax at all.
Therefore, when we now go "out in the world" and tell people about C++11, what should the advice be about using attributes?
Only use noreturn and carries_dependency
Use your compiler's old syntax instead, e.g. __attribute__((noreturn)), and define a macro when you port the code (the current situation)
Use those attributes your favorite compiler supports freely, knowing this code might not be portable to another standard-conforming compiler, because if the standard allows a compiler to stop with an error, you have to consider that this will happen. This sounds a bit like advocating writing non-portable code.
Or, my guess: expect the most-used compilers to warn about unknown attributes, so you can use vendor-specific attributes, keeping in mind that in rare cases you may get problems.
Note the slight difference in the last two bullet items. While both say "use those attributes you need", item 3's message is "do not care about other compilers", while item 4 implicitly rephrases the standard's "implementation-defined behavior" to "the compiler should emit a diagnostic message".
What could be the suggestion for an upcoming Best Practice here?
The best practice — the only one that is reasonably portable in practical terms, never mind ambiguity in the Standard — is to use macros. It will be many years before we can forget about compilers that don't support attributes.
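A minimal sketch of the macro approach (the macro and function names are our own, not taken from any standard or vendor header):

// Pick the attribute syntax this compiler understands;
// fall back to nothing on unknown compilers.
#if defined(__GNUC__)
#  define MY_NORETURN __attribute__((noreturn))
#elif defined(_MSC_VER)
#  define MY_NORETURN __declspec(noreturn)
#else
#  define MY_NORETURN
#endif

MY_NORETURN void fatal_error(const char* msg);

When a given compiler gains support for the standard [[noreturn]] syntax, only the macro definition has to change; no call site is touched.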
The number of compilers and the number of custom __keywords__ defined by those compilers will always be increasing, and it makes sense for the language to define a way to contain the damage. It doesn't need to revolutionize the way people write unportable code, or make unportable code portable (although standard attributes do that). There is a benefit simply to giving caffeine-addled compiler backend engineers a sandbox for when they want to extend the grammar.
It is a bit alarming, though, that no attribute tokens are reserved to the implementation, or to the language besides the ones currently standard. So there will be trouble when they decide to standardize more of them.
