How to undefine a variable in Scheme? Is this possible?
You're touching a nerve here. Scheme doesn't have a very clear standard notion of how top-level environments work. Why? Because the Scheme standards represent a compromise between two sets of people with very different ideas of how Scheme should work:
The interpretive crowd, who sees the top-level environment as you describe above: a runtime hash-table where bindings are progressively added as program interpretation proceeds.
Then there's the compilation crowd, who sees the top-level environment as something that must be fully computable at compilation time (i.e., a compiler must be able to conclusively identify all of the names that will be bound in the top-level environment).
Your "how do I undefine a variable" question only makes sense in the first model.
Note that the interpretive model, where a program's top-level bindings depend on what code paths get taken, makes efficient compilation of Scheme code much harder for many reasons. For example, how can a Scheme compiler inline a procedure invocation if the name of the procedure is a top-level binding that may not just change during runtime, but even disappear into nothingness?
I'm firmly in the compilation camp here, so what I would recommend is to avoid writing code that relies on the ability to add or remove top-level bindings at runtime, or that even requires top-level variables at all (though those are often unavoidable). Some Scheme systems (e.g., Racket) are able to produce reasonably good compiled code, and code that relies on such assumptions will trip them up in that regard.
In Scheme, variables are defined with either lambda, or one of the various lets. If you want one of them to be 'undefined' then all you need to do is leave the scope that they're in. Of course, that's not really undefining them, it's just that the variable is no longer bound to its previous definition.
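A minimal sketch of that (names arbitrary):

(let ((x 42))
  x)          ; => 42
;; x          ; outside the let, x is no longer bound: referencing it is an error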
If you're making top-level definitions with (define), the binding is stored in some sort of global environment structure. If you were intimately familiar with your implementation (and it isn't safeguarded somehow), you could probably overwrite that structure with your own definition of the global environment. Barring that, I'd say your best bet is to redefine the variable to the empty list: that's really as empty as you get.
Scheme (R7RS) has no standards-compliant way to remove a top-level binding.
If you evaluate a non-existent variable, you get an error:
(eval 'a)
; => ERROR: undefined variable: a
If you define it, the variable gets added to the top-level environment.
(define a 1)
(eval 'a)
; => 1
From now on, no matter what you do, you will not get an error when you access the variable.
If you set it to false, you will get false:
(set! a #f)
(eval 'a)
; => #f
Even if you set it to something unspecified, it is unlikely that you get an error:
(set! a (if #f #t))
(eval 'a)
; =>
But Schemes may have a non-standard way to remove a top-level binding. MIT Scheme provides the function unbind-variable.
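In MIT Scheme that looks something like this (a sketch from memory of its environment API; consult the MIT Scheme reference manual for the exact details):

(define a 1)
a                                               ; => 1
(unbind-variable system-global-environment 'a)  ; MIT-specific, not standard Scheme
;; a                                            ; unbound-variable error again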
As stated in the other answers there is no standard way of manipulating the namespace in Scheme. For a specific implementation there might be a solution.
In Racket the top-level variables are stored in a namespace. You can remove a variable using namespace-undefine-variable!.
There is no way of removing a local variable.
http://docs.racket-lang.org/reference/Namespaces.html?q=namespace#%28def.%28%28quote.~23~25kernel%29._namespace-undefine-variable%21%29%29
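At a Racket REPL that looks like this (a sketch; the procedure operates on the current namespace, so it works at the top level but not inside a module body):

(define x 42)
x                                  ; => 42
(namespace-undefine-variable! 'x)
;; x                               ; now an undefined-identifier error again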
(set! no-longer-needed #f)
Does this achieve the effect you want? You can also use define at the top level.
guile> (define nigel "lead guitar")
guile> nigel
"lead guitar"
guile> (define nigel #f)
guile> nigel
#f
guile>
You could then re-define the variable. This all depends on the scope of the variables, of course: see Greg's answer.
You cannot unbind a variable in standard Scheme. You could set! the variable to 'undefined, I guess, or you could write a metainterpreter which reifies environments, allowing you to introduce your own notion of undefining variables.
I think, if your point is to do the equivalent of "free" or de-allocate, then no, you're pretty much out of luck: you can't de-allocate a variable. You CAN re-define it to something small, like #f, but once you've done (define foo 'bar), the variable foo will exist in some form until you end the program.
On the other hand, if you use let, or letrec, of course, the name only exists until the relevant close paren...
I think your question is not stupid. In AutoLISP, a non-existent (undefined) variable has the a-priori value "nil" (even if the variable does not exist in memory, i.e. it is not in the table of variables, its value is "nil", meaning false). It also means false, and it is also the empty list. If you program some kind of list-processing function, it is enough to make the initial test simply:
(if input-list ....)
When you want to explicitly undefine any variable, you may do this:
(setq old-var nil); or: (setq old-var ())
I like it. The keyword "setq" plays the role of "define". What is better about binding and unbinding variables in other dialects? You must test whether they exist and whether they are lists, you need a garbage collector, and you may not undefine a variable to explicitly free memory. The following command cannot be written if the variable "my-list" is not yet defined:
(define my-list (cons 2 my-list))
So I think the AutoLISP way is much better for programming. You can use the possibilities I described there. Unfortunately, AutoLISP works only in some CAD engineering graphics systems.
Related
I am new to Lisp, using SBCL 1.2.11 from the terminal.
Could anyone help me figure out where I should start looking to get rid of the above error? I think it is causing the following error:
(setf x (list 'a 'b 'c))
; No debug variables for current frame: using EVAL instead of EVAL-IN-FRAME.
; (SETF X (LIST 'A 'B 'C))
; ==>
;   (SETQ X (LIST 'A 'B 'C))
;
; caught WARNING:
;   undefined variable: X
;
; compilation unit finished
;   Undefined variable: X
;   caught 1 WARNING condition
(A B C)
I should not be seeing the comments, is that right?
Thank you so much!
[I've added this answer as there seem to be no others and in the hope that it may help.]
There are two problems here:
you're doing something which is not legal Common Lisp (however commonly it is done);
SBCL is warning about this in a slightly uninformative way.
There's an important bit of terminology which is common in Lisp but I think less common in other languages. That terminology is binding: a binding is, loosely, an association between a name of some kind and a value. Associated with a binding are a scope -- where it is visible -- and an extent, when it is visible. (I am not going to talk about scope and extent because it's a big subject and this answer is already too long.) Almost all programming languages have these three notions but they often call them different things in confusing ways.
The thing often called a variable is a name which is associated with a value: it is, in fact, a binding. And of course the term 'binding' originates from 'variable binding'. But not all bindings are variables -- the term is now more general and more precise (although I'm not giving anything like a precise definition here).
There are two families of constructs which deal with bindings:
constructs which establish bindings;
constructs which modify bindings.
These two things are different in CL. In common with many programming languages there are special constructs which create bindings, and other constructs which modify (mutate) them. Constructs which modify bindings require the bindings to exist so they can be modified.
Constructs which establish bindings
In CL these are things like let: (let ((x 1) y) ...) establishes local bindings for x and y, visible in its lexical scope (usually). defvar and friends establish global bindings. defun and friends establish global bindings in a different namespace (the function namespace, as opposed to the variable namespace), and flet / labels establish local function bindings. There are other constructs, and in particular the set of constructs is effectively user-extensible in the usual Lisp way.
Constructs which modify bindings
The traditional construct to modify a variable binding is setq: setf is a macro which allows you to modify bindings (and other things it calls 'places' such as elements of arrays) in a more general, and user-extensible way. So (setf x 2) modifies the binding of x.
The mistake
The mistake you are making is that you can't just say (setf a ...) unless there is an existing binding of a. This code is illegal CL:
(defun foo ()
(setf a 1)
...)
Instead you need to establish a binding for a:
(defun foo ()
(let ((a ...))
...
(setf a 1)
...))
Well, the same thing is true at the top-level: you can't just say:
> (setf x (list 'a 'b 'c))
because you're trying to modify a binding of x which does not exist.
And this is what SBCL is telling you, in a rather uninformative (I think) way: it's telling you that there is no existing binding of x.
Solutions and nonsolutions
Unfortunately CL doesn't really offer a very good solution to this problem. One way to 'solve' it is to do this:
> (defvar x ...)
[...]
> (setf x (list 'a 'b 'c))
But this is an undesirable solution: (defvar x ...) turns x into a globally special variable -- a variable which is dynamically scoped. This changes the semantics of any binding of x, for instance in a later function, in ways which can be unexpected. This is undesirable. If you want to do it, at least make sure your special variables follow the *star* convention so it's obvious which they are.
CL, out-of-the-box doesn't offer what you might want, which is 'top-level lexical variables' -- variable bindings you can declare at the top-level which don't do this unfortunate globally-special thing. However Lisp is so flexible that you can actually add this to it if you want.
But still this is kind of a clunky solution: the whole point of having a conversational language is that you don't need to spend your life painfully declaring everything when you talk to the implementation: you want just to be able to say (setf x 1) and have that work.
And in many implementations it does work: the top-level interactive environment lets you just say that, and just does the right thing, while when you, for instance, compile a file, you will get a compile-time warning and a run-time error (possibly warning) if you do the same thing. However this relaxed behaviour in the interactive environment is quite clearly outside the standard.
SBCL doesn't do that because, I think, it doesn't really have a top-level interactive interpreter but rather compiles everything. So you get these warnings. An SBCL person might want to correct me, but I think it is reasonably safe to ignore them when you're typing at the system (but not when you are compiling files) and treat SBCL like other implementations.
A way not to fix the problem
One way that might seem sensible to fix this problem is just not to have special constructs to create bindings: a binding is created by the first assignment. This is what Python does, for instance. Superficially this seems like a clever trick: there's less typing and fuss.
But what is the scope of these implicitly-created bindings meant to be? (Python says 'the whole function, including bits of it which get run before the first assignment', which is, well, interesting.) And, worse, how do you distinguish between an assignment to a variable defined in an outer scope and the identical construct which creates a binding in an inner scope? Python does that with a special 'global' construct, which doesn't really work: Python now has a 'nonlocal' construct as well. And how do you tell whether a variable is bound at compile time (or even, really, at run time) so as to give good warnings?
The trick which seemed like a good idea actually makes things more complicated in many cases: it's reasonable for a quick-and-dirty scripting language, but less reasonable for a large-scale-systems language, I think.
I'm writing a simple lisp interpreter from scratch. I have a global environment that top level variables are bound in during evaluation of all the forms in a file. When all the forms in the file have been evaluated, the top level env and all of the key value data structs inside of it are freed.
When the evaluator encounters a lambda form, it creates a PROC object that contains 3 things: a list of arguments to be bound in a local frame when the procedure is applied, the body of the function, and a pointer to the environment it was created in. For example:
(lambda (x) x)
would produce something internally like:
PROC- args: x,
body: x,
env: pointer to top level env
When the PROC is applied, a new environment is created for the frame and the local bindings are staged there to allow the body to be evaluated with the appropriate bindings. This frame environment contains a pointer to its closure to allow variable lookup inside of THAT. In this case, that would be the global environment. After the PROC body is evaluated, I can free all the cells associated with it including its frame environment, and exit with no memory leaks.
My problem is with higher order functions. Consider this:
(define conser
(lambda (x)
(lambda (y) (cons x y))))
A function that takes one argument and produces another function that will cons that argument to something you pass into it. So,
(define aconser (conser '(1)))
Would yield a function that cons'es '(1) to whatever is passed into it. ex:
(aconser '(2)) ; ((1) 2)
My problem here is that aconser must retain a pointer to the environment it was created in, namely the frame of conser that existed when it was produced via the invocation (conser '(1)). When the aconser PROC is applied, its frame must point to that frame of conser, so I can't free conser's frame after applying it. I don't know the best way to free the memory associated with a lambda frame when it is applied while also supporting this kind of persistent higher-order function.
I can think of some solutions:
some type of ARC
copying the enclosing environment into the frame of the evaluated PROC when it is produced
This seems to be what is being implied here. So, instead of saving a pointer in the PROC object to its closure, I would... copy the closure environment and store a pointer to that directly in the cell? Would this not just be kicking the can one level deeper and result in the same problem?
recursively substituting the labels at read time inside of the body of the higher order function
I am worried I might be missing something very simple here, and I am also curious how this is supported in other implementations of Lisp, and in languages with closures in general. I have not had much luck searching for answers because the question is very specific, perhaps even to this implementation (which I am admittedly just pulling out of my hat as a learning project); much of what I can find explains the particulars of closures from the perspective of the language being implemented, not from that of the language the implementation is written in.
Here is a link to the relevant line in my source, if it is helpful, and I am happy to elaborate if this question is not detailed enough to describe the problem thoroughly. Thanks!
The way this is handled usually in naive interpreters is to use a garbage-collector (GC) and allocate your activation frames in the GC'd heap. So you never explicitly free those frames, you let the GC free them when applicable.
In more sophisticated implementations, you can use a slightly different approach:
when a closure is created, don't store a pointer to the current environment. Instead, copy the value of those variables which are used by the closure (it's called the free variables of the lambda).
and change the closure's body to use those copies rather than look in the environment for those variables. It's called closure conversion.
Now you can treat your environment as a normal stack, and free activation frames as soon as you exit a scope.
You still need a GC to decide when closures can be freed.
this in turn requires an "assignment conversion": copying the value of variables implies a change of semantics if those variables get modified. So to recover the original semantics, you need to look for those variables which are "copied into a closure" as well as "modified", and turn them into "reference cells" (e.g. a cons cell where you keep the value in the car), so that the copy doesn't copy the value any more, but just copies a reference to the actual place where the value is kept. [ Side note: such an implementation obviously implies that avoiding setq and using a more functional style may end up being more efficient. ]
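Written out by hand in Scheme, the assignment conversion looks something like this (a sketch; a real compiler would perform the rewrite automatically):

;; Before: the closure captures and mutates the variable n directly.
;; (define (counter)
;;   (let ((n 0))
;;     (lambda () (set! n (+ n 1)) n)))

;; After: n lives in a reference cell (here the car of a cons), so the
;; closure copies only the reference, and the frame itself can be freed.
(define (counter)
  (let ((n-box (cons 0 '())))
    (lambda ()
      (set-car! n-box (+ (car n-box) 1))
      (car n-box))))

(define c (counter))
(c) ; => 1
(c) ; => 2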
The more sophisticated implementation also has the advantage that it can provide a safe for space semantics: a closure will only hold on to data to which it actually refers, contrary to the naive approach where closures end up referring to the whole surrounding environment and hence can prevent the GC from collecting data that is not actually referenced but just happened to be in the environment at the time it was captured by the closure.
In Common Lisp, when I want to use different pieces of code depending on Common Lisp implementations, I can use *features* and the provided notation of #+ and #- to check the availability of a given feature and proceed accordingly. So for example (taken from Peter Seibel's PCL):
(defun foo ()
#+allegro (do-one-thing)
#+sbcl (do-another-thing)
#+clisp (something-else)
#+cmu (yet-another-version)
#-(or allegro sbcl clisp cmu) (error "Not implemented"))
Is anyone aware of a similar mechanism for Scheme? There are sometimes subtle differences between different implementations of Scheme, which, when you're trying to be portable, would be nice to abstract away. One such case that comes to my mind is Racket not providing mutable pairs by default. Instead of writing e.g. (set-cdr! lst '(1 2 3)) you would have to use set-mcdr! and only after you ran (require racket/mpair). Of course, such things could be abstracted by functions and/or macros, but I think the Common Lisp approach is neat in this aspect.
The closest thing is cond-expand (aka SRFI 0), which is available in some Schemes but not others (Racket, for example, doesn't have it, and your code won't compile if you try to use it). In those Schemes that do have it, it looks like a cond form, except that you test booleans which tell you things about the compiler/interpreter. On some Schemes you can detect which Scheme you're running on, while on others you can only check for SRFIs:
(cond-expand
  (chicken
   'bok-bok-bok!)
  ((and guile srfi-3432)
   'this-guile-is-full-of-SRFI!)
  (else
   '(might be MIT Scheme, whose cond-expand only tests for SRFIs)))
I am confused about eval. I looked at the specification of eval in schemers.org. It says
procedure: (eval expression environment-specifier)
It indicates to me that environment-specifier is a mandatory argument. However, when I tested eval using two interpreters, the one at repl.it and Elk Scheme, both of them work without an environment-specifier. My question is: are they both non-conformant interpreters, or did I read the documentation at schemers.org wrong?
And then..
Elk Scheme has no problem evaluating (eval 5) and (eval (list + 5 6)), but the Scheme interpreter at repl.it is not able to evaluate them. The latter will evaluate (eval `(+ 5 6)) fine, but not the first two expressions. My question is: is the behavior of the repl.it interpreter conformant?
How do other Scheme interpreters deal with the first two expressions?
The Scheme report is what you need to follow to write compatible programs. Beyond that, implementations can have their own syntax and procedures, and that won't interfere, since a standard-conforming program wouldn't use them. Not all Scheme reports made eval mandatory, so you need to find out which report an implementation conforms to and whether it needs some switch to follow the standard; e.g. Ikarus needs the --r6rs-script switch to run R6RS programs correctly. I think Elk is R4RS, so eval is not specified in that report, and BiwaScheme seems to have references to R6RS in its source, so it should take a second argument. That it works is no proof it's conformant, so you should dig a little into their documentation.
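For comparison, here is what the mandatory-environment form looks like in an R7RS system (a sketch; library names are from R7RS-small, and the interaction-environment call assumes you are at a REPL):

(import (scheme base) (scheme eval) (scheme repl))

(eval '(+ 5 6) (environment '(scheme base)))  ; => 11

(define a 1)
(eval 'a (interaction-environment))           ; sees REPL definitions such as a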
Also, for everything the report leaves undefined, an implementation may choose a particular behavior and still conform to the report. E.g. I've seen define return the object that was bound, and the ! procedures return the object they mutated, and it's all according to the report, since any value is as good as the report's undefined value.
Also, very few errors in the report actually demand that an error be signaled. Since it's not mandatory to signal errors, it's up to the developer to make sure they do not do anything that is considered an error. An implementation may return an incredibly wrong value, crash, or, if it's a nice implementation, signal an error; any of those is equally fine by the report. In fact, this is one such case:
(define (test) "hello")
(string-set! (test) 0 #\H) ; might signal an error
(test) ; might evaluate to "hello", "Hello"
In most Scheme implementations you won't get any error, and the call will probably make test return "Hello" from then on (the string literal itself gets mutated). The report specifically states this to be an error, so you should never write a program like this for any Scheme interpreter, since the outcome is undefined.
As the Wikipedia article explains, begin in Scheme is a library form that can be rewritten using more fundamental forms like lambda.
But how do you rewrite a begin, especially considering the following?
x
===> error: undefined identifier: x
(begin (define x 28) x)
===> 28
x
===> 28
You cannot. The thing is that begin has two roles: one is to sequence a bunch of side-effectful expressions, and the other is that it is used to "splice" macro results. The fact that you can use begin with a definition as above is a result of this second feature, and you cannot write it yourself.
If you really want to follow the whole story, then you could define begin as the simple macro which makes it do only the sequencing aspect (and it can indeed be implemented as such, though usually it isn't). But, you need to add explicit recognition of begins to splice definitions (toplevel or internal). This means that a macro implementation is fine, but it cannot really be a library because the core expander should know about it. (And because the language is lexically scoped, there is no good way for the core expander to identify begins that are not defined in the core language.)
To summarize all of this, you could say that R5RS is only wrong in classifying begin as "library syntax", since it can't be defined in a library... but even that's not entirely accurate, since R5RS defines "library syntax" as just "derived expressions". The real wrong point is, therefore, that one of begin's two faces is implemented elsewhere, in the expander (for definition contexts).
Note also that R6RS clarifies the whole deal: the two faces of begin are made explicit, and it is now part of the core language, not a "library form", and not even a derived form.
You are still welcome to try writing a version of begin which satisfies its first role: sequencing.
(define-syntax sequencing
(syntax-rules ()
[(_ expression) expression]
[(_ expression expressions ...)
((lambda (ignored) (sequencing expressions ...)) expression)]))
Here is the post from which I took that snippet. It provides better context if you are interested, and you might be.
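To check that the macro really sequences left to right, you can try something like this (assuming a Scheme that accepts the square-bracket syntax above):

(sequencing
 (display "first") (newline)
 (display "second") (newline))
;; prints first, then second; returns the last expression's (unspecified) value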