Scheme: variable required in this context - scheme

I'm using the MIT Scheme compiler to learn Scheme.
The program I'm writing must compute the roots of a quadratic equation via Vieta's theorem.
(define (roots p q x-begin x-end)
  (let ((x1 0.0) (x2 0.0))
    (set! (x1 x-begin)) ; Error here: Variable required in this context: (x1 x-begin)
    (set! (x2 x-begin))
    ; ...
    ))
I guess the error has to do with static scope in Scheme.
What am I doing wrong?
P.S. Sorry for my English.

I'm not sure how you intend to calculate the roots, but I can provide some advice regarding Scheme syntax. This is incorrect:
(set! (x1 x-begin))
It should be:
(set! x1 x-begin)
In general, using set! should be avoided whenever possible: in Scheme we try hard to write programs that follow the functional-programming paradigm, and that includes not reassigning variables.
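For illustration, here is one way the procedure might be written without set!, assuming the intent is to scan the interval [x-begin, x-end] for a value x1 satisfying Vieta's relations x1 + x2 = -p and x1 * x2 = q for x^2 + px + q = 0. That interval-search interpretation is only a guess on my part, so treat this as a sketch rather than the intended algorithm:
(define (roots p q x-begin x-end)
  ; x1 walks across the interval; its Vieta partner is x2 = -p - x1
  (let loop ((x1 x-begin))
    (cond ((> x1 x-end) #f)                ; no root found in the interval
          ((= (* x1 (- (- p) x1)) q)       ; check the product relation x1 * x2 = q
           (list x1 (- (- p) x1)))         ; return both roots
          (else (loop (+ x1 1))))))
Because the loop rebinds x1 on each iteration instead of mutating it, no set! is needed.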

Related

SICP stream integrator

In SICP chapter 3.5 the code for an integrator is given:
(define (solve f y0 dt)
  (define y (integral dy y0 dt))
  (define dy (stream-map f y))
  y)
a signal-processing system for solving the differential equation dy/dt = f(y), where f is a given function. The figure shows a mapping component, which applies f to its input signal, linked in a feedback loop to an integrator in a manner very similar to that of the analog computer circuits that are actually used to solve such equations.
What is the real-life usage of the given program?

How does this Scheme code return a value?

This code is taken from Sussman and Wisdom's Structure and Interpretation of Classical Mechanics; its purpose is to derive (something close to) the smallest positive floating-point number the host machine supports.
https://github.com/hnarayanan/sicm/blob/e37f011db68f8efc51ae309cd61bf497b90970da/scmutils/src/kernel/numeric.scm
Running it in DrRacket results in 2.220446049250313e-016 on my machine.
My question, what causes this to even return a value? This code is tail recursive, and it makes sense at some point the computer can no longer divide by 2. Why does it not throw?
(define *machine-epsilon*
  (let loop ((e 1.0))
    (if (= 1.0 (+ e 1.0))
        (* 2 e)
        (loop (/ e 2)))))
*machine-epsilon*
This code is tail recursive, and it makes sense at some point the computer can no longer divide by 2. Why does it not throw?
No, the idea is different: at some point the computer can still divide by 2, but the result (e) becomes indistinguishable from 0 [upd: in the context of floating-point addition only, a very good point mentioned in the comments] (e + 1.0 = 1.0, which is exactly what the if clause is checking). We know for sure that the previous e was still greater than zero "from the machine's point of view" (otherwise we wouldn't have reached the current execution point), so we simply return e*2.
This form of let-binding (named let) is syntactic sugar for recursion.
You may want to avoid using too much syntax until you master the language, and write as much as possible in the kernel language, to focus on the essential problem. For example, the full SICP text never uses this syntactic sugar for iteration.
The R6RS definition of this iteration construct is here.
The purpose of this code is not to find the smallest float that the machine can support: it is to find the smallest float epsilon such that (= (+ 1.0 epsilon) 1.0) is false. This number is useful because it is an upper bound on the relative error you get from adding numbers. In particular, what you know is that, say, (+ x y) is in the range [(x+y)*(1 - epsilon), (x+y)*(1 + epsilon)], where in the second expression + &c mean the ideal operations on numbers.
In particular, (/ *machine-epsilon* 2) is a perfectly fine number, as is (/ *machine-epsilon* 10000) for instance, and (* (/ *machine-epsilon* x) x) will be very close to *machine-epsilon* for many reasonable values of x. It's just the case that (= (+ (/ *machine-epsilon* 2) 1.0) 1.0) is true.
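A quick way to convince yourself of this distinction is to evaluate a few expressions at the REPL; on a typical IEEE double implementation (exact printed values may vary) they behave as described above:
(/ *machine-epsilon* 2)                  ; still a perfectly fine nonzero float
(= 0.0 (/ *machine-epsilon* 2))          ; => #f, it has not underflowed to zero
(= 1.0 (+ 1.0 (/ *machine-epsilon* 2)))  ; => #t, but it vanishes when added to 1.0
(= 1.0 (+ 1.0 *machine-epsilon*))        ; => #f, epsilon itself does not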
I'm not familiar enough with the floating-point standards, but the number you are probably thinking of is what Common Lisp calls least-positive-double-float (or one of its variants). In Racket you can derive an approximation to it with:
(define *least-positive-mumble-float*
  ;; I don't know what float types Racket has, or if it even has more than one.
  (let loop ([t 1.0])
    (if (= (/ t 2) 0.0)
        t
        (loop (/ t 2)))))
I am not sure if this is allowed to raise an exception: it does not in practice and it gets a reasonable-looking answer.
It becomes clearer when you get rid of the confusing named let notation.
(define (calculate-epsilon (epsilon 1.0)) ; optional argument with a default (Racket syntax)
  (if (= 1.0 (+ 1.0 epsilon))
      (* epsilon 2)
      (calculate-epsilon (/ epsilon 2))))
(define *machine-epsilon* (calculate-epsilon))
This is what the code actually does.
So now we see what the named let expression is good for:
it locally defines the function and runs it. It's just that naming the function loop is imprecise and confusing, and shortening epsilon to e is a very unhappy choice. Naming is one of the most important things for readable code.
So this SICP example is also an example of bad naming choices. (Okay, maybe they did it intentionally, to train the students.)
The named let defines and calls/runs a function/procedure. Avoiding it leads to better code, since it is clearer.
In Common Lisp such a construct would be expressed much more clearly:
(defparameter *machine-epsilon*
  (labels ((calculate-epsilon (&optional (epsilon 1.0))
             (if (= 1.0 (+ 1.0 epsilon))
                 (* epsilon 2)
                 (calculate-epsilon (/ epsilon 2)))))
    (calculate-epsilon)))
In the CLISP implementation this gives 1.1920929E-7, because the literal 1.0 is a single-float in Common Lisp.

Racket - lang plai - define-type and type-case explanations

Can someone try to explain the two forms define-type and type-case in the PLAI language for Racket? I'm a noob programmer and I don't really understand the documentation on the Racket website. If anyone could provide examples, it would be greatly appreciated. Thanks.
Here is a little example of how to use define-type and type-case:
#lang plai

; A ListOfNumbers is either
;   an empty list of numbers,
;   or is constructed from two things, a and d,
;   where a is a number and d is a list of numbers.
(define-type ListOfNumbers
  (Empty)
  (Cons (a number?) (d ListOfNumbers?)))

; construct a list of numbers as an example
(define a-list (Cons 42 (Cons 43 (Empty))))

a-list ; prints: (Cons 42 (Cons 43 (Empty)))

(type-case ListOfNumbers a-list
  (Empty () "the list is empty")
  (Cons (a d) (~a "the first number in the list is " a)))
; prints: "the first number in the list is 42"
I'm not super experienced with Lisp/Scheme/Racket, but it looks like this question is still unanswered after 5 years, so I'll give it a shot.
First of all, note that not everything is a function. For example, when you use define to define a function or some other value, define is not acting as a function. A function is something that takes some input and returns some output. define does not do this. Instead, it changes the environment you're programming in so that a new name exists which can be used to refer to some value.
So for example, in...
(define cadr
  (lambda (x)
    (car (cdr x))))
... define modifies the programming environment so that the function cadr now exists. cadr is a function (if you invoke it with some input, it will yield some output), but define itself is not a function (you're not invoking define with some input in order to get some output).
With that distinction hopefully cleared up: define-type is not a function. It is similar to define in that it modifies the programming environment so that you have new names to refer to certain values. It is used to define a new type, along with some functions that allow you to work with that type.
An example taken from the Racket documentation:
> (define-type Shape
    [circle (radius : number)]
    [rectangle (width : number)
               (height : number)])
> (define (area [s : Shape])
    (type-case Shape s
      [circle (r) (* (* r r) 3.14)]
      [rectangle (w h) (* w h)]))
> (area (circle 1))
- number
3.14
> (area (rectangle 2 3))
- number
6
Here it defines a new type Shape, which it says has two variants: circle and rectangle. It further says that in the case of the circle variant, the interesting piece of data is its radius, which is a number; in the rectangle variant, there are two pieces of data (or "fields"), its width and height (both numbers).
It then defines a new function area, which is expected to take a single input of type Shape (the type we just declared). The type-case expression is used to specify how to compute the area of a Shape depending on which variant we're dealing with. If we're dealing with a circle, we can compute the area by squaring the radius and multiplying it by pi. If we're dealing with a rectangle, we can compute the area by multiplying its width by its height.
Earlier, I said define-type is not a function, but by virtue of using it, it defines a new type and a bunch of functions that allow us to work with that type. So what are these new functions it defines? See this example:
> (define c (circle 10))
> c
- Shape
(circle 10)
> (circle? c)
- boolean
#t
> (circle-radius c)
- number
10
> (define r (rectangle 2 3))
> (+ (rectangle-width r) (rectangle-height r))
- number
5
Here we use define to modify the programming environment so that the name c refers to a circle we created with radius 10. circle? is a function that was automatically created when we used define-type in the earlier example, and it returns whether or not the shape we're dealing with is a circle variant (as opposed to a rectangle variant). Similarly, the circle-radius, rectangle-width and rectangle-height functions were automatically defined for us by define-type, and they let us access the fields inside the data type.

Algorithm evaluating user-defined functions

Hello, I have some homework that consists of extending a Lisp interpreter. We are to build three primitives with pre-evaluated arguments (for example <=), and three primitives that do their own evaluation (for example if).
I went beyond the call of duty and created the only fun function within the bounds of this exercise: defun (the Common Lisp keyword for defining a user function).
I would like to know if my algorithm for handling a user-defined function call is worthwhile.
In pseudocode, here it goes:
get list of parameters                        # (x y z)
get list of arguments                         # (1 2 3)
get body of function                          # (+ x (* y z))
for each parameter, arg:                      # x
    body = replace(parameter, argument, body) # (+ 1 (* y z))
                                              # (+ 1 (* 2 z))
                                              # (+ 1 (* 2 3))
eval(body)                                    # 7
Are there better ways to accomplish this?
Thanks.
EDIT: replace() is a function recursing on sub-lists of body.
I never found anything better, no one proposed anything better, the question generated no interest whatsoever, and I'm on a rampage to close my open questions, so here is the answer:
my algorithm was good enough.
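For illustration, here is a minimal sketch in Scheme of the substitution approach the pseudocode describes (the original interpreter is written in Common Lisp, and the names subst-in-body and apply-user-function are hypothetical, not taken from the question):
; replace every occurrence of param with arg throughout body,
; recursing into sub-lists as described in the EDIT above
(define (subst-in-body param arg body)
  (cond ((eq? body param) arg)
        ((pair? body) (cons (subst-in-body param arg (car body))
                            (subst-in-body param arg (cdr body))))
        (else body)))

; substitute each argument for its parameter, then hand the
; fully substituted body to the interpreter's evaluator
(define (apply-user-function params args body evaluate)
  (if (null? params)
      (evaluate body)
      (apply-user-function (cdr params) (cdr args)
                           (subst-in-body (car params) (car args) body)
                           evaluate)))
With params (x y z), args (1 2 3), and body (+ x (* y z)), successive substitutions yield (+ 1 (* y z)), (+ 1 (* 2 z)), (+ 1 (* 2 3)), which evaluates to 7, matching the pseudocode trace.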

Why do function calls slow things down in clojure?

I've been playing around with the code from Is Clojure Still Fast? (and its prequel, Clojure is Fast). It seemed unfortunate that inlining the differential equation (f) is one of the steps taken to improve performance. The cleanest/fastest thing I've been able to come up with without doing this is the following:
; As in the referenced posts, for giving a rough measure of cycles/iteration
; (I know this is a very rough estimate...)
(def cpuspeed 3.6) ;; My computer runs at 3.6 GHz

(defmacro cyclesperit [expr its]
  `(let [start#  (. System (nanoTime))
         ret#    (~@expr (/ 1.0 ~its) ~its)
         finish# (. System (nanoTime))]
     (println (int (/ (* cpuspeed (- finish# start#)) ~its)))))
;; My solution
(defn f [^double t ^double y] (- t y))

(defn mysolveit [^double t0 ^double y0 ^double h ^long its]
  (if (> its 0)
    (let [t1 (+ t0 h)
          y1 (+ y0 (* h (f t0 y0)))]
      (recur t1 y1 h (dec its)))
    [t0 y0 h its]))
; => 50-55 cycles/it
; The fastest solution presented by the author (John Aspden) is
(defn faster-solveit [^double t0 ^double y0 ^double h ^long its]
  (if (> its 0)
    (let [t1 (+ t0 h)
          y1 (+ y0 (* h (- t0 y0)))]
      (recur t1 y1 h (dec its)))
    [t0 y0 h its]))
; => 25-30 cycles/it
The type hinting in my solution helps quite a bit (it's 224 cycles/it without type hinting on either f or solveit), but it's still nearly 2x slower than the inlined version. Ultimately this performance is still pretty decent, but the hit is unfortunate.
Why is there such a performance hit for this? Is there a way around it? Are there plans to find ways of improving this? As pointed out by John in the original post, it seems funny/unfortunate for function calls to be inefficient in a functional language.
Note: I'm running Clojure 1.5 and have :jvm-opts ^:replace [] in my project.clj file so that I can use lein exec/run without it slowing things down (and it will if you don't do this, I discovered...).
Benchmarking in the presence of a JIT compiler is tricky; you really must allow for a warm-up period, but then you also can't just run it all in a loop, since it may then be proved a no-op and optimized away. In Clojure, the usual solution is to use Hugo Duncan's Criterium.
Running a Criterium benchmark for (solveit 0.0 1.0 (/ 1.0 1000000) 1000000) for both versions of solveit results in pretty much exactly the same timings on my machine (mysolveit ~3.44 ms, faster-solveit ~3.45 ms). That's in a 64-bit JVM run with -XX:+UseConcMarkSweepGC, using Criterium 0.4.2 (criterium.core/bench). Presumably HotSpot just inlines f. In any case, there's no performance hit at all.
Adding to the already good answers: the JVM JIT most often does inline primitive function calls once warmed up, and in this case, when you benchmark it with a warmed JIT, you see the same results. I just wanted to add that Clojure also has an inlining feature for cases where that yields benefits:
(defn f
  {:inline-arities #{2}
   :inline (fn [t y] `(- (double ~t) (double ~y)))}
  ^double [^double t ^double y]
  (- t y))
Now Clojure will compile away the calls to f, inlining the function at compile time, whereas the JIT would otherwise inline the function at runtime as needed.
Also note that I added a ^double type hint to the return of f. If you don't do that, it gets compiled to return Object and a cast needs to be added. I'm not sure if that really affects performance much, but if you want a fully primitive function that takes primitives and returns primitives, you need to type hint the return as well.
