Regarding the quote (') in Scheme

My understanding is that the single quote ' in Scheme is used to tell Scheme that what follows is a symbol and not a variable. Hence, it should not be evaluated.
Based on this understanding, I don't get why Chicken prints 1.0 when I enter '3/3 at the REPL.
CHICKEN
(c) 2008-2016, The CHICKEN Team
(c) 2000-2007, Felix L. Winkelmann
Version 4.11.0
linux-unix-gnu-x86-64 [ 64bit manyargs dload ptables ]
compiled 2016-08-23 on buildvm-13.phx2.fedoraproject.org
#;1> '3/3
1.0
I expected it to print 3/3. Why does this get evaluated instead of a quote being present?
Thanks.

Quote is a syntax which expands to a quote expression. That is to say, 'X means (quote X), whatever X is. quote is an operator whose value is the argument syntax itself. For instance, the value of (quote (+ 2 2)) is the list (+ 2 2) itself, rather than the value 4. Likewise, (quote a) yields the symbol a rather than the value of the expression a.
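A minimal sketch of this distinction, in standard Scheme (any implementation should behave the same):

```scheme
(define a 42)

a                 ; evaluates the variable a        => 42
'a                ; (quote a): the symbol itself    => a
(+ 2 2)           ; evaluates the expression        => 4
'(+ 2 2)          ; (quote (+ 2 2)): the list       => (+ 2 2)

;; The quoted form really is a plain list of a symbol and two numbers:
(equal? '(+ 2 2) (list '+ 2 2))  ; => #t
```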
Like other Lisp dialects, Scheme programs are written in a data notation. Every element of the source code of a Scheme program corresponds to an identifiable data structure which a Scheme program could manipulate. quote is a way of gaining access to a piece of the program's body as a literal object, passing that object into the program's space of run-time values.
3/3 is a token which denotes a number. That number is 1.0. Some objects have more than one "spelling". Sometimes you use one spelling when entering the object into the Lisp system, and when it is printed, a different spelling is used.
The 3/3 evaluation is not the usual expression evaluation, but something which occurs when the token is scanned and converted to an object.
Try entering 3/3 without the quote.
Analogy: your question is like:
How come when I type '1.0E3, I get 1000.0? The exponent E3 is being evaluated in spite of the quote!
However, I would expect 3/3 and '3/3 to produce 1 rather than 1.0.
The reason 3/3 denotes 1.0 is that Chicken Scheme doesn't have full support for rational numbers, "out of the box". See this mailing list posting:
https://lists.gnu.org/archive/html/chicken-users/2013-03/msg00032.html
Also see the recommendation: there is an "egg" (Chicken Scheme module) called numbers which provides the full "numerical tower". "Numerical tower" is Lisp jargon for the type system of numbers. A "full tower" means having the works: complex numbers, rationals, bignum integers, floating-point numbers of multiple precisions and so on.
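With a full numeric tower loaded (for CHICKEN 4 that means installing and loading the numbers egg; CHICKEN 5 ships the tower in core, and implementations such as Guile and Racket have it out of the box), rationals read as exact values instead of collapsing to floats:

```scheme
;; CHICKEN 4 only: first `chicken-install numbers`, then (use numbers)

(/ 3 3)           ; => 1, an exact integer, not 1.0
(/ 1 3)           ; => 1/3, an exact rational, not 0.333...
(exact? (/ 1 3))  ; => #t

;; Once the reader understands rational literals, '3/3 also reads as
;; the exact number 1.
```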

Related

Does Guile relax restriction on variable name convention by allowing variable names beginning with number?

For example, it seems that 1+2 can be used in Guile as a variable name:
(define 1+2 4)
1+2 ;==>4
I was surprised to find that R6RS appears not to like identifiers whose names start with a digit (unless they are escaped, perhaps?), if I am reading it properly. It looks as if the same is true for R5RS. I have not looked at other specifications.
So, if my readings of the specs are correct, then yes, Guile is relaxing this requirement. However, as I say, I was surprised by this, as, for instance, Racket is perfectly happy with identifiers like 1+, even when using the r5rs language, and such identifiers are very common in other Lisp-family languages (Common Lisp defines 1+ and 1- in the language itself).
It may however be the case that I am misreading the syntax for <identifier> in the specs, or misinterpreting what they mean.

applicative-order/call-by-value and normal-order/call-by-name differences

Background
I am learning SICP by following an online course and got confused by its lecture notes. In the lecture notes, applicative order seems to equal CBV, and normal order to equal CBN.
Confusion
But the wiki points out that, besides evaluation order (left to right, right to left, or simultaneous), there is a difference between applicative order and CBV:
Unlike call-by-value, applicative order evaluation reduces terms within a function body as much as possible before the function is applied.
I don't understand what it means by "reduces". Aren't applicative order and CBV both going to get the exact value of an argument before entering the function evaluation?
And for normal order and CBN, I am even more confused by the wiki.
In contrast, a call-by-name strategy does not evaluate inside the body of an unapplied function.
I guess it means that normal order would evaluate inside the body of an unapplied function. How could that be?
Question
Could someone give me some more concrete definitions of the four strategies.
Could someone show an example for each strategy, using whatever programming language.
Thanks a lot!
Applicative order (without taking into account the order of evaluation, which in Scheme is undefined) would be equivalent to CBV. All arguments of a function call are fully evaluated before entering the function's body. This is the example given in SICP:
(define (try a b)
  (if (= a 0) 1 b))
If you define the function and call it with these arguments:
(try 0 (/ 1 0))
then under applicative order evaluation (the default in Scheme) this will produce an error: it evaluates (/ 1 0) before entering the body. With normal order evaluation, this would return 1. The arguments are passed unevaluated to the function's body, and (/ 1 0) is never evaluated because (= a 0) is true, avoiding the error.
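Scheme itself is applicative-order, but you can sketch call-by-name by wrapping the argument in a thunk by hand. This is just an illustration (the names try-lazy and b-thunk are made up here), not SICP's code:

```scheme
(define (try a b)
  (if (= a 0) 1 b))

;; Applicative order: evaluating (try 0 (/ 1 0)) signals a
;; division-by-zero error, because (/ 1 0) is evaluated first.

;; Emulated normal order: pass the argument unevaluated as a thunk,
;; forcing it only where the body actually needs its value.
(define (try-lazy a b-thunk)
  (if (= a 0) 1 (b-thunk)))

(try-lazy 0 (lambda () (/ 1 0)))  ; => 1; (/ 1 0) is never evaluated
```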
In the article you link to, they are talking about lambda calculus when they mention applicative and normal order evaluation. I think it is explained more clearly in the Wikipedia article on reduction strategies.
Reduced means applying reduction rules to the expression (also in the link):
α-conversion: changing bound variables (alpha);
β-reduction: applying functions to their arguments (beta);
Normal order:
The leftmost, outermost redex is always reduced first. That is, whenever possible the arguments are substituted into the body of an abstraction before the arguments are reduced.
Call-by-name
As normal order, but no reductions are performed inside abstractions. For example λx.(λx.x)x is in normal form according to this strategy, although it contains the redex (λx.x)x.
A normal form is an equivalent expression that cannot be reduced any further under the rules imposed by the form.
In the same article, they say about call-by-value
Only the outermost redexes are reduced: a redex is reduced only when its right hand side has reduced to a value (variable or lambda abstraction).
And Applicative order:
The leftmost, innermost redex is always reduced first. Intuitively this means a function's arguments are always reduced before the function itself.
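A classic worked example of the difference uses the divergent term Omega, which reduces to itself forever:

```latex
% The term: a constant function applied to a divergent argument
(\lambda x.\, y)\ \Omega
  \quad\text{where}\quad
  \Omega = (\lambda x.\, x\,x)(\lambda x.\, x\,x)

% Normal order: reduce the leftmost, outermost redex first.
% The argument is substituted (and here discarded) before being reduced:
(\lambda x.\, y)\ \Omega \;\to_\beta\; y

% Applicative order: reduce the innermost redex (the argument) first.
% The argument must be reduced before the call, and it never terminates:
(\lambda x.\, y)\ \Omega \;\to_\beta\; (\lambda x.\, y)\ \Omega
  \;\to_\beta\; \cdots
```

Normal order reaches the normal form y; applicative order loops, just as (try 0 (/ 1 0)) errors under Scheme's applicative evaluation.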
You can read the article I linked for more information about lambda-calculus.
Also Programming Language Foundations

The Little Schemer - semantics

I have just started reading The Little schemer. I have some problem understanding some words.
In page 27 it says,
The Law of Eq?
The primitive eq? takes two arguments. Each must be a non-numeric atom.
And a footnote says: In practice, some numbers may be arguments of eq?
I am using racket-minimal as my scheme interpreter. It evaluates (eq? 10 10) to #t.
There are many similar type of problems in TOYS chapter.
What did the author mean by that "must" (marked as bold) and by the footnote?
It's traditional to embed some primitive data types, such as small integers and characters, in the pointer itself, making those values eq? even when the data came to be in the source at different times / points in the source. However, numbers can be any size, so even if numbers up to a certain implementation-dependent size are embedded this way, at some point they will be too big for the pointer. When you try (eq? 10000000000 10000000000) it might be #f on 32-bit systems and #t on 64-bit systems, while (eqv? 10000000000 10000000000) is #t on any system.
Scheme's true identity predicate is eqv?. Eq? is an optimized version that is allowed to report #f instead of #t when applied to numbers, characters, or procedures. Most Scheme implementations of eq? do the right thing on small exact numbers (called "fixnums"), characters, and procedures, but fall down on larger numbers or numbers of other types.
So saying "MUST" means that you get only partly predictable results if you apply eq? to a number; the footnote means that in some cases (and this typically includes 10) you will get away with it. For details on what various Schemes actually do with fixnums, see FixnumInfo at the R7RS development site.
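A sketch of where the two predicates can diverge (eq? on numbers is implementation-dependent, so the commented results are typical, not guaranteed):

```scheme
(eqv? 10 10)                    ; #t in every Scheme
(eq? 10 10)                     ; usually #t: 10 is a fixnum, encoded
                                ; directly in the pointer

(eqv? 10000000000 10000000000)  ; #t in every Scheme
(eq? 10000000000 10000000000)   ; may be #f where this is a heap-allocated
                                ; bignum (e.g. on 32-bit builds)

(eqv? 1.5 1.5)                  ; #t
(eq? 1.5 1.5)                   ; unspecified: flonums are usually boxed
```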

Meaning of # in Scheme number literals

DrRacket running R5RS says that 1### is a perfectly valid Scheme number and prints a value of 1000.0. This leads me to believe that the pound signs (#) specify inexactness in a number, but I'm not certain. The spec also says that it is valid syntax for a number literal, but it does not say what those signs mean.
Any ideas as to what the # signs in Scheme number literals signify?
The hash syntax was introduced in 1989. There was a discussion on inexact numbers on the Scheme authors' mailing list, which contains several nice ideas. Some caught on and some didn't.
http://groups.csail.mit.edu/mac/ftpdir/scheme-mail/HTML/rrrs-1989/msg00178.html
One idea that stuck was introducing the # to stand for an unknown digit.
If you have a measurement with two significant digits, you can indicate with 23## that the digits 2 and 3 are known but the last digits are unknown. If you write 2300, then you can't see that the two zeros aren't to be trusted. When I saw the syntax I expected 23## to evaluate to 2350, but (I believe) the interpretation is implementation dependent. Many implementations interpret 23## as 2300.
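You can check what your implementation does with string->number. The # notation is part of R5RS number syntax; the value substituted for the # digits is up to the implementation, but most use 0, and the # digits always make the number inexact:

```scheme
(string->number "23##")            ; commonly 2300.0
(string->number "1###")            ; commonly 1000.0
(inexact? (string->number "1###")) ; #t: the # digits force inexactness
```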
The syntax was formally introduced here:
http://groups.csail.mit.edu/mac/ftpdir/scheme-mail/HTML/rrrs-1989/msg00324.html
EDIT
From http://groups.csail.mit.edu/mac/ftpdir/scheme-reports/r3rs-html/r3rs_8.html#SEC52
An attempt to produce more digits than are available in the internal
machine representation of a number will be marked with a "#" filling
the extra digits. This is not a statement that the implementation
knows or keeps track of the significance of a number, just that the
machine will flag attempts to produce 20 digits of a number that has
only 15 digits of machine representation:
3.14158265358979##### ; (flo 20 (exactness s))
EDIT2
Gerald Jay Sussman writes about why they introduced the syntax here:
http://groups.csail.mit.edu/mac/ftpdir/scheme-mail/HTML/rrrs-1994/msg00096.html
Here's the R4RS and R5RS docs regarding numerical constants:
R4RS 6.5.4 Syntax of numerical constants
R5RS 6.2.4 Syntax of numerical constants.
To wit:
If the written representation of a number has no exactness prefix, the constant may be either inexact or exact. It is inexact if it contains a decimal point, an exponent, or a "#" character in the place of a digit, otherwise it is exact.
Not sure they mean anything beyond that, other than that the # digits are read as 0.

How does the Scheme function inexact->exact operate?

How does the Scheme procedure inexact->exact, described in SICP, operate?
The Scheme standard only gives some general constraints on how exactness/inexactness is recorded, but most Scheme implementations, up to standard R5RS, operate as follows (MIT Scheme, which is SICP's "mother tongue", also works this way):
The type information for each cell that contains data of a numeric type says whether the data is exact or inexact.
Arithmetic operations on the data record derive the exactness of the result from the exactness of the inputs, where generally inexactness is infectious: if any of the operands is inexact, the result probably will be so too. Note, though, Scheme implementations are allowed to infer exactness in special cases, say if you multiply inexact 4.3 by exact 0, you can know the result is 0 exactly.
The special operations inexact->exact and exact->inexact are casts on the numeric types, ensuring that the resulting type is exact or inexact respectively.
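A sketch of the contagion rule in practice; the (* 0 2.0) case is the one where implementations are allowed to differ:

```scheme
(+ 1 2)      ; => 3     exact + exact -> exact
(+ 1 2.0)    ; => 3.0   inexactness is contagious
(* 2 0.5)    ; => 1.0   still inexact, even though the value is integral
(* 0 2.0)    ; => 0 in implementations that exploit the special case
             ;    (an exact zero annihilates), 0.0 in those that don't
```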
Some points: first, different Scheme standards vary in when operators give exactness or not; the standards underdetermine what happens. For example, several Scheme implementations have representations for exact rationals, allowing (/ 1 3) to be represented exactly, whereas a Scheme implementation with only floats must represent this inexactly.
Second, R6RS has a different notion of contagion from that of SICP and earlier standards, because the older criterion is, frankly, broken.
Exactness is simply a property of a number: it doesn't change the value of the number itself. So, for an implementation that uses a flag to indicate exactness, inexact->exact simply sets the exactness flag on that number.
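And a sketch of the casts themselves; the results shown assume an implementation with exact rationals, such as MIT Scheme, Racket, or Guile:

```scheme
(exact? 0.5)             ; => #f
(inexact->exact 0.5)     ; => 1/2 (0.5 is exactly representable in binary)
(exact->inexact (/ 1 2)) ; => 0.5

;; 0.1 has no exact binary representation, so the cast exposes the
;; exact value of the nearest IEEE double:
(inexact->exact 0.1)     ; => 3602879701896397/36028797018963968
```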
