DrRacket running R5RS says that 1### is a perfectly valid Scheme number and prints a value of 1000.0. This leads me to believe that the pound signs (#) specify inexactness in a number, but I'm not certain. The spec also says that it is valid syntax for a number literal, but it does not say what those signs mean.
Any ideas as to what the # signs in Scheme number literals signify?
The hash syntax was introduced in 1989. There was a discussion about inexact numbers on the Scheme authors' mailing list, which contains several nice ideas. Some caught on and some didn't.
http://groups.csail.mit.edu/mac/ftpdir/scheme-mail/HTML/rrrs-1989/msg00178.html
One idea that stuck was introducing the # to stand for an unknown digit.
If you have a measurement with two significant digits, you can write 23## to indicate that the digits 2 and 3 are known but the last two digits are not. If you write 2300 instead, a reader can't tell that the two zeros aren't to be trusted. When I first saw the syntax I expected 23## to evaluate to 2350, but (I believe) the interpretation is implementation dependent; many implementations interpret 23## as 2300.
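You can see this at an R5RS REPL (a sketch; the printed values assume an implementation that, like DrRacket's R5RS language, reads # digits as 0):

23##   ; => 2300.0 : 2 and 3 are known, the last two digits are not
1###   ; => 1000.0 : the literal from the question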
The syntax was formally introduced here:
http://groups.csail.mit.edu/mac/ftpdir/scheme-mail/HTML/rrrs-1989/msg00324.html
EDIT
From http://groups.csail.mit.edu/mac/ftpdir/scheme-reports/r3rs-html/r3rs_8.html#SEC52
An attempt to produce more digits than are available in the internal
machine representation of a number will be marked with a "#" filling
the extra digits. This is not a statement that the implementation
knows or keeps track of the significance of a number, just that the
machine will flag attempts to produce 20 digits of a number that has
only 15 digits of machine representation:
3.14158265358979##### ; (flo 20 (exactness s))
EDIT2
Gerald Jay Sussman writes about why they introduced the syntax here:
http://groups.csail.mit.edu/mac/ftpdir/scheme-mail/HTML/rrrs-1994/msg00096.html
Here are the R4RS and R5RS docs regarding numerical constants:
R4RS 6.5.4 Syntax of numerical constants
R5RS 6.2.4 Syntax of numerical constants.
To wit:
If the written representation of a number has no exactness prefix, the constant may be either inexact or exact. It is inexact if it contains a decimal point, an exponent, or a "#" character in the place of a digit, otherwise it is exact.
As far as I can tell, the # signs don't mean anything beyond that; the digits they replace are simply read as 0.
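Applying that rule at a REPL (again a sketch, assuming an R5RS-style implementation):

(exact? 100)   ; => #t : plain digits stay exact
(exact? 1##)   ; => #f : a # in place of a digit forces inexactness
(exact? 100.)  ; => #f : so does a decimal point
(exact? 1e2)   ; => #f : or an exponent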
Related
My understanding is that the single quote ' in Scheme is used to tell Scheme that what follows is a symbol and not a variable. Hence, it should not be evaluated.
Based on this understanding, I don't get why Chicken prints 1.0 when I enter '3/3 at the REPL.
CHICKEN
(c) 2008-2016, The CHICKEN Team
(c) 2000-2007, Felix L. Winkelmann
Version 4.11.0
linux-unix-gnu-x86-64 [ 64bit manyargs dload ptables ]
compiled 2016-08-23 on buildvm-13.phx2.fedoraproject.org
#;1> '3/3
1.0
I expected it to print 3/3. Why does this get evaluated even though the quote is present?
Thanks.
Quote is a syntax which expands to a quote expression. That is to say, 'X means (quote X), whatever X is. quote is an operator whose value is the argument syntax itself. For instance, the value of (quote (+ 2 2)) is the list (+ 2 2) itself, rather than the value 4. Likewise, (quote a) yields the symbol a rather than the value of the expression a.
Like other Lisp dialects, Scheme programs are written in a data notation. Every element of the source code of a Scheme program corresponds to an identifiable data structure which a Scheme program could manipulate. quote is a way of gaining access to a piece of the program's body as a literal object, passing that object into the program's space of run-time values.
3/3 is a token which denotes a number. That number is 1.0. Some objects have more than one "spelling". Sometimes you use one spelling when entering the object into the Lisp system, and when it is printed, a different spelling is used.
The 3/3 evaluation is not the usual expression evaluation, but something which occurs when the token is scanned and converted to an object.
Try entering 3/3 without the quote.
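For instance (a sketch; the printed results assume Chicken 4 without the numbers egg, as in the session above):

(quote (+ 2 2))   ; => (+ 2 2) : quote suppresses evaluation of the list
'3/3              ; => 1.0 : the reader already turned the token into 1.0
3/3               ; => 1.0 : same result without the quote
(equal? '3/3 3/3) ; => #t : there is no "3/3 object" left for quote to protect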
Analogy: your question is like:
How come when I type '1.0E3, I get 1000.0? The exponent E3 is being evaluated in spite of the quote!
However, I would expect 3/3 and '3/3 to produce 1 rather than 1.0.
The reason 3/3 denotes 1.0 is that Chicken Scheme doesn't have full support for rational numbers, "out of the box". See this mailing list posting:
https://lists.gnu.org/archive/html/chicken-users/2013-03/msg00032.html
Also see the recommendation: there is an "egg" (Chicken Scheme module) called numbers which provides the "full numerical tower". "Numerical tower" is Lisp jargon for the type system of numbers. A "full tower" means having "the works": complex numbers, rationals, bignum integers, floating-point numbers of various precisions, and so on.
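With the egg in place this should behave as the asker expected (a sketch; it assumes numbers has been installed via chicken-install numbers, and uses the Chicken 4 module syntax matching the session above):

(use numbers)   ; load the full numerical tower
'3/3            ; => 1 : now read as an exact rational, which normalizes to 1
(/ 1 3)         ; => 1/3 : exact rational arithmetic works too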
I have a system that is confined to two alphanumeric characters. Some simple math shows that we get 1,296 combinations if we use all possible pairings of 0-9 and A-Z. Lower case letters cannot be distinguished from upper case, and special characters (including a blank character) cannot be used.
Is there any creative mapping, perhaps to an external reference, to create a way to take this two character field significantly beyond 1,296 combinations?
Examples of identifiers would be 00, OO, AZ, Z4, etc.
Thanks!
I'm afraid not, no more than you could get a 3-bit number to represent more than 8 different values. If you're interested in the details, you can look up information theory or Kolmogorov complexity. Essentially, with only 1,296 combinations you can only label 1,296 distinct pieces of information.
As an example, consider if you had 1,297 things. All of those two-character combinations would be used up by the first 1,296, so what combination would be associated with the next one? It would have to be a repeat of one you had used earlier.
Shannon also has some good material on this, and the implications of that sort of thing form the basis for a lot of file compression systems.
You could maybe squeeze out one more combination if you cheat and allow a 'null' value to represent a different possibility, but that's not really relevant to the idea of the question.
If you are restricted to two characters taken from an alphabet of 36, then you are limited to 36² = 1,296 distinct symbols; that's it.
More context is required to find workarounds, like stealing bits elsewhere, using symbols in pairs, breaking the case limitation, or exploiting the history of transactions...
The precise meaning of "a system that is confined to two alphanumeric characters" needs to be known to be able to suggest a workaround. Is that a space constraint? Do you need the restriction to 2 chars for efficiency? Does it need to work with other code that accepts or generates 2 char indexes?
If you have up to 1,295 identifiers that are used often, and some others that occur only occasionally, you could choose an identifier, e.g. "ZZ", to indicate that another identifier is following. So "00" through to "ZY" would be 1,295 simple 2-char identifiers, and "ZZ00" through to "ZZZZ" would be a further 1,296 combined 4-char identifiers. (Or "ZZ0000" through to "ZZZZZZ" for a further 1296*1296 identifiers...)
This could work for space constraints. For efficiency, it depends on whether the additional check to see if the identifier is "ZZ" is too expensive or not.
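Here is a small sketch of that "ZZ" escape idea in Scheme (the names pair->string and int->code are made up for illustration):

(define alphabet "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ")

(define (pair->string n)   ; 0 <= n < 1296: two base-36 characters
  (string (string-ref alphabet (quotient n 36))
          (string-ref alphabet (remainder n 36))))

(define (int->code n)      ; valid for 0 <= n < 2591
  (if (< n 1295)
      (pair->string n)                                  ; "00" .. "ZY"
      (string-append "ZZ" (pair->string (- n 1295)))))  ; "ZZ00" .. "ZZZZ"

(int->code 0) gives "00", (int->code 1294) gives "ZY", and (int->code 1295) gives "ZZ00", for 1295 + 1296 = 2591 identifiers in total.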
Does ISO-Prolog have any prescriptions / recommendations regarding the representation of negative integers and operations on them? 2's complement, maybe?
Asking as a programmer/user: Are there any assumptions I can safely make when performing bit-level operations on negative integers?
ISO/IEC 13211-1 has several requirements for integers, but a concrete representation is not required. If the integer representation is bounded, one of the following conditions holds:
7.1.2 Integer
...
minint = -(maxint)
minint = -(maxint+1)
Further, the evaluable functors listed in 9.4 Bitwise functors, that is (>>)/2, (<<)/2, (/\)/2, (\/)/2, (\)/1, and xor/2 are implementation defined for negative values. E.g.,
9.4.1 (>>)/2 – bitwise right shift
9.4.1.1 Description
...
The value shall be implementation defined depending on whether the shift is logical (fill with zeros) or arithmetic (fill with a copy of the sign bit). The value shall be implementation defined if VS is negative, or VS is larger than the bit size of an integer.
Note that implementation defined means that a conforming processor has to document this in the accompanying documentation. So before using a conforming processor, you have to read the manual.
De facto, every current Prolog processor (that I am aware of) provides an arithmetic right shift and uses 2's complement.
Strictly speaking these are two different questions:
Actual physical representation: this isn't visible at the Prolog level, and therefore the standard quite rightly has nothing to say about it. Note that many Prolog systems have two or more internal representations (e.g. two's complement fixed size and sign+magnitude bignums) but present a single integer type to the programmer.
Results of bitwise operations: while the standard defines these operations, it leaves much of their behaviour implementation defined. This is a consequence of (a) not having a way to specify the width of a bit pattern, and (b) not committing to a specific mapping between negative numbers and bit patterns.
This not only means that all bitwise operations on negative numbers are officially not portable, but also has the curious effect that the result of bitwise negation is totally implementation-defined (even for positive arguments): Y is \1 could legally give -2, 268435454, 2147483646, 9223372036854775806, etc. All you know is that negating twice returns the original number.
In practice, fortunately, there seems to be a consensus towards "The bitwise arithmetic operations behave as if operating on an unlimited length two's complement representation".
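For comparison, Scheme's bitwise operations (as in Racket, or Schemes with SRFI 151-style names) are specified with exactly that unlimited-length two's-complement model, so they make a handy cross-check of what the consensus behaviour looks like:

(bitwise-not 1)          ; => -2 : ...111110 is -2 in two's complement
(arithmetic-shift -1 -1) ; => -1 : right shift copies the sign bit
(bitwise-and -6 15)      ; => 10 : -6 is ...11111010, whose low 4 bits are 1010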
I have been doing some reading, and I'm having a tough time understanding how to interpret something that is a "digits x".
I.e.
type something is digits 6
I get that it's 6 digits of precision, but I guess what has me mixed up is what that means.
1) Y.XXXXXX (six X's), or
2) XXX.XXX (any placement of the decimal point, with always six digits in total, counting both before and after the point)
...
I'm just trying to understand the range of something that is digits 6 (or digits n, to be more generic). Is there a formula I can simply plug into to determine the ranges of a type that is some number of digits?
A type declared with digits is a floating-point type, similar to Float or Long_Float.
The 6 is "the minimum number of significant decimal digits required for
the floating point type". For example, all the following will be represented reasonably accurately (but not exactly):
type My_Real is digits 6;
X: My_Real := 1.23456;    -- six significant digits
Y: My_Real := 12345.6;    -- the same six digits at a larger magnitude
Z: My_Real := 1.23456E7;  -- exponent notation, still six significant digits
In practice, there are usually just 2 or 3 underlying floating-point types on a given system, and the compiler will choose an appropriate one as the underlying type for your declaration. As a result, two types declared with digits 2 and digits 6 will probably have exactly the same representation and precision.
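As a rough back-of-envelope check (a sketch of the arithmetic only, not the Ada model-number rules), d decimal digits need about d * log2(10), i.e. roughly 3.32 * d, bits of mantissa:

(define (bits-for-digits d)   ; approximate mantissa bits for d decimal digits
  (ceiling (* d (/ (log 10) (log 2)))))

(bits-for-digits 6)    ; => 20. : fits easily in IEEE single (24-bit mantissa)
(bits-for-digits 15)   ; => 50. : fits in IEEE double (53-bit mantissa)

This is why digits 6 and anything smaller will typically land on the same underlying single- or double-precision type.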
Understanding the phrase "not exactly" requires an understanding of floating-point that's well beyond the scope of a single question, but if you're familiar with floating-point in other languages, it's the same general idea.
If you want a general understanding of what floating-point is and how it works, the Wikipedia article isn't bad. A much more advanced treatment is David Goldberg's classic paper "What Every Computer Scientist Should Know About Floating-Point Arithmetic", which is available online both as a web page and as a PDF.
How does the Scheme procedure inexact->exact, described in SICP, operate?
The Scheme standard only gives some general constraints on how exactness/inexactness is recorded, but most Scheme implementations, up to standard R5RS, operate as follows (MIT Scheme, which is SICP's "mother tongue", also works this way):
The type information for each cell that contains data of a numeric type says whether the data is exact or inexact.
Arithmetic operations on the data derive the exactness of the result from the exactness of the inputs, where generally inexactness is infectious: if any of the operands is inexact, the result probably will be too. Note, though, that Scheme implementations are allowed to infer exactness in special cases: say, if you multiply inexact 4.3 by exact 0, you can know the result is exactly 0.
The special operations inexact->exact and exact->inexact are casts on the numeric types, ensuring that the resulting type is exact or inexact respectively.
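At a REPL, that looks something like this (a sketch; the exact values and printed forms vary by implementation):

(exact? 1/3)           ; => #t in Schemes with exact rationals
(exact->inexact 1/3)   ; => .3333333333333333 : cast to inexact
(inexact->exact .5)    ; => 1/2 : .5 is exactly representable in binary
(inexact->exact .1)    ; => 3602879701896397/36028797018963968 with 64-bit
                       ;    floats: the exact value of the stored approximation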
Some points: first, different Scheme standards vary in when operators give exact results; the standards underdetermine what happens. For example, several Scheme implementations have representations for exact rationals, allowing (/ 1 3) to be represented exactly, whereas a Scheme implementation with only floats must represent it inexactly.
Second, R6RS has a different notion of contagion from that of SICP and earlier standards, because the older criterion is, frankly, broken.
Exactness is simply a property of a number: it doesn't change the value of the number itself. So, for an implementation that uses a flag to indicate exactness, inexact->exact simply sets the exactness flag on that number.