How does the Scheme procedure inexact->exact, described in SICP, operate?
The Scheme standard only gives some general constraints on how exactness/inexactness is recorded, but most Scheme implementations, up to standard R5RS, operate as follows (MIT Scheme, which is SICP's "mother tongue", also works this way):
The type information for each cell that contains data of a numeric type says whether the data is exact or inexact.
Arithmetic operations derive the exactness of the result from the exactness of the operands, where generally inexactness is infectious: if any of the operands is inexact, the result will usually be inexact too. Note, though, that Scheme implementations are allowed to infer exactness in special cases: say, if you multiply inexact 4.3 by exact 0, the implementation may know that the result is exactly 0.
The special operations inexact->exact and exact->inexact are casts on the numeric types, ensuring that the resulting type is exact or inexact respectively.
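For a concrete sketch of the observable behaviour (assuming an implementation with exact rationals, such as MIT Scheme or Racket; printed results may differ elsewhere):
(exact? (+ 1 2))       ; => #t, exact operands give an exact result
(exact? (+ 1 2.0))     ; => #f, inexactness is contagious
(inexact->exact 0.5)   ; => 1/2, cast to an exact rational
(exact->inexact 1/2)   ; => 0.5, cast to an inexact float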
Some points: first, different Scheme standards vary in when operations yield exact results; the standards underdetermine what happens. For example, several Scheme implementations have representations for exact rationals, allowing (/ 1 3) to be represented exactly, whereas a Scheme implementation with only floats must represent this inexactly.
Second, R6RS has a different notion of contagion from that of SICP and earlier standards, because the older criterion is, frankly, broken.
Exactness is simply a property of a number: it doesn't change the value of the number itself. So, for an implementation that uses a flag to indicate exactness, inexact->exact simply sets the exactness flag on that number.
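For example (a sketch; the exact printed form depends on the implementation's number representations):
(inexact->exact 2.0)          ; => 2, same value, exactness flag now set
(= 2.0 (inexact->exact 2.0))  ; => #t, the numeric value is unchanged
(inexact->exact 0.1)          ; => 3602879701896397/36028797018963968
                              ;    with IEEE doubles: the flag changes, not the value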
While using the VHDL-2019 IEEE spec
section 5.2.3.1 General
"However, an implementation shall allow the declaration of any integer
type whose range is wholly contained within the bounds –(2**63) and
(2**63)–1 inclusive."
(I added the exponential **)
Does this mean –(2**63) = -9223372036854775808?
In the 1993 spec it states -((2**31) - 1) and ((2**31) - 1)
-2147483647 & 2147483647
Does the new VHDL spec have an error in that definition?
Ken
The change is quite intentional. See LCS2016_026c. You could note this gives the same range as a 64-bit integer in programming languages. The non-symmetrical range comes from two's complement numbers, which are the basis of integer types in VHDL tool implementations; the age of big iron with decimal-based ALUs has long faded.
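For the arithmetic in the question, the bounds work out as asked (computed here in Scheme, though any bignum-capable language gives the same values):
(- (expt 2 63))    ; => -9223372036854775808, the 2019 lower bound
(- (expt 2 63) 1)  ; => 9223372036854775807, the 2019 upper bound
(- (expt 2 31) 1)  ; => 2147483647, the symmetric 1993 bound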
The previous symmetrical range was not an implementation concern: VHDL arithmetic semantics require run-time detection of overflow or underflow. This change allows simpler detection based on a change of sign, without testing values, while performing arithmetic in a universal integer type wider than 64 bits.
The value range increase is an attempt to force synthesis vendors to support more than the minimum range specified in previous editions of the standard. How well that works out (and over what implementation interval) will be a matter of history at some future date. There are also secondary effects based on index ranges (IEEE Std 1076-2019 5.3.2.2 Index constraints and discrete ranges) and positional correspondence for enumerated types (5.2.2 Enumerated types, 5.2.2.1 General). It's not practicable to simulate (or synthesize) composite objects with extreme index value ranges, starting with stack size issues. Industry practice isn't settled, and may not settle before today's HDLs are obsolete.
Concerns about the accuracy of the standard's semantic description can be addressed to the IEEE-SA's VASG subcommittee, which encourages participation by interested parties. You will find Stack Overflow vhdl tag denizens here who have been involved in the standardization process.
Does ISO-Prolog have any prescriptions / recommendations
regarding the representation of negative integers and operations on them? 2's complement, maybe?
Asking as a programmer/user: Are there any assumptions I can safely make when performing bit-level operations on negative integers?
ISO/IEC 13211-1 has several requirements for integers, but a concrete representation is not required. If the integer representation is bounded, one of the following conditions holds
7.1.2 Integer
...
minint = -(maxint)
minint = -(maxint+1)
Further, the evaluable functors listed in 9.4 Bitwise functors, that is (>>)/2, (<<)/2, (/\)/2, (\/)/2, (\)/1, and xor/2 are implementation defined for negative values. E.g.,
9.4.1 (>>)/2 – bitwise right shift
9.4.1.1 Description
...
The value shall be implementation defined depending on whether the shift is logical (fill with zeros) or arithmetic (fill with a copy of the sign bit). The value shall be implementation defined if VS is negative, or VS is larger than the bit size of an integer.
Note that implementation defined means that a conforming processor has to document this in the accompanying documentation. So before using a conforming processor, you have to read the manual.
De facto, every current Prolog processor (that I am aware of) provides arithmetic right shift and uses 2's complement.
Strictly speaking these are two different questions:
Actual physical representation: this isn't visible at the Prolog level, and therefore the standard quite rightly has nothing to say about it. Note that many Prolog systems have two or more internal representations (e.g. two's complement fixed size and sign+magnitude bignums) but present a single integer type to the programmer.
Results of bitwise operations: while the standard defines these operations, it leaves much of their behaviour implementation defined. This is a consequence of (a) not having a way to specify the width of a bit pattern, and (b) not committing to a specific mapping between negative numbers and bit patterns.
This not only means that all bitwise operations on negative numbers are officially not portable, but also has the curious effect that the result of bitwise negation is totally implementation-defined (even for positive arguments): Y is \1 could legally give -2, 268435454, 2147483646, 9223372036854775806, etc. All you know is that negating twice returns the original number.
In practice, fortunately, there seems to be a consensus towards "The bitwise arithmetic operations behave as if operating on an unlimited length two's complement representation".
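To illustrate what that consensus model implies, here is a sketch using Racket's bitwise operations, which follow the same unbounded two's complement convention (a Prolog adopting it gives the corresponding results for (\)/1, (>>)/2, and (/\)/2):
(bitwise-not 1)           ; => -2, since ...11110 denotes -2
(arithmetic-shift -8 -1)  ; => -4, right shift is arithmetic (sign filling)
(bitwise-and -1 6)        ; => 6, -1 behaves as an all-ones bit pattern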
For example, if you try (+ 3 4), how is it broken down and calculated in the source, specifically? Does it use recursion with add1?
The implementation of + is actually much more complicated than you might expect, because arithmetic is generic in Racket: it works on integers, rational numbers, complex numbers, and so on. You can even mix and match these kinds of numbers and it'll do the right thing. Ultimately, it's going to use arithmetic in C, which is what the runtime system is written in.
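For instance, all of these go through the same generic +, as printed by Racket:
(+ 3 4)      ; => 7, fixnum arithmetic
(+ 1/2 1/3)  ; => 5/6, exact rational arithmetic
(+ 1 2.5)    ; => 3.5, mixing exact and inexact
(+ 1 +2i)    ; => 1+2i, complex arithmetic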
If you're curious, you can find more of the guts of the numeric tower here: https://github.com/plt/racket/blob/master/src/racket/src/numarith.c
Other pointers: Bignum arithmetic, the Scheme numeric tower, the Racket reference on numbers.
The + operator is a primitive operation, part of the core language. For efficiency reasons, it wouldn't make much sense to implement it as a recursive procedure.
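That said, the recursion the question asks about is how SICP-style presentations define addition on nonnegative integers. A toy version for comparison (my-add is a made-up name, not anything Racket uses internally):
; Peano-style addition: apply add1 to a, b times
(define (my-add a b)
  (if (zero? b)
      a
      (my-add (add1 a) (sub1 b))))
(my-add 3 4)  ; => 7, correct but far slower than the + primitive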
DrRacket running R5RS says that 1### is a perfectly valid Scheme number and prints a value of 1000.0. This leads me to believe that the pound signs (#) specify inexactness in a number, but I'm not certain. The spec also says that it is valid syntax for a number literal, but it does not say what those signs mean.
Any ideas as to what the # signs in Scheme number literals signify?
The hash syntax was introduced in 1989. There was a discussion about inexact numbers on the Scheme authors' mailing list, which contains several nice ideas. Some caught on and some didn't.
http://groups.csail.mit.edu/mac/ftpdir/scheme-mail/HTML/rrrs-1989/msg00178.html
One idea that stuck was introducing the # to stand for an unknown digit.
If you have a measurement with two significant digits, you can indicate that with 23##: the digits 2 and 3 are known, but the last digits are unknown. If you write 2300, you can't see that the two zeros aren't to be trusted. When I saw the syntax I expected 23## to evaluate to 2350, but (I believe) the interpretation is implementation dependent. Many implementations interpret 23## as 2300.
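A quick check at an R5RS prompt (results from DrRacket's R5RS language; the fill digit is implementation dependent, as noted above):
23##           ; reads as the inexact number 2300.0
(exact? 1###)  ; => #f, the # digits force inexactness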
The syntax was formally introduced here:
http://groups.csail.mit.edu/mac/ftpdir/scheme-mail/HTML/rrrs-1989/msg00324.html
EDIT
From http://groups.csail.mit.edu/mac/ftpdir/scheme-reports/r3rs-html/r3rs_8.html#SEC52
An attempt to produce more digits than are available in the internal
machine representation of a number will be marked with a "#" filling
the extra digits. This is not a statement that the implementation
knows or keeps track of the significance of a number, just that the
machine will flag attempts to produce 20 digits of a number that has
only 15 digits of machine representation:
3.14158265358979##### ; (flo 20 (exactness s))
EDIT2
Gerald Jay Sussman writes about why he introduced the syntax here:
http://groups.csail.mit.edu/mac/ftpdir/scheme-mail/HTML/rrrs-1994/msg00096.html
Here's the R4RS and R5RS docs regarding numerical constants:
R4RS 6.5.4 Syntax of numerical constants
R5RS 6.2.4 Syntax of numerical constants.
To wit:
If the written representation of a number has no exactness prefix, the constant may be either inexact or exact. It is inexact if it contains a decimal point, an exponent, or a "#" character in the place of a digit, otherwise it is exact.
Not sure the # digits mean anything beyond that; in practice they are read as if they were 0.
Does emacs have support for big numbers that don't fit in integers? If it does, how do I use them?
Emacs Lispers frustrated by Emacs's lack of bignum handling: calc.el provides very good bignum capabilities. —EmacsWiki
calc.el is part of the GNU Emacs distribution. See its source code for the available functions. You can immediately start playing with it by typing M-x quick-calc. You may also want to check the bigint.el package, a non-standard, lightweight implementation for handling bignums.
Emacs 27.1 supports bignums natively (see the NEWS file of Emacs):
** Emacs Lisp integers can now be of arbitrary size.
Emacs uses the GNU Multiple Precision (GMP) library to support
integers whose size is too large to support natively. The integers
supported natively are known as "fixnums", while the larger ones are
"bignums". The new predicates 'bignump' and 'fixnump' can be used to
distinguish between these two types of integers.
All the arithmetic, comparison, and logical (a.k.a. "bitwise")
operations where bignums make sense now support both fixnums and
bignums. However, note that unlike fixnums, bignums will not compare
equal with 'eq', you must use 'eql' instead. (Numerical comparison
with '=' works on both, of course.)
Since large bignums consume a lot of memory, Emacs limits the size of
the largest bignum a Lisp program is allowed to create. The
nonnegative value of the new variable 'integer-width' specifies the
maximum number of bits allowed in a bignum. Emacs signals an integer
overflow error if this limit is exceeded.
Several primitive functions formerly returned floats or lists of
integers to represent integers that did not fit into fixnums. These
functions now simply return integers instead. Affected functions
include functions like 'encode-char' that compute code-points, functions
like 'file-attributes' that compute file sizes and other attributes,
functions like 'process-id' that compute process IDs, and functions like
'user-uid' and 'group-gid' that compute user and group IDs.
Bignums are chosen automatically when arithmetic on fixnums overflows the fixnum range. The expression (bignump most-positive-fixnum) returns nil, while (bignump (+ most-positive-fixnum 1)) returns t.
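For example, evaluated in Emacs 27.1 (in ielm or the *scratch* buffer):
(fixnump most-positive-fixnum)        ; => t
(bignump (+ most-positive-fixnum 1))  ; => t, overflow promotes to a bignum
(eql (+ most-positive-fixnum 1)       ; => t, compare bignums with eql,
     (+ most-positive-fixnum 1))      ;    not with eq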