Fixed precision vs. arbitrary precision - performance

A lot of modern languages support arbitrary-precision numbers: Java has BigInteger, Haskell has Integer, and Python's integers are arbitrary precision by default. But in many of these languages arbitrary precision is not the default; it lives in a separate type prefixed with 'Big'.
Why isn't arbitrary precision the default in all modern languages? Is there a particular reason to use fixed-precision numbers over arbitrary-precision ones?
If I had to guess, it would be because fixed precision maps directly onto machine instructions, so it runs faster, which would be a worthwhile trade-off if you didn't have to worry about overflow because you knew the numeric ranges beforehand. Are there any particular use cases for fixed precision over arbitrary precision?

It is a trade-off between performance and features/safety. I cannot think of any reason why I would prefer overflowing integers other than performance. Also, I could easily emulate overflowing semantics with non-overflowing types if I ever needed to.
Also, overflowing a signed int is a very rare occurrence in practice; it almost never happens. I wish modern CPUs supported raising an exception on overflow without a performance cost.
Different languages emphasize different features (where performance is a feature as well). That's good.
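Regarding emulating overflowing semantics with non-overflowing types: here is a minimal C++ sketch of the idea (the helper name is made up for illustration), recovering 32-bit wrap-around behaviour from a wider computation by masking.

    #include <cstdint>
    #include <cstdio>

    // Hypothetical helper: recover 32-bit wrap-around semantics from a wider,
    // non-overflowing computation by keeping only the low 32 bits.
    uint32_t wrapping_add_u32(uint64_t a, uint64_t b) {
        return static_cast<uint32_t>((a + b) & 0xFFFFFFFFULL);
    }

    int main() {
        // 0xFFFFFFFF + 1 wraps to 0, exactly as a fixed-width 32-bit add would.
        std::printf("%u\n", static_cast<unsigned>(wrapping_add_u32(0xFFFFFFFFu, 1u)));
    }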

Fixed precision is likely to be faster than arbitrary precision; don't confuse fixed precision with the precision of the machine itself, however. You can use an extended (but still fixed) precision in some cases. I myself have often used the excellent qd library by the great expert D.H. Bailey (see http://crd-legacy.lbl.gov/~dhbailey/mpdist/), which can easily be installed on a Linux system, for instance. This library provides two fixed-precision types with greater precision than native double precision, yet they are (by far) quicker than the better-known arbitrary-precision libraries.
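For a feel of how such extended-but-fixed precision works under the hood, here is a minimal sketch of the error-free two-sum step that double-double libraries like qd are built on (this is the textbook algorithm, not qd's actual API):

    #include <cstdio>

    // Knuth's TwoSum: s + err equals a + b exactly (barring overflow), so the
    // rounding error of the addition is captured instead of being thrown away.
    struct TwoSum { double s, err; };

    TwoSum two_sum(double a, double b) {
        double s   = a + b;
        double bb  = s - a;
        double err = (a - (s - bb)) + (b - bb);
        return {s, err};
    }

    int main() {
        TwoSum r = two_sum(1.0, 1e-17);   // 1e-17 would vanish in a plain double add
        std::printf("sum = %.17g, recovered error = %.3g\n", r.s, r.err);
    }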

Related

Fixed-point real-number arithmetic support for Eigen/Eigen3

I'm going to raise again a fairly general question about the Eigen/Eigen3 matrix library's support for different scalar "fields/representations" in its operations.
I've looked a bit into the Eigen matrix template library, and so far I've only seen support for floating-point real arithmetic (that is, IEEE 754 single-precision 32-bit and double-precision 64-bit floating-point numbers).
I would like to raise the following questions concerning fixed-precision real arithmetic support in Eigen/Eigen3:
Is there any support for fixed-precision vectorization in Eigen/Eigen3?
If not, what would be necessary to implement such support?
Can the standard decomposition routines and matrix operations be used immediately with fixed-precision scalars? If so, how?
If not, what are the prerequisites for such support (concepts, operator overloads, "real" functions that need to be implemented, etc.) in order to implement such operations/decompositions without impairing Eigen's core? (A sketch of such a scalar type appears below.)
Are there any plans to implement such functionality in the core of Eigen/Eigen3?
If this kind of thing isn't foreseen in the near future:
Does any such functionality already exist, that you are aware of, which would be compatible with Eigen/Eigen3 and allow vectorization/optimizations to be fully implemented?
If not, which approach would you recommend if someone were interested in implementing it?
I would like to assess the feasibility of implementing a few matrix computations on a 16- or 32-bit micro-controller. I'm not aware of anything of this kind released under a GPL licensing scheme, and would be greatly interested if such a thing were usable. If not, I would like to estimate the workload necessary to implement it.
Thanks in advance for any help.
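As a rough illustration of what such a scalar type might look like, here is a hypothetical Q16.16 fixed-point sketch with the basic operator overloads; using something like this as an Eigen scalar would additionally require an Eigen::NumTraits specialization and whatever "real" functions (sqrt, abs, ...) the chosen decompositions need.

    #include <cstdint>

    // Hypothetical Q16.16 fixed-point scalar (value == raw / 2^16).
    struct Fix32 {
        int32_t raw;

        static Fix32 from_double(double d) { return {static_cast<int32_t>(d * 65536.0)}; }
        double to_double() const           { return raw / 65536.0; }

        Fix32 operator+(Fix32 o) const { return {raw + o.raw}; }
        Fix32 operator-(Fix32 o) const { return {raw - o.raw}; }
        Fix32 operator*(Fix32 o) const {   // widen to 64 bits, then shift back down
            return {static_cast<int32_t>((static_cast<int64_t>(raw) * o.raw) >> 16)};
        }
        bool operator<(Fix32 o)  const { return raw < o.raw; }
        bool operator==(Fix32 o) const { return raw == o.raw; }
    };

Division, overflow handling and rounding policy are deliberately left out; those are exactly the design decisions such an implementation would have to pin down.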

Representation of Pascal variables on the machine

I was given a question in my homework:
How are Pascal variables represented on the machine?
For example: in C the representation can differ between machines and compilers, while in Java there is a VM, so the programmer can assume he will get exactly the same representation on different machines.
I have been googling for a while and could not find an answer about Pascal. The question is about the original version of Pascal, if that changes anything.
Thank you!
Original (J&W) Pascal is extremely rare, and most people don't know it. It got cleaned up a bit into the ISO 7185 standard, though those changes IIRC mostly affect scoping and type equivalence, not the kinds of types available.
Original Pascal (non-UCSD, over a decade before the Borland/Turbo dialects) has nearly no machine-dependent types: just one integer type, INTEGER, one floating-point type, REAL, and non-integer ordinal types like enumerations, BOOLEAN and CHAR. CHAR was not guaranteed to be 8-bit; its width depended on the machine word.
Pascal shows its mainframe roots here: words had exotic sizes like 60 bits, sub-word access (say, byte-level access, though that is a stretch, since those machines might not even have had the concept of a byte) was not available, and multiple chars were packed into machine words (see packed arrays below). C came several years later and targeted minis, so it avoided the worst of that legacy.
The integer type is the biggest integer type in the system, quite often the biggest type the machine can handle conveniently. Smaller integer sizes are constructed with subranges; there are no unsigned types, but these can be defined with the relevant subranges (and it is up to the compiler/VM to implement them efficiently),
e.g. BYTE = 0..255;
Arrays can be packed, and must be unpacked before use (with pack() and unpack()).
There is no string type; typically a packed fixed-size array of char is used, right-padded with spaces to signal the end of the string (so trailing spaces are hard to represent, but it is only a convention with little runtime support, so in exceptional cases you simply make an exception).
Unions contain all components as separate fields (no overlap) and are always named.
It had pointers, but you couldn't take addresses of arbitrary symbols, and new pointers could only be created with NEW.
So in general, original Pascal is what you would call a reasonably "safe" language, though it was not fully designed as such (and AFAIK it is not 100% safe even in theory). It was also much more suitable for running on a VM than TP (and that happened with UCSD, albeit for a subset).
Pascal and its successors can be considered a reconnaissance of concepts that were later popularized with Java.
As for "in C it can be different on different machines and compilers": the same is true in Pascal, of course.

What's the most efficient way to run cross-platform, deterministic simulations in Haskell?

My goal is to run a simulation that requires non-integral numbers across different machines that might have varying CPU architectures and OSes. The main priority is that, given the same initial state, each machine should reproduce the simulation exactly. A secondary priority is that the calculations should have performance and precision as close as realistically possible to double-precision floats.
As far as I can tell, there doesn't seem to be any way to affect the determinism of floating-point calculations from within a Haskell program, similar to the _controlfp and _FPU_SETCW macros in C. So, at the moment, I consider my options to be:
Use Data.Ratio
Use Data.Fixed
Use Data.Fixed.Binary from the fixed-point package
Write a module to call _controlfp (or the equivalent for each platform) via FFI.
Possibly, something else?
One problem with the fixed-point arithmetic libraries is that they don't have e.g. trigonometric functions or logarithms defined for them (as they don't implement the Floating type class), so I guess I would need to provide lookup tables for all such functions in the simulation seed data. Or is there some better way?
Both of the fixed-point libraries also hide the newtype constructor, so any (de)serialization would need to go through toRational/fromRational as far as I can tell, and that feels like it would add unnecessary overhead.
My next step is to benchmark the different fixed-point solutions to see the real world performance, but meanwhile, I'd gladly take any advice you have on this subject.
Clause 11 of the IEEE 754-2008 standard describes what is needed for reproducible floating-point results. Among other things, you need unambiguous expression evaluation rules. Some languages permit floating-point expressions to be evaluated with extra precision or permit some alterations of expressions (such as evaluating a*b+c in a single instruction instead of separate multiply and add instructions). I do not know about Haskell’s semantics. If Haskell does not precisely map expressions to definite floating-point operations, then it cannot support reproducible floating-point results.
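To make the expression-evaluation point concrete, here is a small C++ illustration of the same hazard (separate rounding of the multiply and add versus a single fused multiply-add, which is what a contracting compiler may emit):

    #include <cmath>
    #include <cstdio>

    int main() {
        double a = 1e16, b = 1.0000000000000002, c = -1e16;

        double prod     = a * b;              // rounded once here...
        double separate = prod + c;           // ...and rounded again here
        double fused    = std::fma(a, b, c);  // single rounding, as a contracted a*b+c gets

        // On typical IEEE 754 hardware these two results differ, which is the kind
        // of cross-platform divergence discussed above.
        std::printf("separate: %.17g\nfused:    %.17g\n", separate, fused);
    }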
Also, since you mention trigonometric and logarithmic functions, be aware that these vary from implementation to implementation. I am not aware of any math library that provides correctly rounded implementations of every standard math function. (CRLibm is a project to create one.) So each math library uses its own approximations, and their results vary slightly. Perhaps you might work around this by including a math library with your simulation code, so that it is used instead of each Haskell implementation’s default library.
Routines that convert between binary floating-point and decimal numerals are also a source of differences between implementations. This is less of a problem than it used to be because algorithms for converting correctly are known. However, it is something that might need to be checked in each implementation.

The reason behind endianness?

I was wondering why some architectures use little-endian and others big-endian. I remember reading somewhere that it has to do with performance; however, I don't understand how endianness can influence it. I also know that:
The little-endian system has the property that the same value can be read from memory at different lengths without using different addresses.
This seems like a nice feature, but even so, many systems use big-endian, which probably means big-endian has some advantages too (if so, which?).
I'm sure there's more to it, most probably digging down to the hardware level. Would love to know the details.
I've looked around the net a bit for more information on this question, and there is quite a range of answers and reasonings to explain why big- or little-endian ordering may be preferable. I'll do my best to summarize here what I found:
Little-endian
The obvious advantage of little-endianness is the one you mentioned already in your question: the fact that a given number can be read at varying widths from the same memory address. As the Wikipedia article on the topic states:
Although this little-endian property is rarely used directly by high-level programmers, it is often employed by code optimizers as well as by assembly language programmers.
Because of this, multiple-precision mathematical routines are easier to write, because byte significance always corresponds to memory address, whereas with big-endian numbers this is not the case. This seems to be the argument for little-endianness that is quoted over and over again; given its prevalence, I would have to assume that the benefits of this ordering are relatively significant.
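A quick way to see that property in action (this sketch assumes a little-endian host such as x86):

    #include <cstdint>
    #include <cstring>
    #include <cstdio>

    int main() {
        uint32_t value = 0x42;       // a small value stored in a 32-bit slot

        uint8_t  as8;
        uint16_t as16;
        std::memcpy(&as8,  &value, sizeof as8);    // read the same address as 8 bits
        std::memcpy(&as16, &value, sizeof as16);   // ...and as 16 bits

        // On a little-endian machine all three reads give 0x42, because the least
        // significant byte sits at the lowest address; on a big-endian machine the
        // narrow reads would pick up the most significant (zero) bytes instead.
        std::printf("32-bit: 0x%X  16-bit: 0x%X  8-bit: 0x%X\n",
                    static_cast<unsigned>(value),
                    static_cast<unsigned>(as16),
                    static_cast<unsigned>(as8));
    }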
Another interesting explanation that I found concerns addition and subtraction. When adding or subtracting multi-byte numbers, the least significant byte must be fetched first to see whether there is a carry into the more significant bytes. Because the least significant byte is read first in little-endian numbers, the system can begin calculating on it while fetching the following byte(s).
Big-endian
Going back to the Wikipedia article, the stated advantage of big-endian numbers is that the magnitude of a number can be more easily estimated because the most significant digit comes first. Related to this is that it is simple to tell whether a number is positive or negative by examining the sign bit in the byte at the lowest address (which, in big-endian, is the most significant byte).
Also stated when discussing the benefits of big-endianness is that the bytes are ordered the way most people write base-10 digits, most significant first. This is advantageous performance-wise when converting from binary to decimal.
While all these arguments are interesting (at least I think so), their applicability to modern processors is another matter. In particular, the addition/subtraction argument was most valid on 8-bit systems...
For my money, little-endianness seems to make the most sense and is by far the most common when looking at all the devices that use it. I think the reason big-endianness is still used is more a matter of legacy than of performance. Perhaps at one time the designers of a given architecture decided that big-endianness was preferable to little-endianness, and as the architecture evolved over the years the endianness stayed the same.
The parallel I draw here is with JPEG, which is a big-endian format despite the fact that virtually all the machines that consume it are little-endian. While one can ask what the benefits of JPEG being big-endian are, I would venture to say that for all intents and purposes the performance arguments mentioned above don't make a shred of difference. The fact is that JPEG was designed that way, and as long as it remains in use, that way it shall stay.
I would assume that it was originally the hardware designers of the first processors who decided which endianness would best integrate with their preferred/existing/planned micro-architecture for the chips they were developing from scratch.
Once established, and for compatibility reasons, the endianness was more or less carried over to later generations of hardware, which supports the 'legacy' argument for why both kinds still exist today.

Is there really such a thing as a char or short in modern programming?

I've been learning to program for the Mac over the past few months (I have experience in other languages). Obviously that has meant learning Objective-C and thus the plainer C it is predicated on. I have stumbled on this quote, which refers to C/C++ in general, not just the Mac platform:
With C and C++ prefer use of int over char and short. The main reason behind this is that C and C++ perform arithmetic operations and parameter passing at integer level. If you have an integer value that can fit in a byte, you should still consider using an int to hold the number. If you use a char, the compiler will first convert the values into integer, perform the operations and then convert the result back to char.
So my question: is this the case in the Mac desktop and iPhone OS environments? I understand that when talking about these environments we're actually talking about 3-4 different architectures (PPC, i386, ARM and the A4 ARM variant), so there may not be a single answer.
Nevertheless, does the general principle hold that on modern 32-bit/64-bit systems, using 1-2 byte variables that don't align with the machine's natural 4-byte words doesn't provide much of the efficiency we might expect?
For instance, a plain old C array of 100,000 chars is smaller than the same 100,000 ints by a factor of four, but if, while enumerating it, reading out each index involves a cast/boxing/unboxing of sorts, will we see overall lower 'performance' despite the saved memory overhead?
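To see the promotion the quote is talking about, here is a minimal C++ sketch (the same rules apply in plain C):

    #include <cstdint>
    #include <type_traits>

    int main() {
        int8_t a = 100, b = 100;

        auto sum = a + b;   // both operands are promoted to int before the addition
        static_assert(std::is_same<decltype(sum), int>::value,
                      "char/short arithmetic happens at int width");

        // Converting the result back narrows it again; 200 does not fit in int8_t,
        // so on the usual two's-complement targets it wraps around to -56.
        int8_t narrowed = static_cast<int8_t>(sum);
        return narrowed == -56 ? 0 : 1;
    }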
The processor is very, very fast compared to memory. It will always pay to store values in memory as chars or shorts (though to avoid porting problems you should use int8_t and int16_t): less cache will be used, and there will be fewer memory accesses.
I can't speak for PPC/ARM/A4, but x86 has the ability to operate on data as if it were 8-bit, 16-bit, or 32-bit (64-bit on x86_64 in 64-bit mode), although I'm not sure whether the compiler would take advantage of those instructions. Even when using a 32-bit load, the compiler could AND the data with a mask that clears the upper 16/24 bits, which would be relatively fast.
Most likely, the ability to fit far more data into the cache would at least cancel out the speed difference... although the only way to know for sure would be to actually profile the code.
Of course there is a need for data structures smaller than the register size of the target machine. Imagine you are storing text encoded as UTF-8 or ASCII in memory, where each character is mostly a byte in size; do you want to store the characters as 64-bit quantities?
The advice you are quoting is a warning not to over-optimize.
You have to balance the savings in space against the computational cost of your choice.
I wouldn't worry too much about it; today's modern CPUs are complicated enough that it's hard to make this kind of judgement on your own. Choose the obvious data type and let the compiler worry about the rest.
The addressing model of the x86 architecture is that the basic unit of memory is the 8-bit byte.
This is to simplify operations on character strings and decimal arithmetic.
Then, in order to have useful sizes of integers, the instruction set allows using these in units of 1, 2, 4, and (more recently) 8 bytes.
A fact to remember is that most software development targets different processors than most of us here deal with on a day-to-day basis.
C and assembler are common languages for these.
About ten billion CPUs were manufactured in 2008. About 98% of new CPUs produced each year are embedded.

Resources