Difference in accuracy with floating point division vs multiplication - algorithm

Is there a difference between this:
average = (x1+x2)/2;
deviation1 = x1 -average;
deviation2 = x2 -average;
variance = deviation1*deviation1 + deviation2*deviation2;
and this:
average2 = (x1+x2);
deviation1 = 2*x1 -average2;
deviation2 = 2*x2 -average2;
variance = (deviation1*deviation1 + deviation2*deviation2) / 4;
Note that in the second version I am trying to delay the division until as late as possible. Does the second version (delaying the division) increase accuracy in general?
Snippet above is only intended as an example, I am not trying to optimize this particular snippet.
BTW, I am asking about division in general, not just by 2 or a power of 2, since those only adjust the exponent in the IEEE 754 representation and are exact. I took division by 2 just to illustrate the issue with a very simple example.
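For what it's worth, here is a quick Python check of the two formulations on random inputs (just an illustration, not proof of anything in the general case):

import random

random.seed(0)
worst = 0.0
for _ in range(100_000):
    x1, x2 = random.uniform(-1e6, 1e6), random.uniform(-1e6, 1e6)

    average = (x1 + x2) / 2
    v1 = (x1 - average) ** 2 + (x2 - average) ** 2

    s = x1 + x2
    v2 = ((2 * x1 - s) ** 2 + (2 * x2 - s) ** 2) / 4

    if v1 != 0:
        worst = max(worst, abs(v1 - v2) / v1)

print(worst)   # prints 0.0 here: with a power-of-two factor both routes round identically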

There's nothing to be gained from this. You are only changing the scale, but you don't get any more significant figures in your calculation.
The Wikipedia article on variance explains, at a high level, some of the options for calculating variance in a robust fashion.

You do not gain precision from this since IEEE 754 (which is probably what you're using under the covers) gives you the same precision (number of bits) at whatever scale you're working. For example, 3.14159 x 10^7 will be as precise as 3.14159 x 10^10.
The only possible advantage (of the former) is that you may avoid overflow when setting the deviations. But, as long as the values themselves are less than half of the maximum possible, that won't be a problem.
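You can see this constant relative precision directly; for example, in Python math.ulp gives the spacing between adjacent doubles at a given magnitude (values here are arbitrary):

import math

for x in (3.14159e7, 3.14159e10, 3.14159e13):
    # the absolute spacing between adjacent doubles grows with the magnitude,
    # but the relative spacing (the precision) stays around 1e-16 throughout
    print(x, math.ulp(x), math.ulp(x) / x)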

I have to agree with David Heffernan, it won't give you a higher precision.
The reason is how float values are stored. You have some bits representing the significant digits and some bits representing the exponent (for example 3.1714x10^-12). The bits for the significant digits are the same no matter how large your number is, which means that in the end the result will not really differ.
Even worse: delaying the division can give you an overflow if you have very large numbers.
If you really need higher precision, there are lots of libraries for arbitrary-size or higher-precision numbers.
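The overflow point is easy to see with deliberately huge (contrived) values, for example in Python:

x, d = 1e200, 1e150

print((x * x) / d)   # inf: the intermediate x*x overflows before the division happens
print((x / d) * x)   # 1e+250: dividing early keeps every intermediate in range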

The best way to answer your question would be to run tests (both randomly-distributed and range-based?) and see if the resulting numbers differ at all in the binary representation.
Note that one issue you'll have if you do this is that your function won't work for values > MAX_INT/2, because of the way you compute the average.
avg = (x1+x2)/2 # clobbers numbers > MAX_INT/2
avg = 0.5*x1 + 0.5*x2 # no clobbering
This is almost certainly not an issue though unless you are writing a language-level library. And if most of your numbers are small, it may not matter at all? In fact it probably isn't worth considering, since the value of the variance, being inherently a squared quantity, will exceed MAX_INT anyway; I'd say you might wish to use standard deviation, but no one does that.
Here I do some experiments in python (which I think supports the IEEE whatever-it-is by virtue of probably delegating math to C libraries...):
>>> from itertools import product
>>> def compare(numer, denom):
...     assert ((numer/denom)*2).hex() == ((2*numer)/denom).hex()
...
>>> [compare(a,b) for a,b in product(range(1,100),range(1,100))]
No problem, I think because division and multiplication by 2 are exactly representable in binary. However, try multiplication and division by 3:
>>> def compare(numer, denom):
...     assert ((numer/denom)*3).hex() == ((3*numer)/denom).hex(), '...'
...
>>> [compare(a,b) for a,b in product(range(1,100),range(1,100))]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 1, in <listcomp>
  File "<stdin>", line 2, in compare
AssertionError: 0x1.3333333333334p-1!=0x1.3333333333333p-1
Does it matter much in practice? Perhaps if you're working with very small numbers (in which case you may wish to use log arithmetic). However, if you're working with large numbers (uncommon in probability) and you delay division, you will, as I mentioned, risk overflow, but even worse, you risk bugs due to hard-to-read code.

Related

MSE giving negative results in High-Level Synthesis

I am trying to calculate the Mean Squared Error in Vitis HLS. I am using hls::pow(...,2) and divide by n, but all I receive is a negative value, for example -0.004. This does not make sense to me. Could anyone point out the problem or offer a proper explanation for this?
Besides that, calculating the mean squared error using hls::pow does not give the same results as (a - b) * (a - b). For information, I am using ap_fixed<> types and not normal float or double precision.
Thanks in advance!
It sounds like an overflow and/or underflow issue, meaning that the values reach the sign bit and are interpreted as negative while actually just being very large.
Have you tried tuning the representation precision or the different saturation/rounding options for the fixed point class? This tuning will depend on the data you're processing.
For example, if you handle data that you know will range between -128.5 and 1023.4, you might need very few fractional bits, say 3 or 4, leaving the rest for the integer part (which needs roughly log2((1023+128)^2) bits, since you are squaring the error).
Alternatively, if n is very large, you can try a moving average and calculate the mean in small "chunks" of length m < n.
p.s. Getting the absolute value of a - b and storing it into an ap_ufixed before the multiplication can already give you one extra bit, but it adds an instruction/operation/logic to the algorithm (which might not be a problem if the design is pipelined, but requires space if the size of ap_ufixed is very large).
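To see the wrap-around effect outside the HLS tools, here is a small Python simulation of a wrapping (non-saturating) fixed-point format; the bit widths and values are made up, but both the difference and its square fall outside the representable range and read back negative:

def to_fixed(x, int_bits=8, frac_bits=8):
    """Quantize x to signed fixed point and wrap on overflow (AP_WRAP-like behaviour)."""
    total = int_bits + frac_bits
    raw = int(round(x * (1 << frac_bits))) & ((1 << total) - 1)   # keep only `total` bits
    if raw >= 1 << (total - 1):                                   # sign bit set -> negative
        raw -= 1 << total
    return raw / (1 << frac_bits)

a, b = 100.0, -50.0
diff = to_fixed(a - b)            # 150 is outside the [-128, 128) range, wraps to -106.0
sq = to_fixed((a - b) ** 2)       # 22500 wraps as well and reads back as -28.0
print(diff, sq)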

Generate random number in interval in PostScript

I am struggling to find a way to generate a random number within a given interval in PostScript.
Basically PostScript has three functions to help you generate (pseudo-)random numbers. Those are rand, srand and rrand.
The latter two are for passing a seed to the number generator to be able to reproduce specific results. At least that's what I understood they are for. Anyway, they don't seem suitable for my case.
So rand seems to be the only function I can use to generate a random number, but...
rand returns a random integer in the range 0 to 2^31 − 1 (from the PostScript Language Reference, page 637 (651 in the PDF))
This is far beyond the interval I'm looking for. I am more interested in values up to the small thousands, maybe 10,000 or so, and small float values, up to 100, all with a lower limit of 0.
I thought I could just narrow my numbers down by simple divisions and extracting roots, but that tends to give me uselessly small values in quite a lot of cases. I am wondering if there are robust ways to either shrink a large number down to what I need or, preferably, only generate numbers in the desired interval.
Besides: while-loops are not possible in PostScript, otherwise I'd have written a function to generate numbers until they fit in my interval.
Any hints on how to break numbers down into my interval?
mod is often good enough and it's fast. But you may get a more uniform distribution by using floating-point ops.
rand 16#7fffffff div 100 mul cvi
This is because mod discards the upper bits of the input. And the PRNG is usually trying to randomize over all the bits. By scaling down then up, they all contribute something in the way of rounding effects.
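For comparison outside PostScript, here is a rough Python sketch of the two approaches applied to the same 31-bit random integers (purely illustrative):

import random

random.seed(1)
raw = [random.getrandbits(31) for _ in range(100_000)]   # stand-in for PostScript's rand

by_mod = [r % 100 for r in raw]                       # modulo: keeps only the low bits
by_scale = [int(r / 0x7fffffff * 100) for r in raw]   # scale to [0,1], then up to 0..100

# Both stay in the requested range; compare their histograms to judge uniformity.
print(min(by_mod), max(by_mod), min(by_scale), max(by_scale))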
Just use the modulo operator to get it down to the size you want:
GS>rand 100 mod stack
7

MATLAB script to generate reports of rounding errors in algorithms

I am interested in using, or creating, a script to get rounding-error reports for algorithms.
I hope such a script, or something similar, already exists...
I think this would be useful for digital electronic system design, because sometimes it's necessary to study how large the accuracy error is depending on the number of decimal places that are considered in the design.
This script would work with 3 elements: the algorithm code, the input, and the output.
The script would show the error line by line of the algorithm code.
It would modify the algorithm code with some command like roundn and compare the error of the output.
I would define the error as
Errorrounding = Output(without rounding) - Output(rounded)
For instance, I have the following algorithm:
calculation1 = input*constant1 + constant2 %line 1 of the algorithm
output = exp(calculation1) %line 2 of the algorithm
Where 'input' is the input (a vector of n elements), 'output' is the output, and 'constant1' and 'constant2' are constants.
n is the number of elements of the input vector.
So, I would put my algorithm into the script and it would automatically generate the following algorithm:
input_round = roundn(input,-1*mdec)
calculation1 = input*constant1+constant2*ones(1,n)
calculation1_round = roundn(calculation1,-1*mdec)
output=exp(calculation1_round)
output_round= roundn(output,-1*mdec)
where mdec is the number of decimal places to consider.
Finally, the script would give the following message:
The rounding error at line 1 is #Errorrounding_calculation1
Where '#Errorrounding_calculation1' would be the result of the operation Errorrounding_calculation1 = calculation1 - calculation1_round.
The rounding error at line 2 is #Errorrounding_output
Where '#Errorrounding_output' would be the result of the operation Errorrounding_output = output - output_round.
Does anyone know if there is something similar already done, or does Matlab provide something to deal with these issues?
Thank you.
First point: I suggest reading What Every Computer Scientist Should Know About Floating-Point Arithmetic by David Goldberg. It should illuminate a lot of issues regarding floating-point computations that will help you understand more of the intricacies of the problem you are considering.
Second point: I think the problem you are considering is a lot more complicated than you realize. You are interested in the error introduced into a calculation due to the reduced precision from rounding. What you don't realize is that these errors will propagate through your computations. Consider your example:
output = input*C1 + C2
If each of the three operands is a double-precision floating-point number, each will carry some round-off error from its representation. A bound on this round-off error can be found using the function EPS, which tells you the distance from one double-precision number to the next larger one. For example, a bound on the representation error of input will be 0.5*eps(input), i.e. half the distance between it and the next larger double-precision number. We can therefore estimate error bounds on the three operands as follows:
err_input = 0.5.*eps(input); %# Maximum round-off error for input
err_C1 = 0.5.*eps(C1); %# Maximum round-off error for C1
err_C2 = 0.5.*eps(C2); %# Maximum round-off error for C2
Note that these errors could be positive or negative, since the true number may have been rounded up or down to represent it as a double-precision value. Now, notice what happens when we estimate the true values of the operands (before they were rounded off) by adding these errors to them, and then perform the calculation for output:
output = (input+err_input)*(C1+err_C1) + C2+err_C2
%# ...and after reordering terms
output = input*C1 + C2 + err_input*C1 + err_C1*input + err_input*err_C1 + err_C2
%#       ^-----------^   ^-----------------------------------------------------^
%#             |                                   |
%#     rounded computation                     difference
You can see from this that the precision round-off of the three operands before performing the calculation could change the output we get by as much as difference. In addition, there will be another source of round-off error when the value output is rounded off to represent it as a double-precision value.
So, you can see how it's quite a bit more complicated than you thought to adequately estimate the errors introduced by precision round-off.
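A rough Python analogue of that estimate, where math.ulp stands in for MATLAB's eps(x) and the operand values are made up:

import math

inp, C1, C2 = 1.2345678, 3.14159, 2.71828

err_input = 0.5 * math.ulp(inp)   # plays the role of 0.5*eps(input)
err_C1 = 0.5 * math.ulp(C1)
err_C2 = 0.5 * math.ulp(C2)

exact = inp * C1 + C2
perturbed = (inp + err_input) * (C1 + err_C1) + (C2 + err_C2)

# the "difference" terms from the expansion above
difference = err_input * C1 + err_C1 * inp + err_input * err_C1 + err_C2

print(difference)          # estimated worst-case change, around 1e-15 here
print(perturbed - exact)   # actual change; itself rounded, so 0 or a few ulps of exact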
This is more of an extended comment than an answer:
I'm voting to close this on the grounds that it isn't a well-formed question. It sort of expresses a hope or wish that there exists some type of program which would be interesting or useful to you. I suggest that you revise the question to, well, be a question.
You propose to write a Matlab program to analyse the numerical errors in other Matlab programs. I would not use Matlab for this. I'd probably use Mathematica, which offers more sophisticated structural operations on strings (such as program source text), symbolic computation, and arbitrary precision arithmetic. One of the limitations of Matlab for what you propose is that Matlab, like all other computer implementations of real arithmetic, suffers rounding errors. There are other languages which you might choose too.
What you propose is quite difficult, and would probably require a longer answer than most SOers, including this one, would be happy to contemplate writing. Happily for you, other people have written books on the subject, I suggest you start with this one by NJ Higham. You might also want to investigate matters such as interval arithmetic.
Good luck.

Algorithm to find a common multiplier to convert decimal numbers to whole numbers

I have an array of numbers that potentially have up to 8 decimal places and I need to find the smallest common number I can multiply them by so that they are all whole numbers. I need this so all the original numbers can all be multiplied out to the same scale and be processed by a sealed system that will only deal with whole numbers, then I can retrieve the results and divide them by the common multiplier to get my relative results.
Currently we do a few checks on the numbers and multiply by 100 or 1,000,000, but the processing done by the *sealed system can get quite expensive when dealing with large numbers, so multiplying everything by a million just for the sake of it isn’t really a great option. As an approximation, let’s say that the sealed algorithm gets 10 times more expensive every time you multiply by a factor of 10.
What is the most efficient algorithm (one that also gives the best possible result) to accomplish what I need, and is there a mathematical name and/or formula for it?
*The sealed system isn’t really sealed. I own/maintain the source code for it, but it’s 100,000-odd lines of proprietary magic and it has been thoroughly bug- and performance-tested; altering it to deal with floats is not an option for many reasons. It is a system that creates a grid of X by Y cells, then rects that are X by Y are dropped into the grid, “proprietary magic” occurs, and results are spat out. Obviously this is an extremely simplified version of reality, but it’s a good enough approximation.
So far there are quite a few good answers and I wondered how I should go about choosing the ‘correct’ one. To begin with I figured the only fair way was to implement each solution and performance-test it, but I later realised that pure speed wasn’t the only relevant factor: a more accurate solution is also very relevant. I wrote the performance tests anyway, but currently I’m choosing the correct answer based on speed as well as accuracy, using a ‘gut feel’ formula.
My performance tests process 1000 different sets of 100 randomly generated numbers.
Each algorithm is tested using the same set of random numbers.
Algorithms are written in .Net 3.5 (although thus far would be 2.0 compatible)
I tried pretty hard to make the tests as fair as possible.
Greg – Multiply by large number and then divide by GCD – 63 milliseconds
Andy – String Parsing – 199 milliseconds
Eric – Decimal.GetBits – 160 milliseconds
Eric – Binary search – 32 milliseconds
Ima – sorry, I couldn’t figure out how to implement your solution easily in .Net (I didn’t want to spend too long on it)
Bill – I figure your answer was pretty close to Greg’s so didn’t implement it. I’m sure it’d be a smidge faster but potentially less accurate.
So Greg’s “Multiply by large number and then divide by GCD” solution was the second fastest algorithm and it gave the most accurate results, so for now I’m calling it correct.
I really wanted the Decimal.GetBits solution to be the fastest, but it was very slow; I’m unsure if this is due to the conversion of a Double to a Decimal or the bit masking and shifting. There should be a similar usable solution for a straight Double using BitConverter.GetBytes and some knowledge contained here: http://blogs.msdn.com/bclteam/archive/2007/05/29/bcl-refresher-floating-point-types-the-good-the-bad-and-the-ugly-inbar-gazit-matthew-greig.aspx but my eyes just kept glazing over every time I read that article and I eventually ran out of time to try to implement a solution.
I’m always open to other solutions if anyone can think of something better.
I'd multiply by something sufficiently large (100,000,000 for 8 decimal places), then divide by the GCD of the resulting numbers. You'll end up with a pile of smallest integers that you can feed to the other algorithm. After getting the result, reverse the process to recover your original range.
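A rough Python sketch of this approach (names are illustrative; a .NET version would do the same with longs or BigIntegers):

from functools import reduce
from fractions import Fraction
from math import gcd

def scale_to_integers(values, decimals=8):
    """Scale everything up by 10**decimals, then shrink by the common GCD.

    Returns the reduced integers plus the (possibly fractional) multiplier that
    was effectively applied, so results can be scaled back afterwards.
    """
    scale = 10 ** decimals
    ints = [round(v * scale) for v in values]     # now whole numbers
    g = reduce(gcd, ints) or 1
    multiplier = Fraction(scale, g)               # may be < 1 if the inputs were large
    return [i // g for i in ints], multiplier

ints, m = scale_to_integers([0.25, 1.5, 3.75])
print(ints, m)        # [1, 6, 15] 4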
Multiply all the numbers by 10 until you have integers.
Divide by 2, 3, 5, 7 while you still have all integers.
I think that covers all cases.
2.1 * 10 / 7 -> 3
0.008 * 10^3 / 2^3 -> 1
That's assuming your multiplier can be a rational fraction.
If you want to find some integer N so that N*x is an exact integer for every float x in a given set, then you have a basically unsolvable problem. Suppose x is the smallest positive float your type can represent, say 10^-30. If you multiply all your numbers by 10^30 and then try to represent them in binary (otherwise, why are you even trying so hard to make them ints?), then you'll lose basically all the information of the other numbers due to overflow.
So here are two suggestions:
If you have control over all the related code, find another approach. For example, if you have some function that takes only ints, but you have floats and you want to stuff your floats into the function, just re-write or overload this function to accept floats as well.
If you don't have control over the part of your system that requires ints, then choose a precision you care about, accept that you will simply have to lose some information sometimes (but it will always be "small" in some sense), and then just multiply all your floats by the corresponding constant and round to the nearest integer.
By the way, if you're dealing with fractions rather than floats, then it's a different game. If you have a bunch of fractions a/b, c/d, e/f, and you want the smallest N such that N*(each fraction) is an integer, then N is the least common multiple of the denominators: N = lcm(b, d, f), computed pairwise as lcm(b, lcm(d, f)) with lcm(x, y) = x*y / gcd(x, y). You can use Euclid's algorithm to find the gcd of any two numbers.
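If the inputs can be treated as exact decimal fractions, this is easy to sketch with Python's fractions module (values here are illustrative):

from fractions import Fraction
from functools import reduce
from math import lcm

# Treat each value as an exact decimal fraction (string input avoids binary round-off).
values = [Fraction("2.1"), Fraction("0.008"), Fraction("3.75")]

# N = lcm of the denominators (Fraction keeps everything in lowest terms).
N = reduce(lcm, (v.denominator for v in values))
print(N, [int(v * N) for v in values])   # 500 [1050, 4, 1875]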
Greg: Nice solution, but won't calculating a GCD that's common to an array of 100+ numbers get a bit expensive? And how would you go about that? It's easy to do GCD for two numbers, but for 100 it becomes more complex (I think).
Evil Andy: I'm programming in .Net, and the solution you pose is pretty much a match for what we do now. I didn't want to include it in my original question because I was hoping for some outside-the-box (or outside my box, anyway) thinking, and I didn't want to taint people's answers with a potential solution. While I don't have any solid performance statistics (because I haven't had any other method to compare it against), I know the string parsing would be relatively expensive and I figured a purely mathematical solution could potentially be more efficient.
To be fair, the current string parsing solution is in production and there have been no complaints about its performance yet (it's even in production in a separate system in a VB6 format and no complaints there either). It's just that it doesn't feel right; I guess it offends my programming sensibilities, but it may well be the best solution.
That said I'm still open to any other solutions, purely mathematical or otherwise.
What language are you programming in? Something like
myNumber.ToString().Substring(myNumber.ToString().IndexOf(".")+1).Length
would give you the number of decimal places for a double in C#. You could run each number through that and find the largest number of decimal places(x), then multiply each number by 10 to the power of x.
Edit: Out of curiosity, what is this sealed system which you can pass only integers to?
In a loop, get the mantissa and exponent of each number as integers. You can use frexp for the exponent, but I think a bit mask will be required for the mantissa. Find the minimal exponent. Find the number of significant bits in the mantissa (loop through the bits looking for the last "1") - or simply use a predefined number of significant bits.
Your multiplier is then something like 2^(numberOfBits - minExponent). "Something like" because I don't remember biases/offsets/ranges, but I think the idea is clear enough.
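One concrete reading of that idea, sketched in Python (the helper names and the 53-bit IEEE double mantissa are assumptions, not part of the original suggestion):

import math

def pow2_multiplier(x: float) -> int:
    """Smallest power of two P such that x * P is an exact integer."""
    if x == 0.0:
        return 1
    m, e = math.frexp(x)                   # x == m * 2**e, with 0.5 <= |m| < 1
    mant = int(abs(m) * (1 << 53))         # 53-bit integer mantissa
    trailing_zeros = (mant & -mant).bit_length() - 1
    k = 53 - e - trailing_zeros            # fractional bits actually used by x
    return 1 << k if k > 0 else 1

def common_pow2_multiplier(xs):
    # each per-element requirement is a power of two, so the common one is the largest
    return max(pow2_multiplier(x) for x in xs)

print(pow2_multiplier(0.75))                      # 4  (0.75 * 4 == 3)
print(common_pow2_multiplier([0.5, 0.375, 2.0]))  # 8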
So basically you want to determine the number of digits after the decimal point for each number.
This would be rather easier if you had the binary representation of the number. Are the numbers being converted from rationals or scientific notation earlier in your program? If so, you could skip the earlier conversion and have a much easier time. Otherwise you might want to pass each number to a function in an external DLL written in C, where you could work with the floating point representation directly. Or you could cast the numbers to decimal and do some work with Decimal.GetBits.
The fastest approach I can think of, staying in-place and following your conditions, would be to find the smallest necessary power of ten (or 2, or whatever) as suggested before. But instead of doing it in a loop, save some computation by doing a binary search on the possible powers. Assuming a maximum of 8 decimal places, something like:
int NumDecimals( double d )
{
    // make d positive for clarity; it won't change the result
    if( d < 0 ) d = -d;
    // now do binary search on the possible numbers of post-decimal digits to
    // determine the actual number as quickly as possible:
    if( NeedsMore( d, 1e4 ) )
    {
        // more than 4 decimal places
        if( NeedsMore( d, 1e6 ) )
        {
            // > 6 decimal places
            if( NeedsMore( d, 1e7 ) ) return 8;
            return 7;
        }
        else
        {
            // <= 6 decimal places (but > 4)
            if( NeedsMore( d, 1e5 ) ) return 6;
            return 5;
        }
    }
    else
    {
        // <= 4 decimal places
        // etc...
    }
}
bool NeedsMore( double d, double e )
{
    // check whether d still has a fractional part after scaling by the power
    // of 10 in e, i.e. whether it needs more decimal places than that power covers
    return (d*e - Math.Floor( d*e )) > 0;
}
PS: you wouldn't be passing security prices to an option pricing engine would you? It has exactly the flavor...

What's better multiplication by 2 or adding the number to itself ? BIGnums

I need some help deciding what is better performance wise.
I'm working with bigints (more than 5 million digits) and most of the computation (if not all) is in the part of doubling the current bigint. So I wanted to know: is it better to multiply every cell (part of the bigint) by 2, then mod it, and you know the rest, or is it better to just add the bigint to itself?
I'm thinking a bit about the ease of implementation too (addition of 2 bigints is more complicated than multiplication by 2), but I'm more concerned about the performance rather than the size of code or ease of implementation.
Other info:
I'll code it in C++; I'm fairly familiar with bigints (just never came across this problem).
I'm not in need of any source code or similar, I just need a nice opinion and explanation/proof, since I need to make a good decision from the start: the project will be fairly large and mostly built around this part, so it depends heavily on what I choose now.
Thanks.
Try bit-shifting each cell. That is probably the fastest method. When you bit-shift an integer to the left, you double it (multiply by 2). If you have several long integers in a chain, then you need to store the most significant bit, because after shifting it will be gone, and you need to use it as the least significant bit of the next long integer.
This doesn't actually matter a whole lot. Modern 64-bit computers can add two integers in the same time it takes to bit-shift them (one clock cycle), so it will take just as long. I suggest you try different methods and then report back if there are any major time differences. All three methods should be easy to implement, and generating a 5-million-digit number should also be easy, using a random number generator.
To store a 5 million digit integer, you'll need quite a few bits -- 5 million if you were referring to binary digits, or ~17 million bits if those were decimal digits. Let's assume the numbers are stored in a binary representation, and your arithmetic happens in chunks of some size, e.g. 32 bits or 64 bits.
If adding the number to itself, each chunk is added to itself and to the carry from the addition of the previous chunk. Any carry out is kept for the next chunk. That's a couple of addition operations, and some bookkeeping for tracking the carry.
If multiplying by two by left-shifting, that's one left-shift for the multiplication, plus one right-shift and an AND with 1 to obtain the carry. The carry bookkeeping is a little simpler.
Superficially, the shift version appears slightly faster. The overall cost of doubling the number, however, is highly influenced by the size of the number. A 17-million-bit number exceeds the CPU's L1 cache, and the processing time is likely dominated by memory fetches. On modern PC hardware, a memory fetch is orders of magnitude slower than an addition or a shift.
With that, you might want to pick the one that's simpler for you to implement. I'm leaning towards the left-shift version.
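At the limb level the two candidates look like this; a rough Python sketch (a real C++ implementation would run over arrays of 64-bit words, but the carry handling is the same):

WORD_BITS = 64
MASK = (1 << WORD_BITS) - 1

def double_by_shift(limbs):
    """limbs: little-endian list of 64-bit words. Returns limbs * 2."""
    out, carry = [], 0
    for w in limbs:
        shifted = (w << 1) | carry
        out.append(shifted & MASK)
        carry = w >> (WORD_BITS - 1)     # the bit shifted out becomes the next limb's LSB
    if carry:
        out.append(carry)
    return out

def double_by_add(limbs):
    """Same result via limb-wise addition with carry propagation."""
    out, carry = [], 0
    for w in limbs:
        s = w + w + carry
        out.append(s & MASK)
        carry = s >> WORD_BITS
    if carry:
        out.append(carry)
    return out

n = [0xFFFFFFFFFFFFFFFF, 0x1]            # little-endian limbs of 2**65 - 1
assert double_by_shift(n) == double_by_add(n) == [0xFFFFFFFFFFFFFFFE, 0x3]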
did you try shifting the bits?
<< multiplies by 2
>> divides by 2
Left bit shifting by one is the same as a multiplication by two!
This link explains the mechanism and gives examples.
int A = 10; //...01010 = 10
int B = A<<1; //..010100 = 20
If it really matters, you need to write all three methods (including bit-shift!), and profile them, on various input. (Use small numbers, large numbers, and random numbers, to avoid biasing the results.)
Sorry for the "Do it yourself" answer, but that's really the best way. No one cares about this result more than you, which just makes you the best person to figure it out.
Well-implemented multiplication of bignums is O(N log(N) log(log(N))). Addition is O(N). Therefore, adding the number to itself should be faster than multiplying it by two. However, that's only true if you're multiplying two arbitrary bignums; if your library knows you're multiplying a bignum by a small integer, it may be able to optimize down to O(N).
As others have noted, bit-shifting is also an option. It should be O(N) as well, but with a smaller constant factor. However, that will only work if your bignum library supports bit shifting.
most of the computation (if not all) is in the part of doubling the current bigint
If all your computation is in doubling the number, why don't you just keep a distinct (base-2) scale field? Then just add one to scale, which can just be a plain-old int. This will surely be faster than any manipulation of some-odd million bits.
IOW, use a bigfloat.
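A minimal sketch of that idea in Python (the class and names are made up):

class ScaledBigInt:
    """Stores value = mantissa * 2**scale, so doubling is O(1)."""
    def __init__(self, mantissa, scale=0):
        self.mantissa = mantissa   # the big integer part, touched rarely
        self.scale = scale         # base-2 exponent, a plain machine-sized int

    def double(self):
        self.scale += 1            # no pass over millions of bits

    def value(self):
        return self.mantissa << self.scale   # materialize only when actually needed

x = ScaledBigInt(12345)
for _ in range(1000):
    x.double()
assert x.value() == 12345 * 2**1000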
random benchmark
use Math::GMP;
use Time::HiRes qw(clock_gettime CLOCK_REALTIME CLOCK_PROCESS_CPUTIME_ID);
my $n = Math::GMP->new(2);
$n = $n ** 1_000_000;
my $m = Math::GMP->new(2);
$m = $m ** 10_000;
my $str;
for ($bits = 1_000_000; $bits <= 2_000_000; $bits += 10_000) {
my $start = clock_gettime(CLOCK_PROCESS_CPUTIME_ID);
$str = "$n" for (1..3);
my $stop = clock_gettime(CLOCK_PROCESS_CPUTIME_ID);
print "$bits,#{[($stop-$start)/3]}\n";
$n = $n * $m;
}
This seems to show that somehow GMP is doing its conversion in O(n) time (where n is the number of bits in the binary number). This may be due to the special case of having a 1 followed by a million (or two million) zeros; the GNU MP docs say it should be slower (but still better than O(N^2)).
http://img197.imageshack.us/img197/6527/chartp.png
