How to set round-off precision in Mathematica - wolfram-mathematica

I want to illustrate the stability of some numerical algorithms, and I want to use Mathematica to round floating point numbers according to the usual rule. For example:
myRound[3/80.] = 0.038 if I specify the precision to be 2 digits.
Another one:
myRound[89/47.] = 1.89
So given a precision, how do I write the myRound function? Please help. Many thanks.

You should look into NumberForm. For example:
NumberForm[89.0/47.0, 3]
Returns 1.89.
Actually, it occurs to me that if you really want to illustrate round-off issues, you should look into the ComputerArithmetic package. It's well documented, so I'll leave it at that.
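For a first taste, here is a minimal sketch of loading that package (assuming the standard ComputerArithmetic` add-on that ships with Mathematica; check its documentation for the exact options):

Needs["ComputerArithmetic`"]
SetArithmetic[4]            (* simulate 4-significant-digit decimal arithmetic *)
x = ComputerNumber[3/80.]   (* the value as stored in 4-digit arithmetic *)
y = ComputerNumber[89/47.]
x*y                         (* operations on ComputerNumbers round at every step *)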

I am not sure if this is what you would like:
In[34]:= customRound[x_Real] := Round[x, 10^Round[RealExponent[x]]*0.01]

In[35]:= customRound[3/80.]
Out[35]= 0.038

In[36]:= customRound[89/47.]
Out[36]= 1.89
The function actually changes the number, as opposed to merely changing the way it is displayed.
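If you want the number of significant digits as an explicit parameter, as the question asks, here is a sketch along the same lines, using Floor on the base-10 exponent. Note that the question's two examples actually keep different numbers of significant digits (2 for 0.038, but 3 for 1.89), so the second argument differs:

(* round x to d significant digits *)
myRound[x_Real, d_Integer?Positive] := Round[x, 10^(Floor[RealExponent[x]] - d + 1)]

myRound[3/80., 2]   (* 0.038 *)
myRound[89/47., 3]  (* 1.89 *)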

Related

Chicken Scheme - How to convert a complex number (e.g. (sqrt 2)) to an integer, regardless of rounding strategy

I am working on a C extension for Chicken Scheme and have everything in place, but I am running into an issue with complex number types.
My code can only handle integers, and when any math is done that involves, say, a square root, my extension may end up having to handle complex numbers.
I just need to remove the decimal place and get whatever integer is close by. I am not worried about accuracy for this.
I have looked around and through the code but did not find anything.
Thanks!
Well, you can inspect the number type from the header tag. A complex number is a block object which has 2 slots: the real and imaginary parts. Those numbers themselves can be ratnums, flonums, fixnums or bignums, and you'll need to handle those situations as well if you want to do it all in C.
It's probably a lot easier to declare your C code as accepting an integer and do any conversion necessary in Scheme.

How do I predict how some formula will behave with integers?

I am making some software that needs to work with integers.
I also need to apply some formula to those integers repeatedly over time (for example, doing x /= z several times in a row, for an indefinite number of iterations).
All the tools, algorithms and formulas I could think of or find either don't work with integers at all, or work as approximations at best.
Take the repeated x /= z as an example: you can theoretically calculate what x will be after the 10th iteration by doing x = x/(z^10), but that will be wrong if the result is fractional; you can use floor(x/(z^10)), but the result will STILL be wrong.
Plotting software that I found either doesn't handle integers at all or supports floor()/ceil() functions at best, and the result still runs into the problem from the previous paragraph.
So how do I do it?
Here's something to get you going for the iteration of x /= z, where / denotes integer (floor) division. Assume x >= 0 and z > 0, and write x = a*z^2 + b*z + c with 0 <= b < z and 0 <= c < z. A first integer division gives x/z = a*z + b + c/z = a*z + b, and a second gives (a*z + b)/z = a + b/z = a. Dividing directly by z^2 gives the same thing: x/z^2 = a + (b*z + c)/z^2 = a, since b*z + c <= z^2 - 1. The point is that all three terms c/z, b/z and (b*z + c)/z^2 are 0 with regard to integer division. So two applications of x /= z equal a single division by z^2, and by induction, applying x /= z ten times gives exactly floor(x/(z^10)); contrary to the suspicion in the question, that value is not wrong.
Now if x or z are negative, you can try and see whether this still holds; I did not invest the time to make the necessary case distinctions, but they should be fairly analogous.
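A quick brute-force check of the nonnegative case in Mathematica (the language this page is tagged with; the helper names are mine):

(* iterated integer division vs. one division by z^n, for nonnegative x *)
iterated[x_, z_, n_] := Nest[Quotient[#, z] &, x, n]
direct[x_, z_, n_] := Quotient[x, z^n]

And @@ Flatten[Table[iterated[x, z, 3] == direct[x, z, 3], {x, 0, 500}, {z, 2, 9}]]
(* True *)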
As Karoly Horvath mentions in a comment, without a clear specification of the kinds of functions for which you would like to find a shortcut to replace iterative evaluation, helping you out won't be possible since there are uncountably many functions over the integers, and the same approach won't work for all of them.

Making Mathematica not round small values to zero

I have a simple question. For the specific project I am working on, I would like Mathematica not to evaluate extremely small decimals (of the order of ~10^-90) to zero; I would like the result returned in scientific notation. When I evaluate similar expressions in WolframAlpha, I receive a non-zero result.
For an example of a specific evaluation which returns non-zero in WolframAlpha, but zero in Mathematica:
Mathematica:
In[219]:= Integrate[dNitrogen, {v, 11000, Infinity}]
Out[219]= 0.
Compared to WolframAlpha, which returns a small non-zero result for the same integral.
I've tried searching around myself, but oddly enough have only found solutions to the opposite of my problem: people wanting Mathematica to print small numbers as zero, which seems to involve the Chop function.
Thanks for help/suggestions.
You should use NIntegrate instead of Integrate. By default it will give you the precision you want, and it's also configurable through the PrecisionGoal option (and others; see the NIntegrate docs for details).
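Since dNitrogen isn't given in the question, here is a sketch with a hypothetical stand-in integrand of a similar flavor. The key point is that an exact integrand together with a high WorkingPrecision (my addition, on top of the PrecisionGoal mentioned above) keeps the tiny value from being lost at machine precision:

f[v_] = E^(-v^2/10^4);   (* hypothetical stand-in for dNitrogen *)

NIntegrate[f[v], {v, 11000, Infinity}, WorkingPrecision -> 50, PrecisionGoal -> 10]
(* a tiny but non-zero result, returned in scientific notation *)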

JDBC / Oracle Double value insertion fails [duplicate]

double r = 11.631;
double theta = 21.4;
In the debugger, these are shown as 11.631000000000000 and 21.399999618530273.
How can I avoid this?
These accuracy problems are due to the internal representation of floating point numbers, and there's not much you can do to avoid them.
By the way, printing these values at run-time often still shows the expected results, at least with modern C++ compilers, so for most operations this isn't much of an issue.
I liked Joel's explanation, which deals with a similar binary floating point precision issue in Excel 2007:
See how there's a lot of 0110 0110 0110 there at the end? That's because 0.1 has no exact representation in binary... it's a repeating binary number. It's sort of like how 1/3 has no representation in decimal. 1/3 is 0.33333333 and you have to keep writing 3's forever. If you lose patience, you get something inexact.
So you can imagine how, in decimal, if you tried to do 3*1/3, and you didn't have time to write 3's forever, the result you would get would be 0.99999999, not 1, and people would get angry with you for being wrong.
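You can see that repeating pattern directly. In Mathematica, for instance (its machine reals are the same IEEE doubles):

RealDigits[1/10, 2, 20]
(* {{1,1,0,0,1,1,0,0,1,1,0,0,1,1,0,0,1,1,0,0}, -3}: the 1100 block repeats forever *)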
If you have a value like:
double theta = 21.4;
And you want to do:
if (theta == 21.4)
{
}
You have to be a bit clever: check whether the value of theta is close to 21.4 rather than exactly equal to it.
if (fabs(theta - 21.4) <= 1e-6)
{
}
This is partly platform-specific - and we don't know what platform you're using.
It's also partly a case of knowing what you actually want to see. The debugger is showing you - to some extent, anyway - the precise value stored in your variable. In my article on binary floating point numbers in .NET, there's a C# class which lets you see the absolutely exact number stored in a double. The online version isn't working at the moment - I'll try to put one up on another site.
Given that the debugger sees the "actual" value, it's got to make a judgement call about what to display - it could show you the value rounded to a few decimal places, or a more precise value. Some debuggers do a better job than others at reading developers' minds, but it's a fundamental problem with binary floating point numbers.
Use the fixed-point decimal type if you want stability at the limits of precision. There are overheads, and you must explicitly cast if you wish to convert to floating point. If you do convert to floating point you will reintroduce the instabilities that seem to bother you.
Alternately you can get over it and learn to work with the limited precision of floating point arithmetic. For example you can use rounding to make values converge, or you can use epsilon comparisons to describe a tolerance. "Epsilon" is a constant you set up that defines a tolerance. For example, you may choose to regard two values as being equal if they are within 0.0001 of each other.
It occurs to me that you could use operator overloading to make epsilon comparisons transparent. That would be very cool.
For mantissa-exponent representations, EPSILON must be computed to stay within the representable precision: since a double carries only about 15-16 significant decimal digits, scale the tolerance to the number being compared, e.g. Epsilon = N / 10^14 for a number N.
System.Double.Epsilon is the smallest representable positive value for the Double type. It is too small for our purpose. Read Microsoft's advice on equality testing
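As a sketch of that epsilon idea, written here in Mathematica since its machine reals are the same IEEE doubles (the function name and the 10^-14 scale factor are my choices):

(* relative-tolerance comparison: the tolerance scales with the inputs *)
approxEqual[a_?NumericQ, b_?NumericQ] := Abs[a - b] <= 10^-14 Max[Abs[a], Abs[b]]

0.1 + 0.2 - 0.3              (* 5.55112*10^-17, not exactly 0 *)
approxEqual[0.1 + 0.2, 0.3]  (* True *)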
I've come across this before (on my blog) - I think the surprise tends to be that the 'irrational' numbers are different.
By 'irrational' here I'm just referring to the fact that they can't be accurately represented in this format. Real irrational numbers (like π - pi) can't be accurately represented at all.
Most people are familiar with 1/3 not working in decimal: 0.3333333333333...
The odd thing is that 1.1 doesn't work in floats. People expect decimal values to work in floating point numbers because of how they think of them:
1.1 is 11 x 10^-1
when actually floating point numbers are stored in base 2, and the closest double to 1.1 is
2476979795053773 x 2^-51, which is about 1.1000000000000000888
You can't avoid it, you just have to get used to the fact that some floats are 'irrational', in the same way that 1/3 is.
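In Mathematica (whose machine reals are the same IEEE doubles) you can display the exact stored value with SetPrecision, which pads the binary representation instead of rounding the printed form:

SetPrecision[1.1, 20]    (* 1.1000000000000000888 *)
SetPrecision[21.4, 20]   (* 21.399999999999998579 *)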
One way you can avoid this is to use a library that uses an alternative method of representing decimal numbers, such as BCD
If you are using Java and you need accuracy, use the BigDecimal class for floating point calculations. It is slower but safer.
Seems to me that 21.399999618530273 is the single precision (float) representation of 21.4. Looks like the debugger is casting down from double to float somewhere.
You can't avoid this as long as you're using floating point numbers with a fixed number of bytes: there's simply no exact mapping between the real numbers and such a limited representation.
But most of the time you can simply ignore it. 21.4 == 21.4 will still be true, because it's the same number with the same error. But 21.4f == 21.4 may not be true, because the errors for float and double are different.
If you need fixed precision, perhaps you should try fixed-point numbers, or even integers. For example, I often pass int(1000*x) to a debug pager.
Dangers of computer arithmetic
If it bothers you, you can customize the way some values are displayed during debug. Use it with care :-)
Enhancing Debugging with the Debugger Display Attributes
Refer to General Decimal Arithmetic
Also take note when comparing floats, see this answer for more information.
According to the Java Language Specification (the section references below are JLS sections):
"If at least one of the operands to a numerical operator is of type double, then the operation is carried out using 64-bit floating-point arithmetic, and the result of the numerical operator is a value of type double. If the other operand is not a double, it is first widened (§5.1.5) to type double by numeric promotion (§5.6)."
Here is the Source

Set degrees as default in Mathematica 8

Is it possible to set the trigonometric functions to use degrees instead of radians?
Short answer
No, this is not possible. I'd suggest defining alternative functions and working with those: sinDeg[d_] := Sin[d Degree]. Or just use Degree explicitly: Sin[30 Degree]. (Try also entering ESC deg ESC.)
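A few examples of that approach (Degree is just the conversion constant Pi/180):

sinDeg[d_] := Sin[d Degree]

sinDeg[30]       (* 1/2 *)
Sin[45 Degree]   (* 1/Sqrt[2] *)
N[Degree]        (* 0.0174533 *)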
Longer answer
You can Unprotect the functions and re-define them using the Gayley-Villegas trick, but this is very likely to break several things in Mathematica, as I expect it uses these functions internally.
Since this is such a nasty thing to do, I'm not going to give a code example, instead I'll leave it to you to figure out based on my link above. :-)
I think the output is based on the input, so for example Cos[60 Degree] accepts a degree input directly and evaluates correctly (to 1/2).
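For what it's worth, exact degree inputs do evaluate symbolically, but the inverse trigonometric functions always answer in radians, so you divide by Degree to convert back:

Cos[60 Degree]       (* 1/2 *)
ArcCos[1/2]          (* Pi/3, i.e. radians, however the input was written *)
ArcCos[1/2]/Degree   (* 60 *)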
