How do I trim the zero value after decimal - vb6

While debugging, I found that as soon as I type
Dim value As Double
value = 0.90000
and hit Enter, the IDE automatically converts it to 0.9.
Shouldn't Visual Basic keep the precision of the Double?
For my calculation, I absolutely need to show that precision.

If precision is required, then the Currency data type is what you want to use.

There are at least two representations of your value in play. One is the value you see on the screen -- a string -- and one is the internal representation -- a binary value. In dealing with fractional values, the two are often not equivalent and where they aren't, it's because they can't be.
If you stick with doubles, VB will maintain 53 bits of mantissa throughout your calculations, no matter how they might appear when printed. If you transition through the string domain, say by saving to a file or DB and later retrieving, it often has to leave some of that precision behind. It's inevitable, because the interface between the two domains is not perfect. Some values that can be exactly represented as strings (or Decimals, that is, powers of ten) can't be exactly represented as fractional powers of 2.
This has nothing to do with VB, it's the nature of floating point. The best you can do is control where the rounding occurs. For this purpose your friend is the Format function, which controls how a value appears in string form.
In the Immediate window, ? Format$(0.9, "0.00000") will show you an example.

You are confusing what you see on the screen with the bits that are actually set in the Double to represent that number.
VB is simply being "helpful" and knocking off the excess zeros. But for all intents and purposes,
0.9
is identical to
0.90000
If you don't believe me, try doing this comparison:
Debug.Print CDbl("0.9") = CDbl("0.90000")
As has already been said, displayed precision can be shown using the Format$() function, e.g.
Debug.Print Format$(0.9, "0.00000")

No, it shouldn't keep the precision. Binary floating point values don't retain this information... and it would be somewhat odd to do so, given that you're expressing the value in one base even though it's being represented in another.
I don't know whether VB6 has a decimal floating point type, but that's probably what you want - or a fixed point decimal type, perhaps. Certainly in .NET, System.Decimal has retained extra 0s from .NET 1.1 onwards. If this doesn't help you, you could think about remembering two integers - e.g. "90000" and "100000" in this case, so that the value you're representing is one integer divided by another, with the associated level of precision.
EDIT: I thought that Currency may be what you want, but according to this article, that's fixed at 4 decimal places, and you're trying to retain 5. You could potentially just multiply by 10, if you always want 5 decimal places - but it's an awkward thing to remember to do everywhere... and you'd have to work out how to format it appropriately. It would also always be 4 decimal places, I suspect, even if you'd specified fewer - so if you want "0.300" to be different to "0.3000" then Currency may not be appropriate. I'm entirely basing this on articles online though...

You can also enter the value as 0.9# instead. This helps avoid implicit coercion within an expression that may truncate the precision you expect. In most cases the compiler won't require this hint though because floating point literals default to Double (indeed, the IDE typically deletes the # symbol unless the value was an integer, e.g. 9#).
Contrast the results of these:
MsgBox TypeName(0.9)
MsgBox TypeName(0.9!)
MsgBox TypeName(0.9#)

Related

Go Protobuf Precision Decimals

What is the correct scalar type to use in my protobuf definition file, if I want to transmit an arbitrary-precision decimal value?
I am using shopspring/decimal instead of a float64 in my Go code to prevent math errors. When writing my protobuf file with the intention of transmitting these values over gRPC, I could use:
double which translates to a float64
string which would certainly be precise in its own way but strikes me as clunky
Something like decimal from mgravell/protobuf-net?
Conventional wisdom has taught me to steer clear of floats in monetary applications, but I may be over-careful here, since this is only a matter of serialization.
If you really need arbitrary precision, I fear there is no correct answer right now. There is https://github.com/protocolbuffers/protobuf/issues/4406 open, but it does not seem to be very active. Without built-in support, you will need to perform the serialization manually and then use either string or bytes to store the result.
Which one to use likely depends on whether you need cross-platform/cross-library compatibility: if you do, use string and parse the decimal representation on the reader's side with an appropriate arbitrary-precision type; if you don't, and you will read the data using the same CPU architecture and library, you can probably just use the binary serialization provided by that library (MarshalBinary/UnmarshalBinary) and store it in bytes.
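If you go the string route with shopspring/decimal, the Go side is just a round trip through the decimal's string form. A minimal sketch, where PriceMessage stands in for whatever protoc would generate from a message with a string field - the type and field name are hypothetical, for illustration only:

package main

import (
    "fmt"

    "github.com/shopspring/decimal"
)

// PriceMessage is a stand-in for a protoc-generated struct whose .proto
// definition declared the amount as a string field (hypothetical).
type PriceMessage struct {
    Price string
}

func main() {
    // Sender side: serialize the decimal as its string form.
    d, err := decimal.NewFromString("1876.80")
    if err != nil {
        panic(err)
    }
    msg := PriceMessage{Price: d.String()}

    // Receiver side: parse the string back into an arbitrary-precision decimal.
    back, err := decimal.NewFromString(msg.Price)
    if err != nil {
        panic(err)
    }
    fmt.Println(back.Equal(d)) // true
}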
On the other hand, if you just need to send monetary values with an appropriate precision and do not need arbitrary precision, you can probably just use sint64/uint64 with an appropriate unit (these are commonly called fixed-point numbers).
To give an example, if you need to represent a monetary value in dollars with 4 decimal digits, your unit would be 1/10000th of a dollar, so that e.g. the value 1 represents $0.0001, the value 19900 represents $1.99, -500000 represents $-50, and so on. With such a unit you can represent the range $-922,337,203,685,477.5808 to $922,337,203,685,477.5807 - that should be sufficient for most purposes. You will still need to perform the scaling manually, but it should be fairly trivial and portable. Given the range above, I would suggest sint64, as it also allows you to represent negative values; uint64 should be considered only if you need the extra positive range and don't need negative values.
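A minimal sketch of that scaling in Go; the helper names and the 1/10000 unit follow the example above and are not part of any library:

package main

import "fmt"

// scale is the unit from the example above: 1/10000th of a dollar.
const scale = 10000

// toUnits converts dollars plus a fraction expressed in 1/10000ths
// into the sint64 value that would go on the wire.
func toUnits(dollars, tenThousandths int64) int64 {
    return dollars*scale + tenThousandths
}

// split converts the wire value back into dollars and 1/10000ths.
func split(units int64) (dollars, tenThousandths int64) {
    return units / scale, units % scale
}

func main() {
    v := toUnits(1, 9900) // $1.99 -> 19900 units
    d, f := split(v)
    fmt.Printf("%d units = $%d.%04d\n", v, d, f) // 19900 units = $1.9900
}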
Alternatively, if you don't mind importing another package, you may want to take a look at https://github.com/googleapis/googleapis/blob/master/google/type/money.proto or https://github.com/googleapis/googleapis/blob/master/google/type/decimal.proto (that incidentally implement something very similar to the two models described above), and the related utility functions at https://pkg.go.dev/github.com/googleapis/go-type-adapters/adapters
As a side note, you are completely correct that you should almost never use floating point for monetary values.

Go type for purchasing/financial calculations

I'm building an online store in Go. As would be expected, several important pieces need to record exact monetary amounts. I'm aware of the rounding problems associated with floats (i.e. 0.3 cannot be exactly represented, etc.).
The concept of currency seems easy to just represent as a string. However, I'm unsure of what type would be most appropriate to express the actual monetary amount in.
The key requirements would seem to be:
Can exactly express decimal numbers down to a specified number of decimal places, based on the currency (some currencies use more than 2 decimal places: http://www.londonfx.co.uk/ccylist.html )
Obviously basic arithmetic operations are needed - add/sub/mul/div.
Sane string conversion - which would essentially mean conversion to its decimal equivalent. Also, internationalization would need to be at least possible, even if all of the logic for that isn't built in (1.000 in Europe vs 1,000 in the US).
Rounding, possibly using alternate rounding schemes like Banker's rounding.
Needs to have a simple and obvious way to correspond to a database value - MySQL in my case. (It might make the most sense to treat the value as a string at this level, in order to ensure its value is preserved exactly.)
I'm aware of math/big.Rat and it seems to solve a lot of these things, but, for example, its string output won't work as-is, since it outputs in the "a/b" form. I'm sure there is a solution for that too, but I'm wondering if there is some sort of existing best practice that I'm not aware of (and couldn't easily find) for this sort of thing.
UPDATE: This package looks promising: https://code.google.com/p/godec/
You should keep i18n decoupled from your currency implementation. So no, don't bundle everything in a struct and call it a day. Mark what currency the amount represents but nothing more. Let i18n take care of formatting, stringifying, prefixing, etc.
Use an arbitrary precision numerical type like math/big.Rat. If that is not an option (because of serialization limitations or other barriers), then use the biggest fixed-size integer type you can to represent the amount of atomic money in whatever currency you are representing – cents for USD, yen for JPY, rappen for CHF, cents for EUR, and so forth.
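As a side note on the "a/b" output mentioned in the question: math/big.Rat can produce a decimal string directly via FloatString, so the string form alone need not rule it out. A quick sketch:

package main

import (
    "fmt"
    "math/big"
)

func main() {
    // 19.99 represented exactly as the rational 1999/100.
    price := big.NewRat(1999, 100)

    fmt.Println(price.String())       // "1999/100" - the a/b form
    fmt.Println(price.FloatString(2)) // "19.99"    - decimal, rounded to 2 places
}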
When using the second approach, take extra care not to overflow, and define a clear and meaningful rounding behaviour for division.
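To make the second approach concrete, here is a small sketch of the integer-of-atomic-units idea with an overflow check and an explicitly defined way to divide an amount; the Cents type and helper names are illustrative, not from any package:

package main

import (
    "errors"
    "fmt"
    "math"
)

// Cents stores an amount as an integer count of the currency's smallest unit.
type Cents int64

var errOverflow = errors.New("money: overflow")

// Add returns a+b, reporting overflow instead of silently wrapping around.
func Add(a, b Cents) (Cents, error) {
    if (b > 0 && a > math.MaxInt64-b) || (b < 0 && a < math.MinInt64-b) {
        return 0, errOverflow
    }
    return a + b, nil
}

// SplitEven divides a non-negative amount into n parts that sum exactly to the
// original, handing out the remainder one unit at a time - one way to give
// division a clearly defined rounding behaviour.
func SplitEven(total Cents, n int64) []Cents {
    parts := make([]Cents, n)
    base, rem := int64(total)/n, int64(total)%n
    for i := range parts {
        parts[i] = Cents(base)
        if int64(i) < rem {
            parts[i]++
        }
    }
    return parts
}

func main() {
    sum, err := Add(Cents(1999), Cents(500))
    fmt.Println(sum, err)                 // 2499 <nil>
    fmt.Println(SplitEven(Cents(100), 3)) // [34 33 33]
}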

JDBC / Oracle Double value insertion fails [duplicate]

double r = 11.631;
double theta = 21.4;
In the debugger, these are shown as 11.631000000000000 and 21.399999618530273.
How can I avoid this?
These accuracy problems are due to the internal representation of floating point numbers and there's not much you can do to avoid it.
By the way, printing these values at run-time often still leads to the correct results, at least using modern C++ compilers. For most operations, this isn't much of an issue.
I liked Joel's explanation, which deals with a similar binary floating point precision issue in Excel 2007:
See how there's a lot of 0110 0110 0110 there at the end? That's because 0.1 has no exact representation in binary... it's a repeating binary number. It's sort of like how 1/3 has no representation in decimal. 1/3 is 0.33333333 and you have to keep writing 3's forever. If you lose patience, you get something inexact.
So you can imagine how, in decimal, if you tried to do 3*1/3, and you didn't have time to write 3's forever, the result you would get would be 0.99999999, not 1, and people would get angry with you for being wrong.
If you have a value like:
double theta = 21.4;
And you want to do:
if (theta == 21.4)
{
}
You have to be a bit clever: check whether the value of theta is really close to 21.4, rather than exactly equal to it.
if (fabs(theta - 21.4) <= 1e-6)
{
}
This is partly platform-specific - and we don't know what platform you're using.
It's also partly a case of knowing what you actually want to see. The debugger is showing you - to some extent, anyway - the precise value stored in your variable. In my article on binary floating point numbers in .NET, there's a C# class which lets you see the absolutely exact number stored in a double. The online version isn't working at the moment - I'll try to put one up on another site.
Given that the debugger sees the "actual" value, it's got to make a judgement call about what to display - it could show you the value rounded to a few decimal places, or a more precise value. Some debuggers do a better job than others at reading developers' minds, but it's a fundamental problem with binary floating point numbers.
Use the fixed-point decimal type if you want stability at the limits of precision. There are overheads, and you must explicitly cast if you wish to convert to floating point. If you do convert to floating point you will reintroduce the instabilities that seem to bother you.
Alternately you can get over it and learn to work with the limited precision of floating point arithmetic. For example you can use rounding to make values converge, or you can use epsilon comparisons to describe a tolerance. "Epsilon" is a constant you set up that defines a tolerance. For example, you may choose to regard two values as being equal if they are within 0.0001 of each other.
It occurs to me that you could use operator overloading to make epsilon comparisons transparent. That would be very cool.
For mantissa-exponent representations, EPSILON must be computed relative to the magnitude of the value in order to stay within the representable precision. For a number N, Epsilon = N / 10E+14.
System.Double.Epsilon is the smallest representable positive value of the Double type. It is far too small for this purpose. Read Microsoft's advice on equality testing.
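The idea of an epsilon scaled to the magnitude of the operands is language-agnostic; here is a minimal sketch in Go, with the function name and tolerance chosen purely for illustration:

package main

import (
    "fmt"
    "math"
)

// almostEqual reports whether a and b agree to within a relative tolerance,
// i.e. the allowed difference is scaled to the magnitude of the operands
// rather than being an absolute constant.
func almostEqual(a, b, relTol float64) bool {
    diff := math.Abs(a - b)
    largest := math.Max(math.Abs(a), math.Abs(b))
    return diff <= relTol*largest
}

func main() {
    a, b := 0.1, 0.2
    fmt.Println(a+b == 0.3)                   // false: a+b is 0.30000000000000004
    fmt.Println(almostEqual(a+b, 0.3, 1e-12)) // true
}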
I've come across this before (on my blog) - I think the surprise tends to be that the 'irrational' numbers are different.
By 'irrational' here I'm just referring to the fact that they can't be accurately represented in this format. Real irrational numbers (like π - pi) can't be accurately represented at all.
Most people are familiar with 1/3 not working in decimal: 0.3333333333333...
The odd thing is that 1.1 doesn't work in floats. People expect decimal values to work in floating point numbers because of how they think of them:
1.1 is 11 x 10^-1
When actually they're in base 2, and the closest double to 1.1 is
2476979795053773 x 2^-51
You can't avoid it; you just have to get used to the fact that some floats are 'irrational' in this sense, in the same way that 1/3 is.
One way you can avoid this is to use a library that uses an alternative method of representing decimal numbers, such as BCD
If you are using Java and you need accuracy, use the BigDecimal class for floating point calculations. It is slower but safer.
Seems to me that 21.399999618530273 is the single precision (float) representation of 21.4. Looks like the debugger is casting down from double to float somewhere.
You can't avoid this, as you're using floating point numbers with a fixed number of bytes. There's simply no isomorphism possible between the real numbers and such a limited notation.
But most of the time you can simply ignore it. 21.4 == 21.4 would still be true, because it is the same number with the same error. But 21.4f == 21.4 may not be true, because the errors for float and double are different.
If you need fixed precision, perhaps you should try fixed point numbers. Or even integers: I, for example, often use int(1000*x) for passing values to a debug pager.
Dangers of computer arithmetic
If it bothers you, you can customize the way some values are displayed during debugging. Use it with care :-)
Enhancing Debugging with the Debugger Display Attributes
Refer to General Decimal Arithmetic
Also take note when comparing floats, see this answer for more information.
According to the Java Language Specification:
"If at least one of the operands to a numerical operator is of type double, then the operation is carried out using 64-bit floating-point arithmetic, and the result of the numerical operator is a value of type double. If the other operand is not a double, it is first widened (§5.1.5) to type double by numeric promotion (§5.6)."
Here is the Source

How to do high precision floating point arithmetic in Mathematica

In Mma, for example, I want to calculate
1.0492843824838929890231*0.2323432432432432^3
But it does not show the full precision. I tried N and various other functions, but none seemed to work. How can I achieve this? Many thanks.
When you specify numbers using a decimal point, Mathematica takes them to have MachinePrecision, roughly 16 digits, hence the results typically have fewer than 16 meaningful digits. You can get infinite precision by using rational/algebraic numbers. If you want finite precision that's better than the default, specify your numbers like this
123.23`100
This makes Mathematica interpret the number as having 100 digits of precision. So you can do
ans=1.0492843824838929890231`100*0.2323432432432432`100^3
Check precision of the final answer using Precision
Precision[ans]
Check tutorial/ArbitraryPrecisionNumbers for more details
You may do:
r[x_]:=Rationalize[x,0];
n = r@1.0492843824838929890231 (r@0.2323432432432432)^3
Out:
228598965838025665886943284771018147212124/17369643723462006556253010609136949809542531
And now, for example
N[n,100]
0.01316083216659453615093767083090600540780118249299143245357391544869\
928014026433963352910151464006549
Sometimes you just want to see more of the machine precision result. These are a few methods.
(1) Put the cursor at the end of the output line, and press Enter (not on the numeric keypad) to copy the output to a new input line, showing all digits.
(2) Use InputForm as in InputForm[1.0/7]
(3) Change the setting of PrintPrecision using the Options Inspector.

Why is BigDecimal returning a weird value?

I am writing code that will deal with currencies, charges, etc. I am going to use the BigDecimal class for math and storage, but we ran into something weird with it.
This statement:
1876.8 == BigDecimal('1876.8')
returns false.
If I run those values through the formatting string "%.13f" I get:
"%.13f" % 1876.8 => 1876.8000000000000
"%.13f" % BigDecimal('1876.8') => 1876.8000000000002
Note the extra 2 from the BigDecimal at the last decimal place.
I thought BigDecimal was supposed to counter the inaccuracies of storing real numbers directly in the native floating point of the computer. Where is this 2 coming from?
It won't give you as much control over the number of decimal places, but the conventional format mechanism for BigDecimal appears to be:
a.to_s('F')
If you need more control, consider using the Money gem, assuming your domain problem is mostly about currency.
gem install money
You are right, BigDecimal should be storing it correctly. My best guess is:
BigDecimal is storing the value correctly
When passed to a string formatting function, BigDecimal is being cast as a lower precision floating point value, creating the ...02.
When compared directly with a float, the float has extra decimal places far beyond the 13 you see (the classic "floats can't be compared for equality" behavior).
Either way, you are unlikely to get accurate results comparing a float to a BigDecimal.
Don't compare FPU decimal string fractions for equality
The problem is that an equality comparison of a float or double value with a decimal constant that contains a fraction is rarely successful.
Very few decimal string fractions have exact values in the binary FP representation, so equality comparisons are usually doomed.*
To answer your exact question, the 2 is coming from a slightly different conversion of the decimal string fraction into the Float format. Because the fraction cannot be represented exactly, it's possible that two computations will consider different amounts of precision in intermediate calculations and ultimately end up rounding the result to a 52-bit IEEE 754 double precision mantissa differently. It hardly matters because there is no exact representation anyway, but one is probably more wrong than the other.
In particular, your 1876.8 cannot be represented exactly by an FP object. In fact, between 0.01 and 0.99, only 0.25, 0.50, and 0.75 have exact binary representations. All the others, including the fractional part of 1876.8, repeat forever and are rounded to 52 bits. This is about half of the reason that BigDecimal even exists. (The other half of the reason is the fixed precision of FP data: sometimes you need more.)
So, the result that you get when comparing an actual machine value with a decimal string constant depends on every single bit in the binary fraction ... down to 1/2^52 ... and even then requires rounding.
If there is anything even the slightest bit (hehe, bit, sorry) imperfect about the process that produced the number, or the input conversion code, or anything else involved, they won't look exactly equal.
An argument could even be made that the comparison should always fail because no IEEE-format FPU can even represent that number exactly. They really are not equal, even though they look like it. On the left, your decimal string has been converted to a binary string, and most of the numbers just don't convert exactly. On the right, it's still a decimal string.
So don't mix floats with BigDecimal, just compare one BigDecimal with another BigDecimal. (Even when both operands are floats, testing for equality requires great care or a fuzzy test. Also, don't trust every formatted digit: output formatting will carry remainders way off the right side of the fraction, so you don't generally start seeing zeroes, you will just see garbage values.)
*The problem: machine numbers are x/2^n, but decimal constants are x/(2^n * 5^m). Your value as sign, exponent, and mantissa is the infinitely repeating 0 10000001001 1101010100110011001100110011001100110011001100110011... Ironically, FP arithmetic is perfectly precise and equality comparisons work perfectly well when the value has no fraction.
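If you want to see the value a float literal actually holds, rather than the rounded string a formatter hands you, one option is to convert it to an exact rational. A small sketch in Go - the language is incidental, it's the stored bits that matter:

package main

import (
    "fmt"
    "math/big"
    "strconv"
)

func main() {
    f := 1876.8

    // Shortest decimal that round-trips, then the same value to 20 places.
    fmt.Println(strconv.FormatFloat(f, 'f', -1, 64)) // "1876.8"
    fmt.Println(strconv.FormatFloat(f, 'f', 20, 64)) // shows the ...99999... tail

    // The exact rational value stored in the IEEE 754 double.
    r := new(big.Rat).SetFloat64(f)
    fmt.Println(r.FloatString(25))
}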
As David said, BigDecimal is storing it right:
p (BigDecimal('1876.8') * 100000000000000).to_i
returns 187680000000000000
so, yes, the string formatting is ruining it
If you don't need fractional cents, consider storing and manipulating the currency as an integer, then dividing by 100 when it's time to display. I find that easier than dealing with the inevitable precision issues of storing and manipulating in floating point.
On Mac OS X, I'm running ruby 1.8.7 (2008-08-11 patchlevel 72) [i686-darwin9]
irb(main):004:0> 1876.8 == BigDecimal('1876.8')
=> true
However, being Ruby, I think you should think in terms of messages sent to objects. What does this return to you:
BigDecimal('1876.8') == 1876.8
The two aren't equivalent, and if you're trying to use BigDecimal's ability to determine precise decimal equality, it should be the receiver of the message asking about the equality.
For the same reason I don't think formatting the BigDecimal by sending a format message to the format string is the right approach either.
