d3.format "none" type not rounding - d3.js

The d3 documentation states that the (none) format type works "like g, but trims insignificant trailing zeros". The g format type uses "either decimal or exponent notation, rounded to significant digits."
Mike Bostock explained that "The none format type trims trailing zeros, but the precision is interpreted as significant digits rather than the number of digits past the decimal point."
If I use d3.format('.2')(2.0), I get 2 (trailing zeros are dropped).
But when I use d3.format('.2')(2.001) the result is 2.001: no rounding happens. I would have expected the result to be 2.0 (rounding to two significant digits, but keeping the zero) or 2 (rounding to two significant digits, then dropping the zero).
Is this a bug, or am I misunderstanding the syntax?

This happened because I was using an old version of d3 (3.5.17, which ships with the current version of plot.ly 1.27.1).
In that version of d3, the (none) format type doesn't exist. It was introduced in 2015.
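For reference, here is how a newer d3 (v4+, i.e. d3-format 1.x) should behave, assuming the documented semantics (round to the given number of significant digits, then trim insignificant trailing zeros):
var f = d3.format('.2');
f(2.0)                  // "2"
f(2.001)                // "2"   (rounded to 2.0, then the trailing zero is trimmed)
d3.format('.2g')(2.001) // "2.0" (g keeps the trailing zero)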

Related

formatting a number with SI prefix change in version 4

Before version 4
var formatter = d3.format("s");
formatter(400000) // 400k
In version 4
var formatter = d3.format("s");
formatter(400000) // 400.000k
Is there any way I can get the format like previous versions without using precision?
Is there any way I can get the format like previous versions without using precision?
Without using precision, this is not possible anymore in D3 v4.x.
According to the documentation:
Depending on the type, the precision either indicates the number of digits that follow the decimal point (types f and %), or the number of significant digits (types (none), e, g, r, s and p).
Meaning that, for ("s"), the precision indicates the number of significant digits.
And here comes the interesting part, which doesn't exist in the D3 v3.x API:
If the precision is not specified, it defaults to 6 for all types except (none), which defaults to 12. (emphasis mine)
So, the precision for formatter(400000) defaults to 6, which gives you:
400.000k
For instance, if you do formatter(40), you'll get:
40.0000
PS: Trailing zeros in a number containing a decimal point are significant.
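If using a precision is acceptable, a sketch of the usual workaround in v4 is to pick the number of significant digits yourself (note that it still keeps significant trailing zeros, as the PS above says):
var formatter = d3.format(".3s");
formatter(400000) // "400k"
formatter(40)     // "40.0" (the trailing zero counts as significant)
If I remember right, later releases of d3-format (1.3+, bundled from d3 v5 onwards) also added a ~ modifier, e.g. d3.format("~s"), which trims insignificant trailing zeros without fixing a precision; it is not available in v4.x.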

Why does printf (Unix) use round half down?

Why does printf behave in such an uncommon way?
> printf %.0f 2.5
2
> printf %.0f 2.51
3
Is there an advantage to this behaviour that compensates for the likely misunderstandings (like this one)?
It's not strictly round-down:
> printf '%.0f\n' 2.5
2
> printf '%.0f\n' 3.5
4
This is round-half-to-even ("banker's rounding"), a form of rounding used to combat bias when you are rounding a large number of values: roughly half of the halfway cases will be rounded down, the other half up. The rule, applied to values exactly halfway between two integers, is: round down if the integer portion is even, round up if it is odd.
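For example, on an implementation that uses this scheme, a run of halfway values comes out with ties going to the even neighbour:
> printf '%.0f\n' 0.5 1.5 2.5 3.5 4.5
0
2
2
4
4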
This is, however, only an explanation of a particular rounding scheme, which is not guaranteed to be used by all implementations of printf.
From the POSIX specification for the printf command:
The floating-point formatting conversion specifications of printf() are not required because all arithmetic in the shell is integer arithmetic. The awk utility performs floating-point calculations and provides its own printf function. The bc utility can perform arbitrary-precision floating-point arithmetic, but does not provide extensive formatting capabilities. (This printf utility cannot really be used to format bc output; it does not support arbitrary precision.) Implementations are encouraged to support the floating-point conversions as an extension.
Thus: %f isn't even required to exist at all; anything it may or may not do is entirely unspecified by the relevant standard.
Similarly, the POSIX standard for the printf() function leaves the rounding itself implementation-defined:
f, F
The double argument shall be converted to decimal notation in the style "[-]ddd.ddd", where the number of digits after the radix character is equal to the precision specification. If the precision is missing, it shall be taken as 6; if the precision is explicitly zero and no '#' flag is present, no radix character shall appear. If a radix character appears, at least one digit appears before it. The low-order digit shall be rounded in an implementation-defined manner.
A double argument representing an infinity shall be converted in one of the styles "[-]inf" or "[-]infinity"; which style is implementation-defined. A double argument representing a NaN shall be converted in one of the styles "[-]nan(n-char-sequence)" or "[-]nan"; which style, and the meaning of any n-char-sequence, is implementation-defined. The F conversion specifier produces "INF", "INFINITY", or "NAN" instead of "inf", "infinity", or "nan", respectively.

Numeric Format BEST15.2 allows how many places after decimal in SAS

I am confused to see that the SAS numeric format BEST15.2 allows more than two places after the decimal point. What is the correct interpretation of BEST15.2?
Looking at the documentation, the BEST format only has a width specification, not a decimal specification.
Further, the documentation does say:
Numbers with decimals are written with as many digits to the left and
right of the decimal point as needed or as allowed by the width.
which might explain what you are seeing.
An alternative could be the BESTDw.p format, which allows you to specify the decimal precision:
Prints numeric values, lining up decimal places for values of similar
magnitude, and prints integers without decimals.
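As a quick comparison (a sketch only; the exact output depends on your SAS version), something like this should show the difference, with BEST15.2 printing all the digits that fit and BESTD15.2 honouring the two-decimal specification:
data _null_;
  x = 123.456789;
  put x best15.2;   /* BEST ignores the decimal specification; expect something like 123.456789 */
  put x bestd15.2;  /* BESTD honours it; expect something like 123.46 */
run;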

Meaning of # in Scheme number literals

DrRacket running R5RS says that 1### is a perfectly valid Scheme number and prints a value of 1000.0. This leads me to believe that the pound signs (#) specify inexactness in a number, but I'm not certain. The spec also says that it is valid syntax for a number literal, but it does not say what those signs mean.
Any ideas as to what the # signs in Scheme number literals signify?
The hash syntax was introduced in 1989. There was a discussion on inexact numbers on the Scheme authors' mailing list, which contained several nice ideas. Some caught on and some didn't.
http://groups.csail.mit.edu/mac/ftpdir/scheme-mail/HTML/rrrs-1989/msg00178.html
One idea that stuck was introducing the # to stand for an unknown digit.
If you have a measurement with two significant digits, you can write it as 23##, indicating that the digits 2 and 3 are known but the last two digits are unknown. If you write 2300, you can't see that the two zeros aren't to be trusted. When I saw the syntax I expected 23## to evaluate to 2350, but (I believe) the interpretation is implementation dependent. Many implementations interpret 23## as 2300.
The syntax was formally introduced here:
http://groups.csail.mit.edu/mac/ftpdir/scheme-mail/HTML/rrrs-1989/msg00324.html
EDIT
From http://groups.csail.mit.edu/mac/ftpdir/scheme-reports/r3rs-html/r3rs_8.html#SEC52
An attempt to produce more digits than are available in the internal
machine representation of a number will be marked with a "#" filling
the extra digits. This is not a statement that the implementation
knows or keeps track of the significance of a number, just that the
machine will flag attempts to produce 20 digits of a number that has
only 15 digits of machine representation:
3.14158265358979##### ; (flo 20 (exactness s))
EDIT2
Gerald Jay Sussman writes about why they introduced the syntax here:
http://groups.csail.mit.edu/mac/ftpdir/scheme-mail/HTML/rrrs-1994/msg00096.html
Here's the R4RS and R5RS docs regarding numerical constants:
R4RS 6.5.4 Syntax of numerical constants
R5RS 6.2.4 Syntax of numerical constants.
To wit:
If the written representation of a number has no exactness prefix, the constant may be either inexact or exact. It is inexact if it contains a decimal point, an exponent, or a "#" character in the place of a digit, otherwise it is exact.
I'm not sure the # signs mean anything beyond that, other than standing in for 0.
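To see this in practice (in DrRacket's R5RS language; remember that reading # as 0 is implementation-dependent):
(exact? 1###)  ; => #f, the # digits make the literal inexact
1###           ; => 1000.0 in DrRacket
23##           ; => 2300.0 in implementations that read # as 0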

How do I trim the zero value after decimal

While debugging, I found that as soon as I type in
Dim value As Double
value = 0.90000
and hit Enter, the value automatically converts to 0.9.
Shouldn't a Double keep that precision in Visual Basic?
For my calculation, I absolutely need to show the precision.
If precision is required then the Currency data type is what you want to use.
There are at least two representations of your value in play. One is the value you see on the screen -- a string -- and one is the internal representation -- a binary value. In dealing with fractional values, the two are often not equivalent and where they aren't, it's because they can't be.
If you stick with doubles, VB will maintain 53 bits of mantissa throughout your calculations, no matter how they might appear when printed. If you transition through the string domain, say by saving to a file or DB and later retrieving, it often has to leave some of that precision behind. It's inevitable, because the interface between the two domains is not perfect. Some values that can be exactly represented as strings (or as decimals, that is, in powers of ten) can't be exactly represented in binary, as sums of fractional powers of 2.
This has nothing to do with VB, it's the nature of floating point. The best you can do is control where the rounding occurs. For this purpose your friend is the Format function, which controls how a value appears in string form.
? Format$(0.9, "0.00000") will show you an example.
You are getting what you see on the screen confused with what bits are being set in the Double to make that number.
VB is simply being "helpful" and knocking off excess zeros. But for all intents and purposes,
0.9
is identical to
0.90000
If you don't believe me, try doing this comparison:
Debug.Print CDbl("0.9") = CDbl("0.90000")
As has already been said, displayed precision can be shown using the Format$() function, e.g.
Debug.Print Format$(0.9, "0.00000")
No, it shouldn't keep the precision. Binary floating point values don't retain this information... and it would be somewhat odd to do so, given that you're expressing the value in one base even though it's being represented in another.
I don't know whether VB6 has a decimal floating point type, but that's probably what you want - or a fixed point decimal type, perhaps. Certainly in .NET, System.Decimal has retained extra 0s from .NET 1.1 onwards. If this doesn't help you, you could think about remembering two integers - e.g. "90000" and "100000" in this case, so that the value you're representing is one integer divided by another, with the associated level of precision.
EDIT: I thought that Currency may be what you want, but according to this article, that's fixed at 4 decimal places, and you're trying to retain 5. You could potentially just multiply by 10, if you always want 5 decimal places - but it's an awkward thing to remember to do everywhere... and you'd have to work out how to format it appropriately. It would also always be 4 decimal places, I suspect, even if you'd specified fewer - so if you want "0.300" to be different to "0.3000" then Currency may not be appropriate. I'm entirely basing this on articles online though...
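If you do go down that road, a rough sketch (assuming you always want exactly five decimal places shown) might be:
Dim scaled As Currency
scaled = CCur(0.90000 * 10)                  ' held as 9.0000; Currency keeps 4 decimal places
Debug.Print Format$(scaled / 10, "0.00000")  ' displays 0.90000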
You can also enter the value as 0.9# instead. This helps avoid implicit coercion within an expression that may truncate the precision you expect. In most cases the compiler won't require this hint though because floating point literals default to Double (indeed, the IDE typically deletes the # symbol unless the value was an integer, e.g. 9#).
Contrast the results of these:
MsgBox TypeName(0.9)
MsgBox TypeName(0.9!)
MsgBox TypeName(0.9#)
