How many places after the decimal does the SAS numeric format BEST15.2 allow?

I am confused to see that the SAS numeric format BEST15.2 allows more than two places after the decimal. What is the correct interpretation of BEST15.2?

Looking at the documentation, the BEST format only has a width specification, not a decimal specification.
Further, the documentation does say:
Numbers with decimals are written with as many digits to the left and
right of the decimal point as needed or as allowed by the width.
which might explain what you are seeing.
An alternative could be the BESTDw.p format which allows you to specify the decimal precision:
Prints numeric values, lining up decimal places for values of similar
magnitude, and prints integers without decimals.

Related

Convert string currency to float with ruby

I have the following string:
"1.273,08"
And I need to convert to float and the result must be:
1273.08
I tried some code using gsub but I can't solve this.
How can I do this conversion?
You have already received two good answers on how to massage your String into your desired format using String#delete and String#tr.
But there is a deeper problem.
The decimal value 1273.08₁₀ cannot be accurately represented as an IEEE 754-2019 / ISO/IEC 60559:2020 binary64 floating point value.
Just like the value 1/3rd can easily be represented in ternary (0.1₃) but has an infinite representation in decimal (0.33333333…₁₀, i.e. 0.[3]…₁₀) and thus cannot be accurately represented, the value 8/100th can easily be represented in decimal (0.08₁₀) but has an infinite representation in binary (0.0001010001111010111000010100011110101110000101…₂, i.e. 0.[00010100011110101110]…₂). In other words, it is impossible to express 1273.08₁₀ as a Ruby Float.
And that's not specific to Ruby, or even to programming, that is just basic high school maths: you cannot represent this number in binary, period, just like you cannot represent 1/3rd in decimal, or π in any integer base.
And of course, computers don't have infinite memory, so not only does 1273.08₁₀ have an infinite representation in binary, but as a Float, it will also be cut off after 64 bits. The closest possible value to 1273.08₁₀ as an IEEE 754-2019 / ISO/IEC 60559:2020 binary64 floating point value is 1273.0799999999999272404238581…₁₀, which is less than 1273.08₁₀.
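For illustration, Float#to_s prints only the shortest decimal string that round-trips, so the default display hides this; asking for more digits than that makes the stored value visible:
puts 1273.08
#=> 1273.08   (shortest representation that round-trips)
puts format('%.25f', 1273.08)
#=> 1273.0799999999999272404238582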
That is why you should never represent money using binary numbers: everybody will expect it to be decimal, not binary; if I write a cheque, I write it in decimal, not binary. People will expect that it is impossible to represent $ 1/3rd, they will expect that it is impossible to represent $ π, but they will neither expect nor accept that if they put $ 1273.08 into their account, they actually end up with slightly less than that.
The correct way to represent money would be to use a specialized Money datatype, or at least to use the bigdecimal library from the standard library:
require 'bigdecimal'
BigDecimal('1.273,08'.delete('.').tr(',', '.'))
#=> 0.127308e4
I would do
"1.273,08".delete('.') # delete '.' from the string
.tr(',', '.') # replace ',' with '.'
.to_f # translate to float
#=> 1273.08
So, the string uses . as a thousands separator and , as the decimal separator:
str = "1.273,08"
str.gsub('.','').gsub(',', '.').to_f

Maximum digits allowed after decimal in double datatype in vb6

I want to know how many digits are allowed after the decimal point for the primitive Double data type in VB6 without actually getting rounded off.
You get up to 16 significant figures of accuracy with the Double data type in VB6, e.g. 1.0000000000000006.
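That limit is a property of the IEEE 754 binary64 format itself rather than of VB6, so it can be illustrated in any language whose floating point type is a 64-bit double; here is a quick sketch in Ruby (whose Float is the same binary64 representation):
# 16 significant decimal digits can still be distinguished from 1.0:
puts 1.000000000000001 == 1.0
#=> false
# with a 17th significant digit, the difference is rounded away:
puts 1.0000000000000001 == 1.0
#=> true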

Convert a long character field to numeric, NOT scientific notation (SAS)

I need to join two tables - one table has householdid as CHAR30, which appears to have center alignment, and the other has householdid as numeric 20. I need to convert to the numeric 20, but when I do that it appears truncated, perhaps because of the strange alignment (not all of the 30 positions are actually needed).
When I try to keep the full 30 positions as a numeric, I instead get a conversion to scientific notation, so of course this will not work as a key ID for later operations.
As long as the number is converted properly, it doesn't matter what format it has. A format just tells SAS how to show you the number. Behind the scenes, it is just a DOUBLE.
1.0 = 1 = 1e0
Now if you have converted to a number and cannot get a join, then look at the informat you used to read it in.
try
num_id = input(strip(char_id),best32.);
Strip removes leading and trailing blanks. The BEST32. INFORMAT tries its "best" to read the number up to 32 characters in length.
You cannot store a 20 digit number as a numeric in SAS. SAS stores all numbers as 8 byte floating point and so does not have enough bits to represent that many digits uniquely. You can ask SAS what is the largest integer it can represent exactly by using the CONSTANT() function.
data _null_;
  x = constant('EXACTINT', 8);
  put x= comma32.;
run;

x=9,007,199,254,740,992
Read and store your 20 and 30 digit strings as character variables.
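That 2**53 ceiling is a property of 8-byte floating point in general, not of SAS; purely as an illustration (in Ruby here, since any binary64 language behaves the same), consecutive integers above the limit collapse to the same stored value:
puts 2**53
#=> 9007199254740992
puts (2**53).to_f == (2**53 + 1).to_f
#=> true (two different 16-digit integers, one stored double)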
Use the bestd32. format. It tends to work out pretty well for long key variables. Depending on the length of the variable, you can change 32 to whichever width you need.
Based on the comments under the original question, the only thing you can do is convert all ID fields to strings, and use the strings to do the joins. @Reeza suggested this in one of the comments, but it should have been posted as an answer.
I assume you are pulling this information out of another database/system that allows for greater numeric precision than SAS does. If you don't convert the values to strings when they are read into SAS, then you run the risk of losing precision.
If you lose precision, the ID in SAS is likely to become very slightly different to the ID in the original system, which can cause problems when searching the original system for an ID obtained from SAS.
Be sure you don't read the numbers into SAS as numeric, then convert to string. If you do it this way you are still losing precision as soon as the numbers are stored in SAS as numeric variables.

What does WriteLN do in this case?

I was just wondering what the following command in a pascal program does:
WRITELN(MaxTab[index,1]:7:5, ' ',
        MaxTab[index,2]:8:3, ' ',
        MaxTab[index,3]:5:1);
MaxTab is defined as ARRAY[1..200,1..3] OF REAL, and index is a counter. Usually WRITELN simply prints the text or variables written in the brackets, but I do not understand what the numbers after the ] are for (e.g. ]:7:5).
This is a Pascal construct similar to sprintf("%7.5f") in C-like languages. From the FreePascal documentation:
For real values, specifying just a field width (Value : field_width) displays the number in scientific notation within that width, or you can convert to fixed decimal-point notation with:
Value : field_width : decimal_field_width
The field width is the total field width, including the decimal part. The whole number part is always displayed fully, so if you have not allocated enough space, it will be displayed anyway.
However, if the number of decimal digits exceeds the specified decimal field width, the output will be displayed rounded to the specified number of places (though the variable itself is not changed).
write (573549.56792:20:2);
would look like (with 11 spaces in front):
573549.57
The value after the first colon determines the width of the field in characters, the second value determines the number of digits to show following the decimal point.
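If it helps to see the equivalent in sprintf-style formatting (shown here in Ruby purely for illustration, mirroring the C analogy above), the same width and decimal counts produce the same right-justified output:
p format('%20.2f', 573549.56792)
#=> "           573549.57"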

How do I trim the zero value after decimal

While debugging, I found that as soon as I type in
Dim value As Double
value = 0.90000
and hit Enter, it is automatically converted to 0.9.
Shouldn't it keep the precision of a Double in Visual Basic?
For my calculation, I absolutely need to show the precision.
If precision is required then the Currency data type is what you want to use.
There are at least two representations of your value in play. One is the value you see on the screen -- a string -- and one is the internal representation -- a binary value. In dealing with fractional values, the two are often not equivalent and where they aren't, it's because they can't be.
If you stick with doubles, VB will maintain 53 bits of mantissa throughout your calculations, no matter how they might appear when printed. If you transition through the string domain, say by saving to a file or DB and later retrieving, it often has to leave some of that precision behind. It's inevitable, because the interface between the two domains is not perfect. Some values that can be exactly represented as strings (or Decimals, that is, powers of ten) can't be exactly represented as fractional powers of 2.
This has nothing to do with VB, it's the nature of floating point. The best you can do is control where the rounding occurs. For this purpose your friend is the Format function, which controls how a value appears in string form.
? Format$(0.9, "0.00000") will show you an example.
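The precision loss when passing through the string domain is easy to demonstrate; the effect is the same in any binary64 language, so here is a small illustration in Ruby rather than VB6 (both use the same 64-bit doubles):
x = 2.0 / 3.0
s = format('%.6f', x)   # write the value out with only six decimal places
puts s
#=> 0.666667
puts s.to_f == x
#=> false (the trip through the string left some precision behind)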
You are getting what you see on the screen confused with what bits are being set in the Double to make that number.
VB is simply being "helpful" and knocking off the excess zeros. But for all intents and purposes,
0.9
is identical to
0.90000
If you don't believe me, try doing this comparison:
Debug.Print CDbl("0.9") = CDbl("0.90000")
As has already been said, displayed precision can be shown using the Format$() function, e.g.
Debug.Print Format$(0.9, "0.00000")
No, it shouldn't keep the precision. Binary floating point values don't retain this information... and it would be somewhat odd to do so, given that you're expressing the value in one base even though it's being represented in another.
I don't know whether VB6 has a decimal floating point type, but that's probably what you want - or a fixed point decimal type, perhaps. Certainly in .NET, System.Decimal has retained extra 0s from .NET 1.1 onwards. If this doesn't help you, you could think about remembering two integers - e.g. "90000" and "100000" in this case, so that the value you're representing is one integer divided by another, with the associated level of precision.
EDIT: I thought that Currency may be what you want, but according to this article, that's fixed at 4 decimal places, and you're trying to retain 5. You could potentially just multiply by 10, if you always want 5 decimal places - but it's an awkward thing to remember to do everywhere... and you'd have to work out how to format it appropriately. It would also always be 4 decimal places, I suspect, even if you'd specified fewer - so if you want "0.300" to be different to "0.3000" then Currency may not be appropriate. I'm entirely basing this on articles online though...
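As a minimal sketch of the "two integers" idea above (Ruby used purely for illustration; ScaledDecimal is a hypothetical name, and it assumes the denominator is a power of ten): the denominator records how many decimal places the value carries, so trailing zeros are not lost.
ScaledDecimal = Struct.new(:numerator, :denominator) do
  def to_s
    places = denominator.to_s.length - 1            # 100000 -> 5 decimal places
    format("%.#{places}f", numerator.fdiv(denominator))
  end
end

puts ScaledDecimal.new(90_000, 100_000)
#=> 0.90000
puts ScaledDecimal.new(9, 10)
#=> 0.9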
You can also enter the value as 0.9# instead. This helps avoid implicit coercion within an expression that may truncate the precision you expect. In most cases the compiler won't require this hint though because floating point literals default to Double (indeed, the IDE typically deletes the # symbol unless the value was an integer, e.g. 9#).
Contrast the results of these:
MsgBox TypeName(0.9)
MsgBox TypeName(0.9!)
MsgBox TypeName(0.9#)
