Is it possible that a text editor could show character values after writing a double value into the file - binaryfiles

If you write a double value into a binary file and then open that binary file in a text editor, is it possible that you might see the string ABCDEFGH in the file?

Yes, although the value of the double that results in ABCDEFGH will vary between systems.
Most modern computers use a little endian representation for both integers and IEEE floating point numbers. In this case, the value of the double will be: 1.5839800103804824e+40.
For systems using big endian integers and big endian IEEE floating point numbers, the value is 2393736.541207228.
On systems that use different endianness for their integers and floating point numbers, it doesn't appear to be possible to do this. (In those representations, ABCDEFGH corresponds to an alias of zero, so you can't necessarily convert the other way.)
And apparently, there are some ARM chips out there that use little endianness overall, but swap the words of their double precision numbers. On such a system, ABCDEFGH could be produced with 710524627902859500000.0.
Edit: and all of this assumes that your text editor is using an ASCII-compatible text encoding.
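These values can be checked by reinterpreting the eight bytes directly. A quick sketch in Ruby, where the pack/unpack directives 'E' and 'G' are the little- and big-endian double encodings; the word-swapped ARM layout is approximated by swapping the two 4-byte halves before decoding:

```ruby
# The eight ASCII bytes "ABCDEFGH" reinterpreted as an IEEE 754 binary64:
little = "ABCDEFGH".unpack1('E')   # little-endian double => ~1.58398e+40
big    = "ABCDEFGH".unpack1('G')   # big-endian double    => ~2.39374e+6
# Word-swapped little-endian (old ARM FPA layout): swap the 4-byte halves first.
swapped = "EFGHABCD".unpack1('E')  # => ~7.10525e+20
```

The round trip also works: packing the little-endian value back with 'E' reproduces the byte string "ABCDEFGH".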

Related

Convert string currency to float with ruby

I have the following string:
"1.273,08"
And I need to convert to float and the result must be:
1273.08
I tried some code using gsub but I can't solve this.
How can I do this conversion?
You have already received two good answers on how to massage your String into your desired format using String#delete and String#tr.
But there is a deeper problem.
The decimal value 1 273.08₁₀ cannot be accurately represented as an IEEE 754-2019 / ISO/IEC 60559:2020 binary64 floating point value.
Just like the value 1/3rd can easily be represented in ternary (0.1₃) but has an infinite representation in decimal (0.33333333…₁₀, i.e. 0.[3]…₁₀) and thus cannot be accurately represented, the value 8/100th can easily be represented in decimal (0.08₁₀) but has an infinite representation in binary (0.0001010001111010111000010100011110101110000101…₂, i.e. 0.[00010100011110101110]…₂). In other words, it is impossible to express 1 273.08₁₀ as a Ruby Float.
And that's not specific to Ruby, or even to programming; that is just basic high school maths: you cannot represent this number in binary, period, just like you cannot represent 1/3rd in decimal, or π in any integer base.
And of course, computers don't have infinite memory, so not only does 1 273.08₁₀ have an infinite representation in binary, but as a Float, it will also be cut off after 64 bits. The closest possible value to 1 273.08₁₀ as an IEEE 754-2019 / ISO/IEC 60559:2020 binary64 floating point value is 1 273.079 999 999 999 927 240 423 858 171₁₀, which is less than 1 273.08₁₀.
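The rounding is easy to observe by printing the Float with more significant digits than its shortest representation, for example:

```ruby
# 1273.08 cannot be stored exactly; printing the stored binary64 value
# with 17 significant digits reveals the nearest representable neighbour:
"%.17g" % 1273.08  #=> "1273.0799999999999"
```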
That is why you should never represent money using binary numbers: everybody expects money to be decimal, not binary; if I write a cheque, I write it in decimal, not binary. People will accept that it is impossible to represent $ 1/3rd, and they will accept that it is impossible to represent $ π, but they will not expect, and will not accept, that if they put $ 1273.08 into their account, they actually end up with slightly less than that.
The correct way to represent money would be to use a specialized Money datatype, or at least to use the bigdecimal library from the standard library:
require 'bigdecimal'
BigDecimal('1.273,08'.delete('.').tr(',', '.'))
#=> 0.127308e4
I would do
"1.273,08".delete('.') # delete '.' from the string
.tr(',', '.') # replace ',' with '.'
.to_f # translate to float
#=> 1273.08
So the string uses . as a thousands separator and , as the decimal separator:
str = "1.273,08"
str.gsub('.','').gsub(',', '.').to_f

Numeric format BEST15.2 allows how many places after the decimal in SAS

I am confused to see that the SAS numeric format BEST15.2 allows more than two places after the decimal. What is the correct interpretation of BEST15.2?
Looking at the documentation, the BEST format only has a width specification, not a decimal specification.
Further, the documentation does say:
Numbers with decimals are written with as many digits to the left and
right of the decimal point as needed or as allowed by the width.
which might explain what you are seeing.
An alternative could be the BESTDw.p format, which allows you to specify the decimal precision:
Prints numeric values, lining up decimal places for values of similar
magnitude, and prints integers without decimals.

How to convert fixed-point VHDL type back to float?

I am using IEEE fixed point package in VHDL.
It works well, but I am now facing a problem concerning the string representation in a test bench: I would like to dump the values to a text file.
I have found that it is indeed possible to directly write a ufixed or sfixed using:
write(buf, to_string(x)); --where x is either sfixed or ufixed (and buf : line)
But then I get values like 11110001.10101 (for sfixed q8.5 representation).
So my question: how do I convert these fixed point numbers back to reals (and then to strings)?
The variable needs to be split into two std_logic_vector parts. The integer part can be converted to a string using a standard conversion, but for the fractional part the string conversion is a bit different.
For the integer part, you need a loop that divides by 10 and converts the modulo remainder into an ASCII character, building up from the lowest digit to the highest digit. The fractional part also needs a loop, but one that multiplies by 10 and takes the floor to isolate a digit and get the corresponding character; that integer is then subtracted from the fraction, and so on.
This is the concept; I worked it out in MATLAB to test it and am making a VHDL version that I will share soon. I was surprised not to find such a useful function anywhere. Of course, the fixed-point format Q(N,M) can vary (N and M can have all sorts of values), while floating point is standardized.
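As a side note, if I recall the IEEE fixed-point package correctly, it also provides a to_real conversion for sfixed/ufixed, which may be enough inside the test bench. If only the printed bit string is available, the conversion can also be done on the text: interpret the whole bit string as a two's-complement integer and divide by 2^(number of fraction bits). A minimal sketch in Ruby (sfixed_to_f is a hypothetical helper name; for ufixed, skip the sign adjustment):

```ruby
# Convert the to_string output of an sfixed (two's complement) value to a Float.
def sfixed_to_f(bits)
  int_part, frac_part = bits.split('.')
  raw = (int_part + frac_part).to_i(2)        # all bits as an unsigned integer
  width = int_part.length + frac_part.length
  raw -= 1 << width if bits.start_with?('1')  # apply the sign bit
  raw.to_f / (1 << frac_part.length)          # scale by 2^(fraction bits)
end

sfixed_to_f("11110001.10101")  #=> -14.34375
```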

endian.h for floats

I am trying to read an unformatted binary file that was written by a big endian machine. My machine is a 32-bit little endian one.
I already know how to swap bytes for different variable types, but it is cumbersome work. I found this set of functions in endian.h that handles integer swapping very easily.
I was wondering if there is something similar for floats or strings, or if I have to program it from scratch, since they are handled differently from integers with respect to endianness?
Thanks.
I do not think there is a standard header for swapping floats. You could take a look at http://www.gamedev.net/page/resources/_/technical/game-programming/writing-endian-independent-code-in-c-r2091
which provides some helpful code.
As for strings, there is no need to do endian swapping. Endianness determines the order of the bytes within a variable; a string consists of a series of chars, and each char is only one byte, so there is nothing to swap.
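For floats, the usual approach is to reverse all the bytes of the value as one unit, just as for an integer of the same size, since IEEE 754 doubles on common platforms differ between little and big endian only in byte order. The concept, sketched in Ruby (the 'E'/'G' pack directives are the little-/big-endian double encodings):

```ruby
# Byte-swapping a double is just reversing its 8 bytes on common platforms.
be_bytes = [3.25].pack('G')              # big-endian representation of 3.25
value    = be_bytes.unpack1('G')         # decode as big-endian       => 3.25
value2   = be_bytes.reverse.unpack1('E') # or reverse, decode little  => 3.25
```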

Converting double precision floating points from little to big endian

I have a file (file.dat) that contains an array of double precision floating point values, and a program that reads file.dat as input and does something with the data contained. The problem, though, is that this program can only accept big endian data, but file.dat is little endian.
What can I do to convert the way the double precision floats are stored from little to big endian?
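One approach, assuming file.dat is a plain array of binary64 values with no header: decode the bytes as little-endian doubles and re-encode them big-endian. A sketch in Ruby ('file_be.dat' is a hypothetical output name, and the sample values are made up for demonstration):

```ruby
# Create a sample little-endian input file for demonstration:
File.binwrite('file.dat', [1.0, -2.5, 3.25].pack('E*'))

# Convert: decode as little-endian doubles, re-encode as big-endian.
doubles = File.binread('file.dat').unpack('E*')
File.binwrite('file_be.dat', doubles.pack('G*'))

File.binread('file_be.dat').unpack('G*')  #=> [1.0, -2.5, 3.25]
```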
