How can I determine the decimal value of a 32-bit word containing 4 hexadecimal values? - endianness

Suppose Byte 0 in RAM contains the value 0x12. Subsequent bytes contain 0x34, 0x45, and 0x78. On a Big-Endian system with a 32-bit word, what’s the decimal value of the word?
I know that for a Big Endian system the order of the word would be 0x78, 0x45, 0x34, 0x12. I converted each value to decimal and got 120, 69, 52, 18. I want to know, in order to get the decimal value of the word, do I add all these values together (120 + 69 + 52 + 18), or do I interpret them as digits in a decimal number (120695218)?

Do you know how to convert a single integer from hex to decimal? On a big-endian system you have an integer value of 0x12344578 = ... + 5*16^2 + 7*16^1 + 8*16^0.
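To make that concrete, here is a quick check in Ruby (the language most of the related answers below use); the second line does the same positional arithmetic digit by digit:
"12344578".to_i(16)                                        #=> 305415544
"12344578".chars.reduce(0) { |acc, c| acc * 16 + c.to_i(16) }  #=> 305415544
So the answer to the question above is neither: the value is 305,415,544, not the sum 259 (120 + 69 + 52 + 18) and not the concatenation 120695218. Each hex digit carries a weight that is a power of 16.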
If you were writing a computer program to print a word as decimal, you'd already have the word as a binary integer (hex is a human-readable serialization format for binary, not actually used internally), and you'd do repeated division by 10, using the remainder as the low digit each time. (So you generate digits LSD-first, in reverse printing order.)
And for a program, endianness wouldn't be an issue. You'd just do a word load to get the integer value of the word in a register.
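A rough sketch of that loop in Ruby, taking the word value from above; note that neither hex nor endianness appears anywhere in the loop:
word = 0x12344578        # the word, already a binary integer after a word load
digits = []
while word > 0
  digits << word % 10    # remainder is the lowest remaining decimal digit
  word /= 10             # integer division drops that digit
end
digits.reverse.join      #=> "305415544" (digits were generated LSD-first)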

Related

Char value to decimal data type in RPGLE

Why, when I move a char value ('CL') to a decimal data type, is it converted to -33?
I wrote (move 'CL' NmOfField)
where NmOfField is a decimal data type, and I found that the value in NmOfField is (-33).
I want to know: why -33 specifically?
It has to do with the fact that the hex representation of 'CL' is X'C3D3' and the fact that NmOfField is a packed decimal type.
Say it is a packed(3:0) and its value before the move is 0. The digit (low) nibble of each character in 'CL' becomes a digit, so the absolute value is 33. The second-to-last nibble (the zone of the rightmost character) is treated as the sign, and in packed decimal D means minus, so the hexadecimal representation of NmOfField becomes X'033D' and the numeric value is -33. If NmOfField had been +123 (X'123F') before the move, it would have been -133 (X'133D') afterwards, since 'CL' is only two characters long (MOVE works right to left until there is no more room in the result, or no more characters in factor 2). If one of the digit nibbles had not been a decimal digit, an error would have been raised.
You should avoid the MOVE* operation codes because of this kind of surprise (among others), and prefer free-form syntax.
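Not RPG, but if it helps to see the nibble shuffling concretely, here is a small Ruby simulation of the byte arithmetic described above (the EBCDIC codes X'C3' and X'D3' for 'C' and 'L' are taken from the answer, as is the packed(3:0) sizing):
bytes  = [0xC3, 0xD3]               # EBCDIC 'C', 'L' (X'C3D3')
digits = bytes.map { |b| b & 0x0F } # digit (low) nibble of each char: [3, 3]
sign   = bytes.last >> 4            # zone of the rightmost char: 0xD
value  = digits.join.to_i           # absolute value: 33
value  = -value if sign == 0xD      # D is the packed-decimal minus sign
value                               #=> -33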

What is this unpack doing? Can someone help me understand just a few letters?

I'm reading this code and I'm a tad confused as to what is going on. This code is using Ruby's OpenSSL library.
encrypted_message = cipher.update(address_string) + cipher.final
encrypted_message
=> "G\xCB\xE10prs\x1D\xA7\xD0\xB0\xCEmX\xDC#k\xDD\x8B\x8BB\xE1#!v\xF1\xDC\x19\xDD\xD0\xCA\xC9\x8B?B\xD4\xED\xA1\x83\x10\x1F\b\xF0A\xFEMBs'\xF3\xC7\xBC\x87\x9D_n\\z\xB7\xC1\xA5\xDA\xF4s \x99\\\xFD^\x85\x89s\e"
[3] pry(Encoder)> encrypted_message.unpack('H*')
=> ["47cbe1307072731da7d0b0ce6d58dc406bdd8b8b42e1232176f1dc19ddd0cac98b3f42d4eda183101f08f041fe4d427327f3c7bc879d5f6e5c7ab7c1a5daf47320995cfd5e8589731b"]
It seems that the H directive is this:
hex string (high nibble first)
How are the escaped characters in the encrypted_message transformed into letters and numbers?
I think the heart of the issue is that I don't understand this. What is going on?
['A'].pack('H')
=> "\xA0"
Here is a good explanation of Ruby's pack and unpack methods.
According to your question:
> ['A'].pack('H')
=> "\xA0"
A byte consists of 8 bits; a nibble consists of 4 bits, so a byte has two nibbles. The ASCII value of 'h' is 104, and the hex value of 104 is 68. That 68 is stored in two nibbles: the first nibble (4 bits) contains the value 6 and the second contains the value 8. We deal with the high nibble first, so going from left to right we pick the value 6 and then 8.
In the above case the input 'A' is not ASCII 'A' but hex 'A'. Why hex 'A'? Because the directive 'H' tells pack to treat the input as hex digits. Since 'H' is high nibble first, and the input supplies only one nibble, the second nibble is zero. So the input effectively changes from ['A'] to ['A0'].
Since the hex value A0 does not translate into a printable ASCII character, the output is left as it is, hence the result \xA0. The leading \x indicates that the byte is shown as a hex value.
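You can see the same high-nibble-first rule at work on the encrypted_message above; every byte, printable or not, becomes exactly two hex digits:
"G\xCB".unpack('H*')   #=> ["47cb"]  ('G' is byte 0x47; the escaped \xCB converts the same way)
["47"].pack('H*')      #=> "G"       (and back again)
'%08b' % 0x47          #=> "01000111" (high nibble 0100 -> 4, low nibble 0111 -> 7)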

How the MKI$ and CVI functions work

I am working in GW-BASIC and want to know why CVI("aa") returns 24929. If it converted each char to its ASCII code, "aa" would give 9797.
CVI converts between a GW-BASIC integer and its internal representation in bytes. That internal representation is a 16-bit little-endian signed integer, so that the value you find is the same as ASC("a") + 256*ASC("a"), which is 97 + 256*97, which is 24929.
MKI$ is the opposite operation of CVI, so that MKI$(24929) returns the string "aa".
The 'byte reversal' is a consequence of the little endianness of GW-BASIC's internal representation of integers: the leftmost byte of the representation is the least significant byte, whereas in hexadecimal notation you would write the most significant byte on the left.
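For comparison, the same round trip in Ruby, whose 's<' pack directive means a signed 16-bit little-endian integer, i.e. the internal format described above:
"aa".unpack('s<')   #=> [24929]  (like CVI: 97 + 256*97)
[24929].pack('s<')  #=> "aa"     (like MKI$)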

Convert 32-bit HEX from GPS plot in Ruby

I am working with the following HEX values representing different values from a GPS/GPRS plot. All are given as 32-bit integers.
For example:
296767 is the decimal value (unsigned) reported for hex number: 3F870400
Another one:
34.96987500 is the decimal float value (signed), at a radian resolution of 10^(-8), reported for hex number: DA4DA303.
What is the process for transforming these hex numbers into their corresponding values in Ruby?
I've already tried unpack/pack with the directives L, H, and h. I also tried applying two's complement and converting to binary and then decimal, with no success.
If you are expecting an Integer value:
input = '3F870400'
output = input.scan(/../).reverse.join.to_i(16)
# 296767
If you are expecting degrees:
input = 'DA4DA303'
temp = input.scan(/../).reverse.join.to_i(16)
temp = (temp & 0x80000000) > 0 ? temp - 0x100000000 : temp # Handles negatives (two's complement)
output = temp * 180 / (Math::PI * 10 ** 8)
# 34.9698751282937
Explanation:
The hexadecimal string represents the bytes of an integer stored least-significant-byte first (little-endian). To store it as raw bytes you might use [296767].pack('V'), and if you had the raw bytes in the first place you would simply reverse that with binary_string.unpack('V'). However, you have a hex representation instead. There are a few different approaches you might take (including packing the hex back into bytes and unpacking it), but in the above I have chosen to manipulate the hex string into the most-significant-byte-first form and use Ruby's String#to_i.
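For completeness, the pack-the-hex-back-into-bytes approach mentioned above looks like this; 'H*' converts the hex string to raw bytes, and 'V' (unsigned) or 'l<' (signed) reads them as a 32-bit little-endian integer, so no manual two's-complement step is needed:
['3F870400'].pack('H*').unpack('V').first          #=> 296767
temp = ['DA4DA303'].pack('H*').unpack('l<').first  #=> 61033946
temp * 180 / (Math::PI * 10 ** 8)                  #=> 34.9698751282937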

ruby pack and hex values

A nibble is four bits, so there are 16 (2^4) possible values, and a nibble corresponds to a single hex digit, since hex is base 16. A byte is 8 bits (2^8 possible values), which can therefore be represented by 2 hex digits, and consequently by 2 nibbles.
So here below I have a 1 byte character:
'A'
That character is one byte (8 bits):
'A'.unpack('B*')
=> ["01000001"]
That means it should be represented by two hex digits:
binary 01000001 == hex 41
According to the Ruby documentation, for the Array method pack, when aTemplateString (the parameter) is equal to 'H', then it will return a hex string. But this is what I get back:
['A'].pack('H')
=> "\xA0"
My first point is that this is not the hex value it should return; it should have returned the hex value 41. My second point is that, by the concept of a nibble explained above, 1 byte should yield two nibbles. But above it inserts a 0, because it treats the input as having only 1 nibble, even though 'A' is one byte and has two nibbles. So clearly I am missing something here.
I think you want unpack:
'A'.unpack('H*') #=> ["41"]
pack does the opposite:
['41'].pack('H*') #=> "A"
It's tricky. ["1"].pack("H") => "\x10" and ["16"].pack("H") => "\x10", because a bare 'H' consumes only the first hex digit. I spent a long, long time understanding this.
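A quick way to keep the two directives straight, comparing 'H' (high nibble first) with 'h' (low nibble first):
['41'].pack('H*')   #=> "A"     (digits fill the high nibble first: byte 0x41)
['14'].pack('h*')   #=> "A"     (same byte via 'h', digits supplied low nibble first)
['1'].pack('H')     #=> "\x10"  (a lone digit lands in the high nibble; the low nibble is 0)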
