Char value to decimal data type in RPGLE - char

Why, when I move a char value ('CL') to a decimal data type, is it converted to -33?
I wrote (move 'CL' NmOfField)
where NmOfField is a decimal data type, and I found that the value in NmOfField is -33.
I want to know: why -33 specifically?

It has to do with the fact that the hex representation of 'CL' is X'C3D3' and the fact that NmOfField is a packed decimal type.
Say it's a packed(3:0) field and its value before the move is 0. The low (digit) nibble of each character in 'CL' becomes a digit, so the absolute value is 33; the zone nibble of the rightmost character (the second-to-last nibble) is taken as the sign, and in packed decimal D means minus, so the hexadecimal representation of NmOfField becomes X'033D' and the numeric value is -33.
If NmOfField had been +123 (X'123F') before the move, it would have been -133 (X'133D') afterwards, because 'CL' is only two characters long (MOVE works right to left until there is no more room in the result, or no more characters in factor 2). If one of the digit nibbles had not been a decimal digit, an error would have been raised.
You should avoid the MOVE* operation codes because of that kind of surprise (and a few others), and prefer the free-form syntax instead.
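
To make the byte-level rule concrete, here is a rough sketch in Python (not RPG); the function name and the digits parameter are just illustrative assumptions about a packed(3:0) target.

# Illustrative model of MOVE from character to a packed decimal field.
# Digits come from the low nibble of each character byte; the sign comes from
# the zone (high) nibble of the rightmost character, where X'B' or X'D' means negative.
def move_char_to_packed(chars: bytes, current_value: int = 0, digits: int = 3) -> int:
    new_digits = [b & 0x0F for b in chars]            # low nibble of each byte -> decimal digit
    negative = (chars[-1] >> 4) in (0x0B, 0x0D)       # zone nibble of the rightmost character
    # MOVE is right-adjusted: keep the leftmost digits of the old value that are not overwritten.
    old_digits = [int(d) for d in str(abs(current_value)).zfill(digits)]
    merged = old_digits[: max(0, digits - len(new_digits))] + new_digits[-digits:]
    value = int("".join(str(d) for d in merged))
    return -value if negative else value

print(move_char_to_packed(b"\xC3\xD3"))                      # 'CL' in EBCDIC -> -33
print(move_char_to_packed(b"\xC3\xD3", current_value=123))   # starting from +123 -> -133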

Related

Arithmetic with Chars in Julia

Julia REPL tells me that the output of
'c'+2
is 'e': ASCII/Unicode U+0065 (category Ll: Letter, lowercase)
but that the output of
'c'+2-'a'
is 4.
I'm fine with the fact that Chars are identified as numbers via their ASCII code. But I'm confused about the type inference here: why is the first output a char and the second an integer?
Regarding the motivation for conventions, it’s similar to time stamps and intervals. The difference between time stamps is an interval and accordingly you can add an interval to a time stamp to get another time stamp. You cannot, however, add two time stamps because that doesn’t make sense—what is the sum of two points in time supposed to be? The difference between two chars is their distance in code point space (an integer); accordingly you can add an integer to a char to get another char that’s offset by that many code points. You can’t add two chars because adding two code points is not a meaningful operation.
Why allow comparisons of chars and differences in the first place? Because it’s common to use that kind of arithmetic and comparison to implement parsing code, eg for parsing numbers in various bases and formats.
The reason is:
julia> @which 'a' - 1
-(x::T, y::Integer) where T<:AbstractChar in Base at char.jl:227
julia> @which 'a' - 'b'
-(x::AbstractChar, y::AbstractChar) in Base at char.jl:226
Subtracting an integer from a Char gives a Char, e.g. 'a' - 1.
However, subtracting one Char from another gives an integer, e.g. 'a' - 'b'.
Note that for a Char and an integer both addition and subtraction are defined, but for two Chars only subtraction works:
julia> 'a' + 'a'
ERROR: MethodError: no method matching +(::Char, ::Char)
This can indeed lead to tricky cases that rely on the order of operations, as in this example:
julia> 'a' + ('a' - 'a')
'a': ASCII/Unicode U+0061 (category Ll: Letter, lowercase)
julia> 'a' + 'a' - 'a'
ERROR: MethodError: no method matching +(::Char, ::Char)
Also note that when working with Char and integer you cannot subtract Char from integer:
julia> 2 - 'a'
ERROR: MethodError: no method matching -(::Int64, ::Char)
Motivation:
subtraction of two chars - this is sometimes useful if you want to get the relative position of a char with reference to another char, e.g. c - '0' to convert a char to its decimal value if you know the char is a digit (see the sketch after this answer);
adding an integer to a char or subtracting an integer from a char - the same but in reverse, e.g. you want to convert a digit to a char and write '0' + d.
I have been using Julia for years now, and I used this feature maybe once or twice, so I would not say it is super commonly needed.
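
As a quick illustration of that parsing idiom, here is a sketch in Python rather than Julia (Python has no Char arithmetic, so ord and chr stand in for it):

# Parse a non-negative decimal string with the "digit = char - '0'" idiom.
def parse_decimal(s: str) -> int:
    value = 0
    for c in s:
        d = ord(c) - ord('0')       # distance from '0' in code point space
        if not 0 <= d <= 9:
            raise ValueError(f"not a digit: {c!r}")
        value = value * 10 + d
    return value

# The reverse direction, i.e. '0' + d.
def digit_to_char(d: int) -> str:
    return chr(ord('0') + d)

print(parse_decimal("423"))   # 423
print(digit_to_char(7))       # '7'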

how can I determine the decimal value of a 32bit word containing 4 hexadecimal values?

Suppose Byte 0 in RAM contains the value 0x12. Subsequent bytes contain 0x34, 0x45, and 0x78. On a Big-Endian system with a 32-bit word, what’s the decimal value of the word?
I know that for a Big Endian system the order of the word would be 0x78, 0x45, 0x34, 0x12. I converted each value to decimal and got 120, 69, 52, 18. I want to know, in order to get the decimal value of the word, do I add all these values together (120 + 69 + 52 + 18), or do I interpret them as digits in a decimal number (120695218)?
Do you know how to convert a single integer from hex to decimal? On a big-endian system you have an integer value of 0x12344578 = ... + 5*16^2 + 7*16^1 + 8*16^0.
If you were writing a computer program to print a word as decimal, you'd already have the word as a binary integer (hex is a human-readable serialization format for binary, not actually used internally), and you'd do repeated division by 10, using the remainder as the low digit each time (so you generate digits LSD-first, in reverse printing order).
And for a program, endianness wouldn't be an issue. You'd just do a word load to get the integer value of the word in a register.
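
A small Python sketch of both steps, using the byte values from the question:

# The four bytes as they sit in memory, lowest address first.
mem = bytes([0x12, 0x34, 0x45, 0x78])

# Big-endian: the byte at the lowest address is the most significant.
word = int.from_bytes(mem, byteorder="big")
print(hex(word))   # 0x12344578
print(word)        # 305415544

# Printing the decimal digits by repeated division by 10 (least significant digit comes out first).
digits = []
n = word
while n:
    n, r = divmod(n, 10)
    digits.append(str(r))
print("".join(reversed(digits)))   # 305415544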

What is this unpack doing? Can someone help me understand just a few letters?

I'm reading this code and I'm a tad confused as to what is going on. This code is using Ruby's OpenSSL library.
encrypted_message = cipher.update(address_string) + cipher.final
encrypted_message
=> "G\xCB\xE10prs\x1D\xA7\xD0\xB0\xCEmX\xDC#k\xDD\x8B\x8BB\xE1#!v\xF1\xDC\x19\xDD\xD0\xCA\xC9\x8B?B\xD4\xED\xA1\x83\x10\x1F\b\xF0A\xFEMBs'\xF3\xC7\xBC\x87\x9D_n\\z\xB7\xC1\xA5\xDA\xF4s \x99\\\xFD^\x85\x89s\e"
[3] pry(Encoder)> encrypted_message.unpack('H*')
=> ["47cbe1307072731da7d0b0ce6d58dc406bdd8b8b42e1232176f1dc19ddd0cac98b3f42d4eda183101f08f041fe4d427327f3c7bc879d5f6e5c7ab7c1a5daf47320995cfd5e8589731b"]
It seems that the H directive is this:
hex string (high nibble first)
How are the escaped characters in the encrypted_message transformed into letters and numbers?
I think the heart of the issue is that I don't understand this. What is going on?
['A'].pack('H')
=> "\xA0"
According to your question:
> ['A'].pack('H')
=> "\xA0"
A byte consists of 8 bits and a nibble consists of 4 bits, so a byte has two nibbles. The ASCII value of 'h' is 104, and 104 in hex is 68. That 68 is stored in two nibbles: the first nibble (4 bits) contains the value 6 and the second contains the value 8. We deal with the high nibble first, so going from left to right we pick the value 6 and then 8.
In the case above, the input 'A' is not ASCII 'A' but hex 'A'. Why hex 'A'? Because the directive 'H' tells pack to treat the input value as hex. Since 'H' is high nibble first and the input supplies only one nibble, the second nibble is zero, so the input effectively changes from ['A'] to ['A0'].
Since the byte value A0 is not a printable ASCII character, it is shown as is, and hence the result is "\xA0". The leading \x indicates a hex-escaped byte.
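
Here is the same nibble arithmetic sketched in Python (pack_H is a made-up helper, and it only models a single output byte, not the full directive):

# Rough model of pack('H'): hex digits, high nibble first,
# with a missing low nibble padded with zero.
def pack_H(hex_digits: str) -> bytes:
    padded = hex_digits.ljust(2, "0")     # 'A' -> 'A0'
    high = int(padded[0], 16)
    low = int(padded[1], 16)
    return bytes([(high << 4) | low])

print(pack_H("A"))    # b'\xa0'
print(pack_H("41"))   # b'A'  (0x41 is ASCII 'A')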

How MKI$ and CVI Functions works

I am working in GW-BASIC and want to know why CVI("aa") returns 24929. If it converted each char to its ASCII code, "aa" would give 97 and 97, i.e. 9797.
CVI converts between a GW-BASIC integer and its internal representation in bytes. That internal representation is a 16-bit little-endian signed integer, so that the value you find is the same as ASC("a") + 256*ASC("a"), which is 97 + 256*97, which is 24929.
MKI$ is the opposite operation of CVI, so that MKI$(24929) returns the string "aa".
The 'byte reversal' is a consequence of the little endianness of GW-BASIC's internal representation of integers: the leftmost byte of the representation is the least significant byte, whereas in hexadecimal notation you would write the most significant byte on the left.
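
The same interpretation can be sketched in Python, where struct's '<h' format is a 16-bit little-endian signed integer matching GW-BASIC's internal integer layout:

import struct

# CVI("aa"): reinterpret the two bytes as a 16-bit little-endian signed integer.
(value,) = struct.unpack("<h", b"aa")
print(value)                        # 24929  (= 97 + 256*97)

# MKI$(24929): the opposite direction, integer back to its two-byte string.
print(struct.pack("<h", 24929))     # b'aa'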

ruby pack and hex values

A nibble is four bits, which means there are 16 (2^4) possible values, so a nibble corresponds to a single hex digit, since hex is base 16. A byte is 8 bits (2^8 possible values) and can therefore be represented by 2 hex digits, and consequently by 2 nibbles.
So here below I have a 1-byte character:
'A'
That character is one byte (8 bits):
'A'.unpack('B*')
=> ["01000001"]
That means it should be represented by two hex digits:
binary 01000001 == hex 41
According to the Ruby documentation, for the Array method pack, when aTemplateString (the parameter) is equal to 'H', then it will return a hex string. But this is what I get back:
['A'].pack('H')
=> "\xA0"
My first point is that's not the hex value it should return; it should have returned the hex value 41. The second point is that, per the concept of a nibble explained above, for 1 byte it should return two nibbles. But above it inserts a 0, because it treats the input as having only 1 nibble, even though 'A' is one byte and has two nibbles. So clearly I am missing something here.
I think you want unpack:
'A'.unpack('H*') #=> ["41"]
pack does the opposite:
['41'].pack('H*') #=> "A"
It's tricky: ["1"].pack("H") => "\x10" and ["16"].pack("H") => "\x10". I spent a long time understanding this.
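
The same round trip in Python, where bytes.hex and bytes.fromhex play roughly the roles of unpack('H*') and pack('H*') (Ruby's directives have count and padding behaviour that this does not model):

# unpack('H*') analogue: bytes -> hex string, high nibble first.
print(b"A".hex())                   # '41'

# pack('H*') analogue: hex string -> bytes.
print(bytes.fromhex("41"))          # b'A'

# With only one hex digit supplied, the missing low nibble is padded with zero,
# which is why ['A'].pack('H') in Ruby produces "\xA0".
print(bytes([int("A", 16) << 4]))   # b'\xa0'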
