How MKI$ and CVI functions work - gw-basic

I am working on GW-BASIC and want to know how CVI("aa") returns 24929. Does it just convert each character to its ASCII code? The codes of "aa" would be 97 97.

CVI converts a two-character string, taken as the internal byte representation of a GW-BASIC integer, back into that integer. The internal representation is a 16-bit little-endian signed integer, so the value you get is ASC("a") + 256*ASC("a"), which is 97 + 256*97, which is 24929.
MKI$ is the opposite operation of CVI, so that MKI$(24929) returns the string "aa".
The 'byte reversal' is a consequence of the little endianness of GW-BASIC's internal representation of integers: the leftmost byte of the representation is the least significant byte, whereas in hexadecimal notation you would write the most significant byte on the left.
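Not GW-BASIC, but as a quick cross-check you can reproduce the same arithmetic with Python's struct module, which also packs and unpacks 16-bit little-endian signed integers (a sketch of the idea, not of MKI$/CVI themselves):
import struct
# CVI("aa"): read the two bytes of "aa" as a 16-bit little-endian signed integer
print(struct.unpack('<h', b'aa')[0])   # 24929, i.e. 97 + 256*97
# MKI$(24929): write the integer back out as its two-byte representation
print(struct.pack('<h', 24929))        # b'aa'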

Related

Char value to decimal data type in RPGLE

Why, when I move a char value 'CL' to a decimal data type, is it converted to -33?
I wrote (move 'CL' NmOfField)
and NmOfField is a decimal data type, and I found that the value in NmOfField is -33.
I want to know: why -33 specifically?
It has to do with the fact that the hex representation of 'CL' is X'C3D3' and the fact that NmOfField is a packed decimal type.
Say it is a packed(3:0) field and its value before the move is 0. Each character in 'CL' contributes its low (digit) nibble, so the absolute value is 33. The zone nibble of the last character, the second-to-last nibble D in X'C3D3', is taken as the sign, and in packed decimal D means minus, so the hexadecimal representation of NmOfField becomes X'033D' and the numeric value is -33. If NmOfField had been +123 (X'123F') before the move, it would have been -133 (X'133D') afterwards, because 'CL' is only two characters long (MOVE works right to left until there is no more room in the result, or no more characters in factor 2). If one of the digit nibbles had not been a decimal digit, an error would have been raised.
You should avoid the MOVE* operators because of that kind of surprise, among others, and prefer the free-form syntax instead.
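To make the nibble arithmetic concrete, here is a small Python sketch (not RPGLE, and a simplification of what MOVE actually does) that takes the low nibble of each EBCDIC character as a digit and the zone nibble of the last character as the sign:
# EBCDIC bytes of 'CL'
chars = [0xC3, 0xD3]
digits = [b & 0x0F for b in chars]      # digit nibbles: [3, 3]
sign_nibble = chars[-1] >> 4            # zone of the last char: 0xD
sign = -1 if sign_nibble == 0xD else 1  # D is the packed-decimal minus sign
value = sign * int(''.join(str(d) for d in digits))
print(value)                            # -33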

how can I determine the decimal value of a 32bit word containing 4 hexadecimal values?

Suppose Byte 0 in RAM contains the value 0x12. Subsequent bytes contain 0x34, 0x45, and 0x78. On a Big-Endian system with a 32-bit word, what’s the decimal value of the word?
I know that for a Big Endian system the order of the word would be 0x78, 0x45, 0x34, 0x12. I converted each value to decimal and got 120, 69, 52, 18. I want to know, in order to get the decimal value of the word, do I add all these values together (120 + 69 + 52 + 18), or do I interpret them as digits in a decimal number (120695218)?
Do you know how to convert a single integer from hex to decimal? On a big-endian system you have an integer value of 0x12344578 = ... + 5*16^2 + 7*16^1 + 8*16^0.
If you were writing a computer program to print a word as decimal, you'd already have the word as a binary integer (hex is a human-readable serialization format for binary, not actually used internally), and you'd do repeated division by 10, using the remainder as the low digit each time. (So you generate digits LSD first, in reverse printing order.)
And for a program, endianness wouldn't be an issue. You'd just do a word load to get the integer value of the word in a register.
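As an illustration (Python here, but the idea is language-independent), you can build the word from the bytes big-endian and then generate the decimal digits by repeated division by 10:
# the four bytes as they sit in RAM, byte 0 first
ram = bytes([0x12, 0x34, 0x45, 0x78])
# big-endian: byte 0 is the most significant byte
word = int.from_bytes(ram, byteorder='big')
print(hex(word), word)                   # 0x12344578 305415544
# decimal digits come out least-significant first, so reverse them to print
n, digits = word, []
while n:
    n, r = divmod(n, 10)
    digits.append(str(r))
print(''.join(reversed(digits)))         # 305415544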

How to convert a signed integer to an unsigned integer in Ruby?

I have a number that I received from a C program that came to me as a negative number:
-1771632774
It's supposed to be this number:
2523334522
I realized that this must be due to some conversion from a signed integer to an unsigned integer. Now that I have this negative number in Ruby, how can I convert it back to the unsigned version?
Put the negative integer in an array. Call pack with an argument of 'L' which represents "32-bit unsigned, native endian (uint32_t)". Call unpack with the same argument. Finally, get the number out of the array.
[-1771632774].pack('L').unpack('L').first
#=> 2523334522
http://ruby-doc.org/core-2.4.0/Array.html#method-i-pack
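The same reinterpretation of the 32 bits can be sketched outside Ruby as well; in Python, for example, reducing modulo 2**32 (or round-tripping through struct) gives the unsigned view of the same value:
import struct
signed = -1771632774
print(signed % 2**32)                                      # 2523334522
print(struct.unpack('<I', struct.pack('<i', signed))[0])   # 2523334522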

What is this unpack doing? Can someone help me understand just a few letters?

I'm reading this code and I'm a tad confused as to what is going on. This code is using Ruby's OpenSSL library.
encrypted_message = cipher.update(address_string) + cipher.final
encrypted_message
=> "G\xCB\xE10prs\x1D\xA7\xD0\xB0\xCEmX\xDC#k\xDD\x8B\x8BB\xE1#!v\xF1\xDC\x19\xDD\xD0\xCA\xC9\x8B?B\xD4\xED\xA1\x83\x10\x1F\b\xF0A\xFEMBs'\xF3\xC7\xBC\x87\x9D_n\\z\xB7\xC1\xA5\xDA\xF4s \x99\\\xFD^\x85\x89s\e"
[3] pry(Encoder)> encrypted_message.unpack('H*')
=> ["47cbe1307072731da7d0b0ce6d58dc406bdd8b8b42e1232176f1dc19ddd0cac98b3f42d4eda183101f08f041fe4d427327f3c7bc879d5f6e5c7ab7c1a5daf47320995cfd5e8589731b"]
It seems that the H directive is this:
hex string (high nibble first)
How are the escaped characters in the encrypted_message transformed into letters and numbers?
I think the heart of the issue is that I don't understand this. What is going on?
['A'].pack('H')
=> "\xA0"
Here is a good explanation of Ruby's pack and unpack methods.
Regarding the example in your question:
> ['A'].pack('H')
=> "\xA0"
A byte consists of 8 bits, and a nibble consists of 4 bits, so a byte has two nibbles. The ASCII value of 'h' is 104, and the hex value of 104 is 68. This 68 is stored in two nibbles: the first nibble (4 bits) contains the value 6 and the second contains the value 8. We deal with the high nibble first, so going from left to right we pick the value 6 and then 8.
In the case above, the input 'A' is not the ASCII character 'A' but the hex digit 'A'. Why hex? Because the directive 'H' tells pack to treat the input as hex digits. Since 'H' is high nibble first and the input supplies only one nibble, the second nibble is taken to be zero, so the input effectively changes from ['A'] to ['A0'].
Since the byte value A0 does not correspond to a printable ASCII character, it is shown as an escape, and hence the result is "\xA0". The leading \x indicates that the value is given in hex.
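The same "high nibble first" hex view is easy to reproduce outside Ruby; a Python sketch of what unpack('H*') and pack('H') are doing with the bytes:
# unpack('H*'): render each byte as two hex digits, high nibble first
data = b'G\xcb\xe10'
print(data.hex())                # '47cbe130' (matches the start of the pry output)
# pack('H'): 'A' is read as one hex digit and placed in the high nibble;
# the missing low nibble is filled with zero, giving the byte 0xA0
print(bytes.fromhex('A0'))       # b'\xa0'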

How to represent 11111111 as a byte in java

When I say that 0b11111111 is a byte in Java, it says "cannot convert int to byte", which is because, as I understand it, 11111111 is 255, and bytes in Java are signed and go from -128 to 127. But if a byte is just 8 bits of data, isn't 11111111 8 bits? I know 11111111 could be an integer, but in my situation it must be represented as a byte, because it must be sent to a file in byte form. So how do I send a byte with the bits 11111111 to a file? (That is my actual question.) Also, when I try printing the binary value of -1, I get 11111111111111111111111111111111. Why is that? I don't really understand how signed bytes work.
You need to cast the value to a byte:
byte b = (byte) 0b11111111;
The reason you need the cast is that 0b11111111 is an int literal (with a decimal value of 255) and it's outside the range of valid byte values (-128 to +127).
Java allows hex literals, but not binary. You can declare a byte with the binary value of 11111111 using this:
byte myByte = (byte) 0xFF;
You can use hex literals to store binary data in ints and longs as well.
Edit: you actually can have binary literals in Java 7 and up, my bad.
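As for the 32 ones when printing -1: in two's complement the byte 0b11111111 is -1, and when that value is widened to a 32-bit int the sign bit is copied into all the new high bits. A quick Python illustration of the bit patterns (just mirroring, by masking, what Java's sign extension does):
print(format(0b11111111, '08b'))        # 11111111
print(format(-1 & 0xFF, '08b'))         # 11111111, the same 8-bit pattern as -1
# sign-extending to 32 bits fills the upper bits with ones, which is why
# Integer.toBinaryString(-1) prints 32 ones
print(format(-1 & 0xFFFFFFFF, '032b'))  # 11111111111111111111111111111111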
