I'm generating a random number in the range 65 to 90 (which corresponds to the byte values of the uppercase ASCII letters). The random number generator returns an integer and I want to convert it to a byte.
When I say I want to convert the integer to a byte, I don't mean the byte representation of the number as text - i.e. I don't mean the int 66 becoming the bytes [54 54] (the digits '6' '6'). I mean: if the RNG returns the integer 66, I want a byte with the value 66 (which corresponds to an uppercase B).
Use the byte() conversion to convert an integer to a byte:
var n int = 66
b := byte(n) // b is a byte
fmt.Printf("%c %d\n", b, b) // prints B 66
You should be able to convert any of those integers to the corresponding character by simply doing character := string(rune(asciiNum)), where asciiNum is the integer you've generated; character will then be a one-character string holding the letter whose code point is the generated int, as the sketch below shows.
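Putting both answers together, a minimal runnable Go sketch (assuming math/rand supplies the random value; the variable names are illustrative, not from the question):
package main

import (
	"fmt"
	"math/rand"
)

func main() {
	asciiNum := rand.Intn(26) + 65      // random integer in [65, 90]
	b := byte(asciiNum)                 // the same value, now typed as a byte
	character := string(rune(asciiNum)) // one-character string for that code point
	fmt.Println(asciiNum, b, character) // e.g. 66 66 B
}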
Suppose Byte 0 in RAM contains the value 0x12. Subsequent bytes contain 0x34, 0x45, and 0x78. On a Big-Endian system with a 32-bit word, what’s the decimal value of the word?
I know that for a Big Endian system the order of the word would be 0x78, 0x45, 0x34, 0x12. I converted each value to decimal and got 120, 69, 52, 18. I want to know, in order to get the decimal value of the word, do I add all these values together (120 + 69 + 52 + 18), or do I interpret them as digits in a decimal number (120695218)?
Do you know how to convert a single integer from hex to decimal? On a big-endian system you have an integer value of 0x12344578 = ... + 5*16^2 + 7*16^1 + 8*16^0.
If you were writing a computer program to print a word as decimal, you'd already have the word as a binary integer (hex is a human-readable serialization format for binary, not actually used internally), and you'd do repeated division by 10, using the remainder as the low digit each time. (So you generate digits LSD first, in reverse printing order.)
And for a program, endianness wouldn't be an issue. You'd just do a word load to get the integer value of the word in a register.
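Neither adding the byte values nor stringing their decimal forms together gives the answer; the word is interpreted positionally. A short Go sketch of that word load (the byte values are the ones from the question):
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	mem := []byte{0x12, 0x34, 0x45, 0x78} // RAM contents, byte 0 first
	word := binary.BigEndian.Uint32(mem)  // big-endian: MSB at the lowest address
	fmt.Println(word)                     // 305415544, i.e. 0x12344578 in decimal
}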
I need to convert 2 bytes to an integer in VB6
I currently have the byte array as:
bytArray(0) = 26
bytArray(1) = 85
The resulting number, I assume, should be 21786.
I need these two bytes turned into an Integer so I can convert it to a Single and do additional arithmetic on it.
How do I get the integer of the 2 bytes?
If your assumed value is correct, the pair of array elements is stored in little-endian format. So the following would convert the two array elements into a signed 16-bit Integer.
Dim Sum As Integer
Sum = bytArray(0) + bytArray(1) * 256
Note that if the combined value would exceed 32,767 (i.e. bytArray(1) >= 128), you'll get an overflow error.
You don't have to convert to an Integer first; you can go directly to a Single, using the logic shown by @MarkL:
Dim Sngl As Single
Sngl = (bytArray(1) * 256!) + bytArray(0)
Edit: As @BillHileman notes, this will give an unsigned result. Do as he suggests to make it signed.
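For comparison, the same little-endian pairing written as a Go sketch rather than VB6 (the input bytes are the ones from the question):
package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	bytArray := []byte{26, 85}                // low byte first (little-endian)
	u := binary.LittleEndian.Uint16(bytArray) // 85*256 + 26
	fmt.Println(u)                            // 21786
	fmt.Println(int16(u))                     // signed view; goes negative once bytArray[1] >= 128
	fmt.Println(float32(u))                   // ready for further floating-point arithmetic
}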
When I say that 0b11111111 is a byte in Java, it says "cannot convert int to byte". As I understand it, that's because 0b11111111 is 255, and bytes in Java are signed and go from -128 to 127. But if a byte is just 8 bits of data, isn't 11111111 8 bits? I know 11111111 could be an integer, but in my situation it must be represented as a byte, because it must be sent to a file in byte form. So how do I send a byte with the bits 11111111 to a file (that, by the way, is my question)? Also, when I try printing the binary value of -1, I get 11111111111111111111111111111111. Why is that? I don't really understand how signed bytes work.
You need to cast the value to a byte:
byte b = (byte) 0b11111111;
The reason you need the cast is that 0b11111111 is an int literal (with a decimal value of 255) and it's outside the range of valid byte values (-128 to +127).
Java allows hex literals, but not binary. You can declare a byte with the binary value of 11111111 using this:
byte myByte = (byte) 0xFF;
You can use hex literals to store binary data in ints and longs as well.
Edit: you actually can have binary literals in Java 7 and up, my bad.
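The same signed/unsigned relationship, sketched in Go rather than Java for comparison (out.bin is a made-up file name):
package main

import (
	"fmt"
	"os"
)

func main() {
	b := byte(0xFF)         // all eight bits set: 1111 1111
	fmt.Printf("%08b\n", b) // 11111111
	fmt.Println(int8(b))    // -1: the same bit pattern read as a signed 8-bit value

	// sign-extending that -1 to 32 bits gives thirty-two ones, which is why
	// printing the binary value of the int -1 shows 32 set bits
	fmt.Printf("%032b\n", uint32(int32(int8(b))))

	// writing the raw byte to a file works the same way regardless of signedness
	if err := os.WriteFile("out.bin", []byte{b}, 0o644); err != nil {
		fmt.Println(err)
	}
}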
I am working with the following hex values representing different values from a GPS/GPRS plot. All are given as 32-bit integers.
For example:
296767 is the decimal value (unsigned) reported for hex number: 3F870400
Another one:
34.96987500 is the decimal float value (signed), given with a radian resolution of 10^(-8), reported for the hex number DA4DA303.
What is the process for transforming these hex numbers into their corresponding values in Ruby?
I've already tried pack/unpack with the directives L, H, and h. I also tried applying two's complement and converting to binary and then decimal, with no success.
If you are expecting an Integer value:
input = '3F870400'
output = input.scan(/../).reverse.join.to_i( 16 )
# 296767
If you are expecting degrees:
input = 'DA4DA303'
temp = input.scan(/../).reverse.join.to_i( 16 )
temp = ( (temp & 0x80000000) != 0 ? temp - 0x100000000 : temp ) # Handles negatives
output = temp * 180 / (Math::PI * 10 ** 8)
# 34.9698751282937
Explanation:
The hexadecimal string represents the bytes of an integer stored least-significant-byte first (little-endian). To store it as raw bytes you might use [296767].pack('V') - and if you had the raw bytes in the first place you would simply reverse that with binary_string.unpack('V'). However, you have a hex representation instead. There are a few different approaches you might take (including packing the hex back into bytes and unpacking it), but in the above I have chosen to manipulate the hex string into most-significant-byte-first form and use Ruby's String#to_i.
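If it helps to see the same decoding outside Ruby, here is a rough Go equivalent (decodeLE is a made-up helper name; the hex strings are the ones from the question):
package main

import (
	"encoding/binary"
	"encoding/hex"
	"fmt"
	"math"
)

// decodeLE turns a hex string holding a 32-bit little-endian value into a signed int32.
func decodeLE(s string) (int32, error) {
	raw, err := hex.DecodeString(s)
	if err != nil {
		return 0, err
	}
	return int32(binary.LittleEndian.Uint32(raw)), nil
}

func main() {
	n, _ := decodeLE("3F870400")
	fmt.Println(n) // 296767

	r, _ := decodeLE("DA4DA303")
	// the stored value is radians scaled by 10^8; convert to degrees
	fmt.Println(float64(r) * 180 / (math.Pi * 1e8)) // ~34.9698751
}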
I have the character "ö". If I look in this UTF-8 table I see it has the hex value F6. If I look in the Unicode table I see that "ö" has the indices E0 and 16. If I add both I get the hex value of the code point, F6. This is the binary value 1111 0110.
1) How do I get from the hex value F6 to the indices E0 and 16?
2) I don't know how to get from F6 to the two bytes C3 B6 ...
Because I didn't get the results I expected, I tried to go the other way. "ö" is represented in ISO-8859-1 as "Ã¶". In the UTF-8 table I can see that "Ã" has the decimal value 195 and "¶" has the decimal value 182. Converted to bits this is 1100 0011 1011 0110.
Process:
1) Look in a table and get the Unicode code point for the character "ö". Calculated from the indices E0 and 16 you get U+00F6.
2) According to the algorithm posted by wildplasser you can calculate the encoded UTF-8 bytes C3 and B6.
3) In binary form you get 1100 0011 1011 0110, which corresponds to the decimal values 195 and 182.
4) If these bytes are interpreted as ISO-8859-1 (one byte per character) then you get "Ã¶".
PS: I also found this link, which shows the values from step 2.
The pages you are using are confusing you somewhat. Neither your "UTF-8 table" nor your "Unicode table" is giving you the value of the code point in UTF-8. They are both simply listing the Unicode value of the characters.
In Unicode, every character ("code point") has a unique number assigned to it. The character ö is assigned the code point U+00F6, which is F6 in hexadecimal, and 246 in decimal.
UTF-8 is a representation of Unicode, using a sequence of between one and four bytes per Unicode code point. The transformation from 32-bit Unicode code points to UTF-8 byte sequences is described in that article - it is pretty simple to do, once you get used to it. Of course, computers do it all the time, but you can do it with a pencil and paper easily, and in your head with a bit of practice.
If you do that transformation, you will see that U+00F6 transforms to the UTF-8 sequence C3 B6, or 1100 0011 1011 0110 in binary, which is why that is the UTF-8 representation of ö.
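As a quick check of that transformation, a small Go sketch that builds the two UTF-8 bytes for U+00F6 by hand and compares them with Go's own conversion:
package main

import "fmt"

func main() {
	cp := rune(0x00F6) // code point U+00F6 ("ö")

	// two-byte UTF-8 form: 110xxxxx 10xxxxxx
	b1 := byte(0xC0 | (cp >> 6))        // 110 prefix + top 5 bits -> 0xC3
	b2 := byte(0x80 | (cp & 0x3F))      // 10 prefix + low 6 bits  -> 0xB6
	fmt.Printf("% X\n", []byte{b1, b2}) // C3 B6

	// Go's built-in rune-to-string conversion produces the same bytes
	fmt.Printf("% X\n", []byte(string(cp))) // C3 B6
}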
The other half of your question is about ISO-8859-1. This is a character encoding commonly called "Latin-1". The numeric values of the Latin-1 encoding are the same as the first 256 code points in Unicode, thus ö is F6 in Latin-1.
Once you have converted between UTF-8 and standard Unicode code points (UTF-32), it should be trivial to get the Latin-1 encoding. However, not all UTF-8 sequences / Unicode characters have corresponding Latin-1 characters.
See the excellent article The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) for a better understanding of character encodings and transformations between them.
unsigned cha_latin2utf8(unsigned char *dst, unsigned cha)
{
    if (cha < 0x80) { *dst = cha; return 1; }
    /* all 11-bit codepoints (0x0 -- 0x7ff)
    ** fit within a 2-byte utf8 char:
    ** first byte  = 110xxxxx := 0xc0 + (char >> 6)  (5 MSBs)
    ** second byte = 10xxxxxx := 0x80 + (char & 63)  (6 LSBs)
    */
    *dst++ = 0xc0 | ((cha >> 6) & 0x1f); /* 110 prefix + 5 high bits */
    *dst++ = 0x80 | (cha & 0x3f);        /* 10 prefix + 6 low bits */
    return 2; /* number of bytes produced */
}
To test it:
#include <stdio.h>

unsigned cha_latin2utf8(unsigned char *dst, unsigned cha); /* defined above */

int main(void)
{
    unsigned char buff[12];
    cha_latin2utf8(buff, 0xf6);
    fprintf(stdout, "%02x %02x\n"
        , (unsigned) buff[0] & 0xff
        , (unsigned) buff[1] & 0xff);
    return 0;
}
The result:
c3 b6