Convert HEX 32 bit from GPS plot on Ruby - ruby

I am working with the following HEX values representing different values from a GPS/GPRS plot. All are given as 32 bit integer.
For example:
296767 is the decimal value (unsigned) reported for hex number: 3F870400
Another one:
34.96987500 is the decimal float value (signed), given in radians at 10^(-8) resolution, reported for hex number: DA4DA303.
What is the process for transforming these hex numbers into their corresponding values in Ruby?
I've already tried pack/unpack with the directives L, H, and h. I also tried applying two's complement, and converting to binary and then decimal, with no success.

If you are expecting an Integer value:
input = '3F870400'
output = input.scan(/../).reverse.join.to_i( 16 )
# 296767
If you are expecting degrees:
input = 'DA4DA303'
temp = input.scan(/../).reverse.join.to_i( 16 )
temp = ( temp & 0x80000000 > 0 ? temp - 0x100000000 : temp ) # Two's complement: handles negatives
output = temp * 180 / (Math::PI * 10 ** 8)
# 34.9698751282937
Explanation:
The hexadecimal string represents the bytes of an integer stored least-significant byte first (little-endian). To store 296767 as raw bytes you might use [296767].pack('V'); if you had the raw bytes in the first place, you would simply reverse that with binary_string.unpack('V'). However, you have a hex representation instead. There are a few different approaches you might take (including converting the hex back into bytes and unpacking it), but above I have chosen to manipulate the hex string into most-significant-byte-first form and use Ruby's String#to_i.
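If you do want the pack/unpack route just mentioned, a sketch (using String#unpack1, available since Ruby 2.4) might look like:

```ruby
# Turn the hex string into its raw bytes with 'H*', then unpack those bytes
# as a 32-bit little-endian integer: 'V' for unsigned, 'l<' for signed.
unsigned = ['3F870400'].pack('H*').unpack1('V')  # => 296767
signed   = ['DA4DA303'].pack('H*').unpack1('l<') # 'l<' handles negatives for you
degrees  = signed * 180 / (Math::PI * 10**8)     # => 34.9698751282937
```

The 'l<' directive spares you the manual two's-complement step, since it already interprets the 4 bytes as a signed little-endian value.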

Related

How do I convert an integer to a byte in Go

I'm generating a random number in the range of 65 to 90 (which corresponds to the byte values of the uppercase ASCII characters). The random number generator returns an integer value and I want to convert it to a byte.
When I say I want to convert the integer to a byte, I don't mean the byte representation of the number - i.e. I don't mean int 66 becoming byte [54 54]. I mean, if the RNG returns the integer 66, I want a byte with the value 66 (which would correspond to an uppercase B).
Use the byte() conversion to convert an integer to a byte:
var n int = 66
b := byte(n) // b is a byte
fmt.Printf("%c %d\n", b, b) // prints B 66
You should also be able to convert any of those integers to a one-character string by doing character := string(rune(asciiNum)), where asciiNum is the integer you've generated; character will then contain the character whose code point is that int. Note that this yields a string, not a byte, and that since Go 1.15 go vet flags a plain string(asciiNum) conversion, hence the explicit rune().
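Putting the two together, a minimal sketch of the random-uppercase-letter case (the variable names here are mine, not from the question):

```go
package main

import (
	"fmt"
	"math/rand"
)

func main() {
	// A random integer in 65..90, the code points of 'A'..'Z'.
	n := 65 + rand.Intn(26)

	b := byte(n)         // the single byte with value n
	s := string(rune(n)) // a one-character string; note rune(), not a plain int

	fmt.Printf("%c %q\n", b, s) // e.g. prints: B "B" when n is 66
}
```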

How can I determine the decimal value of a 32-bit word containing 4 hexadecimal values?

Suppose Byte 0 in RAM contains the value 0x12. Subsequent bytes contain 0x34, 0x45, and 0x78. On a Big-Endian system with a 32-bit word, what’s the decimal value of the word?
I know that for a Big Endian system the order of the word would be 0x78, 0x45, 0x34, 0x12. I converted each value to decimal and got 120, 69, 52, 18. I want to know, in order to get the decimal value of the word, do I add all these values together (120 + 69 + 52 + 18), or do I interpret them as digits in a decimal number (120695218)?
Do you know how to convert a single integer from hex to decimal? On a big-endian system you have an integer value of 0x12344578 = ... + 5*16^2 + 7*16^1 + 8*16^0.
If you were writing a computer program to print a word as decimal, you'd already have the word as a binary integer (hex is a human-readable serialization format for binary, not what's actually used internally), and you'd do repeated division by 10, using the remainder as the low digit each time. (So you generate digits LSD-first, in reverse printing order.)
And for a program, endianness wouldn't be an issue. You'd just do a word load to get the integer value of the word in a register.
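The repeated-division idea can be sketched in a few lines (Ruby here, purely for illustration):

```ruby
word = 0x12344578          # the big-endian word from the question
digits = []
until word.zero?
  word, digit = word.divmod(10)  # divide by 10; the remainder is the next digit
  digits.unshift(digit)          # digits come out LSD-first, so prepend
end
digits.join                # => "305415544", i.e. 0x12344578 in decimal
```

So the answer to the question is neither a sum nor a concatenation of the per-byte decimal values: the whole word is one positional base-16 number, converted as a unit.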

Converting 2 bytes to an integer in VB 6

I need to convert 2 bytes to an integer in VB6
I currently have the byte array as:
bytArray(0) = 26
bytArray(1) = 85
the resulting number I assume should be 21786
I need these 2 turned into an integer so I can convert to a single and do additional arithmetic on it.
How do I get the integer of the 2 bytes?
If your assumed value is correct, the pair of array elements are stored in little endian format. So the following would convert the two array elements into a signed short integer.
Dim Sum As Integer
Sum = bytArray(0) + bytArray(1) * 256
Note that if your elements would sum to more than 32,767 (bytArray(1) >= 128), you'll see an overflow exception occur.
You don't have to convert to an integer first; you can go directly to a Single, using the logic shown by @MarkL:
Dim Sngl as Single
Sngl = (bytArray(1) * 256!) + bytArray(0)
Edit: As @BillHileman notes, this will give an unsigned result. Do as he suggests to make it signed.
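The little-endian interpretation assumed above can be cross-checked in another language; for example, a quick Ruby sketch:

```ruby
bytes = [26, 85]
bytes[0] + bytes[1] * 256            # => 21786, the value assumed in the question
bytes.pack('C2').unpack1('s<')       # => 21786 ('s<' = signed 16-bit little-endian)
[26, 255].pack('C2').unpack1('s<')   # => -230: with the high bit set, the signed
                                     #    reading goes negative (the VB6 overflow case)
```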

convert 9.2532 decimal into binary matlab

I used de2bi(x), but it only works for integers; I want to convert a number with a fractional part into binary.
de2bi(x)
For example: convert 9.2553, including its fraction part, into binary format in MATLAB, if possible with inline code rather than a function file, showing the output.
MATLAB, of course, already stores double values in binary using IEEE-754 binary64 format. All we have to do is somehow get MATLAB to show us the bits.
One way is to use typecast which makes MATLAB interpret a set of memory locations as a different type. In this case, we'll make MATLAB think a double is a uint64 and then send the "integer" through dec2bin. We'll have to do some decomposition on the string after that to get the actual value.
Note: This currently only works with positive values. If you need negative values too, I'll have to make some adjustments.
function binstr = double2bin(d)
d = double(d); % make sure the input is a double-precision float
ieee754_d = dec2bin(typecast(d, 'uint64'),64); % read double as uint64
% IEEE-754 64-bit double:
% bit 1 (msb) = sign bit (we'll ignore this for now)
% bits 2-12 = exponent with bias of 1023
% bits 13-64 = significand with leading 1 removed (implicit)
exponent = bin2dec(ieee754_d(2:12))-1022; % 2^n has n+1 bits
significand = ['1' ieee754_d(13:64)];
if (exponent < 1) % d < 1, so we'll need to pad with zeros
binstr = ['0.' repmat('0',1,-exponent) significand];
else % d >= 1; move exponent bits to the left of binary point
binstr = [significand(1:exponent) '.' significand(exponent+1:end)];
end
end
Test run:
>> double2bin(9.2532)
ans = 1001.0100000011010001101101110001011101011000111000100
Ad hoc solution:
Expand by 2^44 (to get an integer value).
Convert integer result to binary.
Reduce by 2^44 by placing "decimal" point.
(2^44 is the smallest power of 2 expansion that gives an integer result).
Code sample:
expandedRes = dec2bin(9.2553*2^44);
res = [expandedRes(1:end-44), '.', expandedRes(end-43:end)];
Result:
res =
1001.01000001010110110101011100111110101010110011
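The typecast trick from the first answer has close analogues in other languages; for instance, a Ruby sketch of the same reinterpret-the-double's-bits idea (positive values with d >= 1 only, mirroring the caveats above):

```ruby
# 'G' packs a double big-endian; 'B64' reads those 8 bytes MSB-first, giving the
# raw IEEE-754 bit string: 1 sign bit, 11 exponent bits, 52 significand bits.
bits = [9.2532].pack('G').unpack1('B64')
exponent    = bits[1, 11].to_i(2) - 1022   # same bias trick as the MATLAB code
significand = '1' + bits[12, 52]           # restore the implicit leading 1
significand.insert(exponent, '.')          # => "1001.0100000011..."
```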

How to translate Text to Binary with Cocoa?

I'm making a simple Cocoa program that can encode text to binary and decode it back to text. I tried to make this script and I was not even close to accomplishing this. Can anyone help me? This has to include two textboxes and two buttons or whatever is best, Thanks!
There are two parts to this.
The first is to encode the characters of the string into bytes. You do this by sending the string a dataUsingEncoding: message. Which encoding you choose will determine which bytes it gives you for each character. Start with NSUTF8StringEncoding, and then experiment with other encodings, such as NSUnicodeStringEncoding, once you get it working.
The second part is to convert every bit of every byte into either a '0' character or a '1' character, so that, for example, the letter A, encoded in UTF-8 to a single byte, will be represented as 01000001.
So, converting characters to bytes, and converting bytes to characters representing bits. These two are completely separate tasks; the second part should work correctly for any stream of bytes, including any valid stream of encoded characters, any invalid stream of encoded characters, and indeed anything that isn't text at all.
The first part is easy enough:
- (NSString *) stringOfBitsFromEncoding:(NSStringEncoding)encoding
ofString:(NSString *)inputString
{
//Encode the characters to bytes using the given encoding. The bytes are contained in an NSData object, which we receive.
NSData *data = [inputString dataUsingEncoding:encoding];
//I did say these were two separate jobs.
return [self stringOfBitsFromData:data];
}
For the second part, you'll need to loop through the bytes of the data. A C for loop will do the job there, and that will look like this:
//This is the method we're using above. I'll leave out the method signature and let you fill that in.
- …
{
//Find out how many bytes the data object contains.
NSUInteger length = [data length];
//Get the pointer to those bytes. “const” here means that we promise not to change the values of any of the bytes. (The compiler may give a warning if we don't include this, since we're not allowed to change these bytes anyway.)
const char *bytes = [data bytes];
//We'll store the output here. There are 8 bits per byte, and we'll be putting in one character per bit, so we'll tell NSMutableString that it should make room for (the number of bytes times 8) characters.
NSMutableString *outputString = [NSMutableString stringWithCapacity:length * 8];
//The loop. We start by initializing i to 0, then increment it (add 1 to it) after each pass. We keep looping as long as i < length; when i >= length, the loop ends.
for (NSUInteger i = 0; i < length; ++i) {
char thisByte = bytes[i];
//Loop from the highest bit down to the lowest, so that 'A' comes out as 01000001 rather than reversed. (Use a signed type for the counter here: an unsigned one can never go below zero, so the loop would never end.)
for (int bitNum = 7; bitNum >= 0; --bitNum) {
//Call a function, which I'll show the definition of in a moment, that will get the value of a bit at a given index within a given character.
bool bit = getBitAtIndex(thisByte, bitNum);
//If this bit is a 1, append a '1' character; if it is a 0, append a '0' character.
[outputString appendFormat:@"%c", bit ? '1' : '0'];
}
}
return outputString;
}
Bits 101 (or, 1100101)
Bits are literally just digits in base 2. Humans in the Western world usually write out numbers in base 10, but a number is a number no matter what base it's written in, and every character, and every byte, and indeed every bit, is just a number.
Digits—including bits—are counted up from the lowest place, according to the exponent to which the base is raised to find the magnitude of that place. We want bits, so that base is 2, so our place values are:
2^0 = 1: The ones place (the lowest bit)
2^1 = 2: The twos place (the next higher bit)
2^2 = 4: The fours place
2^3 = 8: The eights place
And so on, up to 2^7. (Note that the highest exponent is exactly one lower than the number of digits we're after; in this case, 7 vs. 8.)
If that all reminds you of reading about “the ones place”, “the tens place”, “the hundreds place”, etc. when you were a kid, it should: it's the exact same principle.
So a byte such as 65, which (in UTF-8) completely represents the character 'A', is the sum of:
2^7 × 0 = 0
+ 2^6 × 1 = 64
+ 2^5 × 0 = 0
+ 2^4 × 0 = 0
+ 2^3 × 0 = 0
+ 2^2 × 0 = 0
+ 2^1 × 0 = 0
+ 2^0 × 1 = 1
= 0 + 64 + 0 + 0 + 0 + 0 + 0 + 1
= 64 + 1
= 65
Back when you learned base 10 numbers as a kid, you probably noticed that ten is “10”, one hundred is “100”, etc. This is true in base 2 as well: as 10^x is “1” followed by x “0”s in base 10, so is 2^x “1” followed by “x” 0s in base 2. So, for example, sixty-four in base 2 is “1000000” (count the zeroes and compare to the table above).
We are going to use these exact-power-of-two numbers to test each bit in each input byte.
Finding the bit
C has a pair of “shift” operators that will insert zeroes or remove digits at the low end of a number. The former is called “shift left” and is written as <<; its opposite, “shift right”, is written as >>.
We want shift left. We want to shift 1 left by the number of the bit we're after. That is exactly equivalent to raising 2 (our base) to the power of that number; for example, 1 << 6 = 2^6 = “1000000”.
Testing the bit
C has an operator for bit testing, too; it's &, the bitwise AND operator. (Do not confuse this with &&, which is the logical AND operator. && is for using whole true/false values in making decisions; & is one of your tools for working with bits within values.)
Strictly speaking, & does not test single bits; it goes through the bits of both input values, and returns a new value whose bits are the bitwise AND of each input pair. So, for example,
01100101
& 00101011
----------
00100001 (decimal 33)
Each bit in the output is 1 if and only if both of the corresponding input bits were also 1.
Putting these two things together
We're going to use the shift left operator to give us a number where one bit, the nth bit, is set—i.e., 2^n—and then use the bitwise AND operator to test whether the same bit is also set in our input byte.
//This is a C function that takes a char and an int, promising not to change either one, and returns a bool.
bool getBitAtIndex(const char byte, const int bitNum)
//It could also be a method, which would look like this:
//- (bool) bitAtIndex:(const int)bitNum inByte:(const char)byte
//but you would have to change the code above. (Feel free to try it both ways.)
{
//Find 2^bitNum, which will be a number with exactly 1 bit set. For example, when bitNum is 6, this number is “1000000”—a single 1 followed by six 0s—in binary.
const int powerOfTwo = 1 << bitNum;
//Test whether the same bit is also set in the input byte.
bool bitIsSet = byte & powerOfTwo;
return bitIsSet;
}
A bit of magic I should acknowledge
The bitwise AND operator does not evaluate to a single bit—it does not evaluate to only 1 or 0. Remember the above example, in which the & operator returned 33.
The bool type is a bit magic: Any time you convert any value to bool, it automatically becomes either 1 or 0. Anything that is not 0 becomes 1; anything that is 0 becomes 0.
The Objective-C BOOL type does not do this, which is why I used bool in the code above. You are free to use whichever you prefer, except that you generally should use BOOL whenever you deal with anything that expects a BOOL, particularly when overriding methods in subclasses or implementing protocols. You can convert back and forth freely, though not losslessly (since bool will change non-zero values as described above).
Oh yeah, you said something about text boxes too
When the user clicks on your button, get the stringValue of your input field, call stringOfBitsFromEncoding:ofString: using a reasonable encoding (such as UTF-8) and that string, and set the resulting string as the new stringValue of your output field.
Extra credit: Add a pop-up button with which the user can choose an encoding.
Extra extra credit: Populate the pop-up button with all of the available encodings, without hard-coding or hard-nibbing a list.
