Converting 2 bytes to an integer in VB 6 - vb6

I need to convert 2 bytes to an integer in VB6.
I currently have the byte array as:
bytArray(0) = 26
bytArray(1) = 85
The resulting number, I assume, should be 21786.
I need these 2 bytes turned into an integer so I can convert it to a Single and do additional arithmetic on it.
How do I get the integer value of the 2 bytes?

If your assumed value is correct, the pair of array elements is stored in little-endian format, so the following will convert the two array elements into a signed 16-bit integer (VB6's Integer type).
Dim Sum As Integer
Sum = bytArray(0) + bytArray(1) * 256
Note that if the combined value would exceed 32,767 (i.e., bytArray(1) >= 128), the calculation will raise an overflow error.
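For reference, here is the same little-endian combination sketched in Python rather than VB6 (the function name bytes_to_int16 is mine), including the two's-complement adjustment that sidesteps the overflow mentioned above:
def bytes_to_int16(lo, hi):
    # Combine a little-endian byte pair into an unsigned 16-bit value.
    value = lo + hi * 256
    # Reinterpret as a signed 16-bit integer (two's complement), which is
    # what a VB6 Integer would hold once the overflow case is handled.
    if value >= 32768:
        value -= 65536
    return value

print(bytes_to_int16(26, 85))   # 21786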

You don't have to convert to an Integer first; you can go directly to a Single, using the logic shown by @MarkL:
Dim Sngl As Single
Sngl = (bytArray(1) * 256!) + bytArray(0)
Edit: As @BillHileman notes, this gives an unsigned result. Do as he suggests to make it signed.

Related

Creating hash function to map 6 numbers to a short string

I have 6 variables 0 ≤ n₁,...,n₆ ≤ 12 and I'd like to build a hash function to do the direct mapping D(n₁,n₂,n₃,n₄,n₅,n₆) = S and another function to do the inverse mapping I(S) = (n₁,n₂,n₃,n₄,n₅,n₆), where S is a string (a-z, A-Z, 0-9).
My goal is to keep the length of S to 3 characters or less.
I thought that, since the variables have 13 possible values, a single letter (a-z) should be able to represent 2 of them, but I realized that e.g. 1 + 12 and 2 + 11 map to the same result, so I still don't know how to write such a function.
Is there any approach to build a function that does this mapping and returns a small string?
Using the whole ASCII to represent S is an option if it's necessary.
You can convert a set of numbers in any given range to numbers in any other range using base conversion.
Binary is base 2 (0-1), decimal is base 10 (0-9). Your 6 numbers are base 13 (0-12).
Checking whether a conversion would be possible involves counting the number of possible combinations of values for each set. With each number in the range [0,n] (thus base n+1), we can go from all 0's to all n's, thus each number can take on n+1 values and the total number of possibilities is (n+1)^numberCount. For 6 decimal digits, for example, it would be 10^6 = 1000000, which checks out, since there are 1000000 possible numbers with (at most) 6 digits, i.e. numbers < 1000000.
Lower- and uppercase letters and numbers (26+26+10) would be base 62 (0-61), but, following from the above, 3 such values would be insufficient to represent your 6 numbers (13^6 > 62^3). To do conversion from/to these, you can do the conversion to a set of base 62 numbers, then have appropriate if-statements to convert 0-9 <=> 0-9, a-z <=> 10-35, A-Z <=> 36-61.
You can represent your data in 3 bytes (since 256^3 >= 13^6), although these wouldn't necessarily be printable characters: 32-126 is considered the standard printable range (which is still too small a range here), and 128-255 is the extended range, which may not be displayed properly in any given environment. To give the best chance of displaying it properly, you should at least avoid 0-31 and 127, which are control characters; you can map a value 0-... into the allowed ranges by adding 32, and then adding another 1 if the result is >= 127.
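As a quick check of the counts above (a few lines of Python, though it is really just arithmetic), you can also see how many base-62 characters would be enough:
print(13 ** 6)   # 4826809  possible combinations of the 6 numbers
print(62 ** 3)   # 238328   -> 3 base-62 characters are too few
print(62 ** 4)   # 14776336 -> 4 base-62 characters are enough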
Many or most languages let you construct a character from a numeric value, so it should be fairly simple to output the result once you've done the base conversion, although some languages use Unicode to represent characters, which could make working with raw ASCII values a bit less trivial.
If the numbers had specific constraints, that would reduce the number of possible combinations, thus possibly making it fit into a smaller set or range of numbers.
To do the actual base conversion:
It might be simplest to first convert it to a regular integral type (typically binary or decimal), where we don't have to worry about the base, and then convert it to the target base (although first make sure your value will fit in whichever data type you're using).
Consider how binary works:
1101 is 13 = 2^3 + 2^2 + 2^0
13 % 2 = 1,  13 / 2 = 6
 6 % 2 = 0,   6 / 2 = 3
 3 % 2 = 1,   3 / 2 = 1
 1 % 2 = 1
Reading the remainders above from bottom to top gives 1101 = our number.
Using the same idea, we can convert to/from any base as follows: (pseudo-code)
int convertFromBase(array, base):
    output = 0
    for each i in array            // most significant digit first
        output = base*output + i
    return output

int[] convertToBase(num, base):
    output = []
    while num > 0
        output.append(num % base)
        num /= base                // integer division
    output.reverse()               // most significant digit first
    return output
You can also extend this logic to situations where each number is in a different range by changing what you divide or multiply by at each step (a detailed explanation of that is perhaps a bit beyond the scope of the question).
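To make the pseudo-code concrete, here is a small Python sketch (the helper names and the ordering of the 62-character alphabet are my own choices) that packs the 6 base-13 digits into a single integer, rewrites it in base 62 as a string of at most 4 characters, and also does the inverse mapping:
ALPHABET = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'

def encode(nums):
    # Pack six base-13 digits (most significant first) into one integer...
    value = 0
    for n in nums:
        value = value * 13 + n
    # ...then rewrite that integer in base 62, padded to a fixed width of 4.
    out = []
    while value > 0:
        out.append(ALPHABET[value % 62])
        value //= 62
    return ''.join(reversed(out)).rjust(4, '0')

def decode(s):
    # Reverse both base conversions.
    value = 0
    for ch in s:
        value = value * 62 + ALPHABET.index(ch)
    nums = []
    for _ in range(6):
        nums.append(value % 13)
        value //= 13
    return list(reversed(nums))

print(encode([1, 2, 3, 4, 5, 12]))           # '1Pmn'
print(decode(encode([1, 2, 3, 4, 5, 12])))   # [1, 2, 3, 4, 5, 12]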
I thought as the variables have 13 possible values, a single letter
(a-z) should be able to represent 2 of them
This reasoning is wrong. In fact, to represent two such variables (i.e., any combination of values they might take) you will need 13 × 13 = 169 symbols.
For your example, the 6 variables can take 13^6 (= 4826809) different combinations. In order to represent all possible combinations you will need 5 letters (a-z), since 26^5 (= 11881376) is the smallest power of 26 that yields more than 13^6 combinations.
For ASCII characters 3 symbols should suffice since 256^3 > 13^6.
If you are still interested in code that does the conversion, I will be happy to help.

convert 9.2532 decimal into binary matlab

I use de2bi(x), but it only works for integers. I want to convert a number with a fractional part into binary.
For example, how can I convert 9.2553, including its fractional part, into binary format in MATLAB? If possible, I'd like plain code rather than a separate function file, with the output shown.
MATLAB, of course, already stores double values in binary using IEEE-754 binary64 format. All we have to do is somehow get MATLAB to show us the bits.
One way is to use typecast which makes MATLAB interpret a set of memory locations as a different type. In this case, we'll make MATLAB think a double is a uint64 and then send the "integer" through dec2bin. We'll have to do some decomposition on the string after that to get the actual value.
Note: This currently only works with positive values. If you need negative values too, I'll have to make some adjustments.
function binstr = double2bin(d)
    d = double(d); % make sure the input is a double-precision float
    ieee754_d = dec2bin(typecast(d, 'uint64'), 64); % read double as uint64
    % IEEE-754 64-bit double:
    %   bit 1 (msb) = sign bit (we'll ignore this for now)
    %   bits 2-12   = exponent with bias of 1023
    %   bits 13-64  = significand with leading 1 removed (implicit)
    exponent = bin2dec(ieee754_d(2:12)) - 1022; % 2^n has n+1 bits
    significand = ['1' ieee754_d(13:64)];
    if (exponent < 1) % d < 1, so we'll need to pad with zeros
        binstr = ['0.' repmat('0',1,-exponent) significand];
    else % d >= 1; move exponent bits to the left of binary point
        binstr = [significand(1:exponent) '.' significand(exponent+1:end)];
    end
end
Test run:
>> double2bin(9.2532)
ans = 1001.0100000011010001101101110001011101011000111000100
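The same reinterpret-the-bits idea can be sketched in Python with the standard struct module, purely as a cross-check of the technique (this is not MATLAB code): pack the double into its 8 raw bytes and read them back as a 64-bit unsigned integer.
import struct

def double_bits(d):
    # Reinterpret the 8 bytes of an IEEE-754 double as an unsigned 64-bit integer.
    (as_int,) = struct.unpack('>Q', struct.pack('>d', d))
    return format(as_int, '064b')

bits = double_bits(9.2532)
# 1 sign bit, 11 exponent bits (bias 1023), 52 significand bits
print(bits[0], int(bits[1:12], 2) - 1023, '1.' + bits[12:])   # 0 3 1.00101000...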
Ad hoc solution:
Expand by 2^44 (to get an integer value).
Convert integer result to binary.
Reduce by 2^44 by placing "decimal" point.
(2^44 is the smallest power of 2 expansion that gives an integer result).
Code sample:
expandedRes = dec2bin(9.2553*2^44);
res = [expandedRes(1:end-44), '.', expandedRes(end-43:end)];
Result:
res =
1001.01000001010110110101011100111110101010110011
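The ad hoc approach can likewise be sketched in Python, just for comparison (the 44 fractional bits come from the answer above, and rounding of the last bit may differ slightly from MATLAB's dec2bin):
x = 9.2553
frac_bits = 44                       # scale factor used in the answer above
scaled = round(x * 2 ** frac_bits)   # expand by 2^44 to get an integer value
bits = format(scaled, 'b')           # convert the integer result to binary
print(bits[:-frac_bits] + '.' + bits[-frac_bits:])   # re-place the binary point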

Convert HEX 32 bit from GPS plot on Ruby

I am working with the following HEX values representing different values from a GPS/GPRS plot. All are given as 32-bit integers.
For example:
296767 is the decimal value (unsigned) reported for hex number: 3F870400
Another one:
34.96987500 is the decimal float value (signed), stored in radians at a resolution of 10^(-8), reported for hex number DA4DA303.
What is the process for transforming these hex numbers into their corresponding values in Ruby?
I've already tried unpack/pack with the directives L, H and h. I also tried applying two's complement and converting to binary and then decimal, with no success.
If you are expecting an Integer value:
input = '3F870400'
output = input.scan(/../).reverse.join.to_i( 16 )
# 296767
If you are expecting degrees:
input = 'DA4DA303'
temp = input.scan(/../).reverse.join.to_i( 16 )
temp = ( temp & 0x80000000 > 1 ? temp - 0x100000000 : temp ) # Handles negatives
output = temp * 180 / (Math::PI * 10 ** 8)
# 34.9698751282937
Explanation:
The hexadecimal string represents the bytes of an integer stored least-significant-byte first (little-endian). To store it as raw bytes you might use [296767].pack('V'), and if you had the raw bytes in the first place you would simply reverse that with binary_string.unpack('V'). However, you have a hex representation instead. There are a few different approaches you might take (including converting the hex back into bytes and unpacking it), but in the above I have chosen to manipulate the hex string into most-significant-byte-first order and use Ruby's String#to_i.
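For comparison, the raw-bytes route mentioned above looks like this in Python rather than Ruby (the helper name gps_value is mine): the '<i' format reads 4 little-endian bytes as a signed 32-bit integer, which also handles negative values automatically.
import math
import struct

def gps_value(hex_str, radian_scale=None):
    # Decode 4 little-endian bytes as a signed 32-bit integer.
    (value,) = struct.unpack('<i', bytes.fromhex(hex_str))
    if radian_scale is None:
        return value
    # Convert scaled radians to degrees.
    return value * 180 / (math.pi * radian_scale)

print(gps_value('3F870400'))            # 296767
print(gps_value('DA4DA303', 10 ** 8))   # approximately 34.9698751282937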

"interval is empty", Lua math.random isn't working for large numbers?

I don't know whether this is a bug in Lua itself or whether I'm doing something wrong. I couldn't find anything about it anywhere. I am using Lua for Windows (Lua 5.1.4):
>return math.random(0, 1000000000)
1251258
This returns a random integer between 0 and 1000000000, as expected. This seems to work for all other values. But if I add a single 0:
>return math.random(0, 10000000000)
stdin:1: bad argument #2 to 'random' (interval is empty)
Any number higher than that does the same thing.
I tried to figure out exactly how high a number has to be to cause this and found something even weirder:
>return math.random(0, 2147483647)
-75617745
If the value is 2147483647 then it gives me negative numbers. Any higher than that and it throws an error. Any lower than that and it works fine.
That's 0b1111111111111111111111111111111 in binary, 31 binary digits exactly. I am not sure what that means though.
This unexpected behavior (bug?) is due to how math.random treats the input arguments passed in Lua 5.1. From lmathlib.c:
case 2: {  /* lower and upper limits */
    int l = luaL_checkint(L, 1);
    int u = luaL_checkint(L, 2);
    luaL_argcheck(L, l<=u, 2, "interval is empty");
    lua_pushnumber(L, floor(r*(u-l+1))+l);  /* int between `l' and `u' */
    break;
}
As you may know, in C a standard int can represent values from -2,147,483,648 to 2,147,483,647. Adding +1 to 2,147,483,647, as in your use case, overflows and wraps around to -2,147,483,648. The end result is negative since you're multiplying a positive number by a negative one.
Furthermore, anything above 2,147,483,647 will fail the luaL_argcheck due to overflow wraparound.
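To see the wraparound itself, here is a tiny Python sketch (not Lua) of two's-complement behaviour in a 32-bit signed int; the helper simply keeps the low 32 bits and reinterprets them:
def wrap_int32(n):
    # Keep the low 32 bits and reinterpret them as a signed (two's complement) value.
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

# u - l + 1 with l = 0 and u = 2147483647 overflows to the most negative value:
print(wrap_int32(2147483647 + 1))   # -2147483648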
There are a few ways to address this problem:
Upgrade to Lua 5.2. That one has since fixed this issue by treating the input arguments as lua_Number instead.
Switch to LuaJIT which does not have this integer overflow issue.
Patch the Lua 5.1 source yourself with the fix and recompile.
Modify your random range so it does not overflow.
If you need a range that is larger than what the random function supports (32-bit signed integers, i.e. about 2^31 due to the sign bit, because math.random works at the C level), but smaller than the range of the Lua "number" type (based on What is the maximum value of a number in Lua?, 2^52, or maybe even 2^53), you could try generating two random numbers: scale the first to the range desired, then add the second to "fill the gap". For example, say you want a range of 0 to 2^36 - 1. The largest interval math.random itself can cover is about 2^31, so you could do:
-- 2^36 = 2^31 * 2^5, so
scale = 2^5
baseRand = scale * math.random(0, 2^31 - 1)
-- baseRand is now a multiple of 2^5 between 0 and 2^36 - 2^5; there are gaps
-- of 2^5 in the set of possible values, so fill them with a second random number:
fillGap = math.random(0, 2^5 - 1)
randNum = baseRand + fillGap
This will work as long as the desired range is less than the Lua interpreter's maximum for Lua numbers, which is a configurable compile-time parameter, but in a stock build it is 2^52, a very large number (although not as large as the largest 64-bit integer, 2^63).
Note also that the largest positive N-bit integer is 2^N - 1 (not 2^N), but the above technique can be applied to any range; you could for instance use scale = 10^6, then randNum = 10^6 * math.random(0, 10^8 - 1) + math.random(0, 10^6 - 1).
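The same two-draw composition, sketched in Python just to show why the bounds line up (every value in 0 .. 2^36 - 1 corresponds to exactly one (coarse, fine) pair, so the result stays uniform):
import random

def wide_random(base_bits=31, scale_bits=5):
    # Coarse draw scaled up, plus a fine draw that fills the gaps between multiples.
    coarse = random.randrange(2 ** base_bits)   # 0 .. 2^31 - 1
    fine = random.randrange(2 ** scale_bits)    # 0 .. 2^5  - 1
    return coarse * 2 ** scale_bits + fine      # 0 .. 2^36 - 1

print(wide_random())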

How to translate Text to Binary with Cocoa?

I'm making a simple Cocoa program that can encode text to binary and decode it back to text. I tried to write this myself and was not even close to accomplishing it. Can anyone help me? It should include two text boxes and two buttons, or whatever works best. Thanks!
There are two parts to this.
The first is to encode the characters of the string into bytes. You do this by sending the string a dataUsingEncoding: message. Which encoding you choose will determine which bytes it gives you for each character. Start with NSUTF8StringEncoding, and then experiment with other encodings, such as NSUnicodeStringEncoding, once you get it working.
The second part is to convert every bit of every byte into either a '0' character or a '1' character, so that, for example, the letter A, encoded in UTF-8 to a single byte, will be represented as 01000001.
So, converting characters to bytes, and converting bytes to characters representing bits. These two are completely separate tasks; the second part should work correctly for any stream of bytes, including any valid stream of encoded characters, any invalid stream of encoded characters, and indeed anything that isn't text at all.
The first part is easy enough:
- (NSString *) stringOfBitsFromEncoding:(NSStringEncoding)encoding
               ofString:(NSString *)inputString
{
    //Encode the characters to bytes using the given encoding. The bytes are contained in an NSData object, which we receive.
    NSData *data = [inputString dataUsingEncoding:encoding];
    //I did say these were two separate jobs.
    return [self stringOfBitsFromData:data];
}
For the second part, you'll need to loop through the bytes of the data. A C for loop will do the job there, and that will look like this:
//This is the method we're using above. I'll leave out the method signature and let you fill that in.
- …
{
    //Find out how many bytes the data object contains.
    NSUInteger length = [data length];
    //Get the pointer to those bytes. “const” here means that we promise not to change the values of any of the bytes. (The compiler may give a warning if we don't include this, since we're not allowed to change these bytes anyway.)
    const char *bytes = [data bytes];
    //We'll store the output here. There are 8 bits per byte, and we'll be putting in one character per bit, so we'll tell NSMutableString that it should make room for (the number of bytes times 8) characters.
    NSMutableString *outputString = [NSMutableString stringWithCapacity:length * 8];
    //The loop. We start by initializing i to 0, then increment it (add 1 to it) after each pass. We keep looping as long as i < length; when i >= length, the loop ends.
    for (NSUInteger i = 0; i < length; ++i) {
        char thisByte = bytes[i];
        for (NSUInteger bitNum = 0; bitNum < 8; ++bitNum) {
            //Call a function, which I'll show the definition of in a moment, that will get the value of a bit at a given index within a given character. We start from the highest bit (7 - bitNum) so that, for example, 'A' comes out as 01000001 rather than reversed.
            bool bit = getBitAtIndex(thisByte, 7 - bitNum);
            //If this bit is a 1, append a '1' character; if it is a 0, append a '0' character.
            [outputString appendFormat:@"%c", bit ? '1' : '0'];
        }
    }
    return outputString;
}
Bits 101 (or, 1100101)
Bits are literally just digits in base 2. Humans in the Western world usually write out numbers in base 10, but a number is a number no matter what base it's written in, and every character, and every byte, and indeed every bit, is just a number.
Digits—including bits—are counted up from the lowest place, according to the exponent to which the base is raised to find the magnitude of that place. We want bits, so that base is 2, so our place values are:
2^0 = 1: The ones place (the lowest bit)
2^1 = 2: The twos place (the next higher bit)
2^2 = 4: The fours place
2^3 = 8: The eights place
And so on, up to 2^7. (Note that the highest exponent is exactly one lower than the number of digits we're after; in this case, 7 vs. 8.)
If that all reminds you of reading about “the ones place”, “the tens place”, “the hundreds place”, etc. when you were a kid, it should: it's the exact same principle.
So a byte such as 65, which (in UTF-8) completely represents the character 'A', is the sum of:
2^7 × 0 = 0
+ 2^6 × 1 = 64
+ 2^5 × 0 = 0
+ 2^4 × 0 = 0
+ 2^3 × 0 = 0
+ 2^2 × 0 = 0
+ 2^1 × 0 = 0
+ 2^0 × 1 = 1
= 0 + 64 + 0 + 0 + 0 + 0 + 0 + 1
= 64 + 1
= 65
Back when you learned base 10 numbers as a kid, you probably noticed that ten is “10”, one hundred is “100”, etc. This is true in base 2 as well: just as 10^x is “1” followed by x “0”s in base 10, so is 2^x “1” followed by x “0”s in base 2. So, for example, sixty-four in base 2 is “1000000” (count the zeroes and compare to the table above).
We are going to use these exact-power-of-two numbers to test each bit in each input byte.
Finding the bit
C has a pair of “shift” operators that will insert zeroes or remove digits at the low end of a number. The former is called “shift left”, and is written as <<, and you can guess the opposite.
We want shift left. We want to shift 1 left by the number of the bit we're after. That is exactly equivalent to raising 2 (our base) to the power of that number; for example, 1 << 6 = 2^6 = “1000000”.
Testing the bit
C has an operator for bit testing, too; it's &, the bitwise AND operator. (Do not confuse this with &&, which is the logical AND operator. && is for using whole true/false values in making decisions; & is one of your tools for working with bits within values.)
Strictly speaking, & does not test single bits; it goes through the bits of both input values, and returns a new value whose bits are the bitwise AND of each input pair. So, for example,
01100101
& 00101011
----------
00100001
Each bit in the output is 1 if and only if both of the corresponding input bits were also 1.
Putting these two things together
We're going to use the shift left operator to give us a number where one bit, the nth bit, is set—i.e., 2^n—and then use the bitwise AND operator to test whether the same bit is also set in our input byte.
//This is a C function that takes a char and an int, promising not to change either one, and returns a bool.
bool getBitAtIndex(const char byte, const int bitNum)
//It could also be a method, which would look like this:
//- (bool) bitAtIndex:(const int)bitNum inByte:(const char)byte
//but you would have to change the code above. (Feel free to try it both ways.)
{
    //Find 2^bitNum, which will be a number with exactly 1 bit set. For example, when bitNum is 6, this number is “1000000”—a single 1 followed by six 0s—in binary.
    const int powerOfTwo = 1 << bitNum;
    //Test whether the same bit is also set in the input byte.
    bool bitIsSet = byte & powerOfTwo;
    return bitIsSet;
}
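If it helps to see the whole pipeline in one place, here is a compact Python sketch of the same technique (encode the text to bytes, then test each bit from highest to lowest with a shift and a bitwise AND); it is only an illustration, not Cocoa code:
def string_to_bits(text, encoding='utf-8'):
    # Part one: encode the characters to bytes.
    data = text.encode(encoding)
    out = []
    # Part two: for each byte, test bits 7 down to 0 using shift-left and AND.
    for byte in data:
        for bit_num in range(7, -1, -1):
            out.append('1' if byte & (1 << bit_num) else '0')
    return ''.join(out)

print(string_to_bits('A'))   # 01000001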
A bit of magic I should acknowledge
The bitwise AND operator does not evaluate to a single bit—it does not evaluate to only 1 or 0. Remember the above example, in which the & operator returned 33.
The bool type is a bit magic: Any time you convert any value to bool, it automatically becomes either 1 or 0. Anything that is not 0 becomes 1; anything that is 0 becomes 0.
The Objective-C BOOL type does not do this, which is why I used bool in the code above. You are free to use whichever you prefer, except that you generally should use BOOL whenever you deal with anything that expects a BOOL, particularly when overriding methods in subclasses or implementing protocols. You can convert back and forth freely, though not losslessly (since bool will change non-zero values as described above).
Oh yeah, you said something about text boxes too
When the user clicks on your button, get the stringValue of your input field, call stringOfBitsFromEncoding:ofString: using a reasonable encoding (such as UTF-8) and that string, and set the resulting string as the new stringValue of your output field.
Extra credit: Add a pop-up button with which the user can choose an encoding.
Extra extra credit: Populate the pop-up button with all of the available encodings, without hard-coding or hard-nibbing a list.
