Reading the value of a byte in Racket

In my program I have to communicate over TCP/IP.
To do so I have to marshal the data I want to send.
Sometimes I want to send an integer that fits perfectly in one byte.
In Racket an integer (0 <= number < 256) is considered to be a byte.
So, for example, I send:
(write 15 outputPort)
(flush-output outputPort)
At the other end, the receiver has to unmarshal the received data.
So I do:
(define (loop)
  (if (byte-ready? inputPort)
      (display (read-byte inputPort))
      (loop)))
I would expect it to display 15 (as all numbers between 0 and 255 fit in one byte), but instead it displays 49, which is the ASCII value of "1".
And if I loop once more I receive 53 as the second value, which is the ASCII value of "5".
So is there a way to make a byte from a value between 0 and 255 without transforming each digit of the number into an ASCII value? That approach sends N bytes, where N is the number of digits in the number.
If it isn't possible, what's the advantage of bytes in Racket?
After all, I could simply send my number as a string:
(write (number->string 15) outputPort)
(flush-output outputPort)
And then unmarshal it by reading a string and converting it back the other way:
(string->number (read-string length inputPort))
But I wanted to use bytes in order to avoid sending strings (because operations on strings are costly), so I could send one byte for a number between 0 and 255 instead of up to three bytes (when the number has three digits).

You want to use write-byte, not write:
-> (write-byte 61)
=
-> (write 61)
61

If you use write-byte instead of write, it will send the raw byte value itself instead of the serialized (printed) representation of the value.
Similarly, you can use write-bytes to write a bytestring out byte-by-byte.
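For instance, a minimal sketch of the round trip (port names as in your question):
(write-byte 15 outputPort)    ; sends the single byte 15, not the characters #\1 and #\5
(flush-output outputPort)
; ... and on the receiving side:
(read-byte inputPort)         ; => 15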

What is this unpack doing? Can someone help me understand just a few letters?

I'm reading this code and I'm a tad confused as to what is going on. This code is using Ruby's OpenSSL library.
encrypted_message = cipher.update(address_string) + cipher.final
encrypted_message
=> "G\xCB\xE10prs\x1D\xA7\xD0\xB0\xCEmX\xDC#k\xDD\x8B\x8BB\xE1#!v\xF1\xDC\x19\xDD\xD0\xCA\xC9\x8B?B\xD4\xED\xA1\x83\x10\x1F\b\xF0A\xFEMBs'\xF3\xC7\xBC\x87\x9D_n\\z\xB7\xC1\xA5\xDA\xF4s \x99\\\xFD^\x85\x89s\e"
[3] pry(Encoder)> encrypted_message.unpack('H*')
=> ["47cbe1307072731da7d0b0ce6d58dc406bdd8b8b42e1232176f1dc19ddd0cac98b3f42d4eda183101f08f041fe4d427327f3c7bc879d5f6e5c7ab7c1a5daf47320995cfd5e8589731b"]
It seems that the H directive is this:
hex string (high nibble first)
How are the escaped characters in the encrypted_message transformed into letters and numbers?
I think the heart of the issue is that I don't understand this. What is going on?
['A'].pack('H')
=> "\xA0"
Here is a good explanation of Ruby's pack and unpack methods.
According to your question:
> ['A'].pack('H')
=> "\xA0"
A byte consists of 8 bits; a nibble consists of 4 bits, so a byte holds two nibbles. The ASCII value of 'h' is 104, and 104 in hex is 68. That 68 is stored in two nibbles: the first (high) nibble holds 6 and the second (low) nibble holds 8. 'H' means high nibble first, so reading from left to right we pick 6 and then 8.
In the above case the input 'A' is not ASCII 'A' but the hex digit A. Why hex? Because the directive 'H' tells pack to treat the input as hex digits. Since 'H' is high nibble first and the input has only one nibble, the second (low) nibble is zero, so the input effectively changes from ['A'] to ['A0'].
Since the hex value A0 does not correspond to a printable ASCII character, it is left in escaped form, hence the result \xA0. The leading \x indicates a hex byte value.
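A small sketch of that nibble behaviour in plain Ruby (the values shown are what irb prints):
['A'].pack('H')       # => "\xA0"  one hex digit fills the high nibble; the low nibble is 0
['AB'].pack('H2')     # => "\xAB"  two hex digits fill both nibbles of one byte
"\xAB".unpack('H*')   # => ["ab"]  unpack goes the other way, reading the high nibble first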

Converting 2 bytes to an integer in VB 6

I need to convert 2 bytes to an integer in VB6
I currently have the byte array as:
bytArray(0) = 26
bytArray(1) = 85
the resulting number I assume should be 21786
I need these 2 turned into an integer so I can convert to a single and do additional arithmetic on it.
How do I get the integer of the 2 bytes?
If your assumed value is correct, the pair of array elements is stored in little-endian format, so the following converts the two array elements into a signed 16-bit Integer.
Dim Sum As Integer
Sum = bytArray(0) + bytArray(1) * 256
Note that if the combined value exceeds 32,767 (i.e. bytArray(1) >= 128), you'll get an overflow error, because VB6's Integer is a signed 16-bit type.
You don't have to convert to an integer first; you can go directly to a Single, using the logic shown by @MarkL:
Dim Sngl As Single
Sngl = (bytArray(1) * 256!) + bytArray(0)
Edit: As @BillHileman notes, this will give an unsigned result. Do as he suggests to make it signed.
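Putting both answers together, a rough sketch of the signed route (variable names follow the question; the intermediate Long cannot overflow):
Dim Raw As Long
Raw = CLng(bytArray(1)) * 256 + bytArray(0)   ' 85 * 256 + 26 = 21786, well within range here
If Raw > 32767 Then Raw = Raw - 65536         ' fold values of 32768 and above into the negative range
Dim Sngl As Single
Sngl = Raw                                    ' now safe for further arithmetic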

How to represent 11111111 as a byte in java

When I say that 0b11111111 is a byte in Java, it says "cannot convert int to byte". As I understand it, that's because 11111111 in binary is 255, and bytes in Java are signed and go from -128 to 127. But if a byte is just 8 bits of data, isn't 11111111 exactly 8 bits? I know 11111111 could be an integer, but in my situation it must be represented as a byte because it must be sent to a file in byte form. So how do I send a byte with the bits 11111111 to a file? (That is my actual question.) Also, when I try printing the binary value of -1, I get 11111111111111111111111111111111; why is that? I don't really understand how signed bytes work.
You need to cast the value to a byte:
byte b = (byte) 0b11111111;
The reason you need the cast is that 0b11111111 is an int literal (with a decimal value of 255) and it's outside the range of valid byte values (-128 to +127).
Java allows hex literals, but not binary. You can declare a byte with the binary value of 11111111 using this:
byte myByte = (byte) 0xFF;
You can use hex literals to store binary data in ints and longs as well.
Edit: you actually can have binary literals in Java 7 and up, my bad.
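To answer the file part of the question, here is a minimal sketch (the file name out.bin is just a placeholder): it writes the single byte whose bits are 11111111 and shows how to recover the unsigned value afterwards.
import java.io.FileOutputStream;
import java.io.IOException;

public class WriteByteDemo {
    public static void main(String[] args) throws IOException {
        byte b = (byte) 0b11111111;                // same bit pattern as 0xFF; as a signed byte its value is -1
        try (FileOutputStream out = new FileOutputStream("out.bin")) {
            out.write(b);                          // writes exactly one byte, 0xFF, to the file
        }
        System.out.println(b & 0xFF);              // prints 255: masking with 0xFF recovers the unsigned value
    }
}
The masking step also explains the 32 ones you saw: printing a byte as an int sign-extends it, so -1 becomes 0xFFFFFFFF.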

Convert HEX 32 bit from GPS plot on Ruby

I am working with the following hex values representing different values from a GPS/GPRS plot. All are given as 32-bit integers.
For example:
296767 is the decimal value (unsigned) reported for hex number: 3F870400
Another one:
34.96987500 is the decimal float value (signed), given at a radian resolution of 10^(-8), reported for hex number: DA4DA303.
What is the process for transforming these hex numbers into their corresponding values in Ruby?
I've already tried pack/unpack with the directives L, H and h. I also tried applying two's complement and converting to binary and then decimal, with no success.
If you are expecting an Integer value:
input = '3F870400'
output = input.scan(/../).reverse.join.to_i( 16 )
# 296767
If you are expecting degrees:
input = 'DA4DA303'
temp = input.scan(/../).reverse.join.to_i( 16 )
temp = ( temp & 0x80000000 > 1 ? temp - 0x100000000 : temp ) # Handles negatives
output = temp * 180 / (Math::PI * 10 ** 8)
# 34.9698751282937
Explanation:
The hexadecimal string represents the bytes of an Integer stored least-significant-byte first (little-endian). To store such a value as raw bytes you might use [296767].pack('V'), and if you had the raw bytes in the first place you would simply reverse that with binary_string.unpack('V'). However, you have a hex representation instead. There are a few different approaches you might take (including putting the hex back into bytes and unpacking it, as sketched below), but in the above I have chosen to manipulate the hex string into the most-significant-byte-first form and use Ruby's String#to_i with a base of 16.
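For reference, a sketch of that raw-bytes route, using the values from the question: pack('H*') turns the hex text back into its four bytes, and the unpack directives then read those bytes as 32-bit little-endian values.
['3F870400'].pack('H*').unpack('V').first    # => 296767    ('V' = unsigned 32-bit little-endian)
['DA4DA303'].pack('H*').unpack('l<').first   # => 61033946  ('l<' = signed, so negatives come out right)
61033946 * 180 / (Math::PI * 10 ** 8)        # => 34.9698751282937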

How to translate Text to Binary with Cocoa?

I'm making a simple Cocoa program that can encode text to binary and decode it back to text. I tried to write this myself and was not even close to accomplishing it. Can anyone help me? It should use two text boxes and two buttons, or whatever works best. Thanks!
There are two parts to this.
The first is to encode the characters of the string into bytes. You do this by sending the string a dataUsingEncoding: message. Which encoding you choose will determine which bytes it gives you for each character. Start with NSUTF8StringEncoding, and then experiment with other encodings, such as NSUnicodeStringEncoding, once you get it working.
The second part is to convert every bit of every byte into either a '0' character or a '1' character, so that, for example, the letter A, encoded in UTF-8 to a single byte, will be represented as 01000001.
So, converting characters to bytes, and converting bytes to characters representing bits. These two are completely separate tasks; the second part should work correctly for any stream of bytes, including any valid stream of encoded characters, any invalid stream of encoded characters, and indeed anything that isn't text at all.
The first part is easy enough:
- (NSString *) stringOfBitsFromEncoding:(NSStringEncoding)encoding
ofString:(NSString *)inputString
{
    //Encode the characters to bytes using the requested encoding. The bytes are contained in an NSData object, which we receive.
    NSData *data = [inputString dataUsingEncoding:encoding];
    //I did say these were two separate jobs.
    return [self stringOfBitsFromData:data];
}
For the second part, you'll need to loop through the bytes of the data. A C for loop will do the job there, and that will look like this:
//This is the method we're using above. I'll leave out the method signature and let you fill that in.
- …
{
    //Find out how many bytes the data object contains.
    NSUInteger length = [data length];
    //Get the pointer to those bytes. “const” here means that we promise not to change the values of any of the bytes. (The compiler may give a warning if we don't include this, since we're not allowed to change these bytes anyway.)
    const char *bytes = [data bytes];
    //We'll store the output here. There are 8 bits per byte, and we'll be putting in one character per bit, so we'll tell NSMutableString that it should make room for (the number of bytes times 8) characters.
    NSMutableString *outputString = [NSMutableString stringWithCapacity:length * 8];
    //The loop. We start by initializing i to 0, then increment it (add 1 to it) after each pass. We keep looping as long as i < length; when i >= length, the loop ends.
    for (NSUInteger i = 0; i < length; ++i) {
        char thisByte = bytes[i];
        for (NSUInteger bitNum = 0; bitNum < 8; ++bitNum) {
            //Call a function, which I'll show the definition of in a moment, that will get the value of a bit at a given index within a given character.
            bool bit = getBitAtIndex(thisByte, bitNum);
            //If this bit is a 1, append a '1' character; if it is a 0, append a '0' character.
            [outputString appendFormat:@"%c", bit ? '1' : '0'];
        }
    }
    return outputString;
}
Bits 101 (or, 1100101)
Bits are literally just digits in base 2. Humans in the Western world usually write out numbers in base 10, but a number is a number no matter what base it's written in, and every character, and every byte, and indeed every bit, is just a number.
Digits—including bits—are counted up from the lowest place, according to the exponent to which the base is raised to find the magnitude of that place. We want bits, so that base is 2, so our place values are:
2^0 = 1: The ones place (the lowest bit)
2^1 = 2: The twos place (the next higher bit)
2^2 = 4: The fours place
2^3 = 8: The eights place
And so on, up to 2^7. (Note that the highest exponent is exactly one lower than the number of digits we're after; in this case, 7 vs. 8.)
If that all reminds you of reading about “the ones place”, “the tens place”, “the hundreds place”, etc. when you were a kid, it should: it's the exact same principle.
So a byte such as 65, which (in UTF-8) completely represents the character 'A', is the sum of:
2^7 × 0 = 0
+ 2^6 × 1 = 64
+ 2^5 × 0 = 0
+ 2^4 × 0 = 0
+ 2^3 × 0 = 0
+ 2^2 × 0 = 0
+ 2^1 × 0 = 0
+ 2^0 × 1 = 1
= 0 + 64 + 0 + 0 + 0 + 0 + 0 + 1
= 64 + 1
= 65
Back when you learned base 10 numbers as a kid, you probably noticed that ten is “10”, one hundred is “100”, etc. This is true in base 2 as well: just as 10^x is “1” followed by x “0”s in base 10, so 2^x is “1” followed by x “0”s in base 2. So, for example, sixty-four in base 2 is “1000000” (count the zeroes and compare to the table above).
We are going to use these exact-power-of-two numbers to test each bit in each input byte.
Finding the bit
C has a pair of “shift” operators that will insert zeroes or remove digits at the low end of a number. The former is called “shift left”, and is written as <<, and you can guess the opposite.
We want shift left. We want to shift 1 left by the number of the bit we're after. That is exactly equivalent to raising 2 (our base) to the power of that number; for example, 1 << 6 = 2^6 = “1000000”.
Testing the bit
C has an operator for bit testing, too; it's &, the bitwise AND operator. (Do not confuse this with &&, which is the logical AND operator. && is for using whole true/false values in making decisions; & is one of your tools for working with bits within values.)
Strictly speaking, & does not test single bits; it goes through the bits of both input values, and returns a new value whose bits are the bitwise AND of each input pair. So, for example,
  01100101
& 00101011
----------
  00100001
Each bit in the output is 1 if and only if both of the corresponding input bits were also 1.
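If you want to see that in action, here is a tiny check in plain C (the same expressions work unchanged in Objective-C):
#include <stdio.h>

int main(void) {
    int result = 0x65 & 0x2B;                  // 0x65 is 01100101, 0x2B is 00101011
    printf("%d (0x%02X)\n", result, result);   // prints "33 (0x21)", i.e. 00100001
    return 0;
}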
Putting these two things together
We're going to use the shift left operator to give us a number where one bit, the nth bit, is set—i.e., 2^n—and then use the bitwise AND operator to test whether the same bit is also set in our input byte.
//This is a C function that takes a char and an int, promising not to change either one, and returns a bool.
bool getBitAtIndex(const char byte, const int bitNum)
//It could also be a method, which would look like this:
//- (bool) bitAtIndex:(const int)bitNum inByte:(const char)byte
//but you would have to change the code above. (Feel free to try it both ways.)
{
    //Find 2^bitNum, which will be a number with exactly 1 bit set. For example, when bitNum is 6, this number is “1000000”—a single 1 followed by six 0s—in binary.
    const int powerOfTwo = 1 << bitNum;
    //Test whether the same bit is also set in the input byte.
    bool bitIsSet = byte & powerOfTwo;
    return bitIsSet;
}
A bit of magic I should acknowledge
The bitwise AND operator does not evaluate to a single bit—it does not evaluate to only 1 or 0. Remember the above example, in which the & operator returned 33.
The bool type is a bit magic: Any time you convert any value to bool, it automatically becomes either 1 or 0. Anything that is not 0 becomes 1; anything that is 0 becomes 0.
The Objective-C BOOL type does not do this, which is why I used bool in the code above. You are free to use whichever you prefer, except that you generally should use BOOL whenever you deal with anything that expects a BOOL, particularly when overriding methods in subclasses or implementing protocols. You can convert back and forth freely, though not losslessly (since bool will change non-zero values as described above).
Oh yeah, you said something about text boxes too
When the user clicks on your button, get the stringValue of your input field, call stringOfBitsFromEncoding:ofString: using a reasonable encoding (such as UTF-8) and that string, and set the resulting string as the new stringValue of your output field.
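A minimal sketch of that wiring (the outlet names inputField and outputField are my own placeholders, not part of the original code):
- (IBAction) encodeText:(id)sender
{
    //Read the user's text, convert it to a string of '0' and '1' characters, and show the result.
    NSString *bits = [self stringOfBitsFromEncoding:NSUTF8StringEncoding
                                           ofString:[inputField stringValue]];
    [outputField setStringValue:bits];
}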
Extra credit: Add a pop-up button with which the user can choose an encoding.
Extra extra credit: Populate the pop-up button with all of the available encodings, without hard-coding or hard-nibbing a list.
