How do I express a string variable as a hex byte to send, like this?
a = '00'
write("\x#{a}")   # hoping this sends the byte 0x00
I'm trying to include a received string variable in a command string of raw data that gets passed to the COM port, like:
cmd="\x45\x#{a}\x01"
Send(cmd)
This is in Ruby.
Thanks
The String#to_i method takes a base argument, which defaults to ten, but you can pass in sixteen instead. That gets you the number you want as a number rather than a string. From there, you can use the Integer#chr method to get the value you want: a one-character string containing the character with the binary value represented by the original string.
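A minimal sketch of that approach (Send is assumed to be the COM-port write routine from the question):
a = '00'
byte = a.to_i(16).chr        # "00" -> 0 -> "\x00"
cmd = "\x45" + byte + "\x01" # three raw bytes: 0x45 0x00 0x01
Send(cmd)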
I'm trying to substitute a value into a JSON request body using a JMeter variable and ${value_here} notation. The value is a base64 encoded image, which includes "+" characters.
When I call CompoundVariable.execute, the request body contains the value in the JMeter variable, but all "+" characters have been replaced with empty strings resulting in a malformed image.
Is there some workaround for this, or do I need to work around it in code? Simplified example below, since I'm sure none of you want the wall of text that would be my encoded image.
String stored in variable (truncated for brevity):
/9j/4AAQSkZJRgABAQEASABIAAD/2wBDAAYEBQYFBAYGBQYHBwYIChAKCgkJChQODwwQFxQYGBcUFhYaHSUfGhsjHBYWICwgIyYnKSopGR8tMC0oMCUoKSj/2wBDAQcHBwoIChMKChMoGhYaKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCj/wAARCADDASsDASIAAhEBAxEB/8QAHQAAAQQDAQEAAAAAAAAAAAAABQAEBgcBAgMICf/EAEkQAAEDAwMBBAYECwYGAgMBAAECAwQABREGEiExBxNBURQVImFxgTJUkZIIIzM0QnSTobGy4SREYnPB0RZSY4KD8BhyF1NkhP/EABkBAAMBAQEAAAAAAAAAAAAAAAABAgMEBf/EACgRAAICAgICAQMEAwAAAAAAAAABAhESMQMhQVETInGhMkJhsVKR8P/aAAwDAQACEQMRAD8A9U1o44hsZcWlI8ycU1uk1NvgvSXOdieE+Z8B86BItwlOF+8K9KeVyGl/Qb9yU/60CboPi4Q+f7Uz98Vn1hD+tM/fFBRbIRORCYx4juxxSTbYPUwo3zQKdCyDXrCH9aZ++KXp8P60z98UG9WQfqUbn/piserYPP8AYY5A8e7FFBkGvT4f1pn74penw/rTP3xQY22ABlUOMP8AxjmserYOfzKMf+wUUGQa9YQ/rTP3xS9YQ/rTP3xQU26Dn80jEe5sVk22DwBCj8/4BRQZBn1hD+tM/fFL1hD+tM/
Variable in templated request is ${Document_Image_Front} though I'm sure that is irrelevant.
You can use the __urlencode function to encode the value so it survives substitution (among other things, it converts spaces back to +):
${__urlencode(Word "school" is "école" in french)}
returns Word+%22school%22+is+%22%C3%A9cole%22+in+french.
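If that fits your case, the templated request could reference the variable through the function; a hypothetical sketch using the question's variable name:
${__urlencode(${Document_Image_Front})}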
Please tell me how to convert a variable to a variable of type string in CMake.
I have a variable that contains both digits and letters, say of the form "Ax3.0.1". I don't know exactly what type CMake sees this as, but I want to convert it to a string so I can iterate through it. How can I do that? Thank you.
Internally, every variable in CMake is a string. However, unlike in many other programming languages, a CMake string is not an array of characters, so one cannot directly iterate over the characters of a string with foreach.
The closest thing is iterating over character indices and extracting each character by index:
set(var "Ax3.0.1")
# Compute length of the string
string(LENGTH "${var}" var_length)
# foreach(... RANGE <n>) iterates from 0 to n inclusive, so we need the last index, not the length.
math(EXPR last_char_index "${var_length} - 1")
message("Characters in string '${var}':")
foreach(char_index RANGE ${last_char_index}) # Iterate over indices
# Create a variable 'char' containing the character at this index.
string(SUBSTRING "${var}" "${char_index}" "1" char)
message("${char}")
endforeach()
As you can see, this looks quite ugly. In practice, regular expressions are usually used to extract specific characters from a string.
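For instance, a shorter sketch (an alternative, not from the original answer) that uses string(REGEX MATCHALL) to split the value into a list of single characters:
set(var "Ax3.0.1")
# "." matches any single character, so this yields the list A;x;3;.;0;.;1
string(REGEX MATCHALL "." chars "${var}")
foreach(char IN LISTS chars)
message("${char}")
endforeach()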
I have a Base64 encoded binary of a packet capture.
I want to extract a substring at a certain position of the capture.
I'm doing this in Ruby:
require 'base64'

payload_decoded = Base64.decode64(payload)
# Bytes 114..115 hold the length as a 16-bit unsigned integer (native byte order).
file_size = payload_decoded[114..115].unpack('S*')[0]
# The path starts at offset 124 and runs for file_size bytes.
file_fullpath = payload_decoded[124, file_size]
p file_fullpath
This works to some extent: file_size gets an integer with the length I want to extract, and I can then extract the correct slice of the byte array. If I just test this in my Mac's terminal, it displays the string perfectly.
But when this code runs in the application itself, on CentOS 7, all characters are displayed suffixed with a 00 byte (e.g. T displays as T\x00). I guess I could just strip that out of the string, but I'd like to avoid that. What would be the most correct way to handle this?
TIA
This seems to get the desired result:
file_fullpath = file_fullpath.force_encoding('UTF-16LE').encode!('UTF-8')
It seems I first need to "convince" Ruby that the string really is UTF-16LE, and only then convert it to UTF-8.
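That matches the symptom: a 00 byte after every ASCII character is simply UTF-16LE. A minimal sketch with made-up bytes standing in for the capture data:
# "Test" encoded as UTF-16LE: each ASCII character is followed by a 00 byte.
raw = "T\x00e\x00s\x00t\x00".b  # .b tags the bytes as BINARY/ASCII-8BIT
utf8 = raw.force_encoding('UTF-16LE').encode('UTF-8')
p utf8  # => "Test"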
What is the best way to turn the string "FA" into /xFA/ ?
To be clear, I don't want to turn "FA" into 7065 or "FA".to_i(16).
In Java the equivalent would be this:
byte b = (byte) Integer.decode("0xFA");
So you're using / markers, but you aren't actually asking about regexps, right?
I think this does what you want:
['FA'].pack('H*')
# => "\xFA"
There is no actual byte type in the Ruby stdlib (I don't think? unless there's one I don't know about?), just Strings, which can be any number of bytes long (in this case, one). A single "byte" is typically represented as a 1-byte-long String in Ruby. #bytesize on a String always returns its length in bytes.
"\xFA".bytesize
# => 1
Your example happens not to be a valid UTF-8 character by itself. Depending on exactly what you're doing and how your environment is set up, your string might end up being tagged with a UTF-8 encoding by default. If you are dealing with binary data and want to make sure the string is tagged as such, you might want to use #force_encoding on it to be sure. It should NOT be necessary when using #pack; the result should be tagged as ASCII-8BIT already (which has the synonym BINARY; it's basically the "null encoding" used in Ruby for binary data).
['FA'].pack('H*').encoding
# => #<Encoding:ASCII-8BIT>
But if you're dealing with String objects holding what's meant to be binary data, not necessarily valid character data in any encoding, it is useful to know that you may sometimes need to do str.force_encoding("ASCII-8BIT") (or force_encoding("BINARY"), same thing) to make sure your string isn't tagged as a particular text encoding. Otherwise Ruby will complain when you try certain operations on it if it includes invalid bytes for that encoding, or in other cases possibly do the wrong thing.
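A small sketch of that pitfall (the byte values are arbitrary; the commented-out line is what would blow up):
s = [0xFA].pack("C")              # "\xFA", tagged ASCII-8BIT by #pack
t = s.dup.force_encoding("UTF-8") # pretend it had been tagged as text
t.valid_encoding?                 # => false -- 0xFA alone is not valid UTF-8
# t =~ /a/                        # raises ArgumentError: invalid byte sequence in UTF-8
t.force_encoding("ASCII-8BIT")    # retag as binary; the bytes are unchanged
t =~ /a/                          # => nil -- no match, but no exception either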
Actually for a regexp
Okay, you actually do want a regexp. So we have to take our string we created, and embed it in a regexp. Here's one way:
representation = "FA"
str = [representation].pack("H*")
# => "\xFA"
data = "\x01\xFA\xC2".force_encoding("BINARY")
regexp = Regexp.new(str)
data =~ regexp
# => 1 (matched on byte 1; the first byte of data is byte 0)
You can see why I needed the force_encoding there on the data string: otherwise Ruby would default to treating it as a UTF-8 string (depending on Ruby version and environment setup) and complain that those bytes aren't valid UTF-8.
In some cases you might need to explicitly set the regexp to handle binary data too, the docs say you can pass a second argument 'n' to Regexp.new to do that, but I've never done it.
I'm trying to store the literal ASCII value of hex FFFF, which in decimal is 65535 and is ÿ when written out in VB6. I want to store this value in a buffer which is defined by:
Type HBuff
txt As String * 16
End Type
Global WriteBuffer As HBuff
in the legacy code I inherited.
I want to do something like WriteBuffer.txt = Asc(Hex$(-1)), but VB6 stores it as 70.
I need to store this value, ÿ, in the string, even though it is not printable.
How can I do this?
I'm not sure what your problem is.
If you want to store character number 255 in a string, then do so:
WriteBuffer.txt = Chr$(255)
Be warned though that the result depends on the current locale.
ChrW$(255) does not, but it may not yield the character you want.
For reference, the code you used returns the ASCII code of the first character of the textual hex representation of the number -1. Hex$(-1) is "FFFF" when -1 is typed as Integer (which it is by default), so you get the ASCII code of the letter F, which is 70.
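Put together as a short VB6 sketch (illustrative only; WriteBuffer is the buffer from the question):
' What the original code computes:
Debug.Print Asc(Hex$(-1))    ' Hex$(-1) = "FFFF", so Asc returns 70, the code of "F"
' Storing character 255 instead:
WriteBuffer.txt = Chr$(255)  ' ANSI character 255 -- depends on the current locale
WriteBuffer.txt = ChrW$(255) ' Unicode code point 255 -- locale-independent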