Convert a string (representing UTF-8 hex) to string - ruby

I have a string in UTF-8 hex like this:
s = "0059006F007500720020006300720065006400690074002000680061007300200067006F006E0065002000620065006C006F00770020003500200064006F006C006C006100720073002E00200049006600200079006F00750020006800610076006500200061006E0020004100640064002D004F006E0020006F007200200042006F006E0075007300200079006F007500720020007200650073006F00750072006300650073002000770069006C006C00200077006F0072006B00200075006E00740069006C0020006500780068006100750073007400650064002E00200054006F00200074006F00700020007500700020006E006F007700200076006900730069007400200076006F006400610066006F006E0065002E0063006F002E006E007A002F0074006F007000750070"
I want to convert this into an actual UTF-8 string. It should read:
Your credit has gone below 5 dollars. If you have an Add-On or Bonus your resources will work until exhausted. To top up now visit vodafone.co.nz/topup
This works:
s.scan(/.{4}/).map { |a| [a.hex].pack('U') }.join
but I'm wondering if there's a better way to do this: whether I should be using Encoding::Converter.

The extra 00s suggest that the string is actually the hex representation of a UTF-16 string rather than UTF-8. Assuming that is the case, the steps to get a UTF-8 string are: first, convert the string into the actual bytes the hex digits represent (Array#pack can be used for this); second, mark those bytes as being in the appropriate encoding with force_encoding (which looks like UTF-16BE here); and finally, use encode to convert them to UTF-8:
[s].pack('H*').force_encoding('utf-16be').encode('utf-8')
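A step-by-step sketch of the same pipeline, showing the intermediate encodings (assuming s is the hex string from the question):
bytes = [s].pack('H*')                      # hex digits -> raw bytes, tagged ASCII-8BIT
bytes.encoding                              # => #<Encoding:ASCII-8BIT>
utf16 = bytes.force_encoding('utf-16be')    # relabel only, the bytes are unchanged
utf16.encode('utf-8')                       # transcode the UTF-16BE bytes to UTF-8
# => "Your credit has gone below 5 dollars. ..."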

I think there are extra null characters all along the string (the result is still valid UTF-8, just wasteful), but you can try:
[s].pack('H*').force_encoding('utf-8')
although it still seems to read "Your credit has gone below 5 dollars"...
The string prints fine with puts, but when it is dumped the interleaved null characters mean I can't read all of it on the terminal.
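A quick way to see the embedded nulls this leaves behind (again with the s from the question):
utf8 = [s].pack('H*').force_encoding('utf-8')
puts utf8        # the text displays; the null characters are invisible on most terminals
p utf8[0, 8]     # => "\u0000Y\u0000o\u0000u\u0000r"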

If you are intending to use this on other oddly encoded strings, you could strip out the leading padding bytes:
[s.gsub(/..(..)/, '\1')].pack('H*')
Or keep them and convert each four-digit group directly:
s.gsub(/..../) { |p| p.hex.chr }
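Both of these give back the plain text for this particular message, just tagged as binary rather than UTF-8 (a quick check, using the s from the question):
ascii = [s.gsub(/..(..)/, '\1')].pack('H*')
ascii[0, 11]      # => "Your credit"
ascii.encoding    # => #<Encoding:ASCII-8BIT>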
If you want to use Encoding::Converter:
ec = Encoding::Converter.new('UTF-16BE', 'UTF-8')  # save the converter for reuse
ec.convert([s].pack('H*'))
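As a sanity check (again with the s from the question), the converter gives the same result as the force_encoding/encode chain above:
ec = Encoding::Converter.new('UTF-16BE', 'UTF-8')
ec.convert([s].pack('H*')) == [s].pack('H*').force_encoding('utf-16be').encode('utf-8')
# => true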

Related

Print a UTF-8-encoded smiley

I am writing a ReactionRoles Discord bot in Python (discord.py).
The bot saves the reaction-role emojis UTF-8-encoded.
The encoded value has type bytes, but it is converted to str for saving.
The saved string looks something like "b'\\xf0\\x9f\\x98\\x82'".
I am using EMOJI_ENCODED = str(EMOJI.encode('utf8')) to encode it, but bytes(EMOJI_ENCODED).decode('utf8') isn't working.
Do you know how to decode it, or how to save it in a better way?
The output of str() is a Unicode string, and EMOJI is already a Unicode string, so str(EMOJI.encode('utf8')) just makes a mangled Unicode string (the textual repr of the bytes object).
The purpose of encoding is to make a byte string that can be saved to a file/database/socket. Simply do b = EMOJI.encode() (the default is UTF-8) to get a byte string, and s = b.decode() to get the Unicode string back.

How to properly convert byte array of UTF-16LE chars to utf-8 string in Ruby

I have a Base64 encoded binary of a packet capture.
I want to extract a substring at a certain position of the capture.
I'm doing this in Ruby:
payload_decoded = Base64.decode64(payload)
file_size = payload_decoded[114..115].unpack('S*')[0]
file_fullpath = payload_decoded[124, file_size]
p file_fullpath
This works to some extent. file_size gets an integer with the length I want to extract, and I can then extract the correct slice of the byte array. If I just test this in my Mac's terminal, it displays the string perfectly.
But when this code runs in the application itself, on CentOS 7, every character is displayed suffixed with a 00 byte (e.g. T displays as T\x00). I guess I could just strip those out of the string, but I would like to avoid that. What would be the most correct way to handle this?
TIA
This seems to get the desired result:
file_fullpath = file_fullpath.force_encoding('UTF-16LE').encode!('UTF-8')
Seems like I first need to "convince" Ruby that the string is UTF-16LE, and only then convert to UTF-8.
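Putting the question's extraction code together with that fix, a sketch of the whole flow (payload is assumed to hold the Base64-encoded capture, and the offsets 114/124 and the 16-bit length field are taken from the question):
require 'base64'

payload_decoded = Base64.decode64(payload)                    # raw bytes, tagged ASCII-8BIT
file_size       = payload_decoded[114..115].unpack('S*')[0]   # 16-bit length of the path, in bytes
utf16_bytes     = payload_decoded[124, file_size]             # the slice is still tagged ASCII-8BIT
file_fullpath   = utf16_bytes.force_encoding('UTF-16LE').encode('UTF-8')
p file_fullpath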

In Ruby, how do I turn a text representation of a byte into a byte?

What is the best way to turn the string "FA" into /xFA/ ?
To be clear, I don't want to turn "FA" into 7065 or "FA".to_i(16).
In Java the equivalent would be this:
byte b = (byte) Integer.decode("0xFA");
So you're using / markers, but you aren't actually asking about regexps, right?
I think this does what you want:
['FA'].pack('H*')
# => "\xFA"
There is no actual byte type in the Ruby stdlib (as far as I know), just Strings, which can be any number of bytes long (in this case, one). A single "byte" is typically represented as a 1-byte long String in Ruby. #bytesize on a String will always return its length in bytes.
"\xFA".bytesize
# => 1
Your example happens not to be a valid UTF-8 character by itself. Depending on exactly what you're doing and how your environment is set up, your string might end up being tagged with a UTF-8 encoding by default. If you are dealing with binary data and want to make sure the string is tagged as such, you might want to call #force_encoding on it to be sure. It should NOT be necessary when using #pack: the result should already be tagged as ASCII-8BIT (which has the synonym BINARY; it's basically the "null encoding" used in Ruby for binary data).
['FA'].pack('H*').encoding
# => #<Encoding:ASCII-8BIT>
But if you're dealing with String objects holding what's meant to be binary data, not necessarily valid character data in any encoding, it is useful to know that you may sometimes need to do str.force_encoding("ASCII-8BIT") (or force_encoding("BINARY"), same thing) to make sure your string isn't tagged as a particular text encoding. Otherwise Ruby will complain when you try certain operations on it if it contains invalid bytes for that encoding, or in other cases may silently do the wrong thing.
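A small illustration of the kind of complaint this avoids (the single byte 0xFA is not valid UTF-8 on its own):
str = "\xFA".force_encoding("UTF-8")
str.valid_encoding?                            # => false
# str =~ /./                                   # raises ArgumentError: invalid byte sequence in UTF-8
str.force_encoding("BINARY").valid_encoding?   # => true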
Actually for a regexp
Okay, you actually do want a regexp. So we have to take the string we created and embed it in a regexp. Here's one way:
representation = "FA"
str = [representation].pack("H*")
# => "\xFA"
data = "\x01\xFA\xC2".force_encoding("BINARY")
regexp = Regexp.new(str)
data =~ regexp
# => 1 (matched on byte 1; the first byte of data is byte 0)
You see how I needed the force_encoding there on the data string, otherwise ruby would default to it being a UTF-8 string (depending on ruby version and environment setup), and complain that those bytes aren't valid UTF-8.
In some cases you might need to explicitly set the regexp to handle binary data too; the docs say you can pass a second argument 'n' to Regexp.new to do that, but I've never done it.

Escaping special characters in ruby

This is a common question, but I just can't seem to find the answer without resorting to unreliable regular expressions.
Basically, if there is a \302\240 or similar combination in a string, I want to replace it with the real character.
I am using PLruby for this, hence the warn.
obj = {"a"=>"some string with special chars"}
warn obj.inspect
NOTICE: {"Outputs"=>["a\302\240b"]} <- chars are escaped
warn "\302\240"
NOTICE: <-- there is a non-breaking space here, like I want
warn "#{json.inspect}"
NOTICE: {"Outputs"=>["a\302\240"b]} <- chars are escaped
So these can be decoded when I use a string literal, but with the "#{x}" format the \xxx placeholders are never decoded into characters.
How would I assign the same string as the middle command yields?
Ruby Version: 1.8.5
You mentioned that you're using PL/ruby. That suggests that your strings are actually bytea values (the PostgreSQL version of a BLOB) using the old "escape" format. The escape format encodes non-ASCII values in octal with a leading \, so a bit of gsub and Array#pack should sort you out (note that octal digits only run from 0 to 7):
bytes = s.gsub(/\\([0-7]{3})/) { [ $1.to_i(8) ].pack('C') }
That will expand the escape values in s to raw bytes and leave them in bytes. You're still dealing with binary data though so just trying to display it on a console won't necessarily do anything useful. If you know that you're dealing with comprehensible strings then you'll have to figure out what encoding they're in and use String methods to sort out the encoding.
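For instance, here is the expansion applied to the non-breaking-space example from the question (\302\240 is the octal escape for the UTF-8 bytes C2 A0, i.e. U+00A0); note that this sketch assumes a Ruby with String#force_encoding (1.9+), whereas the question mentions 1.8.5:
s = "a\\302\\240b"                 # literal backslash-octal escapes, as stored
bytes = s.gsub(/\\([0-7]{3})/) { [ $1.to_i(8) ].pack('C') }
bytes.bytes                        # => [97, 194, 160, 98]
bytes.force_encoding('UTF-8')      # "a b" with a real non-breaking space in the middle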
Perhaps you just want to use .to_s instead?

Converting integers to UTF-8 (Korean)

I'm running Ruby 1.9.2 and trying to fix some broken UTF-8 text input where the text is literally "\\354\\203\\201\\355\\221\\234\\353\\252\\205", and change it into the correct Korean "상표명".
However, after searching for a while and trying a few methods, I still get gibberish.
It's confusing, as the escaped-characters example in the snippet below works fine:
# encoding: utf-8
puts "상표명" # Target string
# Output: "상표명"
puts "\354\203\201\355\221\234\353\252\205" # Works with escaped characters like this
# Output: "상표명"
# Real input is a string
input = "\\354\\203\\201\\355\\221\\234\\353\\252\\205"
# After some manipulation got it into an array of numbers
puts [354, 203, 201, 355, 221, 234, 353, 252, 205].pack('U*').force_encoding('UTF-8')
# Output: ŢËÉţÝêšüÍ (gibberish)
I'm sure this must have been answered somewhere but I haven't managed to find it.
This is what you want to do to get your UTF-8 Korean text:
s = "\\354\\203\\201\\355\\221\\234\\353\\252\\205"
k = s.scan(/\d+/).map { |n| n.to_i(8) }.pack("C*").force_encoding('utf-8')
# "상표명"
And this is how it works:
The input string is nice and regular, so we can use scan to pull out the individual numbers.
Then a map with to_i(8) to convert the octal values (as noted by Henning Makholm) to integers.
Now we need to convert our list of integers to bytes so we pack('C*') to get a byte string. This string will have the BINARY encoding (AKA ASCII-8BIT).
We happen to know that the bytes really do represent UTF-8 so we can force the issue with force_encoding('utf-8').
The main thing that you were missing was your pack format; 'U' means "UTF-8 character" and would expect an array of Unicode codepoints each represented by a single integer, 'C' expects an array of bytes and that's what we had.
The \354 and so forth are octal escapes, not decimal, so you cannot just write them as 354 to get the integer values of the bytes.
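To see the intermediate steps in isolation (using the s from above):
s = "\\354\\203\\201\\355\\221\\234\\353\\252\\205"
s.scan(/\d+/)
# => ["354", "203", "201", "355", "221", "234", "353", "252", "205"]
s.scan(/\d+/).map { |n| n.to_i(8) }
# => [236, 131, 129, 237, 145, 156, 235, 170, 133]
s.scan(/\d+/).map { |n| n.to_i(8) }.pack("C*").force_encoding('utf-8')
# => "상표명"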
