Comparing same string fails in Ruby with same UTF-8 encoding

One of the strings came from scraping a server, so they might differ, but they look the same:
irb(main):013:0> "‎italy" == "italy"
=> false
I checked encoding and it's the same
irb(main):014:0> "‎italy".encoding === "italy".encoding
=> true
irb(main):016:0> "‎italy".encoding
=> #<Encoding:UTF-8>
Why are they different (according to ==)?

The first string contains a LEFT-TO-RIGHT MARK (U+200E) at the beginning. This isn't visible when you print the string, but it does mean the strings are different. Compare the result of calling bytes or chars on the two strings.
You will need to strip it off before processing the string.
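You can see the mark by inspecting the bytes or chars, and remove it with delete (the "\u200E" escape below is just the LEFT-TO-RIGHT MARK written explicitly so it stays visible in source):
scraped = "\u200Eitaly"               # what came back from scraping
scraped.bytes                         # => [226, 128, 142, 105, 116, 97, 108, 121]
"italy".bytes                         # => [105, 116, 97, 108, 121]
scraped.delete("\u200E") == "italy"   # => true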

Why is a UTF-8 string not equal to the equivalent ASCII-8BIT string in Ruby 2.0?

I am using Ruby 2.3:
I have the following string: "\xFF\xFE"
I do a File.binread() on a file containing it, so the encoding of that string is ASCII-8BIT. However, in my code, I check whether this string was indeed read by comparing it to the literal string "\xFF\xFE" (which has the UTF-8 encoding, as all Ruby string literals do by default).
However, the comparison returns false, even though both strings contain the same bytes - it just happens that one has the encoding ASCII-8BIT and the other UTF-8.
I have two questions: (1) why does it return false? and (2) what is the best way to go about achieving what I want? I just want to check whether the string I read matches "\xFF\xFE".
(1) why does it return false?
When comparing strings, they either have to be in the same encoding, or their characters must all be encodable in US-ASCII.
Comparison works as expected if the strings only contain byte values 0 to 127 (0b0xxxxxxx):
a = 'E'.encode('ISO8859-1') #=> "E"
b = 'E'.encode('ISO8859-15') #=> "E"
a.bytes #=> [69]
b.bytes #=> [69]
a == b #=> true
And it fails if either string contains any byte value from 128 to 255 (0b1xxxxxxx):
a = 'É'.encode('ISO8859-1') #=> "\xC9"
b = 'É'.encode('ISO8859-15') #=> "\xC9"
a.bytes #=> [201]
b.bytes #=> [201]
a == b #=> false
Your string can't be represented in US-ASCII, because both of its bytes are outside the US-ASCII range:
"\xFF\xFE".bytes #=> [255, 254]
Attempting to convert it doesn't produce any meaningful result:
"\xFF\xFE".encode('US-ASCII', 'ASCII-8BIT', :undef => :replace)
#=> "??"
The string will therefore return false when being compared to a string in another encoding, regardless of its content.
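A minimal demonstration of the rule, using the strings from the question (String#b returns a copy in ASCII-8BIT):
binary = "\xFF\xFE".b          # ASCII-8BIT, like the result of File.binread
utf8   = "\xFF\xFE"            # UTF-8, the default for literals
binary.bytes == utf8.bytes     # => true  (same bytes)
binary == utf8                 # => false (incompatible encodings, non-ASCII bytes)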
(2) what is the best way to go about achieving what i want?
You could compare your string to a string with the same encoding. binread returns a string in ASCII-8BIT encoding, so you could use String#b to create a compatible one:
IO.binread('your_file', 2) == "\xFF\xFE".b
or you could compare its bytes:
IO.binread('your_file', 2).bytes == [0xFF, 0xFE]

What's the difference between CGI.unescape and URI.decode_www_form_component?

These functions seem to do the same thing.
irb> CGI.unescape "Sloths%3A+Society+and+Habitat"
=> "Sloths: Society and Habitat"
irb> URI.decode_www_form_component "Sloths%3A+Society+and+Habitat"
=> "Sloths: Society and Habitat"
What's the difference?
These methods are very similar. They both accept a string and an encoding and return a string in the specified encoding with the % escapes decoded. But there are differences:
Invalid escapes
URI.decode_www_form_component raises an ArgumentError if the string contains invalid escape sequences.
URI.decode_www_form_component('%xz')
# ArgumentError: invalid %-encoding (%xz)
CGI.unescape simply passes them through unchanged:
CGI.unescape('%xz')
# "%xz"
Invalid encodings
CGI.unescape falls back to the string's original encoding if the result is invalid in the encoding you specify:
p CGI.unescape("\u263a", 'ASCII')
# "☺"
URI.decode_www_form_component doesn't care; it returns the bytes tagged with the specified encoding even if they're invalid in it:
p URI.decode_www_form_component("\u263a", 'ASCII')
# "\xE2\x98\xBA"
Lastly (and I hesitate to even mention this), URI.decode_www_form_component is slightly faster because it uses a precomputed Hash to decode all 485 valid escape codes (it's case-sensitive), whereas CGI.unescape actually interprets the hex code and repacks it as a character.
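If you need to pick between them, the choice mostly comes down to how you want invalid input handled. A small sketch (the helper names decode_strict and decode_lenient are made up for illustration):
require 'cgi'
require 'uri'

# Hypothetical helper names, just to contrast the two behaviors.
def decode_strict(s)
  URI.decode_www_form_component(s)  # raises ArgumentError on invalid %-escapes
end

def decode_lenient(s)
  CGI.unescape(s)                   # passes invalid %-escapes through unchanged
end

decode_lenient('%xz')  # => "%xz"
decode_strict('%xz')   # ArgumentError: invalid %-encoding (%xz)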

can't convert "[" with encoding "gb2312" to "utf-8" in ruby1.9.3

I'm learning Ruby and trying to get filenames from an FTP server. The strings I get are encoded in GB2312 (simplified Chinese). This succeeds in most cases with this code:
str = str.force_encoding("gb2312")
str = str.encode("utf-8")
but it raises an error, "in `encode': "\xFD" followed by "\x88" on GB2312 (Encoding::InvalidByteSequenceError)", if the string contains the symbol "[" or "【".
Ruby's Encoding class allows a lot of introspection, so you can find out pretty well how to handle a given String:
"【".encoding
=> #<Encoding:UTF-8>
"【".valid_encoding?
=> true
"【".force_encoding("gb2312").valid_encoding?
=> false
That shows that this character is not part of the given character set. If you need to transform such characters, you can use the encode method and replace invalid or undefined characters like so:
"【".encode("gb2312", invalid: :replace, undef: :replace)
=> "\x{A1BE}"
If you have a String that has mixed character Encodings, you are pretty screwed. There is no way to find out without a lot of guessing.

ruby 1.9, force_encoding, but check

I have a string I have read from some kind of input.
To the best of my knowledge, it is UTF-8. Okay:
string.force_encoding("UTF-8")
But if this string has bytes in it that are not in fact legal UTF-8, I want to know now and take action.
Ordinarily, will force_encoding("UTF-8") raise if it encounters such bytes? I believe it will not.
If I were doing an #encode, I could choose from the handy options for what to do with characters that are invalid in the source encoding (or destination encoding).
But I'm not doing an #encode, I'm doing a #force_encoding. It has no such options.
Would it make sense to
string.force_encoding("utf8").encode("utf8")
to get an exception right away? Normally encoding from utf8 to utf8 doesn't make any sense. But maybe this is the way to get it to raise right away if there's invalid bytes? Or use the :replace option etc to do something different with invalid bytes?
But no, can't seem to make that work either.
Anyone know?
1.9.3-p0 :032 > a = "bad: \xc3\x28 okay".force_encoding("utf-8")
=> "bad: \xC3( okay"
1.9.3-p0 :033 > a.valid_encoding?
=> false
Okay, but how do I find and eliminate those bad bytes? Oddly, this does NOT raise:
1.9.3-p0 :035 > a.encode("utf-8")
=> "bad: \xC3( okay"
If I was converting to a different encoding, it would!
1.9.3-p0 :039 > a.encode("ISO-8859-1")
Encoding::InvalidByteSequenceError: "\xC3" followed by "(" on UTF-8
Or, if I told it to, it'd replace the bad byte with a "?":
1.9.3-p0 :040 > a.encode("ISO-8859-1", :invalid => :replace)
=> "bad: ?( okay"
So Ruby's got the smarts to know what are bad bytes in UTF-8, and to replace them with something else -- when converting to a different encoding. But I don't want to convert to a different encoding; I want to stay UTF-8 -- but I might want to raise if there's an invalid byte in there, or I might want to replace invalid bytes with replacement chars.
Isn't there some way to get ruby to do this?
Update: I believe this has finally been added to Ruby in 2.1, with String#scrub present in the 2.1 preview release to do this. So look for that!
(update: see https://github.com/jrochkind/scrub_rb)
So I coded up a solution to what I needed here: https://github.com/jrochkind/ensure_valid_encoding/blob/master/lib/ensure_valid_encoding.rb
But only much more recently did I realize this actually IS built into the stdlib; you just need to, somewhat counter-intuitively, pass 'binary' as the "source encoding":
a = "bad: \xc3\x28 okay".force_encoding("utf-8")
a.encode("utf-8", "binary", :undef => :replace)
=> "bad: �( okay"
Yep, that's exactly what I wanted. So it turns out this IS built into the 1.9 stdlib; it's just undocumented and few people know it (or maybe few people who speak English know it?). Although I saw these arguments used this way on a blog somewhere, so someone else knew it!
In Ruby 2.1, the stdlib finally supports this with scrub.
http://ruby-doc.org/core-2.1.0/String.html#method-i-scrub
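For reference, a quick look at what scrub does with the example string from this question (Ruby 2.1+):
a = "bad: \xc3\x28 okay".force_encoding("UTF-8")
a.scrub        # => "bad: �( okay"  (Unicode replacement char by default)
a.scrub("?")   # => "bad: ?( okay"
a.scrub { |bytes| "<" + bytes.unpack("H*")[0] + ">" }  # => "bad: <c3>( okay"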
Make sure that your script file itself is saved as UTF-8 and try the following:
# encoding: UTF-8
p [a = "bad: \xc3\x28 okay", a.valid_encoding?]
p [a.force_encoding("utf-8"), a.valid_encoding?]
p [a.encode!("ISO-8859-1", :invalid => :replace), a.valid_encoding?]
This gives the following on my Windows 7 system:
["bad: \xC3( okay", false]
["bad: \xC3( okay", false]
["bad: ?( okay", true]
So your bad char is replaced; you can do it right away as follows:
a = "bad: \xc3\x28 okay".encode!("ISO-8859-1", :invalid => :replace)
=> "bad: ?( okay"
EDIT: here is a solution that works with any arbitrary encoding; the first method encodes only the bad chars, the second simply replaces them with a ?
def validate_encoding(str)
  str.chars.collect do |c|
    c.valid_encoding? ? c : c.encode(Encoding.locale_charmap, :invalid => :replace)
  end.join
end

def validate_encoding2(str)
  str.chars.collect do |c|
    c.valid_encoding? ? c : '?'
  end.join
end
a = "bad: \xc3\x28 okay"
puts validate_encoding(a) #=>bad: ?( okay
puts validate_encoding(a).valid_encoding? #=>true
puts validate_encoding2(a) #=>bad: ?( okay
puts validate_encoding2(a).valid_encoding? #=>true
To check that a string has no invalid sequences, try to convert it to the binary encoding:
# Returns true if the string has only valid sequences
def valid_encoding?(string)
  string.encode('binary', :undef => :replace)
  true
rescue Encoding::InvalidByteSequenceError
  false
end
p valid_encoding?("\xc0".force_encoding('iso-8859-1')) # true
p valid_encoding?("\u1111") # true
p valid_encoding?("\xc0".force_encoding('utf-8')) # false
This code replaces undefined characters, because we don't care if there are valid sequences that cannot be represented in binary. We only care if there are invalid sequences.
A slight modification to this code returns the actual error, which has valuable information about the improper encoding:
# Returns the encoding error, or nil if there isn't one.
def encoding_error(string)
  string.encode('binary', :undef => :replace)
  nil
rescue Encoding::InvalidByteSequenceError => e
  e.to_s
end

# Returns truthy if the string has only valid sequences
def valid_encoding?(string)
  !encoding_error(string)
end
puts encoding_error("\xc0".force_encoding('iso-8859-1')) # nil
puts encoding_error("\u1111") # nil
puts encoding_error("\xc0".force_encoding('utf-8')) # "\xC0" on UTF-8
About the only thing I can think of is to transcode to some encoding and back that won't damage the string in the round trip:
string.force_encoding("UTF-8").encode("UTF-32LE").encode("UTF-8")
Seems rather wasteful, though.
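For what it's worth, the round trip does raise on the example string from this question:
"bad: \xc3\x28 okay".force_encoding("UTF-8").encode("UTF-32LE")
# Encoding::InvalidByteSequenceError: "\xC3" followed by "(" on UTF-8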
Okay, here's a really lame pure-Ruby way to do it I figured out myself. It probably performs terribly. What the heck, Ruby? Not selecting my own answer for now, hoping someone else will show up and give us something better.
# Pass in a string; will raise an Encoding::InvalidByteSequenceError
# if it contains an invalid byte for its encoding; otherwise
# returns an equivalent string.
#
# OR, like String#encode, pass in option `:invalid => :replace`
# to replace invalid bytes with a replacement string in the
# returned string. Pass the char you'd like with option `:replace`;
# otherwise, like String#encode, it will use the Unicode replacement
# char if it thinks it's a Unicode encoding, else ASCII '?'.
#
# In any case, the method will raise, or return a new string
# that is #valid_encoding?
def validate_encoding(str, options = {})
  str.chars.collect do |c|
    if c.valid_encoding?
      c
    elsif options[:invalid] == :replace
      options[:replace] || (
        # surely there's a better way to tell if
        # an encoding is a 'Unicode encoding form'
        # than this? What's wrong with you, ruby 1.9?
        str.encoding.name.start_with?('UTF') ? "\uFFFD" : "?"
      )
    else
      # it ought to be filled out with all the metadata
      # this exception usually has, but what a pain!
      raise Encoding::InvalidByteSequenceError.new
    end
  end.join
end
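Usage looks like this, with the example string from above:
validate_encoding("bad: \xc3\x28 okay")
# raises Encoding::InvalidByteSequenceError
validate_encoding("bad: \xc3\x28 okay", :invalid => :replace)
# => "bad: �( okay"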
More ranting at http://bibwild.wordpress.com/2012/04/17/checkingfixing-bad-bytes-in-ruby-1-9-char-encoding/
If you are doing this for a "real-life" use case - for example, parsing different strings entered by users, and not just trying to "decode" a totally random file that could be made of as many encodings as you wish - then I guess you can at least assume that all characters in each string have the same encoding.
Then, in this case, what would you think about this?
strings = [
  "UTF-8 string with some utf8 chars \xC3\xB2 \xC3\x93",
  "ISO-8859-1 string with some iso-8859-1 chars \xE0 \xE8",
  "..."
]

strings.each do |s|
  s.force_encoding "utf-8"
  next if s.valid_encoding?
  until s.valid_encoding?
    s.force_encoding "ISO-8859-1"
    s.force_encoding "..."   # placeholder: further candidate encodings
  end
  s.encode!("utf-8")
end
I am not a Ruby "pro" in any way, so please forgive me if my solution is wrong or even a bit naive.
I just try to give back what I can; this is what I've come up with while working (I still am) on a little parser for arbitrarily encoded strings, which I am building for a study project.
I must admit I haven't fully tested it yet. I've only had a couple of "positive" results, but I was so excited at possibly having found what I was struggling to find (after all the time I spent reading about this on SO) that I felt the need to share it as quickly as possible, hoping it could save some time for anyone who has been looking for this as long as I have... if it works as expected. :)
A simple way to provoke an exception seems to be:
untrusted_string.match /./
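For example, with the invalid string from earlier in this thread:
a = "bad: \xc3\x28 okay".force_encoding("UTF-8")
a.match /./
# ArgumentError: invalid byte sequence in UTF-8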
Here are 2 common situations and how to deal with them in Ruby 2.1+. I know, the question refers to Ruby v1.9, but maybe this is helpful for others finding this question via Google.
Situation 1
You have a UTF-8 string with possibly a few invalid bytes
Remove the invalid bytes:
str = "Partly valid\xE4 UTF-8 encoding: äöüß"
str.scrub('')
# => "Partly valid UTF-8 encoding: äöüß"
Situation 2
You have a string that could be in either UTF-8 or ISO-8859-1 encoding
Check which encoding it is and convert to UTF-8 (if necessary):
str = "String in ISO-8859-1 encoding: \xE4\xF6\xFC\xDF"
unless str.valid_encoding?
str.encode!( 'UTF-8', 'ISO-8859-1', invalid: :replace, undef: :replace, replace: '?' )
end #unless
# => "String in ISO-8859-1 encoding: äöüß"
Notes
The above code snippets assume that Ruby encodes all your strings in UTF-8 by default. Even though this is almost always the case, you can make sure by starting your scripts with # encoding: UTF-8.
It is programmatically possible to detect invalid byte sequences in most multi-byte encodings like UTF-8 (in Ruby, see #valid_encoding?). However, it is NOT (easily) possible to detect invalidity in single-byte encodings like ISO-8859-1, where every byte sequence is formally valid. Thus the above code snippet does not work the other way around, i.e. for detecting whether a String is valid ISO-8859-1.
Even though UTF-8 has become increasingly popular as the default encoding on the web, ISO-8859-1 and other Latin-1 flavors are still very popular in Western countries, especially in North America. Be aware that there are several single-byte encodings out there that are very similar to, but slightly vary from, ISO-8859-1. Examples: CP1252 (a.k.a. Windows-1252), ISO-8859-15
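Combining both situations into one helper, a sketch under the same assumptions (the to_utf8 name is made up, and the ISO-8859-1 fallback is a guess; CP1252 may fit your data better):
# Assumes str is tagged as UTF-8; falls back to ISO-8859-1 when invalid.
def to_utf8(str)
  return str.scrub if str.valid_encoding?
  str.encode('UTF-8', 'ISO-8859-1', invalid: :replace, undef: :replace, replace: '?')
end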

How to extract a single character (as a string) from a larger string in Ruby?

What is the Ruby idiomatic way for retrieving a single character from a string as a one-character string? There is the str[n] method of course, but (as of Ruby 1.8) it returns a character code as a Fixnum, not a string. How do you get a single-character string?
In Ruby 1.9, it's easy. In Ruby 1.9, Strings are encoding-aware sequences of characters, so you can just index into it and you will get a single-character string out of it:
'µsec'[0] # => 'µ'
However, in Ruby 1.8, Strings are sequences of bytes and thus completely unaware of the encoding. If you index into a string and that string uses a multibyte encoding, you risk indexing right into the middle of a multibyte character (in this example, the 'µ' is encoded in UTF-8):
'µsec'[0] # => 194
'µsec'[0].chr # => Garbage
'µsec'[0,1] # => Garbage
However, Regexps and some specialized string methods support at least a small subset of popular encodings, among them some Japanese encodings (e.g. Shift-JIS) and (in this example) UTF-8:
'µsec'.split('')[0] # => 'µ'
'µsec'.split(//u)[0] # => 'µ'
Before Ruby 1.9:
'Hello'[1].chr # => "e"
Ruby 1.9+:
'Hello'[1] # => "e"
A lot has changed in Ruby 1.9 including string semantics.
Should work for Ruby before and after 1.9:
'Hello'[2,1] # => "l"
Please see Jörg W Mittag's comment: this is correct only for single-byte character sets.
'abc'[1..1] # => "b"
'abc'[1].chr # => "b"
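For completeness, a few other encoding-aware ways to get a one-character string in Ruby 1.9+ (just alternatives, not the canonical answer):
'µsec'.chars.first        # => "µ"
'µsec'[0, 1]              # => "µ"  (indices count characters in 1.9+)
'µsec'.each_char.to_a[0]  # => "µ"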
