ruby 1.9: invalid byte sequence in UTF-8

I'm writing a crawler in Ruby (1.9) that consumes lots of HTML from a lot of random sites.
When trying to extract links, I decided to just use .scan(/href="(.*?)"/i) instead of nokogiri/hpricot (major speedup). The problem is that I now receive a lot of "invalid byte sequence in UTF-8" errors.
From what I understand, the net/http library doesn't have any encoding-specific options, and the data that comes in is basically not properly tagged.
What would be the best way to actually work with that incoming data? I tried .encode with the replace and invalid options set, but no success so far...

In Ruby 1.9.3 it is possible to use String#encode to "ignore" the invalid UTF-8 sequences. Here is a snippet that works both in 1.8 (Iconv) and 1.9 (String#encode):
require 'iconv' unless String.method_defined?(:encode)
if String.method_defined?(:encode)
  file_contents.encode!('UTF-8', 'UTF-8', :invalid => :replace)
else
  ic = Iconv.new('UTF-8', 'UTF-8//IGNORE')
  file_contents = ic.iconv(file_contents)
end
or, if you have really troublesome input, you can do a double conversion from UTF-8 to UTF-16 and back to UTF-8:
require 'iconv' unless String.method_defined?(:encode)
if String.method_defined?(:encode)
  file_contents.encode!('UTF-16', 'UTF-8', :invalid => :replace, :replace => '')
  file_contents.encode!('UTF-8', 'UTF-16')
else
  ic = Iconv.new('UTF-8', 'UTF-8//IGNORE')
  file_contents = ic.iconv(file_contents)
end

Neither the accepted answer nor the other answer worked for me. I found this post, which suggested
string.encode!('UTF-8', 'binary', invalid: :replace, undef: :replace, replace: '')
This fixed the problem for me.

My current solution is to run:
my_string.unpack("C*").pack("U*")
This will at least get rid of the exceptions, which was my main problem.
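For context, unpack("C*") reads each raw byte as an integer and pack("U*") re-encodes each integer as a UTF-8 codepoint, which amounts to treating the input as ISO-8859-1. A minimal sketch (the input string is my own illustration, not from the answer):
dirty = "caf\xE9"     # é as a single Latin-1 byte; invalid as UTF-8
dirty.valid_encoding? # => false
clean = dirty.unpack("C*").pack("U*")
clean                 # => "café"
clean.valid_encoding? # => true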

Try this:
def to_utf8(str)
  str = str.force_encoding('UTF-8')
  return str if str.valid_encoding?
  str.encode("UTF-8", 'binary', invalid: :replace, undef: :replace, replace: '')
end
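A quick usage sketch (the inputs are my own examples). Note that when the string is invalid, the 'binary' re-encode drops every non-ASCII byte, including bytes belonging to otherwise valid UTF-8 characters:
to_utf8("caf\xC3\xA9") # valid UTF-8, returned unchanged: "café"
to_utf8("caf\xE9")     # invalid byte dropped: "caf"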

I recommend using an HTML parser. Just find the fastest one.
Parsing HTML is not as easy as it may seem.
When a UTF-8 HTML document contains invalid UTF-8 sequences, browsers just replace them with the "�" symbol, so once the invalid HTML gets parsed, the resulting text is a valid string.
Even inside attribute values you have to decode HTML entities like &amp;.
Here is a great question that sums up why you cannot reliably parse HTML with a regular expression:
RegEx match open tags except XHTML self-contained tags
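As a sketch of the parser advice above, here is link extraction with Nokogiri (my choice; the answer names no specific parser). Nokogiri tolerates invalid byte sequences and decodes entities such as &amp; inside attribute values, so the extracted hrefs come out clean. html_body stands for the fetched page:
require 'nokogiri'

doc = Nokogiri::HTML(html_body, nil, 'UTF-8')
links = doc.css('a[href]').map { |a| a['href'] }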

Try UTF-8 first, fall back to Windows-1252, and force through ISO-8859-1 as a last resort:
attachment = file.read
begin
  # Try it as UTF-8 directly
  cleaned = attachment.dup.force_encoding('UTF-8')
  unless cleaned.valid_encoding?
    # Some of it might be old Windows code page
    cleaned = attachment.encode('UTF-8', 'Windows-1252')
  end
  attachment = cleaned
rescue EncodingError
  # Force it to UTF-8, throwing out invalid bits
  attachment = attachment.force_encoding("ISO-8859-1").encode("utf-8", replace: nil)
end

This seems to work:
def sanitize_utf8(string)
  return nil if string.nil?
  return string if string.valid_encoding?
  string.chars.select { |c| c.valid_encoding? }.join
end
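A usage sketch (the inputs are my own). Unlike the 'binary' re-encode above, this keeps valid multibyte characters and drops only the invalid bytes:
sanitize_utf8(nil)         # => nil
sanitize_utf8("café")      # valid, returned as-is: "café"
sanitize_utf8("caf\xFF é") # invalid byte dropped: "caf é"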

I encountered a string that mixed English, Russian, and some other alphabets, which caused an exception. I only need Russian and English, and this currently works for me:
ec1 = Encoding::Converter.new("UTF-8", "Windows-1251", :invalid => :replace, :undef => :replace, :replace => "")
ec2 = Encoding::Converter.new("Windows-1251", "UTF-8", :invalid => :replace, :undef => :replace, :replace => "")
t = ec2.convert(ec1.convert(t))

While Nakilon's solution works, at least as far as getting past the error, in my case I had a weird messed-up character, originating from Microsoft Excel converted to CSV, that was registering in Ruby as a (get this) Cyrillic K, which in Ruby was a bolded K. To fix this I used 'iso-8859-1', viz. CSV.parse(f, :encoding => "iso-8859-1"), which turned my freaky-deaky Cyrillic Ks into a much more manageable /\xCA/, which I could then remove with string.gsub!(/\xCA/, '').

Before you use scan, make sure that the requested page's Content-Type header is text/html, since there can be links to things like images, which are not encoded in UTF-8. The page could also be non-HTML if you picked up an href in something like a <link> element. How to check this varies depending on which HTTP library you are using. Then, make sure the result is ASCII-only with String#ascii_only? (not UTF-8, because HTML is only supposed to use ASCII; entities can be used for everything else). If both of those tests pass, it is safe to use scan.
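A sketch of those two checks with net/http, the library the question mentions (the URL is a placeholder):
require 'net/http'

response = Net::HTTP.get_response(URI('http://example.com/'))

# 1. only scan documents that are actually served as HTML
if response['Content-Type'].to_s.start_with?('text/html')
  body = response.body
  # 2. only scan bodies that are pure ASCII; entities cover the rest
  links = body.scan(/href="(.*?)"/i).flatten if body.ascii_only?
end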

There is also the scrub method (Ruby 2.1+) to filter out invalid bytes:
string.scrub('')
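Besides a fixed replacement string, scrub also takes a block, and with no argument it substitutes U+FFFD (the examples are my own):
"foo\x81bar".scrub       # => "foo�bar" (default replacement character)
"foo\x81bar".scrub('')   # => "foobar"
"foo\x81bar".scrub { |bytes| "<#{bytes.unpack('H*')[0]}>" } # => "foo<81>bar"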

If you don't "care" about the data you can just do something like:
search_params = params[:search].valid_encoding? ? params[:search].gsub(/\W+/, '') : "nothing"
I just used valid_encoding? to get passed it. Mine is a search field, and so i was finding the same weirdness over and over so I used something like: just to have the system not break. Since i don't control the user experience to autovalidate prior to sending this info (like auto feedback to say "dummy up!") I can just take it in, strip it out and return blank results.

Related

How to use :replace, :invalid and :undef args for encoding using CSV.read?

Since Ruby 1.9, CSV uses a parser that can perform encoding conversion if you use methods like ::foreach, ::open, ::read, and ::readlines.
For example, CSV.read('path/to/file', encoding: "windows-1252:UTF-8") tries to read a file in windows-1252 and returns an array of UTF-8 encoded strings.
If the conversion between charsets hits undefined characters, it raises an Encoding::UndefinedConversionError.
The String#encode method has some nice args to deal with these undefined characters:
str = str.encode('UTF-8', invalid: :replace, undef: :replace, replace: "")
Is there a way to use this kind of replace rule for undefined conversions between charsets with the CSV parser?
Thank you.
There is, indeed, a way. The trick is to define a custom converter that does the conversion you want using String#encode. Converters are run before CSV tries to do its automatic conversion to UTF-8. We pass the custom converter to CSV.read as the :converters option, along with the original :encoding:
UTF8_CONVERTER = ->(field) { field.encode('utf-8', invalid: :replace, undef: :replace, replace: "") }
CSV.read('foo.csv', encoding: 'windows-1252', converters: UTF8_CONVERTER)
Since there aren't any characters in Windows-1252 that aren't also in UTF-8, I'll demonstrate the other way around. Suppose you have this UTF-8 CSV file:
foo,bar
yes👍,no💩
And suppose I want to convert it to ASCII-8BIT (because reasons?). This gives me an error:
CSV.read('emoji.csv', encoding: 'utf-8:ascii-8bit')
# => Encoding::UndefinedConversionError: U+1F44D from UTF-8 to ASCII-8BIT
But if I define a custom converter that replaces those undefined characters, it works perfectly:
ASCII_CONVERTER = ->(field) { field.encode('ascii-8bit', replace: "#") }
CSV.read('emoji.csv', encoding: 'utf-8', converters: ASCII_CONVERTER)
# => [ [ "foo", "bar" ],
# [ "yes#", "no#"] ]
(Note that encoding: 'utf-8' isn't strictly necessary here, since UTF-8 is the default, but it will be necessary if your file has a different encoding.)
If you want to use the replace behavior of String#encode, you will either have to encode the whole file content with it or do it line by line. Either way, you will lose information.
This is one way of doing it though:
file = File.open('path/to/file.csv')
file.each do |line|
  # keep in mind that the first parameter here is the destination encoding,
  # the second is the source encoding
  sanitized_line = line.encode('UTF-8', 'windows-1252', invalid: :replace, undef: :replace, replace: '')
  fields_array = CSV.parse_line(sanitized_line)
  # do whatever you want with the fields you extracted
end
If your conversion from one encoding to another is pretty much guaranteed not to lose information (iso-8859-1 to utf-8, for example), I would really recommend simply converting the file on reading.
Another thing to keep in mind is that Ruby does not try to figure out the encoding of a file on its own. If you omit the parameter, it just uses the default external and internal encodings, so you have to specify the encoding the file is in yourself. Ruby has no really reliable way of doing this, so in my case I ended up doing this (on an Ubuntu system):
encoding = `file --mime-encoding #{path_to_file} | awk '{print $2}'`.strip
arr_of_arrs = CSV.read(path_to_file, encoding: "#{encoding}:utf-8")

Ruby converting string encoding from ISO-8859-1 to UTF-8 not working

I am trying to convert a string from ISO-8859-1 encoding to UTF-8 but I can't seem to get it work. Here is an example of what I have done in irb.
irb(main):050:0> string = 'Norrlandsvägen'
=> "Norrlandsvägen"
irb(main):051:0> string.force_encoding('iso-8859-1')
=> "Norrlandsv\xC3\xA4gen"
irb(main):052:0> string = string.encode('utf-8')
=> "Norrlandsvägen"
I am not sure why Norrlandsvägen in iso-8859-1 will be converted into Norrlandsvägen in utf-8.
I have tried encode, encode!, encode(destinationEncoding, originalEncoding), iconv, force_encoding, and all kinds of weird work-arounds I could think of but nothing seems to work. Can someone please help me/point me in the right direction?
Ruby newbie still pulling hair like crazy but feeling grateful for all the replies here... :)
Background of this question: I am writing a gem that will download an XML file from some websites (which will have iso-8859-1 encoding) and save it in storage, and I would like to convert it to utf-8 first. But words like Norrlandsvägen keep messing me up. Really, any help would be greatly appreciated!
[UPDATE]: I realized running tests like this in the irb console might give me different behaviors so here is what I have in my actual code:
def convert_encoding(string, originalEncoding)
  puts "#{string.encoding}" # ASCII-8BIT
  string.encode(originalEncoding)
  puts "#{string.encoding}" # still ASCII-8BIT
  string.encode!('utf-8')
end
but the last line gives me the following error:
Encoding::UndefinedConversionError - "\xC3" from ASCII-8BIT to UTF-8
Thanks to @Amadan's answer below, I noticed that \xC3 actually shows up in irb if you run:
irb(main):001:0> string = 'ä'
=> "ä"
irb(main):002:0> string.force_encoding('iso-8859-1')
=> "\xC3\xA4"
I have also tried assigning the result of string.encode(originalEncoding) to a new variable but got an even weirder error:
newString = string.encode(originalEncoding)
puts "#{newString.encoding}" # can't even get to this line...
newString.encode!('utf-8')
and the error is Encoding::UndefinedConversionError - "\xC3" to UTF-8 in conversion from ASCII-8BIT to UTF-8 to ISO-8859-1
I am still quite lost in all of this encoding mess but I am really grateful for all the replies and help everyone has given me! Thanks a ton! :)
You assign a string, in UTF-8. It contains ä. UTF-8 represents ä with two bytes.
string = 'ä'
string.encoding
# => #<Encoding:UTF-8>
string.length
# 1
string.bytes
# [195, 164]
Then you force the bytes to be interpreted as if they were ISO-8859-1, without actually changing the underlying representation. This does not contain ä any more. It contains two characters, Ã and ¤.
string.force_encoding('iso-8859-1')
# => "\xC3\xA4"
string.length
# 2
string.bytes
# [195, 164]
Then you translate that into UTF-8. Since this is not reinterpretation but translation, you keep the two characters, but now encoded in UTF-8:
string = string.encode('utf-8')
# => "ä"
string.length
# 2
string.bytes
# [195, 131, 194, 164]
What you are missing is the fact that you originally don't have an ISO-8859-1 string, as you would get from your web service; you have gibberish. Fortunately, this is all confined to your console tests: if you read the response of the website using the proper input encoding, it should all work okay.
For your console test, let's demonstrate that if you start with a proper ISO-8859-1 string, it all works:
string = 'Norrlandsvägen'.encode('iso-8859-1')
# => "Norrlandsv\xE4gen"
string = string.encode('utf-8')
# => "Norrlandsvägen"
EDIT For your specific problem, this should work:
require 'net/https'
uri = URI.parse("https://rusta.easycruit.com/intranet/careerbuilder_se/export/xml/full")
options = {
  :use_ssl => uri.scheme == 'https',
  :verify_mode => OpenSSL::SSL::VERIFY_NONE
}
response = Net::HTTP.start(uri.host, uri.port, options) do |https|
  https.request(Net::HTTP::Get.new(uri.path))
end
body = response.body.force_encoding('ISO-8859-1').encode('UTF-8')
There's a difference between force_encoding and encode. The former sets the encoding for the string, whereas the latter actually transcodes the contents of the string to the new encoding. Consequently, the following code causes your problem:
string = "Norrlandsvägen"
string.force_encoding('iso-8859-1')
puts string.encode('utf-8') # NorrlandsvÃ¤gen
Whereas the following code will actually correctly encode your contents:
string = "Norrlandsvägen".encode('iso-8859-1')
string.encode!('utf-8')
Here's an example running in irb:
irb(main):023:0> string = "Norrlandsvägen".encode('iso-8859-1')
=> "Norrlandsv\xE4gen"
irb(main):024:0> string.encoding
=> #<Encoding:ISO-8859-1>
irb(main):025:0> string.encode!('utf-8')
=> "Norrlandsvägen"
irb(main):026:0> string.encoding
=> #<Encoding:UTF-8>
The above answer was spot on. Specifically this point here:
There's a difference between force_encoding and encode. The former
sets the encoding for the string, whereas the latter actually
transcodes the contents of the string to the new encoding.
In my situation, I had a text file with iso-8859-1 encoding. By default, Ruby uses UTF-8 encoding, so if you were to try to read the file without specifying the encoding, then you would get an error:
results = File.read(file)
results.encoding
=> #<Encoding:UTF-8>
results.split("\r\n")
ArgumentError: invalid byte sequence in UTF-8
You get an invalid byte sequence error because the characters in different encodings are represented by different byte lengths. Consequently, you would need to specify the encoding to the File API. Think of it like force_encoding:
results = File.read(file, encoding: "iso-8859-1")
So everything is good, right? No, not if you want to start parsing the iso-8859-1 string with UTF-8 character literals:
results = File.read(file, encoding: "iso-8859-1")
results.each_line do |line|
  puts line.split('¬')
end
Encoding::CompatibilityError: incompatible character encodings: ISO-8859-1 and UTF-8
Why this error? Because the '¬' literal is UTF-8 encoded. You are using a UTF-8 character sequence against an ISO-8859-1 string; they are incompatible encodings. Consequently, after you read the file as ISO-8859-1, you can ask Ruby to encode that ISO-8859-1 into UTF-8. And now you will be working with UTF-8 strings and thus no problems:
results = File.read(file, encoding: "iso-8859-1").encode('UTF-8')
results.encoding
results = results.split("\r\n")
results.each do |line|
  puts line.split('¬')
end
Ultimately, with some Ruby APIs, you do not need to use force_encoding('ISO-8859-1'). Instead, you just specify the expected encoding to the API. However, you must convert it back to UTF-8 if you plan to parse it with UTF-8 strings.

how to select dropdown having Encoding::UndefinedConversionError in watir?

I want to select a dropdown having text = "Côte d'Ivoire".
ie.select_list(:id, "name01").select("#{text}")
I tried these:
1. # encoding: UTF-8 (not working)
2. text.force_encoding("ASCII-8BIT").encode('UTF-8', undef: :replace, replace: '')
   # text = "Cte d'Ivoire"
What should I do? I also want to save this text to my DB. Please help.
If you know the string is UTF-8 encoded, why not just force encoding to UTF-8?
#encoding: ASCII-8BIT
str = "C\xC3\xB4te d'Ivoire" # => "C\xC3\xB4te d'Ivoire"
str.encoding # => #<Encoding:ASCII-8BIT>
str.force_encoding('UTF-8')
str # => "Côte d'Ivoire"
str.encoding # => #<Encoding:UTF-8>
If you are using Côte d'Ivoire as a literal anywhere in your Ruby source files, be sure to add
#encoding: UTF-8
as the first line of the file to tell Ruby that the file is UTF-8 encoded.
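Putting that together with the question's code, a minimal sketch (the select-list id is taken from the question):
# encoding: UTF-8
text = "Côte d'Ivoire"
ie.select_list(:id, "name01").select(text)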
I would have expected your solutions to work, unless the software you are using to save/execute the files is overriding the setting. I recall having that issue with NetBeans.
An alternative, if you cannot fix the actual encoding, is to use a regex to match just the standard characters.
text = /C.te d'Ivoire/
browser.select_list.select(text)
The regex replaces the accented character with a ., which matches any single character.
Not a great solution, but perhaps a workable one if nothing else works.

UTF-8 Error in Ruby

I'm scraping a few websites and eventually I hit a UTF-8 error that looks like this:
/usr/local/lib/ruby/gems/1.9.1/gems/dm-core-1.2.0/lib/dm-core/support/ext/blank.rb:19:in
`=~': invalid byte sequence in UTF-8 (ArgumentError)
Now, I don't care about the websites being 100% accurate. Is there a way I can take the page I get and strip out any problem encodings and then pass it around inside my program?
I'm using ruby 1.9.3p0 (2011-10-30 revision 33570) [x86_64-darwin11.2.0] if that matters.
Update:
def self.blank?(value)
  return value.blank? if value.respond_to?(:blank?)
  case value
  when ::NilClass, ::FalseClass
    true
  when ::TrueClass, ::Numeric
    false
  when ::Array, ::Hash
    value.empty?
  when ::String
    value !~ /\S/ # This is the line 19 that has the issue.
  else
    value.nil? || (value.respond_to?(:empty?) && value.empty?)
  end
end
When I try to save the following line:
What Happens in The Garage Tin Sign2. � � Newsletter Our monthly newsletter,
It throws the error. It's on page: http://www.stationbay.com/. But what is odd is that when I view it in my web browser it doesn't show the funny symbols in the source.
What do I do next?
The problem is that your string contains non-UTF-8 characters, but seems to have UTF-8 encoding forced. The following short code demonstrates the issue:
a = "\xff"
a.force_encoding "utf-8"
a.valid_encoding? # returns false
a =~ /x/ # provokes ArgumentError: invalid byte sequence in UTF-8
The best way to fix this is to apply the proper encoding right from the beginning. If this is not an option, you can use String#encode:
a = "\xff"
a.force_encoding "utf-8"
a.valid_encoding? # returns false
a.encode!("utf-8", "utf-8", :invalid => :replace)
a.valid_encoding? # returns true now
a =~ /x/ # works now

In Ruby/Rails, how can I encode/escape special characters in URLs?

How do I encode or 'escape' the URL before I use OpenURI to open(url)?
We're using OpenURI to open a remote url and return the xml:
getresult = open(url).read
The problem is the URL contains some user-input text that contains spaces and other characters, including "+", "&", "?", etc. potentially, so we need to safely escape the URL. I saw lots of examples when using Net::HTTP, but have not found any for OpenURI.
We also need to be able to un-escape a similar string we receive in a session variable, so we need the reciprocal function.
Don't use URI.escape as it has been deprecated in 1.9.
Rails' Active Support adds Hash#to_query:
{foo: 'asd asdf', bar: '"<#$dfs'}.to_query
# => "bar=%22%3C%23%24dfs&foo=asd+asdf"
Also, as you can see, it always orders query parameters the same way, which is good for HTTP caching.
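For the reciprocal function the question asks about, one option (my suggestion; this answer doesn't name one) is Rack::Utils.parse_nested_query, which undoes to_query:
require 'rack/utils'

Rack::Utils.parse_nested_query("bar=%22%3C%23%24dfs&foo=asd+asdf")
# => {"bar"=>"\"<#$dfs", "foo"=>"asd asdf"}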
Ruby Standard Library to the rescue:
require 'uri'
user_text = URI.escape(user_text)
url = "http://example.com/#{user_text}"
result = open(url).read
See more at the docs for the URI::Escape module. It also has a method to do the inverse (unescape).
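A round-trip sketch (my own example). Note that URI.escape leaves reserved characters such as & alone by default, which is one reason the per-component approach in the next answer is safer:
require 'uri'

escaped = URI.escape("hello world & more") # => "hello%20world%20&%20more"
URI.unescape(escaped)                      # => "hello world & more"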
The main thing you have to consider is that you have to escape the keys and values separately before you compose the full URL.
All the methods which get the full URL and try to escape it afterwards are broken, because they cannot tell whether any & or = character was supposed to be a separator, or maybe a part of the value (or part of the key).
The CGI library seems to do a good job, except for the space character, which was traditionally encoded as +, and nowadays should be encoded as %20. But this is an easy fix.
Please, consider the following:
require 'cgi'
def encode_component(s)
  # The space-encoding is a problem:
  CGI.escape(s).gsub('+', '%20')
end

def url_with_params(path, args = {})
  return path if args.empty?
  path + "?" + args.map do |k, v|
    "#{encode_component(k.to_s)}=#{encode_component(v.to_s)}"
  end.join("&")
end

def params_from_url(url)
  path, query = url.split('?', 2)
  return [path, {}] unless query
  q = query.split('&').inject({}) do |memo, p|
    k, v = p.split('=', 2)
    memo[CGI.unescape(k)] = CGI.unescape(v)
    memo
  end
  return [path, q]
end

u = url_with_params("http://example.com",
                    "x[1]" => "& ?=/",
                    "2+2=4" => "true")
# "http://example.com?x%5B1%5D=%26%20%3F%3D%2F&2%2B2%3D4=true"
params_from_url(u)
# ["http://example.com", {"x[1]"=>"& ?=/", "2+2=4"=>"true"}]
Ruby has the built-in URI library, and there is also the Addressable gem, in particular Addressable::URI.
I prefer Addressable::URI: it's very full-featured and handles the encoding for you when you use the query_values= method.
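A minimal Addressable sketch (assumes the addressable gem is installed; the URL and parameters are my own):
require 'addressable/uri'

uri = Addressable::URI.parse("http://example.com/search")
uri.query_values = { 'q' => 'a b&c', 'lang' => 'sv' }
uri.to_s # => "http://example.com/search?q=a%20b%26c&lang=sv"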
I've seen some discussions about URI going through some growing pains, so I tend to leave it alone for handling encoding/escaping until these things get sorted out:
http://osdir.com/ml/ruby-core/2010-06/msg00324.html
http://osdir.com/ml/lang-ruby-core/2009-06/msg00350.html
http://osdir.com/ml/ruby-core/2011-06/msg00748.html
