Encoding::UndefinedConversionError when using open-uri - ruby

When I do this:
require 'open-uri'
response = open('some-html-page-url-here')
response.read
For a certain URL I get the following error (presumably due to a wrong encoding in the returned page?):
Encoding::UndefinedConversionError: U+00A0 from UTF-8 to US-ASCII
Any way around this to still get the html content?

In the introduction to the open-uri module, the docs say this:
It is possible to open an http, https or ftp URL as though it were a file
And if you know anything about reading files, then you know that you need to know the encoding of the file you are trying to read, so that you can tell ruby how to read it (i.e. how many bytes (or how much space) each character will occupy).
In the first code example in the docs, there is this:
open("http://www.ruby-lang.org/en") {|f|
f.each_line {|line| p line}
p f.base_uri # <URI::HTTP:0x40e6ef2 URL:http://www.ruby-lang.org/en/>
p f.content_type # "text/html"
p f.charset # "iso-8859-1"
p f.content_encoding # []
p f.last_modified # Thu Dec 05 02:45:02 UTC 2002
}
So if you don't know the encoding of the "file" you are trying to read, you can get the encoding with f.charset. If that encoding is different than your default external encoding, you will most likely get an error. Your default external encoding is the encoding ruby uses to read from external sources. You can check what your default external encoding is set to like this:
The default external Encoding is pulled from your environment... Have a look:
$ echo $LC_CTYPE
en_US.UTF-8
or
$ ruby -e 'puts Encoding.default_external.name'
UTF-8
http://graysoftinc.com/character-encodings/ruby-19s-three-default-encodings
On Mac OS X, I actually have to do the following to see the default external encoding:
$ echo $LANG
You can set your default external encoding with the method Encoding.default_external=(), so you might want to try something like this:
open('some_url_here') do |f|
  Encoding.default_external = f.charset
  html = f.read
end
Setting an IO object to binmode, like you have done, tells ruby that the encoding of the file is BINARY (or ruby's confusing synonym ASCII-8BIT), which means you are telling ruby that each character in the file takes up one byte. In your case, you are telling ruby to read the character U+00A0, whose UTF-8 representation takes up two bytes 0xC2 0xA0, as two characters instead of just one character, so you have eliminated your error, but you have produced two junk characters instead of the original character.
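To make that concrete, here is a minimal irb-style sketch of the byte arithmetic described above:
nbsp = "\u00A0"                             # non-breaking space, U+00A0
nbsp.bytes                                  #=> [194, 160], i.e. 0xC2 0xA0 in UTF-8
nbsp.length                                 #=> 1 character when the string is treated as UTF-8
binary = nbsp.dup.force_encoding('BINARY')  # roughly what binmode does to the data you read
binary.length                               #=> 2 "characters", one per byte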

Doing a response.binmode before the response.read stops the error from happening.

Had the same issue, will add my solution here:
After reading the open-uri documentation further, it turns out you can set the encoding of the IO before reading, using the set_encoding method, like this:
result = open('some-page-uri') do |io|
  io.set_encoding(Encoding.default_external)
  io.read
end
Hope it helps!
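Building on the same idea, here is a hedged sketch (the URL is a placeholder, and it assumes the server reports a usable charset) that avoids touching any global setting: read the raw bytes, tag them with the reported charset, then transcode to UTF-8, replacing anything unmappable:
require 'open-uri'

html = open('some-page-uri') do |io|
  io.binmode                                               # read raw bytes; no conversion on read
  raw = io.read.force_encoding(io.charset || 'UTF-8')      # tag bytes with the server-reported charset
  raw.encode('UTF-8', invalid: :replace, undef: :replace)  # transcode, replacing unconvertible bytes
end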

Related

Decode base64 string and write to file

I'm trying to read a file which contains a base64-encoded string and write the decoded output to another file. My Input.txt contains a base64 string, something like:
PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48cmV2aWV3LWNhc2UgY3JlYXRl\r\nZGF0ZT0iMTMvTWFyLzIwMTQgMDk6MDQ6NTEiIHN5c3RlbT0iVHJhZmlndXJhX1RlbXBsYXRlX01h\r\nbmFnZW1lbnRfdjUuMSIgYmF0Y2hpZD0iMCIgdHJhbnNhY3Rpb25ubz0iMSIgYmF0Y2huYW1lPSJH\r\nVUlEKGY1NWRmYjgwODQ4ZDQ3YzliZmVhYTg3YzMyZDQyNDQyKS1HTE9CQUxfSU5WT0lDRS1FTkdM\r\nSVNIIiB2ZXJzaW9uPSI1LjEuMi44ICBidWlsZCA1MjUzOSI+PHRyYW5zYWN0aW9uPjxvYmplY3Rz\r\nPjxvYmplY3QgY2xhc3M9IlRoXzE5NTQwMDk3OTRfNl9tb2RlbCIgbmFtZT0ibW9kZWwiPjxwcm9w\r\nZXJ0eSBuYW1lPSJUaXRsZSIgdmFsdWU9IlByb3Zpc2lvbmFsIEludm9pY2UiLz48cHJvcGVydHkg\r\nbmFtZT0iR3JvdXBDb21wYW55Ij48b2JqZWN0IGNsYXNzPSJUaF8xOTU0MDA5Nzk0XzZfR3JvdXBD\r\nb21wYW55IiBuYW1lPSJHcm91cENvbXBhbnkiPjxwcm9wZXJ0eSBuYW1lPSJOYW1lIiB2YWx1ZT0i\r\nVHJhZmlndXJhIEJlaGVlciBCLlYuIEFNU1RFUkRBTSwgQlJBTkNIIE9GRklDRSBMVUNFUk5FIi8+\r\nPHByb3BlcnR5IG5hbWU9IkFkZHJlc3MiIHZhbHVlPSJaPz9yaWNoc3RyYXNzZSAzMSIgaW5kZXg9\r\nIjAiLz48cHJvcGVydHkgbmFtZT0iQWRkcmVzcyIgdmFsdWU9Ikx1Y2VybmUiIGluZGV4PSIxIi8+\r\nPHByb3BlcnR5IG5hbWU9IkFkZHJlc3MiIHZhbHVlPSI2MDAyIiBpbmRleD0iMiIvPjxwcm9wZXJ0\r\neSBuYW1lPSJBZGRyZXNzIiB2YWx1ZT0iU3dpdHplcmxhbmQiIGluZGV4PSIzIi8+PHByb3BlcnR5\r\nIG5hbWU9IlBob25lTnVtYmVyIiB2YWx1
This string is created on the server side with the Java Apache codec.binary.Base64 library. It was captured with Fiddler while two different web services communicated with each other. Sometimes I have no access to the other web service, which is why I sniff the messages between the services. In addition, I use Ruby to automate some routine tasks and decided to use Ruby again this time. For decoding the captured base64 string I use the following snippet of code:
require "base64"
content = File.read('Input.txt')
decode_base64_content = Base64.decode64(content)
File.open("Output.txt", "wb") do |f|
f.write(decode_base64_content)
end
But the output looks malformed, like <?xml version="1.0" encoding="UTF-8"?><review-case create®vFFSТ#2фЦ"у#B“ЈCЈS"7—7FVУТ%G&f–wW&хFVЧЖFUфЦзnagement_v5.1" ba and so on. Can you please advise on what I'm doing wrong? I use Ruby 1.9.3 on Windows 7 and Ubuntu 12.04.
I do not know how you managed to do this, but the line endings \r\n in your string seem to be there as literal four-character sequences (backslash, r, backslash, n), not as the two-byte CRLF they are supposed to escape. If I copy your file into a ruby string with single quotes:
unescaped='PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48cmV2aWV3LWNhc2UgY3JlYXRl\r\nZGF0ZT0iMTMvTWFyLzIwMTQgMDk6MDQ6NTEiIHN5c3RlbT0iVHJhZmlndXJhX1RlbXBsYXRlX01h\r\nbmFnZW1lbnRfdjUuMSIgYmF0Y2hpZD0iMCIgdHJhbnNhY3Rpb25ubz0iMSIgYmF0Y2huYW1lPSJH'
Base64.decode64(unescaped)
#=> garbled text for every second line
If I do the same with double quotes (which interpret the escape sequences):
escaped="PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48cmV2aWV3LWNhc2UgY3JlYXRl\r\nZGF0ZT0iMTMvTWFyLzIwMTQgMDk6MDQ6NTEiIHN5c3RlbT0iVHJhZmlndXJhX1RlbXBsYXRlX01h\r\nbmFnZW1lbnRfdjUuMSIgYmF0Y2hpZD0iMCIgdHJhbnNhY3Rpb25ubz0iMSIgYmF0Y2huYW1lPSJH"
Base64.decode64(escaped)
#=> all is well that ends well
Therefore the problem seems to occur when you write the file. It can be amended in Ruby though:
unescaped = 'PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48cmV2aWV3LWNhc2UgY3JlYXRl\r\nZGF0ZT0iMTMvTWFyLzIwMTQgMDk6MDQ6NTEiIHN5c3RlbT0iVHJhZmlndXJhX1RlbXBsYXRlX01h\r\nbmFnZW1lbnRfdjUuMSIgYmF0Y2hpZD0iMCIgdHJhbnNhY3Rpb25ubz0iMSIgYmF0Y2huYW1lPSJH'
Base64.decode64(unescaped)
#=> still garbled
escaped = unescaped.gsub('\\r', "\r").gsub('\\n', "\n")
Base64.decode64(escaped)
#=> now you should be fine again
but of course the correct solution would be to store the file correctly.
Given your current file the following should work:
require "base64"
content = File.read('Input.txt')
content.gsub!('\\r', "\r")
content.gsub!('\\n', "\n")
decode_base64_content = Base64.decode64(content)
File.open("Output.txt", "wb") do |f|
f.write(decode_base64_content)
end
Please do post some output if it does not.
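One quick way to confirm this diagnosis before patching anything is to check whether the captured file contains the four-character literal instead of real line breaks (a sketch, using the same Input.txt):
content = File.read('Input.txt')
if content.include?('\r\n')   # single quotes: matches the literal backslash sequence
  puts 'Input.txt contains literal \r\n escapes, not real CRLF line endings'
end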

Zlib and utf-8 in ruby

I'm trying to use zlib to compress some lengthy strings, some of which may contain unicode characters. At the moment I'm doing this in ruby, but I think this would apply across any language, really. Here's the super basic implementation:
require 'zlib'
example = "“hello world”" # note the unicode quotes
compressed = Zlib.deflate(example)
puts Zlib.inflate(compressed)
The issue here is that the text comes out as this:
\xE2\x80\x9Chello world\xE2\x80\x9D
...no unicode quotes, just weird unrecognizable characters. Does anyone know of a way that Zlib can be used while retaining unicode characters? Bonus points for an answer in ruby : )
Zlib operates on raw bytes, so it produces ASCII-8BIT as the encoding upon inflating. The bytes themselves are intact; to fix it, just force the original encoding back:
require 'zlib'
input = "“hello world”"
compressed = Zlib.deflate(input)
output = Zlib.inflate(compressed).force_encoding(input.encoding)
Or set the encoding manually:
output = Zlib.inflate(compressed).force_encoding('utf-8')
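As a quick sanity check, a round trip shows the bytes survive compression intact and only the encoding tag needs restoring:
require 'zlib'

input  = "“hello world”"
output = Zlib.inflate(Zlib.deflate(input)).force_encoding(Encoding::UTF_8)

puts output == input    # => true: identical bytes, correct tag restored
puts output.encoding    # => UTF-8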

Incompatible character encodings error

I'm trying to run a ruby script which generates translated HTML files from a JSON file. However I get this error:
incompatible character encodings: UTF-8 and CP850
The Ruby code in question:
translation_hash = JSON.parse(File.read('translation_master.json').force_encoding("ISO-8859-1").encode("utf-8", replace: nil))
It seems to get stuck on this line of the JSON:
"3": "Klassisch geschnittene Anzüge",
because there is a special character "ü". The JSON file's encoding is ANSI. Any ideas what could be wrong?
Try adding # encoding: UTF-8 to the top of the ruby file. This tells ruby to interpret the source file itself (and the string literals in it) with that encoding. If this doesn't work, try to find out what encoding the text actually uses and change the line accordingly.
IMHO your code should work if the encoding of the json file is "ISO-8859-1" and if it is a valid json file.
So you should first verify that "ISO-8859-1" is the correct encoding and, while you are at it, that the file is valid JSON at all:
# read the file with the encoding you assume is correct
json_or_not = File.read('translation_master.json').force_encoding("ISO-8859-1")
# print the result and check whether anything looks obscure
puts json_or_not
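For what it's worth, here is a hedged sketch of the whole fix, assuming "ANSI" really means Windows-1252 (the usual meaning on Windows) and using the questioner's file name:
require 'json'

# read as Windows-1252 and transcode to UTF-8 in one step
raw  = File.read('translation_master.json', encoding: 'Windows-1252:UTF-8')
data = JSON.parse(raw)
puts data['3']   # => "Klassisch geschnittene Anzüge"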

Encoding issue when using Nokogiri replace

I have this code:
# encoding: utf-8
require 'nokogiri'

s = "<a href='/path/to/file'>Café Verona</a>".encode('UTF-8')
puts "Original string: #{s}"

@doc = Nokogiri::HTML::DocumentFragment.parse(s)
links = @doc.css('a')
only_text = 'Café Verona'.encode('UTF-8')
puts "Replacement text: #{only_text}"
links.first.replace(only_text)
puts @doc.to_html
However, the output is this:
Original string: <a href='/path/to/file'>Café Verona</a>
Replacement text: Café Verona
CafÃ© Verona
Why does the text in @doc end up with the wrong encoding?
I tried with and without encode('UTF-8') or using Document instead of DocumentFragment, but it's the same problem.
I'm using Nokogiri v1.5.6 with Ruby 1.9.3p194.
Seems that if you pass a Nokogiri text object it does the trick ;)
links.first.replace Nokogiri::XML::Text.new(only_text, @doc)
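For reference, a self-contained sketch of that fix (same idea, but with a full Document so the text node has a document to belong to):
# encoding: utf-8
require 'nokogiri'

doc  = Nokogiri::HTML::Document.parse("<html><body><a href='/path/to/file'>Café Verona</a></body></html>")
link = doc.css('a').first
link.replace(Nokogiri::XML::Text.new('Café Verona', doc))
puts doc.at('body').inner_html   # => Café Verona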
I can't duplicate the problem, but I have two different things to try:
Instead of using:
s = "<a href='/path/to/file'>Café Verona</a>".encode('UTF-8')
Try:
s = "<a href='/path/to/file'>Café Verona</a>"
Your string is already UTF-8 encoded, because of your statement # encoding: utf-8. That's why you put that in the script, to tell Ruby the literal string is in UTF-8. It's possible that you're double-encoding it, though I don't think Ruby will -- it should silently ignore the second attempt because it's already UTF-8.
Another thing I wonder about is, output like:
CafÃ© Verona
is an indicator that the language/character-set encoding of your system and your terminal aren't right. Trying to output UTF-8 strings on a system set to something else can get mismatches in the terminal and/or browser. Windows systems are typically Win-1252, ISO-8859-1 or something similar, not UTF-8. On my Mac OS system I have these environment variables set:
LANG=en_US.UTF-8
LC_ALL=en_US.UTF-8
"Open iso-8859-1 encoded html with nokogiri messes up accents" might be useful too.

Why do I get an "Invalid Byte Sequence in UTF-8" error reading a text file?

I'm writing a Ruby script to process a large text file, and keep getting an odd encoding error.
Here's the situation:
input_data = File.new(in_path, 'r').read
p input_data.encoding.name # UTF-8
break_char = "\r".encode("UTF-8")
p break_char # "\r"
p break_char.encoding.name # "UTF-8"
input_data.split(",".encode("UTF-8"))
p Encoding.compatible?(input_data, break_char) #=> #<Encoding:UTF-8>
This produces the error: in `split': invalid byte sequence in UTF-8 (ArgumentError)
I read http://blog.grayproductions.net/articles/ruby_19s_string and looked at other solutions to apparently the same problem, but still can't work out why it's happening when I believe I am controlling the encodings.
I'm on OSX working with ruby 1.9.2
Obviously your input file is not UTF-8 (or at least, not entirely). If you don't care about non-ASCII characters, you can simply assume your file is ascii-8bit encoded. By the way, your separator (break_char) is not causing problems, as the comma is encoded the same way in UTF-8 as in ASCII.
fname = 'test.in'

# create an example file and fill it with an invalid UTF-8 sequence
File.open(fname, 'w') do |f|
  f.write "\xc3\x28"
end

# then try to read and parse it
s = File.open(fname) do |f|                    # file opened as UTF-8
#s = File.open(fname, 'r:ascii-8bit') do |f|   # file opened as ascii-8bit
  f.read
end
p s.split ','
I fail to get an error here on Linux even when the input file is not UTF-8. (I'm using Ruby 1.9.2, as well.)
Logically, either this problem is linked with OS X, or it's something to do with your input data. Does it happen regardless of the data in the input file?
(I realise that this is not a proper answer, but I lack the rep to add a comment. And since no-one has responded yet, I thought it better than nothing...)
You read the file using the default encoding your system provides, so ruby tags the string as UTF-8, which doesn't mean the data really is UTF-8. Try file <input file> to guess what kind of encoding is in there, then tell ruby that's what it is (unclean: force_encoding(<encoding>); clean: tell the File object what encoding it is, I don't know how to do that) and then use encode!("utf8") to convert it to UTF-8.
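For the record, the "clean" way that answer alludes to does exist: declare the external encoding in the mode string when opening the file (a sketch, assuming the input really is Latin-1):
# open with an explicit external encoding, then convert to UTF-8
input_data = File.open(in_path, 'r:ISO-8859-1') {|f| f.read}
input_data.encode!('UTF-8')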
Here are 2 common situations and how to deal with them:
Situation 1
You have an UTF-8 input-file with possibly a few invalid bytes
Remove the invalid bytes:
test = "Partly valid\xE4 UTF-8 encoding: äöüß"
File.open( 'input_file', 'w' ) {|f| f.write(test)}
str = File.read( 'input_file' )
str.scrub('')
=> "Partly valid UTF-8 encoding: äöüß"
Situation 2
You have an input-file that could be in either UTF-8 or ISO-8859-1 encoding
Check which encoding it is and convert to UTF-8 (if necessary):
test = "String in ISO-8859-1 encoding: \xE4\xF6\xFC\xDF"
File.open( 'input_file', 'w' ) {|f| f.write(test)}
str = File.read( 'input_file' )
unless str.valid_encoding?
str.encode!( 'UTF-8', 'ISO-8859-1', invalid: :replace )
end #unless
=> "String in ISO-8859-1 encoding: äöüß"
Notes
The above code snippets assume that Ruby encodes all your strings in UTF-8 by default. Even though this is almost always the case, you can make sure of it by starting your scripts with # encoding: UTF-8.
Invalid byte sequences in most multi-byte encodings like UTF-8 can be detected programmatically (in Ruby, see: #valid_encoding?). However, it is NOT possible (or at least extremely hard) to programmatically detect invalidity in single-byte encodings like ISO-8859-1, because every byte sequence is a valid one. Thus the above code snippet does not work the other way around, i.e. for detecting whether a String is valid ISO-8859-1.
Even though UTF-8 has become increasingly popular as the default encoding in computer systems, ISO-8859-1 and other Latin-1 flavors are still very popular in Western countries, especially in North America. Be aware that there are several single-byte encodings out there that are very similar to, but vary slightly from, ISO-8859-1. Examples: CP1252 (a.k.a. Windows-1252), ISO-8859-15.
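The point about single-byte encodings is easy to demonstrate: in an encoding like ISO-8859-1, every possible byte maps to some character, so no byte sequence is ever invalid:
junk = "\x01\xFE\xE4".force_encoding('ISO-8859-1')
junk.valid_encoding?   #=> true, no matter what the bytes are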
Please try this:
input_data = File.open("path/your_file.pdf", "rb") {|io| io.read} # "rb" opens in binary mode, so the bytes are read as ASCII-8BIT
Thanks
