Converting gsub() pattern from ruby 1.8 to 2.0 - ruby

I have a Ruby program that I'm trying to upgrade from Ruby 1.8 to Ruby 2.0.0-p247.
This works just fine in 1.8.7:
begin
  ARGF.each do |line|
    # a collection of peculiarities, appended as they appear in data
    line.gsub!("\x92", "'")
    line.gsub!("\x96", "-")
    puts line
  end
rescue => e
  $stderr << "exception on line #{$.}:\n"
  $stderr << "#{e.message}:\n"
  $stderr << line
end
But under Ruby 2.0, this raises an exception when it encounters a 0x96 or 0x92 byte in a data file that otherwise contains what appears to be ASCII:
invalid byte sequence in UTF-8
I have tried all manner of things: double backslashes, using a regex object instead of the string, force_encoding(), etc. and am stumped.
Can anybody fill in the missing puzzle piece for me?
Thanks.
=============== additions: 2013-09-25 ============
Changing \x92 to \u2019 did not fix the problem.
The program does not error until it actually hits a 92 or 96 in the input file, so I'm confused as to how the character pattern in the string is the problem when there are hundreds of thousands of lines of input data that are matched against the patterns without incident.

It's not the regex that's throwing the exception, it's the Ruby compiler. \x92 and \x96 are how you would represent ’ and – in the windows-1252 encoding, but Ruby expects the string to be UTF-8 encoded. You need to get out of the habit of putting raw byte values like \x92 in your string literals. Non-ASCII characters should be specified by Unicode escape sequences (in this case, \u2019 and \u2013).
It's a Unicode world now, stop thinking of text in terms of bytes and think in terms of characters instead.
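A minimal sketch that combines both points (the sample text is invented; the fix assumes the input really is Windows-1252, per the byte values discussed above): relabel the raw bytes first, transcode to UTF-8, then substitute using Unicode escapes.

```ruby
# Hedged sketch: "\x92"/"\x96" are ’ and – in Windows-1252, which we
# assume is the data's real encoding; the sample line is made up.
line = "It\x92s a test \x96 really".dup

line.force_encoding("Windows-1252")   # relabel: tell Ruby what the bytes mean
line = line.encode("UTF-8")           # transcode to valid UTF-8

line.gsub!("\u2019", "'")             # ’ -> straight apostrophe
line.gsub!("\u2013", "-")             # – -> plain hyphen
# line == "It's a test - really"
```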

Related

Ruby incompatible character encodings

I am currently trying to write a script that iterates over an input file and checks data on a website. If it finds the new data, it prints to the terminal that it passes; if it doesn't, it tells me it fails. And vice versa for deleted data. It was working fine until an input file I was given contained the "™" character. When Ruby gets to that line, it spits out an error:
PDAPWeb.rb:73:in `include?': incompatible character encodings: UTF-8 and IBM437 (Encoding::CompatibilityError)
The offending line is a simple check to see if the text exists on the page.
if browser.text.include?(program_name)
Where the program_name variable is a parsed piece of information from the input file. In this instance, the program_name contains the 'TM' character mentioned before.
After some research I found that adding the line # encoding: utf-8 to the beginning of my script could help, but so far has not proven useful.
I added this to my program_name variable to see if it would help (and it allowed my script to run without errors), but now it is not properly finding the TM character when it should be.
program_name = record[2].gsub("\n", '').force_encoding("utf-8").encode("IBM437", replace: nil)
This seemed to convert the TM character to this: Γäó
I thought maybe I had the IBM437 and UTF-8 parts reversed, so I tried the opposite:
program_name = record[2].gsub("\n", '').force_encoding("IBM437").encode("utf-8", replace: nil)
and am now receiving this error when attempting to run the script
PDAPWeb.rb:48:in `encode': U+2122 from UTF-8 to IBM437 (Encoding::UndefinedConversionError)
I am using ruby 1.9.3p392 (2013-02-22) and I'm not sure if I should upgrade as this is the standard version installed in my company.
Is my encoding incorrect and causing it to convert the TM character with errors?
Here’s what it looks like is going on. Your input file contains a ™ character, and it is in UTF-8 encoding. However when you read it, since you don’t specify the encoding, Ruby assumes it is in your system’s default encoding of IBM437 (you must be on Windows).
This is basically the same as this:
>> input = "™"
=> "™"
>> input.encoding
=> #<Encoding:UTF-8>
>> input.force_encoding 'ibm437'
=> "\xE2\x84\xA2"
Note that force_encoding doesn’t change the actual string, just the label associated with it. This is the same outcome as in your case, only you arrive here via a different route (by reading the file).
The web page also has a ™ symbol, and is also encoded as UTF-8, but in this case Ruby has the encoding correct (Watir probably uses the headers from the page):
>> web_page = '™'
=> "™"
>> web_page.encoding
=> #<Encoding:UTF-8>
Now when you try to compare these two strings you get the compatibility error, because they have different encodings:
>> web_page.include? input
Encoding::CompatibilityError: incompatible character encodings: UTF-8 and IBM437
from (irb):11:in `include?'
from (irb):11
from /Users/matt/.rvm/rubies/ruby-2.2.1/bin/irb:11:in `<main>'
If either of the two strings contained only ASCII characters (i.e. code points less than 128) then this comparison would have worked. UTF-8 and IBM437 are both supersets of ASCII, and are only incompatible if both strings contain characters outside of the ASCII range. This is why you only started seeing this behaviour when the input file had a ™.
The fix is to inform Ruby what the actual encoding of the input file is. You can do this with the already loaded string:
>> input.force_encoding 'utf-8'
=> "™"
You can also do this when reading the file, e.g. (there are a few ways of reading files; they should all allow you to explicitly specify the encoding):
input = File.read("input_file.txt", :encoding => "utf-8")
# now input will be in the correct encoding
Note in both of these the string isn’t being changed, it still contains the same bytes, but Ruby now knows its correct encoding.
Now the comparison should work okay:
>> web_page.include? input
=> true
There is no need to encode the string. Here’s what happens if you do. First if you correct the encoding to UTF-8 then encode to IBM437:
>> input.force_encoding("utf-8").encode("IBM437", replace: nil)
Encoding::UndefinedConversionError: U+2122 from UTF-8 to IBM437
from (irb):16:in `encode'
from (irb):16
from /Users/matt/.rvm/rubies/ruby-2.2.1/bin/irb:11:in `<main>'
IBM437 doesn’t include the ™ character, so you can’t encode a string containing it to this encoding without losing data. By default Ruby raises an exception when this happens. You can force the encoding by using the :undef option, but the symbol is lost:
>> input.force_encoding("utf-8").encode("IBM437", :undef => :replace)
=> "?"
If you go the other way, first using force_encoding to IBM437 then encoding to UTF-8 you get the string Γäó:
>> input.force_encoding("IBM437").encode("utf-8", replace: nil)
=> "Γäó"
The string is already in IBM437 encoding as far as Ruby is concerned, so force_encoding doesn’t do anything. The UTF-8 representation of ™ is the three bytes 0xe2 0x84 0xa2, and when interpreted as IBM437 these bytes correspond to the three characters seen here which are then converted into their UTF-8 representations.
(These two outcomes are the other way round from what you describe in the question, hence my comment above. I’m assuming that this is just a copy-and-paste error.)

CSV writes Ñ into its code form instead of its actual form

I have a CSV file. I checked its encoding using this:
File.open('C:\path\to\file\myfile.txt').read.encoding
and it returned the encoding as:
=> #<Encoding:IBM437>
I'm reading this CSV per row -- stripping spaces and doing other stuff. After "cleansing" it, I push it to a new file. I'm doing it like this:
CSV.foreach(file_read, encoding: "IBM437:UTF-8") do |r|
  # some code
  CSV.open(file_appended, "a", col_sep: "|") do |csv|
    csv << r
  end
end
Now my problem is that inside the CSV I'm reading there's a word with an accented character (Ñ to be exact). This character is being appended to the new file as
\u2564
It's a problem, considering that the accented character is a vital part of that word, and I want that character to appear in the new file as-is.
Am I missing something? I tried the following source:destination encodings but to no avail:
ISO-8859-1:UTF-8 (and vice versa)
ISO-8859-1:Windows-1252 (and vice versa)
Am I missing something?
Here is my ruby version, just if you'd need to know:
ruby 1.9.3p392 (2013-02-22) [i386-mingw32]
Thanks in advance!
The line below solved my problem:
Encoding.default_external = "iso-8859-1"
It tells Ruby that the file being read is encoded in ISO-8859-1, and therefore correctly interprets the Ñ character.
Credit goes to Darshan Computing's answer here. Just look for Update #2.
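An equivalent, non-global fix (my sketch; the filename and sample row are invented) is to pass the source encoding per call, the same way the question already does for IBM437:

```ruby
require 'csv'

# Write a sample ISO-8859-1 file for illustration: "\xD1" is Ñ in that charset.
File.open("myfile.txt", "wb") { |f| f.write("ESPA\xD1A|1\n") }

rows = []
# "ISO-8859-1:UTF-8" reads ISO-8859-1 bytes and transcodes each field
# to UTF-8 before parsing, so the Ñ survives intact.
CSV.foreach("myfile.txt", col_sep: "|", encoding: "ISO-8859-1:UTF-8") do |row|
  rows << row
end
# rows.first.first == "ESPAÑA"
```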

Why do I get ArgumentError - invalid byte sequence in UTF-8?

While trying to print Duplicaci¾n out of a CSV file, I get the following error:
ArgumentError - invalid byte sequence in UTF-8
I'm using Ruby 1.9.3-p362 and opening the file using:
CSV.foreach(fpath, headers: true) do |row|
How can I skip an invalid character without using iconv or str.encode(undef: :replace, invalid: :replace, replace: '')?
I tried answers from the following questions, but nothing worked:
ruby 1.9: invalid byte sequence in UTF-8
Ruby Invalid Byte Sequence in UTF-8
ruby 1.9: invalid byte sequence in UTF-8
This is from the CSV.open documentation:
You must provide a mode with an embedded Encoding designator unless your data is in Encoding::default_external(). CSV will check the Encoding of the underlying IO object (set by the mode you pass) to determine how to parse the data. You may provide a second Encoding to have the data transcoded as it is read just as you can with a normal call to IO::open(). For example, "rb:UTF-32BE:UTF-8" would read UTF-32BE data from the file but transcode it to UTF-8 before CSV parses it.
That applies to any method in CSV that opens a file.
Also start reading in the documentation at the part beginning with:
CSV and Character Encodings (M17n or Multilingualization)
Ruby is expecting UTF-8 but is seeing characters that don't fit. I'd suspect WIN-1252 or ISO-8859-1 or a variant.
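A sketch of that advice applied to the question's CSV.foreach call, under the answer's guess that the data is really Windows-1252 (the filename and sample row are invented; "\xF3" is ó in Windows-1252):

```ruby
require 'csv'

# Fake the problem file: Windows-1252 bytes that are invalid as UTF-8.
File.open("data.csv", "wb") { |f| f.write("name\nDuplicaci\xF3n\n") }

names = []
# "Windows-1252:UTF-8" reads Windows-1252 bytes and transcodes to UTF-8
# before CSV parses them, so no "invalid byte sequence in UTF-8" is raised.
CSV.foreach("data.csv", headers: true, encoding: "Windows-1252:UTF-8") do |row|
  names << row["name"]
end
# names == ["Duplicación"]
```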

ruby unicode escapes as command line arguments

It looks like this question has been asked by a python dev (Allowing input of Unicode escapes as command line arguments), which I think partially relates, but it doesn't fully give me a solution for my immediate problem in Ruby. I'm curious if there is a way to take escaped unicode sequences as command line arguments, assign to a variable, then have the escaped unicode be processed and displayed as normal unicode after the script runs. Basically, I want to be able to choose a unicode number, then have Ruby stick that in a filename and have the actual unicode character displayed.
Here are a few things I've noticed that cause problems:
unicode = ARGV[0] #command line argument is \u263a
puts unicode
puts unicode.inspect
=> u263a
=> "u263a"
The backslash needed to have the string be treated as a unicode escape gets stripped.
Then, if we try adding another "\" to escape it,
unicode = ARGV[0] #command line argument is \\u263a
puts unicode
puts unicode.inspect
=> \u263a
=> "\\u263a"
but it still won't be processed properly.
Here's some more relevant code where I'm actually trying to make this happen:
unicode = ARGV[0]
filetype = ARGV[1]
path = unicode + "." + filetype
File.new(path, "w")
It seems like this should be pretty simple, but I've searched and searched and cannot find a solution. I should add, I do know that supplying the hard-coded escaped unicode in a string works just fine, like File.new("\u263a.#{filetype}", "w"), but getting it from an argument/variable is what I'm having an issue with. I'm using Ruby 1.9.2.
To unescape the unicode escaped command line argument and create a new file with the user supplied unicode string in the filename, I used #mu is too short's method of using pack and unpack, like so:
filetype = ARGV[1]
unicode = ARGV[0].gsub(/\\u([\da-fA-F]{4})/) {|m| [$1].pack("H*").unpack("n*").pack("U*")}
path = unicode + "." + filetype
File.new(path, "w")
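An equivalent one-step conversion (my variant, not from the thread) treats the four hex digits as a code point directly, skipping the intermediate byte round-trip:

```ruby
# $1 holds the four hex digits; .hex parses them as an integer code point
# and pack("U*") emits it as a UTF-8 character -- the same result as the
# pack("H*")/unpack("n*")/pack("U*") chain above.
unicode = '\u263a'.gsub(/\\u([\da-fA-F]{4})/) { [$1.hex].pack("U*") }
# unicode == "☺"
```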

`gsub': incompatible character encodings: UTF-8 and IBM437

I try to use search, google but with no luck.
OS: Windows XP
Ruby version 1.9.3p0
Error:
`gsub': incompatible character encodings: UTF-8 and IBM437
Code:
require 'rubygems'
require 'hpricot'
require 'net/http'
source = Net::HTTP.get('host', '/' + ARGV[0] + '.asp')
doc = Hpricot(source)
doc.search("p.MsoNormal/a").each do |a|
  puts a.to_plain_text
end
The program outputs a few strings, but when the text is ”NOŻYCE” I get the error above.
Could somebody help?
You could try converting your HTML to UTF-8 since it appears the original is in vintage-retro DOS format:
source.encode!('UTF-8')
That should flip it from 8-bit ASCII to UTF-8 as expected by the Hpricot parser.
The inner encoding of the source variable is UTF-8 but that is not what you want.
As tadman wrote, you must first tell Ruby that the actual characters in the string are in the IBM437 encoding. Then you can convert that string to your favourite encoding, but only if such a conversion is possible.
source.force_encoding('IBM437').encode('UTF-8')
In your case, you cannot convert your string to ISO-8859-2 because not all IBM437 characters can be converted to that charset. Sticking to UTF-8 is probably your best option.
Anyway, are you sure that the file is actually transmitted in IBM437? Maybe it is stored as such on the HTTP server but sent over the wire with another encoding. Or it may not even be exactly IBM437; it may be CP852, also called MS-DOS Latin 2 (different from ISO Latin 2).
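The relabel-then-transcode step can be sketched in isolation (the sample bytes are invented; 0x94 happens to be ö in IBM437):

```ruby
# Pretend these raw bytes came off the wire mislabeled as UTF-8.
source = "K\x94ln".dup

source.force_encoding('IBM437')   # relabel: tell Ruby what the bytes really are
utf8 = source.encode('UTF-8')     # transcode: now a valid UTF-8 string
# utf8 == "Köln"
```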
