Problems with Ruby encoding in Windows - ruby

I wrote a simple script that reads email from MS Outlook, using 'win32ole', and then saves the subjects to a CSV file. Everything works except the encoding. When I open my CSV file, words such as "André" are printed as "Andr\x82". I want my output to match my input.
# encoding: CP850
require 'win32ole'
require 'csv'

Encoding.default_external = 'CP850'

ol = WIN32OLE.new('Outlook.Application')
inbox = ol.GetNamespace("MAPI").GetDefaultFolder(6)

email_subjects = []
inbox.Items.each do |m|
  email_subjects << m.Subject
end

CSV.open('MyFile.csv', "w") do |csv|
  csv << email_subjects
end
O.S.: Windows 7 64-bit
Encoding.default_external -> CP850
Language -> PT
ruby -v -> 1.9.2p290 (2011-07-09) [i386-mingw32]
It seems like a simple problem related to the external Windows encoding, and I tried many solutions posted here, but I really can't solve this.

1) Your file name is missing a closing quote.
2) The default open mode for CSV.open() is 'rb', so you can't possibly write to a file with the code you posted.
3) You didn't post the encoding of the text you are trying to write to the file.
4) You didn't post the encoding that you want the data to be written in.
5)
When I open my CSV file the words such as "é" are printed as "\x82"
Tell your viewing device not to do that.

The magic comment only sets the encoding the current .rb source file should be read as; it does not set default_external. Try setting RUBYOPT=-E utf-8, opening your file with CSV.open('MyFile.csv', encoding: 'UTF-8'), or setting Encoding.default_external at the top of your file (discouraged).
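As a sketch of that last point (the byte string below is an assumption standing in for what WIN32OLE actually returns), you can tag the subject strings with the encoding they really arrived in and transcode them to UTF-8 while writing:

```ruby
require 'csv'

# Hypothetical subject as delivered in CP850: byte 0x82 is "é" in that code page.
subjects = ["Andr\x82".force_encoding('CP850')]

# Transcode to UTF-8 on the way out instead of relying on default_external.
CSV.open('MyFile.csv', 'w', encoding: 'UTF-8') do |csv|
  csv << subjects.map { |s| s.encode('UTF-8') }
end
```

Opened in any UTF-8-aware viewer, the file then shows "André" rather than the raw CP850 byte.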

Related

Why is Ruby failing to convert CP-1252 to UTF-8?

I have CSV files saved from Excel, which are CP-1252/Windows-1252. I tried the following, but the output still comes out corrupted. Why?
csv_text = File.read(arg[:file], encoding: 'cp1252').encode('utf-8')
# csv_text = File.read(arg[:file], encoding: 'cp1252')
csv = CSV.parse csv_text, :headers => true
csv.each do |row|
  # create model
  p model
end
The result
>rake import:csv["../file.csv"] | grep Brien
... name: "Oâ?TBrien ...
However it works in the console
> "O\x92Brien".force_encoding("cp1252").encode("utf-8")
=> "O'Brien"
I can open the CSV file in Notepad++, select Encoding > Character Sets > Western European > Windows-1252, see the correct characters, then Encoding > Convert to UTF-8. However, there are many files and I want Ruby to handle this.
Similar: How to change the encoding during CSV parsing in Rails. But this doesn't explain why this is failing.
Ruby 2.4, Reference: https://ruby-doc.org/core-2.4.3/IO.html#method-c-read
Wow, it was caused by the shitty grep in DevKit.
>rake import:csv["../file.csv"]
... name: "O'Brien ...
>where grep
C:\DevKit2\bin\grep.exe
I also did not need the .encode('utf-8').
Let that be a lesson kids. Never take anything for granted. Trust no one!
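For the record, the one-liner from the question does work when the bytes on disk really are CP-1252. Here is a self-contained check (the file name and sample byte are invented for illustration):

```ruby
# 0x92 is the right single quotation mark (U+2019) in Windows-1252.
File.binwrite('sample.csv', "O\x92Brien")

# Declare the external encoding while reading, then transcode to UTF-8.
text = File.read('sample.csv', encoding: 'cp1252').encode('utf-8')
# text now contains a real U+2019 apostrophe, not a mangled byte.
```

So when output still looks corrupted after this, suspect whatever is displaying the string, not the conversion itself, exactly as happened with the DevKit grep above.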

Why does a file written out encoded as UTF-8 end up being ISO-8859-1 instead?

I am reading an ISO-8859-1 encoded text file, transcoding it to UTF-8, and writing out a different file as UTF-8. However, when I inspect the output file, it is still encoded as ISO-8859-1! What am I doing wrong?
Here is my ruby class:
module EF
  class Transcoder
    # app_path ......... Path to the java console application (InferEncoding.jar) that infers the character encoding.
    # target_encoding .. Transcodes the text loaded from the file into this encoding.
    attr_accessor :app_path, :target_encoding

    def initialize(consoleAppPath)
      @app_path = consoleAppPath
      @target_encoding = "UTF-8"
    end

    def detect_encoding(filename)
      encoding = `java -jar #{@app_path} \"#{filename}\"`
      encoding.strip
    end

    def transcode(filename)
      original_encoding = detect_encoding(filename)
      content = File.open(filename, "r:#{original_encoding}", &:read)
      content = content.force_encoding(original_encoding)
      content.encode!(@target_encoding, :invalid => :replace)
    end

    def transcode_file(input_filename, output_filename)
      content = transcode(input_filename)
      File.open(output_filename, "w:#{@target_encoding}") do |f|
        f.write content
      end
    end
  end
end
By way of explanation, @app_path is the path to a Java jar file. This console application reads a text file and tells me what its current encoding is (printing it to stdout). It uses the ubiquitous ICU library. (I tried using the Ruby gem charlock-holmes, but I cannot get it to compile on Windows for MINGW. The Java bindings to ICU are good, so I wrote a Java application instead.)
To call the above class, I do this in irb:
require_relative 'transcoder'
tc = EF::Transcoder.new("C:/Users/paul.chernoch/Documents/java/InferEncoding.jar")
tc.detect_encoding "C:/temp/infer-encoding-test/ISO-8859-1.txt"
tc.transcode_file("C:/temp/infer-encoding-test/ISO-8859-1.txt", "C:/temp/infer-encoding-test/output-utf8.txt")
tc.detect_encoding "C:/temp/infer-encoding-test/output-utf8.txt"
The file ISO-8859-1.txt is encoded like it sounds. I used Notepad++ to write the file using that encoding.
I used my Java application to test the file. It concurs that it is in ISO-8859-1 format.
I also created a file in Notepad++ and saved it as UTF-8. I then verified using my java app that it was in UTF-8.
After I perform the above in irb, I used my java app to test the output file and it says the format is still ISO-8859-1.
What am I doing wrong? If you hard-code the method detect_encoding to return "ISO-8859-1", you do not need my java application to replicate the part that reads the file.
Any solution must NOT use charlock-holmes.
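Following the suggestion above, the read-transcode-write pipeline can be reproduced without the Java detector by hard-coding the source encoding. This sketch (file names invented) shows that the bytes written out really are UTF-8:

```ruby
# Create an ISO-8859-1 input file: byte 0xE9 is "é" in Latin-1.
File.binwrite('latin1.txt', "caf\xE9")

content = File.open('latin1.txt', 'r:ISO-8859-1', &:read)
utf8    = content.encode('UTF-8', invalid: :replace)

File.open('utf8.txt', 'w:UTF-8') { |f| f.write utf8 }

# On disk, "é" is now the two-byte UTF-8 sequence 0xC3 0xA9.
File.binread('utf8.txt').bytes.last(2)  # => [195, 169]
```

If a detector still reports ISO-8859-1 for such a file, keep in mind that very short samples are genuinely ambiguous to byte-pattern heuristics, so the detector itself may be the thing worth checking.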

Ruby character encoding for ShreeLipi Indian Devanagari font

I am new to Ruby and I'm trying to write a script (Ruby 1.9.3, on Windows XP) which will automate text extraction from an InDesign document using the WIN32OLE library.
The InDesign document has text in ShreeLipi font (an Indian Devanagari script). My Ruby script is:
require 'win32ole'
app = WIN32OLE.new('InDesign.Application')
doc = app.activeDocument
text_frame = doc.textFrames(1)
text = text_frame.contents #=> "emhy ‘hmamOm§Mo ñ‘maH$ emhy {‘b‘ܶo C^mam"
puts text.encoding.name #=> "IBM437"
file = File.open('D:/try.txt','w')
file.puts text
file.close
When I open this same file to view the text using Notepad it shows:
"emhy `hmamOmMo ¤`maH$ emhy {`b`šo C^mam"
I can't understand why this is happening. Please help me correct it.
I tried to resolve it using Windows-1252 encoding and also with ISO-8859-1 but could not find a solution.
You aren't preserving your bytes when you write the file. Instead of doing:
file = File.open('D:/try.txt','w')
use:
file = File.open('D:/try.txt','wb')
The b means to write as binary, which, in plain English, means to do no line-end conversions.
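A quick way to convince yourself (the file name and sample text are arbitrary): in binary mode, the bytes you write are exactly the bytes stored.

```ruby
data = "line1\nline2\n"  # sample content with embedded newlines

File.open('try.txt', 'wb') { |f| f.write data }

# binread performs no conversions, so this is a byte-for-byte comparison.
File.binread('try.txt') == data  # => true; with mode "w" on Windows,
                                 # each "\n" would be stored as "\r\n"
```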

Ruby file writes in windows returning wrong file sizes?

I'm still learning Ruby, so I'm sure I'm doing something wrong here, but using Ruby 1.9.3 on Windows, I'm having a problem writing a file of random ASCII garbage at a specific size. I need to write these files for a test on an application I'm QAing. On Mac and on *nix, the file size is written correctly every time. But on Windows, it generates files of random size, generally between 1,024 and 1,031 bytes.
I suspect the problem is that one of the characters rstr generates counts as two characters, but... it seems like this shouldn't happen.
Here is my code:
num = 10
size = 1
for i in 1..num
  fname = "f#{i}.txt"
  f = File.new(fname, "w")
  for k in 1..size
    rstr = "#{(1..1024).map{rand(255).chr}.join}"
    f.write rstr
    print " #{rstr.size} " # this returns 1024 every time.
    rstr = ""
  end
  f.close
end
Also tried:
opts = {}
opts[:encoding] = "UTF-8"
fname = "f#{i}.txt"
f = File.new(fname, "w", opts)
By default, files opened on Windows are opened in text mode, meaning that line endings and other details are adjusted.
If you want the files written byte-for-byte exactly as you intend, you need to open them in binary mode:
File.open("foo", "wb") do |f|
  # ...
end
The b is ignored on POSIX operating systems, so your scripts are now cross-platform compatible.
Note: I used block syntax to manage the file so it is properly closed and the file handle disposed of once the block finishes. You no longer need to worry about closing the file ;-)
Hope this helps.
There is no character 255 in ASCII; rand(255) yields values from 0 to 254.
If you try to print 255.chr, you'll get a multibyte character.
As Windows does not default to UTF-8, you'll get incorrect values, hence the problem you're facing!
Try adding #coding: utf-8 at the top of your file. It should get things working.
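Combining the points above, a sketch that hits an exact size every time (file name invented): generate the random data as a binary string and write it in binary mode, so no byte is ever subject to line-end or encoding conversion.

```ruby
# 1024 random bytes covering the full 0..255 range, as an ASCII-8BIT string.
rstr = Random.new.bytes(1024)

File.open('f1.txt', 'wb') { |f| f.write rstr }

File.size('f1.txt')  # => 1024
```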

Does Ruby auto-detect a file's codepage?

If I save a text file containing the character б (U+0431), but save it using an ANSI code page, Ruby returns ord = 63. Saving the file with UTF-8 as the code page, the bytes come back as 208, 177.
Should I be specifically telling Ruby to handle the input encoded with a certain code page? If so, how do you do this?
Is that in Ruby source code, or in a file read with File.open? If it's in the Ruby source code, you can (in Ruby 1.9) add this to the top of the file:
# encoding: utf-8
Or you could specify most other encodings (like iso-8859-1).
If you are reading a file with File.open, you could do something like this:
File.open("file.txt", "r:utf-8") {|f| ... }
As with the encoding comment, you can pass in different types of encodings here too.
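To reproduce the UTF-8 case end to end (the file name is invented): write б as UTF-8 and read it back with an explicit external encoding.

```ruby
# "б" (U+0431) encodes to the two UTF-8 bytes 0xD0 0xB1.
File.binwrite('cyrillic.txt', "\u0431")

text = File.open('cyrillic.txt', 'r:utf-8', &:read)
text.bytes  # => [208, 177]
text.ord    # => 1073, the Unicode codepoint U+0431, not a single byte
```

When the encoding is declared correctly, ord reports the codepoint; the earlier ord = 63 is simply the '?' substitution character the ANSI save produced.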
