I have a small file containing the é char and a linebreak. The file is encoded in UTF-8.
When I write urlEncodedFormat(trim(content)) it yields "%C3%A9", which is correct.
Now, urlEncodedFormat accepts a second argument, described in the docs as "The character encoding in which the string is encoded."
My string is encoded as UTF-8, so if I pass 'utf-8' it should yield the same thing.
But it does not:
urlEncodedFormat(trim(content), 'utf-8') yields "%C3%83%C2%A9".
With an iso-8859-1 file it works: passing that encoding as the second argument does not change the output.
This is how I read the file :
<cffile action="read" file=#filename# variable="content"/>
So why is that? I can't figure out how encoding works in this language.
Thank you very much.
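For what it's worth, "%C3%83%C2%A9" is the classic signature of a double encoding: the UTF-8 bytes of é get misread as single-byte text (ISO-8859-1 is assumed here as the intermediate misreading) and re-encoded to UTF-8. A sketch in Ruby, since the rest of this page is Ruby, purely to illustrate the byte arithmetic:
utf8 = "é".encode("UTF-8")                                   # the bytes 0xC3 0xA9
utf8.bytes.map { |b| "%%%02X" % b }.join                     #=> "%C3%A9"
# Misread those two bytes as ISO-8859-1 ("Ã©"), then re-encode to UTF-8:
double = utf8.force_encoding("ISO-8859-1").encode("UTF-8")
double.bytes.map { |b| "%%%02X" % b }.join                   #=> "%C3%83%C2%A9"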
I have a text file which, if viewed in the SciTE editor with the encoding set to UTF-8, displays all text correctly, including capital letters with an acute accent (e.g. Á).
However, if I write a ruby script and use mystring.encode("utf-8"), it gives me this error on capital letters that carry an acute accent (e.g. Á):
encode': "\x81" to UTF-8 in conversion from Windows-1252 to UTF-8 (Encoding::UndefinedConversionError)
Is this expected behaviour? How can I encode the whole text to UTF-8 using ruby, knowing that the SciTE editor displays it successfully?
Code:
ine_file = File.open("../../_data/ine_spain_demographics.csv", 'r')
ine_towns_population_hash = Hash.new
ine_file.each do |line|
  values = line.split(";")
  town_name = values[3]
  population = values[4]
  begin
    ine_towns_population_hash[town_name.encode("utf-8")] = population
  rescue
    puts "problematic string: " + town_name
  end
end
You say that ine_file.external_encoding says Windows-1252, so the file is being opened as a Windows-1252 encoded file. Then you say town_name.encode("utf-8") in an attempt to encode the string as UTF-8, and Ruby complains. But the file is actually UTF-8; reading UTF-8 bytes as Windows-1252 and then trying to recode those bytes as UTF-8 isn't going to work.
You need to open the file in UTF-8 mode:
File.open("../../_data/ine_spain_demographics.csv", 'r:UTF-8')
and stop trying to change the encoding of town_name; just use town_name as-is.
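For concreteness, here is a minimal sketch of the corrected loop, assuming the file really is UTF-8 and keeps the semicolon-separated layout from the question:
ine_towns_population_hash = Hash.new
File.open("../../_data/ine_spain_demographics.csv", 'r:UTF-8') do |ine_file|
  ine_file.each do |line|
    values = line.split(";")
    # Strings read in UTF-8 mode already carry the right encoding tag,
    # so no encode step is needed.
    ine_towns_population_hash[values[3]] = values[4]
  end
end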
It seems like Ruby is misinterpreting the encoding of ine_spain_demographics.csv.
Looking at the docs for encode and open, you have two options:
1) Use the :replace option of encode to tell Ruby what to substitute for characters it can't convert: town_name.encode("utf-8", invalid: :replace, undef: :replace, replace: '') (note that :replace only takes effect together with invalid: :replace and/or undef: :replace).
2) Identify the correct file encoding and specify it when opening the file: File.open("../../_data/ine_spain_demographics.csv", 'r:ISO-8859-1')
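A quick sketch of the first option, using the very byte from the question's error message (\x81 is unassigned in Windows-1252, so converting it raises Encoding::UndefinedConversionError):
str = "A\x81B".force_encoding("Windows-1252")
# str.encode("utf-8")  # would raise Encoding::UndefinedConversionError
str.encode("utf-8", invalid: :replace, undef: :replace, replace: "")  #=> "AB"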
I have an application where users can upload text-based files (xml, csv, txt) that are persisted to S3. Some of these files are pretty big. There are a variety of operations that need to be performed on the data in these files, so rather than read them from S3 and have it occasionally time out, I download the files locally, then turn the operations loose on them.
Here's the code I use to download the file from S3. Upload is the name of the AR model I use to store this information. This method is an instance method on the Upload model:
def download
  basename = File.basename(self.text_file_name.path)
  filename = Rails.root.join(basename)
  host = MyFitment::Utility.get_host_without_www(self.text_file_name.url)
  Net::HTTP.start(host) do |http|
    f = open(filename)
    begin
      http.request_get(self.text_file_name.url) do |resp|
        resp.read_body do |segment|
          f.write(segment) # Fails when non-ASCII 8-bit characters are included.
        end
      end
    ensure
      f.close()
    end
  end
  filename
end
So you can see the line above where the write fails. This code somehow thinks all files that are downloaded are encoded in ASCII 8-bit. How can I:
1) Check the encoding of a remote file like that
2) Download it and write it successfully.
Here's the error that is happening with a particular file right now:
Encoding::UndefinedConversionError: "\x95" from ASCII-8BIT to UTF-8
from /Users/me/code/myapp/app/models/upload.rb:47:in `write'
Thank you for any help you can offer!
How can I: 1) Check the encoding of a remote file like that.
You can check the Content-Type header of the response, which, if present, may look something like this:
Content-Type: text/plain; charset=utf-8
As you can see, the encoding is specified there. If there's no Content-Type header, or if the charset is not specified, or if the charset is specified incorrectly, then you can't know the encoding of the text. There are gems that can try to guess the encoding, e.g. rchardet and charlock_holmes (listed in increasing order of accuracy), but for complete accuracy you have to know the encoding before reading the text.
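As a sketch, here is how you could read the charset out of the headers with Net::HTTP (the URL is illustrative):
require 'net/http'
require 'uri'

resp = Net::HTTP.get_response(URI('http://example.com/file.txt'))
content_type = resp['Content-Type']            # e.g. "text/plain; charset=utf-8"
charset = content_type[/charset=([^\s;]+)/, 1] if content_type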
This code somehow thinks all files that are downloaded are encoded in ASCII 8-bit.
In ruby, ASCII-8BIT is equivalent to binary, which means the Net::HTTP library just gives you a string containing a series of single bytes, and it's up to you to decide how to interpret those bytes.
If you want to interpret those bytes as UTF-8, then you do that with String#force_encoding():
text = text.force_encoding("UTF-8")
You might want to do that if, for instance, you want to do some regex matching on the string and you want to match full characters (which might be multi-byte) rather than just single bytes.
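A small illustration of that difference, using é, whose UTF-8 form is the two bytes 0xC3 0xA9:
bytes = "\xC3\xA9".force_encoding("ASCII-8BIT")
bytes.match(/./)[0]   #=> "\xC3" -- the regex matches a single byte
text = bytes.force_encoding("UTF-8")
text.match(/./)[0]    #=> "é" -- now it matches the full two-byte character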
Encoding::UndefinedConversionError: "\x95" from ASCII-8BIT to UTF-8
Using String#encode('UTF-8') to convert ASCII-8BIT to UTF-8 doesn't work for bytes whose ascii codes are greater than 127:
(0..255).each do |ascii_code|
  str = ascii_code.chr("ASCII-8BIT")
  #puts str.encoding #=> ASCII-8BIT
  begin
    str.encode("UTF-8")
  rescue Encoding::UndefinedConversionError
    puts "Can't encode char with ascii code #{ascii_code} to UTF-8."
  end
end
--output:--
Can't encode char with ascii code 128 to UTF-8.
Can't encode char with ascii code 129 to UTF-8.
Can't encode char with ascii code 130 to UTF-8.
...
...
Can't encode char with ascii code 253 to UTF-8.
Can't encode char with ascii code 254 to UTF-8.
Can't encode char with ascii code 255 to UTF-8.
Ruby just reads one byte at a time from the ASCII-8BIT string and tries to convert the character in each byte to UTF-8. So, while 128 may be a legal byte in UTF-8 when it is part of a multi-byte character sequence, 128 is not a legal UTF-8 character on its own.
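For example, 0xC3 0xA9 is the legal two-byte UTF-8 sequence for é, but either byte alone is not valid UTF-8:
"\xC3\xA9".force_encoding("UTF-8").valid_encoding?  #=> true
"\xC3".force_encoding("UTF-8").valid_encoding?      #=> false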
As for writing the strings to a file, instead of this:
f = open(filename)
...if you want to output UTF-8 to the file, you would write:
f = open(filename, "w:UTF-8")
By default, ruby uses whatever the value of Encoding.default_external is to encode output to a file. The default_external encoding is pulled from your system's environment, or you can set it explicitly.
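A quick way to check what your system defaults to (a sketch; the value will vary with your locale):
puts Encoding.default_external     #=> UTF-8 on most modern systems
Encoding.default_external = 'UTF-8'   # or pin it explicitly at startup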
I've got a rather large JSON file on my Windows machine and it contains stuff like \xE9. When I JSON.parse it, it works fine.
However, when I push the code to my server running CentOS, I always get this: "\xE9" on US-ASCII (Encoding::InvalidByteSequenceError)
Here is the output of file(1) on both machines:
Windows:
λ file data.json
data.json: UTF-8 Unicode English text, with very long lines, with no line terminators
CentOS:
$ file data.json
data.json: UTF-8 Unicode English text, with very long lines, with no line terminators
Here is the error I get when trying to parse it:
$ ruby -rjson -e 'JSON.parse(File.read("data.json"))'
/usr/local/rvm/rubies/ruby-2.0.0-p353/lib/ruby/2.0.0/json/common.rb:155:in `encode': "\xC3" on US-ASCII (Encoding::InvalidByteSequenceError)
What could be causing this problem? I've tried using iconv to change the file into every possible encoding I can, but nothing seems to work.
"\xE9" is é in ISO-8859-1 (and various other ISO-8859-X encodings and Windows-1250 and ...) and is certainly not UTF-8.
You can get File.read to fix up the encoding for you by using the encoding options:
File.read('data.json',
  :external_encoding => 'iso-8859-1',
  :internal_encoding => 'utf-8'
)
That will give you a UTF-8 encoded string that you can hand to JSON.parse.
Or you could let JSON.parse deal with the encoding by using just :external_encoding to make sure the string comes off the disk with the right encoding flag:
JSON.parse(
  File.read('data.json',
    :external_encoding => 'iso-8859-1'
  )
)
You should have a close look at data.json to figure out why file(1) thinks it is UTF-8. The file might incorrectly have a BOM when it is not UTF-8 or someone might be mixing UTF-8 and Latin-1 encoded strings in one file.
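One way to take that close look is to inspect the raw bytes directly; a sketch, with illustrative expressions:
raw = File.binread('data.json')
p raw[0, 3].bytes                     # a UTF-8 BOM would show up as [239, 187, 191]
p raw.scan(/[\x80-\xFF]/n).first(10)  # the first few non-ASCII bytes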
I'm trying to run a ruby script which generates translated HTML files from a JSON file. However, I get this error:
incompatible character encodings: UTF-8 and CP850
Ruby
translation_hash = JSON.parse(File.read('translation_master.json').force_encoding("ISO-8859-1").encode("utf-8", replace: nil))
It seems to get stuck on this line of the JSON:
JSON
"3": "Klassisch geschnittene Anzüge",
because there is a special character "ü". The JSON file's encoding is ANSI. Any ideas what could be wrong?
Try adding # encoding: UTF-8 to the top of the ruby file. This tells ruby to interpret the file with a different encoding. If this doesn't work, try to find out what kind of encoding the text uses and change the line accordingly.
IMHO your code should work if the encoding of the JSON file is "ISO-8859-1" and if it is a valid JSON file.
So you should first verify that "ISO-8859-1" is the correct encoding and, while you are at it, that the file is valid JSON.
# read the file with the encoding you assume is correct
json_or_not = File.read('translation_master.json').force_encoding("ISO-8859-1")
# print the result and check whether something looks obscure
puts json_or_not
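If the output looks right, the parse itself should then work; a sketch, assuming "ISO-8859-1" really is the correct encoding:
require 'json'
translation_hash = JSON.parse(
  File.read('translation_master.json',
    :external_encoding => 'iso-8859-1',
    :internal_encoding => 'utf-8'
  )
)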
I have tried searching and Google, but with no luck.
OS: Windows XP
Ruby version: 1.9.3p0
Error:
`gsub': incompatible character encodings: UTF-8 and IBM437
Code:
require 'rubygems'
require 'hpricot'
require 'net/http'

source = Net::HTTP.get('host', '/' + ARGV[0] + '.asp')
doc = Hpricot(source)
doc.search("p.MsoNormal/a").each do |a|
  puts a.to_plain_text
end
The program outputs a few strings, but when the text is "NOŻYCE" I get the error above.
Could somebody help?
You could try converting your HTML to UTF-8 since it appears the original is in vintage-retro DOS format:
source.encode!('UTF-8')
That should flip it from 8-bit ASCII to UTF-8 as expected by the Hpricot parser.
The encoding currently tagged on the source variable is UTF-8, but that is not what you want.
As tadman wrote, you must first tell Ruby that the actual characters in the string are in the IBM437 encoding. Then you can convert that string to your favourite encoding, but only if such a conversion is possible.
source.force_encoding('IBM437').encode('UTF-8')
In your case, you cannot convert your string to ISO-8859-2 because not all IBM437 characters can be converted to that charset. Sticking to UTF-8 is probably your best option.
Anyway, are you sure that the file is actually transmitted in IBM437? Maybe it is stored as such on the HTTP server but sent over the wire with another encoding. Or it may not even be exactly IBM437; it may be CP852, also called MS-DOS Latin 2 (which is different from ISO Latin 2).
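For reference, the two steps in full (a sketch; 'host' and the path are the question's placeholders, and IBM437 is assumed to be the true encoding):
source = Net::HTTP.get('host', '/' + ARGV[0] + '.asp')  # arrives as raw bytes
source.force_encoding('IBM437')    # step 1: declare what the bytes actually are
text = source.encode('UTF-8')      # step 2: transcode those characters to UTF-8
doc = Hpricot(text)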