I am processing a CSV file uploaded by users. The CSV has only one column, with the header row "API".
When I process the CSV, for one of the files I see that
"API".downcase.length displays 4
Could it be an encoding issue? When I do header[0].downcase.bytes for the string, I see
[239, 187, 191, 97, 112, 105]
When I do "api".bytes I see
[97, 112, 105]
Any help in understanding why "API".downcase.length in the above example displays 4 would be really appreciated.
I parse the file like this:
CSV.foreach(@file_path, headers: true) do |row|
Thanks.
It looks like in this case the extra character is coming from a BOM (Byte Order Mark). These are hidden characters that are sometimes used to indicate the encoding type of the file.
One way to handle BOM characters is to specify the bom|utf-* encoding when reading the file:
CSV.open(@file_path, "r:bom|utf-8", headers: true)
When bom|utf-* is used, Ruby will check for a Unicode BOM in the input document to help determine the encoding, and if a BOM is found it is stripped out - Ruby's IO docs cover this in more detail.
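To make the difference concrete, here is a minimal sketch (upload.csv is a placeholder name for a file saved with a UTF-8 BOM):

require 'csv'

# Without BOM handling, the first header keeps the three BOM bytes:
CSV.read("upload.csv", headers: true).headers[0].bytes
#=> [239, 187, 191, 65, 80, 73]

# With bom|utf-8 the BOM is stripped before parsing, so the header
# matches the plain string "API":
CSV.foreach("upload.csv", headers: true, encoding: "bom|utf-8") do |row|
  row["API"]
end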
Both of these byte sequences seem to render correctly in Chrome and my text editor, but the latter is causing some layout problems in a PDF document.
Here are the byte sequences (in decimal):
å: 195, 165
å: 97, 204, 138
I can see that 195, 165 is the expected sequence for UTF-8: https://en.wikipedia.org/wiki/%C3%85#On_computers
Is 97, 204, 138 also a valid way to encode the character for a UTF-8 string? Or is this a different encoding that just happens to work in some contexts?
I am using the Ruby programming language. Is there any way that I could detect when a user submits this kind of character using the 97, 204, 138 encoding, and safely convert these characters into the 195, 165 encoding?
I have discovered that the first å character is a single character called "latin small letter a with ring above" (U+00E5).
The second å character is a plain letter "a" followed by the "combining ring above" character (U+030A), so it's actually two separate characters that are rendered as one.
I used this service to inspect the characters: https://apps.timwhitlock.info/unicode/inspect
To answer the second part of the question, Ruby does have a #unicode_normalize method that will automatically convert the two-character 97, 204, 138 sequence into a single character: 195, 165.
There are multiple ways to normalize Unicode (NFD, NFC, NFKD and NFKC), so this article goes into much more detail: Unicode Normalization in Ruby
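Here is a small irb-style sketch of that normalization (the escapes \u030A and \u00E5 are the combining ring above and the precomposed å):

decomposed = "a\u030A"                 # "a" + combining ring above
decomposed.bytes                       #=> [97, 204, 138]
decomposed.length                      #=> 2

composed = decomposed.unicode_normalize(:nfc)
composed.bytes                         #=> [195, 165]
composed.length                        #=> 1
composed == "\u00E5"                   #=> true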
How can I load a YAML file regardless of its encoding?
My YAML file can be encoded in UTF-8 or ANSI (that's what Notepad++ calls it - I guess it's Windows-1252):
:key1:
  :key2: "ä"
utf8.yml is encoded in UTF-8, ansi.yml is encoded in ANSI. I load the files as follows:
# encoding: utf-8
Encoding.default_internal = "utf-8"
utf8_load = YAML::load(File.open('utf8.yml'))
utf8_load_file = YAML::load_file('utf8.yml')
ansi_load = YAML::load(File.open('ansi.yml'))
ansi_load_file = YAML::load_file('ansi.yml')
It seems like Ruby doesn't recognize the encoding correctly:
utf8_load[:key1][:key2].encoding      #=> "UTF-8"
utf8_load_file[:key1][:key2].encoding #=> "UTF-8"
ansi_load[:key1][:key2].encoding      #=> "UTF-8"
ansi_load_file[:key1][:key2].encoding #=> "UTF-8"
because the bytes aren't the same:
utf8_load[:key1][:key2].bytes      #=> [195, 164]
utf8_load_file[:key1][:key2].bytes #=> [195, 164]
ansi_load[:key1][:key2].bytes      #=> [239, 191, 189]
ansi_load_file[:key1][:key2].bytes #=> [239, 191, 189]
If I omit Encoding.default_internal = "utf-8", the bytes are also different:
utf8_load[:key1][:key2].bytes      #=> [195, 131, 194, 164]
utf8_load_file[:key1][:key2].bytes #=> [195, 164]
ansi_load[:key1][:key2].bytes      #=> [195, 164]
ansi_load_file[:key1][:key2].bytes #=> [239, 191, 189]
What happens actually when I don't set the default_internal to utf-8?
Which encodings do the strings in both examples have?
How can I load a file even if I don't know its encoding?
I believe officially YAML only supports UTF-8 (and maybe UTF-16). There have historically been all sorts of encoding confusions in YAML libraries. I think you are going to run into trouble trying to have YAML in something other than a Unicode encoding.
What happens actually when I don't set the default_internal to utf-8?
Encoding.default_internal controls the encoding your input will be converted to when it is read in, at least by the operations that respect Encoding.default_internal; not everything does. Rails seems to set it to UTF-8, so if you don't set Encoding.default_internal to UTF-8 yourself, it might be UTF-8 already anyway.
If Encoding.default_internal is nil, then the operations that respect it won't convert anything on input; they'll leave the input in the encoding it was believed to originate in.
If you set it to something else, say "WINDOWS-1252", Ruby would automatically convert your input to WINDOWS-1252 when reading it in with File.open, which would likely confuse YAML::load when you pass it a string that is now encoded and tagged as WINDOWS-1252. Generally there's no good reason to do this, so leave Encoding.default_internal alone.
Note: The Ruby docs say:
"You should not set ::default_internal in Ruby code as strings created before changing the value may have a different encoding from strings created after the change. Instead you should use ruby -E to invoke Ruby with the correct default_internal."
See also: http://ruby-doc.org/core-1.9.3/Encoding.html#method-c-default_internal
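Purely to illustrate the conversion behaviour in an irb session (given the warning above, don't set default_internal like this in real code), assuming latin1.txt contains a single "ä" encoded in ISO-8859-1:

Encoding.default_internal = nil
s = File.open("latin1.txt", "r:ISO-8859-1") { |f| f.read }
s.encoding #=> #<Encoding:ISO-8859-1>
s.bytes    #=> [228]

Encoding.default_internal = Encoding::UTF_8
s = File.open("latin1.txt", "r:ISO-8859-1") { |f| f.read }
s.encoding #=> #<Encoding:UTF-8>
s.bytes    #=> [195, 164]   # transcoded on read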
Which encodings do the strings in both examples have?
I don't really know. One would have to look at the bytes and try to figure out if they are legal bytes for various plausible encodings, and beyond being legal, whether they mean something likely to be intended.
For example, take: "ÉGÉìÉRÅ[ÉfÉBÉìÉOÇÕìÔǵÇ≠ǻǢ". That's a perfectly legal UTF-8 string, but as humans we know it's probably not intended and is probably garbage, quite likely the result of an encoding misinterpretation. But a computer has no way to know that; it's perfectly legal UTF-8, and, hey, maybe someone actually did mean to write "ÉGÉìÉRÅ[ÉfÉBÉìÉOÇÕìÔǵÇ≠ǻǢ"; after all, I just did, when writing this post!
So you can try to interpret the bytes according to various encodings and see if any of them make sense.
You're really just guessing at this point. Which means...
How can I load a file even if I don't know its encoding?
Generally, you cannot. You need to know and keep track of encodings. There's no real way to know what the bytes mean without knowing their encoding.
If you have some legacy data for which you've lost this, you've got to try to figure it out. Manually, or with some code that tries to guess likely encodings based on heuristics. Here's one Ruby gem Charlock Holmes that tries to guess, using the ICU library heuristics (this particular gem only works on MRI).
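For instance, a sketch with the charlock_holmes gem (mystery.txt is a placeholder; the detection result is a heuristic guess with a confidence score, not a guarantee):

require 'charlock_holmes'   # gem install charlock_holmes

contents  = File.read("mystery.txt", mode: "rb")
detection = CharlockHolmes::EncodingDetector.detect(contents)
#=> e.g. { :type => :text, :encoding => "ISO-8859-1", :confidence => 70 }

utf8 = CharlockHolmes::Converter.convert(contents, detection[:encoding], "UTF-8")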
What Ruby says in response to string.encoding is just the encoding the string is tagged with. The string can be tagged with the wrong encoding, the bytes in the string don't actually mean what is intended in the encoding it's tagged with... in which case you'll get garbage.
Ruby will do the right things with your string, instead of creating garbage, only if the string's encoding tag is correct. For most input operations, that tag is determined by Encoding.default_external (which usually starts out as UTF-8, or as ASCII-8BIT, which really means the null encoding: binary data not tagged with an encoding), or by passing an argument to File.open: File.open("something", "r:UTF-8"), or, meaning the same thing, File.open("something", "r", encoding: "UTF-8"). The actual bytes are determined by whatever is in the file. It's up to you to tell Ruby the correct encoding to interpret those bytes as text meaning what they were intended to mean.
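A quick sketch of the difference between re-tagging and actually transcoding:

s = "ä".b          # the same two UTF-8 bytes, tagged as binary
s.encoding         #=> #<Encoding:ASCII-8BIT>

# force_encoding only changes the tag; the bytes stay the same:
s.force_encoding("UTF-8")
s.bytes            #=> [195, 164]

# encode actually transcodes; the bytes change:
s.encode("ISO-8859-1").bytes #=> [228]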
There were a couple of posts recently on reddit /r/ruby that try to explain how to troubleshoot and work around encoding issues, which you may find helpful:
http://www.justinweiss.com/articles/how-to-get-from-theyre-to-theyre/
http://www.justinweiss.com/articles/3-steps-to-fix-encoding-problems-in-ruby/
Also, this is my favorite article on understanding encoding generally: http://kunststube.net/encoding/
For YAML files in particular, if I were you, I'd just make sure they are all in UTF-8. Life will be much easier and you won't have to worry about it. If you have some legacy ones that have become corrupted, it's going to be a pain to fix them, but that's what you've got to do, unless you can just rewrite them from scratch. Try to fix them to be in valid and correct UTF-8, and from here on out keep all your YAML in UTF-8.
The YAML specification says in "5.1. Character Set":
To ensure readability, YAML streams use only the printable subset of the Unicode character set. The allowed character range explicitly excludes the C0 control block #x0-#x1F (except for TAB #x9, LF #xA, and CR #xD which are allowed), DEL #x7F, the C1 control block #x80-#x9F (except for NEL #x85 which is allowed), the surrogate block #xD800-#xDFFF, #xFFFE, and #xFFFF.
This means that Windows-1252 or ISO-8859-1 encoded text is acceptable only as long as the characters being output fall within the allowed range. Windows users tend to use the C1 control block (#x80-#x9F) range for diacritical and accented characters, so if those are present in a YAML file, the file is not going to meet the spec, and the YAML generator didn't do its job correctly. That would explain why the "ä" isn't acceptable.
On output, a YAML processor must only produce acceptable characters. Any excluded characters must be presented using escape sequences. In addition, any allowed characters known to be non-printable should also be escaped. This isn’t mandatory since a full implementation would require extensive character property tables.
These days, by default, Ruby uses UTF-8, however YAML isn't limited to that. The spec goes on to say in "5.2. Character Encodings":
On input, a YAML processor must support the UTF-8 and UTF-16 character encodings. For JSON compatibility, the UTF-32 encodings must also be supported.
If a character stream begins with a byte order mark, the character encoding will be taken to be as indicated by the byte order mark. Otherwise, the stream must begin with an ASCII character. This allows the encoding to be deduced by the pattern of null (#x00) characters.
So, UTF-8, 16 and 32 are supported, but Ruby will assume UTF-8. If the BOM is present you'll see it when you view the file in an editor. I haven't tried loading a UTF-16 or 32 file to see what Ruby's YAML does, so that's left as an experiment.
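If you want to be defensive about a stray BOM when loading, a small sketch (settings.yml is a placeholder name):

require 'yaml'

# Read the file as UTF-8 and strip a leading BOM if one is present:
text = File.read("settings.yml", encoding: "bom|utf-8")
data = YAML.load(text)
# (on newer Rubies you may want YAML.safe_load with appropriate permitted classes)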
Sometimes I have evil non-printable characters in the middle of a string. These strings are user input, so I must make my program receive them well instead of trying to change the source of the problem.
For example, they can have a zero width no-break space in the middle of the string. While parsing a .po file, one problematic part was the string "he is a man of god" (with a hidden character before "man") in the middle of the file. While everything seems correct, inspecting it with irb shows:
"he is a man of god".codepoints
=> [104, 101, 32, 105, 115, 32, 97, 32, 65279, 109, 97, 110, 32, 111, 102, 32, 103, 111, 100]
I believe that I know what a BOM is, and I even handle it nicely. However, sometimes I have such characters in the middle of the file, so it is not a BOM.
My current approach is to remove all characters that I have found to be evil, in a really smelly fashion:
text = (text.codepoints - CODEPOINTS_BLACKLIST).pack("U*")
The closest I got was following this post, which led me to the [[:print:]] option on regexps. However, it was no good for me:
"\u{FEFF}m".scan(/[[:print:]]/).join.codepoints
=> [65279, 109]
So the question is: how can I remove all non-printable characters from a string in Ruby?
Try this:
>> "aaa\f\d\x00abcd".gsub(/[^[:print:]]/, '.')
=> "aaa.d.abcd"
Ruby can help you convert from one multi-byte character set to another. Check into the search results, plus read up on Ruby String's encode method.
Also, Ruby's Iconv is your friend. (Note: Iconv was deprecated in Ruby 1.9 and removed from the standard library in 2.0; String#encode is the modern replacement.)
Finally, James Gray wrote a series of articles which cover this in good detail.
One of the things you can do with those tools is tell them to transcode unmappable characters to visually similar ones, or to ignore them completely.
Dealing with alternate character sets is one of the most... irritating things I've ever had to do, because files can contain anything, but be marked as text. You might not expect it and then your code dies or starts throwing errors, because people are so ingenious when coming up with ways to insert alternate characters into content.
Codepoint 65279 is a zero-width no-break space.
It is commonly used as a byte-order mark (BOM).
You can remove it from a string with:
my_new_string = my_old_string.gsub("\xEF\xBB\xBF".force_encoding("UTF-8"), '') # gsub, not gsub!, so a string is always returned
A fast way to check whether you have any invisible characters is to check the length of the string: if it's higher than the number of characters you can see in IRB, you have some.
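Since the zero-width no-break space is a single code point (U+FEFF), String#delete works just as well and reads a little more clearly:

cleaned = "he is a \u{FEFF}man of god".delete("\u{FEFF}")
cleaned.codepoints.include?(65279) #=> false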
I have an ISO-8859-1 encoded CSV file that I try to open and parse with Ruby:
require 'csv'
filename = File.expand_path('~/myfile.csv')
file = File.open(filename, "r:ISO-8859-1")
CSV.parse(file.read, col_sep: "\t") do |row|
puts row
end
If I leave out the encoding from the call to File.open, I get an error:
ArgumentError: invalid byte sequence in UTF-8
My problem is that the call to puts row displays strange characters instead of the Norwegian characters æ, ø, å:
BOKF�RINGSDATO
I get the same if I open the file in textmate, forcing it to use UTF-8 encoding.
By assigning the file content to a string, I can check the encoding used for the string. As expected, it shows ISO-8859-1.
So when I puts each row, why does it output the string as UTF-8?
Is it something to do with the CSV library?
I use Ruby 1.9.2.
Found myself an answer by trying different things from the documentation:
require 'csv'
filename = File.expand_path('~/myfile.csv')
File.open(filename, "r:ISO-8859-1") do |file|
CSV.parse(file.read.encode("UTF-8"), col_sep: "\t") do |row|
# ↳ returns a copy transcoded to UTF-8.
puts row
end
end
As you can see, all I have done is encode the string to a UTF-8 string before the CSV parser gets it.
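As a side note, Ruby can also transcode while reading if you give File.open an "external:internal" encoding pair, which removes the need for the explicit encode call; a sketch of the same parse:

File.open(filename, "r:ISO-8859-1:UTF-8") do |file| # read ISO-8859-1, get UTF-8 strings
  CSV.parse(file.read, col_sep: "\t") do |row|
    puts row
  end
end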
Edit:
Trying this solution on macruby-head, I get the following error message from encode():
Encoding::InvalidByteSequenceError: "\xD8" on UTF-8
Even though I specify the encoding when opening the file, MacRuby uses UTF-8.
This seems to be a known MacRuby limitation: Encoding is always UTF-8
Maybe you could use Iconv to convert the file contents to UTF-8 before parsing?
ISO-8859-1 and Win-1252 are really close in their character sets. Could some app have processed the file and converted it? Or could it have been received from a machine that was defaulting to Win-1252, which is Windows' standard setting?
Software that senses the code-set can get the encoding wrong if there are no characters in the 0x80 to 0x9F byte-range, so you might try changing file = File.open(filename, "r:ISO-8859-1") to file = File.open(filename, "r:Windows-1252"). (I think "Windows-1252" is the right encoding name.)
I used to write spiders, and HTML is notorious for being mis-labeled or for having encoded binary characters from one character set embedded in another. I used some bad language many times over these problems several years ago, before most languages had implemented UTF-8 and Unicode so I understand the frustration.
See also: ISO/IEC 8859-1, Windows-1252
I need to do something where the file sizes are crucial. This is producing strange results:
filename = "testThis.txt"
total_chars = 0
file = File.new(filename, "r")
file_for_writing = nil
while (line = file.gets)
total_chars += line.length
end
puts "original size #{File.size(filename)}"
puts "Totals #{total_chars}"
like this:
original size 20121
Totals 20061
Why is the second one coming up short?
Edit: Answerers' hunches are right: the test file has 60 lines in it. If I change the counting line to
total_chars += line.length + 1
it works perfectly. But on *nix this change would be wrong?
Edit: Follow up is now here. Thanks!
There are special characters stored in the file that delineate the lines:
CR LF (0x0D 0x0A) (\r\n) on Windows/DOS and
0x0A (\n) on UNIX systems.
Ruby's gets uses the UNIX convention, so if you read a Windows file you lose 1 byte for every line you read, as the \r\n pair is converted to a single \n.
Also, String#length is not a good measure of the size of the string in bytes. If the string is not ASCII, one character may be represented by more than one byte (Unicode). That is, it returns the number of characters in the string, not the number of bytes.
To get the size of a file, use File.size(file_name).
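If you do want to count while reading, a sketch that opens the file in binary mode ("rb", so no newline translation happens) and counts bytes rather than characters:

total_bytes = 0
File.open("testThis.txt", "rb") do |f|
  f.each_line { |line| total_bytes += line.bytesize }
end
total_bytes == File.size("testThis.txt") #=> true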
My guess would be that you are on Windows, and your "testThis.txt" file has \r\n line endings. When the file is opened in text mode, each line ending will be converted to a single \n character. Therefore you'll lose 1 character per line.
Does your test file have 60 lines in it? That would be consistent with this explanation.
The line-ending issue is the most likely culprit here.
It's also worth noting that if the character encoding of the text file is something other than ASCII, you will have a discrepancy between the two as well. If the file is UTF-8, this will work for English and some European languages that use just the standard ASCII alphabet symbols. Beyond that, the file size and character count can vary wildly, with the file size being up to 4 or even 6 times the character count.
Relying on '1 character = 1 byte' is just asking for trouble as it is almost certainly going to fail at some point.