Filename containing superscript causes gsutil error - utf-8

I have a script I run from the terminal that uploads files with gsutil on CentOS 7, and I receive an error about one of my file names:
Caught non-retryable exception while listing file:///home//:
CommandException: Invalid Unicode path encountered
('/home/mysite/public_html/images/office-100-m\xe2\xb2.jpg').
I checked in Python like this (notice the superscript):
>>> "office-100-m²-2.jpg".decode("utf-8")
u'office-100-m\xb2-2.jpg'
It decodes? I was expecting to see an error. When I checked the locale:
python -c "import locale; print locale.getdefaultlocale()"
('en_US', 'UTF-8')
So what is wrong with it?

That string is not a valid UTF-8 sequence. If you take the bytes from the error message and try to decode them, you'll see the error:
>>> s = '/home/mysite/public_html/images/office-100-m\xe2\xb2.jpg'
>>> s.decode('utf-8')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/encodings/utf_8.py", line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode bytes in position 44-45: invalid continuation byte
Similarly, if you encode the Unicode string from your question, the encoded bytes are different from what's shown in the error. This is the correct UTF-8-encoded version of that string:
>>> u"office-100-m²-2.jpg".encode('utf-8')
'office-100-m\xc2\xb2-2.jpg'
Notice \xc2\xb2 vs. \xe2\xb2. I'm not sure what encoding your filesystem filename is, but it doesn't appear to be UTF-8.
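The mismatch is easy to check directly (Python 3 syntax here, unlike the Python 2 session above; the filename and bytes are the ones from the question):

```python
# UTF-8 encodes the superscript two (²) as the two bytes C2 B2.
good = "office-100-m²-2.jpg".encode("utf-8")
assert b"\xc2\xb2" in good

# The bytes from the gsutil error, E2 B2 followed by '.', are not valid
# UTF-8: E2 starts a three-byte sequence that never completes.
bad = b"office-100-m\xe2\xb2.jpg"
try:
    bad.decode("utf-8")
except UnicodeDecodeError as e:
    print(e.reason)
```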

Related

iconv does not change encoding, no errors

I have a "gzip" byte in my plain-text file (I can read it; there is no weird symbol when I cat or nano it) that causes an error with a tool in Python:
'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte in line 49
My file is actually encoded in us-ascii, and I want to try an encoding conversion to get rid of the error. It should go from us-ascii to utf-8.
I used the command:
iconv -f US-ASCII -t UTF-8 id_to_genome_file.tsv -o id_to_genome_file2.tsv
without error but checking the file shows it is still in us-ascii:
file -i id_to_genome_file2.tsv
id_to_genome_file2.tsv: text/plain; charset=us-ascii
Why isn't the conversion taking effect? And would the conversion really solve the issue?
EDIT: the error wasn't in the file but somewhere else. The errors shown by the tool and its documentation are poorly written. Solved.
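As for why iconv appeared to do nothing: US-ASCII is a strict subset of UTF-8, so an all-ASCII file is already byte-for-byte valid UTF-8, and `file` simply reports the most specific charset it can. A quick check (in Python, since the failing tool was a Python one; the sample line is made up):

```python
# US-ASCII bytes (0x00-0x7F) mean exactly the same thing in UTF-8, so an
# ASCII-to-UTF-8 conversion changes no bytes at all.
line = "id_1\tgenome_1\n"          # hypothetical file content
assert line.encode("us-ascii") == line.encode("utf-8")

# The offending 0x8b byte is the second byte of the gzip magic number
# (1F 8B), which is why the error hints the data may really be gzip.
assert b"\x1f\x8b"[1] == 0x8b
```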

Ruby invalid multibyte char error (Sep 2019)

My script fails on this bad encoding. Even though I brought all files to UTF-8, some still won't convert or simply have wrong characters inside.
It actually fails on the variable-assignment step.
Can I set some kind of error handling for this case, like below, so my loop will continue? That ¿ causes all the problems.
I need to run this script all the way through without errors. I already tried encoding, force_encoding, and the shebang line. Does Ruby have any kind of error-handling routine so I can handle the bad case and continue with the rest of the script? How do I get rid of this error: invalid multibyte char (UTF-8)?
line = '¿USE [Alpha]'
lineOK = ' USE [Alpha] OK line'
>ruby ReadFile_Test.rb
ReadFile_Test.rb:15: invalid multibyte char (UTF-8)
I could reproduce your issue by saving the file with ISO-8859-1 encoding.
Running your code with the file in this non-UTF-8 encoding, the error popped up. My solution was to save the file as UTF-8.
I am using Sublime Text as my editor, and it has the option 'File > Save with Encoding'. I chose 'UTF-8' and was able to run the script.
puts line.encoding then showed UTF-8, and there was no error anymore.
I suggest re-checking the encoding of your saved script file.
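If you do want in-code error handling instead of re-saving the file, Ruby strings can be checked and scrubbed at runtime (a sketch: the \xBF byte below simulates how ¿ looks when the data was saved as ISO-8859-1):

```ruby
# ¿ saved as ISO-8859-1 is the single byte 0xBF, which is invalid UTF-8.
line = "\xBFUSE [Alpha]"
puts line.valid_encoding?    # false

# String#scrub replaces invalid bytes so the rest of a loop can continue.
clean = line.scrub('?')
puts clean                   # ?USE [Alpha]
puts clean.valid_encoding?   # true
```

Note this helps with bad data read at runtime; an invalid byte in the script source itself can only be fixed by re-saving the file, as above.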

Encoding issue on subprocess.Popen args

Yet another encoding question on Python.
How can I pass non-ASCII characters as parameters on a subprocess.Popen call?
My problem is not on the stdin/stdout as the majority of other questions on StackOverflow, but passing those characters in the args parameter of Popen.
Python script used for testing:
import subprocess
cmd = 'C:\Python27\python.exe C:\path_to\script.py -n "Testç on ã and ê"'
process = subprocess.Popen(cmd,stdout=subprocess.PIPE,stderr=subprocess.STDOUT)
output, err = process.communicate()
result = process.wait()
print result, '-', output
For this example call, script.py receives Testç on ã and ê. If I copy-paste this same command string into a CMD shell, it works fine.
What I've tried, besides what's described above:
Checked if all Python scripts are encoded in UTF-8. They are.
Changed to Unicode (cmd = u'...'); received a UnicodeEncodeError: 'ascii' codec can't encode character u'\xe7' in position 128: ordinal not in range(128) on line 5 (the Popen call).
Changed to cmd = u'...'.decode('utf-8'); received a UnicodeEncodeError: 'ascii' codec can't encode character u'\xe7' in position 128: ordinal not in range(128) on line 3 (the decode call).
Changed to cmd = u'...'.encode('utf8'), results in Testç on ã and ê
Added PYTHONIOENCODING=utf-8 env. variable with no luck.
Looking at tries 2 and 3, it seems like Popen issues a decode call internally, but I don't have enough experience in Python to get further on this suspicion.
Environment: Python 2.7.11 running on Windows Server 2012 R2.
I've searched for similar problems but haven't found any solution. A similar question is asked in what is the encoding of the subprocess module output in Python 2.7?, but no viable solution is offered.
I read that Python 3 changed the way string and encoding works, but upgrading to Python 3 is not an option currently.
Thanks in advance.
As noted in the comments, subprocess.Popen in Python 2 is calling the Windows function CreateProcessA which accepts a byte string in the currently configured code page. Luckily Python has an encoding type mbcs which stands in for the current code page.
cmd = u'C:\Python27\python.exe C:\path_to\script.py -n "Testç on ã and ê"'.encode('mbcs')
Unfortunately you can still fail if the string contains characters that can't be encoded into the current code page.
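That failure case can be simulated on any machine with a fixed narrow code page (cp437 here as a stand-in, since the 'mbcs' codec only exists on Windows):

```python
# cp437, the original IBM PC code page, contains ç but not, say, €.
assert "Testç".encode("cp437") == b"Test\x87"   # ç is byte 0x87 in cp437

try:
    "€".encode("cp437")
except UnicodeEncodeError:
    print("character not representable in this code page")
```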

Why do I get ArgumentError - invalid byte sequence in UTF-8?

While trying to print Duplicaci¾n out of a CSV file, I get the following error:
ArgumentError - invalid byte sequence in UTF-8
I'm using Ruby 1.9.3-p362 and opening the file using:
CSV.foreach(fpath, headers: true) do |row|
How can I skip an invalid character without using iconv or str.encode(undef: :replace, invalid: :replace, replace: '')?
I tried answers from the following questions, but nothing worked:
ruby 1.9: invalid byte sequence in UTF-8
Ruby Invalid Byte Sequence in UTF-8
This is from the CSV.open documentation:
You must provide a mode with an embedded Encoding designator unless your data is in Encoding::default_external(). CSV will check the Encoding of the underlying IO object (set by the mode you pass) to determine how to parse the data. You may provide a second Encoding to have the data transcoded as it is read just as you can with a normal call to IO::open(). For example, "rb:UTF-32BE:UTF-8" would read UTF-32BE data from the file but transcode it to UTF-8 before CSV parses it.
That applies to any method in CSV that opens a file.
Also start reading in the documentation at the part beginning with:
CSV and Character Encodings (M17n or Multilingualization)
Ruby is expecting UTF-8 but is seeing characters that don't fit. I'd suspect Windows-1252 or ISO-8859-1 or a variant.
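Putting that together for CSV.foreach (a sketch: the sample file is made up, and Windows-1252 is a guess based on the ¾ character, which is byte 0xBE in that encoding):

```ruby
require 'csv'

# Hypothetical sample: Windows-1252 bytes, where ¾ is 0xBE (invalid UTF-8).
File.binwrite('sample.csv', "name\nDuplicaci\xBEn\n".b)

# Transcode from Windows-1252 to UTF-8 as the file is read, so the CSV
# parser only ever sees valid UTF-8.
names = []
CSV.foreach('sample.csv', encoding: 'windows-1252:utf-8', headers: true) do |row|
  names << row['name']
end
puts names   # Duplicaci¾n
```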

Incompatible character encodings error

I'm trying to run a ruby script which generates translated HTML files from a JSON file. However I get this error:
incompatible character encodings: UTF-8 and CP850
Ruby
translation_hash = JSON.parse(File.read('translation_master.json').force_encoding("ISO-8859-1").encode("utf-8", replace: nil))
It seems to get stuck on this line of the JSON:
Json
"3": "Klassisch geschnittene Anzüge",
because there is a special character "ü". The JSON file's encoding is ANSI. Any ideas what could be wrong?
Try adding # encoding: UTF-8 to the top of the Ruby file. This tells Ruby to interpret the file with a different encoding. If this doesn't work, try to find out what kind of encoding the text uses and change the line accordingly.
IMHO your code should work if the encoding of the JSON file is "ISO-8859-1" and if it is a valid JSON file.
So you should first verify that "ISO-8859-1" is the correct encoding, and, by the way, that the file is a valid JSON file.
# read the file with the encoding, you assume it is correct
json_or_not = File.read('translation_master.json').force_encoding("ISO-8859-1")
# print the result and check if something is obscure
puts json_or_not
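If "ANSI" turns out to mean Windows-1252 (the usual meaning on Western-European Windows; an assumption to verify), then ü is the single byte 0xFC, and the read-tag-transcode-parse chain looks like this (the sample file content is made up):

```ruby
require 'json'

# Hypothetical sample: Windows-1252 bytes, where ü is 0xFC (invalid UTF-8).
File.binwrite('translation_sample.json', "{\"3\": \"Anz\xFCge\"}".b)

raw = File.binread('translation_sample.json')
# Tag the bytes with their real encoding, then transcode to UTF-8.
translation_hash = JSON.parse(raw.force_encoding('Windows-1252').encode('UTF-8'))
puts translation_hash['3']   # Anzüge
```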
