I am trying to compile this Ruby code with the option --1.9:
# encoding: utf-8
module Modd
  def cpd
    #"_¦+?" mySQL
    "ñ,B˜"
  end
end
I used the GVim editor, and when I compiled I got the following error:
SyntaxError: f3.rb:6: invalid multibyte char (UTF-8)
After that I opened the file in Notepad++, changed the encoding to 'Encode in UTF-8', and compiled with this option:
jruby --1.9 f3.rb
then I get:
SyntaxError: f3.rb:1: \273Invalid char `\273' ('╗') in expression
I have seen this happen when the BOM gets messed up during a charset conversion (the BOM in octal is 357 273 277). If you open the file with a hexadecimal editor (:%!xxd on vi), you will more than likely see characters at the beginning of the file, before the first #.
If you recreate that file directly in UTF-8, or get rid of these spurious characters, this should solve your problem.
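For reference, here is a minimal sketch (my own addition, not part of the original answer) of how you could strip a leading UTF-8 BOM from the file in Ruby itself; the file name f3.rb is taken from the error messages above:
path = 'f3.rb'
bytes = File.open(path, 'rb') { |f| f.read }            # read the raw bytes
if bytes.bytes.first(3) == [0xEF, 0xBB, 0xBF]           # the UTF-8 BOM (octal 357 273 277)
  File.open(path, 'wb') { |f| f.write(bytes[3..-1]) }   # rewrite the file without the BOM
end
After that the file should begin with the # encoding comment and parse cleanly.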
Related
My script fails on this bad encoding; even though I brought all files to UTF-8, some still won't convert or simply contain wrong characters.
It actually fails at the variable assignment step.
Can I set up some kind of error handling for this case, like below, so my loop will continue? That ¿ causes all the problems.
I need to run this script all the way through without errors. I have already tried encoding and force_encoding and a shebang line. Does Ruby have any kind of error-handling routine so I can handle that bad case and continue with the rest of the script? How do I get rid of the error invalid multibyte char (UTF-8)?
line = '¿USE [Alpha]'
lineOK = ' USE [Alpha] OK line'
>ruby ReadFile_Test.rb
ReadFile_Test.rb:15: invalid multibyte char (UTF-8)
I could reproduce your issue by saving the file with ISO-8859-1 encoding.
Running your code with the file in this non-UTF-8 encoding, the error popped up. My solution was to save the file as UTF-8.
I am using Sublime Text as my editor, and it has the option 'File > Save with Encoding'. I chose 'UTF-8' and was able to run the script.
puts line.encoding then showed UTF-8 and the error was gone.
I suggest re-checking the encoding of your saved script file.
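If the bad bytes come from data you read at runtime rather than from the script file itself, a small sketch like this (my own; the file name input.sql is made up) replaces invalid bytes so the loop can continue:
File.foreach('input.sql') do |line|
  unless line.valid_encoding?
    # round-trip through UTF-16 to drop bytes that are not valid UTF-8
    line = line.encode('UTF-16', invalid: :replace, undef: :replace, replace: '').encode('UTF-8')
  end
  puts line
end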
While trying to print Duplicaci¾n out of a CSV file, I get the following error:
ArgumentError - invalid byte sequence in UTF-8
I'm using Ruby 1.9.3-p362 and opening the file using:
CSV.foreach(fpath, headers: true) do |row|
How can I skip an invalid character without using iconv or str.encode(undef: :replace, invalid: :replace, replace: '')?
I tried answers from the following questions, but nothing worked:
ruby 1.9: invalid byte sequence in UTF-8
Ruby Invalid Byte Sequence in UTF-8
This is from the CSV.open documentation:
You must provide a mode with an embedded Encoding designator unless your data is in Encoding::default_external(). CSV will check the Encoding of the underlying IO object (set by the mode you pass) to determine how to parse the data. You may provide a second Encoding to have the data transcoded as it is read just as you can with a normal call to IO::open(). For example, "rb:UTF-32BE:UTF-8" would read UTF-32BE data from the file but transcode it to UTF-8 before CSV parses it.
That applies to any method in CSV that opens a file.
Also start reading in the documentation at the part beginning with:
CSV and Character Encodings (M17n or Multilingualization)
Ruby is expecting UTF-8 but is seeing characters that don't fit. I'd suspect WIN-1252 or ISO-8859-1 or a variant.
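In practice that means passing the source encoding when you open the file. A sketch following the quoted documentation, assuming the data is really Windows-1252 (you would have to confirm the actual encoding):
require 'csv'
# 'rb:Windows-1252:UTF-8' reads Windows-1252 bytes and transcodes them
# to UTF-8 before CSV parses them
CSV.open(fpath, 'rb:Windows-1252:UTF-8', headers: true) do |csv|
  csv.each do |row|
    puts row.inspect
  end
end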
This is my Ruby code:
require 'json'
a=Array.new
value="¿value"
data=value.gsub('¿','-')
a[0]=data
puts a
puts "json is"
puts jsondata=a.to_json
I am getting the following error:
C:\Ruby193>new.rb
C:/Ruby193/New.rb:3: invalid multibyte char (US-ASCII)
C:/Ruby193/New.rb:3: syntax error, unexpected tIDENTIFIER, expecting $end
value="┐value"
^
That's not a JSON problem — Ruby can't decode your source because it contains a multibyte character. By default, Ruby tries to decode files as US-ASCII, but ¿ isn't representable in US-ASCII, so it fails. The solution is to provide a magic comment as described in the documentation. Assuming your source file's encoding is UTF-8, you can tell Ruby that like so:
# encoding: UTF-8
# ...
value = "¿value"
# ...
With an editor or an IDE, the solution from icktoofay (# encoding: UTF-8 in the first line) is perfect.
In a shell with IRB or PRY it is difficult to find a working configuration, but there is a workaround that at least worked for my encoding problem, which was entering German umlaut characters.
Workaround for PRY:
In PRY I use the edit command to edit the contents of the input buffer
as described in this pry wiki page.
This opens an external editor (you can configure which editor you want), and that editor accepts special characters that cannot be entered in PRY directly.
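For completeness, a one-line sketch of how the editor can be configured (assuming you want Vim; this goes in your ~/.pryrc):
# ~/.pryrc: tell PRY which external editor the edit command should open
Pry.config.editor = 'vim'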
I've searched all over and tried everything but I still get:
invalid multibyte char (UTF-8)
When doing something like:
some_string.gsub(/…/)
Even though I added this to the top of the file:
# encoding: utf-8
Any help?
Try:
some_string.gsub(/\u2026/)
You can also take a look at this question for more information.
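For example, a minimal sketch (the input string is made up) that replaces the ellipsis with three ASCII dots, so no literal non-ASCII character has to appear in the source file:
some_string = "to be continued\u2026"
puts some_string.gsub(/\u2026/, '...')   # prints "to be continued..."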
If I save the .rb file as ASCII, the ® can't be displayed correctly.
If I save the .rb file as Unicode, it causes errors like:
Invalid char `\357' in expression
Invalid char `\273' in expression
Invalid char `\277' in expression
You could also try Array#pack.
puts [174].pack('U*')
That won't require any non-ASCII characters in your source code.
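A small usage sketch (the surrounding text is made up) showing the packed character interpolated into an otherwise ASCII-only source file:
reg = [0xAE].pack('U')            # 0xAE == 174, the codepoint of the registered-trademark sign
puts "#{reg} 2024 Example Corp"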
You have to declare the source encoding:
# coding: utf-8
p "®"
(just add the # coding: utf-8 line at the top of your file to declare its encoding as UTF-8)
If you're displaying this in a web browser, use the &reg; or &#174; HTML entities. The browser should interpret them as the correct character.