I'd like to download URLs that contain Japanese characters, for example:
http://ja.wikipedia.org/wiki/%E3%82%A2%E3%82%A4%E3%82%B6%E3%83%83%E3%82%AF%E3%83%BB%E3%82%A2%E3%82%B7%E3%83%A2%E3%83%95
When using wget to download the file, the file name becomes gibberish.
In wget's manual, there is something about the problem:
If you specify ‘nocontrol’, then the escaping of the control characters is also switched off. This option may make sense when you are downloading URLs whose names contain UTF-8 characters, on a system which can save and display filenames in UTF-8 (some possible byte values used in UTF-8 byte sequences fall in the range of values designated by Wget as “controls”).
So I tried the following command. However, it didn't work. What is the problem?
wget --restrict-file-names=nocontrol http://ja.wikipedia.org/wiki/%E3%82%A2%E3%82%A4%E3%82%B6%E3%83%83%E3%82%AF%E3%83%BB%E3%82%A2%E3%82%B7%E3%83%A2%E3%83%95
Related
I am trying to use wget -m <address> to download the contents of an FTP server. A lot of the content is Icelandic and so contains a bunch of weird characters that I think are causing issues, as I keep seeing:
Incomplete or invalid multibyte sequence encountered
I have tried adding flags such as --restrict-file-names=nocontrol but to no avail.
I have also tried using lftp, but it doesn't seem to make any difference.
According to the wget manual:
If you specify ‘nocontrol’, then the escaping of the control
characters is also switched off.
That is actually more permissive than the default. The "bunch of weird characters" suggests you have an encoding mismatch, so ascii looks like the better fit for your use case:
The ‘ascii’ mode is used to specify that any bytes whose values are
outside the range of ASCII characters (that is, greater than 127)
shall be escaped. This can be useful when saving filenames whose
encoding does not match the one used locally.
As I do not have the ability to test this, please try it and report the result.
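If it helps to picture what ascii mode does to the names, here is a rough Java sketch (not wget's actual code; the Icelandic filename is made up) of the transformation: every byte above 127 in the UTF-8 name gets percent-escaped, so the saved name is plain ASCII.

import java.nio.charset.StandardCharsets;

public class EscapeNonAscii {
    public static void main(String[] args) {
        String name = "þjóðsögur.txt";   // hypothetical Icelandic filename
        StringBuilder escaped = new StringBuilder();
        for (byte b : name.getBytes(StandardCharsets.UTF_8)) {
            int value = b & 0xFF;
            if (value > 127) {
                escaped.append(String.format("%%%02X", value)); // escape non-ASCII byte
            } else {
                escaped.append((char) value);                   // keep plain ASCII as-is
            }
        }
        System.out.println(escaped); // %C3%BEj%C3%B3%C3%B0s%C3%B6gur.txt
    }
}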
I have a CSV with content that is UTF-8 encoded. However, various applications and systems erroneously detect the encoding of the CSV as Windows-1252, which breaks all the special characters in the file (e.g. umlauts).
I can see that Sublime Text (on Windows) for example also automatically detects the wrong Windows-1252 encoding, when opening the file for the first time, showing garbled text where special characters are supposed to be.
When I choose Reopen with Encoding » UTF-8, everything will look fine, as expected.
Now, to find the source of the error I thought it might help to figure out why these applications are not automatically detecting the correct encoding in the first place. Maybe there is a stray character somewhere with the wrong encoding, for example.
The CSV in question is actually an automatically generated product export of a Magento 2 installation. Recently the character encodings broke and I am currently trying to figure out what happened - hence my investigation on why this export is detected as Windows-1252.
Is there any reliable way of figuring out why the automatic detection of applications like Sublime Text assumes the wrong character encoding?
This is what I did in the end to find out why the file was not detected as UTF-8, i.e. to find the characters that were not encoded in UTF-8. Since PHP is more readily available to me, I decided to simply use the following script to force-convert anything that is not UTF-8 to UTF-8, using the very handy neitanod/forceutf8 library.
// read the export that is being mis-detected
$before = file_get_contents('export.csv');
// force-convert anything that is not valid UTF-8 to UTF-8
$after = \ForceUTF8\Encoding::toUTF8($before);
// write the normalized copy next to the original for comparison
file_put_contents('export.fixed.csv', $after);
Then I used a file comparison tool like Beyond Compare to compare the two resulting CSVs, in order to see more easily which characters were not originally encoded in UTF-8.
This in turn showed me that only one particular column of the export was affected. Upon further investigation I found out that the contents of that column were processed in PHP with the following preg_replace:
$value = preg_replace('/([^\pL0-9 -])+/', '', $value);
Using \p in the regular expression had an unintended side effect: without the u modifier, PCRE matches the pattern byte by byte, so the bytes of the multibyte UTF-8 characters in that column were mangled. A quick solution is to add the u flag to the pattern (see the regex pattern modifiers reference), i.e. preg_replace('/([^\pL0-9 -])+/u', '', $value), which makes preg_replace treat both the pattern and the subject as UTF-8. See also this answer.
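To illustrate why this broke encoding detection, here is a small Java sketch of the same failure mode (the sample value is made up): filtering raw UTF-8 bytes instead of decoded characters tears multibyte sequences apart, and the surviving bytes are no longer valid UTF-8, which is what pushes detectors toward Windows-1252.

import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class ByteWiseDamage {
    public static void main(String[] args) {
        String value = "Müller-Größe 10";   // hypothetical column value with umlauts
        ByteArrayOutputStream kept = new ByteArrayOutputStream();
        for (byte b : value.getBytes(StandardCharsets.UTF_8)) {
            char c = (char) (b & 0xFF);
            // byte-wise filter, similar to the regex applied without the u modifier
            if (Character.isLetterOrDigit(c) || c == ' ' || c == '-') {
                kept.write(b);
            }
        }
        // decoding the surviving bytes shows the damage: the lone lead bytes of the
        // umlauts come back as U+FFFD replacement characters
        System.out.println(kept.toString(StandardCharsets.UTF_8));
    }
}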
I'm writing up a game in P5.js which draws emojis to a canvas.
I was originally using the Sublime Text editor to copy-paste special ASCII characters straight into the code, which worked fine, but I now only have access to nano, which doesn't seem to accept them the same way.
Nano manages to convert what I have already done into some different characters. Presumably this is nano's way of interpreting those ASCII characters.
I am using this because phones and browsers now automatically convert these ASCII characters into emojis.
Example: the heart emoji is converted from the special ASCII character ♥ in sublime, to âö¥ in Nano automatically when you open the file.
I am wondering if there is a reference sheet somewhere where I can find other conversions for emojis I would like to use.
Just forget ASCII. HTML uses Unicode characters. JavaScript uses Unicode's UTF-16 encoding. Your files might use Unicode's UTF-8 encoding.
ASCII does not have the character ♥.
Special characters in JavaScript include quote, double quote, backslash, and similar. If you wish or need to, you can escape UTF-16 code units using the "\uABCD" notation. Special characters in HTML are <, >, & and similar. If you wish or need to, you can use named or numeric character entity references like &amp; or &#x1F6B2;.
♥ is not special; it's just a character with no particular purpose, just like tens of thousands of others.
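For example (Java shown here, though the escape notation carries over to JavaScript; the emoji choice is arbitrary), ♥ is U+2665 and fits in a single UTF-16 code unit, while most emoji sit above U+FFFF and need a surrogate pair:

public class HeartAndEmoji {
    public static void main(String[] args) {
        String heart = "\u2665";        // BLACK HEART SUIT, one UTF-16 code unit
        String grin = "\uD83D\uDE00";   // U+1F600 GRINNING FACE as a surrogate pair
        System.out.println(heart + " " + grin);
        // stored in a UTF-8 file, the same characters become the byte
        // sequences E2 99 A5 and F0 9F 98 80 respectively
    }
}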
Conversion from typed characters to other characters is an input function, typically performed by the OS or other input software, so that's generally outside the scope of HTML and JavaScript.
A text file has an encoding. Some programs help you when opening a file by guessing; you then sometimes have to correct them.
It's generally easiest if all files are UTF-8. Sometimes a BOM helps, sometimes not. The fundamental rule about character encodings is to read using the encoding that was used to write with.
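In Java, for instance, you can make the reading side explicit instead of relying on a guess (Java 11+; the file name is hypothetical):

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class ReadUtf8 {
    public static void main(String[] args) throws Exception {
        // read the file with the encoding it was written in, assumed here to be UTF-8
        String source = Files.readString(Path.of("sketch.js"), StandardCharsets.UTF_8);
        System.out.println(source.length() + " characters read");
    }
}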
The list of Unicode characters is here. There are several other good sites for searching and coding, including http://www.fileformat.info/.
I get text files of random encoding format: UCS-2LE, ANSI, UTF-8, UCS-2BE, etc. I have to convert these files to UTF-8.
For conversion I am using the following command:
iconv [options] -f from-encoding -t utf-8 < inputfile > outputfile
But if an incorrect from-encoding is provided, then an incorrect file is generated.
I want a way to find the input file encoding type.
Thanks in advance
On Linux you could try using file(1) on your unknown input file. Most of the time it would guess the encoding correctly. Or else try several encodings to iconv till you "feel" that the result is acceptable (for example if you know that the file is some Russian poetry, you might try KOI-8, UTF-8, etc.... till you recognize a good Russian poem).
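If you want to script that trial-and-error, something along these lines in Java can at least rule candidates out (the candidate list is an assumption, and a clean decode only proves the bytes are valid in that encoding, not that it is the intended one):

import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;
import java.nio.file.Files;
import java.nio.file.Path;

public class TryEncodings {
    public static void main(String[] args) throws Exception {
        byte[] data = Files.readAllBytes(Path.of(args[0]));
        String[] candidates = {"UTF-8", "UTF-16LE", "UTF-16BE", "windows-1252", "KOI8-R"};
        for (String name : candidates) {
            CharsetDecoder decoder = Charset.forName(name).newDecoder()
                    .onMalformedInput(CodingErrorAction.REPORT)
                    .onUnmappableCharacter(CodingErrorAction.REPORT);
            try {
                decoder.decode(ByteBuffer.wrap(data));
                System.out.println(name + ": decodes without error");
            } catch (CharacterCodingException e) {
                System.out.println(name + ": not valid in this encoding");
            }
        }
        // note: single-byte encodings such as windows-1252 accept almost any byte
        // sequence, so "decodes without error" is only a weak hint
    }
}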
But character encoding is a nightmare and can be ambiguous. The provider of the file should tell you what encoding he used (and there is no way to detect that encoding reliably in all cases: some byte sequences are valid in, and interpreted differently by, several encodings).
(Notice that the HTTP protocol can state the encoding explicitly, e.g. via the charset parameter of the Content-Type header.)
In 2017, it is better to use UTF-8 everywhere (and you should follow that http://utf8everywhere.org/ link), so ask your human partners to send you UTF-8 (hopefully most of your files already are in UTF-8, since today they all should be).
(so encoding is more a social issue than a technical one)
I get text files of random encoding format
Notice that "random encoding" don't exist. You want and need to find out what character encoding (and file format) has been used by the provider of that file (so you mean "unknown encoding", not "random" one).
BTW, do you have a formal, unambiguous, sound and precise definition of a text file, beyond a file without zero bytes, or a file with few control characters? LaTeX, C source, Markdown, SQL, UUencoding, shar, XPM, and HTML files are all text files, but very different ones!
You probably want to expect UTF-8, and you might use the file extension as some hint. Knowing the media-type could help.
(so if HTTP has been used to transfer the file, it is important to keep (and trust) the Content-Type...; read about HTTP headers)
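For instance, in Java the declared type can be read directly when fetching the file (the URL is hypothetical):

import java.net.URL;
import java.net.URLConnection;

public class ShowContentType {
    public static void main(String[] args) throws Exception {
        URLConnection connection = new URL("https://example.org/export.txt").openConnection();
        // prints e.g. "text/plain; charset=UTF-8" when the server declares the encoding
        System.out.println(connection.getContentType());
    }
}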
[...] then an incorrect file is generated.
How do you know that the resulting file is incorrect? You can only know if you have some expectations about the result (e.g. that it contains Russian poetry, not junk characters; but perhaps those junk characters are bytecode for some secret interpreter, or music represented in a weird fashion, or encrypted data, etc.). Raw files are just sequences of bytes; you need some extra knowledge to use them (even if you know that they use UTF-8).
We do file encoding conversion with
vim -c "set encoding=utf8" -c "set fileencoding=utf8" -c "wq" filename
It's working fine, with no need to give the source encoding.
I'm downloading some files with Chinese names (BIG5-encoded) via FTP, and FileZilla displays those filenames as garbage (as FTP cannot handle any encoding other than ASCII and UTF-8, at least not the standards-compliant ones).
Given a filename with garbled characters, is it possible for me to repair the encoding and get a proper filename String given that I already know the source encoding? Will the FTP client misinterpreting BIG5 as UTF-8 insert bytes that make conversion back to BIG5 difficult?
My proposed steps (in Java):
1. Get the garbled filename using a File object.
2. Get its bytes using UTF-8 (getBytes).
3. Create a new String from those bytes using BIG5.
4. Write the decoded filename back to the file.
Will the above method work?
Not every sequence of bytes is a valid ASCII or UTF-8 string so it's quite likely that some of the bytes will have been discarded, converted to the replacement character, or otherwise irreversibly mangled. So it looks like you won't be able to retrieve the original filenames if they have been modified by FileZilla to become correctly formed UTF-8 or ASCII.
You might be lucky to be able to get a certain percentage of the original characters back, where they just happened to be both valid BIG5 and valid UTF-8, but I doubt you will be able to recover the entire filename.
You could post a few examples of your garbled filenames (as raw bytes encoded in hex) to get a more definite answer. That way we can see exactly what the damage is.
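As a sketch, here is the proposed round-trip in Java with a check for the damage described above (the charset names follow the question; whether the FTP client really kept the bytes one-to-one is an assumption):

import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class RepairBig5Name {
    public static void main(String[] args) {
        String garbled = args[0];                         // filename as it appears locally
        // step 2: recover the raw bytes, assuming the client stored them as UTF-8
        byte[] raw = garbled.getBytes(StandardCharsets.UTF_8);
        // step 3: reinterpret those bytes as BIG5
        String repaired = new String(raw, Charset.forName("Big5"));
        if (repaired.indexOf('\uFFFD') >= 0) {
            // a replacement character is a sure sign some original bytes are already lost
            System.out.println("Lossy result: " + repaired);
        } else {
            System.out.println("Possible original name: " + repaired);
        }
    }
}

If the check trips, posting the raw bytes in hex, as suggested above, is the better route.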