German Umlaut displayed wrong despite correct Charset - visual-studio-2010

I am encountering a weird problem regarding the encoding of my files.
I have a multilingual site; users can set the language via a dropdown on the site itself, the default being German.
When a user logs in, some settings are applied depending on the language (charset, codepage and LCID). I also want to point out that all my files are ANSI-encoded.
Recently, I had to make some changes.
So I fire up Visual Studio 2010, edit the files in question and upload them to my server using FileZilla.
And now, all of a sudden, the German umlauts (Ää, Öö, Üü, ß) are displayed incorrectly (something like Ã¤) - but only in the files I opened with VS2010.
I checked the charset on the site itself and also printed it with Response.CharSet, and it was ISO-8859-1, which is correct.
So I tried some conversions with Notepad++, but with no success.
I know that setting the charset to UTF-8 would solve this problem, but a) the charset is set from a database value and b) it kind of messes things up in other languages.

You are displaying a UTF-8 encoded file through an ISO-8859-1 view. You would expect to see just one character, so why do you see two instead? Because in UTF-8 the German small 'a with two dots' (ä) is a 2-byte sequence (0xC3 0xA4). If this gets displayed NOT as UTF-8 but as ISO-8859-1 - which means one byte, one character - you get exactly what you described: the start byte 0xC3 as a single ISO-8859-1 character (Ã) and the following byte 0xA4 as another single ISO-8859-1 character (¤). In UTF-8, this 2-byte sequence is decoded by extracting the payload bits of the start byte and the continuation byte like this:
Start byte:    11000011
Continuation:  10100100
The 110 prefix of the start byte gets stripped off, leaving the payload bits 00011.
The 10 prefix of the continuation byte gets stripped off, leaving 100100.
Chained together this becomes 00011100100, i.e. binary 11100100 = decimal 228, which is exactly the Unicode code point of the German 'a with two dots' (U+00E4).
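To see this byte-level round trip for yourself, here is a minimal Python sketch (Python is used purely for illustration; the question itself is about ASP files):

# The two UTF-8 bytes of 'a with two dots' (U+00E4)
data = bytes([0xC3, 0xA4])
print(data.decode("utf-8"))       # ä  - the correct interpretation
print(data.decode("iso-8859-1"))  # Ã¤ - the mojibake from the question

# Manual decoding: mask out the 110 prefix of the start byte and the
# 10 prefix of the continuation byte, then chain the payload bits.
codepoint = ((0xC3 & 0b00011111) << 6) | (0xA4 & 0b00111111)
print(codepoint, chr(codepoint))  # 228 ä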
I recommend leaving the encoding as it is: UTF-8. It is just your viewer/editor that should display UTF-8 encoded files as UTF-8 and not as ISO-8859-1. In other words, configure the viewer's/editor's encoding according to the encoding of the file's content (which in your case is UTF-8, NOT ISO-8859-1).
To convert your files or check them for a certain encoding, just use MadEdit. MadEdit has a built-in hex editor which draws a rectangle around each UTF-8 sequence, displaying just one character on the right side (the decoded code point). This makes it easy to identify single-byte characters and 2/3/4-byte sequences within UTF-8 encoded files. It also marks the 3-byte UTF-8 BOM (if any).
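If you don't have MadEdit at hand, a few lines of Python can perform a similar byte-level check (a rough sketch; the file name is a placeholder):

# Classify each byte of a file by its role in a UTF-8 sequence.
with open("page.asp", "rb") as f:  # placeholder file name
    for b in f.read():
        if b < 0x80:
            kind = "ASCII"
        elif b & 0xC0 == 0x80:
            kind = "continuation byte"
        elif b & 0xE0 == 0xC0:
            kind = "start of 2-byte sequence"
        elif b & 0xF0 == 0xE0:
            kind = "start of 3-byte sequence"
        else:
            kind = "start of 4-byte sequence (or invalid)"
        print(f"0x{b:02X} {kind}")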

Encoding problems have several failure points:
Check template file encoding
Check response encoding
Check database encoding
Check that they are consistent with what you want to output.
Also note that Notepad++ has an "Encode in..." and a "Convert to..." option:
the first reinterprets the file's existing bytes in the specified encoding, while the second reads the file and writes it back in the selected encoding (changing the file's bytes).
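The same distinction expressed in Python terms, as a rough sketch: "Encode in..." corresponds to decoding the existing bytes with a different codec, while "Convert to..." decodes and then re-encodes, producing new bytes:

raw = "ä".encode("iso-8859-1")       # the single byte 0xE4 on disk

# "Encode in ...": reinterpret the same bytes with another charset
print(raw.decode("iso-8859-1"))      # ä - right guess
print(raw.decode("cp1252"))          # ä - same result, the codepages overlap here

# "Convert to ...": decode, then write back in the target encoding
converted = raw.decode("iso-8859-1").encode("utf-8")
print(converted)                     # b'\xc3\xa4' - the file's bytes have changed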

Octal, Hex, Unicode

I have a character appearing over the wire with hex value \xb1 (octal \261).
This is what my header looks like:
From: "\261Central Station <sip#...>"
Looking at the extended ASCII table, the character is "±":
What I don't understand:
If I try to test the same by passing "±Central Station" in the header I see it converted to "\xC2\xB1". Why?
How can I have "\xB1" or "\261" appearing over the wire instead of "\xC2\xB1"?
If I try to print "\xB1" or "\261" I never see "±" being printed. But if I print "\u00b1" it prints the desired character, I'm assuming because "\u00b1" is the Unicode format.
From the page you linked to:
The extended ASCII codes (character code 128-255)
There are several different variations of the 8-bit ASCII table. The table below is according to ISO 8859-1, also called ISO Latin-1.
That's worth reading twice. The character codes 128–255 aren't ASCII (ASCII is a 7-bit encoding and ends at 127).
Assuming that you're correct that the character in question is ± (it's likely, but not guaranteed), your text could be encoded as ISO 8859-1 or, as @muistooshort kindly pointed out in the comments, any of a number of other ISO 8859-X or CP-12XX (Windows-12XX) encodings. We do know, however, that the text isn't (valid) UTF-8, because 0xB1 on its own isn't a valid UTF-8 byte sequence.
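You can verify the "not valid UTF-8" claim directly. A quick Python illustration (the answer itself talks about Ruby, but the byte logic is identical):

data = b"\xb1"
try:
    data.decode("utf-8")
except UnicodeDecodeError as e:
    print("not valid UTF-8:", e)   # 0xB1 is an invalid start byte
print(data.decode("iso-8859-1"))   # ± - one plausible single-byte reading
print(data.decode("cp1252"))       # ± - CP1252 agrees at this position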
If you're lucky, whatever client is sending this text specified the encoding in the Content-Type header.
As to your questions:
If I try to test the same by passing ±Central Station in the header I see it get converted to \xC2\xB1. Why?
The text you're passing is in UTF-8, and the bytes that represent ± in UTF-8 are 0xC2 0xB1.
How can I have \xB1 or \261 appearing over the wire instead of \xC2\xB1?
We have no idea how you're testing this, so we can't answer this question. In general, though: Either send the text encoded as ISO 8859-1 (Encoding::ISO_8859_1 in Ruby), or whatever encoding the original text was in, or as raw bytes (Encoding::ASCII_8BIT or Encoding::BINARY, which are aliases for each other).
If I try to print \xB1 or \261 I never see ± being printed. But if I print \u00b1 it prints the desired character. (I'm assuming because \u00b1 is the Unicode format, but I would love it if someone could explain this in detail.)
That's not a question, but the reason is that \xB1 (\261) is not a valid UTF-8 byte sequence. Some interfaces will print � for invalid characters; others will simply elide them. \u00b1, on the other hand, is a valid Unicode code point, which Ruby knows how to represent in UTF-8.
Brief aside: UTF-8 (like UTF-16 and UTF-32) is a character encoding specified by the Unicode standard. U+00B1 is the Unicode code point for ±, and 0xC2 0xB1 are the bytes that represent that code point in UTF-8. In Ruby we can represent UTF-8 characters using either the Unicode code point (\u00b1) or the UTF-8 bytes (in hex: \xC2\xB1; or octal: \302\261, although I don't recommend the latter since fewer Rubyists are familiar with it).
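In Python the same relationships look like this (purely for illustration; the Ruby escapes from the answer behave the same way):

print("\u00b1")                      # ± - by Unicode code point
print(b"\xc2\xb1".decode("utf-8"))   # ± - the same character from its UTF-8 bytes
print("±".encode("utf-8"))           # b'\xc2\xb1'
assert "\u00b1" == b"\xc2\xb1".decode("utf-8") == b"\302\261".decode("utf-8")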
Character encoding is a big topic, well beyond the scope of a Stack Overflow answer. For a good primer, read Joel Spolsky's "The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)", and for more details on how character encoding works in Ruby read Yehuda Katz's "Encodings, Unabridged". Reading both will take you less than 30 minutes and will save you hundreds of hours of pain in the future.

Special French characters in HTML

French characters in HTML with a UTF-8 charset still display incorrectly. I have a small sample page at ShopAndBind.com/Sample.asp with META HTTP-EQUIV='Content-Type' CONTENT='text/html;charset=utf-8' that still does not display "Véhicules Terrestres à Moteur" correctly, whether the text is in the source or loaded from MySQL data in a database. It displays fine everywhere else. I'm using Visual InterDev 6.0 from Visual Studio 2008 for development. Notepad and KEdit work. The hex in the file is 'E0' and 'E9' for à and é respectively.
The page http://shopandbind.com/Sample.asp is served with HTTP headers that do not specify a character encoding, and the data does not start with a BOM, but it contains a meta tag that specifies UTF-8 as the character encoding. However, the data contains bytes that are invalid in UTF-8. This explains the failure.
The data is in fact in ISO-8859-1 (or a compatible) encoding, as you can see by manually selecting that encoding (often under the name “Western European”) in the View → Encoding menu of your browser. Bytes E0 and E9 denote à and é in ISO-8859-1, but definitely not in UTF-8.
Thus, the minimal fix is to replace UTF-8 with ISO-8859-1 in the meta tag. A better fix might be to make the process that produces the HTML file generate UTF-8 encoded data.
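If you go for the better fix, the conversion itself is a one-off job. A minimal Python sketch (the output file name is a placeholder):

# Read the ISO-8859-1 bytes and write them back out as UTF-8.
with open("Sample.asp", encoding="iso-8859-1") as src:
    text = src.read()
with open("Sample.utf8.asp", "w", encoding="utf-8") as dst:
    dst.write(text)
# Bytes 0xE0 and 0xE9 become 0xC3 0xA0 and 0xC3 0xA9, now matching
# the charset=utf-8 declared in the meta tag.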

Find out character encoding of straße

I'm struggling with the encoding of content from an external interface. In the MySQL database the collation is latin1_swedish_ci; the collation of the field is also latin1_swedish_ci. The PHP script is encoded in UTF-8 and the output in the browser is UTF-8. Everything is working fine except the content of this database. The database connection should be UTF-8 (TYPO3 4.7) and the content is
straße
but it should be straße.
mb_detect_encoding($data['street'],'UTF-8') says it is UTF-8. If I use utf8_decode() I get
stra�?e
If I use utf8_encode() I get
straße
My assumption was that UTF-8 encoded data was stored as ISO-8859-1, but if that were the case it shouldn't cause such problems here. How do I find out what the real encoding is?
PS: I cannot change the encoding of the source!
My solution to my initial problem:
I had to switch the database connection from UTF-8 to ISO-8859-1 with this line of code:
$res = $GLOBALS['TYPO3_DB']->sql_query("SET NAMES latin1");
The character ß 'LATIN SMALL LETTER SHARP S' (U+00DF) exists in UTF-8 as the bytes 0xC3 0x9F, as per the linked site:
UTF-8 (hex) 0xC3 0x9F (c39f)
If we look at the ISO-8859-1 codepage layout, those bytes represent the character Ã and a character not defined in ISO-8859-1 at all, so ISO-8859-1 is not it. Another common character encoding which has some overlap with ISO-8859-1 is Windows CP1252 (also known as ANSI, used by default when saving a text file in Notepad; overridable by using Save As instead). If we look at the CP1252 codepage layout, those bytes represent the characters Ã and Ÿ, which confirms what you're initially retrieving.
So, it's most likely CP1252 encoded.
What you see as “ÃŸ” is really the Windows-1252 (also known as CP1252) interpretation of the two bytes 0xC3 and 0x9F that constitute the UTF-8 encoding of “ß”. This means the data is actually UTF-8 encoded and just gets misinterpreted as Windows-1252 encoded. So I think it should simply be processed as UTF-8, with due precautions.
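A quick way to confirm this diagnosis, sketched in Python (the same round trip works in any language): encode the garbled string back to bytes as CP1252, then decode those bytes as UTF-8:

garbled = "stra\u00c3\u0178e"   # "straÃŸe" as retrieved from the database
fixed = garbled.encode("cp1252").decode("utf-8")
print(fixed)                    # straße
# If this round trip succeeds, the bytes were UTF-8 all along and were
# merely being displayed through CP1252 glasses.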
I recommend that you verify which charset is being used by your SQL connection. It is NOT necessarily the same as the charset you defined for your database.
FROM PHP
// Opens a connection to a MySQL server (legacy mysql_* API)
$connection = mysql_connect($server, $username, $password);
// Report the charset the client connection is currently using
$charset = mysql_client_encoding($connection);
// Force the connection charset to utf8
$flagChange = mysql_set_charset('utf8', $connection);
echo "The character set is: $charset<br/>mysql_set_charset result: $flagChange<br/>";
INSIDE PHPMYADMIN
open database information_schema
open table schemata
check out your MySQL default collation
You may or may not be able to change these parameters, depending on user privileges.
As shown above, I solved my conflicting character set problems in MySQL by appending the following line to my connection.php file (which I call at the beginning of every page that uses DB access):
$flagChange = mysql_set_charset('utf8', $connection);
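If you need the same check outside PHP, here is an equivalent in Python (an illustration using the mysql-connector-python package; the connection parameters are placeholders):

import mysql.connector

# Placeholder credentials - adjust for your setup.
conn = mysql.connector.connect(host="localhost", user="user",
                               password="secret", database="mydb",
                               charset="utf8")  # set the connection charset explicitly
cur = conn.cursor()
cur.execute("SHOW VARIABLES LIKE 'character_set_connection'")
print(cur.fetchone())  # confirm what the connection actually uses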

How to force Firefox addon charset/encoding?

I'm having trouble with a mobile addon: it shows the new elements added by script in a different charset than the page. E.g. I can read "cuadrúpedo", but the same word in my plugin shows "cuadr¡pedo".
I tried writing the following line at the beginning of my addon, but it didn't work:
document.getElementsByTagName("html")[0].setAttribute("lang", "es");
Then I wrote a "converter function" which replaces the special characters with Unicode escapes, like the following line, but that didn't work either:
str.replace( /ú/g, "\xfa" );
What can I do?
Probably it's a matter of text encoding.
Make sure the file that contains the literal "cuadrúpedo" is saved as UTF-8, not ANSI.
Keep in mind that a few key files must be ANSI encoded: install.rdf, chrome.manifest and bootstrap.js. In those files, use Unicode escapes instead: "cuadr\u00fapedo".
When a JavaScript file is loaded from a chrome:// URL (in Gecko 1.8 and later), a Byte Order Mark is used to determine its character encoding; otherwise, the character encoding will be the same as the one used by the XUL file. Another option is to declare the character encoding as part of the HTTP Content-Type header, for example:
Content-Type: application/javascript; charset=UTF-8
For cross-version compatibility you must limit yourself to ASCII. However, you can use Unicode escapes; the earlier example rewritten using them would be:
var text = "Ein sch\u00F6nes Beispiel eines mehrsprachigen Textes: \u65E5\u672C\u8A9E";
JavaScript and Navigator support for UTF-8/Unicode means you can use non-Latin, international, and localized characters, plus special technical symbols in JavaScript programs. Unicode provides a standard way to encode multilingual text: since the UTF-8 encoding of Unicode is compatible with ASCII, programs can use ASCII characters. To receive non-ASCII character input, the client needs to send the input as Unicode.
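The ASCII-compatibility point is easy to demonstrate; a small Python illustration (this same property is what makes the escape technique above safe):

ascii_bytes = "Beispiel".encode("ascii")
utf8_bytes = "Beispiel".encode("utf-8")
assert ascii_bytes == utf8_bytes  # pure ASCII text is byte-identical under UTF-8
print("sch\u00F6nes")             # schönes - the escape keeps the source file ASCII-only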
There is a webpage for text escaping and unescaping in JavaScript:
http://0xcc.net/jsescape/
Sources:
https://developer.mozilla.org/en-US/docs/International_characters_in_XUL_JavaScript
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Values,_variables,_and_literals#Unicode

Different querystring urlencoding based on codepage. ASP classic

We are currently converting our webapp from ISO-8859-1 to UTF-8. Everything works great except for the GET/POST variables we receive from other sites (signup forms).
Some of the sites that post to our site use ISO-8859-1 encoding and some use UTF-8.
The problem is that special characters get URL-encoded differently depending on the site's charset.
For example:
ø = %F8 in ISO-8859-1
ø = %C3%B8 in UTF-8
I can't get %F8 decoded right when I have a UTF-8 charset; I only get the Unicode 'REPLACEMENT CHARACTER' (U+FFFD).
Any tips on how to fix this would be much appreciated:)
Torbjørn
You can specify the encoding explicitly using <form accept-charset="UTF-8">.
If you don't want to do that, the browser has to guess the encoding you want. For that it usually takes the encoding of the page in which the form is. So if you serve the HTML files as UTF-8 your forms will be sent back as UTF-8, too.
I'd suggest doing a pre-analysis of the inputs before converting them. Essentially, scan for the ISO-8859-1 codes for Æ, Ø and Å (upper and lower case). If you find any, do a search/replace over the entire request, swapping the ISO-8859-1 char codes for the corresponding UTF-8 byte sequences, as sketched below.
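One way to implement that fallback, sketched in Python (classic ASP would need equivalent byte handling; the parameter values are made up):

from urllib.parse import unquote_to_bytes

def decode_param(raw):
    # %F8 and %C3%B8 both arrive as percent-encoded raw bytes.
    data = unquote_to_bytes(raw)
    try:
        return data.decode("utf-8")        # sites that posted UTF-8
    except UnicodeDecodeError:
        return data.decode("iso-8859-1")   # fall back for legacy senders

print(decode_param("Torbj%F8rn"))    # Torbjørn (ISO-8859-1 sender)
print(decode_param("Torbj%C3%B8rn")) # Torbjørn (UTF-8 sender)
# Caveat: some ISO-8859-1 byte runs are also valid UTF-8, so the
# heuristic can guess wrong in rare cases.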
