I am printing a character variable to a PDF via the standard GeneXus reporting procedure, but when I try to print Cyrillic characters, they are skipped or replaced by question marks. The characters are saved correctly in the record; it is only the PDF printout that does not show them. Do I need to change anything in the pdfreport.ini file? Or are there other ways?
I solved it by setting the codepage in my language object (Italian, in my case) to 1251 instead of 1252. This makes the report use the windows-1251 encoding, which includes Cyrillic characters. For some reason I also had to set the printed variable's font to Microsoft Sans Serif.
I have already set UTF-8 as the overall encoding:
When I set the encoding to 936 (the code page for GBK, i.e. Chinese), Chinese characters are displayed properly.
When I change the encoding to 65001 (UTF-8, which covers all characters), Chinese characters are displayed improperly.
My question
How to make Windows cmd handle all characters properly?
I don't want to use GBK, which only handles Chinese; that would mean switching to another encoding whenever I handle other languages (e.g. Japanese or Korean). So I want to switch everything to UTF-8 and get out of this encoding hell.
I'm working on a .bat program, and the program is written in Finnish. The problem is that CMD doesn't know these "special" letters, such as Ä, Ö, Å.
Is there a way to make those work? I'd also like the user to be able to use those letters.
Part of my code:
@echo off
title JustATestProgram
goto test123
:test123
echo Letters : Ää Öö Åå
pause
exit
When I run this file, the letters come out as garbled symbols instead of Ä, Ö and Å.
Try putting this line at the top of the batch file:
chcp 65001
It should change the console encoding to UTF-8, and you should be able to read the file properly in the script after that.
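For the Finnish batch file above, a minimal sketch could look like this (assuming the .bat file itself is saved as UTF-8, ideally without a BOM; the >nul just hides the "Active code page" message):
@echo off
chcp 65001 >nul
title JustATestProgram
echo Letters : Ää Öö Åå
pause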
Theoretically you just need to use the /u (Unicode) switch:
c:\>cmd /u
Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.
c:\>echo Ä
Ä
If you use Notepad++, you can simply change the character set; the Western European (US) one should support these letters. You can do it from a drop-down menu in Notepad++, or by hand by running chcp 437. I recommend doing it in Notepad++, as it shows you the output as it will appear in the batch file, so you can easily see whether you are using the right code page - and it's easy to switch if you want more special symbols. You can also, as stated in previous posts, try UTF-8.
You can read more about this here: http://ss64.com/nt/chcp.html, and here is a list of the different code pages (check out the OEM ones): Code Page Identifiers.
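For reference, checking and switching the active code page at the prompt looks like this (chcp without arguments prints the current page):
c:\>chcp
Active code page: 437
c:\>chcp 65001
Active code page: 65001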
The command prompt uses DOS encoding. Windows uses ANSI or Unicode.
PS: I'm assuming you are in the US with code page 437, rather than international English/Western European 850.
So I used Character Map to get the DOS codes, and then found out which ANSI characters those codes map to.
This is the Notepad file's contents:
echo Ž„™”†
It was made by putting the DOS codes for your characters into Notepad: 0142, 0132, 0153, 0148, 0143, 0134, which display as the ANSI characters above (0143 has no printable ANSI glyph, which is why only five symbols are visible).
Command prompt output
C:\Windows\system32>echo ÄäÖöÅå
ÄäÖöÅå
Alt + Character Code
Holding down Alt and typing the character code on the numeric keypad enters that character. The keyboard language in use must support entering that character: if it does, the code is shown on the right-hand side of the status bar in Character Map, otherwise that section of the status bar is empty. The status bar is also empty for characters with well-known keys, like the letters A to Z.
However, there are two ways of entering codes. The point to remember is that the characters are the same for the first 127 codes. The difference is whether the first digit typed is a zero or not: if it is, the code inserts the character from the current ANSI character set; otherwise it inserts the character from the OEM character set. Codes over 255 enter the Unicode character and are in decimal. Characters entered are converted to OEM for DOS applications, and to either ANSI or Unicode depending on the Windows application. See Converting Between Decimal and Hexadecimal.
E.g., Alt + 0 then 6 then 5, then releasing Alt, enters the letter A.
From Shortcut Keys and Key Modifiers by Me at https://1drv.ms/f/s!AvqkaKIXzvDieQFjUcKneSZhDjw
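A concrete illustration of the zero-prefix rule (assuming US code pages, OEM 437 and ANSI 1252): Alt + 132 inserts ä, character 132 in the OEM set, while Alt + 0132 inserts „, character 132 (0x84) in the ANSI set - exactly the DOS-to-ANSI mapping used in the echo example above.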
I am encountering a weird problem regarding the encoding of my files.
I have a site which is multilingual; users can set the language via a dropdown on the site itself, the default being German.
When the user logs in, some settings are set depending on the language (charset, codepage and LCID). At this point I also want to point out that all my files are ANSI-encoded.
Recently, I had to make some changes.
So I fire up Visual Studio 2010, edit the files in question and upload them to my server using FileZilla.
And now, all of a sudden, the German umlauts (Ää, Öö, Üü, ß) are displayed incorrectly (something like Ã¤) - but only in the files I opened with VS2010.
I checked the charset on the site itself, and also by displaying it with Response.CharSet, and it was ISO-8859-1, which is correct.
So I tried some converting with Notepad++, but with no success.
I know that setting the charset to UTF-8 would solve this problem, but a) the charset is set from a database value, and b) it kind of messes things up in other languages.
You are displaying a UTF-8 encoded file with an ISO-8859-1 view. Normally you would want to see just one character, so why do you see two instead? Because in UTF-8 the German small 'ä' (a with two dots) is a 2-byte sequence (0xC3 0xA4). If this gets displayed not as UTF-8 but as ISO-8859-1 - which means one byte per character - you get exactly what you described: the start byte 0xC3 as one single ISO-8859-1 character (Ã) and the following byte 0xA4 as another single ISO-8859-1 character (¤). In UTF-8 this 2-byte sequence is decoded by extracting the payload bits of the start byte and the following byte, like this:
Start byte: 11000011
Following: 10100100
The 110 prefix of the start byte is stripped off, leaving 00011.
The 10 prefix of the following byte is stripped off, leaving 100100.
Chained together this becomes 11100100, which is decimal 228 - the Unicode codepoint of the German 'ä'.
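To make the arithmetic concrete, here is a minimal sketch in Java (the class name is illustrative) that decodes the same two bytes both ways and redoes the prefix-stripping by hand:
import java.nio.charset.StandardCharsets;

public class Utf8DecodeDemo {
    public static void main(String[] args) {
        byte[] bytes = { (byte) 0xC3, (byte) 0xA4 }; // UTF-8 bytes of 'ä'

        // Decoded as UTF-8: one character
        System.out.println(new String(bytes, StandardCharsets.UTF_8)); // ä

        // Decoded as ISO-8859-1: one character per byte
        System.out.println(new String(bytes, StandardCharsets.ISO_8859_1)); // Ã¤

        // The bit arithmetic from above: strip the 110 and 10 prefixes, chain the payloads
        int codepoint = ((0xC3 & 0x1F) << 6) | (0xA4 & 0x3F);
        System.out.println(codepoint); // 228 (U+00E4)
    }
}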
I recommend leaving the encoding as it is: UTF-8. It is the viewer/editor that should display UTF-8 encoded files as UTF-8 and not as ISO-8859-1. Configure your viewer/editor for UTF-8 - in other words, configure the viewer's/editor's encoding according to the encoding of the file's content (which in your case is UTF-8, NOT ISO-8859-1).
To convert your files, or to check them for a certain encoding, just use MadEdit. MadEdit has a built-in hex editor which draws a rectangle around each UTF-8 sequence, displaying just one character (the decoded codepoint) on the right side. This makes it easy to identify single-byte characters and 2/3/4-byte sequences within UTF-8 encoded files. It also draws a rectangle around the 3-byte UTF-8 BOM (if any).
Encoding problems have several failure points:
Check template file encoding
Check response encoding
Check database encoding
Check that they are coherent with what you want to output.
Also note that Notepad++ has an "Encode in..." and a "Convert to..." menu: the first one reinterprets (reads) the file using the specified encoding, while the second one reads the file and writes it back in the selected encoding (changing the file).
I have a string containing some special characters, like "\u2012" (i.e. FIGURE DASH). When I try to print it to the console, I get a '?' mark instead of the symbol. I have an editor in which I can insert the symbol using Alt+numpad, like Alt+2012. In the editor I can see the symbol; but when I save it in an XML file and get the value using nodeValue, I get a '?' mark.
To summarize, I am having trouble reading the Latin Extended-A charset. What I need is this: when I insert such symbols and read them back, I should get something like &#xXXXX;.
Please help!
TIA :)
Put simply: I have a String inpath = "À"; and I want to get its Unicode value, like &#xXXXX;.
The default console encoding in Windows is some MS-DOS code page, and those don't support the character. You can try running chcp 65001 before running the program, but you might need to change the console font as well.
You don't need to do anything you wouldn't do with any other character, as long as you use UTF-8 - and you aren't doing that in many places. You need to explicitly write your code to save and read the file in UTF-8, and not rely on the platform default encoding.
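A minimal sketch of what that can look like (the file name and the toXmlEscapes helper are illustrative, not from the answer above): write and read the file explicitly as UTF-8, then turn non-ASCII characters into &#xXXXX; references:
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class Utf8RoundTrip {
    public static void main(String[] args) throws IOException {
        Path file = Paths.get("symbols.txt"); // illustrative file name

        // Save explicitly as UTF-8 instead of relying on the platform default
        Files.write(file, "\u2012 and \u00C0".getBytes(StandardCharsets.UTF_8));

        // Read it back explicitly as UTF-8
        String text = new String(Files.readAllBytes(file), StandardCharsets.UTF_8);

        System.out.println(toXmlEscapes(text)); // prints: &#x2012; and &#xC0;
    }

    // Turn every non-ASCII char into a &#xXXXX; numeric character reference
    static String toXmlEscapes(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            if (c > 127) sb.append(String.format("&#x%X;", (int) c));
            else sb.append(c);
        }
        return sb.toString();
    }
}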
I'm having trouble with a mobile add-on: it shows the new elements added via scripting with a different charset than the page. E.g. I can read "cuadrúpedo" in the page, but the same word in my plugin shows "cuadr¡pedo".
I tried writing the following line at the beginning of my add-on, but it didn't work:
document.getElementsByTagName("html")[0].setAttribute("lang", "es");
Then I wrote a "converter function" which replaces the special characters with Unicode escapes, like the next line, but that didn't work either:
str.replace( /ú/g, "\xfa" );
What can I do?
It's probably a matter of text encoding.
Make sure the file that contains the literal "cuadrúpedo" is saved as UTF-8, not ANSI.
Keep in mind that a few key files must stay ANSI-encoded: install.rdf, chrome.manifest and bootstrap.js. In those files, use Unicode escapes instead: "cuadr\u00fapedo".
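If there are many such literals, a small helper can generate the escapes (a sketch; the function name toUnicodeEscapes is illustrative):
function toUnicodeEscapes(s) {
    // Replace every non-ASCII character with its \uXXXX escape
    return s.replace(/[\u0080-\uFFFF]/g, function (c) {
        return "\\u" + ("0000" + c.charCodeAt(0).toString(16)).slice(-4);
    });
}

// toUnicodeEscapes("cuadrúpedo") returns "cuadr\u00fapedo"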
When a JavaScript file is loaded from a chrome:// URL (in Gecko 1.8 and later), a Byte Order Mark is used to determine the character encoding of the script. Otherwise, the character encoding will be the same as the one used by the XUL file. So one solution is for the HTTP header to contain a character encoding declaration as part of the Content-Type header, for example:
Content-Type: application/javascript; charset=UTF-8
For cross-version compatibility you must limit yourself to ASCII. However, you can use Unicode escapes - the earlier example rewritten using them would be:
var text = "Ein sch\u00F6nes Beispiel eines mehrsprachigen Textes: \u65E5\u672C\u8A9E";
JavaScript and Navigator support for UTF-8/Unicode means you can use non-Latin, international, and localized characters, plus special technical symbols in JavaScript programs. Unicode provides a standard way to encode multilingual text: since the UTF-8 encoding of Unicode is compatible with ASCII, programs can use ASCII characters. To receive non-ASCII character input, the client needs to send the input as Unicode.
There is a webpage for text escaping and unescaping in JavaScript:
http://0xcc.net/jsescape/
Sources:
https://developer.mozilla.org/en-US/docs/International_characters_in_XUL_JavaScript
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Values,_variables,_and_literals#Unicode