I was trying to validate extended characters, and I see that on various PCs the extended characters sit in different places in the table. I mean, we do not see the same code number for a certain character (outside the basic Latin range).
My issue is: what do I have to do so that my program always uses one particular code table when it starts? For the extended characters, of course.
This issue generally relates only to reading and writing text files (since .NET strings are UTF-16). In that case, just use Encoding.GetEncoding(codePage) to choose the appropriate encoding, and use it whenever you access text files. All the standard built-in text/file utility operations take an encoding, for example:
string contents = File.ReadAllText("foo.txt", encoding);
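As a minimal sketch (the file names and code page 1253, Windows Greek, are only assumptions for illustration; substitute whichever code page your data actually uses), you can pin the program to one code page and use it for every read and write:

using System.IO;
using System.Text;

class Program
{
    static void Main()
    {
        // Assumed for illustration: code page 1253 (Windows Greek).
        // Pick the code page your data uses, or better, standardise on UTF-8.
        Encoding encoding = Encoding.GetEncoding(1253);

        string contents = File.ReadAllText("foo.txt", encoding);   // decode bytes -> string
        File.WriteAllText("foo-copy.txt", contents, encoding);     // encode string -> bytes
    }
}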
In Windows, a normal text file can be saved with the .txt extension.
At the same time, people also use the .text extension to save normal text files. What is the difference between a normal text file with the .txt extension and one with the .text extension?
A file extension is not a file format; it is just part of the name, used by convention to indicate what is in the file and what tool and options should be used to interact with it.
Some old systems - notably MS-DOS, and Windows before Windows 95 - had a fixed format for filenames allowing up to eight characters, a dot, then up to three characters. So there are many conventions for indicating file types in three characters. That restriction is now rarely relevant, so more recent conventions will often use longer extensions to be more obvious and less ambiguous.
".txt" is simply the conventional way to indicate "this is intended to be read as plain text" in three letters. ".text" indicates the same thing, but less abbreviated when not restricted to "8.3" filenames.
Neither name tells you any more about the file. For instance, to actually read the text you need to know what character encoding was intended. ASCII is a common baseline, sufficient for English text with simple punctuation, but there are a number of encodings which extend it (e.g. the ISO 8859 series, UTF-8), and also a number which are incompatible with it (e.g. UTF-16, many Chinese and Japanese encodings).
I would like to test some file character encoding detection functionality, where I input files of type UTF-8, windows-1252, ISO-8859-1, etc.
I also want to input files with unknown character encoding so that the user can be alerted.
I haven't found a good way to create files with an unknown or undetectable character encoding.
head -c1024 /dev/random > /tmp/badencoding
This is almost certainly what you want in practice (1kB of random data), but there isn't really a good definition of "undetectable character encoding." This random file is legal 8-bit ASCII. The fact that it certainly is not meant to be 8-bit ASCII is just a heuristic. So all you're going to wind up doing is testing that your algorithm works in ways that your users probably want it to; there is no ultimate "correct" here without reading the mind of the person who created the file.
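If you would rather generate the same kind of test file from code, a rough C# equivalent (the output path is just an example) is:

using System;
using System.IO;

class RandomTestFile
{
    static void Main()
    {
        var bytes = new byte[1024];
        new Random().NextBytes(bytes);                  // 1 kB of arbitrary binary data
        File.WriteAllBytes("badencoding.bin", bytes);   // example output path
    }
}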
An empty text file has an undetectable character encoding (except if it has a Unicode BOM).
But basically, you either have to require the user to tell you which character encoding the file they are giving you uses, or tell them which one to use (or both, if you specify a default but allow it to be overridden, which is what many compilers do).
You can then test the contents for validity against the agreed character encoding. This will catch some errors, but note that many character encodings allow any sequence of bytes, so any content is always valid for them (even if that encoding is not the one that was used to write the file).
You can then test for consistency with expected values, such as expected syntax or allowable characters or words, to catch more errors (though you would not necessarily be able to say the character encoding did not match; it could just be that the content is incorrect).
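For the validity check, a strict decoder fallback makes invalid byte sequences throw instead of being silently replaced. A minimal C# sketch (IsValidFor is a made-up helper name); note that for single-byte encodings such as windows-1252 every byte is valid, so this only really rejects data for encodings like UTF-8:

using System;
using System.Text;

static class EncodingCheck
{
    // Returns true if the bytes decode cleanly under the named encoding.
    public static bool IsValidFor(byte[] data, string encodingName)
    {
        Encoding strict = Encoding.GetEncoding(
            encodingName,
            new EncoderExceptionFallback(),
            new DecoderExceptionFallback());
        try
        {
            strict.GetString(data);
            return true;
        }
        catch (DecoderFallbackException)
        {
            return false;
        }
    }
}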
To create files with different character encodings, you could write a program or use a 3rd-party program such as iconv or PowerShell.
If you want an unknown character encoding, just generate a random integer map, convert a file with it, discard the map, and then not even you will know what it was.
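A rough C# sketch of both ideas (the sample text and file names are made up; on .NET Core / .NET 5+ the non-Unicode code pages also require registering CodePagesEncodingProvider from the System.Text.Encoding.CodePages package):

using System;
using System.IO;
using System.Linq;
using System.Text;

class MakeTestFiles
{
    static void Main()
    {
        string text = "Some accented sample text: é à ő ű";

        // The same text saved under several well-known encodings.
        foreach (string name in new[] { "utf-8", "windows-1252", "iso-8859-2" })
            File.WriteAllText($"sample-{name}.txt", text, Encoding.GetEncoding(name));

        // An "unknown" encoding: push the bytes through a throwaway random
        // permutation of 0..255, then discard the permutation.
        byte[] map = Enumerable.Range(0, 256)
                               .Select(i => (byte)i)
                               .OrderBy(_ => Guid.NewGuid())
                               .ToArray();
        byte[] scrambled = File.ReadAllBytes("sample-utf-8.txt")
                               .Select(b => map[b])
                               .ToArray();
        File.WriteAllBytes("sample-unknown.txt", scrambled);
    }
}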
Ultimately, text files are too technical for users to deal with. Give them some other option such as an open document or spreadsheet format such as .odt, .docx, .ods, or .xlsx. These are very easy to read by programs.
I would like to translate a game, this game loads the strings from a text file.
The destination language uses non-ASCII characters, so I naïvely saved my file as UTF-8, but it does not work: letters with diacritics are not shown correctly.
Looking more closely at the configuration file where the strings filename is stored, I found a CHARSET option that can take any of these values:
ANSI_CHARSET DEFAULT_CHARSET SYMBOL_CHARSET MAC_CHARSET SHIFTJIS_CHARSET HANGEUL_CHARSET JOHAB_CHARSET GB2312_CHARSET CHINESEBIG5_CHARSET GREEK_CHARSET TURKISH_CHARSET VIETNAMESE_CHARSET HEBREW_CHARSET ARABIC_CHARSET BALTIC_CHARSET RUSSIAN_CHARSET THAI_CHARSET EASTEUROPE_CHARSET OEM_CHARSET
As far as I understood, these are fairly standard values in the Windows APIs, and charset and character encoding are synonymous here.
So my question is: is there a correspondence between these names and standard names like UTF-8 or ISO-8859-2? If so, what is it?
Try using EASTEUROPE_CHARSET
ISO 8859-2 is mostly equivalent to Windows-1250. According to this MSDN article, the 1250 code page is accessed using EASTEUROPE_CHARSET.
Note that you will need to save your text file in the 1250 code page as ISO 8859-2 is not exactly equivalent. From Wikipedia:
Windows-1250 is similar to ISO-8859-2 and has all the printable characters it has and more. However a few of them are rearranged (unlike Windows-1252, which keeps all printable characters from ISO-8859-1 in the same place). Most of the rearrangements seem to have been done to keep characters shared with Windows-1252 in the same place as in Windows-1252 but three of the characters moved (Ą,Ľ,ź) cannot be explained this way.
The names are symbolic identifiers for Windows code pages, which are character encodings (= charsets) defined or adopted by Microsoft. Many of them are registered at IANA with the prefix windows-. For example, EASTEUROPE_CHARSET stands for code page 1250, which has been registered as windows-1250 and is often called Windows Latin 2.
UTF-8 is something different. You need special routines to read and write UTF-8 encoded data. UTF-8 or UTF-16 is generally the only sensible choice for character encoding when you want to be truly global (support different languages and writing systems). For a single specific language, some of the code pages might be more practical in some cases.
You can get the standard encoding names (as registered with IANA) from the table under the Remarks section of this MSDN page.
Just find the Character set row and read the Code page number; the standard name is windows-[code page number].
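As a sketch (the file names are assumptions, and on .NET Core / .NET 5+ you must register CodePagesEncodingProvider from the System.Text.Encoding.CodePages package before the legacy code pages are available), this converts a UTF-8 strings file to code page 1250 for EASTEUROPE_CHARSET and shows that windows-1250 and ISO 8859-2 place the same character at different bytes:

using System;
using System.IO;
using System.Text;

class ConvertStrings
{
    static void Main()
    {
        // Needed on .NET Core / .NET 5+ for code pages such as 1250:
        // Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);

        Encoding cp1250 = Encoding.GetEncoding(1250);   // EASTEUROPE_CHARSET -> windows-1250

        // Re-save the translation in the code page the game expects (file names are examples).
        string text = File.ReadAllText("strings.utf8.txt", Encoding.UTF8);
        File.WriteAllText("strings.txt", text, cp1250);

        // Similar but not identical encodings: a few characters sit at different byte values.
        foreach (Encoding enc in new[] { cp1250, Encoding.GetEncoding("iso-8859-2") })
            Console.WriteLine($"{enc.WebName}: {BitConverter.ToString(enc.GetBytes("Ą"))}");
    }
}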
I'm currently building a hash key string (collapsed from a map) where the values are delimited by the special ASCII unit separator character 31 (0x1F).
This nicely solves the problem of trying to guess which ASCII characters won't be used in the string values, and I don't need to worry about escaping or quoting values, etc.
However, reading about the history of this, it appears to be a relic from the 1960s, and I haven't seen many examples where strings are built and tokenised using this special character, so it all seems too easy.
Are there any issues to using this delimiter in a modern application?
I'm currently doing this in a non-Unicode C++ application, however I'm interested to know how this applies generally in other languages such as Java, C# and with Unicode.
The lower 128 characters of ASCII are set in stone in the Unicode standard, including characters 0 through 31. The only reason you don't often see the ASCII control characters used in strings is human-interfacing limitations: they do not visualize well (if at all) when displayed on screen or written to a file, and you can't easily type them on a keyboard either. They are also not allowed in various popular 'human readable' file formats; XML 1.0, for instance, rejects most control characters even as character references.
For logical processing tasks within a program that do not need end-user interaction, however, they are perfectly suitable for whatever use you can find for them. Your particular use sounds novel and efficient and I think you should definitely run with it.
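A minimal C# sketch of the idea (the field values are made up); U+001F is encoded identically in ASCII, UTF-8, and UTF-16, so it round-trips cleanly between them:

using System;

class UnitSeparatorDemo
{
    const char UnitSeparator = '\u001F';   // ASCII 31, "US"

    static void Main()
    {
        string[] fields = { "warehouse-7", "bin 42", "SKU 123456" };   // example values

        // Build the key: values joined by the unit separator; no escaping needed
        // as long as the values themselves never contain 0x1F.
        string key = string.Join(UnitSeparator.ToString(), fields);

        string[] back = key.Split(UnitSeparator);
        Console.WriteLine(string.Join(" | ", back));
    }
}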
Your application is free to accept whatever binary format it pleases. However, if you need to embed arbitrary binary data in your input, you need to escape whatever delimiters or other special codes your format uses. This is true regardless of which ones you choose.
I'd also not ignore Unicode. It's 2012, by now it's rather silly to work with an outdated model for dealing with text. If your input data is textual, handle it as such.
The one issue that comes to mind is why invent another format instead of using XML or JSON; or if you need a compact encoding, a "binary" variant of those two (Fast Infoset, msgpack, who knows what else), or ASN.1? There's probably a whole bunch of other issues that you'll encounter when rolling your own that the design and tooling for those formats already solved.
I work with barcodes in a warehouse setting. We use ASCII code 31 as a field separator so that a single scan can populate multiple data fields. So consider the ramifications if you think your hash key could end up on a barcode.
Objective : To have multi language characters in the user id in Enovia v6
I am using UTF-8 encoding in a Tcl script, and it seems it saves multi-language characters properly in the database (after some conversion). But in the UI I literally see the information as it was saved in the database.
When doing the same exercise through Power Web, the saved data somehow gets converted back into the proper multi-language characters and displays properly.
Am I missing something in the Tcl approach?
Here is one example to help understand the issue better.
Original Name: Kátai-Pál
Name saved in database as: Kátai-Pál
In UI I see name as: Kátai-Pál
In Tcl I use the syntax below:
set encoded [encoding convertto utf-8 Kátai-Pál];
Now user name becomes: Kátai-Pál
In UI I see name as “Kátai-Pál”
The trick is to think in terms of characters, not bytes. They're different things. Encodings are ways of representing characters as byte sequences (internally, Tcl's really quite complicated, but you shouldn't ever have to care about that if you're not developing Tcl's implementation itself; suffice to say it's Unicode). Thus, when you use:
encoding convertto utf-8 "Kátai-Pál"
You're taking a sequence of characters and asking for the sequence of bytes (one per result character) that is the encoding of those characters in the given encoding (UTF-8).
What you need to do is to get the database integration layer to understand what encoding the database is using so it can convert back into characters for you (you can only ever communicate using bytes; everything else is just a simplification). There are two ways that can happen: either the information is correctly shared (via metadata or defined convention), or both sides make assumptions which come unstuck occasionally. It sounds like the latter is what's happening, alas.
If you can't handle it any other way, you can take the bytes produced out of the database layer and convert into characters:
encoding convertfrom $theEncoding $theBytes
Working out what $theEncoding should be is in general very tricky, but it sounds like it's utf-8 for you. Once you've got characters, Tcl/Tk will be able to display them correctly; it knows how to transfer them correctly into the guts of the platform's GUI. (And in scripts that you actually write, you're best off replacing non-ASCII characters with their \uXXXX escapes, because platforms don't agree on what encoding is right to use for scripts. Alas.)