Extended ASCII code above 255

I was always under the assumption ASCII codes ranged from 0 to 255. Last night I had to deal with a character that I thought was an underscore but turned out to be Chr(8230). Three little dots resembling an underscore. This was in an AutoHotKey script. Problem solved but it left me with questions.
I found a table with Chr(8230) and more.
http://www.cjboco.com/blog.cfm/post/table-of-ascii-characters-and-symbols-for-coldfusion/
There's a vague reference to these codes and ColdFusion, which just added to the Confusion.
Out of curiosity, what are the codes above 255 referred to as, and are there more tables like this? I know they are not Extended ASCII (128 to 255) but can't find any reference to them other than the above chart.
A simple name will be enough. I'm a retired tech with limited programming and internet searching abilities and really don't care if a question like this is beneath some here. If it ruffles a few feathers then so be it, the voting system here is absolutely meaningless to me. :)

That sounds like a Unicode character. Unicode is a character set (with multi-byte encodings) that accommodates many characters from different languages that the standard ASCII character set cannot represent. Decimal 8230 is hex 2026, the HORIZONTAL ELLIPSIS character: http://www.fileformat.info/info/unicode/char/2026/index.htm
Here is the Wikipedia entry on Unicode: https://en.wikipedia.org/wiki/Unicode
Here's one of many Unicode tables available online: http://unicode-table.com/en/#control-character
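If you want to check a character like this yourself, here is a quick sketch in Python (Python is used purely for illustration; AutoHotkey's Chr() works with the same Unicode numbers in its Unicode builds):

    import unicodedata

    ch = chr(8230)                 # decimal 8230 == hex 0x2026
    print(ch)                      # …
    print(hex(ord(ch)))            # 0x2026
    print(unicodedata.name(ch))    # HORIZONTAL ELLIPSIS
    print(ch.encode("utf-8"))      # b'\xe2\x80\xa6' -- three bytes in UTF-8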

Related

What are Unicode codepoint types for?

I recently read the UTF-8 Everywhere manifesto, a document arguing for handling text with UTF-8 by default. The manifesto argues that Unicode codepoints aren't a generally useful concept and shouldn't be directly interacted with outside of programs/libraries specializing in text processing.
However, some modern languages that use the UTF-8 default have built-in codepoint types, such as rune in Go and char in Rust.
What are these types actually useful for? Are they legacy from times before the meaninglessness of codepoints was broadly understood? Or is that an incomplete perspective?
Text has many different meanings and usages, so the question is difficult to answer.
First, about the term code point: we use it because it is simple, it implies a number (a code), and it is not easily confused with other terms. Unicode tells us that it does not use the terms code point and character in a fully consistent way, but also that this is not a problem: the context makes things clear, and they are often interchangeable (except for the few code points which are not characters, like surrogates, and a few reserved code points). Note: Unicode is mostly about characters, and ISO 10646 was mostly about code points. So the original ISO standard was a table of numbers (code points) and names, and Unicode is about the properties of characters. So we may say code point where Unicode character would be more precise, but character is easily confused with C char and with font glyphs/graphemes.
Code points are a basic unit, so they are useful for most programs: to store text in databases, to exchange it with other programs, to save files, for sorting, etc. For exactly these reasons programming languages use the code point as a type. UTF-8 code units are an alternative, but they are more difficult to navigate (think of UTF-8 as a tape you must read sequentially, and code point text as a hard disk where you can jump into the middle of the text). The analogy is not 100% accurate, because you may still need some context bytes. If you are receiving user text and will just store it in a database, your program probably does not need to split it into graphemes, handle ligatures, etc. The code point is a really low-level unit, and therefore fast for most operations.
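To make the tape vs. hard disk analogy concrete, here is a small Python sketch (Python is only used for illustration) comparing code point indexing with raw UTF-8 byte indexing:

    text = "naïve café"              # contains non-ASCII characters

    # Code point view: one item per code point, easy random access.
    print(len(text), text[3])        # 10 v  -- index 3 is simply the 4th code point

    # UTF-8 code unit view: variable length, so byte index 3 is NOT the 4th character.
    raw = text.encode("utf-8")
    print(len(raw))                  # 12 -- 'ï' and 'é' take two bytes each
    print(raw[3:4])                  # b'\xaf' -- a continuation byte, meaningless on its own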
The other part of text is displaying it (or speech). This part is very complex, because we have many different scripts with very different rules, and then different languages with their own special cases. So we need a series of libraries: text layout (word separation and so on, like Pango), a shaping engine (to find which glyph to use, how to combine characters, where to place the next character, e.g. HarfBuzz), and a font library which renders the glyphs (Cairo plus FreeType). It is complex, but most programmers do not need special handling: they just read text from a database and send it to the screen, so they use the relevant library (which depends on the operating system) and move on. It is too complex for a language specification (and also a moving target; maybe in 30 years things will be more standardized). Because it is complex and involves many operations, these libraries may use richer structures (an array of arrays of code points, i.e. an array of graphemes) without much of a slowdown. Note: fonts have code point tables used for various operations before the glyph index is found, and the various APIs take Unicode strings (as code point arrays, UTF-16, UTF-8, etc.).
Naturally things are more complex than that, and it requires a lot of knowledge of the different parts of Unicode if you are trying to program an editor (WYSIWYG, but also in a terminal): you mix both worlds, and you need much more information (e.g. for text selection). In that case you must create your own structures.
And really, things are complex: do you want to just show the first x characters on your blog (maybe as an excerpt)? Or split at word boundaries (some languages are not so linear, so the interpretation may be very wrong)? For now only humans can do a good job for all languages, so there is also no need yet for such a supporting type in programming languages.
The manifesto argues that Unicode codepoints aren't a generally useful concept and shouldn't be directly interacted with outside of programs/libraries specializing in text processing.
Where? It merely outlines advantages and disadvantages of code points. Two examples are:
Some abstract characters can be encoded by different code points; U+03A9 greek capital letter omega and U+2126 ohm sign both correspond to the same abstract character Ω, and must be treated identically.
Moreover, for some abstract characters, there exist representations using multiple code points, in addition to the single coded character form. The abstract character ǵ can be coded by the single code point U+01F5 latin small letter g with acute, or by the sequence <U+0067 latin small letter g, U+0301 combining acute accent>.
In other words: code points just index which graphemes Unicode supports.
Sometimes they're meant as single characters: one prominent example would be € (EURO SIGN), having only the code point U+20AC.
Sometimes the same character has multiple code points depending on context: the dollar sign exists as:
﹩ = U+FE69 (SMALL DOLLAR SIGN)
$ = U+FF04 (FULLWIDTH DOLLAR SIGN)
💲 = U+1F4B2 (HEAVY DOLLAR SIGN)
Storage-wise, when searching for one variant you might want to match all 3 variants instead of relying on the exact code point only.
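One way to fold some (but not all) of these variants when searching is Unicode compatibility normalization; a Python sketch, assuming NFKC is acceptable for your matching rules:

    import unicodedata

    variants = ["\uFE69", "\uFF04", "\U0001F4B2"]    # ﹩, ＄, 💲
    for v in variants:
        print(unicodedata.name(v), "->", repr(unicodedata.normalize("NFKC", v)))

    # SMALL DOLLAR SIGN     -> '$'    (compatibility decomposition to U+0024)
    # FULLWIDTH DOLLAR SIGN -> '$'    (compatibility decomposition to U+0024)
    # HEAVY DOLLAR SIGN     -> '💲'   (an emoji with no decomposition; needs explicit handling)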
Sometimes multiple code points can be combined to form up a single character:
á = U+00E1 (LATIN SMALL LETTER A WITH ACUTE), also termed "precomposed"
á = combination of U+0061 (LATIN SMALL LETTER A) and U+0301 (COMBINING ACUTE ACCENT) - in a text editor trying to delete á (from the right side) will mostly result in actually deleting the acute accent first. Searching for either variant should find both variants.
Storage-wise, you avoid having to search for both variants by performing Unicode normalization, i.e. NFC, which always favors the precomposed code point over two code points combining to form one character.
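For instance, with Python's unicodedata module (used here only as an illustration):

    import unicodedata

    decomposed  = "a\u0301"    # LATIN SMALL LETTER A + COMBINING ACUTE ACCENT
    precomposed = "\u00E1"     # LATIN SMALL LETTER A WITH ACUTE

    print(decomposed == precomposed)                                 # False
    print(unicodedata.normalize("NFC", decomposed) == precomposed)   # True

    # The same mechanism collapses the omega/ohm duplication quoted above:
    print(unicodedata.normalize("NFC", "\u2126") == "\u03A9")        # True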
As for homoglyphs, code points clearly distinguish the contextual meaning:
A = U+0041 (LATIN CAPITAL LETTER A)
Α = U+0391 (GREEK CAPITAL LETTER ALPHA)
А = U+0410 (CYRILLIC CAPITAL LETTER A)
Copy the Greek or Cyrillic character, then search this website for that letter: it will never find the other letters, no matter how similar they look. Likewise, the Latin letter A won't find the Greek or Cyrillic one.
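You can verify the distinction programmatically as well; a small Python check:

    latin, greek, cyrillic = "A", "\u0391", "\u0410"

    print(latin == greek, latin == cyrillic, greek == cyrillic)   # False False False
    for ch in (latin, greek, cyrillic):
        print(f"U+{ord(ch):04X}", ch)
    # U+0041 A
    # U+0391 Α
    # U+0410 А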
Writing-system-wise, code points can be shared across languages: the CJK portion is an attempt to use as few code points as possible while supporting as many languages as possible, covering Chinese (simplified, traditional, Hong Kong), Japanese, Korean and Vietnamese:
今 = U+4ECA
入 = U+5165
才 = U+624D
There are valid reasons for dealing with code points as a programmer. Programming languages which support them may (or may not) handle the encodings correctly (UTF-8 vs. UTF-16 vs. ISO-8859-1) and may (or may not) correctly produce surrogates for UTF-16. Text-wise, users should not have to be concerned with code points, although knowing them would help to distinguish homographs.
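For example, a code point outside the Basic Multilingual Plane needs a surrogate pair in UTF-16 but not in UTF-8 (a Python sketch for illustration):

    lion = "\U0001F981"                         # 🦁, a code point above U+FFFF

    print(lion.encode("utf-16-be").hex(" "))    # d8 3e dd 81 -- a surrogate pair
    print(lion.encode("utf-8").hex(" "))        # f0 9f a6 81 -- four code units, no surrogates
    print(len(lion))                            # 1 -- one code point either way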

Clarifications for FHIR R4 string element

There are a couple of things that I am having trouble with regarding HL7 FHIR R4 strings (https://www.hl7.org/fhir/datatypes.html#string):
The specification mentions: Note that strings SHALL NOT exceed 1MB (1024*1024 characters) in size. The trouble I am having with this is that 1024x1024 Unicode characters are not always 1MB in size. Besides that, it is unclear to me which Unicode encoding is meant here; I will assume UTF-8, since that is the default for both XML and JSON. For example, the character '🦁' needs 4 bytes to encode, so 1024x1024 such characters would be 4MB in size. The regexes in the notes, though not normative, make this a bit clearer, but not much: they state that code points up to FFFF are OK, which means at most 3 bytes per character and could still exceed the 1MB limit by a factor of 3. My interpretation is that we want a reasonable limit that doesn't open up denial-of-service attacks. Therefore I would like to suggest keeping the meaningful 1MB limit but dropping the number-of-characters requirement, OR adding it as a separate requirement.
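To illustrate the arithmetic behind that first point (a Python sketch; the lion emoji is just a convenient example of a 4-byte UTF-8 character):

    n_chars   = 1024 * 1024
    lion_utf8 = "\U0001F981".encode("utf-8")    # '🦁'

    print(len(lion_utf8))                       # 4 bytes per character in UTF-8
    print(n_chars * len(lion_utf8) / 2**20)     # 4.0 -- i.e. 4MB for 1024*1024 such characters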
The specification mentions: Therefore strings SHOULD always contain non-whitespace content. It does not mention what it considers whitespace. Is this just the three codes mentioned earlier representing horizontal tab, carriage return and line feed or are more exotic whitespace characters also prohibited, like next line or no-break space?
Ok, that about sums up my questions about the string specifications. Hope that someone can help me out.
Best,
Dirk
The rule is expressed in characters explicitly because Unicode characters have variable byte length. There is no maximum in bytes, only in characters (though given Unicode's rules, you could calculate what the maximum possible length in bytes might be). If you feel this isn't sufficiently clear, feel free to submit a change request.
The expectation is a string SHOULD always have textual content. If you have nothing to say, omit the element. Trying to work around the "no empty string" limitation by transmitting a non-breaking space or some other non-visible character to meet the non-empty requirement while not actually conveying any human-readable information would be contrary to the intent of the specification. We don't demand that systems enforce this because trying to figure out all the creative ways implementers might have of conveying "no useful text" with Unicode isn't terribly practical. I believe the Java code just does a trim() and compares the result to empty string.
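As an aside, a rough sketch of such a check (Python, not the actual reference implementation): note that Python's str.strip() removes Unicode whitespace such as NO-BREAK SPACE, whereas Java's trim() only removes code points up to U+0020, so the two are not exactly equivalent.

    def has_textual_content(s: str) -> bool:
        # True only if the string contains something other than whitespace.
        return s.strip() != ""

    print(has_textual_content("  \t\r\n"))   # False
    print(has_textual_content("\u00a0"))     # False (NBSP counts as whitespace here)
    print(has_textual_content(" abc "))      # True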

In a trie for inserting and searching normal paths, are ascii 1-31 worth considering?

I am working on a trie data structure which inserts and searches for normal paths.
A path can contain any Unicode character, so to represent it completely in UTF-8, the array in each trie node needs next-node entries for all 256 possible byte values.
But I am also concerned about the space and insertion time taken by the trie.
The conditions under which my trie is set up mean it would rarely insert a non-ASCII character (I mean byte values 128-255), so I just put in an if condition to reject paths containing bytes above 127. I don't think bytes 1-31 are relevant either, although I am unsure about this. Since characters 1-31 are things like carriage return, escape, etc., can I simply continue the loop without inserting them? That is, is it possible to encounter paths in a real scenario that differ only because of ASCII 1-31?
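For concreteness, here is a minimal sketch (Python, purely illustrative) of what that filtering might look like in the insert path; note the caveat about carriage return (13) in the answer below:

    class TrieNode:
        __slots__ = ("children", "terminal")

        def __init__(self):
            self.children = [None] * 128   # ASCII only, per the filtering described above
            self.terminal = False

    def insert(root: TrieNode, path: str) -> bool:
        data = path.encode("utf-8")
        if any(b >= 128 for b in data):    # reject paths containing non-ASCII bytes
            return False
        node = root
        for b in data:
            if 1 <= b <= 31:               # skip control bytes, as proposed in the question
                continue
            if node.children[b] is None:
                node.children[b] = TrieNode()
            node = node.children[b]
        node.terminal = True
        return True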
Answering this old question: on macOS, ASCII 13 (carriage return) is used in the name of the hidden file that stores a custom folder icon, so it may appear in many paths. Thanks to @EricPostpischil, who pointed that out in the comments.
All other characters in the 1-31 range appear very rarely in paths.
Also, macOS file systems are usually case-insensitive, so treating lowercase and uppercase separately is generally unnecessary.
PS:
Although this question may seem opinion-based, it actually isn't, because it can be answered quite concisely. It asks about the frequency with which certain characters appear in paths on macOS. (Sorry for the confusing title; I was a noob at the time, and changing it now would make all the comments on it absurd.)

First Non Repeating Character: Bit Vector

Find the first non-repeating character in a given string. You may assume that the string can contain any character from any language in the world, e.g. even an Arabic or Greek character.
I came across a solution using bit vectors for the above problem. It used a bit vector of size 95000. Can somebody please explain why this size is used?
See How many characters can be mapped with Unicode? for part of an explanation.
According to that question, in Unicode 6.0, 109384 code points have been allocated. It's possible that, depending on how old the solution you found is, 95000 was large enough to hold all of the code points which had been allocated at that time, or that the author of your solution was happy with a "good enough" approach.
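For reference, a sketch of how such a solution might work, sized by the input instead of a hard-coded 95000 (Python; a plain byte array stands in for a packed bit vector to keep the sketch short, and two of them are needed because a single bit can only record "seen", not "seen more than once"):

    def first_non_repeating(s: str):
        size = max(map(ord, s), default=0) + 1
        seen     = bytearray(size)
        repeated = bytearray(size)
        for ch in s:
            cp = ord(ch)
            if seen[cp]:
                repeated[cp] = 1
            else:
                seen[cp] = 1
        for ch in s:                 # second pass preserves the original order
            if not repeated[ord(ch)]:
                return ch
        return None

    print(first_non_repeating("swiss"))    # w
    print(first_non_repeating("ααββγ"))    # γ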

Why are non-printable ASCII characters actually printable?

Characters that are not alphanumeric or punctuation are termed non-printable:
Codes 20hex to 7Ehex, known as the printable characters
So why is e.g. 005 representable (and represented by clubs)?
Most of the original set of ASCII control characters are no longer useful, so many different vendors have recycled them as additional graphic characters, often dingbats as in your table. However, all such assignments are nonstandard, and usually incompatible with each other. If you can, it's better to use the official Unicode codepoints for these characters. (Similar things have been done with the additional block of control characters in the high half of the ISO 8859.x standards, which were already obsolete at the time they were specified. Again, use the official Unicode codepoints.)
The tiny print at the bottom of your table appears to say "Copyright 1982 Leading Edge Computer Products, Inc." That company was an early maker of IBM PC clones, and this is presumably their custom ASCII extension. You should only pay attention to the assignments for 000-031 and 127 in this table if you're writing software to convert files produced on those specific computers to a more modern format.
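To illustrate the advice about official code points (a Python sketch): code 5 is a control character in Unicode, while the club symbol has its own code point.

    import unicodedata

    control = chr(5)
    club = "\u2663"                          # the official code point for the club suit

    print(unicodedata.category(control))     # Cc -- a control character, nothing printable
    print(unicodedata.name(club), club)      # BLACK CLUB SUIT ♣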
The representation of the "non-printable" chars depends on the charset in use (by the OS, the browser, whatever); see ISO 8859 or Code Page 1252, for example.
In DOS, for example, you had funny signs that were used for very old-style window frames (ASCII-art-like).
