What if ASCII used 12 bits per character?

ASCII uses 7 bits to represent characters. Could someone explain what this means for the number of characters that can be supported? How would that change if ASCII used 12 bits per character?

A bit has two possible states. A group of n bits has 2^n possible states.
Therefore 7 bits can represent 2^7 = 128 possible characters and 12 bits can represent 2^12 = 4096 possible characters.
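If you want to sanity-check the arithmetic, here is a minimal Python sketch (it just evaluates 2^7 and 2^12):

    # Number of distinct values representable with n bits is 2**n.
    for n in (7, 12):
        print(n, "bits ->", 2 ** n, "possible characters")
    # 7 bits -> 128 possible characters
    # 12 bits -> 4096 possible characters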

This abridged excerpt from Wikipedia's table of character sets provides historical perspective:
BCDIC (1928), 6 bits: Introduced with the IBM card
FIELDATA (1956), 6/7 bits: Battlefield information (USA)
EBCDIC (1963), 8 bits: IBM computers
ASCII, ASA X3.4-1963 (1963-06-17), 7 bits: Teleprinters and computers; original definition of ASCII
ECMA-6 (1965-04-30), 7 bits: ASCII localization
ISO 646 (1967, ISO/R646-1967), 7 bits: ASCII localization
ASCII, USAS X3.4-1967 (1967), 7 bits: Close to the "modern" definition of ASCII
Braille ASCII (1969), 6/7 bits: Tactile print for blind persons
ECMA-48 (1972), 7 bits: Terminal text manipulation and colors
ISO/IEC 8859 (1987), 8 bits: International codes
Unicode (1991), 16/32 bits: Unified encoding for most of the world's writing systems

A 12-bit code can support 2^12 = 4096 characters, minus a handful for non-printing codes such as NUL, maybe ESC, and a few whitespace characters.
Now, you could construct a computer with 12-bit bytes, but it would be an expensive re-engineering operation. Most computers have 8-bit bytes, at least partly because of ASCII.
The method actually chosen to extend ASCII was Unicode, and the encoding that has emerged as the standard is UTF-8. This is a superset of ASCII in a sense: ASCII is Unicode, and ASCII bytes are unchanged in UTF-8. For other characters, the otherwise unused top bit is set and additional bytes are added to encode extended non-Latin characters. So it is a variable-width encoding, its codes are always a multiple of 8 bits, and it is slightly open-ended in that codes could be added at the top of the range, but currently the encoding never goes wider than four bytes.
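A minimal Python sketch of that point: pure ASCII text encodes to the same single bytes, while other characters get the high bit set and extra bytes (the sample characters are arbitrary):

    # ASCII text is unchanged under UTF-8: one byte per character, high bit clear.
    print("ASCII".encode("utf-8").hex())   # 4153434949
    # Non-ASCII characters become multi-byte sequences with the high bits set.
    print("é".encode("utf-8").hex())       # c3a9     (2 bytes)
    print("中".encode("utf-8").hex())       # e4b8ad   (3 bytes)
    print("😀".encode("utf-8").hex())       # f09f9880 (4 bytes)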

Related

What are 7 bit / 8 bit environments for control functions according to ISO/IEC 6429:1992?

I am learning ECMA-48 and I see a lot of notes about 7 bit and 8 bit environments for control functions. For example:
NOTE LS0 is used in 8-bit environments only; in 7-bit environments
SHIFT-IN (SI) is used instead.
As I understand it, today all environments are 8-bit. If I am wrong, could anyone give real examples of where 7-bit environments are used?
Character encodings are one example.
The standard uses the values 0x00 to 0x1F as C0 control codes and 0x80 to 0x9F as C1 control codes, and it defines control functions, control sequences, etc. that start with either ESC (0x1B) or CSI (0x9B).
In an 8-bit environment there must be some kind of encoding defined which specifies which character is represented by which value. The first 128 values will follow ASCII (or some other standard that is compatible, i.e. one that does not use 0x00 to 0x1F for printable characters but reserves them for C0 control codes), but what about the next 128 values?
Here we enter the world of code pages, which define the upper 128 values. Some code pages (like ISO 8859-2) reserve the values 0x80 - 0x9F for C1 control codes, but others (like CP1250) do not and use them for printable characters.
When such an encoding is used it is not possible to use the values 0x80 - 0x9F simultaneously for both purposes (printable characters and control codes). So even though there are 8 bits, they are not available for the purposes defined by the standard.
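A small illustration of that conflict, using Python's bundled codecs; the byte 0x9B is just a convenient example:

    # The same byte value is a printable character under one code page and a
    # C1 control code (CSI) under another.
    b = b"\x9b"
    print(repr(b.decode("cp1250")))     # '›'    - printable in Windows-1250
    print(repr(b.decode("iso8859-2")))  # '\x9b' - kept as the C1 control CSI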
So from the point of view of this standard we treat this as a 7-bit environment, and so, for example, CSI (0x9B) becomes the two-byte sequence 0x1B 0x5B (ESC followed by "[").
"Ok, forget the code pages, we live in the future now. unicode rules".
OK, with UTF-8, the 8-bit encoding for Unicode, the story is the same.
The values 0x80 - 0xBF (which include 0x80 - 0x9F) are treated in UTF-8 as continuation bytes of a character (actually a code point, but that's irrelevant here) encoded with multiple bytes. Again, a conflict.
So if the control functions from the standard have to coexist with UTF-8, again a 7-bit environment has to be assumed for the purposes of this standard.
(Actually, Unicode, and therefore also UTF-8, does allow the C1 control codes to be encoded as valid code points, but then they will only work if interpreted by a program that is aware of Unicode. Assuming 7 bits removes that requirement.)
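A rough Python sketch of why the 7-bit form is the safe one in a UTF-8 world (the chosen escape sequence, cursor up, is just an example):

    # 8-bit CSI is the single byte 0x9B; its 7-bit equivalent is ESC [ (0x1B 0x5B).
    csi_8bit = b"\x9bA"     # CSI A (cursor up), 8-bit form
    csi_7bit = b"\x1b[A"    # ESC [ A, 7-bit form

    try:
        csi_8bit.decode("utf-8")
    except UnicodeDecodeError as e:
        print("8-bit CSI is not valid UTF-8:", e)

    # The 7-bit form uses only ASCII bytes, so it passes through UTF-8 untouched.
    print(repr(csi_7bit.decode("utf-8")))   # '\x1b[A'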
Your quote mentions LS0 and SHIFT-IN (SI). These are things defined in the ECMA-35 (ISO 2022) standard; they are a way of making it possible to encode more characters into the 7 or 8 available bits.
You probably don't have to deal with this part unless you actually want to support those kinds of character encodings.

How does UTF-8 represent characters?

I'm reading UTF-8 Encoding, and I don't understand the following sentence.
For characters equal to or below 2047 (hex 0x07FF), the UTF-8
representation is spread across two bytes. The first byte will have
the two high bits set and the third bit clear (i.e. 0xC2 to 0xDF). The
second byte will have the top bit set and the second bit clear (i.e.
0x80 to 0xBF).
If I'm not mistaken, this means UTF-8 requires two bytes to represent 2048 characters. In other words, we need to choose 2048 candidates from the 2^16 possible two-byte values to represent each character.
For characters equal to or below 2047 (hex 0x07FF), the UTF-8
representation is spread across two bytes.
What's the big deal about choosing 2048 out of 65,536? However, UTF-8 explicitly sets boundaries on each byte.
According to the following statements, the number of combinations is 30 (0xDF - 0xC2 + 0x01) for the first byte and 64 (0xBF - 0x80 + 0x01) for the second byte:
The first byte will have
the two high bits set and the third bit clear (i.e. 0xC2 to 0xDF). The
second byte will have the top bit set and the second bit clear (i.e.
0x80 to 0xBF).
How do 1920 combinations (30 times 64) accommodate 2048 code points?
As you already know, 2047 (0x07FF) contains the raw bits
00000111 11111111
If you look at the bit distribution chart for UTF-8, you will see that 0x07FF falls in the second line, so it is encoded as 2 bytes using this bit pattern:
110xxxxx 10xxxxxx
Substitute the raw bits into the xs and you get this result:
11011111 10111111 (0xDF 0xBF)
Which is exactly as the description you quoted says:
The first byte will have the two high bits set and the third bit clear (11011111). The second byte will have the top bit set and the second bit clear (10111111).
Think of it as a container, where the encoding reserves a few bits for its own synchronization, and you get to use the remaining bits.
So for the range in question, the encoding "template" is
110 abcde 10 fghijk
(where I have left a single space to mark the boundary between the template and the value from the code point we want to encode, and two spaces between the actual bytes)
and you get to use the 11 bits abcdefghijk for the value you actually want to transmit.
So for the code point U+07EB you get
0x07 = 00000111
0xEB = 11101011
where the top five zero bits are masked out (remember, we only get 11 -- because the maximum value that the encoding can accommodate in two bytes is 0x07FF. If you have a larger value, the encoding will use a different template, which is three bytes) and so
0x07 = _____ 111 (template: _____ abc)
0xEB = 11 101011 (template: de fghijk)
abc de = 111 11 (where the first three come from 0x07, and the next two from 0xEB)
fghijk = 101011 (the remaining bits from 0xEB)
yielding the value
110 11111 10 101011
aka 0xDF 0xAB.
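You can let a UTF-8 encoder confirm the hand calculation; a minimal Python sketch:

    # Verify the worked example: U+07EB should encode to the bytes 0xDF 0xAB.
    cp = 0x07EB
    print(chr(cp).encode("utf-8").hex())     # 'dfab'

    # Rebuild the two-byte template by hand: 110xxxxx 10xxxxxx, 11 payload bits.
    byte1 = 0b11000000 | (cp >> 6)           # top 5 payload bits -> 0xDF
    byte2 = 0b10000000 | (cp & 0b00111111)   # low 6 payload bits -> 0xAB
    print(hex(byte1), hex(byte2))            # 0xdf 0xab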
Wikipedia's article on UTF-8 contains more examples with nicely colored numbers to see what comes from where.
The range 0x00-0x7F, which can be represented in a single byte, contains 128 code points; the two-byte range thus needs to accommodate 1920 = 2048-128 code points.
The raw encoding would allow values in the range 0xC0-0xDF in the first byte, but the values 0xC0 and 0xC1 are never needed, because those would only encode code points which can already be represented in a single byte, and so they are invalid per the encoding spec. In other words, the 2 in 0xC2 comes from the fact that at least one of the high four bits (one of abcd) out of the 11 that this segment of the encoding can carry must be 1 in order for the value to require two bytes.
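For example (a Python sketch), 0xC0 0xAF would be an "overlong" two-byte encoding of "/" (U+002F), which already fits in one byte, so decoders reject it:

    # Overlong sequences starting with 0xC0 or 0xC1 are invalid UTF-8.
    try:
        b"\xc0\xaf".decode("utf-8")          # would decode to '/' if allowed
    except UnicodeDecodeError as e:
        print("rejected:", e)

    # 0xC2 is the lowest legal two-byte lead: 0xC2 0xA9 encodes U+00A9.
    print(b"\xc2\xa9".decode("utf-8"))       # ©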

Data size for ascii representation of 32bit floats

If the largest 32-bit number I can express is 0xFFFFFFFF, then is its ASCII representation (in hex) 64 bits in size?
Yes, assuming you use one octet per ASCII character, as is conventional. However, you might also need a terminating NUL, and maybe the "0x" prefix. Also, you could use a 7-bit representation for ASCII; it's still an ASCII encoding (though hard to work with on 8-bit based platforms).
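To make the count concrete, a small Python sketch (the variable names are arbitrary):

    # A 32-bit value printed as hex text takes 8 ASCII characters = 8 octets = 64 bits.
    value = 0xFFFFFFFF
    text = format(value, "08X")                   # 'FFFFFFFF'
    print(len(text), "characters")                # 8
    print(len(text.encode("ascii")) * 8, "bits")  # 64, before any '0x' prefix or NUL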

How many characters can UTF-8 encode?

If UTF-8 is 8 bits, does it not mean that there can be a maximum of only 256 different characters?
The first 128 code points are the same as in ASCII. But it is said that UTF-8 can support up to a million characters?
How does this work?
UTF-8 does not use one byte all the time; it uses 1 to 4 bytes per character.
The first 128 characters (US-ASCII) need one byte.
The next 1,920 characters need two bytes to encode. This covers the remainder of almost all Latin alphabets, and also Greek, Cyrillic, Coptic, Armenian, Hebrew, Arabic, Syriac and Tāna alphabets, as well as Combining Diacritical Marks.
Three bytes are needed for characters in the rest of the Basic Multilingual Plane, which contains virtually all characters in common use[12] including most Chinese, Japanese and Korean [CJK] characters.
Four bytes are needed for characters in the other planes of Unicode, which include less common CJK characters, various historic scripts, mathematical symbols, and emoji (pictographic symbols).
source: Wikipedia
UTF-8 uses 1-4 bytes per character: one byte for ASCII characters (the first 128 Unicode values are the same as ASCII), which only requires 7 bits. If the highest bit is set, this indicates the start of a multi-byte sequence; the number of consecutive high bits set indicates the number of bytes, then comes a 0, and the remaining bits contribute to the value. For the other (continuation) bytes, the highest two bits are 1 and 0 and the remaining 6 bits are for the value.
So a four-byte sequence begins with 11110... (where ... is three bits of the value), followed by three bytes with 6 value bits each, yielding a 21-bit value. 2^21 exceeds the number of Unicode characters, so all of Unicode can be expressed in UTF-8.
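A quick way to see those length classes in practice (a Python sketch with arbitrary sample characters):

    # One sample character from each UTF-8 length class.
    for ch in ("A", "é", "€", "😀"):        # U+0041, U+00E9, U+20AC, U+1F600
        print(hex(ord(ch)), len(ch.encode("utf-8")), "byte(s)")
    # 0x41 1, 0xe9 2, 0x20ac 3, 0x1f600 4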
Unicode vs UTF-8
Unicode resolves code points to characters. UTF-8 is a storage mechanism for Unicode. Unicode has a spec. UTF-8 has a spec. They have different limits; UTF-8 has a different upper bound.
Unicode
Unicode is organized into "planes." Each plane carries 2^16 code points, and there are 17 planes in Unicode, for a total of 17 * 2^16 code points. The first plane, plane 0 or the BMP, is special in the weight of what it carries.
Rather than explain all the nuances, let me just quote the above article on planes.
The 17 planes can accommodate 1,114,112 code points. Of these, 2,048 are surrogates, 66 are non-characters, and 137,468 are reserved for private use, leaving 974,530 for public assignment.
UTF-8
Now let's go back to the article linked above,
The encoding scheme used by UTF-8 was designed with a much larger limit of 2^31 code points (32,768 planes), and can encode 2^21 code points (32 planes) even if limited to 4 bytes.[3] Since Unicode limits the code points to the 17 planes that can be encoded by UTF-16, code points above 0x10FFFF are invalid in UTF-8 and UTF-32.
So you can see that you can put stuff into UTF-8 that isn't valid Unicode. Why? Because UTF-8 accommodates code points that Unicode doesn't even support.
UTF-8, even with a four-byte limitation, supports 2^21 code points, which is far more than 17 * 2^16.
According to this table* UTF-8 should support:
2^31 = 2,147,483,648 characters
However, RFC 3629 restricted the possible values, so now we're capped at 4 bytes, which gives us
2^21 = 2,097,152 characters
Note that a good chunk of those characters are "reserved" for custom use, which is actually pretty handy for icon-fonts.
* Wikipedia used to show a table with 6 bytes -- they've since updated the article.
2017-07-11: Corrected for double-counting the same code point encoded with multiple bytes
2,164,864 “characters” can be potentially coded by UTF-8.
This number is 2^7 + 2^11 + 2^16 + 2^21, which comes from the way the encoding works:
1-byte chars have 7 bits for encoding
0xxxxxxx (0x00-0x7F)
2-byte chars have 11 bits for encoding
110xxxxx 10xxxxxx (0xC0-0xDF for the first byte; 0x80-0xBF for the second)
3-byte chars have 16 bits for encoding
1110xxxx 10xxxxxx 10xxxxxx (0xE0-0xEF for the first byte; 0x80-0xBF for continuation bytes)
4-byte chars have 21 bits for encoding
11110xxx 10xxxxxx 10xxxxxx 10xxxxxx (0xF0-0xF7 for the first byte; 0x80-0xBF for continuation bytes)
As you can see this is significantly larger than current Unicode (1,112,064 characters).
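The arithmetic behind those two figures can be checked directly; a Python sketch (1,112,064 is the Unicode scalar-value count quoted above):

    # Raw bit-pattern capacity of 1- to 4-byte sequences vs. valid Unicode scalars.
    raw_patterns = 2**7 + 2**11 + 2**16 + 2**21
    unicode_scalars = 0x110000 - 2048        # code points minus UTF-16 surrogates
    print(raw_patterns)                      # 2164864
    print(unicode_scalars)                   # 1112064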
UPDATE
My initial calculation is wrong because it doesn't consider additional rules. See comments to this answer for more details.
UTF-8 is a variable length encoding with a minimum of 8 bits per character.
Characters with higher code points will take up to 32 bits.
Quote from Wikipedia: "UTF-8 encodes each of the 1,112,064 code points in the Unicode character set using one to four 8-bit bytes (termed "octets" in the Unicode Standard)."
Some links:
http://www.utf-8.com/
http://www.joelonsoftware.com/articles/Unicode.html
http://www.icu-project.org/docs/papers/forms_of_unicode/
http://en.wikipedia.org/wiki/UTF-8
Check out the Unicode Standard and related information, such as their FAQ entry, UTF-8 UTF-16, UTF-32 & BOM. It’s not that smooth sailing, but it’s authoritative information, and much of what you might read about UTF-8 elsewhere is questionable.
The “8” in “UTF-8” relates to the length of code units in bits. Code units are entities used to encode characters, not necessarily as a simple one-to-one mapping. UTF-8 uses a variable number of code units to encode a character.
The collection of characters that can be encoded in UTF-8 is exactly the same as for UTF-16 or UTF-32, namely all Unicode characters. They all encode the entire Unicode coding space, which even includes noncharacters and unassigned code points.
While I agree with mpen on the current maximum number of UTF-8 codes (2,164,864) (listed below; I couldn't comment on his answer), he is off by two levels if you remove the two major restrictions of UTF-8: the 4-byte limit and the rule that bytes 254 and 255 cannot be used (he only removed the 4-byte limit).
A leading byte of 254 follows the basic arrangement of starting bits (the multi-byte flag set to 1, a run of six more 1s, and a terminal 0, with no spare bits), giving you 6 additional bytes to work with (six 10xxxxxx groups, an additional 2^36 codes).
A leading byte of 255 doesn't exactly follow the basic setup: there is no terminal 0 because all bits are used (the multi-byte flag set to 1 and a run of seven more 1s), giving you 7 additional bytes (seven 10xxxxxx groups, an additional 2^42 codes).
Adding these in gives a final maximum representable character set of 4,468,982,745,216. This is more than all characters in current use, old or dead languages, and any believed lost languages. Angelic or Celestial script, anyone?
There are also single-byte codes that are disallowed in the UTF-8 standard in addition to 254 and 255: 128-191 (which may only appear as continuation bytes) and a few others. Some are used locally; for example, code 128 is sometimes treated as a deleting backspace. The other starting codes (and associated ranges) are invalid for one or more reasons (https://en.wikipedia.org/wiki/UTF-8#Invalid_byte_sequences).
Unicode is firmly married to UTF-8's four-byte form. Unicode's code space runs from U+0000 to U+10FFFF (1,114,112 code points), which fits within the 21 bits that four-byte UTF-8 can carry. Both systems reserve the same 'dead' space and restricted zones for code points, etc. As of June 2018 the most recent version, Unicode 11.0, contains a repertoire of 137,439 characters.
From the Unicode standard (Unicode FAQ):
The Unicode Standard encodes characters in the range U+0000..U+10FFFF,
which amounts to a 21-bit code space.
From the UTF-8 Wikipedia page (UTF-8 Description):
Since the restriction of the Unicode code-space to 21-bit values in
2003, UTF-8 is defined to encode code points in one to four bytes, ...
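A one-line Python check of that boundary: the highest code point, U+10FFFF, still fits in four UTF-8 bytes.

    print("\U0010FFFF".encode("utf-8").hex())   # 'f48fbfbf' - four bytes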

What is the maximum number of bytes for a UTF-8 encoded character?

What is the maximum number of bytes for a single UTF-8 encoded character?
I'll be encrypting the bytes of a String encoded in UTF-8 and therefore need to be able to work out the maximum number of bytes for a UTF-8 encoded String.
Could someone confirm the maximum number of bytes for a single UTF-8 encoded character, please?
The maximum number of bytes per character is 4 according to RFC3629 which limited the character table to U+10FFFF:
In UTF-8, characters from the U+0000..U+10FFFF range (the UTF-16
accessible range) are encoded using sequences of 1 to 4 octets.
(The original specification allowed for up to six byte character codes for code points past U+10FFFF.)
Characters with a code less than 128 will require 1 byte only, and the next 1920 character codes require 2 bytes only. Unless you are working with an esoteric language, multiplying the character count by 4 will be a significant overestimation.
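For buffer sizing, that means 4 times the code-point count is a safe upper bound; a small Python sketch (the helper name is made up):

    # Upper bound on the UTF-8 size of a string: 4 bytes per code point.
    def max_utf8_bytes(s: str) -> int:
        return 4 * len(s)          # len() counts code points in Python 3

    s = "héllo 😀"
    print(len(s.encode("utf-8")), "actual bytes, at most", max_utf8_bytes(s))
    # 11 actual bytes, at most 28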
Without further context, I would say that the maximum number of bytes for a character in UTF-8 is
answer: 6 bytes
The author of the accepted answer correctly pointed this out as the "original specification". That was valid through RFC 2279. As J. Cocoe pointed out in the comments below, this changed in 2003 with RFC 3629, which limits UTF-8 to encoding 21 bits, which can be handled with the encoding scheme using four bytes.
answer if covering all unicode: 4 bytes
But in Java <= v7, they talk about a 3-byte maximum for representing Unicode with UTF-8. That's because the original Unicode specification only defined the Basic Multilingual Plane (BMP); i.e., it is an older version of Unicode, or a subset of modern Unicode. So
answer if representing only original unicode, the BMP: 3 bytes
But the OP talks about going the other way: not from characters to UTF-8 bytes, but from UTF-8 bytes to a "String" representation. Perhaps the author of the accepted answer got that from the context of the question, but it is not necessarily obvious, so it may confuse the casual reader of this question.
Going from UTF-8 to the native encoding, we have to look at how the "String" is implemented. Some languages, like Python >= 3, represent each character with an integer code point, which allows for 4 bytes per character = 32 bits to cover the 21 we need for Unicode, with some waste. Why not exactly 21 bits? Because things are faster when they are byte-aligned. Other languages, like Python <= 2 and Java, represent characters using a UTF-16 encoding, which means that they have to use surrogate pairs to represent characters beyond the BMP. Either way, that's still 4 bytes maximum.
answer if going UTF-8 -> native encoding: 4 bytes
So, final conclusion, 4 is the most common right answer, so we got it right. But, mileage could vary.
The maximum number of bytes to support US-ASCII, a standard English alphabet encoding, is 1. But limiting text to English is becoming less desirable or practical as time goes by.
Unicode was designed to represent the glyphs of all human languages, as well as many kinds of symbols, with a variety of rendering characteristics. UTF-8 is an efficient encoding for Unicode, although still biased toward English. UTF-8 is self-synchronizing: character boundaries are easily identified by scanning for well-defined bit patterns in either direction.
While the maximum number of bytes per UTF-8 character is 3 for supporting just the 2-byte address space of Plane 0, the Basic Multilingual Plane (BMP), which can be accepted as minimal support in some applications, it is 4 for supporting all 17 current planes of Unicode (as of 2019). It should be noted that many popular "emoji" characters live outside the BMP (mostly in Plane 1), and these require 4 bytes.
However, this is just for basic character glyphs. There are also various modifiers, such as making accents appear over the previous character, and it is also possible to link together an arbitrary number of code points to construct one complex "grapheme". In real world programming, therefore, the use or assumption of a fixed maximum number of bytes per character will likely eventually result in a problem for your application.
These considerations imply that UTF-8 character strings should not be "expanded" into arrays of fixed length prior to processing, as has sometimes been done. Instead, programming should be done directly, using string functions specifically designed for UTF-8.
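A small Python illustration of that caveat: one user-perceived character built from several code points can exceed four bytes in total.

    # 'é' written as base letter + combining accent: one grapheme, two code points.
    g = "e\u0301"
    print(len(g), "code points,", len(g.encode("utf-8")), "UTF-8 bytes")    # 2, 3

    # Pile on more combining marks and the byte count keeps growing.
    g2 = "e\u0301\u0308\u0323"
    print(len(g2), "code points,", len(g2.encode("utf-8")), "UTF-8 bytes")  # 4, 7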
Considering just the technical limitations, it is possible to have up to 7 bytes following the current UTF-8 encoding scheme. According to it, if the first byte is not a self-sufficient ASCII character, then it should have the pattern 1(n)0X(7-n), where n <= 7.
Also, theoretically it could be 8, but then the first byte would have no zero bit at all. While other aspects, like the continuation bytes differing from the leading byte, are still there (allowing error detection), I have heard that the byte 11111111 could be invalid, but I can't be sure about that.
The limitation to a maximum of 4 bytes is most likely for compatibility with UTF-16, which I tend to consider a legacy, because the only quality where it excels is processing speed, and only if the string byte order matches (i.e. we read 0xFEFF in the BOM).
