What is the size of a character in the ASCII table vs. UTF-8?

I was confused and I have some questions. The first one is: how many bits is a character in the ASCII table? I have heard that any ASCII character is 7 or 8 bits, but let's take an example: the character "1" is 110001, so its size is 6 bits. My question is: is "1" 6 bits, 7 bits, or 8 bits?
If the answer is 8 bits, like 00110001, how do we encode 00110001 in UTF-8?
Thank you.
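For reference, a minimal C sketch of the situation (it assumes an ASCII-compatible / UTF-8 execution character set, which the question does not state): "1" occupies one 8-bit byte with the value 0x31 (00110001), and because U+0031 is below U+0080, its UTF-8 encoding is that same single byte.

#include <stdio.h>
#include <string.h>

int main(void) {
    /* '1' is code point U+0031: 7 significant bits (0110001), stored in an
       8-bit byte as 0x31 = 00110001.  Every code point below U+0080 keeps
       that same single byte in UTF-8. */
    const char *s = "1";
    printf("bytes: %zu\n", strlen(s));               /* prints 1    */
    printf("value: 0x%02X\n", (unsigned char)s[0]);  /* prints 0x31 */
    return 0;
}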

Related

How does UTF-8 represent characters?

I'm reading UTF-8 Encoding, and I don't understand the following sentence.
For characters equal to or below 2047 (hex 0x07FF), the UTF-8
representation is spread across two bytes. The first byte will have
the two high bits set and the third bit clear (i.e. 0xC2 to 0xDF). The
second byte will have the top bit set and the second bit clear (i.e.
0x80 to 0xBF).
If I'm not mistaken, this means UTF-8 requires two bytes to represent 2048 characters. In other words, we need to choose 2048 values out of the 2^16 = 65,536 possible two-byte combinations to represent each character.
For characters equal to or below 2047 (hex 0x07FF), the UTF-8
representation is spread across two bytes.
Choosing 2048 out of 65,536 would be no big deal. However, UTF-8 explicitly sets boundaries on each byte.
According to the following statements, the number of combinations is 30 (0xDF - 0xC2 + 0x01) for the first byte, and 64 (0xBF - 0x80 + 0x01) for the second byte.
The first byte will have
the two high bits set and the third bit clear (i.e. 0xC2 to 0xDF). The
second byte will have the top bit set and the second bit clear (i.e.
0x80 to 0xBF).
How do 1920 values (30 × 64) accommodate 2048 combinations?
As you already know, 2047 (0x07FF) contains the raw bits
00000111 11111111
If you look at the bit distribution chart for UTF-8:

Bytes  Bits  First code point  Last code point  Byte sequence
1      7     U+0000            U+007F           0xxxxxxx
2      11    U+0080            U+07FF           110xxxxx 10xxxxxx
3      16    U+0800            U+FFFF           1110xxxx 10xxxxxx 10xxxxxx
4      21    U+10000           U+10FFFF         11110xxx 10xxxxxx 10xxxxxx 10xxxxxx

you will see that 0x07FF falls on the second line, so it is encoded as 2 bytes using this bit pattern:
110xxxxx 10xxxxxx
Substitute the raw bits into the xs and you get this result:
11011111 10111111 (0xDF 0xBF)
This is exactly what the description you quoted says:
The first byte will have the two high bits set and the third bit clear (11011111). The second byte will have the top bit set and the second bit clear (10111111).
Think of it as a container, where the encoding reserves a few bits for its own synchronization, and you get to use the remaining bits.
So for the range in question, the encoding "template" is
110 abcde  10 fghijk
(where I have left a single space to mark the boundary between the template and the value from the code point we want to encode, and two spaces between the actual bytes)
and you get to use the 11 bits abcdefghijk for the value you actually want to transmit.
So for the code point U+07EB you get
0x07 00000111
0xEB 11101011
where the top five zero bits are masked out (remember, we only get 11 bits, because the maximum value the encoding can accommodate in two bytes is 0x07FF; a larger value would use a different, three-byte template), and so
0x07 = _____ 111 (template: _____ abc)
0xEB = 11 101011 (template: de fghijk)
abc de = 111 11 (where the first three come from 0x07, and the next two from 0xEB)
fghijk = 101011 (the remaining bits from 0xEB)
yielding the value
110 11111  10 101011
aka 0xDF 0xAB.
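The same substitution can be done with shifts and masks. Here is a rough C sketch (the helper name is mine, for illustration): the high five value bits go after the 110 marker and the low six after the 10 marker.

#include <stdio.h>

/* Encode a code point in the two-byte range (U+0080..U+07FF) using the
   110xxxxx 10xxxxxx template described above. */
static void encode_two_bytes(unsigned int cp, unsigned char out[2]) {
    out[0] = 0xC0 | (cp >> 6);     /* 110 + high five bits (abcde) */
    out[1] = 0x80 | (cp & 0x3F);   /* 10  + low six bits (fghijk)  */
}

int main(void) {
    unsigned char b[2];
    encode_two_bytes(0x07EB, b);
    printf("0x%02X 0x%02X\n", b[0], b[1]);   /* prints 0xDF 0xAB */
    return 0;
}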
Wikipedia's article on UTF-8 contains more examples with nicely colored numbers to see what comes from where.
The range 0x00-0x7F, which can be represented in a single byte, contains 128 code points; the two-byte range thus needs to accommodate 1920 = 2048-128 code points.
The raw encoding would allow values in the range 0xC0-0xDF in the first byte, but the values 0xC0 and 0xC1 are never needed, because they would represent code points which can be represented in a single byte and are thus invalid per the encoding spec. In other words, the 0x02 in 0xC2 comes from the fact that at least one of the high four bits (one of abcd) of the 11 bits this segment of the encoding carries must be set for the value to require two bytes.
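A quick sanity check on that count in C; the two-byte range U+0080 through U+07FF is counted inclusively:

#include <stdio.h>

int main(void) {
    /* Two-byte UTF-8 covers U+0080 .. U+07FF inclusive. */
    printf("%d\n", 0x7FF - 0x80 + 1);   /* prints 1920 = 2048 - 128 */
    return 0;
}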

How does the UTF-8 encoding support a range of 1 to 4 bytes in memory? [duplicate]

This question already has answers here:
How does UTF-8 "variable-width encoding" work?
(3 answers)
Closed 5 years ago.
I hope this is not a silly question at this time of night, but I can't seem to wrap my mind around it.
UTF-8 is a variable length encoding with a minimum of 8 bits per character. Characters with higher code points will take up to 32 bits.
So UTF-8 can encode unicode characters in a range of 1 to 4 bytes.
Does this mean that in a single UTF-8 encoded string, that one character may be 1 byte and another character may be 3 bytes?
If so, how does a computer, when decoding UTF-8, avoid treating those two separate characters as one 4-byte character?
If the data is held in memory as UTF-8 then, yes, it is a variable-width encoding.
However, the encoding lets a parser tell whether the byte it is looking at is the start of a code point or a continuation byte.
From the Wikipedia page for UTF-8:
Bytes  Bits  First code point  Last code point  Byte sequence
1      7     U+000000          U+00007F         0xxxxxxx
2      11    U+000080          U+0007FF         110xxxxx 10xxxxxx
3      16    U+000800          U+00FFFF         1110xxxx 10xxxxxx 10xxxxxx
4      21    U+010000          U+10FFFF         11110xxx 10xxxxxx 10xxxxxx 10xxxxxx
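Put differently, the high bits of each byte tell you what it is, so a decoder never confuses a lead byte with a continuation byte. A rough C sketch of that classification (the helper name and the sample bytes are mine, for illustration):

#include <stdio.h>

/* Classify a byte by its high bits, as in the table above: returns the
   sequence length for a lead byte, or 0 for a continuation byte. */
static int utf8_sequence_length(unsigned char b) {
    if ((b & 0x80) == 0x00) return 1;   /* 0xxxxxxx : single byte (ASCII) */
    if ((b & 0xE0) == 0xC0) return 2;   /* 110xxxxx : two-byte lead       */
    if ((b & 0xF0) == 0xE0) return 3;   /* 1110xxxx : three-byte lead     */
    if ((b & 0xF8) == 0xF0) return 4;   /* 11110xxx : four-byte lead      */
    return 0;                           /* 10xxxxxx : continuation byte   */
}

int main(void) {
    /* "A", "é", "€" written out as raw UTF-8 bytes (1, 2 and 3 bytes). */
    const unsigned char s[] = { 0x41, 0xC3, 0xA9, 0xE2, 0x82, 0xAC, 0x00 };
    for (int i = 0; s[i] != 0; ) {
        int len = utf8_sequence_length(s[i]);
        if (len == 0) break;            /* malformed input: bare continuation byte */
        printf("lead byte 0x%02X -> %d-byte character\n", s[i], len);
        i += len;                       /* skip over the continuation bytes */
    }
    return 0;
}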

Determining number of bytes in MIPS code segment

So, I understand that MIPS is 32-bit and that words are 32 bits (4 bytes).
If I have the following code,
.data
.word 5
.asciiz "Hi"
I know that there is one word being stored and it must be 4 bytes, but how do I determine the number of bytes in the third line? I've asked my instructor for help, but she keeps referring me to the following example:
.asciiz "help"
Apparently this is 5 bytes, but I'm not able to see how or why it is 5 bytes. I would appreciate some clarification; my instructor is reluctant to share techniques, and I can't find information on this in my textbook.
.asciiz creates a zero-terminated ASCII string, i.e. a string of ASCII characters followed by a byte with the value 0 (terminator).
So the number of bytes required is the number of characters plus 1. Hence .asciiz "help" -> 5 bytes, and .asciiz "Hi" -> 3 bytes.
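The same convention exists in C, where a string literal also carries a trailing zero byte; a small sketch:

#include <stdio.h>

int main(void) {
    /* A C string literal includes a terminating 0 byte, just like .asciiz. */
    char help[] = "help";
    char hi[]   = "Hi";
    printf("%zu\n", sizeof help);   /* prints 5: 'h' 'e' 'l' 'p' '\0' */
    printf("%zu\n", sizeof hi);     /* prints 3: 'H' 'i' '\0'         */
    return 0;
}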

What if ASCII used 12 bits per character?

ASCII uses 7 bits to represent characters. Could someone explain what this means for the number of characters that can be supported? How would that change if ASCII used 12 bits per character?
A bit has two possible states. A group of n bits has 2^n possible states.
Therefore 7 bits can represent 2^7 = 128 possible characters and 12 bits can represent 2^12 = 4096 possible characters.
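A quick check of that arithmetic in C (shifting 1 left by n computes 2^n):

#include <stdio.h>

int main(void) {
    printf("%d\n", 1 << 7);    /* prints 128  */
    printf("%d\n", 1 << 12);   /* prints 4096 */
    return 0;
}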
This abridged excerpt from Wikipedia's table of character sets provides historical perspective:
Name                      Year        Bits        Notes
BCDIC                     1928        6 bits      Introduced with the IBM card
FIELDATA                  1956        6/7 bits    Battlefield information (USA)
EBCDIC                    1963        8 bits      IBM computers
ASCII (ASA X3.4-1963)     1963-06-17  7 bits      Teleprinters and computers; original definition of ASCII
ECMA-6                    1965-04-30  7 bits      ASCII localization
ISO 646 (ISO/R646-1967)   1967        7 bits      ASCII localization
ASCII (USAS X3.4-1967)    1967        7 bits      Close to "modern" definition of ASCII
Braille ASCII             1969        6/7 bits    Tactile print for blind persons
ECMA-48                   1972        7 bits      Terminal text manipulation and colors
ISO/IEC 8859              1987        8 bits      International codes
Unicode                   1991        16/32 bits  Unified encoding for most of the world's writing systems
A 12-bit code can support 2^12 = 4096 characters, minus a handful for non-printing characters like null, escape, and a few whitespace characters.
Now, you could construct a computer with 12-bit bytes, but it would be an expensive re-engineering operation. Most computers have 8-bit bytes, at least partly because of ASCII.
But the method chosen to extend ASCII was Unicode, and the encoding that has emerged as the standard is UTF-8. This is a superset of ASCII in a sense: every ASCII character is also a Unicode code point with the same value. For other characters, the top bit of the lead byte is set and additional bytes are added. So it is a variable-width encoding, the codes are always a multiple of 8 bits, and it is slightly open-ended in that codes could be added at the top of the range, but currently the encoding never goes wider than four bytes.

Byte and bits in computer science in C language

A language has 28 different letters in total. Each word in the language is composed of at most 7 letters. You want to create a data type to store a word of this language, and you decide to store the word as an array of letters. How many bits will you assign to the data type to be able to store all possible words of the language?
Op 1: 7
Op 2: 35
Op 3: 28
Op 4: 196
With n bits, you can represent one of a set of 2^n characters, so 5 bits are sufficient to represent one of the 28 characters in your set (2^5 = 32).
For each character in the string, you will need five bits, so a seven-character string will require 35 bits.
Note that you could map all 28^7 words onto 34 bits (ceiling(log(28^7)/log(2)) = 34), but I wouldn't recommend it.
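As a rough C sketch of the 35-bit option (the helper name and the 1..28 letter codes are assumptions of mine, not part of the question): each letter gets 5 bits, and up to seven of them are packed into a 64-bit integer.

#include <stdio.h>
#include <stdint.h>

/* Pack a word of up to 7 letters (alphabet of 28, so 5 bits per letter)
   into the low 35 bits of a 64-bit integer.  Letter codes 1..28 are used
   so that 0 can mean "no letter" for words shorter than 7 letters. */
static uint64_t pack_word(const int *letters, int len) {
    uint64_t packed = 0;
    for (int i = 0; i < len && i < 7; i++)
        packed |= (uint64_t)(letters[i] & 0x1F) << (5 * i);   /* 5 bits each */
    return packed;
}

int main(void) {
    int word[] = { 3, 1, 20 };                    /* a 3-letter word */
    printf("packed: 0x%llx\n", (unsigned long long)pack_word(word, 3));
    printf("bits for 7 letters: %d\n", 7 * 5);    /* prints 35 */
    return 0;
}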
