What is the meaning of "assume char set is ASCII"? - data-structures

I was solving the problem below, and in the first line of its solution I read "assume char set is ASCII".
Can anyone help me by explaining "assume char set is ASCII"? **I don't want any other solution for this problem, I just want to understand the statement.**
Implement an algorithm to determine if a string has all unique characters. What if you cannot use additional data structures?
Thanks in advance for the help.

There is no text but encoded text.
Text is a sequence of "characters", members of a character set. A character set is a one-to-one mapping between a notional character and a non-negative integer, called a codepoint.
An encoding is a mapping between a codepoint and a sequence of bytes.
Examples:
ASCII, 128 codepoints, one encoding
OEM437, 256 codepoints, one encoding
Windows-1252, 251 codepoints, one encoding
ISO-8859-1, 256 codepoints, one encoding
Unicode, 1,114,112 codepoints, many encodings: UTF-8, UTF-16, UTF-32,…
When you receive a byte stream or read a file that represents text, you have to know the character set and encoding. Conversely, when you send a byte stream or write a file that represents text, you have to let the receiver know the character set and encoding. Otherwise, you have a failed communication.
Note: Program source code is almost always text files. So, this communication requirement also applies between you, your editor/IDE and your compiler.
Note: Program console input and output are text streams. So, this communication requirement also applies between the program, its libraries and your console (shell). Run locale or chcp to find out what the encoding is.
Many character sets are a superset of ASCII and some encodings map the same characters with the same byte sequences. This causes a lot of confusion, limits learning, promotes usage of poor terminology, and the partial interoperability leads to buggy code. A deliberate approach to specifications and coding eliminates that.
Examples:
Some people say "ASCII" when they mean the common subset of characters between ASCII and the character set they are actually using. In Unicode and elsewhere this is called C0 Controls and Basic Latin.
Some people say "ASCII Code" when they just mean codepoint or the codepoint's encoded bytes (or code units).
The context of your question is unclear but the statement is trying to say that the distinct characters in your data are in the ASCII character set and therefore their number is less than or equal to 128. Due to the similarity between character sets, you can assume that the codepoint range you need to be concerned about is 0 to 127. (Put comments, asserts or exceptions as applicable in your code to make that clear to readers and provide some runtime checking.)
What this means in your programming language depends on the programming language and its libraries. Many modern programming languages use UTF-16 to represent strings and UTF-8 for streams and files. Programs are often built with standard libraries that account for the console's encoding (actual or assumed) when reading or writing from the console.
So, if your data comes from a file, you must read it using the correct encoding. If your data comes from a console, your program's standard libraries will possibly change encodings from the console's encoding to the encoding of the language's or standard library's native character and string datatypes. If your data comes from a source code file, you have to save it in one specific encoding and tell the compiler what that is. (Usually, you would use the default source code encoding assumed by the compiler because that generally doesn't change from system to system or person to person.)
The "additional" data structures bit probably refers to what a language's standard libraries provide, such as list, map or dictionary. Use what you've been taught so far, like maybe just an array. Of course, you can just ask.

Basically, assume that character codes will be within the range 0-127. You won't need to deal with crazy accented characters.
More than likely, though, they won't use many, if any, codes below 32, since those are mostly non-printable control characters.

Characters such as 'a', 'b', '1' or '#' are encoded as a binary number when stored and used by a computer.
e.g.
'a' = 1100001
'b' = 1100010
There are a number of different standards that you could use for this encoding. ASCII is one of those standards. The other most common standard is called UTF-8.
Not all characters can be encoded by all standards; ASCII supports a much more limited set of characters than UTF-8. As such, an encoding also defines the set of characters (the "char set") that it supports.
ASCII encodes each character into a single byte. It supports the uppercase letters A-Z, the lowercase letters a-z, the digits 0-9, a small number of familiar symbols, and a number of control characters that were used in early communication protocols.
The full set of characters supported by ASCII can be seen here: https://en.wikipedia.org/wiki/ASCII
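To make the 'a'/'b' example above concrete, here is a small Python check (Python is just my choice of illustration; the answer itself is language-neutral) printing each character's codepoint, its 7-bit binary form, and the single ASCII-encoded byte:

```python
for ch in "ab1#":
    cp = ord(ch)                   # the codepoint
    print(ch, cp, format(cp, "07b"), ch.encode("ascii"))
# a 97 1100001 b'a'
# b 98 1100010 b'b'
# 1 49 0110001 b'1'
# # 35 0100011 b'#'
```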

Related

Can I represent arbitrary binary data in WTF-8 (or any extension of UTF-8)?

A long time ago, there was a two-byte Unicode encoding UCS-2, but then it was determined that two bytes are sometimes not enough. In order to cram more codepoints into 16 bit, surrogate pairs were introduced in UTF-16. Since Windows started out with UCS-2, it doesn't enforce rules around surrogate pairs in some places, most notably file systems.
Programs that want to use UTF-8 internally have a problem now dealing with these invalid UTF-16 sequences. For this, WTF-8 was developed. It is mostly relaxed UTF-8, but it is able to round-trip invalid surrogate pairs.
Now it seems like it should be possible to relax UTF-8 a bit further and allow it to represent arbitrary binary data in a round-trip-safe way. The strings I am thinking about are originally 99.9% either valid UTF-8, or almost valid UTF-16 of the kind WTF-8 can stomach. But occasionally there will be invalid byte sequences thrown in.
WTF-8 defines generalized UTF-8 as:
an encoding of sequences of code points (not restricted to Unicode scalar values) using 8-bit bytes, based on the same underlying algorithm as UTF-8. It is a strict superset of UTF-8 (like UTF-8 is a strict superset of ASCII).
Would generalized UTF-8 allow me to store arbitrary 32-bit sequences, and thus arbitrary data? Or is there another way, such as a Unicode escape character? Things I don't want to do are Base64 encoding or percent-encoding, since I want to leave valid Unicode strings unchanged.
Standard disclaimer: I encountered this problem a couple times before, but right now it is an academic question, and I'm just interested in a straight answer how to do this. There is no XY problem :-)

What are surrogate characters in UTF-8?

I have a strange validation program that validates whether a UTF-8 string is a valid host name (the Zend Framework Hostname validator in PHP). It allows IDNs (internationalized domain names). It compares each subdomain with sets of characters defined by their hex byte representation. Two such sets are D800-DB7F and DC00-DFFF. PHP's regexp comparison function preg_match fails during these comparisons and says that DC00-DFFF characters are not allowed. From Wikipedia I learned these are called surrogate characters in UTF-8. What are they, and which characters do they actually correspond to? I have read about them in several places but I still don't understand what they are.
What are surrogate characters in UTF-8?
This is almost like a trick question.
Approximate answer #1: 4 bytes (if paired and encoded in UTF-8).
Approximate answer #2: Invalid (if not paired).
Approximate answer #3: It's not UTF-8; It's Modified UTF-8.
Synopsis: The term doesn't apply to UTF-8.
Unicode codepoints have a range that needs 21 bits of data.
UTF-16 code units are 16 bits. UTF-16 encodes some ranges of Unicode codepoints as one code unit and others as a pair of code units, the first from a "high" range, the second from a "low" range. Unicode reserves the codepoints that fall in the high and low ranges and marks them invalid. They are sometimes called surrogates, but they are not characters. They don't mean anything by themselves.
UTF-8 code units are 8 bits. UTF-8 encodes several distinct ranges of codepoints in one to four code units, respectively.
#1 It happens that the codepoints that UTF-16 encodes with two 16-bit code units, UTF-8 encodes with four 8-bit code units, and vice versa.
#2 You can apply the UTF-8 encoding algorithm to the reserved codepoints, but the result is invalid: it can't be decoded to a valid codepoint. A compliant reader would throw an exception, or throw out the bytes and insert a replacement character (�).
#3 Java provides a way of implementing functions in external code with a system called JNI. The Java String API provides access to String and char as UTF-16 code units. In certain places in JNI, presumably as a convenience, string values are modified UTF-8. Modified UTF-8 is the UTF-8 encoding algorithm applied to UTF-16 code units instead of Unicode codepoints.
Regardless, the fundamental rule of character encodings is to read with the encoding that was used to write. If any sequence of bytes is to be considered text, you must know the encoding; otherwise, you have data loss.
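A short Python illustration of points #1 and #2 above (my addition, assuming Python's built-in codecs behave as compliant encoders):

```python
# #1: U+1F914 takes two 16-bit code units in UTF-16 but four 8-bit code units in UTF-8.
print(len("\U0001F914".encode("utf-16-le")))  # 4 bytes = two 16-bit code units
print(len("\U0001F914".encode("utf-8")))      # 4 bytes = four 8-bit code units

# #2: a lone surrogate codepoint is not a valid scalar value, so a compliant
# UTF-8 encoder rejects it.
try:
    "\ud83e".encode("utf-8")
except UnicodeEncodeError as e:
    print(e)
```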

How to Decode UTF-8 Text Sequence \ud83e\udd14

I'm reading UTF-8 text that contains "\ud83e\udd14". Reading the specification, it says that U+D800 to U+DFFF are not used. Yet if I run this through a decoder such as Microsoft's System.Web.Helpers.Json.Decode, it yields the correct result of an emoticon of a face with a tongue hanging out. The text originates through Twitter's search api.
My question: how should this sequence be decoded? I'm looking for what the final hex sequence would be and how it is obtained. Thanks for any guidance. If my question isn't clear, please let me know and I will try to improve it.
You are coming at this from an interesting perspective. The first thing to note is that you're dealing with two levels of text: a JSON document and a string within it.
Synopsis: You don't need to write code to decode it. Use a library that deserializes JSON into objects, such as Newtonsoft's JSON.Net.
But, first, Unicode. Unicode is a character set with a bit of a history. Unlike almost every character set, 1) it has more than one encoding, and 2) it is still growing. A couple of decades ago, it had fewer than 65,536 codepoints and that was thought to be enough. So, encoding each codepoint as a 2-byte integer was the plan. It was called UCS-2 or, simply, the Unicode encoding. (Microsoft has stuck with Encoding.Unicode in .NET, which causes some confusion.)
Aside: Codepoints are identified for discussion using the U+ABCD (hexadecimal) format.
Then the Unicode consortium decided to add more codepoints: all the way to U+10FFFF. For that, encodings need at least 21 bits. UTF-32, integers with 32 bits, is an obvious solution but not very dense. So, encodings that use a variable number of code units were invented. UTF-8 uses one to four 8-bit code units, depending on the codepoint.
But a lot of languages were adopting UCS-2 in the 1990s. Documents, of course, can be transformed at will, but code that processes UCS-2 would break without a compatible encoding for the expanded character set. Since U+D800 to U+DFFF were unassigned, UCS-2 could stay the same and those "surrogate codepoints" could be used to encode the new codepoints. The result is UTF-16. Each codepoint is encoded in one or two 16-bit code units. So, programs that processed UCS-2 could automatically process UTF-16 as long as they didn't need to understand it. Programs written in the same system could be considered to be processing UTF-16, especially with libraries that do understand it. There is still the hazard of things like string length giving the number of UTF-16 code units rather than the number of codepoints, but it has otherwise worked out well.
As for the \ud83e\udd14 notation, languages that use Unicode in their syntax or literal strings wanted a way to accept source files in a non-Unicode encoding and still support all the Unicode codepoints. Being designed in the 1990s, they simply wrote the UCS-2 code units in hexadecimal. Of course, that too was extended to UTF-16. This UTF-16 code unit escape syntax allows intermediary systems to handle source code files with a non-Unicode encoding.
Now, JSON is based on JavaScript, and JavaScript's strings are sequences of UTF-16 code units. So JSON has adopted the UTF-16 code unit escape syntax from JavaScript. However, it's not very useful (unless you have to deal with intermediary systems that can't be made to use UTF-8 or treat files they don't understand as binary). The old JSON standard required JSON documents exchanged between systems to be encoded with UTF-8, UTF-16 or UTF-32. The newer RFC 8259 requires UTF-8.
So, you don't have "UTF-8 text", you have Unicode text encoded with UTF-8. The text itself is a JSON document. JSON documents have names and values that are Unicode text as sequences of UTF-16 code units with escapes allowed. Your document has the codepoint U+1F914 written, not as "🤔" but as "\ud83e\udd14".
There are plenty of libraries that transform JSON to objects, so you shouldn't need to decode the names or values in a JSON document yourself. To do it manually, you'd recognize the escape prefix, take the next 4 hex digits as a surrogate code unit, extract its data bits, then combine them with the bits from the paired surrogate that should follow.
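For example, in Python (the answer mentions JSON.Net for .NET; this is just an analogous sketch), letting the JSON library do the pairing looks like this:

```python
import json

# The JSON parser, not hand-written code, combines the escaped surrogate
# pair back into the single codepoint U+1F914.
s = json.loads('"\\ud83e\\udd14"')
print(s)            # 🤔
print(hex(ord(s)))  # 0x1f914
print(len(s))       # 1 codepoint (Python strings count codepoints, not UTF-16 code units)
```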
Thought I'd read up on UTF-16 to see if it gave me any clues, and it turns out this is what it calls a surrogate pair. The hex formula for decoding is:
(H - D800) * 400 + (L - DC00) + 10000
where H is the first (high) surrogate code unit and L is the second (low) surrogate code unit.
So \ud83e\udd14 becomes 1f914
Apparently decoders of JSON (and other formats that use this escape syntax) must anticipate UTF-16 surrogate pairs, even when the document itself is encoded as UTF-8.
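Worked out for the pair from the question (a quick Python check of the formula; nothing here is specific to Python):

```python
H, L = 0xD83E, 0xDD14
codepoint = (H - 0xD800) * 0x400 + (L - 0xDC00) + 0x10000
print(hex(codepoint))  # 0x1f914
```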

bits and bytes and what form they take

I'm still confused about bits and bytes, although I've been searching through the internet. Is one ASCII character = 1 byte = 8 bits? So 8 bits have 256 unique patterns, which covers all the ASCII codes; in what form is it stored in our computer?
And if I type "Hello", does that mean it consists of 5 bytes?
Yes to everything you wrote. "Bit" is a binary digit: a 0 or a 1. Historically there existed bytes of smaller sizes; now "byte" only ever means "8 bits of information", or a number between 0 and 255.
No. ASCII is a character set with 128 codepoints stored as the values 0-127. Modern computers predominantly address 8-bit memory and disk locations so a 7-bit ASCII value takes up 8 bits.
There is no text but encoded text. An encoding maps a member of a character set to one or more bytes. Unless you absolutely know you are using ASCII, you probably aren't. There are quite a few character sets with encodings that cover all 256 byte values and use any combination of byte values to encode a string.
There are several character sets that are similar but have fewer than 256 characters. And others use more than one byte to encode a codepoint and don't use every combination of byte values.
Just so you know, Unicode is the predominant character set except in very specialized situations. It has several encodings. UTF-8 is often used for storage and streams. UTF-16 is often used in memory, particularly in Java, .NET, JavaScript, XML, …. When text is communicated between systems, there has to be an agreement, specification, standard, or indication about which character set and encoding it uses so a sequence of bytes can be interpreted as characters.
To add to the confusion, programming languages have data types called char, Character, etc. You have to look at the specific language's reference manual to see what they mean. For example, in C, char is simply an integer type whose size is the size of one code unit of the character encoding used by that C implementation. (C also calls this a "byte", and it is not necessarily 8 bits. In all other contexts, people mean 8 bits when they say "byte". If they want to be exceedingly unambiguous they might say "octet".)
"Hello" is five characters. In a specific character set, it is five codepoints. In a specific encoding for that character set, it could be 5, 10 or 20, or ??? bytes.
Also, in the source code of a specific language, a literal string like that might be "null-terminated". This means that you could say it is 6 "characters". Other languages might store a string as a counted sequence of code units. Again, you have to look at the language reference to know the underlying data structure of strings. Or, if the language and the libraries used with it are sufficiently high-level, you might never need to know such internals.
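A quick Python check of the "5, 10 or 20" figures above (assuming ASCII, UTF-8, UTF-16 and UTF-32 as the specific encodings):

```python
s = "Hello"                        # five characters, five codepoints
print(len(s.encode("ascii")))      # 5 bytes
print(len(s.encode("utf-8")))      # 5 bytes
print(len(s.encode("utf-16-le")))  # 10 bytes
print(len(s.encode("utf-32-le")))  # 20 bytes
```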

What is a multibyte character set?

Does the term multibyte refer to a charset whose characters can, but don't have to be, wider than 1 byte (e.g. UTF-8), or does it refer to character sets which are in any case wider than 1 byte (e.g. UTF-16)? In other words: what is meant if anybody talks about multibyte character sets?
The term is ambiguous, but in my internationalization work, we typically avoided the term "multibyte character sets" to refer to Unicode-based encodings. Generally, we used the term only for legacy encoding schemes that had one or more bytes to define each character (excluding encodings that require only one byte per character).
Shift-JIS, JIS, EUC-JP and EUC-KR, along with the Chinese encodings, are typically included.
Most of the legacy encodings, with some exceptions, require a sort of state machine model (or, more simply, a page swapping model) to process, and moving backwards in a text stream is complicated and error-prone. UTF-8 and UTF-16 do not suffer from this problem, as UTF-8 can be tested with a bitmask and UTF-16 can be tested against a range of surrogate pairs, so moving backward and forward in a non-pathological document can be done safely without major complexity.
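A minimal sketch of the bitmask test mentioned above (Python, my own illustration): UTF-8 continuation bytes always match the pattern 10xxxxxx, so you can step backwards to the start of the previous character without decoding the whole stream.

```python
def prev_char_start(data: bytes, i: int) -> int:
    """Index of the first byte of the character that ends just before position i."""
    i -= 1
    while i > 0 and (data[i] & 0xC0) == 0x80:  # skip 10xxxxxx continuation bytes
        i -= 1
    return i

buf = "aé€🤔".encode("utf-8")          # 1 + 2 + 3 + 4 = 10 bytes
print(prev_char_start(buf, len(buf)))  # 6, where the last character's bytes begin
```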
A few legacy encodings, for languages like Thai and Vietnamese, have some of the complexity of multibyte character sets but are really just built on combining characters, and aren't generally lumped in with the broad term "multibyte."
What is meant if anybody talks about multibyte character sets?
That, as usual, depends on who is doing the talking!
Logically, it should include UTF-8, Shift-JIS, GB etc.: the variable-length encodings. UTF-16 would often not be considered in this group (even though it kind of is, what with the surrogates; and certainly it's multiple bytes when encoded into bytes via UTF-16LE/UTF-16BE).
But in Microsoftland the term would more typically be used to mean a variable-length default system codepage (for legacy non-Unicode applications, of which there are sadly still plenty). In this usage, UTF-8 and UTF-16LE/UTF-16BE cannot be included because the system codepage on Windows cannot be set to either of these encodings.
Indeed, in some cases “mbcs” is no more than a synonym for the system codepage, otherwise known (even more misleadingly) as “ANSI”. In this case a “multibyte” character set could actually be something as trivial as cp1252 Western European, which only uses one byte per character!
My advice: use “variable-length” when you mean that, and avoid the ambiguous term “multibyte”; when someone else uses it you'll need to ask for clarification, but typically someone with a Windows background will be talking about a legacy East Asian codepage like cp932 (Shift-JIS) and not a UTF.
All character sets where you don't have a 1 byte = 1 character mapping. All Unicode variants are multibyte, but so are the Asian character sets.
For more information, I suggest reading this Wikipedia article.
A multibyte character means a character whose encoding requires more than 1 byte. This does not imply, however, that all characters using that particular encoding will have the same width (in terms of bytes). E.g., UTF-8 and UTF-16 encoded characters may sometimes use multiple bytes, whereas all UTF-32 encoded characters always use 32 bits.
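To illustrate that variable width per character (a Python sketch, assuming the standard codecs):

```python
for ch in ("A", "é", "€", "🤔"):
    print(ch,
          len(ch.encode("utf-8")),      # 1, 2, 3 or 4 bytes
          len(ch.encode("utf-16-le")),  # 2 or 4 bytes
          len(ch.encode("utf-32-le")))  # always 4 bytes
```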
References:
IBM: Multibyte Characters
Unicode and MultiByte Character Set (archived), Unicode and Multibyte Character Set (MBCS) Support | Microsoft Docs
Unicode Consortium Website
A multibyte character set may consist of both one-byte and two-byte characters. Thus a multibyte-character string may contain a mixture of single-byte and double-byte characters.
Ref: Single-Byte and Multibyte Character Sets
UTF-8 is multi-byte, which means that each English (ASCII) character is stored in 1 byte, while non-English characters such as Chinese or Thai are stored in 3 bytes. When you mix Chinese/Thai with English, like "ทt", the first Thai character "ท" uses 3 bytes while the second English character "t" uses only 1 byte. The people who designed multi-byte encodings realized that an English character shouldn't be stored in 3 bytes when it fits in 1 byte, because of the waste of storage space.
UTF-16 stores each character, English or non-English, in a fixed 2-byte length (for characters in the Basic Multilingual Plane), so it is usually not called multi-byte but rather a wide-character encoding. It is very suitable for Chinese/Thai, where each character fits entirely in 2 bytes, but printing to a UTF-8 console requires a conversion from wide characters to the multi-byte format, for example with wcstombs().
UTF-32 stores each character in a fixed 4-byte length, but it is rarely used for storage because of the wasted space.
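The mixed Thai/English example above, checked in Python (my illustration, not part of the original answer):

```python
s = "ทt"
print(len(s.encode("utf-8")))      # 4 bytes: 3 for "ท" + 1 for "t"
print(len(s.encode("utf-16-le")))  # 4 bytes: 2 per character, both in the BMP
```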
Typically the former, i.e. UTF-8-like. For more info, see Variable-width encoding.
The former - although the term "variable-length encoding" would be more appropriate.
I generally use it to refer to any character set that can use more than one byte per character.

Resources