Base64 calculation for extended ASCII? - algorithm

As far as I know, Base64 can represent any char (including binary):
"Base64 encoding schemes are commonly used when there is a need to encode binary(!) data that needs to be stored and transferred over media that are designed to deal with textual data."
So I tried to apply it to an extended ASCII char (beyond 127).
The char:
After following the simple algorithm:
I got to:
So the value should be Fy.
So why, when I use an online encoder and enter the value via Alt+178, do I get this result:
What is going on here?

Your browser sent the encoding website the UTF-8 encoding of the character; that encoding is not the single byte 178.

That is the UTF-8 encoding of the Unicode character U+2593 (▓), which is the same character as extended ASCII (code page 437) character 178.
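To make the difference concrete, here is a minimal sketch (in Go, chosen only for illustration) that Base64-encodes the single byte 178 and then the UTF-8 bytes a browser actually submits for U+2593:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	// The lone byte 178 (0xB2), Base64-encoded by itself.
	single := []byte{0xB2}
	fmt.Println(base64.StdEncoding.EncodeToString(single)) // sg==

	// What the browser actually sends: the UTF-8 encoding of U+2593 (▓),
	// the character that Alt+178 produces under an OEM code page like 437.
	utf8Bytes := []byte("▓")
	fmt.Printf("% X\n", utf8Bytes)                            // E2 96 93
	fmt.Println(base64.StdEncoding.EncodeToString(utf8Bytes)) // 4paT
}
```

So the online encoder never sees the byte 178 at all; it encodes the three UTF-8 bytes E2 96 93, which is why the result doesn't match a hand calculation done on a single byte.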

Thanks to Damien_The_Unbeliever && zmbq && Markus Jarderot:
so

Related

How to decode/encode ASCII_8BIT in Scala?

I have an external system written in Ruby which sends data over the wire encoded as ASCII_8BIT. How should I decode and encode it in Scala?
I couldn't find a library for decoding and encoding ASCII_8BIT strings in Scala.
As I understand it, ASCII_8BIT is something similar to Base64. However, there is more than one Base64 encoding. Which type of encoding should I use to be sure I cover all corner cases?
What is ASCII-8BIT?
ASCII-8BIT is Ruby's binary encoding (the name "BINARY" is accepted as an alias for "ASCII-8BIT" when specifying the name of an encoding). It is used both for binary data and for text whose real encoding you don't know.
Any sequence of bytes is a valid string in the ASCII-8BIT encoding, but unlike other 8-bit encodings, only the bytes in the ASCII range are considered printable characters (and of course only those that are printable in ASCII). The bytes in the 128-255 range are considered special characters that don't have a representation in other encodings. So trying to convert an ASCII-8BIT string to any other encoding will fail (or replace the non-ASCII characters with question marks, depending on the options you give to encode) unless it only contains ASCII characters.
What's its equivalent in the Scala/JVM world?
There is no strict equivalent. If you're dealing with binary data, you should be using binary streams that don't have an encoding and aren't treated as containing text.
If you're dealing with text, you'll either need to know (or somehow figure out) its encoding or just arbitrarily pick an 8-bit ASCII-superset encoding. That way non-ASCII characters may come out as the wrong character (if the text was actually encoded with a different encoding), but you won't get any errors because any byte is a valid character. You can then replace the non-ASCII characters with question marks if you want.
What does this have to do with Base64?
Nothing. Base64 is a way to represent binary data as ASCII text. It is not itself a character encoding. Knowing that a string has the character encoding ASCII or ASCII-8BIT or any other encoding doesn't tell you whether it contains Base64 data or not.
But do note that a Base64 string will consist entirely of ASCII characters (and not just any ASCII characters, but only letters, numbers, +, / and =). So if your string contains any non-ASCII character or any character except the aforementioned, it's not Base64.
Therefore any Base64 string can be represented as ASCII. So if you have an ASCII-8BIT string containing Base64 data in Ruby, you should be able to convert it to ASCII without any problems. If you can't, it's not Base64.
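A small sketch of that last point (in Go rather than Scala or Ruby, purely for illustration): bytes that arrive "as ASCII-8BIT" can simply be kept as bytes, and if they really contain Base64 they are guaranteed to be plain ASCII and can be decoded directly. The helper name isBase64 is made up for this example.

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// isBase64 reports whether s decodes cleanly as standard Base64
// (which also implies it contains only the allowed ASCII characters).
func isBase64(s string) bool {
	_, err := base64.StdEncoding.DecodeString(s)
	return err == nil
}

func main() {
	// Bytes received from the Ruby side: treat them as binary, not text.
	wire := []byte("SGVsbG8sIHdvcmxkIQ==")

	// Because Base64 output is pure ASCII, interpreting the bytes as a
	// string is safe if (and only if) they really are Base64.
	s := string(wire)
	fmt.Println(isBase64(s)) // true

	decoded, _ := base64.StdEncoding.DecodeString(s)
	fmt.Println(string(decoded)) // Hello, world!
}
```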

Is there a term for characters that appear in iOS logs like `\M-C\M-6` or `\134`?

I'm trying to figure out the term for these types of characters:
\M-C\M-6 (corresponds to german "ö")
\M-C\M-$ (corresponds to german "ä")
\M-C\M^_ (corresponds to german "ß")
I want to know the term for these outputs so that I can easily convert them into the UTF-8 characters they actually are in golang, instead of creating a mapping for each one I come across.
What is the term for these? Unicode? What would be the best way to convert these "characters" to their actual human-readable characters in golang?
It is the vis encoding of UTF-8 encoded text.
Here's an example:
The UTF-8 encoding of the rune ö is the two bytes [0303, 0266] (octal), i.e. 0xC3 0xB6.
vis encodes the byte 0303 as the bytes \M-C and the byte 0266 as the bytes \M-6.
Putting the two levels of encoding together, the rune ö is encoded as the bytes \M-C\M-6.
You can either write a decoder using the documentation on the vis(3) man page or search for a decoding package. The Go standard library does not include such a decoder.
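Here is a minimal sketch of such a decoder. It handles only the escapes that appear above plus a couple of common ones (\M-c, \M^c, \^c, \ddd octal, and \\), not the full vis(3) repertoire:

```go
package main

import "fmt"

// unvis decodes a small subset of the vis(3) encoding: \M-c (meta),
// \M^c (meta + control), \^c (control), \ddd (octal), and \\.
func unvis(s string) ([]byte, error) {
	var out []byte
	for i := 0; i < len(s); i++ {
		if s[i] != '\\' {
			out = append(out, s[i])
			continue
		}
		if i+1 >= len(s) {
			return nil, fmt.Errorf("dangling backslash")
		}
		i++
		switch {
		case s[i] == '\\':
			out = append(out, '\\')
		case s[i] == 'M' && i+2 < len(s) && s[i+1] == '-':
			// \M-c : set the meta (high) bit on c.
			out = append(out, s[i+2]|0x80)
			i += 2
		case s[i] == 'M' && i+2 < len(s) && s[i+1] == '^':
			// \M^c : meta bit plus "control of c".
			out = append(out, (s[i+2]&0x1F)|0x80)
			i += 2
		case s[i] == '^' && i+1 < len(s):
			// \^c : control of c.
			out = append(out, s[i+1]&0x1F)
			i++
		case '0' <= s[i] && s[i] <= '7':
			// \ddd : up to three octal digits.
			v, j := 0, 0
			for ; j < 3 && i+j < len(s) && '0' <= s[i+j] && s[i+j] <= '7'; j++ {
				v = v*8 + int(s[i+j]-'0')
			}
			out = append(out, byte(v))
			i += j - 1
		default:
			return nil, fmt.Errorf("unsupported escape \\%c", s[i])
		}
	}
	return out, nil
}

func main() {
	for _, in := range []string{`\M-C\M-6`, `\M-C\M-$`, `\M-C\M^_`, `\134`} {
		b, err := unvis(in)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%s -> %s\n", in, b) // ö, ä, ß, \
	}
}
```

A complete decoder would also handle the named escapes (\n, \t, \s and so on) described in the man page.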

A few questions about UTF-8

I have a few questions about UTF-8:
1) Is it possible to encode any Unicode character with UTF-8?
2) Does UTF-8 allow encoding any ASCII character using only 1 byte?
3) Is the length of a UTF-8 encoding fixed?
My answers to check:
1) No, it's not possible. It is possible to encode 1,112,064 out of 1,114,112 code points.
2) Yes.
3) No, it could be 1, 2, 3 or 4 bytes.
For question (1), what do you mean by "any Unicode"?
Do you mean "any valid Unicode character"? Then yes.
Do you mean "any possible character value from 0x0 to 0x10FFFF, including the 2,048 surrogate code points that are not valid Unicode character values"? Then no, but only because a valid UTF-8 decoder should reject those values.
The scheme defined by UTF-8 is perfectly capable of encoding those surrogate values in isolation, and in fact it's simpler to write UTF-8 encoding/decoding software that just handles those values like any other.
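A quick Go illustration of points (2) and (3): ASCII stays at one byte, other code points take two to four, and surrogate code points are rejected outright.

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

func main() {
	// UTF-8 is variable-length: ASCII is 1 byte, everything else 2-4 bytes.
	for _, r := range []rune{'A', 'é', '€', '𝄞'} {
		fmt.Printf("U+%04X %c -> %d byte(s)\n", r, r, utf8.RuneLen(r))
	}

	// Surrogate code points (U+D800..U+DFFF) cannot be encoded;
	// RuneLen reports -1 for them.
	fmt.Println(utf8.RuneLen(0xD800)) // -1
}
```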

Make UUID shorter (Hex to ASCII conversion)

In my web application one model uses an identifier that was generated by some UUID tool. As I want that identifier to be part of the URL, I am investigating methods to shorten that UUID string. As it is currently in hexadecimal format, I thought about converting it to ASCII somehow. As it should afterwards only contain normal characters and numbers ([\d\w]+), the normal hex-to-ASCII conversion doesn't seem to work (ugly characters).
Do you know of some nice algorithm or tool (Ruby) to do that?
A UUID is a 128-bit binary number, in the end. If you represent it as 16 unencoded bytes, there's no way to avoid "ugly characters". What you probably want to do is decode it from hex and then encode it using base64. Note that base64 encoding uses the characters + / = as well as A-Za-z0-9, so you'll want to do a little postprocessing (I suggest s/+/-/g; s/\//_/g; s/==$// -- a base64ed UUID will always end with two equals signs).
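A sketch of the idea, written in Go for illustration (the Ruby substitutions above correspond to the URL-safe Base64 alphabet without padding; the example UUID is arbitrary):

```go
package main

import (
	"encoding/base64"
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// A textual UUID is 32 hex digits (36 characters with dashes) = 16 raw bytes.
	uuid := "550e8400-e29b-41d4-a716-446655440000"

	raw, err := hex.DecodeString(strings.ReplaceAll(uuid, "-", ""))
	if err != nil {
		panic(err)
	}

	// RawURLEncoding already uses '-' and '_' instead of '+' and '/' and
	// omits the trailing "==", so the result is URL-safe and 22 characters.
	short := base64.RawURLEncoding.EncodeToString(raw)
	fmt.Println(short, len(short))
}
```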

Importing extended ASCII into Oracle

I have a procedure that imports a binary file containing some strings. The strings can contain extended ASCII, e.g. CHR(224), 'à'. The procedure is taking a RAW and converting the BCD bytes into characters in a string one by one.
The problem is that the extended ASCII characters are getting lost. I suspect this is due to their values meaning something else in UTF8.
I think what I need is a function that takes an ASCII character index and returns the appropriate UTF8 character.
Update: If I happen to know the equivalent Oracle character set for the incoming text can I then convert the raw bytes to UTF8? The source text will always be single byte.
There's no such thing as "extended ASCII." Or, to be more precise, so many encodings are supersets of ASCII, sharing the same first 128 code points, that the term is too vague to be meaningful. You need to find out if the strings in this file are encoded using UTF-8, ISO-8859-whatever, MacRoman, etc.
The answer to the second part of your question is the same. UTF-8 is, by design, a superset of ASCII. Any ASCII character (i.e. 0 through 127) is also a UTF-8 character. To translate some non-ASCII character (i.e. >= 128) into UTF-8, you first need to find out what encoding it's in.
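Once the source encoding is known, the conversion itself is mechanical. As an illustration (in Go, and assuming the single-byte source encoding happens to be ISO-8859-1, where each byte value equals its Unicode code point):

```go
package main

import "fmt"

// latin1ToUTF8 re-encodes bytes known to be ISO-8859-1 (Latin-1) as UTF-8.
// Other single-byte encodings need an explicit byte-to-code-point table.
func latin1ToUTF8(b []byte) string {
	runes := make([]rune, len(b))
	for i, c := range b {
		runes[i] = rune(c) // Latin-1 byte value == Unicode code point
	}
	return string(runes)
}

func main() {
	// 0xE0 is 'à' (CHR(224)) in ISO-8859-1; in UTF-8 it becomes C3 A0.
	s := latin1ToUTF8([]byte{0xE0})
	fmt.Printf("%s = % X\n", s, []byte(s)) // à = C3 A0
}
```

The same principle applies inside Oracle: you must identify the source character set before any conversion to UTF8 can be correct.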
