XMLParser in Pharo Claims U+00A0 is "Invalid UTF-8"

Given the input:
<?xml version='1.0' encoding='UTF-8' standalone='yes' ?>
<sms body=". what" />
where the character after the "." in the body attribute of the sms tag is U+00A0, I get the error:
XMLEncodingException: Invalid UTF-8 character encoding (line 2) (column 13)
IIUC, the UTF-8 representation of that character is 0xC2 0xA0 per Wikipedia. Sure enough, bytes 72 and 73 of the input are 194 and 160 respectively.
This seems like a bug in XMLParser, or am I missing something?

Thanks to Monty for coming to the rescue on the Pharo User's list:
You're double decoding. Use onFileNamed:/parseFileNamed: instead (and
the DOM printToFileNamed: family of messages when writing) and let
XMLParser take care of this for you, or disable XMLParser decoding before
parsing with #decodesCharacters:.
Longer explanation:
The class-side #on:/#parse: messages take either a string or a stream (read
the definitions). You gave it a FileReference, but because the argument is
tested with isString and sent #readStream otherwise, it didn't blow up then.
File refs sent #readStream return file streams that do automatic
decoding. But XMLParser automatically attempts its own decoding too,
if:
The input starts with a BOM or it can be inferred by null bytes
before or after the first non-null byte.
There is an encoding declaration with a non-UTF-8 encoding.
There is a UTF-8 encoding declaration but the stream is not a normal
ReadStream (your case).
So it gets decoded twice, and the decoded value of the char causes the
error. I'll consider changing the heuristic to make it less eager to
decode.

Related

JavaScript alternative to Ruby's force_encoding(Encoding::ASCII_8BIT)

I'm making my way through the Building Git book, but attempting to build my implementation in JavaScript. I'm stumped at the part of reading data in this file format that apparently only Ruby uses. Here is the excerpt from the book about this:
Note that we set the string’s encoding to ASCII_8BIT, which is Ruby’s way of saying that the string represents arbitrary binary data rather than text per se. Although the blobs we’ll be storing will all be ASCII-compatible source code, Git does allow blobs to be any kind of file, and certainly other kinds of objects — especially trees — will contain non-textual data. Setting the encoding this way means we don’t get surprising errors when the string is concatenated with others; Ruby sees that it’s binary data and just concatenates the bytes and won’t try to perform any character conversions.
Is there a way to emulate this encoding in JS?
or
Is there an alternative encoding I can use that JS and Ruby share that won't break anything?
Additionally, I've tried using Buffer.from(< text input >, 'binary'), but it doesn't result in the same number of bytes that Ruby's ASCII-8BIT returns, because in Node.js 'binary' maps to ISO-8859-1.
Node certainly supports binary data, that's kind of what Buffer is for. However, it is crucial to know what you are converting into what. For example, the emoji "☺️" is encoded as six bytes in UTF-8:
// UTF-16 (JS) string to UTF-8 representation
Buffer.from('☺️', 'utf-8')
// => <Buffer e2 98 ba ef b8 8f>
If you happen to have a string that is not a native JS string (i.e. its encoding is different), you can use the encoding parameter to make Buffer interpret each character in a different manner (though only a few conversions are supported). For example, if we have a string of six characters that correspond to the six numbers above, it is not a smiley face for JavaScript, but Buffer.from can help us repackage it:
Buffer.from('\u00e2\u0098\u00ba\u00ef\u00b8\u008f', 'binary')
// => <Buffer e2 98 ba ef b8 8f>
JavaScript itself has only one encoding for its strings; thus, the parameter 'binary' is not really the binary encoding, but a mode of operation for Buffer.from, telling it that the string would have been a binary string if each character were one byte (however, since JavaScript internally uses UCS-2, each character is always represented by two bytes). Thus, if you use it on something that is not a string of characters in range from U+0000 to U+00FF, it will not do the correct thing, because there is no such thing (GIGO principle). What it will actually do is get the lower byte of each character, which is probably not what you want:
Buffer.from('STUFF', 'binary') // plain ASCII, in the 8BIT range: U+0000 to U+00FF
// => <Buffer 53 54 55 46 46> ("STUFF")
Buffer.from('ＳＴＵＦＦ', 'binary') // fullwidth characters: U+FF33 U+FF34 U+FF35 U+FF26 U+FF26
// => <Buffer 33 34 35 26 26> (garbage)
So, Node's Buffer structure exactly corresponds to Ruby's ASCII-8BIT "encoding" (binary is an encoding like "bald" is a hair style — it simply means no interpretation is attached to bytes; e.g. in ASCII, 65 means "A"; but in binary "encoding", 65 is just 65). Buffer.from with 'binary' lets you convert weird strings where one character corresponds to one byte into a Buffer. It is not the normal way of handling binary data; its function is to un-mess-up binary data when it has been read incorrectly into a string.
I assume you are reading a file as string, then trying to convert it to a Buffer — but your string is not actually in what Node considers to be the "binary" form (a sequence of characters in range from U+0000 to U+00FF; thus "in Node.js binary maps to ISO-8859-1" is not really true, because ISO-8859-1 is a sequence of characters in range from 0x00 to 0xFF — a single-byte encoding!).
Ideally, to have a binary representation of file contents, you would want to read the file as a Buffer in the first place (by using fs.readFile without an encoding), without ever touching a string.
(If my guess here is incorrect, please specify what the contents of your < text input > is, and how you obtain it, and in which case "it doesn't result in the same amount of bytes".)

Octal, Hex, Unicode

I have a character appearing over the wire with hex value \xb1 (octal \261).
This is what my header looks like:
From: "\261Central Station <sip#...>"
Looking at the extended ASCII table, the character is "±".
What I don't understand:
If I try to test the same by passing "±Central Station" in the header, I see it converted to "\xC2\xB1". Why?
How can I have "\xB1" or "\261" appear over the wire instead of "\xC2\xB1"?
If I try to print "\xB1" or "\261", I never see "±" being printed. But if I print "\u00b1" it prints the desired character; I'm assuming that's because "\u00b1" is the Unicode format.
From the page you linked to:
The extended ASCII codes (character code 128-255)
There are several different variations of the 8-bit ASCII table. The table below is according to ISO 8859-1, also called ISO Latin-1.
That's worth reading twice. The character codes 128–255 aren't ASCII (ASCII is a 7-bit encoding and ends at 127).
Assuming that you're correct that the character in question is ± (it's likely, but not guaranteed), your text could be encoded as ISO 8859-1 or, as @muistooshort kindly pointed out in the comments, any of a number of other ISO 8859-X or CP-12XX (Windows-12XX) encodings. We do know, however, that the text isn't (valid) UTF-8, because 0xb1 on its own isn't a valid UTF-8 character.
If you're lucky, whatever client is sending this text specified the encoding in the Content-Type header.
As to your questions:
If I try to test the same by passing ±Central Station in header I see it get converted to \xC2\xB1. Why?
The text you're passing is in UTF-8, and the bytes that represent ± in UTF-8 are 0xC2 0xB1.
How can I have \xB1 or \261 appearing over the wire instead of \xC2\xB1?
We have no idea how you're testing this, so we can't answer this question. In general, though: Either send the text encoded as ISO 8859-1 (Encoding::ISO_8859_1 in Ruby), or whatever encoding the original text was in, or as raw bytes (Encoding::ASCII_8BIT or Encoding::BINARY, which are aliases for each other).
If I try to print \xB1 or \261 I never see ± being printed. But if I print \u00b1 it prints the desired character. (I'm assuming that's because \u00b1 is the Unicode format, but I would love it if someone could explain this in detail.)
That's not a question, but the reason is that \xB1 (\261) is not a valid UTF-8 character. Some interfaces will print � for invalid characters; others will simply elide them. \u00b1, on the other hand, is a valid Unicode code point, which Ruby knows how to represent in UTF-8.
Brief aside: UTF-8 (like UTF-16 and UTF-32) is a character encoding specified by the Unicode standard. U+00B1 is the Unicode code point for ±, and 0xC2 0xB1 are the bytes that represent that code point in UTF-8. In Ruby we can represent UTF-8 characters using either the Unicode code point (\u00b1) or the UTF-8 bytes (in hex: \xC2\xB1; or octal: \302\261, although I don't recommend the latter since fewer Rubyists are familiar with it).
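To make that aside concrete, here is a quick Ruby sketch (hedged: it assumes a UTF-8 source file, and the inspect output is abbreviated):
"±".bytes                        # => [194, 177]   (0xC2 0xB1, the UTF-8 bytes)
"±".encode("ISO-8859-1").bytes   # => [177]        (0xB1, the single Latin-1 byte)
"\u00b1" == "±"                  # => true         (escape for the code point U+00B1)
"\xB1".valid_encoding?           # => false        (a lone 0xB1 is not valid UTF-8)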
Character encoding is a big topic, well beyond the scope of a Stack Overflow answer. For a good primer, read Joel Spolsky's "The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)", and for more details on how character encoding works in Ruby read Yehuda Katz's "Encodings, Unabridged". Reading both will take you less than 30 minutes and will save you hundreds of hours of pain in the future.

base64 encode length parameter

I am decoding a base64 string, modifying it, and re-encoding it with Ruby. The problem is that when I re-encode it, the Ruby Base64 library adds a line break after every 60 or so characters. How can I tell it not to impose a maximum line length?
val = "QmFzZTY0IGlzIGEgZ2VuZXJpYyB0ZXJtIGZvciBhIG51bWJlciBvZiBzaW1pbGFyIGVuY29kaW5nIHNjaGVtZXMgdGhhdCBlbmNvZGUgYmluYXJ5IGRhdGEgYnkgdHJlYXRpbmcgaXQgbnVtZXJpY2FsbHkgYW5kIHRyYW5zbGF0aW5nIGl0IGludG8gYSBiYXNlIDY0IHJlcHJlc2VudGF0aW9uLiBUaGUgQmFzZTY0IHRlcm0gb3JpZ2luYXRlcyBmcm9tIGEgc3BlY2lmaWMgTUlNRSBjb250ZW50IHRyYW5zZmVyIGVuY29kaW5nLg0KDQpCYXNlNjQgZW5jb2Rpbmcgc2NoZW1lcyBhcmUgY29tbW9ubHkgdXNlZCB3aGVuIHRoZXJlIGlzIGEgbmVlZCB0byBlbmNvZGUgYmluYXJ5IGRhdGEgdGhhdCBuZWVkcyBiZSBzdG9yZWQgYW5kIHRyYW5zZmVycmVkIG92ZXIgbWVkaWEgdGhhdCBhcmUgZGVzaWduZWQgdG8gZGVhbCB3aXRoIHRleHR1YWwgZGF0YS4gVGhpcyBpcyB0byBlbnN1cmUgdGhhdCB0aGUgZGF0YSByZW1haW5zIGludGFjdCB3aXRob3V0IG1vZGlmaWNhdGlvbiBkdXJpbmcgdHJhbnNwb3J0LiBCYXNlNjQgaXMgdXNlZCBjb21tb25seSBpbiBhIG51bWJlciBvZiBhcHBsaWNhdGlvbnMgaW5jbHVkaW5nIGVtYWlsIHZpYSBNSU1FLCBhbmQgc3RvcmluZyBjb21wbGV4IGRhdGEgaW4gWE1MLg=="
decoded_val = Base64.decode64(val)
encoded_val = Base64.encode64(decoded_val)
#=> QmFzZTY0IGlzIGEgZ2VuZXJpYyB0ZXJtIGZvciBhIG51bWJlciBvZiBzaW1p
# bGFyIGVuY29kaW5nIHNjaGVtZXMgdGhhdCBlbmNvZGUgYmluYXJ5IGRhdGEg
# YnkgdHJlYXRpbmcgaXQgbnVtZXJpY2FsbHkgYW5kIHRyYW5zbGF0aW5nIGl0
# IGludG8gYSBiYXNlIDY0IHJlcHJlc2VudGF0aW9uLiBUaGUgQmFzZTY0IHRl
# cm0gb3JpZ2luYXRlcyBmcm9tIGEgc3BlY2lmaWMgTUlNRSBjb250ZW50IHRy
# YW5zZmVyIGVuY29kaW5nLg0KDQpCYXNlNjQgZW5jb2Rpbmcgc2NoZW1lcyBh
# cmUgY29tbW9ubHkgdXNlZCB3aGVuIHRoZXJlIGlzIGEgbmVlZCB0byBlbmNv
# ZGUgYmluYXJ5IGRhdGEgdGhhdCBuZWVkcyBiZSBzdG9yZWQgYW5kIHRyYW5z
# ZmVycmVkIG92ZXIgbWVkaWEgdGhhdCBhcmUgZGVzaWduZWQgdG8gZGVhbCB3
# aXRoIHRleHR1YWwgZGF0YS4gVGhpcyBpcyB0byBlbnN1cmUgdGhhdCB0aGUg
# ZGF0YSByZW1haW5zIGludGFjdCB3aXRob3V0IG1vZGlmaWNhdGlvbiBkdXJp
# bmcgdHJhbnNwb3J0LiBCYXNlNjQgaXMgdXNlZCBjb21tb25seSBpbiBhIG51
# bWJlciBvZiBhcHBsaWNhdGlvbnMgaW5jbHVkaW5nIGVtYWlsIHZpYSBNSU1F
# LCBhbmQgc3RvcmluZyBjb21wbGV4IGRhdGEgaW4gWE1MLg==
RFC 4648: The Base16, Base32, and Base64 Data Encodings has this to say:
3.3. Interpretation of Non-Alphabet Characters in Encoded Data
[...]
Implementations MUST reject the encoded data if it contains
characters outside the base alphabet when interpreting base-encoded
data, unless the specification referring to this document explicitly
states otherwise. Such specifications may instead state, as MIME
does, that characters outside the base encoding alphabet should
simply be ignored when interpreting data ("be liberal in what you
accept"). Note that this means that any adjacent carriage return/
line feed (CRLF) characters constitute "non-alphabet characters" and
are ignored.
So the newlines are fine and pretty much everything will ignore them even if they're not strictly compliant with RFC 4648.
Also, the fine manual has this to say:
encode64(bin)
Returns the Base64-encoded version of bin. This method complies with RFC 2045. Line feeds are added to every 60 encoded charactors [sic].
So the 60 character line length is intentional and specified. If you want strict RFC 4648 Base64 (i.e. no newlines), then there is strict_encode64:
strict_encode64(bin)
Returns the Base64-encoded version of bin. This method complies with RFC 4648. No line feeds are added.
So you can say Base64.strict_encode64(val) to get the output you're looking for.
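For example, using the decoded_val from the question above (a minimal sketch with Ruby's standard Base64 module):
require 'base64'

wrapped  = Base64.encode64(decoded_val)         # inserts "\n" after every 60 characters (RFC 2045 style)
one_line = Base64.strict_encode64(decoded_val)  # a single line, no line feeds (RFC 4648)
Base64.strict_decode64(one_line) == decoded_val # => true, the round trip is lossless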
And for reference, here's the relevant section of RFC 2045:
6.8. Base64 Content-Transfer-Encoding
[...]
The encoded output stream must be represented in lines of no more
than 76 characters each. All line breaks or other characters not
found in Table 1 must be ignored by decoding software.
So the 60 character line length is somewhat arbitrary but compliant with RFC 2045 since 60 < 76.

Why do we use Base64?

Wikipedia says
Base64 encoding schemes are commonly used when there is a need to encode binary data that needs be stored and transferred over media that are designed to deal with textual data. This is to ensure that the data remains intact without modification during transport.
But isn't data always stored/transmitted in binary anyway, because our machines' memory stores binary and it just depends on how you interpret it? So, whether you encode the bit pattern 010011010110000101101110 as Man in ASCII or as TWFu in Base64, you are eventually going to store the same bit pattern.
If the ultimate encoding is in terms of zeros and ones and every machine and media can deal with them, how does it matter if the data is represented as ASCII or Base64?
What does it mean "media that are designed to deal with textual data"? They can deal with binary => they can deal with anything.
Thanks everyone, I think I understand now.
When we send data, we cannot be sure that it will be interpreted in the format we intended. So we send it coded in some format (like Base64) that both parties understand. That way, even if sender and receiver interpret the same things differently, because they agree on the coded format the data will not get interpreted wrongly.
From Mark Byers' example:
If I want to send
Hello
world!
One way is to send it in ASCII like
72 101 108 108 111 10 119 111 114 108 100 33
But byte 10 might not be interpreted correctly as a newline at the other end. So, we use a subset of ASCII to encode it like this
83 71 86 115 98 71 56 75 100 50 57 121 98 71 81 104
which at the cost of more data transferred for the same amount of information ensures that the receiver can decode the data in the intended way, even if the receiver happens to have different interpretations for the rest of the character set.
Your first mistake is thinking that ASCII encoding and Base64 encoding are interchangeable. They are not. They are used for different purposes.
When you encode text in ASCII, you start with a text string and convert it to a sequence of bytes.
When you encode data in Base64, you start with a sequence of bytes and convert it to a text string.
To understand why Base64 was necessary in the first place we need a little history of computing.
Computers communicate in binary - 0s and 1s - but people typically want to communicate with richer forms of data such as text or images. In order to transfer this data between computers it first has to be encoded into 0s and 1s, sent, then decoded again. To take text as an example - there are many different ways to perform this encoding. It would be much simpler if we could all agree on a single encoding, but sadly this is not the case.
Originally a lot of different encodings were created (e.g. Baudot code) which used a different number of bits per character until eventually ASCII became a standard with 7 bits per character. However most computers store binary data in bytes consisting of 8 bits each, so ASCII is unsuitable for transferring this type of data. Some systems would even wipe the most significant bit. Furthermore, differences in line-ending encodings across systems mean that the ASCII characters 10 and 13 were also sometimes modified.
To solve these problems Base64 encoding was introduced. This allows you to encode arbitrary bytes to bytes which are known to be safe to send without getting corrupted (ASCII alphanumeric characters and a couple of symbols). The disadvantage is that encoding the message using Base64 increases its length - every 3 bytes of data is encoded to 4 ASCII characters.
To send text reliably you can first encode to bytes using a text encoding of your choice (for example UTF-8) and then afterwards Base64 encode the resulting binary data into a text string that is safe to send encoded as ASCII. The receiver will have to reverse this process to recover the original message. This of course requires that the receiver knows which encodings were used, and this information often needs to be sent separately.
Historically it has been used to encode binary data in email messages where the email server might modify line-endings. A more modern example is the use of Base64 encoding to embed image data directly in HTML source code. Here it is necessary to encode the data to avoid characters like '<' and '>' being interpreted as tags.
Here is a working example:
I wish to send a text message with two lines:
Hello
world!
If I send it as ASCII (or UTF-8) it will look like this:
72 101 108 108 111 10 119 111 114 108 100 33
The byte 10 is corrupted in some systems, so we can Base64-encode these bytes, giving the Base64 string:
SGVsbG8Kd29ybGQh
Which when encoded using ASCII looks like this:
83 71 86 115 98 71 56 75 100 50 57 121 98 71 81 104
All the bytes here are known safe bytes, so there is very little chance that any system will corrupt this message. I can send this instead of my original message and let the receiver reverse the process to recover the original message.
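The same round trip, sketched in Ruby with the standard Base64 module (hedged: strict_encode64 is used so no line feeds are added):
require 'base64'

message = "Hello\nworld!"
message.bytes                     # => [72, 101, 108, 108, 111, 10, 119, 111, 114, 108, 100, 33]
encoded = Base64.strict_encode64(message)
encoded                           # => "SGVsbG8Kd29ybGQh"
encoded.bytes                     # => [83, 71, 86, 115, 98, 71, 56, 75, 100, 50, 57, 121, 98, 71, 81, 104]
Base64.strict_decode64(encoded)   # => "Hello\nworld!"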
Encoding binary data in XML
Suppose you want to embed a couple images within an XML document. The images are binary data, while the XML document is text. But XML cannot handle embedded binary data. So how do you do it?
One option is to encode the images in base64, turning the binary data into text that XML can handle.
Instead of:
<images>
<image name="Sally">{binary gibberish that breaks XML parsers}</image>
<image name="Bobby">{binary gibberish that breaks XML parsers}</image>
</images>
you do:
<images>
<image name="Sally" encoding="base64">j23894uaiAJSD3234kljasjkSD...</image>
<image name="Bobby" encoding="base64">Ja3k23JKasil3452AsdfjlksKsasKD...</image>
</images>
And the XML parser will be able to parse the XML document correctly and extract the image data.
Why not look to the RFC that currently defines Base64?
Base encoding of data is used in many situations to store or transfer data in environments that, perhaps for legacy reasons, are restricted to US-ASCII [1] data. Base encoding can also be used in new applications that do not have legacy restrictions, simply because it makes it possible to manipulate objects with text editors.
In the past, different applications have had different requirements and thus sometimes implemented base encodings in slightly different ways. Today, protocol specifications sometimes use base encodings in general, and "base64" in particular, without a precise description or reference. Multipurpose Internet Mail Extensions (MIME) [4] is often used as a reference for base64 without considering the consequences for line-wrapping or non-alphabet characters. The purpose of this specification is to establish common alphabet and encoding considerations. This will hopefully reduce ambiguity in other documents, leading to better interoperability.
Base64 was originally devised as a way to allow binary data to be attached to emails as a part of the Multipurpose Internet Mail Extensions.
Media that is designed for textual data is of course eventually binary as well, but textual media often use certain binary values for control characters. Also, textual media may reject certain binary values as non-text.
Base64 encoding encodes binary data as values that can only be interpreted as text in textual media, and is free of any special characters and/or control characters, so that the data will be preserved across textual media as well.
It is more that the medium validates the string encoding, so we want to ensure that the data is acceptable to a handling application (and doesn't contain a binary sequence representing EOL, for example).
Imagine you want to send binary data in an email with encoding UTF-8 -- the email may not display correctly if the stream of ones and zeros creates a sequence which isn't valid Unicode in UTF-8 encoding.
The same type of thing happens in URLs when we want to encode characters not valid for a URL in the URL itself:
http://www.foo.com/hello my friend -> http://www.foo.com/hello%20my%20friend
This is because we want to send a space over a system that will think the space is smelly.
All we are doing is ensuring there is a 1-to-1 mapping between a known good, acceptable and non-detrimental sequence of bits to another literal sequence of bits, and that the handling application doesn't distinguish the encoding.
In your example, man may be valid ASCII in its first form; but often you may want to transmit values that are random binary (i.e. sending an image in an email):
MIME-Version: 1.0
Content-Description: "Base64 encode of a.gif"
Content-Type: image/gif; name="a.gif"
Content-Transfer-Encoding: Base64
Content-Disposition: attachment; filename="a.gif"
Here we see that a GIF image is encoded in base64 as a chunk of an email. The email client reads the headers and decodes it. Because of the encoding, we can be sure the GIF doesn't contain anything that may be interpreted as protocol and we avoid inserting data that SMTP or POP may find significant.
Here is a summary of my understanding after reading what others have posted:
Important!
Base64 encoding is not meant to provide security
Base64 encoding is not meant to compress data
Why do we use Base64
Base64 is a text representation of data that uses only 64 characters: the alphanumeric characters (lowercase and uppercase), + and /, with = used for padding.
These 64 characters are considered 'safe'; that is, they cannot be misinterpreted by legacy computers and programs, unlike characters such as <, >, \n and many others.
When is Base64 useful
I've found base64 very useful when transferring files as text. You get the file's bytes and encode them to base64, transmit the base64 string, and on the receiving side you do the reverse.
This is the same procedure that is used when sending attachments over SMTP during emailing.
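A minimal sketch of that round trip in Ruby (the file names here are hypothetical; Base64 comes from the standard library):
require 'base64'

# Sender: read the raw bytes and turn them into a text-safe string
encoded = Base64.strict_encode64(File.binread("photo.jpg"))

# ... transmit `encoded` over any text channel (email body, JSON, XML, ...) ...

# Receiver: decode the text back into the original bytes
File.binwrite("photo_copy.jpg", Base64.strict_decode64(encoded))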
How to perform base64 encoding/decoding
Conversion from base64 text to bytes is called decoding.
Conversion from bytes to base64 text is called encoding. This is a bit different from how other encodings/decodings are named.
.NET and PowerShell
Microsoft's .NET framework has support for encoding and decoding bytes to base64. Look for the Convert class in the mscorlib library.
Below are PowerShell commands you can use:
# Base64 encode in PowerShell
# See: https://adsecurity.org/?p=478
$Text='This is my nice cool text'
$Bytes = [System.Text.Encoding]::Unicode.GetBytes($Text)
$EncodedText = [Convert]::ToBase64String($Bytes)
$EncodedText
# Convert from base64 to plain text
[System.Text.Encoding]::Unicode.GetString([Convert]::FromBase64String('VABoAGkAcwAgAGkAcwAgAG0AeQAgAG4AaQBjAGUAIABjAG8AbwBsACAAdABlAHgAdAA='))
Output> This is my nice cool text
Bash has a built-in command for base64 encoding/decoding. You can use it like this:
To encode to base64:
echo 'hello' | base64
To decode base64-encoded text to normal text:
echo 'aGVsbG8K' | base64 -d
Node.js also has support for base64. Here is a class that you can use:
/**
 * Attachment class.
 * Converts a base64 string to a file Buffer, and a file Buffer to a base64 string.
 * See: https://nodejs.org/api/buffer.html
 *
 * A note on naming: the Node Buffer docs call converting a Buffer to a string
 * "decoding" and converting a string to a Buffer "encoding", but for
 * binary-to-text schemes such as base64 the convention is reversed:
 * bytes -> base64 string is encoding, and base64 string -> bytes is decoding.
 */
class Attachment {
  constructor() {
  }

  /**
   * @param {string} base64Str
   * @returns {Buffer} file buffer
   */
  static base64ToBuffer(base64Str) {
    const fileBuffer = Buffer.from(base64Str, 'base64');
    // console.log(fileBuffer)
    return fileBuffer;
  }

  /**
   * @param {Buffer} fileBuffer
   * @returns {string} base64 encoded content
   */
  static bufferToBase64(fileBuffer) {
    const base64Encoded = fileBuffer.toString('base64')
    // console.log(base64Encoded)
    return base64Encoded
  }
}
You get the file buffer like so:
const fileBuffer = fs.readFileSync(path);
Or like so:
const buf = Buffer.from('hey there');
You can also use an API to do the encoding and decoding for you; here is one:
To encode, you pass in the plain text as the body.
POST https://mk34rgwhnf.execute-api.ap-south-1.amazonaws.com/base64-encode
To decode, pass in the base64 string as the body.
POST https://mk34rgwhnf.execute-api.ap-south-1.amazonaws.com/base64-decode
Fantasy example of when you might need base64
Here is a far-fetched scenario of when you might need to use base64.
Suppose you are a spy and you're on a mission to copy a picture of great value and take it back to your country's intelligence agency.
This picture is on a computer that has no access to the internet and no printer. All you have in your hands is a pen and a single sheet of paper. No flash disk, no CD, etc. What do you do?
Your first option would be to convert the picture into binary 1s and 0s, copy those 1s and 0s onto the paper one by one, and then run for it.
However, this can be a challenge because representing a picture using only 1s and 0s as your alphabet will result in very many 1s and 0s. Your paper is small and you don't have time. Plus, the more 1s and 0s, the more chances of error.
Your second option is to use hexadecimal instead of binary. Hexadecimal allows for 16 instead of 2 possible characters, so you have a wider alphabet, hence less paper and time required.
A still better option is to convert the picture into base64 and take advantage of yet another, larger character set to represent the data. Less paper and less time to complete. There you go!
Base64 instead of escaping special characters
I'll give you a very different but real example: I write JavaScript code to be run in a browser. HTML tags have ID values, but there are constraints on which characters are valid in an ID.
But I want my ID to losslessly refer to files in my file system. Files in reality can have all manner of weird and wonderful characters in their names, from exclamation marks, accented characters and tildes to emoji! I cannot do this:
<div id="/path/to/my_strangely_named_file!#().jpg">
<img src="http://myserver.com/path/to/my_strangely_named_file!#().jpg">
Here's a pic I took in Moscow.
</div>
Suppose I want to run some code like this:
// ERROR
document.getElementById("/path/to/my_strangely_named_file!#().jpg");
I think this code will fail when executed.
With Base64 I can refer to something complicated without worrying about which language allows what special characters and which need escaping:
document.getElementById("18GerPD8fY4iTbNpC9hHNXNHyrDMampPLA");
Unlike using MD5 or some other hashing function, you can reverse the encoding to find out exactly what the data was, which is actually useful.
I wish I had known about Base64 years ago. I would have avoided tearing my hair out with encodeURIComponent and str.replace('\n', '\\n').
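For comparison, the same reversible trick sketched in Ruby (hedged: urlsafe_encode64 swaps + and / for - and _, which keeps the result friendly for IDs and URLs):
require 'base64'

path = "/path/to/my_strangely_named_file!#().jpg"
id   = Base64.urlsafe_encode64(path, padding: false)
Base64.urlsafe_decode64(id) == path   # => true, the original name comes back exactly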
SSH transfer of text:
If you're trying to pass complex data over ssh (e.g. a dotfile so you can get your shell personalizations), good luck doing it without Base 64. This is how you would do it with base 64 (I know you can use SCP, but that would take multiple commands - which complicates key bindings for sshing into a server):
https://superuser.com/a/1376076/114723
One example of when I found it convenient was when trying to embed binary data in XML. Some of the binary data was being misinterpreted by the SAX parser because that data could be literally anything, including XML special characters. Base64 encoding the data on the transmitting end and decoding it on the receiving end fixed that problem.
Most computers store data in 8-bit binary format, but this is not a requirement. Some machines and transmission media can only handle 7 bits (or maybe even fewer) at a time. Such a medium would interpret the stream in multiples of 7 bits, so if you were to send 8-bit data, you won't receive what you expect on the other side. Base64 is just one way to solve this problem: you encode the input into a 6-bit format, send it over your medium and decode it back to 8-bit format at the receiving end.
In addition to the other (somewhat lengthy) answers: even ignoring old systems that support only 7-bit ASCII, basic problems with supplying binary data in text-mode are:
Newlines are typically transformed in text-mode.
One must be careful not to treat a NUL byte as the end of a text string, which is all too easy to do in any program with C lineage.
What does it mean "media that are designed to deal with textual data"?
That those protocols were designed to handle text (often, only English text) instead of binary data (like .png and .jpg images).
They can deal with binary => they can deal with anything.
But the converse is not true. A protocol designed to represent text may improperly treat binary data that happens to contain:
The bytes 0x0A and 0x0D, used for line endings, which differ by platform.
Other control characters like 0x00 (NULL = C string terminator), 0x03 (END OF TEXT), 0x04 (END OF TRANSMISSION), or 0x1A (DOS end-of-file) which may prematurely signal the end of data.
Bytes above 0x7F (if the protocol was designed for ASCII).
Byte sequences that are invalid UTF-8.
So you can't just send binary data over a text-based protocol. You're limited to the bytes that represent the non-space non-control ASCII characters, of which there are 94. The reason Base 64 was chosen was that it's faster to work with powers of two, and 64 is the largest one that works.
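A quick sanity check of those numbers in Ruby (printable, non-space ASCII runs from 0x21 "!" to 0x7E "~"):
(0x21..0x7E).count   # => 94 usable characters
2 ** 6               # => 64, the largest power of two that fits in that range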
One question though. How is it that systems still don't agree on a common encoding technique like the so-common UTF-8?
On the Web, at least, they mostly have. A majority of sites use UTF-8.
The problem in the West is that there is a lot of old software that ass-u-me-s that 1 byte = 1 character and can't work with UTF-8.
The problem in the East is their attachment to encodings like GB2312 and Shift_JIS.
And the fact that Microsoft seems to have still not gotten over having picked the wrong UTF encoding. If you want to use the Windows API or the Microsoft C runtime library, you're limited to UTF-16 or the locale's "ANSI" encoding. This makes it painful to use UTF-8 because you have to convert all the time.
Why / how do we use Base64 encoding?
Base64 is one of the binary-to-text encoding schemes, with 75% efficiency. It is used so that typical binary data (such as images) may be safely sent over legacy "not 8-bit clean" channels.
In earlier email networks (until the early 1990s), most email messages were plain text in the 7-bit US-ASCII character set. So many early communication protocol standards were designed to work over "7-bit" links that were "not 8-bit clean".
Scheme efficiency is the ratio between the number of bits in the input and the number of bits in the encoded output.
Hexadecimal (Base16) is another binary-to-text encoding scheme, with 50% efficiency.
Base64 Encoding Steps (Simplified):
Binary data is arranged in continuous chunks of 24 bits (3 bytes) each.
Each 24-bit chunk is grouped into four parts of 6 bits each.
Each 6-bit group is converted into its corresponding Base64 character value; i.e. Base64 encoding converts three octets into four encoded characters. The ratio of output bytes to input bytes is 4:3 (33% overhead).
Interestingly, the same characters will be encoded differently depending on their position within the three-octet group which is encoded to produce the four characters.
The receiver will have to reverse this process to recover the original message.
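A worked example of those steps in Ruby (a hedged sketch; ALPHABET is the standard table from RFC 4648):
require 'base64'

ALPHABET = [*'A'..'Z', *'a'..'z', *'0'..'9', '+', '/']

bits   = "Man".unpack1("B*")                  # => "010011010110000101101110"  (three octets, 24 bits)
groups = bits.scan(/.{6}/)                    # => ["010011", "010110", "000101", "101110"]
groups.map { |g| ALPHABET[g.to_i(2)] }.join   # => "TWFu"
Base64.strict_encode64("Man")                 # => "TWFu"  (the library agrees)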
What does it mean "media that are designed to deal with textual data"?
Back in the day when ASCII ruled the world, dealing with non-ASCII values was a headache. People jumped through all sorts of hoops to get these transferred over the wire without losing information.

RSS reader error: "Input is not proper UTF-8" when using simplexml_load_file()

I'm using the simplexml_load_file method to parse a feed from an external source.
My code looks like this:
$rssFeed['DAILYSTAR'] = 'http://www.thedailystar.net/latest/rss/rss.xml';
$rssParser = simplexml_load_file($rssFeed['DAILYSTAR']);
The output is as follows:
Warning: simplexml_load_file() [function.simplexml-load-file]: http://www.thedailystar.net/latest/rss/rss.xml:12: parser error : Input is not proper UTF-8, indicate encoding ! Bytes: 0x92 0x73 0x20 0x48 in C:\xampp\htdocs\googlebd\index.php on line 39
It ultimately stops with a fatal error. The main problem is that the site's character encoding is ISO-8859-1, not UTF-8.
Can I read this using this method (the SimpleXML API)?
If not, is any other method available?
I've searched through Google but found no answer. Every method I've tried returns this error.
Thanks,
Rashed
Well, well, when I retrieve this content using Python, I get the following:
'\n<rss version="2.0" encoding="ISO-8859-1">\n [...]
<description>The results of this year\x92s Higher Secondary Certificate
Now it says it's ISO-8859-1, but \x92 is not in that character set, but instead is the closing curly single quote, used as an apostrophe, in Windows-1252. So the page throws an encoding error, and as per the XML spec, clients should be "strict" and not fix errors.
You can retrieve it and filter out the non-ISO-8859-1 characters in some fashion, or better, convert the encoding using mb_convert_encoding() before passing the result to your RSS parser.
Oh, and if you want to incorporate the result into a UTF-8 page, you may have to convert everything to UTF-8, though this is English, which might not even require any conversion if it all turns out to be ASCII after all.
We ran into the same issue and used utf8_encode to change the encoding from ISO-8859-1/Latin-1 to UTF-8 and get past the error. (Note that utf8_encode() assumes ISO-8859-1 input, so Windows-1252-only bytes such as 0x92 become C1 control characters rather than curly quotes, but the result is at least valid UTF-8.)
$contents = file_get_contents($url);
simplexml_load_string(utf8_encode($contents));
