base64 encode length parameter - ruby

I am decoding a base64 string, modifying it, and re-encoding it with Ruby. The problem is that when I re-encode it, Ruby's Base64 library adds a line break after every 60 characters. How can I tell it not to impose a maximum line length?
val = "QmFzZTY0IGlzIGEgZ2VuZXJpYyB0ZXJtIGZvciBhIG51bWJlciBvZiBzaW1pbGFyIGVuY29kaW5nIHNjaGVtZXMgdGhhdCBlbmNvZGUgYmluYXJ5IGRhdGEgYnkgdHJlYXRpbmcgaXQgbnVtZXJpY2FsbHkgYW5kIHRyYW5zbGF0aW5nIGl0IGludG8gYSBiYXNlIDY0IHJlcHJlc2VudGF0aW9uLiBUaGUgQmFzZTY0IHRlcm0gb3JpZ2luYXRlcyBmcm9tIGEgc3BlY2lmaWMgTUlNRSBjb250ZW50IHRyYW5zZmVyIGVuY29kaW5nLg0KDQpCYXNlNjQgZW5jb2Rpbmcgc2NoZW1lcyBhcmUgY29tbW9ubHkgdXNlZCB3aGVuIHRoZXJlIGlzIGEgbmVlZCB0byBlbmNvZGUgYmluYXJ5IGRhdGEgdGhhdCBuZWVkcyBiZSBzdG9yZWQgYW5kIHRyYW5zZmVycmVkIG92ZXIgbWVkaWEgdGhhdCBhcmUgZGVzaWduZWQgdG8gZGVhbCB3aXRoIHRleHR1YWwgZGF0YS4gVGhpcyBpcyB0byBlbnN1cmUgdGhhdCB0aGUgZGF0YSByZW1haW5zIGludGFjdCB3aXRob3V0IG1vZGlmaWNhdGlvbiBkdXJpbmcgdHJhbnNwb3J0LiBCYXNlNjQgaXMgdXNlZCBjb21tb25seSBpbiBhIG51bWJlciBvZiBhcHBsaWNhdGlvbnMgaW5jbHVkaW5nIGVtYWlsIHZpYSBNSU1FLCBhbmQgc3RvcmluZyBjb21wbGV4IGRhdGEgaW4gWE1MLg=="
require 'base64'
decoded_val = Base64.decode64(val)
encoded_val = Base64.encode64(decoded_val)
#=> QmFzZTY0IGlzIGEgZ2VuZXJpYyB0ZXJtIGZvciBhIG51bWJlciBvZiBzaW1p
# bGFyIGVuY29kaW5nIHNjaGVtZXMgdGhhdCBlbmNvZGUgYmluYXJ5IGRhdGEg
# YnkgdHJlYXRpbmcgaXQgbnVtZXJpY2FsbHkgYW5kIHRyYW5zbGF0aW5nIGl0
# IGludG8gYSBiYXNlIDY0IHJlcHJlc2VudGF0aW9uLiBUaGUgQmFzZTY0IHRl
# cm0gb3JpZ2luYXRlcyBmcm9tIGEgc3BlY2lmaWMgTUlNRSBjb250ZW50IHRy
# YW5zZmVyIGVuY29kaW5nLg0KDQpCYXNlNjQgZW5jb2Rpbmcgc2NoZW1lcyBh
# cmUgY29tbW9ubHkgdXNlZCB3aGVuIHRoZXJlIGlzIGEgbmVlZCB0byBlbmNv
# ZGUgYmluYXJ5IGRhdGEgdGhhdCBuZWVkcyBiZSBzdG9yZWQgYW5kIHRyYW5z
# ZmVycmVkIG92ZXIgbWVkaWEgdGhhdCBhcmUgZGVzaWduZWQgdG8gZGVhbCB3
# aXRoIHRleHR1YWwgZGF0YS4gVGhpcyBpcyB0byBlbnN1cmUgdGhhdCB0aGUg
# ZGF0YSByZW1haW5zIGludGFjdCB3aXRob3V0IG1vZGlmaWNhdGlvbiBkdXJp
# bmcgdHJhbnNwb3J0LiBCYXNlNjQgaXMgdXNlZCBjb21tb25seSBpbiBhIG51
# bWJlciBvZiBhcHBsaWNhdGlvbnMgaW5jbHVkaW5nIGVtYWlsIHZpYSBNSU1F
# LCBhbmQgc3RvcmluZyBjb21wbGV4IGRhdGEgaW4gWE1MLg==

RFC 4648: The Base16, Base32, and Base64 Data Encodings has this to say:
3.3. Interpretation of Non-Alphabet Characters in Encoded Data
[...]
Implementations MUST reject the encoded data if it contains
characters outside the base alphabet when interpreting base-encoded
data, unless the specification referring to this document explicitly
states otherwise. Such specifications may instead state, as MIME
does, that characters outside the base encoding alphabet should
simply be ignored when interpreting data ("be liberal in what you
accept"). Note that this means that any adjacent carriage return/
line feed (CRLF) characters constitute "non-alphabet characters" and
are ignored.
So the newlines are fine and pretty much everything will ignore them even if they're not strictly compliant with RFC 4648.
Also, the fine manual has this to say:
encode64(bin)
Returns the Base64-encoded version of bin. This method complies with RFC 2045. Line feeds are added to every 60 encoded charactors [sic].
So the 60 character line length is intentional and specified. If you want strict RFC 4648 Base64 (i.e. no newlines), then there is strict_encode64:
strict_encode64(bin)
Returns the Base64-encoded version of bin. This method complies with RFC 4648. No line feeds are added.
So you can say Base64.strict_encode64(val) to get the output you're looking for.
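A quick irb check, reusing the val from the question:

require 'base64'

Base64.encode64(Base64.decode64(val)).include?("\n")         # => true (a line feed every 60 characters, plus a trailing one)
Base64.strict_encode64(Base64.decode64(val)).include?("\n")  # => false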
And for reference, here's the relevant section of RFC 2045:
6.8. Base64 Content-Transfer-Encoding
[...]
The encoded output stream must be represented in lines of no more
than 76 characters each. All line breaks or other characters not
found in Table 1 must be ignored by decoding software.
So the 60 character line length is somewhat arbitrary but compliant with RFC 2045 since 60 < 76.
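As an aside, if you ever want RFC 2045's full 76-character lines rather than Ruby's 60, Array#pack lets you choose the width (encode64 itself is just [bin].pack('m')). The count is the number of input bytes per line, rounded down to a multiple of 3, and 57 input bytes encode to 76 characters:

[decoded_val].pack('m')    # same as Base64.encode64: 60-character lines
[decoded_val].pack('m57')  # 76-character lines, the RFC 2045 maximum
[decoded_val].pack('m0')   # same as Base64.strict_encode64: no line feeds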

Related

Difference between Q encoding and Quoted-Printable?

My apologies if I am asking a beginner's question.
What is the difference between RFC 2047 4.2 (the "Q" encoding) and RFC 2045 6.7 (Quoted-Printable Content-Transfer-Encoding)?
From RFC2047:
The "Q" encoding is similar to the "Quoted-Printable" content-transfer-encoding defined in RFC 2045.
I am trying to implement decode logic using Ruby. I have read the answer below and am trying to understand why first.gsub('_',' ') is required for Q encoding.
Is there a way to decode q-encoded strings in Ruby?
After reading RFC 2047 again, I realized that the approach below does not decode underscores properly when an underscore is encoded as =5F.
decoded = m[3].unpack('M').first.gsub('_',' ')
Instead, as described in the last sentence of RFC2047 4.2 (2):
Note that the "_" always represents hexadecimal 20, even if the SPACE character occupies a different code position in the character set in use.
So I substitute the literal underscores with =20 first, then unpack. I have coded it as below:
decoded = m[3].gsub('_', '=20').unpack('M').first
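To see why the order matters, here's a quick sketch with an input containing both forms (decode_q is a hypothetical helper name, not part of the original code):

def decode_q(text)
  # RFC 2047 4.2(2): "_" always represents 0x20, so rewrite it to =20
  # before the quoted-printable unpack; an encoded =5F then still
  # decodes to a literal underscore.
  text.gsub('_', '=20').unpack('M').first
end

decode_q('a=5Fb_c')                         # => "a_b c"  (correct)
'a=5Fb_c'.unpack('M').first.gsub('_', ' ')  # => "a b c"  (the =5F wrongly became a space)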

Octal, Hex, Unicode

I have a character appearing over the wire with hex value \xB1 (octal \261).
This is what my header looks like:
From: "\261Central Station <sip#...>"
Looking at the ASCII table, the character is "±".
What I don't understand:
1. If I try to test the same by passing "±Central Station" in the header, I see it converted to "\xC2\xB1". Why?
2. How can I have "\xB1" or "\261" appear over the wire instead of "\xC2\xB1"?
3. If I try to print "\xB1" or "\261", I never see "±" printed. But if I print "\u00b1" it prints the desired character; I'm assuming because "\u00b1" is the Unicode format.
From the page you linked to:
The extended ASCII codes (character code 128-255)
There are several different variations of the 8-bit ASCII table. The table below is according to ISO 8859-1, also called ISO Latin-1.
That's worth reading twice. The character codes 128–255 aren't ASCII (ASCII is a 7-bit encoding and ends at 127).
Assuming that you're correct that the character in question is ± (it's likely, but not guaranteed), your text could be encoded in ISO 8859-1 or, as @muistooshort kindly pointed out in the comments, any of a number of other ISO 8859-X or CP-12XX (Windows-12XX) encodings. We do know, however, that the text isn't (valid) UTF-8, because 0xB1 on its own isn't a valid UTF-8 byte sequence.
If you're lucky, whatever client is sending this text specified the encoding in the Content-Type header.
As to your questions:
If I try to test the same by passing ±Central Station in the header, I see it converted to \xC2\xB1. Why?
The text you're passing is in UTF-8, and the bytes that represent ± in UTF-8 are 0xC2 0xB1.
How can I have \xB1 or \261 appearing over the wire instead of \xC2\xB1?
We have no idea how you're testing this, so we can't answer this question. In general, though: Either send the text encoded as ISO 8859-1 (Encoding::ISO_8859_1 in Ruby), or whatever encoding the original text was in, or as raw bytes (Encoding::ASCII_8BIT or Encoding::BINARY, which are aliases for each other).
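For example, a minimal Ruby sketch (assuming the original text really is ISO 8859-1):

text = "±Central Station"                  # UTF-8, Ruby's default source encoding
text.bytes.first(2).map { |b| b.to_s(16) } # => ["c2", "b1"]

latin1 = text.encode(Encoding::ISO_8859_1) # transcode before sending
latin1.bytes.first.to_s(16)                # => "b1" -- the single byte you expected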
If I try to print \xB1 or \261, I never see ± printed. But if I print \u00b1 it prints the desired character. (I'm assuming that's because \u00b1 is the Unicode format, but I would love it if someone could explain this in detail.)
That's not quite a question, but the reason is that \xB1 (\261) is not a valid UTF-8 byte sequence. Some interfaces will print � for invalid bytes; others will simply elide them. \u00b1, on the other hand, is a valid Unicode code point, which Ruby knows how to represent in UTF-8.
Brief aside: UTF-8 (like UTF-16 and UTF-32) is a character encoding specified by the Unicode standard. U+00B1 is the Unicode code point for ±, and 0xC2 0xB1 are the bytes that represent that code point in UTF-8. In Ruby we can represent UTF-8 characters using either the Unicode code point (\u00b1) or the UTF-8 bytes (in hex: \xC2\xB1; or octal: \302\261, although I don't recommend the latter since fewer Rubyists are familiar with it).
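You can check all of this in irb (assuming the default UTF-8 source encoding):

"\u00b1" == "\xC2\xB1"    # => true  (code point escape vs. UTF-8 bytes in hex)
"\u00b1" == "\302\261"    # => true  (same bytes in octal)
"\xB1".valid_encoding?    # => false (a lone 0xB1 is not valid UTF-8)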
Character encoding is a big topic, well beyond the scope of a Stack Overflow answer. For a good primer, read Joel Spolsky's "The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)", and for more details on how character encoding works in Ruby read Yehuda Katz's "Encodings, Unabridged". Reading both will take you less than 30 minutes and will save you hundreds of hours of pain in the future.

IMultiLanguage2::ConvertStringFromUnicode - how to avoid compound prefix?

I am using IMultiLanguage2::ConvertStringFromUnicode to convert from UTF-16. For some languages (Japanese, Chinese, Korean), I am getting an escape sequence (e.g. 0x1B, 0x24, 0x29, 0x43 for code page 50225, ISO-2022 Korean). WideCharToMultiByte exhibits the same behavior.
I am building a MIME message, so the encoding is specified in the header itself and the escape prefix is displayed as-is.
Is there a way to convert without the prefix?
Thank you!
I don't really see a problem here. That is a valid byte sequence in ISO 2022:
Escape sequences to designate character sets take the form ESC I [I...] F, where there are one or more intermediate I bytes from the range 0x20–0x2F, and a final F byte from the range 0x40–0x7F. (The range 0x30–0x3F is reserved for private-use F bytes.) The I bytes identify the type of character set and the working set it is to be designated to, while the F byte identifies the character set itself.
...
Code: ESC $ ) F
Hex: 1B 24 29 F
Abbr: G1DM4
Name: G1-designate multibyte 94-set F
Effect: selects a 94n-character set to be used for G1.
As F is 0x43 (C), this byte sequence tells a decoder to switch to ISO-2022-KR:
Character encodings using ISO/IEC 2022 mechanism include:
...
ISO-2022-KR. An encoding for Korean.
ESC $ ) C to switch to KS X 1001-1992, previously named KS C 5601-1987 (2 bytes per character) [designated to G1]
In this case, you have to specify iso-2022-kr as the charset in a MIME Content-Type or RFC 2047-encoded header. But an ISO 2022 decoder still has to be able to switch charsets dynamically while decoding, so it is valid for the data to include an initial switch sequence to the Korean charset.
Is there a way to convert without the prefix?
Not with IMultiLanguage2 or WideCharToMultiByte(), no. They have no clue how you are going to use their output, so it makes sense that they include an initial switch sequence to the Korean charset: that way, a decoder without access to charset info from MIME (or another source) still knows which charset to use initially.
When you put the data into a MIME message, you will have to manually strip off the charset switch sequence when you set the MIME charset to iso-2022-kr. If you do not want to strip it manually, you will have to find (or write) a Unicode encoder that does not output that initial switch sequence.
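A sketch of that stripping step, in Ruby since that's the language used elsewhere on this page (strip_kr_designator is a hypothetical helper; the same byte comparison applies in Delphi or C++):

ISO_2022_KR_DESIGNATOR = "\x1B\x24\x29\x43".b  # ESC $ ) C

def strip_kr_designator(bytes)
  # Drop the initial G1-designate sequence; charset=iso-2022-kr in the
  # MIME header already tells the decoder which charset to start with.
  # Expects a binary (ASCII-8BIT) string.
  bytes.start_with?(ISO_2022_KR_DESIGNATOR) ? bytes.byteslice(4, bytes.bytesize - 4) : bytes
end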
That was a red herring: it turned out the escape sequence is necessary. The problem was in my code, which was trimming the names and addresses with Delphi's Trim() function; Trim() removes all characters less than or equal to space (0x20), and that includes the escape character (0x1B).
Switching to my own trimming function that removes only spaces fixed the problem.

Decoding base64 after stripping padding

I am stripping the padding from the end of a base64-encoded string. I have found that I can successfully decode the string without adding the padding back first (in PHP, at least). Is this a result of the base64 specification, or is it implementation dependent?
And a related question: can an = sign exist in a base64-encoded string other than as padding? That is, if it is not implementation dependent, is it safe just to strip the trailing equals signs, or is it possible that the last "real" character could have been one?
While padding isn't necessary to decode a base64 string, there are some implementations that will throw an error if the padding has been stripped away.
RFC 4648 states:
Implementations MUST include appropriate pad characters at the end of
encoded data unless the specification referring to this document
explicitly states otherwise.
The = character will only ever be used for padding.
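Ruby, for instance, shows both behaviors side by side (PHP's base64_decode is similarly lenient by default):

require 'base64'

stripped = Base64.strict_encode64('any carnal pleasure').delete('=')
Base64.decode64(stripped)            # lenient: decodes fine without padding
# Base64.strict_decode64(stripped)   # would raise ArgumentError: invalid base64
padded = stripped + '=' * ((4 - stripped.length % 4) % 4)
Base64.strict_decode64(padded)       # => "any carnal pleasure"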

Importing extended ASCII into Oracle

I have a procedure that imports a binary file containing some strings. The strings can contain extended ASCII, e.g. CHR(224), 'à'. The procedure takes a RAW and converts the BCD bytes into characters in a string one by one.
The problem is that the extended ASCII characters are getting lost. I suspect this is because their values mean something else in UTF-8.
I think what I need is a function that takes an ASCII character index and returns the appropriate UTF-8 character.
Update: If I happen to know the equivalent Oracle character set for the incoming text, can I then convert the raw bytes to UTF-8? The source text will always be single byte.
There's no such thing as "extended ASCII." Or, to be more precise, so many encodings are supersets of ASCII, sharing the same first 128 code points, that the term is too vague to be meaningful. You need to find out whether the strings in this file are encoded using UTF-8, ISO-8859-whatever, MacRoman, etc.
The answer to the second part of your question is the same. UTF-8 is, by design, a superset of ASCII. Any ASCII character (i.e. 0 through 127) is also a UTF-8 character. To translate some non-ASCII character (i.e. >= 128) into UTF-8, you first need to find out what encoding it's in.
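As an illustration of the principle in Ruby (assuming, hypothetically, the file turns out to be ISO 8859-1, which Oracle calls WE8ISO8859P1):

raw = "\xE0".b                                     # the single byte CHR(224)
latin1 = raw.force_encoding(Encoding::ISO_8859_1)  # label the bytes with their real encoding
latin1.encode(Encoding::UTF_8)                     # => "à" (UTF-8 bytes 0xC3 0xA0)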
