Compression Formats and Delimiter Sequences - hadoop

My question is: Are there any standard compression formats which can ensure that a certain delimiter sequence does not occur in the compressed data stream?
We want to design a binary file format, containing chunks of sequential data (3D coordinates + other data, not really important for the question). Each chunk should be compressed using a standard compression format, like GZIP, ZIP, ...
So, the file structure will be like:
FileHeader
ChunkDelimiter Chunk1_Header compress(Chunk1_Data)
ChunkDelimiter Chunk2_Header compress(Chunk2_Data)
...
Use case is the following: The files should be read in splits in Hadoop, so we want to be able to start at an arbitrary byte position in the file, and find the start of the next chunk by looking for the delimiter sequence. -> The delimiter sequence should not occur within the chunks.
I know that we could post-process the compressed data, "escaping" the delimiter sequence in case that it occurs in the compressed output. But we'd better avoid this, since the "reverse escaping" would be required in the decoder, adding complexity.
Some more facts why we chose this file format:
Should be easily readable by third parties -> standard compression algorithm preferred.
Large files; streaming operation: amount of data and number of chunks is not known when starting to write the file -> Difficult to write start-of-chunk byte positions in the header.

I won't answer your question with a compression scheme name but will give you a hint of how other solved the same issue.
Let's give a look at Avro. Basically, they have similar requirements: files must be splitable and each data block can be compressed (you can even choose your compression scheme).
From the Avro Specification we learn that splittability is achieved with the help of a synchronization marker ("Objects are stored in blocks that may be compressed. Synchronization markers are used between blocks to permit efficient splitting of files for MapReduce processing."). We also discover that the synchronization marker is a 16-byte randomly-generated value ("The 16-byte, randomly-generated sync marker for this file.").
How does it solve your issue? Well, since Martin Kleppmann provided a great answer to this question a few years ago, I will just copy-paste his message.
On 23 January 2013 21:09, Josh Spiegel wrote:
As I understand it, Avro container files contain synchronization markers
every so often to support splitting the file. See:
https://cwiki.apache.org/AVRO/faq.html#FAQ-Whatisthepurposeofthesyncmarkerintheobjectfileformat%3F
(1) Why isn't the synchronization marker the same for every container file?
(i.e. what is the point of generating it randomly every time)
(2) Is it possible, at least in theory, for naturally occurring data to
contain bytes that match the sync marker? If so, would this break
synchronization?
Thanks,
Josh
Because if it was predictable, it would inevitably appear in the actual data sometimes (e.g. imagine the Avro documentation, stating
what the sync marker is, is downloaded by a web crawler and stored in
an Avro data file; then the sync marker will appear in the actual
data). Data may come from malicious sources; making the marker random
makes it unfeasible to exploit.
Possibly, but extremely unlikely. The probability of a given random 16-byte string appearing in a petabyte of (uniformly distributed) data
is about 10^-23. It's more likely that your data center is wiped out
by a meteorite
(http://preshing.com/20110504/hash-collision-probabilities).
If the sync marker appears in your data, it only breaks reading the file if you happen to also seek to that place in the file. If you just
read over it sequentially, nothing happens.
Martin
Link to the Avro mailing list archive
If it works for Avro, it will work for you too.
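To make the scheme concrete, here is a minimal Python sketch of the same idea with a made-up chunk layout (MAGIC, SYNC_LEN, write_chunked and read_from_offset are all invented for this illustration, not part of any existing format): a random 16-byte sync marker goes in the file header and again before each zlib-compressed chunk, and a reader that lands at an arbitrary byte offset simply scans forward for the marker.

    import os
    import zlib

    MAGIC = b"TOY1"          # made-up magic string for this illustration
    SYNC_LEN = 16

    def write_chunked(path, chunks):
        # Toy chunked writer: a random per-file sync marker (as Avro does) goes in
        # the header, and again before each zlib-compressed chunk with its length.
        sync = os.urandom(SYNC_LEN)
        with open(path, "wb") as f:
            f.write(MAGIC + sync)
            for data in chunks:
                comp = zlib.compress(data)
                f.write(sync + len(comp).to_bytes(4, "big") + comp)

    def read_from_offset(path, offset):
        # Start at an arbitrary byte offset (as a Hadoop split would) and recover
        # every chunk that starts at or after it by scanning for the sync marker.
        header_len = len(MAGIC) + SYNC_LEN
        with open(path, "rb") as f:
            sync = f.read(header_len)[len(MAGIC):]
            f.seek(max(offset, header_len))        # never scan inside the header
            buf = f.read()
        chunks = []
        pos = buf.find(sync)
        while pos != -1:
            start = pos + SYNC_LEN
            size = int.from_bytes(buf[start:start + 4], "big")
            chunks.append(zlib.decompress(buf[start + 4:start + 4 + size]))
            pos = buf.find(sync, start + 4 + size)
        return chunks

    write_chunked("toy.bin", [b"alpha" * 200, b"beta" * 200, b"gamma" * 200])
    print(len(read_from_offset("toy.bin", 30)))   # almost surely 2: the chunk we landed inside is skipped

The random marker is what makes this safe: the compressed bytes are free to contain anything, and a collision with a fresh 16-byte random value is astronomically unlikely.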

No. I know of no standard compression format that does not allow any sequence of bits to occur somewhere within. To do otherwise would (slightly) degrade compression, going against the original purpose of a compression format.
The solutions are a) post-process the sequence to use a specified break pattern, inserting escapes wherever the break pattern accidentally appears in the compressed data -- this is guaranteed to work, but you don't like this solution, or b) trust that the universe is not conspiring against you, and use a break pattern long enough that it is incredibly unlikely to appear accidentally in all the sequences this is applied to between now and the heat death of the universe.
For b) you can protect somewhat against the universe conspiring against you by selecting a random pattern for each file, and providing the random pattern at the start of the file. For the truly paranoid, you could go even further and generate a new random pattern for each successive break, from the previous pattern.
Note that the universe can conspire against you for a fixed pattern. If you make one of these compressed files with a fixed break pattern, and then you include that file in another compressed archive also using that break pattern, that archive will likely not be able to compress this already compressed file and will simply store it, leaving exposed the same fixed break pattern as is being used by the archive.
Another protection for b) would be to detect the decompression failure of an incorrect break by seeing that the piece before the break does not terminate, and handle that special case by putting that piece and the following piece back together and trying the decompression again. You would also very likely detect this on the following piece as well, with that decompression failing.
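For completeness, option a) is just byte-stuffing. A minimal sketch, with a two-byte break pattern and an escape byte chosen purely for illustration (any values would do, as long as the escape byte itself is also escaped):

    DELIM = b"\xff\x00"   # example break pattern, chosen purely for illustration
    ESC = b"\xfe"         # escape byte; it must be escaped as well

    def escape(data: bytes) -> bytes:
        # Hide the escape byte first, then any accidental occurrence of the delimiter.
        out = data.replace(ESC, ESC + b"\x01")
        return out.replace(DELIM, ESC + b"\x02")

    def unescape(data: bytes) -> bytes:
        # Reverse the substitutions in the opposite order.
        out = data.replace(ESC + b"\x02", DELIM)
        return out.replace(ESC + b"\x01", ESC)

    compressed = b"\x78\x9c\xff\x00\xfe\x12"   # pretend compressed bytes containing the delimiter
    assert DELIM not in escape(compressed)
    assert unescape(escape(compressed)) == compressed

The cost is exactly the added decoder complexity the question wants to avoid, plus a tiny size overhead whenever the delimiter or escape byte happens to occur in the compressed output.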

Related

Spark: Minimize task/partition size skew with textFile's minPartitions option?

I'm reading in tens of thousands of files into an RDD via something like sc.textFile("/data/*/*/*"). One problem is that most of these files are tiny, whereas others are huge. That leads to imbalanced tasks, which causes all sorts of well-known problems.
Can I break up the largest partitions by instead reading in my data via sc.textFile("/data/*/*/*", minPartitions=n_files*5), where n_files is the number of input files?
As covered elsewhere on Stack Overflow, minPartitions gets passed way down the Hadoop rabbit hole and is used in org.apache.hadoop.mapred.TextInputFormat.getSplits. My question is whether this is implemented such that the largest files are split first. In other words, is the splitting strategy one that tries to lead to evenly sized partitions?
I would prefer an answer that points to wherever the splitting strategy is actually implemented in a recent version of spark/hadoop.
Nobody's posted an answer so I dug into this myself and will post an answer to my own question:
It appears that, if your input file(s) are splittable, then textFile will indeed try to balance partition size if you use the minPartitions option.
The partitioning strategy is implemented in the getSplits method of org.apache.hadoop.mapred.TextInputFormat. This partitioning strategy is complex, and operates by first setting goalSize, which is simply the total size of the input divided by numSplits (minPartitions is passed down to set the value of numSplits). It then splits up the files in such a way that tries to ensure that each partition's size (in terms of its input's byte size) is as close as possible to the goalSize.
If your input file(s) are not splittable, then this splitting will not take place: see the source code here.
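For reference, the split planning boils down to roughly the following. This is a simplified Python transcription of the logic, not the actual Hadoop code; the default block_size and the example numbers are illustrative only.

    SPLIT_SLOP = 1.1   # a file's last split may be up to 10% larger than split_size

    def compute_split_size(goal_size, min_size, block_size):
        # Mirrors the clamping done in FileInputFormat: the goal size is bounded
        # below by the configured minimum and above by the block size.
        return max(min_size, min(goal_size, block_size))

    def plan_splits(file_sizes, num_splits, min_size=1, block_size=128 * 2**20):
        # Simplified sketch of org.apache.hadoop.mapred.FileInputFormat.getSplits();
        # block_size here is a stand-in for each file's actual HDFS block size.
        total = sum(file_sizes)
        goal = max(total // max(num_splits, 1), 1)
        splits = []
        for size in file_sizes:               # every file is carved up independently
            split_size = compute_split_size(goal, min_size, block_size)
            offset, remaining = 0, size
            while remaining / split_size > SPLIT_SLOP:
                splits.append((offset, split_size))
                offset += split_size
                remaining -= split_size
            if remaining > 0:
                splits.append((offset, remaining))
        return splits

    # One huge file among many tiny ones: only the huge file gets cut, into
    # pieces of roughly goalSize each, which is what balances the partitions.
    print(plan_splits([10 * 2**20] + [4096] * 100, num_splits=50))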

Does the order of data in a text file affects its compression ratio?

I have 2 large text files (csv, to be precise). Both have the exact same content except that the rows in one file are in one order and the rows in the other file are in a different order.
When I compress these 2 files (programmatically, using DotNetZip) I notice that one of the files is always considerably bigger; for example, one file is ~7 MB bigger than the other.
My questions are:
How does the order of data in a text file affect compression and what measures can one take in order to guarantee the best compression ratio? - I presume that having similar rows grouped together (at least in the case of ZIP files, which is what I am using) would help compression but I am not familiar with the internals of the different compression algorithms and I'd appreciate a quick explanation on this subject.
Which algorithm handles this sort of scenario better in the sense that would achieve the best average compression regardless of the order of the data?
"How" has already been answered. To answer your "which" question:
The larger the window for matching, the less sensitive the algorithm will be to the order. However all compression algorithms will be sensitive to some degree.
gzip has a 32K window, bzip2 a 900K window, and xz an 8MB window. xz can go up to a 64MB window. So xz would be the least sensitive to the order. Matches that are further away will take more bits to code, so you will always get better compression with, for example, sorted records, regardless of the window size. Short windows simply preclude distant matches.
In some sense, it is the entropy of the file that defines how well it will compress. So, yes, the order definitely matters. As a simple example, consider a file filled with the values abcdefgh...zabcd...z repeating over and over. It would compress very well with most algorithms because it is very ordered. However, if you completely randomize the order (but leave the same count of each letter), then it has the exact same data (although a different "meaning"). It is the same data in a different order, and it will not compress as well.
In fact, because I was curious, I just tried that. I filled an array with 100,000 characters a-z repeating, wrote that to a file, then shuffled that array "randomly" and wrote it again. The first file compressed down to 394 bytes (less than 1% of the original size). The second file compressed to 63,582 bytes (over 63% of the original size).
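That experiment is easy to reproduce. A quick sketch using zlib (the exact byte counts will differ from the numbers above, which came from a different compressor, but the gap is just as dramatic):

    import random
    import string
    import zlib

    n = 100_000
    ordered = (string.ascii_lowercase * (n // 26 + 1))[:n].encode()

    shuffled = bytearray(ordered)
    random.shuffle(shuffled)          # same bytes, same counts, different order
    shuffled = bytes(shuffled)

    print(len(zlib.compress(ordered, 9)))    # a few hundred bytes
    print(len(zlib.compress(shuffled, 9)))   # tens of thousands of bytes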
A typical compression algorithm works as follows. Look at a chunk of data. If it's identical to some other recently seen chunk, don't output the current chunk literally, output a reference to that earlier chunk instead.
It surely helps when similar chunks are close together. The algorithm will only keep a limited amount of look-back data to keep compression speed reasonable. So even if a chunk of data is identical to some other chunk, if that old chunk is too old, it could already be flushed away.
Sure it does. If the input pattern is fixed, there is a 100% chance of predicting the character at each position. Given that two parties know this about their data stream (which essentially amounts to saying that they know the fixed pattern), virtually nothing needs to be communicated: total compression is possible (to communicate finite-length strings, rather than unlimited streams, you'd still need to encode the length, but that's sort of beside the point). If the other party doesn't know the pattern, all you'd need to do is encode it. Total compression is possible because you can encode an unlimited stream with a finite amount of data.
At the other extreme, if you have totally random data - so the stream can be anything, and the next character can always be any valid character - no compression is possible. The stream must be transmitted completely intact for the other party to be able to reconstruct the correct stream.
Finite strings are a little trickier. Since finite strings necessarily contain a fixed number of instances of each character, the probabilities must change once you begin reading off initial tokens. One can read some sort of order into any finite string.
Not sure if this answers your question, but it addresses things a bit more theoretically.

How to Find Exact Row in Log File

Suppose you have a big log file, billions of lines long. The file has some columns, like IP addresses: xxx.xxx.xxx.xxx.
How can I quickly find exactly one line, for example the line containing 123.123.123.123?
A naive line-by-line search seems too slow.
If you don't have any other information to go on (such as a date range, assuming the file is sorted), then a line-by-line search is your best option. Now, that doesn't mean you need to read it in line by line. Also, it might be more efficient for you to search backwards if you know the entry is recent.
The general approach (for searching backwards) is this:
Declare a buffer. You will read chunks of the file at a time into this buffer as fast as possible (preferably by using low-level operating system calls that can read directly without any buffering/caching).
So you seek to the end of your file minus the size of your buffer and read that many bytes.
Now you search forwards through your buffer for the first newline character. Remember that offset for later, as it represents a partial line. Starting at the next line, you search forward to the end of the buffer looking for your string. If it has to be in a certain column but other columns could contain that value, then you need to do some parsing.
Now you continue to search backwards through your file. You seek to the last position you read from minus the chunk size plus the offset that you found when you searched for a newline character. Now, you read again. If you like you can move that partial line to the end of the buffer and read fewer bytes but it's not going to make a huge difference if your chunks are large enough.
And you continue until you reach the beginning of the file. There is of course a special case when the number of bytes to read is less than the chunk size (namely, you don't ignore the first line). I assume that you won't reach the beginning of the file because it seems clear that you don't want to search the entire thing.
So that's the approach when you have no idea where the value is. If you do have some idea on ordering, then of course you probably want to do a binary search. In that case you can use smaller chunk sizes (enough to at least catch a full line).
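A sketch of the backwards chunked search described above, using ordinary buffered file I/O rather than low-level OS calls, and leaving out the column parsing mentioned earlier (the file name and search string in the usage line are hypothetical):

    def search_backwards(path, needle, chunk_size=1 << 20):
        # Scan a file from the end in fixed-size chunks and return the most
        # recent (i.e. last) line containing `needle`, or None if there is none.
        needle = needle.encode()
        with open(path, "rb") as f:
            f.seek(0, 2)                       # jump to the end of the file
            pos = f.tell()
            tail = b""                         # partial line carried over from the chunk after
            while pos > 0:
                read_size = min(chunk_size, pos)
                pos -= read_size
                f.seek(pos)
                buf = f.read(read_size) + tail
                lines = buf.split(b"\n")
                tail = lines[0]                # may start in an earlier chunk; keep it for later
                for line in reversed(lines[1:]):   # later lines first = most recent first
                    if needle in line:
                        return line.decode(errors="replace")
        if needle in tail:                     # the very first line of the file
            return tail.decode(errors="replace")
        return None

    print(search_backwards("access.log", "123.123.123.123"))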
You really need to search for some regularity in the file and exploit that. Barring that, if you have more processors you could split the file into sections and search in parallel, assuming I/O would not then be a bottleneck.

Is Terra Compression possible? If so, please explain and provide samples

Can a long ASCII text string be crushed and compressed into a short, hash-like ASCII "checksum" using some sophisticated mathematical formula or algorithm, just like air can be compressed?
The idea is to compress megabytes of ASCII text into 128 or so bytes by shuffling and then mixing new "patterns" of single bytes, turn by turn, from the first to the last. When decompressing, the last character is extracted first, and then decompression continues using the formula and the sequential keys from the last to the first. The sequential keys, the first and last bytes, the fully updated final compiled string, and the total number of bytes that were compressed must all be exactly known.
This is the terra compression I was thinking about. Is this possible? Can you explain it with examples? I am working on this theory and it is my own idea.
In general? Absolutely not.
For some specific cases? Yup. A megabyte of ASCII text consisting only of spaces is likely to compress extremely well. Real text will generally compress pretty well... but not in the order of several megabytes into 128 bytes.
Think about just how many strings - even just strings of valid English words - can fit into several megabytes. Far more than 256^128. They can't all compress down to 128 bytes, by the pigeon-hole principle...
If you have n possible input strings and m possible compressed strings, and m is less than n, then two strings must map to the same compressed string. This is called the pigeonhole principle and is the fundamental reason why there is a limit on how much you can compress data.
What you are describing is more like a hash function. Many hash functions are designed so that given a hash of a string it is extremely unlikely that you can find another string that gives the same hash. But there is no way that given a hash you can discover the original string. Even if you are able to reverse the hashing operation to produce a valid input that gives that hash, there are infinitely many other inputs that would give the same hash. You wouldn't know which of them is the "correct" one.
Information theory is the scientific field which addresses questions of this kind. It also provides you the possibility to calculate the minimum amount of bits needed to store a compressed message (with lossless compression). This lower bound is known as the Entropy of the message.
Calculation of the Entropy of a piece of text is possible using a Markov model. Such a model uses information about how likely a certain sequence of characters of the alphabet is.
The air analogy is very wrong.
When you compress air you make the molecules come closer to each other; each molecule is given less space.
When you compress data you cannot make the bits smaller (unless you put your hard drive in a hydraulic press). The closest you can get to actually making bits smaller is increasing the bandwidth of a network, but that is not compression.
Compression is about finding a reversible formula for calculating data. The "rules" of data compression are roughly:
The algorithm (including any standard start dictionaries) is shared beforehand and not included in the compressed data.
All startup parameters must be included in the compressed data, including:
Choice of algorithmic variant
Choice of dictionaries
All compressed data
The algorithm must be able to compress/decompress all possible messages in your domain (like plain text, digits or binary data).
To get a feeling of how compression works you may study some examples, like Run length encoding and Lempel Ziv Welch.
You may be thinking of fractal compression which effectively works by storing a formula and start values. The formula is iterated a certain number of times and the result is an approximation of the original input.
This allows for high compression but is lossy (output is close to input but not exactly the same) and compression can be very slow. Even so, ratios of 170:1 are about the highest achieved at the moment.
This is a bit off topic, but I'm reminded of the Broloid compression joke thread that appeared on USENET ... back in the days when USENET was still interesting.
Seriously, anyone who claims to have a magical compression algorithm that reduces any text megabyte file to a few hundred bytes is either:
a scammer or click-baiter,
someone who doesn't understand basic information theory, or
both.
You can compress text to a certain degree because it doesn't use all the available bits (i.e. a-z and A-Z make up 52 out of 256 values). Repeating patterns allow some intelligent storage (zip).
There is no way to store arbitrary large chunks of text in any fixed length number of bytes.
You can compress air, but you won't remove its molecules! Its mass stays the same.

Using base64 encoding as a mechanism to detect changes

Is it possible to detect changes in the base64 encoding of an object in order to detect the degree of change in the object?
Suppose I send a document attachment to several users and each makes changes to it and emails it back to me. Can I use the string distance between the original base64 and the received base64s to detect which version has the most changes? Would that be a valid metric?
If not, would there be any other metrics to quantify the deltas?
That would depend entirely on the type of document you had encoded. If it was a text file, then sure, the base64-encoded differences are probably on a par with the actual changes. However, you may have a file format where changes to the contents effectively produce a completely different binary file. An example of this would be a ZIP file.
You should do the same thing that diff does; then, for example, base the metric on the diff file size.
In theory, yes, if do a smart diff (detecting inserts, deletions, and modifications).
In practice, no, unless the documents are absolutely plain text. Binary formats can't be meaningfully diff'd.
Base64 packs groups of 3x8 bit values into 4x6. If you change one 8 bit value by one bit, then you'll impact only one of the 6 bit values. If you change by two bits, then you have about a 5/12 chance of hitting one of the other 6 bit values. So if you're counting bits, it is entirely equivalent; otherwise, you will introduce noise depending on the metric you use.
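That is easy to check experimentally: flipping a single bit of the input changes exactly one 6-bit group, and therefore exactly one base64 character. A quick sketch (the sample string is arbitrary):

    import base64

    original = b"hello world, hello world"
    encoded = base64.b64encode(original)

    # Flip a single bit in the first input byte and re-encode.
    flipped = bytes([original[0] ^ 0x01]) + original[1:]
    reencoded = base64.b64encode(flipped)

    diff = sum(a != b for a, b in zip(encoded, reencoded))
    print(encoded)
    print(reencoded)
    print("base64 characters changed:", diff)   # 1, since only one 6-bit group changed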
