I'm writing a compression algorithm (mostly for fun) in C, and I need to be able to store a list of numbers in binary. Each element of this list will be in the form of two digits, both under 10 (like (5,5), (3,6), (9,2)). I'll potentially be storing thousands of these pairs (one pair is made for each character in a string in my compression algorithm).
Obviously the simplest way to do this would be to concatenate each pair (-> 55, 36, 92) to make a 2-digit number (since they're just one digit each), then store each pair as a 7-bit number (since 99 is the highest). Unfortunately, this isn't so space-efficient (7 bits per pair).
Then I thought perhaps if I concatenate each pair, then concatenate that (553692), I'd be able to then store that as a plain number in binary form (10000111001011011100, which for three pairs is already smaller than storing each number separately), and keep a quantifier for the number of bits used for the binary number. The only problem is, this approach requires a bigint library and could be potentially slow because of that. As the number gets bigger and bigger (+2 digits per character in the string) the memory usage and slowdown would get bigger and bigger as well.
So here's my question: Is there a better storage-efficient way to store a list of numbers like I'm doing, or should I just go with the bignum or 7-bit approach?
The information-theoretic minimum for storing 100 different values is log2(100), which is about 6.644 bits. In other words, the possible compression from 7 bits is a hair more than 5%. (log2(100) / 7 is 94.91%.)
If these pairs are simply for temporary storage during the algorithm, then it's almost certainly not worth going to a lot of effort to save 5% of storage, even if you managed to do that.
If the pairs form part of your compressed output, then your compression cannot be great (a character is only eight bits, and presumably the pairs are additional to any compressed character data). Nonetheless, the easy compression technique is to store up to 6 pairs in 40 bits (5 bytes), which can be done without a bigint package assuming a 64-bit machine. (Alternatively, store up to 3 pairs in 20 bits and then pack two 20-bit sequences into five bytes.) That gives you 99.66% of the maximum compression for the values.
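For illustration, here is a minimal sketch of that 6-pairs-in-40-bits idea, assuming a 64-bit machine; the pack6/unpack6 names and the base-100 framing are just my own way of expressing it:

    #include <stdint.h>
    #include <stdio.h>

    /* Pack six values in [0, 99] into a 40-bit integer by treating them as
       base-100 digits. 100^6 < 2^40, so the result always fits in 5 bytes. */
    static uint64_t pack6(const uint8_t v[6]) {
        uint64_t packed = 0;
        for (int i = 0; i < 6; i++)
            packed = packed * 100 + v[i];
        return packed;               /* caller stores the low 5 bytes */
    }

    static void unpack6(uint64_t packed, uint8_t v[6]) {
        for (int i = 5; i >= 0; i--) {
            v[i] = (uint8_t)(packed % 100);
            packed /= 100;
        }
    }

    int main(void) {
        uint8_t in[6] = {55, 36, 92, 7, 0, 99}, out[6];
        uint64_t p = pack6(in);
        unpack6(p, out);
        for (int i = 0; i < 6; i++)
            printf("%d ", out[i]);   /* prints 55 36 92 7 0 99 */
        printf("\n");
        return 0;
    }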
All of the above assumes that the 100 possible values are equally distributed. If the distribution is not even and it is possible to predict the frequencies, then you can use Huffman encoding to improve compression. Even so, I wouldn't recommend it for temporary storage.
Problem space: We have a ton of data to digest that can range 6 orders of magnitude in size. Looking for a way to be more efficient, and thus use less disk space to store all of these digests.
So I was thinking about lossy audio encoding, such as MP3. There are two basic approaches - constant bitrate and constant quality (aka variable bitrate). Since my primary interest is quality, I usually go for VBR. Thus, to achieve the same level of quality, a pure sine tone would require a significantly lower bitrate than something like a complex classical piece.
Using the same idea, two very small data chunks should require significantly fewer total digest bits than two very large data chunks to ensure roughly the same statistical improbability (what I am calling quality in this context) of their digests colliding. This is an assumption that seems intuitively correct to me, but then again, I am not a crypto mathematician. Also note that this is all about identification, not security. It's okay if a small data chunk has a small digest, and is thus computationally feasible to reproduce.
I tried searching around the inter-tubes for anything like this. The closest thing I found was a posting somewhere that talked about using a fixed-size digest hash, like SHA256, as an initialization vector for AES/CTR acting as a pseudo-random generator, then taking the first x bits off that.
That seems like a totally doable thing. The only problem with this approach is that I have no idea how to calculate the appropriate value of x as a function of the data chunk size. I think my target quality would be the statistical improbability of a SHA256 collision between two 1GB data chunks. Does anyone have thoughts on this calculation?
Are there any existing digest hashing algorithms that already do this? Or are there any other approaches that will yield this same result?
Update: It looks like there is the SHA3 Keccak "sponge", which can output an arbitrary number of bits. But I still need to know how many bits I need as a function of input size for constant quality. It sounded like this algorithm produces an infinite stream of bits, and you just truncate at however many you want. However, testing in Ruby, I would have expected the first half of a SHA3-512 to be exactly equal to a SHA3-256, but it was not...
Your logic from the comment is fairly sound. Quality hash functions will not generate a duplicate/previously generated output until the input length is nearly (or has exceeded) the hash digest length.
But, the key factor in collision risk is the size of the input set relative to the size of the hash digest. When using a quality hash function, the chance of a collision for two 1 TB files is not significantly different from the chance of collision for two 1 KB files, or even one 1 TB and one 1 KB file. This is because hash functions strive for uniformity; good functions achieve it to a high degree.
Due to the birthday problem, collisions become likely long before the number of inputs approaches 2^b for a b-bit hash, so the collision risk is driven by how many items you hash rather than by the bitwidth alone. The Wikipedia article on the pigeonhole principle, which is the basis for the birthday problem, says:
The [pigeonhole] principle can be used to prove that any lossless compression algorithm, provided it makes some inputs smaller (as the name compression suggests), will also make some other inputs larger. Otherwise, the set of all input sequences up to a given length L could be mapped to the (much) smaller set of all sequences of length less than L, and do so without collisions (because the compression is lossless), which possibility the pigeonhole principle excludes.
So going to a 'VBR' hash digest is not guaranteed to save you space. The birthday problem provides the math for calculating the chance that two random things will share the same property (a hash code is a property, in a broad sense), but this article gives a better summary, including the following table.
[Table of collision probabilities by hash size, from preshing.com]
The top row of the table says that in order to have a 50% chance of a collision with a 32-bit hash function, you only need to hash 77k items. For a 64-bit hash function, that number rises to 5.04 billion for the same 50% collision risk. For a 160-bit hash function, you need 1.42 × 10^24 inputs before there is a 50% chance that a new input will have the same hash as a previous input.
Note that 1.42 × 10^24 160-bit numbers would themselves take up an unreasonably large amount of space; millions of terabytes, if I'm doing the math right. And that's without counting the 10^24 item values they represent.
The bottom end of that table should convince you that a 160-bit hash function has a sufficiently low risk of collisions. In particular, you would have to have 10^21 hash inputs before there is even a 1 in a million chance of a hash collision. That's why your searching turned up so little: it's not worth dealing with the complexity.
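If you want to reproduce the figures in that table yourself, the usual birthday approximation is n ≈ sqrt(2 * 2^b * ln(1/(1-p))) for the number of inputs n that reaches collision probability p with a b-bit hash. A small sketch (my own illustration, not code from the linked article):

    #include <math.h>
    #include <stdio.h>

    /* Approximate number of random inputs needed before a b-bit hash
       reaches collision probability p (birthday approximation). */
    static double birthday_inputs(double bits, double p) {
        return sqrt(2.0 * pow(2.0, bits) * log(1.0 / (1.0 - p)));
    }

    int main(void) {
        printf("32-bit,  p=0.5  : %.3g inputs\n", birthday_inputs(32, 0.5));
        printf("64-bit,  p=0.5  : %.3g inputs\n", birthday_inputs(64, 0.5));
        printf("160-bit, p=0.5  : %.3g inputs\n", birthday_inputs(160, 0.5));
        printf("160-bit, p=1e-6 : %.3g inputs\n", birthday_inputs(160, 1e-6));
        return 0;
    }

This reproduces roughly 77 thousand, 5 billion, and 1.42 × 10^24 for the 50% rows, and on the order of 10^21 for a one-in-a-million chance at 160 bits.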
No matter what hash strategy you decide upon, however, there is a non-zero risk of collision. Any type of ID system that relies on a hash needs to have a fallback comparison. An easy additional check for files is to compare their sizes (this works well for any variable-length data where the length is known, such as strings). Wikipedia covers several different collision mitigation and detection strategies for hash tables, most of which can be extended to a filesystem with a little imagination. If you require perfect fidelity, then after you've run out of fast checks, you need to fall back on the most basic comparator: the expensive bit-for-bit check of the two inputs.
If I understand the question correctly, you have a number of data items of different lengths, and for each item you are computing a hash (i.e. a digest) so the items can be identified.
Suppose you have already hashed N items (without collisions), and you are using a 64-bit hash code.
The next item you hash will take one of 2^64 values, so you will have an N / 2^64 probability of a hash collision when you add the next item.
Note that this probability does NOT depend on the original size of the data item. It does depend on the total number of items you have to hash, so you should choose the number of bits according to the probability you are willing to tolerate of a hash collision.
However, if you have partitioned your data set in some way such that there are different numbers of items in each partition, then you may be able to save a small amount of space by using variable sized hashes.
For example, suppose you use 1TB disk drives to store items, and all items >1GB are on one drive, while items <1KB are on another, and a third is used for intermediate sizes. There will be at most 1000 items on the first drive so you could use a smaller hash, while there could be a billion items on the drive with small files so a larger hash would be appropriate for the same collision probability.
In this case the hash size does depend on file size, but only in an indirect way based on the size of the partitions.
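As an illustration only, you could size the hash for each partition by solving the birthday approximation for the bit width, b ≈ log2(n^2 / (2 * ln(1/(1-p)))), given the maximum item count n in that partition and the collision probability p you are willing to tolerate. The function name below is hypothetical:

    #include <math.h>
    #include <stdio.h>

    /* Smallest hash width (in bits) that keeps the expected collision
       probability for n items at or below p, via the birthday approximation. */
    static int hash_bits_needed(double n, double p) {
        double bits = log2((n * n) / (2.0 * log(1.0 / (1.0 - p))));
        return (int)ceil(bits);
    }

    int main(void) {
        /* e.g. the drive with at most 1,000 large items vs. one with a
           billion small items, both at a one-in-a-million collision risk */
        printf("1e3 items: %d bits\n", hash_bits_needed(1e3, 1e-6));
        printf("1e9 items: %d bits\n", hash_bits_needed(1e9, 1e-6));
        return 0;
    }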
Let's say I have a document that is spread across 4 different machines, and I would like to find the character with the highest repeat count across all 4 machines combined.
One approach I have is to use a hashmap on each machine, calculate the frequency on each machine individually, and then pass that hashmap to the main server, where the hashmaps from all 4 machines will be merged.
Thus we'll get the character with the highest frequency.
But the catch here is that I want to minimize the data transferred from each machine.
What improvements can be made?
[EDIT]
Each machine holds a part of the document
If you don't mind it taking longer...
Each computer passes the most frequent character(s). Hopefully, the number of characters tied for the highest frequency is low; ideally it would almost always be just one.
The main server combines them into a set. If the set has a single character, we're done. Otherwise, this set is passed along to the computers, likely as an array or list. Assuming only one character from each computer, this list would have only 2-4 characters.
Each computer returns the frequencies of each character in the set.
Main server sums the frequencies, obtaining the most frequent.
I assert that without prior knowledge of the distribution of characters in the document, any approach you take will have to reduce the data from all 4 computers onto one of them. To minimise the data transferred it is necessary to minimise the size of the data structure which holds the character counts on each computer.
Supposing that you are working with an alphabet of N characters, your problem is now the design of a data structure which can hold N integers (each in some range [0..m], m being the number of characters in the document held on that machine), and there is any number of such data structures to be found.
Of course, if you have prior knowledge of the distribution of characters, for example if you know that it is pure text written in English, you have a range of possible approaches to data compression.
Given the relatively small values for N and m likely to be found in practice, I agree with the general thrust of the commentary that it is probably not worth devising a complicated structure to minimise the amount of data transferred; sending an array of N integers would be adequate in most conceivable circumstances.
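To make "send an array of N integers" concrete, here is a sketch that simulates the four machines in one process: each machine fills a fixed 256-slot count array for its part of the document, and the main server just sums the arrays and picks the maximum. The function names and the choice of uint64_t counts are mine, not from the question:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define ALPHABET 256   /* one slot per possible byte value */

    /* Each machine runs this over its part of the document. */
    static void count_chars(const char *text, size_t len, uint64_t counts[ALPHABET]) {
        memset(counts, 0, ALPHABET * sizeof(uint64_t));
        for (size_t i = 0; i < len; i++)
            counts[(unsigned char)text[i]]++;
    }

    /* The main server adds each machine's array into a running total. */
    static void merge_counts(uint64_t total[ALPHABET], const uint64_t part[ALPHABET]) {
        for (int c = 0; c < ALPHABET; c++)
            total[c] += part[c];
    }

    static int most_frequent(const uint64_t total[ALPHABET]) {
        int best = 0;
        for (int c = 1; c < ALPHABET; c++)
            if (total[c] > total[best])
                best = c;
        return best;
    }

    int main(void) {
        const char *parts[4] = {"to be or ", "not to be", ", that is", " the question"};
        uint64_t total[ALPHABET] = {0}, counts[ALPHABET];
        for (int m = 0; m < 4; m++) {               /* one iteration per machine */
            count_chars(parts[m], strlen(parts[m]), counts);
            merge_counts(total, counts);            /* 256 integers "transferred" */
        }
        int best = most_frequent(total);
        printf("most frequent byte: '%c' (%llu times)\n",
               best, (unsigned long long)total[best]);
        return 0;
    }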
Sort at most 10 million 7-digit numbers. Constraints: 1 MB of RAM, high speed; several seconds is good.
[Edit: from comment by questioner: the input values are distinct]
Using a bitmap data structure is a good solution for this problem.
Does that mean I need a bit string whose length is at most 10 million?
So is the RAM enough for this?
I'm confused here.
Thank you.
So, there are ~8,000,000 bits in 1MB but if you have arbitrary 7 digit numbers (up to 9,999,999) using a bit vector to do the sort won't work. Similarly, it won't work if some numbers can be repeated because you can only store {0,1} in a bit vector.
But assuming, (what I think your problem is asking) that you have a sequence of integers between 0 and 8,000,000 with no duplicates, you can simply allocate a zeroed array of 8,000,000 bits and then for each number, tick off the corresponding bit in the array. Then outputting the sorted list is simply reading through that array in sequence and outputting the index for each 1 value.
If you are asking the more complex version of the question (0 - 10 million, repeats allowed), then you will need to sort chunks that fit in RAM, store them on disk, and then merge these chunks in linear time, streaming the output (so you never have to store the whole thing in memory). Here is an implementation of a very similar thing in Python: http://neopythonic.blogspot.com/2008/10/sorting-million-32-bit-integers-in-2mb.html
Start with a bit array representing the lowest 8 million possible values. Read through the input and set a bit for every value within range. Then output all the numbers for the turned-on bits in sequence. Next, clear the first 2 million bits of the array so that it can represent the highest 2 million possible values. Read through the input again and set a bit for every value in the new range. Output all the values in this range. Done.
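Here is a sketch of that multi-pass idea, assuming the input values are distinct and that the input can be read through more than once (the array in main stands in for however the numbers actually arrive):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define MAX_VALUE 10000000            /* 7-digit numbers: 0 .. 9,999,999 */
    #define CHUNK      8000000            /* bits that fit comfortably in 1 MB */

    static uint8_t bits[CHUNK / 8];       /* the ~1 MB bit array */

    static void set_bit(uint32_t i) { bits[i >> 3] |= (uint8_t)(1u << (i & 7)); }
    static int  get_bit(uint32_t i) { return (bits[i >> 3] >> (i & 7)) & 1; }

    /* Sort distinct values in [0, MAX_VALUE) by making one pass over the
       input per chunk of the value range; here "input" is just an array. */
    static void bitmap_sort(const uint32_t *input, size_t n) {
        for (uint32_t base = 0; base < MAX_VALUE; base += CHUNK) {
            memset(bits, 0, sizeof bits);
            for (size_t i = 0; i < n; i++)             /* pass over the input */
                if (input[i] >= base && input[i] < base + CHUNK)
                    set_bit(input[i] - base);
            for (uint32_t v = 0; v < CHUNK && base + v < MAX_VALUE; v++)
                if (get_bit(v))
                    printf("%u\n", base + v);          /* emit in sorted order */
        }
    }

    int main(void) {
        uint32_t input[] = {9999999, 42, 7654321, 1234567, 43};
        bitmap_sort(input, sizeof input / sizeof input[0]);
        return 0;
    }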
Can a long ASCII text string be crushed and compressed into a hash-like ASCII "checksum" using a sophisticated mathematical formula/algorithm? Just like air, which can be compressed.
The idea is to compress megabytes of ASCII text into 128 or so bytes by shuffling, then mixing new "patterns" of single "bytes" in turn from the first to the last. When decompressing, the last character is extracted first, and then we continue decompressing using the formula and the sequential keys from the last to the first. The sequential keys, the last and first bytes, the fully updated final compiled string, and the total number of bytes that were compressed must all be known exactly.
This is the "terra compression" I was thinking about. Is this possible? Can you explain with examples? I am working on this theory and it is my own thought.
In general? Absolutely not.
For some specific cases? Yup. A megabyte of ASCII text consisting only of spaces is likely to compress extremely well. Real text will generally compress pretty well... but not on the order of several megabytes down to 128 bytes.
Think about just how many strings - even just strings of valid English words - can fit into several megabytes. Far more than 256^128. They can't all compress down to 128 bytes, by the pigeon-hole principle...
If you have n possible input strings and m possible compressed strings and m is less than n, then two strings must map to the same compressed string. This is called the pigeonhole principle and is the fundamental reason why there is a limit on how much you can compress data.
What you are describing is more like a hash function. Many hash functions are designed so that given a hash of a string it is extremely unlikely that you can find another string that gives the same hash. But there is no way that given a hash you can discover the original string. Even if you are able to reverse the hashing operation to produce a valid input that gives that hash, there are infinitely many other inputs that would give the same hash. You wouldn't know which of them is the "correct" one.
Information theory is the scientific field which addresses questions of this kind. It also lets you calculate the minimum number of bits needed to store a compressed message (with lossless compression). This lower bound is known as the entropy of the message.
Calculating the entropy of a piece of text is possible using a Markov model. Such a model uses information about how likely a certain sequence of characters of the alphabet is.
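For the simplest case, an order-0 model that ignores context between characters, the entropy in bits per character is H = -sum over c of p(c) * log2 p(c), and H times the message length is the lossless lower bound. A minimal sketch of that calculation:

    #include <math.h>
    #include <stdio.h>
    #include <string.h>

    /* Order-0 entropy (bits per character) estimated from byte frequencies. */
    static double entropy_bits_per_char(const char *text, size_t len) {
        size_t counts[256] = {0};
        for (size_t i = 0; i < len; i++)
            counts[(unsigned char)text[i]]++;
        double h = 0.0;
        for (int c = 0; c < 256; c++) {
            if (counts[c] == 0) continue;
            double p = (double)counts[c] / (double)len;
            h -= p * log2(p);
        }
        return h;
    }

    int main(void) {
        const char *msg = "aaaaaaaabbbcc";          /* skewed distribution */
        double h = entropy_bits_per_char(msg, strlen(msg));
        printf("%.3f bits/char, at least %.1f bits total\n", h, h * strlen(msg));
        return 0;
    }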
The air analogy is very wrong.
When you compress air you make the molecules come closer to each other, each molecule is given less space.
When you compress data you cannot make the bits smaller (unless you put your hard drive in a hydraulic press). The closest you can get to actually making bits smaller is increasing the bandwidth of a network, but that is not compression.
Compression is about finding a reversible formula for calculating data. The "rules" of data compression are roughly:
The algorithm (including any standard start dictionaries) is shared beforehand and not included in the compressed data.
All startup parameters must be included in the compressed data, including:
Choice of algorithmic variant
Choice of dictionaries
All compressed data
The algorithm must be able to compress/decompress all possible messages in your domain (like plain text, digits or binary data).
To get a feeling of how compression works you may study some examples, like Run length encoding and Lempel Ziv Welch.
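For example, a bare-bones run-length encoder, one of the schemes mentioned above, replaces each run of identical bytes with a count byte followed by the byte itself; it only helps when the input actually contains long runs. A rough sketch:

    #include <stdio.h>
    #include <string.h>

    /* Minimal run-length encoding: emit <count><byte> for each run.
       Runs longer than 255 are split so the count fits in one byte. */
    static size_t rle_encode(const unsigned char *in, size_t n, unsigned char *out) {
        size_t o = 0;
        for (size_t i = 0; i < n; ) {
            size_t run = 1;
            while (i + run < n && in[i + run] == in[i] && run < 255)
                run++;
            out[o++] = (unsigned char)run;
            out[o++] = in[i];
            i += run;
        }
        return o;                      /* number of encoded bytes */
    }

    int main(void) {
        const unsigned char text[] = "aaaaaaaaaabbbbbcx";
        unsigned char encoded[64];
        size_t n = rle_encode(text, strlen((const char *)text), encoded);
        printf("17 input bytes -> %zu encoded bytes\n", n);   /* runs: 10a 5b 1c 1x */
        return 0;
    }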
You may be thinking of fractal compression which effectively works by storing a formula and start values. The formula is iterated a certain number of times and the result is an approximation of the original input.
This allows for high compression but is lossy (output is close to input but not exactly the same) and compression can be very slow. Even so, ratios of 170:1 are about the highest achieved at the moment.
This is a bit off topic, but I'm reminded of the Broloid compression joke thread that appeared on USENET ... back in the days when USENET was still interesting.
Seriously, anyone who claims to have a magical compression algorithm that reduces any text megabyte file to a few hundred bytes is either:
a scammer or click-baiter,
someone who doesn't understand basic information theory, or
both.
You can compress text to a certain degree because it doesn't use all the available byte values (i.e. a-z and A-Z make up 52 out of 256 values). Repeating patterns allow some intelligent storage (zip).
There is no way to store arbitrary large chunks of text in any fixed length number of bytes.
You can compress air, but you won't remove its molecules! Its mass stays the same.
I've done an example using bit arrays from a newbie manual. I want to know what they can be used for and what some common data structures for them are (assuming that "array" is fairly loose terminology).
Thanks.
There are several listed in the Applications section of the Bit array Wikipedia article:
Because of their compactness, bit arrays have a number of applications in areas where space or efficiency is at a premium. Most commonly, they are used to represent a simple group of boolean flags or an ordered sequence of boolean values.
We mentioned above that bit arrays are used for priority queues, where the bit at index k is set if and only if k is in the queue; this data structure is used, for example, by the Linux kernel, and benefits strongly from a find-first-zero operation in hardware.
Bit arrays can be used for the allocation of memory pages, inodes, disk sectors, etc. In such cases, the term bitmap may be used. However, this term is frequently used to refer to raster images, which may use multiple bits per pixel.
Another application of bit arrays is the Bloom filter, a probabilistic set data structure that can store large sets in a small space in exchange for a small probability of error. It is also possible to build probabilistic hash tables based on bit arrays that accept either false positives or false negatives.
Bit arrays and the operations on them are also important for constructing succinct data structures, which use close to the minimum possible space. In this context, operations like finding the nth 1 bit or counting the number of 1 bits up to a certain position become important.
Bit arrays are also a useful abstraction for examining streams of compressed data, which often contain elements that occupy portions of bytes or are not byte-aligned. For example, the compressed Huffman coding representation of a single 8-bit character can be anywhere from 1 to 255 bits long.
In information retrieval, bit arrays are a good representation for the posting lists of very frequent terms. If we compute the gaps between adjacent values in a list of strictly increasing integers and encode them using unary coding, the result is a bit array with a 1 bit in the nth position if and only if n is in the list. The implied probability of a gap of n is 1/2^n. This is also the special case of Golomb coding where the parameter M is 1; this parameter is only normally selected when -log(2-p)/log(1-p) ≤ 1, or roughly when the term occurs in at least 38% of documents.
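To make the basic operations behind those applications concrete, here is a minimal bit array sketch with set, clear, test, and a count of 1 bits; the type and function names are just for illustration:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define NBITS 1024

    typedef struct { uint32_t words[NBITS / 32]; } bitarray;

    static void ba_set(bitarray *b, unsigned i)   { b->words[i / 32] |=  (1u << (i % 32)); }
    static void ba_clear(bitarray *b, unsigned i) { b->words[i / 32] &= ~(1u << (i % 32)); }
    static int  ba_test(const bitarray *b, unsigned i) { return (b->words[i / 32] >> (i % 32)) & 1; }

    /* Count set bits -- the "counting the number of 1 bits" operation above. */
    static unsigned ba_popcount(const bitarray *b) {
        unsigned total = 0;
        for (unsigned w = 0; w < NBITS / 32; w++)
            for (uint32_t x = b->words[w]; x; x &= x - 1)   /* clear lowest set bit */
                total++;
        return total;
    }

    int main(void) {
        bitarray b;
        memset(&b, 0, sizeof b);
        ba_set(&b, 3);
        ba_set(&b, 300);
        ba_clear(&b, 3);
        printf("bit 300 = %d, bit 3 = %d, popcount = %u\n",
               ba_test(&b, 300), ba_test(&b, 3), ba_popcount(&b));   /* 1, 0, 1 */
        return 0;
    }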