Best way of storing differences between 2 images - image

I am doing motion detection.
I compare 2 images at a time.
The differences are compared at the pixel level.
I want to store the differences in a file.
I have tried saving the hex values into a 2-dimensional string array and serializing it out to a file with the BinaryFormatter, but the size is 495 KB while the original image is only 32 KB.
What is the most efficient way of storing differences?
I am using C#
Thanks

There are many ways. Maybe take a look at how bdiff does it. In general, compare the binary values, not a hex representation. The BinaryFormatter serialization probably also adds some overhead.
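To make the binary-diff idea concrete, here is a minimal sketch (in Python for brevity; the same approach ports directly to C#). It XORs the raw pixel buffers of two equal-sized frames and deflates the result: unchanged pixels XOR to zero, and long runs of zeros compress extremely well, so the stored diff ends up far smaller than the raw frame.

```python
import zlib

def diff_frames(prev: bytes, curr: bytes) -> bytes:
    """XOR two equal-length raw frames and deflate the result."""
    xored = bytes(a ^ b for a, b in zip(prev, curr))
    return zlib.compress(xored, level=9)

def apply_diff(prev: bytes, diff: bytes) -> bytes:
    """Recover the second frame from the first frame plus the stored diff."""
    xored = zlib.decompress(diff)
    return bytes(a ^ b for a, b in zip(prev, xored))

# Two hypothetical 32x32 grayscale frames differing in a couple of pixels.
frame1 = bytes(1024)
frame2 = bytearray(frame1)
frame2[100] = 255
frame2[101] = 128
frame2 = bytes(frame2)

diff = diff_frames(frame1, frame2)
assert apply_diff(frame1, diff) == frame2   # lossless round trip
assert len(diff) < len(frame2)              # diff is much smaller than the frame
```

The frame data here is made up for illustration; in the real application the bytes would come from the decoded bitmaps being compared.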

Compression methods

I know most compression methods rely on repetitions in the data in order to be effective. For example the string "AAAAAaaaQWERTY" can be represented as "5A3aQWERTY" losslessly and as something like "8aqwerty" lossily (these are just examples, not actual working methods). As far as I know, all compression algorithms count on repeats of constant strings of characters.
Here comes the problem with the string "abcdefghijklmnopqrstuvwxyz". Nothing repeats here, but as you can probably see, the information in the string can be represented far more compactly. In regex-like notation it would be "[a-z]", or maybe "for(x=0;x<26;++x){ascii(97+x)}".
Consider also the string "0149162536496481100121" - it can be represented by "for(x=0;x<12;++x){x*x}".
The string "ABEJQZer" can be represented by "for(x=0;8;++){ascii(64+x*x)}"
The last two were examples of knowing an algorithm which can reproduce the original string. I know that in general, algorithms (if they are efficient) take far less space than the data they can produce.
For example, SVG images (whose files contain only drawing instructions) are often smaller than the equivalent JPEG.
My question is: is there a compression method that takes the data and tries to find efficient algorithms which can represent it? Like vectorizing a raster image (as http://vectormagic.com/ does), but working with other kinds of data too.
Consider audio data (since it can be compressed lossily) - some audio editors (Audacity, for example) have project files containing information like "generate a constant 120 Hz frequency with 0.8 amplitude from time 0 to 2 minutes 45.6 seconds" (Audacity stores this info in XML format). This metadata takes very little memory, and when the project is exported to WAV or MP3, the program "renders" the information to actual samples in the exported format.
In that case the compressor would have to reverse the rendering process. It would take a WAV or MP3 file, figure out which algorithms can represent the samples (if it's lossy, the algorithms must produce some approximation of the samples - like vectormagic.com approximates the image) and produce a compressed file.
I understand that compression time would be unbelievably long, but do such (or similar) compression algorithms exist?
All compression methods are like that: the output is a set of parameters for a fixed set of algorithms that renders the input, or something similar to the input.
For example the MP3 audio codec breaks the input into blocks of 576 samples, converts each block into frequency-amplitude space, and prunes what cannot be heard by a human being. The output is equivalent to "during the next 13 milliseconds play frequencies x,y,z with amplitudes a,b,c". This works well for audio data, and a similar approach used in JPEG works well for photographic images.
Similar methods can be applied to the cases you mention. Sequences such as 987654 or 010409162536, for example, are generated by successive values of polynomials and can be represented as the coefficients of those polynomials: the first one as (9, -1) for 9-x, the second one as (1, 2, 1) for 1+2x+x².
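As an illustrative sketch of storing a polynomial sequence by a handful of numbers: for a sequence generated by a degree-d polynomial, the top row of its finite-difference table is nonzero only in its first d+1 entries, and those entries are enough to regenerate the whole sequence. (This is a standard finite-differences trick, not something from the original answer.)

```python
def difference_table_top(seq):
    """Top row of the finite-difference table of an integer sequence.
    For a degree-d polynomial sequence only the first d+1 entries are
    nonzero, so they serve as a compact representation."""
    row, top = list(seq), []
    while row:
        top.append(row[0])
        row = [b - a for a, b in zip(row, row[1:])]
    return top

def rebuild(top, n):
    """Regenerate n terms of the sequence from the stored top row."""
    out, row = [], list(top)
    for _ in range(n):
        out.append(row[0])
        for i in range(len(row) - 1):   # cascade the differences upward
            row[i] += row[i + 1]
    return out

squares = [x * x for x in range(12)]          # 0 1 4 9 ... 121
top = difference_table_top(squares)
assert top[:3] == [0, 1, 2] and all(v == 0 for v in top[3:])
assert rebuild([0, 1, 2], 12) == squares      # 3 numbers regenerate 12 terms
assert rebuild([9, -1], 6) == [9, 8, 7, 6, 5, 4]   # the 9-x example above
```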
The choice of algorithm(s) used to generate the input tends to be fixed for simplicity, and tailored for the use case. For example if you are processing photographic images taken with a digital camera there's little point in even attempting to produce a vectorial output.
When trying to losslessly compress some data, you always start by creating a model. For example, when compressing text in a human language, you assume that there are actually not that many distinct words, which repeat over and over. Many algorithms then learn the parameters of the model on the go: instead of relying on what those words actually are, the algorithm tries to find them in the given input. So the algorithm doesn't depend on the actual language used, but it does depend on the fact that it is a human language, which follows certain patterns.
In general, there is no perfect algorithm that can compress anything losslessly; this is mathematically proven. For any algorithm there exists some data for which the compression result is bigger than the data itself.
You can try data deduplication: http://en.m.wikipedia.org/wiki/Data_deduplication. It's a slightly different, more intelligent form of data compression.

Best way to store 1 trillion lines of information

I'm doing calculations and the resultant text file right now has 288012413 lines, with 4 columns. Sample column:
288012413; 4855 18668 5.5677643628300215
the file is nearly 12 GB's.
That's just unreasonable. It's plain text. Is there a more efficient way? I only need about 3 decimal places, but would a limiter save much room?
Go ahead and use a MySQL database.
MSSQL Express has a limit of 4 GB.
MS Access has a limit of 4 GB.
So those options are out. I think a simple database like MySQL or SQLite without indexing will be your best bet. It will probably be faster to access the data through a database anyway, and on top of that the file size may well be smaller.
Well,
The first column looks suspiciously like a line number - if this is the case then you can probably just get rid of it, saving around 11 characters per line.
If you only need about 3 decimal places then you can round / truncate the last column, potentially saving another 12 characters per line.
I.e. you can get rid of 23 characters per line. The sample line is 40 characters long, so you can approximately halve your file size.
If you do round the last column then you should be aware of the effect that rounding errors may have on your calculations - if the end result needs to be accurate to 3 dp then you might want to keep a couple of extra digits of precision depending on the type of calculation.
You might also want to look into compressing the file if it is just used for storing the results.
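The stripping described above could be sketched like this (Python used purely for illustration; the column layout is taken from the sample line in the question):

```python
def shrink_line(line: str) -> str:
    """Drop the leading line-number column and round the float to 3 dp."""
    lineno, rest = line.split(';', 1)
    col1, col2, value = rest.split()
    return f"{col1} {col2} {float(value):.3f}"

sample = "288012413; 4855 18668 5.5677643628300215"
assert shrink_line(sample) == "4855 18668 5.568"   # 40 chars down to 16
```

Applied line by line to the original file, this alone cuts the size roughly in half, before any general-purpose compression.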
Reducing the 4th field to 3 decimal places should reduce the file to around 8GB.
If it's just array data, I would look into something like HDF5:
http://www.hdfgroup.org/HDF5/
The format is supported by most languages, has built-in compression and is well supported and widely used.
If you are going to use the result as a lookup table, why use ASCII for numeric data? Why not define a struct like so:
struct x {
    long lineno;
    short thing1;
    short thing2;
    double value;
};
and write the struct to a binary file? Since all the records are of a known size, advancing through them later is easy.
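A quick sketch of the same record layout (shown in Python's struct module for illustration; the C struct above is the real target). With a fixed record size, jumping to record i is just a seek to i times the record size:

```python
import io
import struct

# Packed equivalent of the C struct ('<' = little-endian, no padding):
# q = 8-byte line number, h = 2-byte short (x2), d = 8-byte double.
REC = struct.Struct('<qhhd')   # 20 bytes per record

rows = [(288012413, 4855, 18668, 5.5677643628300215),
        (288012414, 4856, 18669, 2.5)]

buf = io.BytesIO()
for row in rows:
    buf.write(REC.pack(*row))

# Fixed-size records make random access trivial: record i starts at i * REC.size.
buf.seek(1 * REC.size)
assert REC.unpack(buf.read(REC.size)) == (288012414, 4856, 18669, 2.5)
```

At 20 bytes per record, 288 million lines come to roughly 5.8 GB, already half the 12 GB text file before dropping the line-number field.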
Well, if the files are that big, and you are doing calculations that require any sort of precision with the numbers, you are not going to want a limiter. That might well do more harm than good, and with a 12-15 GB file, problems like that will be really hard to debug. I would use some compression utility, such as GZIP, ZIP, BlakHole, 7-Zip or something like that to compress it.
Also, what encoding are you using? If you are just storing numbers, all you need is ASCII. If you are using a Unicode encoding, that will double or quadruple the size of the file vs. ASCII.
Like AShelly, but smaller.
Assuming line #'s are continuous...
struct x {
    short thing1;
    short thing2;
    short value; // you said only 3 dp, so store as fixed point n*1000; you get 2 digits left of the dp
};
Save it in a binary file.
lseek(), read() and write() are your friends.
The file will still be large(ish), at around 1.7 GB.
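A sketch of that fixed-point layout (Python's struct module used for illustration). Note the short limits the value to about ±32.767 once scaled by 1000, which is the assumption behind "2 digits left of the dp":

```python
import struct

REC = struct.Struct('<hhh')   # three 2-byte shorts: 6 bytes per record

def pack_row(thing1: int, thing2: int, value: float) -> bytes:
    # Fixed point: store value*1000 in a short; only valid for |value| < 32.768.
    return REC.pack(thing1, thing2, round(value * 1000))

def unpack_row(data: bytes):
    t1, t2, v = REC.unpack(data)
    return t1, t2, v / 1000

assert REC.size == 6                     # 288M records -> about 1.7 GB
assert unpack_row(pack_row(4855, 18668, 5.568)) == (4855, 18668, 5.568)
```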
The most obvious answer is just "split the data". Put it into different files, e.g. 1 million lines per file. NTFS is quite good at handling hundreds of thousands of files per folder.
Then you've got a number of answers regarding reducing data size.
Next, why keep the data as text if you have a fixed-size structure? Store the numbers in binary - this will reduce the space even more (the text format is very redundant).
Finally, a DBMS can be your best friend. A NoSQL DBMS should work well, though I am not an expert in this area and I don't know which one will hold a trillion records.
If I were you, I would go with the fixed-size binary format, where each record occupies a fixed (16-20?) bytes of space. Then even if I keep the data in one file, I can easily determine at which position I need to start reading. If you need to do lookups (say by column 1) and the data is not re-generated all the time, it could be possible to do a one-time sort by the lookup key after generation - this would be slow, but as a one-time procedure it would be acceptable.
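The sort-once-then-seek idea above could be sketched as a binary search over fixed-size records (Python for illustration; the 16-byte record layout and field order are assumptions for the example, not from the original answer):

```python
import struct

REC = struct.Struct('<ihhd')   # hypothetical 16-byte record, key in field 0

def lookup(buf: bytes, key: int):
    """Binary search over fixed-size records sorted by their first field."""
    lo, hi = 0, len(buf) // REC.size
    while lo < hi:
        mid = (lo + hi) // 2
        rec = REC.unpack_from(buf, mid * REC.size)   # seek = mid * REC.size
        if rec[0] == key:
            return rec
        if rec[0] < key:
            lo = mid + 1
        else:
            hi = mid
    return None

records = sorted([(7, 1, 2, 0.5), (3, 4, 5, 1.5), (9, 6, 7, 2.5)])
blob = b''.join(REC.pack(*r) for r in records)
assert lookup(blob, 9) == (9, 6, 7, 2.5)
assert lookup(blob, 4) is None
```

On a real file the same arithmetic works with seek()/read() instead of an in-memory buffer, so no more than about 40 reads are needed even for a trillion records.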

Is Terra Compression possible? If so, please explain and provide samples

Can long ASCII text be crushed and compressed into a short, hash-like ASCII "checksum" using a sophisticated mathematical formula/algorithm - just like air, which can be compressed?
The idea is to compress megabytes of ASCII text into 128 or so bytes by shuffling, then mixing new "patterns" of single bytes, turn by turn, from the first to the last. When decompressing, the last character is extracted first, and then decompression continues using the formula and the sequential keys from the last to the first. The sequential keys, the first and last bytes, the fully updated final compiled string, and the total number of bytes that were compressed must all be known exactly.
This is the "terra compression" I was thinking about. Is this possible? Can you explain with examples? I am working on this theory and it is my own idea.
In general? Absolutely not.
For some specific cases? Yup. A megabyte of ASCII text consisting only of spaces is likely to compress extremely well. Real text will generally compress pretty well... but nowhere near several megabytes down to 128 bytes.
Think about just how many strings - even just strings of valid English words - can fit into several megabytes: far more than 256^128. They can't all compress down to 128 bytes, by the pigeonhole principle.
If you have n possible input strings and m possible compressed strings, and m is less than n, then two strings must map to the same compressed string. This is called the pigeonhole principle and is the fundamental reason why there is a limit on how much you can compress data.
What you are describing is more like a hash function. Many hash functions are designed so that given a hash of a string it is extremely unlikely that you can find another string that gives the same hash. But there is no way that given a hash you can discover the original string. Even if you are able to reverse the hashing operation to produce a valid input that gives that hash, there are infinitely many other inputs that would give the same hash. You wouldn't know which of them is the "correct" one.
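The pigeonhole argument can be made concrete with a tiny counting example (an illustration added here, not part of the original answer):

```python
from itertools import product

# All 8 three-bit strings, and all 7 strictly shorter bit strings (lengths 0-2).
inputs = [''.join(p) for p in product('01', repeat=3)]
outputs = [''.join(p) for n in range(3) for p in product('01', repeat=n)]

assert len(inputs) == 8    # 2^3 possible inputs
assert len(outputs) == 7   # 2^0 + 2^1 + 2^2 possible shorter outputs

# 8 inputs cannot map injectively into 7 outputs, so any scheme that shortens
# every 3-bit string must send two different inputs to the same output,
# making lossless decompression impossible.
```

The same count scales up: there are vastly more megabyte-long texts than 128-byte strings, so most texts simply have no 128-byte representation.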
Information theory is the scientific field which addresses questions of this kind. It also provides you the possibility to calculate the minimum amount of bits needed to store a compressed message (with lossless compression). This lower bound is known as the Entropy of the message.
Calculating the entropy of a piece of text is possible using a Markov model. Such a model uses information about how likely a certain sequence of characters of the alphabet is.
The air analogy is very wrong.
When you compress air you push the molecules closer together; each molecule is given less space.
When you compress data you cannot make the bits smaller (unless you put your hard drive in a hydraulic press). The closest you can get to actually making bits smaller is increasing the bandwidth of a network, but that is not compression.
Compression is about finding a reversible formula for calculating data. The "rules" of data compression go something like this:
The algorithm (including any standard start dictionaries) is shared before hand and not included in the compressed data.
All startup parameters must be included in the compressed data, including:
Choice of algorithmic variant
Choice of dictionaries
All compressed data
The algorithm must be able to compress/decompress all possible messages in your domain (like plain text, digits or binary data).
To get a feeling of how compression works you may study some examples, like Run length encoding and Lempel Ziv Welch.
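As a feel for the simplest of those, run-length encoding can be sketched in a few lines (Python used for illustration; this matches the "5A3a" example from the question above):

```python
from itertools import groupby

def rle_encode(s: str):
    """Collapse each run of identical characters into a (char, count) pair."""
    return [(ch, len(list(run))) for ch, run in groupby(s)]

def rle_decode(pairs) -> str:
    """Expand (char, count) pairs back into the original string."""
    return ''.join(ch * n for ch, n in pairs)

encoded = rle_encode("AAAAAaaaQWERTY")
assert encoded[:2] == [('A', 5), ('a', 3)]          # "5A3a..."
assert rle_decode(encoded) == "AAAAAaaaQWERTY"      # fully reversible
```

Note that on text with no runs ("QWERTY"), the pairs take more space than the input, a small instance of the no-free-lunch result mentioned above.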
You may be thinking of fractal compression which effectively works by storing a formula and start values. The formula is iterated a certain number of times and the result is an approximation of the original input.
This allows for high compression but is lossy (output is close to input but not exactly the same) and compression can be very slow. Even so, ratios of 170:1 are about the highest achieved at the moment.
This is a bit off topic, but I'm reminded of the Broloid compression joke thread that appeared on USENET ... back in the days when USENET was still interesting.
Seriously, anyone who claims to have a magical compression algorithm that reduces any megabyte text file to a few hundred bytes is either:
a scammer or click-baiter,
someone who doesn't understand basic information theory, or
both.
You can compress text to a certain degree because it doesn't use all the available byte values (e.g. a-z and A-Z make up 52 out of 256 values). Repeating patterns allow some intelligent storage (zip).
There is no way to store arbitrary large chunks of text in any fixed length number of bytes.
You can compress air, but you won't remove its molecules! Its mass stays the same.

Using base64 encoding as a mechanism to detect changes

Is it possible to use changes in the base64 encoding of an object to detect the degree of change in the object?
Suppose I send a document attachment to several users and each makes changes to it and emails back to me, can I use the string distance between original base64 and the received base64s to detect which version has the most changes. Would that be a valid metric?
If not, would there be any other metrics to quantify the deltas?
That would depend entirely on the type of document you had encoded. If it was a text file, then sure, the differences in the base64 encoding are probably on a par with the actual changes. However, you may have a file format where changes to the contents effectively produce a completely different binary file. An example of this would be a ZIP file.
You should do the same thing diff does, then compute your metrics on, for example, the diff file size.
In theory, yes, if you do a smart diff (detecting inserts, deletions, and modifications).
In practice, no, unless the documents are absolutely plain text. Binary formats can't be meaningfully diff'd.
Base64 packs groups of 3x8 bit values into 4x6. If you change one 8 bit value by one bit, then you'll impact only one of the 6 bit values. If you change by two bits, then you have about a 5/12 chance of hitting one of the other 6 bit values. So if you're counting bits, it is entirely equivalent; otherwise, you will introduce noise depending on the metric you use.
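The single-bit case above is easy to verify directly (a small demonstration added for illustration): flipping one bit of the input changes exactly one of the four base64 characters in that 3-byte group.

```python
import base64

data = bytes([0b10110100, 0b01101001, 0b11110000])        # one 3-byte group
flipped = bytes([data[0] ^ 0b00000001]) + data[1:]        # flip 1 bit of byte 0

a = base64.b64encode(data)      # 4 base64 characters, no padding
b = base64.b64encode(flipped)

# Exactly one of the four output characters differs.
assert sum(x != y for x, y in zip(a, b)) == 1
```

So for bit-level metrics the encoded and raw distances line up; for character- or byte-level metrics the 8-to-6-bit regrouping adds the noise the answer describes.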

Best compression algorithm? (see below for definition of best)

I want to store a list of the following tuples in a compressed format and I was wondering which algorithm gives me
smallest compressed size
fastest de/compression
tradeoff optimum ("knee" of the tradeoff curve)
My data looks like this:
(<int>, <int>, <double>),
(<int>, <int>, <double>),
...
(<int>, <int>, <double>)
One of the two ints refers to a point in time and it's very likely that the numbers ending up in one list are close to each other. The other int represents an abstract id and the values are less likely to be close, although they aren't going to be completely random, either. The double is representing a sensor reading and while there is some correlation between the values, it's probably not of much use.
Since the "time" ints can be close to each other, try to only store the first and after that save the difference to the int before (delta-coding). You can try the same for the second int, too.
Another thing you can try is to reorganize the data from [int1, int2, double], [int1, int2, double]... to [int1, int1...], [int2, int2...], [double, double...].
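Both suggestions together could be sketched like this (Python for illustration; the sample rows are made up):

```python
def delta_encode(values):
    """Store the first value, then successive differences."""
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

def delta_decode(deltas):
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

rows = [(1000, 7, 1.5), (1005, 9, 2.5), (1010, 8, 3.5)]

# Column-wise reorganization: [int1...], [int2...], [double...]
times, ids, readings = map(list, zip(*rows))

assert delta_encode(times) == [1000, 5, 5]           # small, repetitive deltas
assert delta_decode(delta_encode(times)) == times    # fully reversible
```

The columns of deltas are full of small, similar values, which is exactly what a general-purpose compressor exploits best.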
To find out what compression range your result will be in, you can write your data to a file and download the compressor CCM from Christian Martelock. I found that it performs very well for such data collections; it uses a quite fast context-mixing algorithm. You can also compare it to other compressors like WinZip, or use a compression library like zlib, to see if it is worth the effort.
Here is a common scheme used in most search engines: store deltas of values and encode each delta using a variable-byte encoding scheme, i.e. if the delta is less than 128, it can be encoded in only 1 byte. See VInt in Lucene and Protocol Buffers for details.
This will not give you the best compression ratio but usually the fastest for encoding/decoding throughput.
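The variable-byte scheme can be sketched in a few lines (Python for illustration; this is the little-endian base-128 layout used by Protocol Buffers' varints):

```python
def varint_encode(n: int) -> bytes:
    """Little-endian base-128: 7 payload bits per byte, high bit = 'more'."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)   # more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def varint_decode(data: bytes) -> int:
    n = 0
    for shift, byte in enumerate(data):
        n |= (byte & 0x7F) << (7 * shift)
    return n

assert varint_encode(127) == b'\x7f'            # deltas < 128 fit in one byte
assert len(varint_encode(300)) == 2
assert varint_decode(varint_encode(300)) == 300
```

Combined with delta coding, most timestamps then cost one byte each instead of four or eight.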
If I'm reading the question correctly, you simply want to store the data efficiently. Obviously simple options like compressed XML are easy, but there are more direct binary serialization methods. One that leaps to mind is Google's protocol buffers.
For example, in C# with protobuf-net, you can simply create a class to hold the data:
[ProtoContract]
public class Foo {
    [ProtoMember(1)]
    public int Value1 {get;set;}
    [ProtoMember(2)]
    public int Value2 {get;set;}
    [ProtoMember(3)]
    public double Value3 {get;set;}
}
Then just [de]serialize a List&lt;Foo&gt; or Foo[] etc., via the ProtoBuf.Serializer class.
I'm not claiming it will be quite as space-efficient as rolling your own, but it'll be pretty darned close. The protocol buffer spec makes fairly good use of space (for example, using base-128 for integers, such that small numbers take less space). But it would be simple to try it out, without having to write all the serialization code yourself.
This approach, as well as being simple to implement, also has the advantage of being simple to use from other architectures, since there are protocol buffers implementations for various languages. It also uses much less CPU than regular [de]compression (GZip/DEFLATE/etc), and/or xml-based serialization.
Sort as already proposed, then store the transposed matrix:
(first ints)
(second ints)
(doubles)
Then compress.
Most compression algorithms will work equally badly on such data. However, there are a few preprocessing steps you can take to increase the compressibility of the data before feeding it to a gzip- or deflate-like algorithm. Try the following:
First, if possible, sort the tuples in ascending order. Use the abstract ID first, then the timestamp. Assuming you have many readings from the same sensor, similar ids will be placed close together.
Next, if the measures are taken at regular intervals, replace the timestamp with the difference from the previous timestamp (except for the very first tuple for a sensor, of course). For example, if all measures are taken at 5-minute intervals, the delta between two timestamps will usually be close to 300 seconds. The timestamp field will therefore be much more compressible, as most values are equal.
Then, assuming that the measured values are stable in time, replace all readings with a delta from the previous reading for the same sensor. Again, most values will be close to zero, and thus more compressible.
Also, floating-point values are very bad candidates for compression due to their internal representation. Try converting them to integers. For example, temperature readings most likely do not require more than two decimal digits: multiply values by 100 and round to the nearest integer.
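The whole preprocessing pipeline described above (sort, delta-code timestamps and readings per sensor, scale floats to integers) could be sketched like this. The sample rows and the two-decimal-place scaling are assumptions for illustration:

```python
def preprocess(rows):
    """rows: (sensor_id, timestamp, reading) tuples.
    Sort by (id, timestamp), scale readings to ints (2 dp assumed enough),
    then delta-code timestamps and readings per sensor id."""
    rows = sorted(rows, key=lambda r: (r[0], r[1]))
    out, last = [], {}
    for sensor_id, ts, reading in rows:
        scaled = round(reading * 100)
        prev_ts, prev_val = last.get(sensor_id, (0, 0))
        out.append((sensor_id, ts - prev_ts, scaled - prev_val))
        last[sensor_id] = (ts, scaled)
    return out

rows = [(1, 0, 20.51), (1, 300, 20.53), (1, 600, 20.50), (2, 0, 19.99)]
pre = preprocess(rows)
# Timestamps collapse to a constant delta and readings to near-zero deltas,
# which a deflate-style compressor then handles very well.
assert pre == [(1, 0, 2051), (1, 300, 2), (1, 300, -3), (2, 0, 1999)]
```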
Great answers, for the record, I'm going to merge those I upvoted into the approach I'm finally using:
Sort and reorganize the data so that similar numbers are next to each other, i.e. sort by id first, then by timestamp, and rearrange from (<int1>, <int2>, <double>), ... to ([<int1>, <int1> ...], [<int2>, <int2> ...], [<double>, <double> ...]) (as suggested by schnaader and Stephan Leclercq).
Use delta-encoding on the timestamps (and maybe on the other values) as suggested by schnaader and ididak
Use protocol buffers to serialize (I'm going to use them anyway in the application, so that's not going to add dependencies or anything). Thanks to Marc Gravell for pointing me to it.
