I need to choose a compression algorithm to compress some data. I don't know the type of data I'll be compressing in advance (think of it as kinda like the WinRAR program).
I've heard of the following algorithms but I don't know which one I should use. Can anyone post a short list of pros and cons? For my application the first priority is decompression speed; the second priority is space saved. Compression (not decompression) speed is irrelevant.
Deflate
Implode
Plain Huffman
bzip2
lzma
I ran a few benchmarks compressing a .tar that contained a mix of high entropy data and text. These are the results:
Name - Compression rate* - Decompression Time
7zip - 87.8% - 0.703s
bzip2 - 80.3% - 1.661s
gzip - 72.9% - 0.347s
lzo - 70.0% - 0.111s
*Higher is better
From this I came to the conclusion that the compression rate of an algorithm depends on its name; the first in alphabetical order will be the one with the best compression rate, and so on.
Therefore I decided to rename lzo to 1lzo. Now I have the best algorithm ever.
EDIT: worth noting that of all of them unfortunately lzo is the only one with a very restrictive license (GPL) :(
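A benchmark like this is easy to reproduce with Python's standard library: gzip, bz2, and lzma roughly correspond to Deflate, bzip2, and 7zip's default codec (LZO has no stdlib binding, so it's left out). A minimal sketch, using synthetic data as a stand-in for the mixed .tar:

```python
import bz2, gzip, lzma, time

# Synthetic stand-in for a mixed .tar: repetitive text plus some noise.
data = (b"The quick brown fox jumps over the lazy dog. " * 2000
        + bytes(range(256)) * 100)

for name, comp, decomp in [("gzip", gzip.compress, gzip.decompress),
                           ("bzip2", bz2.compress, bz2.decompress),
                           ("lzma", lzma.compress, lzma.decompress)]:
    blob = comp(data)
    start = time.perf_counter()
    assert decomp(blob) == data          # round-trip sanity check
    elapsed = time.perf_counter() - start
    rate = 100.0 * (1 - len(blob) / len(data))  # higher is better
    print(f"{name}: {rate:.1f}% saved, {elapsed * 1000:.2f} ms to decompress")
```

The absolute numbers will differ from the table above (different data, different machine), but the relative ordering of ratio vs. decompression time should look familiar.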
If you need high decompression speed then you should be using LZO. Its compression speed and ratio are decent, but it's hard to beat its decompression speed.
The Linux kernel documentation explains the trade-offs well (for the algorithms it includes):
Deflate (gzip) - Fast, worst compression
bzip2 - Slow, middle compression
lzma - Very slow compression, fast decompression (though slower than gzip's), best compression
I haven't used the others, so it is hard to say, but the speed of an algorithm can depend heavily on the architecture. For example, there are studies showing that compressing data on the HDD speeds up I/O, because the processor is so much faster than the disk that the extra work pays off. However, it depends largely on where the bottlenecks are.
Similarly, one algorithm may use memory extensively, which may or may not cause problems (is 12 MiB a lot or very little? On an embedded system it is a lot; on a modern x86 it is a tiny fraction of memory).
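The memory point is concrete with LZMA, where decompressor memory is dominated by the dictionary size. A hedged sketch using Python's lzma module (the two dict_size values are illustrative extremes, not recommendations):

```python
import lzma

data = b"embedded systems have tight memory budgets " * 5000

# Decompressor memory use scales roughly with dict_size:
# 64 KiB suits an embedded target, 64 MiB is nothing on a desktop.
for dict_size in (1 << 16, 1 << 26):  # 64 KiB vs 64 MiB
    filters = [{"id": lzma.FILTER_LZMA2, "dict_size": dict_size}]
    blob = lzma.compress(data, format=lzma.FORMAT_XZ, filters=filters)
    assert lzma.decompress(blob) == data
    print(f"dict_size={dict_size >> 10} KiB -> {len(blob)} bytes compressed")
```

A larger dictionary can only help ratio (more history to match against), but it sets a memory floor for every machine that has to decompress the stream.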
Take a look at 7zip. It's open source and contains 7 separate compression methods. Some minor testing we've done shows the 7z format gives a much smaller result file than zip and it was also faster for the sample data we used.
Since our standard compression is zip, we didn't look at other compression methods yet.
For a comprehensive benchmark on text data you might want to check out the Large Text Compression Benchmark.
For other types, this might be indicative.
Related
I have a large text file, on the order of 100 MB, to compress. It has to be fast (12-14 seconds). What algorithms can I consider, and what compression ratio can I expect from them?
I found some file compression algorithms like FLZP, SR2, ZPAQ, Fp8, LPAQ8, PAQ9A... Which of these are performant? The time limit is strict for me.
The algorithms you have picked are among the best-compressing in the world. That is precisely why they are slow.
There are fast compression algorithms made for your use case. Names such as LZ4 and Snappy come up.
You have not defined which performance criterion you are looking for: more speed or more compression? LZ-based compressors (FLZP, LZO, LZ4, LZHAM, Snappy, ...) are the fastest. The PAQ compressors use context mixing for each bit, so they are slow but offer the best compression ratios. In between you can find things like Brotli and Zstd (both offer a wide range of options to tune speed vs. compression) or the older Bzip/Bzip2. Personally I like BCM for its great speed/compression compromise and its simple code: https://github.com/encode84/bcm.
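The speed/compression dial that Brotli and Zstd expose exists in zlib too, as compression levels 1-9; a quick sketch of the trade-off using only Python's stdlib (synthetic repetitive input, so the absolute numbers are only indicative):

```python
import time, zlib

data = b"zstd and brotli expose many levels; zlib has nine. " * 4000

for level in (1, 6, 9):  # fastest ... default ... smallest
    start = time.perf_counter()
    blob = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    print(f"level {level}: {len(blob)} bytes in {elapsed * 1000:.2f} ms")

# Note: decompression speed barely varies with level; only compression does.
assert zlib.decompress(blob) == data
```

That last property is why "compress slow, decompress fast" setups (install packages, game assets, kernels) can crank the level to maximum without hurting users.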
I have a dataset larger than main memory. After compression, it fits into memory. However, in-memory decompression is kind of compute-intensive.
Compared to reading uncompressed data from the hard drive, does in-memory decompression have any advantage in terms of time-to-completion? Assume the data from the HDD is loaded into memory in its entirety (i.e. no random access to the HDD during processing). Has anyone benchmarked this before? Thanks.
First, the data has to be compressible. If there is no compression, then obviously compressing to the HDD and decompressing back will be slower. Many files on a HDD are not compressible because they are already compressed, e.g. image files, video files, audio files, and losslessly compressed archives like zip or .tar.gz files.
If it is compressible, zlib decompression is likely to be faster than HDD reads, and lz4 decompression is very likely to be faster.
This is the classic sort of question which can only be correctly answered with "it depends" followed by "you need to measure it for your situation".
If you can decompress at least as fast as the HDD reads the data, and you decompress in parallel with the disk read, then reading compressed data will almost always be faster (the smaller file finishes reading sooner, and decompression adds only the latency of the last block).
According to this benchmark a pretty weak CPU can decompress gzip at over 60MB/s.
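That figure is easy to sanity-check on your own hardware. A rough throughput measurement for zlib decompression, with synthetic log-like data standing in for real input:

```python
import time, zlib

data = b"log line: GET /index.html 200 1234\n" * 200_000  # ~7 MB, compressible
blob = zlib.compress(data, 6)

start = time.perf_counter()
out = zlib.decompress(blob)
elapsed = time.perf_counter() - start

mb_per_s = len(out) / (1024 * 1024) / elapsed
print(f"decompressed {len(out) / 1e6:.1f} MB at {mb_per_s:.0f} MB/s")
# If this number beats your disk's sequential read speed, reading
# compressed data and decompressing on the fly will finish sooner.
```

On any modern CPU this should come out well above a magnetic disk's sequential read rate, which is the whole argument for compressed storage in this scenario.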
This depends on your data, on how you're processing it, and the specs of your machine. A few considerations that make this almost impossible to answer without profiling your exact scenario:
how good is your compression? Different compression algorithms use differing amounts of CPU.
how is the data used? The amount of data that you need to buffer before processing will affect how much you can multi-thread between decompression and processing, which will massively affect your answer.
what's your environment? A 16-core server with 1TB of data to process is very different to a fancy phone with 1GB of data, but it's not clear from your question which you're dealing with (HDD suggests a computer rather than a phone at least, but server vs desktop is still relevant).
how much random access are you doing once the data is loaded? You suggest there'll be no random access to the HDD after loading, but if you're loading the full compressed data and only decompressing a portion of data at a time, the pattern of access to the data is important - you might have to decompress everything twice (or more!) to process it.
Ultimately this question is hugely subjective and, if you think the performance difference will be important, I'd suggest you create some basic test scenarios and profile heavily.
As a more specific example: if you're doing heavy-duty audio or visual processing, the process is CPU intensive but will typically accept a stream of data. In that scenario, compression would probably slow you down as the bottleneck will be the CPU.
Alternatively, if you're reading a billion lines of text from a file and counting the total number of vowels in each, your disk IO will probably be the bottleneck, and you would benefit from reducing the disk IO and working the CPU harder by decompressing the file.
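The vowel-counting case is easy to sketch: stream the file through gzip so the disk only delivers compressed bytes, trading spare CPU for reduced I/O. A minimal version with Python's stdlib (the sample file is generated in place just for the example):

```python
import gzip, os, tempfile

path = os.path.join(tempfile.gettempdir(), "sample.txt.gz")

# Write a compressed sample file; in practice this already exists on disk.
lines = b"the quick brown fox\n" * 50_000
with gzip.open(path, "wb") as f:
    f.write(lines)

# Stream-decompress while counting: the disk reads far fewer bytes
# than the uncompressed text would require.
vowels = 0
with gzip.open(path, "rb") as f:
    for line in f:
        vowels += sum(line.count(v) for v in b"aeiou")

print(f"{vowels} vowels")
```

Because gzip.open yields lines lazily, the working set stays tiny no matter how large the file is, which is exactly the streaming shape the answer above describes.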
In our case, we optimized batch-processing code that went through structured messages (read: tweets); after switching the representation from JSON to msgpack and mapping the entire files with mmap, we reached a state where it was clearly I/O-bound, with the speed of the magnetic disk as the limiting factor.
We found out that the msgpacked messages containing mostly UTF-8 text could be compressed with compression ratios of 3-4 with LZ4; after switching to LZ4 decompression our optimized code was still I/O-bound, but the throughput increased significantly.
In your case, I'd start experimenting with LZ4.
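If the lz4 bindings aren't handy, the same experiment can be dry-run with zlib as a stand-in (same access pattern, just a slower codec): compress the message batch once at write time, then stream-decompress in chunks at read time. A hedged sketch, with plain bytes replacing the msgpack step:

```python
import zlib

# Stand-in for batches of msgpacked tweets: mostly UTF-8 text.
messages = [f"user{i}: compression keeps this pipeline I/O-bound".encode()
            for i in range(10_000)]

# Compress once at write time...
batch = b"\n".join(messages)
blob = zlib.compress(batch, 1)  # low level approximates LZ4's trade-off
print(f"ratio: {len(batch) / len(blob):.1f}x")

# ...then decompress with a streaming object at read time, chunk by chunk,
# so the working set stays small even for files larger than memory.
d = zlib.decompressobj()
out = bytearray()
for i in range(0, len(blob), 64 * 1024):
    out.extend(d.decompress(blob[i:i + 64 * 1024]))
out.extend(d.flush())
assert bytes(out) == batch
```

Swapping zlib for lz4.frame (from the third-party lz4 package) keeps the same structure while pushing decompression speed up by roughly an order of magnitude.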
Hi, I've heard of lzo and lzf, and it seems they are both compression algorithms. Are they the same thing? Are there any other algorithms like them (light and fast)?
lzo and lzf are two well-known, very simple compression algorithms.
lzf goes for low memory usage during compression.
lzo goes for maximum decoding speed.
Both are fast, both have small memory requirements, and both have comparable compression rates (which is to say, very poor ones).
You can look at a direct comparison of them with other compressors here, for example:
http://phantasie.tonempire.net/t96-compression-benchmark#149
Are there any other algorithms like them(light and fast)?
There is also LZ4 and Google's snappy. According to the benchmarks published by the LZ4 author on the project homepage and Hadoop developers on issue HADOOP-7657, LZ4 seems the fastest of them all.
Both are basic Lempel-Ziv compressors, which allows fast operation (since there is no second encoding phase using Huffman coding, as gzip/zip do, or a statistical encoder) at the cost of moderate compression.
One benchmark for comparing codecs on java is jvm-compressor-benchmark. LZO is not yet included, but pure Java LZF has excellent performance (esp. compression speed), and I assume LZO might fare well too, if there was a driver for it.
Another LZ-based algorithm is Snappy by Google, and its native codec is the fastest codec at decompression (and compression is as fast as pure-java LZF compression).
Splittable LZ4 and ZSTD for Hadoop - recently released but promising -> https://github.com/carlomedas/4mc
We have to compress a ton o' (monochrome) image data and move it quickly. If one were to use just the parallelizable stages of JPEG compression (DCT and run-length encoding of the quantized results) and run them on a GPU so each block is compressed in parallel, I am hoping that would be very fast and still yield a very significant compression factor, like full JPEG does.
Does anyone with more GPU / image-compression experience have any idea how this would compare, both compression- and performance-wise, to using libjpeg on a CPU? (If it is a stupid idea, feel free to say so - I am an extreme novice in my knowledge of CUDA and the various stages of JPEG compression.) Certainly it will compress less, and hopefully(?) be faster, but I have no idea how significant those factors may be.
You can hardly get more compression out of a GPU - there are just no algorithms complex enough to use that much power.
When working with simple algorithms like JPEG, the work is so light that you'll spend most of the time transferring data over the PCI-E bus (which has significant latency, especially when the card does not support DMA transfers).
The positive side is that if the card has DMA, you can free up the CPU for more important work and get image compression "for free".
In the best case, you can get about a 10x improvement on a top-end GPU compared to a top-end CPU, provided that both the CPU and GPU code are well-optimized.
I'm looking for an algorithm to decompress chunks of data (1k-30k) in real time with minimal overhead. Compression should preferably be fast but isn't as important as decompression speed.
From what I could gather LZO1X would be the fastest one. Have I missed anything? Ideally the algorithm is not under GPL.
lz4 is what you're looking for here.
LZ4 is a lossless compression algorithm, providing compression speed at 400 MB/s per core, scalable with multi-core CPUs. It features an extremely fast decoder, with speed in multiple GB/s per core, typically reaching RAM speed limits on multi-core systems.
Try Google's Snappy.
Snappy is a compression/decompression library. It does not aim for maximum compression, or compatibility with any other compression library; instead, it aims for very high speeds and reasonable compression. For instance, compared to the fastest mode of zlib, Snappy is an order of magnitude faster for most inputs, but the resulting compressed files are anywhere from 20% to 100% bigger. On a single core of a Core i7 processor in 64-bit mode, Snappy compresses at about 250 MB/sec or more and decompresses at about 500 MB/sec or more.
When you cannot use GPL licensed code your choice is clear - zlib. Very permissive license, fast compression, fair compression ratio, very fast decompression, works everywhere and ported to every sane language.
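zlib's API is about as small as the licensing worries: one call each way. The whole round trip in Python's stdlib binding, with a synthetic payload:

```python
import zlib

payload = b"zlib: permissive license, fair ratio, fast decompression. " * 1000

blob = zlib.compress(payload, 6)   # 1 = fastest, 9 = smallest
restored = zlib.decompress(blob)

assert restored == payload
print(f"{len(payload)} -> {len(blob)} bytes "
      f"({100 * (1 - len(blob) / len(payload)):.1f}% saved)")
```

The C API is only slightly wordier (compress2/uncompress, or the deflate/inflate streaming pair), which is a big part of why it is ported everywhere.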