When to use compression - Hadoop

The question is in the title - when is it good to use compression? By good, I mean faster processing.
My pipeline consists of multiple MR jobs and intermediate results are stored in sequence files.
The data is numeric - time series. Also, it happens that the output of one job has the same size as the input, so the data transferred/stored can be large.
I would like to know whether I can expect a speedup from compression, or whether compressing/decompressing the data will take more time than it saves?

It is almost always a good idea to enable compression of intermediate data with a fast codec (read: Snappy). You won't get penalized much even if your data is incompressible.
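For context, here is a minimal sketch of how map-output compression with Snappy can be enabled in a job driver, assuming the Hadoop 2.x (mapreduce.*) property names and that the native Snappy library is available on the cluster nodes:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.SnappyCodec;
    import org.apache.hadoop.mapreduce.Job;

    public class SnappyIntermediateCompression {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Compress the map output that is spilled to disk and shuffled to the reducers.
            conf.setBoolean("mapreduce.map.output.compress", true);
            conf.setClass("mapreduce.map.output.compress.codec",
                          SnappyCodec.class, CompressionCodec.class);

            Job job = Job.getInstance(conf, "job with Snappy-compressed intermediate data");
            // ... set mapper, reducer, input and output paths as usual ...
        }
    }

The same two properties can also be set in mapred-site.xml to make compressed intermediate data the cluster-wide default.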

Compression won't hurt your job as long as you are aware of what you are trying to achieve; just make sure your compressed data is splittable. I found the bzip2 format a good balance of compression ratio and CPU usage, but it is better to do in-house testing with different formats on your own data set.
Compression gives two major benefits:
1) It uses less disk space while running the MapReduce job (both intermediate output and final output are compressed).
2) It increases job performance, since compressed data is sent across the cluster nodes during the shuffle phase.
Hope that helps.

Related

Performance improvement of Hive with SSD

I'm trying to use an SSD in order to improve Hive performance.
SSDs have high-speed random access, and I want to take advantage of that by changing how Hive executes its MapReduce code.
My idea is to simplify or eliminate the shuffle step.
Is this possible? If so, where would I make the change?
P.S. Please also explain what happens when Hive runs, and where the temporary files are stored.
I do not know English well; I'm sorry, and thank you.
In theory you can write your own partitioner and send the data to a reducer that runs on the same node where the mapper ran.
Doing so, however, you will never get the output file "un-split" (properly consolidated), so avoiding the shuffle is not a good idea.
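To make the first suggestion concrete, here is a minimal, hypothetical sketch of a custom Partitioner. Keep in mind that a Partitioner only decides which reduce partition a key goes to; where that reducer container actually runs is up to the scheduler, which is part of why avoiding the shuffle this way does not really work:

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    // Hypothetical partitioner: routes keys to reduce partitions by the first
    // character of the key instead of the default hash. It only picks the
    // partition number; reducer placement is decided by the scheduler.
    public class CustomPartitioner extends Partitioner<Text, IntWritable> {
        @Override
        public int getPartition(Text key, IntWritable value, int numPartitions) {
            String k = key.toString();
            int firstChar = k.isEmpty() ? 0 : k.charAt(0);
            return firstChar % numPartitions;
        }
    }

It would be registered on the job with job.setPartitionerClass(CustomPartitioner.class).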
If you have a fast disk, as an SSD can be, you can increase the block size.
Usually the block size is chosen so that the seek time is no more than about 1% of the time to transfer the whole block.
This will also reduce the number of mappers used, since there are fewer splits. To some extent, fewer mappers also means less shuffling.
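To give a feel for that rule: with a seek time of about 10 ms and a transfer rate of about 100 MB/s, a block of roughly 100 MB keeps the seek at around 1% of the transfer time, so faster storage can justify a bigger block. Below is a minimal sketch of raising the block size for the files a particular job writes (256 MB is only an illustrative value; the cluster-wide default normally lives in hdfs-site.xml):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class LargerBlockSizeJob {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Files written with this configuration get 256 MB HDFS blocks.
            conf.setLong("dfs.blocksize", 256L * 1024 * 1024);

            Job job = Job.getInstance(conf, "job writing larger HDFS blocks");
            // ... set mapper, reducer, input and output paths as usual ...
        }
    }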
Using a compressed file format for the intermediate files also speeds up the work.

How does compression in Hive result in better query performance?

Many best practices suggest that the data should be stored in a compressed format in HDFS.
There are clear performance differences when running Hive queries on a table comprising compressed text files (chunked gzip files of around 250 MB each) vs. an uncompressed text file.
Can somebody please explain what is happening behind the scenes?
As per my understanding, while the query input is being assigned to mapper tasks, there is a decompression stage and then the query runs. If this is the case, how can it provide better performance than an uncompressed text file, given the overhead of decompression?
There are two aspects involved here:
Network overhead: The MapReduce paradigm is often criticized for the overhead of its shuffle and sort. If you look at these steps in a very narrow way, they do not contribute anything to the processing you actually want to do. On top of that, when large volumes of data flow through the network at the physical level, even with gigabit switches the shuffle and sort become the bottleneck (unless the job is dominated by a very involved computation). Hence, the more compressed the data is, the more easily it passes through the shuffle-sort bottleneck.
Sparse data: Bigger datasets are mostly sparse (exceptions exist, but take it as a rule of thumb). So compression brings down the size of the data, and the shuffle-sort step then becomes much smaller.
Data compression in Hive tables is known to give better performance than uncompressed storage, both in terms of disk usage and query performance.
You can import text files compressed with Gzip directly into a table stored as TextFile. The compression will be detected automatically and the file will be decompressed on-the-fly during query execution.
For SequenceFiles, RECORD compression compresses each value individually, while BLOCK buffers up about 1 MB of records (by default) before compressing.
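As an illustration, here is a minimal sketch of writing a block-compressed SequenceFile (the path and records are made up); swap CompressionType.BLOCK for CompressionType.RECORD to compress each value on its own, or DefaultCodec for SnappyCodec if the native library is available:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.SequenceFile.CompressionType;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.DefaultCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    public class BlockCompressedSequenceFile {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Path path = new Path("/tmp/example.seq");                    // hypothetical output path
            CompressionCodec codec = ReflectionUtils.newInstance(DefaultCodec.class, conf);

            // BLOCK compression buffers records (roughly 1 MB by default, see
            // io.seqfile.compress.blocksize) and compresses them together.
            try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                    SequenceFile.Writer.file(path),
                    SequenceFile.Writer.keyClass(IntWritable.class),
                    SequenceFile.Writer.valueClass(Text.class),
                    SequenceFile.Writer.compression(CompressionType.BLOCK, codec))) {
                for (int i = 0; i < 1000; i++) {
                    writer.append(new IntWritable(i), new Text("value-" + i));
                }
            }
        }
    }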

What is the effect of compression on a MapReduce job?

The following has been documented in a white paper from Microsoft:
Compression will help if the input source files are very large (for example, over 500 GB) and you want to run MapReduce jobs repeatedly against the same input data.
So, we should use compression if the input files are very large, as it saves disk I/O and network bandwidth.
But I cannot understand how "running a MapReduce job repeatedly against the same input data" helps with the cost of compressing and decompressing. Compression should have the same cost even if the data is different every time.
I think it depends on what kind of compression is used for the files. The following is information I found on the internet.
http://comphadoop.weebly.com/
File compression brings two major benefits:
a. it reduces the space needed to store files,
b. it speeds up data transfer across the network, or to or from disk.
When dealing with large volumes of data, both of these savings can be significant, so it pays to carefully consider how to use compression in Hadoop.
Reasons to compress:
a) The data is mostly stored and not frequently processed. This is the usual DWH scenario. In this case the space saving can be much more significant than the processing overhead.
b) The compression factor is very high, so we save a lot of IO.
c) Decompression is very fast (as with Snappy), so we get some gain at little cost.
d) The data already arrived compressed.
Reasons not to compress:
a) The compressed data is not splittable. Note that many modern formats are built with block-level compression to enable splitting and other partial processing of the files.
b) The data is created in the cluster and compression takes significant time. Note that compression is usually much more CPU-intensive than decompression.
All compression algorithms exhibit a space/time trade-off: faster compression and decompression speeds usually come at the expense of smaller space savings. In other words, faster compression means the files stay relatively larger (less space benefit), while slower compression produces much smaller files.

Map Reduce & RDBMS

I was reading Hadoop: The Definitive Guide. It says MapReduce is good for updating larger portions of a database, and that it uses sort & merge to rebuild the database, which is dependent on transfer time.
It also says an RDBMS is good for updating only smaller portions of a big database; it uses a B-Tree, which is limited by seek time.
Can anyone elaborate on what both these claims really mean?
I am not really sure what the book means, but you will usually use a MapReduce job to rebuild an entire database (or anything else) if you still have the raw data.
The really good thing about Hadoop is that it's distributed, so performance is not really a problem, since you can just add more machines.
Let's take an example: you need to rebuild a complex table with 1 billion rows. With an RDBMS, you can only scale vertically, so you depend more on the power of the CPU and how fast the algorithm is. You will be doing it with some SQL command: you need to select some data, process it, and so on, so you will most likely be limited by seek time.
With Hadoop MapReduce, you can just add more machines, so performance is not the problem. Let's say you use 10,000 mappers; the task will be divided across 10,000 mapper containers, and because of Hadoop's nature, these containers usually already have the data stored locally on their hard drives. The output of each mapper is always in a key-value format on its local hard drive, and this data is sorted by key by the mapper.
Now the problem is that the data needs to be combined, so all of it is sent to a reducer. This happens over the network and is usually the slowest part if you have big data. The reducer receives all of the data and merge-sorts it for further processing. In the end you have a file which can simply be loaded into your database.
The transfer from mapper to reducer usually takes the longest time if you have a lot of data, and the network is usually your bottleneck. Maybe this is what the book means by depending on transfer time.

Hadoop MR: better to have compressed input files or raw files?

As can be derived from the title, I want to know when it makes sense to have input files in a compressed format (like gzip) and when it makes sense to have them uncompressed.
What is the overhead of having compressed files? Is it much slower to read the file? Are there any benchmarks on big input files?
Thanks!
It mostly makes sense to have input files in a compressed format, unless you are doing development and need to frequently copy data from HDFS to the local file system to work on it.
A compressed format provides a significant advantage. The data is already replicated in the Hadoop cluster unless you set it otherwise. Replicated data gives good redundancy but consumes more space: if all your data is replicated with a factor of 3, you are going to consume 3 times the capacity required to store it.
Compression of textual data such as logs is very effective, as it yields a high compression ratio. This is also the kind of data you most often find in a Hadoop cluster.
I don't have benchmarks, but I have not seen any significant penalty on a decent-sized cluster with the data that we have.
However, for the time being, choose LZO over gzip.
See: LZO compression and its significance over gzip
Gzip compresses better than LZO, but LZO is faster at compressing and decompressing. LZO files can be made splittable; splittable gzip is not available, although I have seen JIRA tasks for it (and for bzip2).
Let's put the reasons to compress against the reasons not to compress.
For:
a) The data is mostly stored and not frequently processed. This is the usual DWH scenario. In this case the space saving can be much more significant than the processing overhead.
b) The compression factor is very high, so we save a lot of IO.
c) Decompression is very fast (as with Snappy), so we get some gain at little cost.
d) The data already arrived compressed.
Against:
a) The compressed data is not splittable. Note that many modern formats are built with block-level compression to enable splitting and other partial processing of the files.
b) The data is created in the cluster and compression takes significant time. Note that compression is usually much more CPU-intensive than decompression.
c) The data has little redundancy, and compression gives little gain.
1) Compressing input files
If the input file is compressed, then the number of bytes read from HDFS is reduced, which means less time to read the data. This time saving is beneficial to job execution performance.
If the input files are compressed, they will be decompressed automatically as they are read by MapReduce, using the filename extension to determine which codec to use. For example, a file ending in .gz is identified as a gzip-compressed file and thus read with GzipCodec.
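Here is a small sketch of the lookup that happens under the hood, using CompressionCodecFactory, which maps filename extensions to codec classes (the path is hypothetical):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionCodecFactory;

    public class CodecLookup {
        public static void main(String[] args) {
            CompressionCodecFactory factory = new CompressionCodecFactory(new Configuration());
            Path input = new Path("/data/input/part-0000.gz");   // hypothetical path
            CompressionCodec codec = factory.getCodec(input);
            // Prints GzipCodec; a null result means the file is read as plain, uncompressed text.
            System.out.println(input + " -> "
                    + (codec == null ? "no codec" : codec.getClass().getSimpleName()));
        }
    }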
2) Compressing output files
Often we need to store the output as history files. If the amount of output per day is large and we need to keep historical results for future use, these accumulated results will take a large amount of HDFS space. However, these history files may not be used very frequently, resulting in wasted HDFS space. Therefore, it makes sense to compress the output before storing it on HDFS.
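A minimal sketch of enabling final-output compression in a driver written against the new (org.apache.hadoop.mapreduce) API, using gzip as the example codec:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class GzippedOutputJob {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "job with gzipped output");
            // Compress the final job output before it lands on HDFS.
            FileOutputFormat.setCompressOutput(job, true);
            FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
            // ... set mapper, reducer, input and output paths as usual ...
        }
    }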
3) Compressing map output
Even if your MapReduce application reads and writes uncompressed data, it may benefit from compressing the intermediate output of the map phase. Since the map output is written to disk and transferred across the network to the reducer nodes, by using a fast compressor such as LZO or Snappy, you can get performance gains simply because the volume of data to transfer is reduced.
2. Common input format
gzip:
gzip is naturally supported by Hadoop. gzip is based on the DEFLATE algorithm, which is a combination of LZ77 and Huffman Coding.
bzip2:
bzip2 is a freely available, patent-free, high-quality data compressor. It typically compresses files to within 10% to 15% of the best available techniques (the PPM family of statistical compressors), whilst being around twice as fast at compression and six times faster at decompression.
LZO:
The LZO compression format is composed of many smaller (~256K) blocks of compressed data, allowing jobs to be split along block boundaries. Moreover, it was designed with speed in mind: it decompresses about twice as fast as gzip, meaning it’s fast enough to keep up with hard drive read speeds. It doesn’t compress quite as well as gzip — expect files that are on the order of 50% larger than their gzipped version. But that is still 20-50% of the size of the files without any compression at all, which means that IO-bound jobs complete the map phase about four times faster.
Snappy:
Snappy is a compression/decompression library. It does not aim for maximum compression, or compatibility with any other compression library; instead, it aims for very high speeds and reasonable compression. For instance, compared to the fastest mode of zlib, Snappy is an order of magnitude faster for most inputs, but the resulting compressed files are anywhere from 20% to 100% bigger. On a single core of a Core i7 processor in 64-bit mode, Snappy compresses at about 250 MB/sec or more and decompresses at about 500 MB/sec or more. Snappy is widely used inside Google, in everything from BigTable and MapReduce to their internal RPC systems.
Some tradeoffs:
All compression algorithms exhibit a space/time trade-off: faster compression and decompression speeds usually come at the expense of smaller space savings. These tools typically give some control over this trade-off at compression time by offering nine different levels: -1 means optimize for speed and -9 means optimize for space.
The different tools have very different compression characteristics. Gzip is a general purpose compressor, and sits in the middle of the space/time trade-off. Bzip2 compresses more effectively than gzip, but is slower. Bzip2’s decompression speed is faster than its compression speed, but it is still slower than the other formats. LZO and Snappy, on the other hand, both optimize for speed and are around an order of magnitude faster than gzip, but compress less effectively. Snappy is also significantly faster than LZO for decompression.
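The trade-off is easy to observe even outside Hadoop. The sketch below uses plain java.util.zip (the DEFLATE implementation behind gzip) to compress the same buffer at different levels; level 1 corresponds to the tools' -1 (speed) and level 9 to -9 (space):

    import java.util.zip.Deflater;

    public class CompressionLevelDemo {
        public static void main(String[] args) {
            byte[] input = new byte[1 << 20];                 // 1 MB of repetitive data
            for (int i = 0; i < input.length; i++) {
                input[i] = (byte) ('a' + (i % 16));
            }
            for (int level : new int[] {Deflater.BEST_SPEED, Deflater.BEST_COMPRESSION}) {
                Deflater deflater = new Deflater(level);
                deflater.setInput(input);
                deflater.finish();
                byte[] out = new byte[input.length];
                long start = System.nanoTime();
                int size = 0;
                while (!deflater.finished()) {
                    size += deflater.deflate(out, size, out.length - size);
                }
                long micros = (System.nanoTime() - start) / 1000;
                deflater.end();
                System.out.printf("level %d: %d compressed bytes in %d us%n", level, size, micros);
            }
        }
    }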
3. Issues about compression and input split
When considering how to compress data that will be processed by MapReduce, it is important to understand whether the compression format supports splitting. Consider an uncompressed file stored in HDFS whose size is 1 GB. With an HDFS block size of 64 MB, the file will be stored as 16 blocks, and a MapReduce job using this file as input will create 16 input splits, each processed independently as input to a separate map task.
Imagine now the file is a gzip-compressed file whose compressed size is 1 GB. As before, HDFS will store the file as 16 blocks. However, creating a split for each block won’t work since it is impossible to start reading at an arbitrary point in the gzip stream and therefore impossible for a map task to read its split independently of the others. The gzip format uses DEFLATE to store the compressed data, and DEFLATE stores data as a series of compressed blocks. The problem is that the start of each block is not distinguished in any way that would allow a reader positioned at an arbitrary point in the stream to advance to the beginning of the next block, thereby synchronizing itself with the stream. For this reason, gzip does not support splitting.
In this case, MapReduce will do the right thing and not try to split the gzipped file, since it knows that the input is gzip-compressed (by looking at the filename extension) and that gzip does not support splitting. This will work, but at the expense of locality: a single map will process the 16 HDFS blocks, most of which will not be local to the map. Also, with fewer maps, the job is less granular, and so may take longer to run.
If the file in our hypothetical example were an LZO file, we would have the same problem since the underlying compression format does not provide a way for a reader to synchronize itself with the stream. However, it is possible to preprocess LZO files using an indexer tool that comes with the Hadoop LZO libraries. The tool builds an index of split points, effectively making them splittable when the appropriate MapReduce input format is used.
A bzip2 file, on the other hand, does provide a synchronization marker between blocks (a 48-bit approximation of pi), so it does support splitting.
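For a quick programmatic check, Hadoop marks codecs that can re-synchronize in the middle of a stream with the SplittableCompressionCodec interface; BZip2Codec implements it, GzipCodec does not. A small sketch with hypothetical file names:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.compress.CompressionCodec;
    import org.apache.hadoop.io.compress.CompressionCodecFactory;
    import org.apache.hadoop.io.compress.SplittableCompressionCodec;

    public class SplittabilityCheck {
        public static void main(String[] args) {
            CompressionCodecFactory factory = new CompressionCodecFactory(new Configuration());
            for (String name : new String[] {"logs.gz", "logs.bz2"}) {   // hypothetical names
                CompressionCodec codec = factory.getCodec(new Path(name));
                boolean splittable = codec instanceof SplittableCompressionCodec;
                System.out.println(name + " -> " + codec.getClass().getSimpleName()
                        + (splittable ? " (splittable)" : " (not splittable)"));
            }
        }
    }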
4. IO-bound and CPU-bound
Storing compressed data in HDFS allows your hardware allocation to go further since compressed data is often 25% of the size of the original data. Furthermore, since MapReduce jobs are nearly always IO-bound, storing compressed data means there is less overall IO to do, meaning jobs run faster. There are two caveats to this, however: some compression formats cannot be split for parallel processing, and others are slow enough at decompression that jobs become CPU-bound, eliminating your gains on IO.
The gzip compression format illustrates the first caveat. Imagine you have a 1.1 GB gzip file, and your cluster has a 128 MB block size. This file will be split into 9 chunks of size approximately 128 MB. In order to process these in parallel in a MapReduce job, a different mapper will be responsible for each chunk. But this means that the second mapper will start on an arbitrary byte about 128MB into the file. The contextful dictionary that gzip uses to decompress input will be empty at this point, which means the gzip decompressor will not be able to correctly interpret the bytes. The upshot is that large gzip files in Hadoop need to be processed by a single mapper, which defeats the purpose of parallelism.
Bzip2 compression format illustrates the second caveat in which jobs become CPU-bound. Bzip2 files compress well and are even splittable, but the decompression algorithm is slow and cannot keep up with the streaming disk reads that are common in Hadoop jobs. While Bzip2 compression has some upside because it conserves storage space, running jobs now spend their time waiting on the CPU to finish decompressing data, which slows them down and offsets the other gains.
5. Summary
Reasons to compress:
a) The data is mostly stored and not frequently processed. This is the usual DWH scenario. In this case the space saving can be much more significant than the processing overhead.
b) The compression factor is very high, so we save a lot of IO.
c) Decompression is very fast (as with Snappy), so we get some gain at little cost.
d) The data already arrived compressed.
Reasons not to compress:
a) The compressed data is not splittable. Note that many modern formats are built with block-level compression to enable splitting and other partial processing of the files.
b) The data is created in the cluster and compression takes significant time. Note that compression is usually much more CPU-intensive than decompression.
c) The data has little redundancy, and compression gives little gain.
