ElasticMapReduce streaming compressed output - hadoop

I'm running streaming jobs with Python scripts for the map and reduce steps; I create the job flow with the boto library.
I'm using gzip input files. How can I produce gzip output files, though?

I use Java to process gzip files and write gzip-compressed output, using the code below:
FileOutputFormat.setCompressOutput(job, true);
FileOutputFormat.setOutputCompressorClass(job, GzipCodec.class);
FileOutputFormat.setOutputPath(job, outputPath);
Hopefully you will find a similar API or configuration option for Python.
You can produce gzip files as your output. Pass '-D mapred.output.compress=true -D mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec' as options to your streaming job.
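Since the job flow is created with boto, a rough sketch of passing those flags through a streaming step's step_args could look like the following. This is only an assumption-laden sketch: the bucket names, script paths, and region are placeholders, and depending on your Hadoop version the -D options may need to precede the other streaming arguments, so check the step arguments boto actually generates.
# Hedged sketch: an EMR streaming step with gzip-compressed output via boto.
# All S3 paths and the region are placeholders; verify where your boto version
# places step_args, since streaming expects -D before the other options.
import boto.emr
from boto.emr.step import StreamingStep

conn = boto.emr.connect_to_region('us-east-1')

step = StreamingStep(
    name='gzip output example',
    mapper='s3://mybucket/scripts/mapper.py',
    reducer='s3://mybucket/scripts/reducer.py',
    input='s3://mybucket/input/',
    output='s3://mybucket/output/',
    step_args=[
        '-D', 'mapred.output.compress=true',
        '-D', 'mapred.output.compression.codec='
              'org.apache.hadoop.io.compress.GzipCodec',
    ])

jobflow_id = conn.run_jobflow(
    name='streaming job with gzip output',
    log_uri='s3://mybucket/logs/',
    steps=[step])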

Related

How can I get the raw content of a file stored on HDFS with gzip compression?

Is there any way to read the raw content of a file stored on HDFS byte by byte?
When I submit a streaming job with an -input parameter that points to a .gz file (like -input hdfs://host:port/path/to/gzipped/file.gz), my task receives decompressed input line by line, which is NOT what I want.
You can initialize the FileSystem with the respective Hadoop configuration:
FileSystem.get(conf);
It has an open method which should, in principle, allow you to read the raw data.
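If you would rather not go through the Java API, one hedged alternative from Python is to stream the file through hadoop fs -cat, which copies the raw bytes and, unlike -text, does not decompress them; the HDFS path below is just the placeholder from the question.
# Hedged sketch: pull the raw, still-compressed bytes of an HDFS file from Python
# by shelling out to the hadoop CLI; the HDFS path is a placeholder.
import subprocess

proc = subprocess.Popen(
    ['hadoop', 'fs', '-cat', 'hdfs://host:port/path/to/gzipped/file.gz'],
    stdout=subprocess.PIPE)

with open('file.gz', 'wb') as local:
    for chunk in iter(lambda: proc.stdout.read(64 * 1024), b''):
        local.write(chunk)  # raw gzip bytes, byte for byte

proc.wait()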

Concatenating multiple text files into one very large file in HDFS

I have multiple text files. Their total size exceeds the largest disk size available to me (~1.5 TB).
A Spark program reads a single input text file from HDFS, so I need to combine those files into one. (I cannot rewrite the program code; I am given only the *.jar file for execution.)
Does HDFS have such a capability? How can I achieve this?
What I understood from your question is that you want to concatenate multiple files into one. Here is a solution which may not be the most efficient way of doing it, but it works. Suppose you have two files, file1 and file2, and you want to combine them into ConcatenatedFile. Here is the command for that:
hadoop fs -cat /hadoop/path/to/file/file1.txt /hadoop/path/to/file/file2.txt | hadoop fs -put - /hadoop/path/to/file/Concatenate_file_Folder/ConcatenateFile.txt
Hope this helps.
HDFS by itself does not provide such a capability. All out-of-the-box features (like hdfs dfs -text * with pipes, or FileUtil's copy methods) transfer all the data through your client machine.
In my experience we always wrote our own MapReduce jobs to merge many small files in HDFS in a distributed way.
So you have two solutions:
1. Write your own simple MapReduce/Spark job to combine the text files in your format (a PySpark sketch follows this list).
2. Find an already implemented solution for this kind of purpose.
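For solution #1, a minimal PySpark sketch might look like the following. The HDFS paths are placeholders, and coalesce(1) forces everything through a single task, so it trades speed for getting a single output part file while keeping the work on the cluster rather than on a client machine.
# Hedged sketch: merge many text files into a single part file with Spark.
# Paths are placeholders; the result lands in .../merged/part-00000.
from pyspark import SparkContext

sc = SparkContext(appName='merge-text-files')
lines = sc.textFile('hdfs:///path/to/inputFiles/*')
lines.coalesce(1).saveAsTextFile('hdfs:///path/to/merged')
sc.stop()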
About solution #2: there is a simple project, FileCrush, for combining text or sequence files in HDFS. It might suit your needs; check it out.
Example of usage:
hadoop jar filecrush-2.0-SNAPSHOT.jar crush.Crush -Ddfs.block.size=134217728 \
--input-format=text \
--output-format=text \
--compress=none \
/input/dir /output/dir 20161228161647
I had problems running it without these options (especially -Ddfs.block.size and the output file date prefix 20161228161647), so make sure you run it with them.
You can do a pig job:
A = LOAD '/path/to/inputFiles' as (SCHEMA);
STORE A into '/path/to/outputFile';
Doing an hdfs cat and then putting the result back into HDFS means all of this data is processed on the client node and will degrade your network.

Pig script to compress and decompress the hdfs data in bzip2

How can I compress HDFS data to bzip2 using Pig such that, on decompression, it gives the same directory structure it had initially? I am new to Pig.
I tried to compress with bzip2, but it generated many files due to the many mappers being spawned, and hence reverting back to plain text files (the initial form) in the same directory structure becomes difficult.
This is just like how in Unix, compressing with bzip2 via a tarball and then decompressing the tarball gives back exactly the same data and folder structure it had initially.
e.g. Compression: tar -cjf compress_folder.tar.bz2 compress_folder/
Decompression: tar -xjf compress_folder.tar.bz2
will give exactly the same directory structure.
Approach 1:
You can try running a single reducer so that only one file is stored on HDFS, but the trade-off here is performance.
set default_parallel 1;
To compress the data, set these parameters in the Pig script, if you have not already tried it this way:
SET output.compression.enabled true;
SET mapred.output.compression.codec 'org.apache.hadoop.io.compress.BZip2Codec';
Then use JsonStorage while storing the file:
STORE file INTO '/user/hduser/data/usercount' USING JsonStorage();
When you eventually want to read the data back, use TextLoader:
data = LOAD '/user/hduser/data/usercount/' USING TextLoader;
Approach 2:
FileCrush, a file merge utility available on GitHub.

Convert multiple .deflate files into one gzip file in ubuntu

I ran a Hadoop job which generated multiple .deflate files, and these files are now stored on S3. So I cannot run the hadoop fs -text /somepath command, since it takes an HDFS path. Now I want to convert the multiple .deflate files stored on S3 into one gzip file.
If you make gzip files instead, using the GzipCodec, you can simply concatenate them to make one large gzip file.
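To illustrate why that works (a gzip file may contain several members back to back, and decompressors read them as one stream), here is a small Python sketch with throwaway file names:
# Small sketch: two independent gzip files concatenated byte-for-byte still
# decompress as one continuous stream. File names are throwaway examples.
import gzip
import shutil

for name, payload in (('part1.gz', b'first part\n'), ('part2.gz', b'second part\n')):
    with gzip.open(name, 'wb') as f:
        f.write(payload)

with open('combined.gz', 'wb') as out:
    for name in ('part1.gz', 'part2.gz'):
        with open(name, 'rb') as f:
            shutil.copyfileobj(f, out)

with gzip.open('combined.gz', 'rb') as f:
    print(f.read())  # b'first part\nsecond part\n'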
You can wrap a deflate stream with a gzip header and trailer, as described in RFC 1952: a fixed 10-byte header, and an 8-byte trailer that is computed from the uncompressed data. So you will need to decompress each .deflate stream in order to compute its CRC-32 and uncompressed length to put in the trailer.
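A minimal Python sketch of that wrapping, assuming the .deflate files hold raw deflate data as described above (if they are zlib-wrapped, as Hadoop's DefaultCodec produces, you would first strip the 2-byte zlib header and 4-byte Adler-32 trailer and adjust the decompression call):
# Hedged sketch: wrap a raw-deflate file in a gzip (RFC 1952) container without
# recompressing. File paths are placeholders.
import struct
import zlib

def deflate_to_gzip(deflate_path, gzip_path):
    with open(deflate_path, 'rb') as f:
        raw = f.read()

    # Decompress only to compute the trailer fields (CRC-32 and uncompressed size).
    data = zlib.decompress(raw, -zlib.MAX_WBITS)   # raw deflate, no zlib wrapper
    crc = zlib.crc32(data) & 0xFFFFFFFF
    isize = len(data) & 0xFFFFFFFF                 # ISIZE is the length modulo 2**32

    with open(gzip_path, 'wb') as out:
        # Fixed 10-byte header: magic, CM=deflate, no flags, mtime 0, XFL 0, OS=unknown.
        out.write(b'\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\xff')
        out.write(raw)                             # reuse the compressed bytes as-is
        out.write(struct.pack('<II', crc, isize))  # 8-byte trailer, little-endian

deflate_to_gzip('part-00000.deflate', 'part-00000.gz')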

Is there a combine Input format for hadoop streaming?

I have many small input files, and I want to combine them using an input format like CombineFileInputFormat to launch fewer mapper tasks. I know I can do this with the Java API, but I don't know whether there is anything in the streaming jar that supports this while I'm using Hadoop streaming.
Hadoop streaming uses TextInputFormat by default but any other input format can be used, including CombineFileInputFormat. You can change the input format from the command line, using the option -inputformat. Be sure to use the old API and implement org.apache.hadoop.mapred.lib.CombineFileInputFormat. The new API isn't supported yet.
$HADOOP_HOME/bin/hadoop jar \
$HADOOP_HOME/hadoop-streaming.jar \
-inputformat foo.bar.MyCombineFileInputFormat \
-Dmapred.max.split.size=524288000 \
-Dstream.map.input.ignoreKey=true \
...
Example of CombineFileInputFormat
