Hadoop streaming with single mapper - hadoop

I am using Hadoop streaming, and I start the job as follows:
../hadoop/bin/hadoop jar ../hadoop/contrib/streaming/hadoop-streaming-1.0.4.jar \
-mapper ../tests/mapper.php \
-reducer ../tests/reducer.php \
-input data \
-output out
"data" is 2.5 GB txt file.
However, in ps axf I can see only one mapper. I tried -Dmapred.map.tasks=10, but the result is the same: a single mapper.
How can I make Hadoop split my input file and start several mapper processes?

To elaborate on my comments: if your file isn't in HDFS and you're running with the local job runner, then the file will only be processed by a single mapper.
A large file is typically processed by several mappers because it is stored in HDFS as several blocks.
A 2.5 GB file with a block size of 512 MB will be split into ~5 blocks in HDFS. If the file is splittable (plain text, or using a splittable compression codec such as Snappy, but not gzip), then Hadoop will launch one mapper per block to process the file.
Hope this helps explain what you're seeing.
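To check how the file is laid out once it actually lives in HDFS, here is a minimal sketch (the path is hypothetical, not from the original post) that prints how many blocks the file occupies; for a splittable input, that is roughly the number of mappers you should see:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockCount {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();          // picks up core-site.xml / hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);
    Path path = new Path("/user/me/data");             // hypothetical HDFS path to the input file
    FileStatus status = fs.getFileStatus(path);
    BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
    System.out.println("File size:  " + status.getLen());
    System.out.println("Block size: " + status.getBlockSize());
    System.out.println("Blocks:     " + blocks.length);
  }
}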

Related

How to make hadoop snappy output file the same format as those generated by Spark

We are using Spark, and up until now the outputs have been PSV files. Now, in order to save space, we'd like to compress the output. To do so, we will change the code to save the JavaRDD using the SnappyCodec, like this:
objectRDD.saveAsTextFile(rddOutputFolder, org.apache.hadoop.io.compress.SnappyCodec.class);
We will then use Sqoop to import the output into a database. The whole process works fine.
We'd also like to compress the previously generated PSV files in HDFS into Snappy format. This is the command we tried:
hadoop jar /usr/hdp/2.6.5.106-2/hadoop-mapreduce/hadoop-streaming-2.7.3.2.6.5.106-2.jar \
-Dmapred.output.compress=true -Dmapred.compress.map.output=true \
-Dmapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec \
-Dmapred.reduce.tasks=0 \
-input input-path \
-output output-path
The command works fine, but the issue is that Sqoop can't parse the Snappy output files.
When we use a command like "hdfs dfs -text hdfs-file-name" to view the generated files, the output looks like the following, with an "index"-like field added to each line:
0 2019-05-02|AMRS||5072||||3540||MMPT|0|
41 2019-05-02|AMRS||5538|HK|51218||1000||Dummy|45276|
118 2019-05-02|AMRS||5448|US|51218|TRADING|2282|HFT|NCR|45119|
I.e., an extra value like "0 ", "41 ", or "118 " is added at the beginning of each line. Note that the .snappy files generated by Spark don't have this "extra field".
Any idea how to prevent this extra field being inserted?
Thanks a lot!
These are not indexes but rather keys generated by TextInputFormat, as explained here:
The class you supply for the input format should return key/value pairs of Text class. If you do not specify an input format class, the TextInputFormat is used as the default. Since the TextInputFormat returns keys of LongWritable class, which are actually not part of the input data, the keys will be discarded; only the values will be piped to the streaming mapper.
And since you do not have any mapper defined in your job, those key/value pairs are written straight out to the file system. So, as the above excerpt hints, you need some sort of mapper that discards the keys. A quick-and-dirty fix is to use something already available to serve as a pass-through, like the shell cat command:
hadoop jar /usr/hdp/2.6.5.106-2/hadoop-mapreduce/hadoop-streaming-2.7.3.2.6.5.106-2.jar \
-Dmapred.output.compress=true -Dmapred.compress.map.output=true \
-Dmapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec \
-mapper /bin/cat \
-Dmapred.reduce.tasks=0 \
-input input-path \
-output output-path
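If you want to double-check that the offsets are gone, here is a minimal sketch (the part file name is hypothetical, and the native Snappy libraries must be available on the client) that decompresses one output part with the codec resolved from its extension and prints the lines:
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;

public class ReadSnappyPart {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path part = new Path("output-path/part-00000.snappy");   // hypothetical part file name
    FileSystem fs = part.getFileSystem(conf);
    // Resolve the codec from the file extension (.snappy -> SnappyCodec);
    // getCodec returns null if the extension is not recognized.
    CompressionCodec codec = new CompressionCodecFactory(conf).getCodec(part);
    try (BufferedReader reader = new BufferedReader(new InputStreamReader(
            codec.createInputStream(fs.open(part)), StandardCharsets.UTF_8))) {
      String line;
      while ((line = reader.readLine()) != null) {
        System.out.println(line);   // should be plain PSV rows, with no leading offset
      }
    }
  }
}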

How can I get the raw content of a file stored on HDFS with gzip compression?

Is there any way to read the raw content of a file stored on Hadoop HDFS byte by byte?
Typically, when I submit a streaming job with an -input param that points to a .gz file (like -input hdfs://host:port/path/to/gzipped/file.gz), my task receives the decompressed input line by line, which is NOT what I want.
You can initialize the FileSystem with the respective Hadoop configuration:
FileSystem.get(conf);
It has an open method which should, in principle, allow you to read the raw data.
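For example, here is a minimal sketch (the path placeholders are taken from the question) that opens the .gz file through FileSystem and streams out the raw, still-compressed bytes without applying any codec:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadRawBytes {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    Path path = new Path("hdfs://host:port/path/to/gzipped/file.gz");
    FileSystem fs = path.getFileSystem(conf);
    byte[] buffer = new byte[64 * 1024];
    try (FSDataInputStream in = fs.open(path)) {        // raw bytes, no decompression
      int read;
      while ((read = in.read(buffer)) > 0) {
        System.out.write(buffer, 0, read);              // output is still gzip-compressed
      }
    }
    System.out.flush();
  }
}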

Concatenating multiple text files into one very large file in HDFS

I have multiple text files.
Their total size exceeds the largest disk size available to me (~1.5 TB).
A Spark program reads a single input text file from HDFS, so I need to combine those files into one. (I cannot rewrite the program code; I am given only the *.jar file for execution.)
Does HDFS have such a capability? How can I achieve this?
What I understood from your question is that you want to concatenate multiple files into one. Here is a solution which might not be the most efficient way of doing it, but it works. Suppose you have two files, file1 and file2, and you want to get a combined file called ConcatenatedFile. Here is the script for that:
hadoop fs -cat /hadoop/path/to/file/file1.txt /hadoop/path/to/file/file2.txt | hadoop fs -put - /hadoop/path/to/file/Concatenate_file_Folder/ConcatenateFile.txt
Hope this helps.
HDFS by itself does not provide such a capability. All out-of-the-box approaches (like hdfs dfs -text * with pipes, or FileUtil's copy methods) use your client machine to transfer all the data.
In my experience, we always used our own MapReduce jobs to merge many small files in HDFS in a distributed way.
So you have two solutions:
1. Write your own simple MapReduce/Spark job to combine text files with your format (a minimal sketch follows this list).
2. Find an already implemented solution for this kind of purpose.
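For solution #1, here is a minimal sketch (class and job names are made up here, not from the original answer) of such a merge job: an identity-style MapReduce job that pushes every input line through a single reducer, so the output directory contains one part file:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MergeTextFiles {

  // Emit every line with a NullWritable key; TextOutputFormat skips null keys,
  // so the output contains only the original lines.
  public static class PassThroughMapper
      extends Mapper<LongWritable, Text, NullWritable, Text> {
    @Override
    protected void map(LongWritable offset, Text line, Context ctx)
        throws IOException, InterruptedException {
      ctx.write(NullWritable.get(), line);
    }
  }

  // A single reducer funnels all lines into one output file.
  public static class PassThroughReducer
      extends Reducer<NullWritable, Text, NullWritable, Text> {
    @Override
    protected void reduce(NullWritable key, Iterable<Text> lines, Context ctx)
        throws IOException, InterruptedException {
      for (Text line : lines) {
        ctx.write(NullWritable.get(), line);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "merge-text-files");
    job.setJarByClass(MergeTextFiles.class);
    job.setMapperClass(PassThroughMapper.class);
    job.setReducerClass(PassThroughReducer.class);
    job.setNumReduceTasks(1);                       // one reducer -> one output file
    job.setOutputKeyClass(NullWritable.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
Note that the merge happens on the cluster, so nothing is funneled through the client machine; the trade-off is a single reducer, which becomes the bottleneck for very large totals.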
About solution #2: there is a simple project, FileCrush, for combining text or sequence files in HDFS. It might be suitable for you; check it out.
Example of usage:
hadoop jar filecrush-2.0-SNAPSHOT.jar crush.Crush -Ddfs.block.size=134217728 \
--input-format=text \
--output-format=text \
--compress=none \
/input/dir /output/dir 20161228161647
I had problems running it without these options (especially -Ddfs.block.size and the output file date prefix 20161228161647), so make sure you run it properly.
You can do a Pig job:
A = LOAD '/path/to/inputFiles' as (SCHEMA);
STORE A into '/path/to/outputFile';
Doing an hdfs cat and then putting the result back into HDFS means all this data is processed on the client node, which will degrade your network.

Input Split always 2 in hadoop cluster HDinsight

I deployed a 5-node Hadoop MR cluster in Azure. I am using a bash script to perform chaining, and I am using the Hadoop streaming API, as my implementation is in Python.
My input data is always in one file, but the size of that file ranges from 1 MB to 2 GB.
I want to create multiple mappers to handle this. I tried running with this command:
yarn jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-streaming.jar -D mapred.max.split.size=5000000 -files map2.py,red2.py -mapper map2.py -reducer red2.py -input wasb:///example/${ipfileA} -output wasb:///example/${opfile}
Here I have set my maximum split size to 5 MB.
I also tried setting my maximum block size to 5 MB.
However, when I run this, the number of input splits is always 2:
mapreduce.JobSubmitter: number of splits:2
And the number of map tasks launched is also always 2.
I want the number of map tasks to be dynamically set based on the size of the data. What should I do?
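For reference, the split size that FileInputFormat derives from the configured minimum/maximum split sizes and the block size follows the formula sketched below (a standalone illustration of the computeSplitSize logic; it is not HDInsight-specific and does not by itself explain the behavior above):
public class SplitSizeFormula {
  // Mirrors FileInputFormat.computeSplitSize(): the effective split size is the
  // block size clamped between the configured minimum and maximum split sizes.
  static long computeSplitSize(long blockSize, long minSize, long maxSize) {
    return Math.max(minSize, Math.min(maxSize, blockSize));
  }

  public static void main(String[] args) {
    long blockSize = 128L * 1024 * 1024;   // e.g. a 128 MB block
    long minSize = 1;                      // default minimum split size
    long maxSize = 5L * 1000 * 1000;       // ~5 MB maximum, as in the command above
    System.out.println(computeSplitSize(blockSize, minSize, maxSize)); // 5000000
  }
}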

Hadoop seq directory with index, data and bloom files -- how to read?

New to Hadoop... I have a series of HDFS directories with the naming convention filename.seq. Each directory contains an index, data, and bloom file. These have binary content and appear to be SequenceFiles (SEQ starts the header). I want to know the structure/schema. Everything I read refers to reading an individual sequence file, so I'm not sure how to read these or how they were produced. Thanks.
Update: I've tried the recommended tools for streaming and outputting text from the files; none worked:
hadoop fs -text /path/to/hdfs-filename.seq/data | head
hadoop jar /usr/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-2.0.0-mr1-cdh4.1.2.jar \
-input /path/to/hdfs-filename.seq/data \
-output /tmp/outputfile \
-mapper "/bin/cat" \
-reducer "/bin/wc -l" \
-inputformat SequenceFileAsTextInputFormat
Error was:
ERROR streaming.StreamJob: Job not successful. Error: NA
The SEQ header confirms that it is a Hadoop sequence file. (One thing that I have never seen is the bloom file that you mentioned.)
The structure/schema of a typical sequence file is:
Header (version, key class, value class, compression, compression codec, metadata)
Records, each consisting of: record length, key length, key, value
A sync marker every few hundred bytes or so
For more details, see the description here, as well as Sequence file reader and How to read hadoop sequential file?
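A minimal sketch of reading the key/value pairs with SequenceFile.Reader (the path is taken from the question; this assumes the data file is an ordinary sequence file and its keys/values are Writable):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.util.ReflectionUtils;

public class DumpSeqFile {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path path = new Path("/path/to/hdfs-filename.seq/data");
    SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
    try {
      // The header records the key and value classes, so you can discover the schema.
      System.out.println("key class:   " + reader.getKeyClassName());
      System.out.println("value class: " + reader.getValueClassName());
      Writable key = (Writable) ReflectionUtils.newInstance(reader.getKeyClass(), conf);
      Writable value = (Writable) ReflectionUtils.newInstance(reader.getValueClass(), conf);
      while (reader.next(key, value)) {
        System.out.println(key + "\t" + value);
      }
    } finally {
      reader.close();
    }
  }
}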
