I've written a Hadoop MapReduce job. When I run it locally, I notice that if I don't specify any reduce tasks, some temporary files are written to the output directory. If I do specify reducers, no temporary files are written. Is this normal behavior? I would expect to see temporary files in either case; otherwise it would mean the mapper is doing everything in memory and handing it to the reducer in memory, which strikes me as implausible.
Any insights into how/when/where the mapper writes intermediate output to the file system would be appreciated.
Thanks
Map tasks write their output to the local disk, not to HDFS. Map output is intermediate output: it's processed by reduce tasks to produce the final output, and once the job is complete the map output can be thrown away. So storing it in HDFS, with replication, would be overkill.
But if we set the number of reducers to 0, the map output is stored on HDFS as the final output. There is no reduce phase, so the output of the mappers is the output of the whole job.
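For what it's worth, here is a minimal map-only driver sketch of the zero-reducer case (the names MapOnlyJob and MyMapper are just illustrative placeholders, not anything from the question):

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class MapOnlyJob {

        // Trivial pass-through mapper, just for illustration.
        public static class MyMapper extends Mapper<LongWritable, Text, LongWritable, Text> {
            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                context.write(key, value);
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "map-only example");
            job.setJarByClass(MapOnlyJob.class);
            job.setMapperClass(MyMapper.class);

            // Zero reduce tasks: there is no shuffle/sort, and the map output
            // is written straight to the job output path as part-m-* files.
            job.setNumReduceTasks(0);

            job.setOutputKeyClass(LongWritable.class);
            job.setOutputValueClass(Text.class);

            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }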
Additionally, here is how to look into the intermediate files even if a reducer is specified.
Related
What is the order of execution of the actions/components in MapReduce:
Mapper --> Combiner --> Shuffling/Sorting --> Partitioner --> Reducer
Is the order the same?
The process is almost correct, but let's understand it in a bit more depth.
The map phase starts first, running the map function over the input.
As a map task processes its input, it sorts the output and spills it to the local file system; this is the sort step. (Note that the partition each record belongs to is actually decided on the map side, as the output is collected, before the sort and the optional combiner.)
That sorted map output is then copied over to the reducers, which is the shuffle phase.
Since each mapper's output is already sorted, the node running the reducer performs a merge sort of the incoming data by key.
Once the merge is done, the data is ready to enter the reduce phase. How many reducers run depends on your configuration:
we can set the number of reducers to zero as well. In that case all of the map output is written directly to the output path, either on the local file system or on HDFS.
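To make the partitioner's role concrete, here is a minimal custom partitioner sketch (essentially what Hadoop's default HashPartitioner does); the class name HashKeyPartitioner is just an illustrative placeholder:

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Partitioner;

    public class HashKeyPartitioner extends Partitioner<Text, IntWritable> {
        @Override
        public int getPartition(Text key, IntWritable value, int numReduceTasks) {
            // Same rule as the default HashPartitioner: hash of the key,
            // masked to be non-negative, modulo the number of reduce tasks.
            return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
        }
    }

It would be plugged in with job.setPartitionerClass(HashKeyPartitioner.class); an optional combiner is set with job.setCombinerClass(...).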
Hope it helps!
So I'm writing an MR job to read hundreds of files from an input folder. Since all the files are compressed, instead of using the default TextInputFormat I was using the WholeFileReadFormat from an online code source.
So my question is: does the Mapper process multiple input files in sequence? I mean, if I have three files A, B and C, and I'm reading the whole file content as the map input value, will MapReduce process the files in the order A -> B -> C, meaning that only after finishing A will the Mapper start to process B?
Actually, I'm kind of confused about the concepts of map job and map task. In my understanding a map job is the same thing as a Mapper, and a map job contains several map tasks; in my case, each map task reads in a single file. But what I don't understand is that I think map tasks are executed in parallel, so all the input files should be processed in parallel, which turns out to be a paradox...
Can anyone please explain it to me?
I understand that the MapReduce output is stored in files named like part-r-* for reducers and part-m-* for mappers.
When I run a MapReduce job, sometimes I get the whole output in a single file (around 150 MB), and sometimes, for almost the same data size, I get two output files (one 100 MB and the other 50 MB). This seems very random to me. I can't find any reason for it.
I want to know how it is decided whether the data goes into a single output file or multiple ones, and whether there is any way to control it.
Thanks
Unlike what is stated in the answer by Jijo here, the number of files depends on the number of Reducers/Mappers.
It has nothing to do with the number of physical nodes in the cluster.
The rule is: one part-r-* file for one Reducer. The number of Reducers is set by job.setNumReduceTasks();
If there are no Reducers in your job, then there is one part-m-* file per Mapper. There is one Mapper per InputSplit (and usually, unless you use a custom InputFormat implementation, one InputSplit per HDFS block of your input data).
The number of output files part-m-* and part-r-* is set according to the number of map tasks and the number of reduce tasks respectively.
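So if you want a single output file, the usual knob is the number of reduce tasks in the driver. A minimal fragment, assuming an org.apache.hadoop.mapreduce.Job instance named job has already been set up as in a normal driver:

    // Each reduce task writes exactly one part-r-NNNNN file, so one reducer
    // yields one output file (at the cost of reduce-side parallelism),
    // while N reducers yield N files.
    job.setNumReduceTasks(1);

On the command-line/streaming side the same setting is exposed as -D mapred.reduce.tasks=N (mapreduce.job.reduces in newer releases).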
Here are the details:
The input files are in the HDFS path /user/rd/input, and the HDFS output path is /user/rd/output.
In the input path there are 20,000 files, from part-00000 to part-19999, and each file is about 64 MB.
What I want to do is write a Hadoop streaming job to merge these 20,000 files into 10,000 files.
Is there a way to merge these 20,000 files into 10,000 files using a Hadoop streaming job? Or, in other words, is there a way to control the number of Hadoop streaming output files?
Thanks in advance!
It looks like right now you have a map-only streaming job. The behavior with a map-only job is to have one output file per map task. There isn't much you can do about changing this behavior.
You can exploit the way MapReduce works by adding a reduce phase with 10,000 reducers. Each reducer will then output one file, so you are left with 10,000 files. Note that your data records will be "scattered" across the 10,000 files; it won't simply be pairs of input files concatenated. To do this, use the -D mapred.reduce.tasks=10000 flag in your command line args.
This is probably the default behavior, but you can also specify the identity reducer as your reducer. This doesn't do anything other than pass on the record, which is what I think you want here. Use this flag to do this: -reducer org.apache.hadoop.mapred.lib.IdentityReducer
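Putting those two flags together, the invocation would look roughly like this (a sketch only: the exact location of the streaming jar varies by Hadoop version, and /bin/cat is used here as a simple pass-through mapper):

    hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
        -D mapred.reduce.tasks=10000 \
        -input /user/rd/input \
        -output /user/rd/output \
        -mapper /bin/cat \
        -reducer org.apache.hadoop.mapred.lib.IdentityReducer

Note that the generic -D option has to come before the streaming-specific options such as -input and -output.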
Although I use Hadoop frequently on my Ubuntu machine, I have never thought about the _SUCCESS and part-r-00000 files. The output always resides in the part-r-00000 file, but what is the use of the _SUCCESS file? Why does the output file have the name part-r-00000? Is there any significance/nomenclature to it, or is it just randomly defined?
See http://www.cloudera.com/blog/2010/08/what%E2%80%99s-new-in-apache-hadoop-0-21/
On the successful completion of a job, the MapReduce runtime creates a _SUCCESS file in the output directory. This may be useful for applications that need to see if a result set is complete just by inspecting HDFS. (MAPREDUCE-947)
This would typically be used by job scheduling systems (such as OOZIE), to denote that follow-on processing on the contents of this directory can commence as all the data has been output.
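For example, a downstream application can check for the marker with the HDFS FileSystem API before it starts consuming the output. A small sketch (the output path /user/rd/output is just an illustrative placeholder):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SuccessCheck {
        public static void main(String[] args) throws Exception {
            // Hypothetical job output directory; substitute your own.
            Path outputDir = new Path("/user/rd/output");
            FileSystem fs = outputDir.getFileSystem(new Configuration());

            // The MapReduce runtime writes an empty _SUCCESS marker into the
            // output directory once the job has completed successfully.
            if (fs.exists(new Path(outputDir, "_SUCCESS"))) {
                System.out.println("Output is complete; follow-on processing can start.");
            } else {
                System.out.println("No _SUCCESS marker; output is missing or incomplete.");
            }
        }
    }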
Update (in response to comment)
The output files are by default named part-x-yyyyy where:
x is either 'm' or 'r', depending on whether the file was produced by a map task (in a map-only job) or by a reduce task
yyyyy is the mapper or reducer task number (zero based)
So a job which has 32 reducers will have files named part-r-00000 to part-r-00031, one for each reducer task.