Hadoop: get actual number of mappers

In the map phase of my program, I need to know the total number of mappers that are created. This will help me in the key-creation process of the map (I want to emit as many key-value pairs for each object as there are mappers).
I know that setting the number of mappers is just a hint, but is there a way to get the actual number of mappers?
I tried the following in the configure method of my Mapper:
public void configure(JobConf conf) {
    // old-API (org.apache.hadoop.mapred) hook, called once per map task
    System.out.println("map tasks: " + conf.get("mapred.map.tasks"));
    System.out.println("tipid: " + conf.get("mapred.tip.id"));
    System.out.println("taskpartition: " + conf.get("mapred.task.partition"));
}
But I get the results:
map tasks: 1
tipid: task_local1204340194_0001_m_000000
taskpartition: 0
map tasks: 1
tipid: task_local1204340194_0001_m_000001
taskpartition: 1
which means (if I read it right) that there are two map tasks, not just one as printed (which is natural, since I have two small input files). Shouldn't the number after "map tasks" be 2?
For now, I just count the number of files in the input folder, but this is not a good solution, since a file could be larger than the block size and result in more than one input split, and hence more than one mapper. Any suggestions?

Finally, it seems that conf.get("mapred.map.tasks") DOES work after all, when I build an executable jar file and run my program on the cluster or locally. Now the output of "map tasks" is correct.
It did not work only when running my MapReduce program locally on Hadoop from the Eclipse plugin. Maybe it is an issue with the Eclipse plugin.
I hope this will help someone else having the same issue. Thank you for your answers!
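For reference, here is a minimal sketch of the same lookup using the newer org.apache.hadoop.mapreduce API, assuming Hadoop 2.x property names ("mapreduce.job.maps" is the newer name of the deprecated "mapred.map.tasks"; the class name CountingMapper is just a placeholder):

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class CountingMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // total number of map tasks the framework planned for this job
        System.out.println("map tasks: "
                + context.getConfiguration().get("mapreduce.job.maps"));
        // index of this particular map task within the job
        System.out.println("task partition: "
                + context.getTaskAttemptID().getTaskID().getId());
    }
}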

I don't think there is an easy way to do this. I've implemented my own InputFormat class; if you do that, you can add a method that counts the number of InputSplits, which you can call in the process that starts the job. If you put that number into a Configuration setting, you can read it in your mapper.
By the way, the number of input files is not always the number of mappers, as large files can be split.
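As a rough sketch of that idea (new mapreduce API, Hadoop 2.x; the property name "example.num.map.tasks" and the class name are made up for illustration), the driver can ask the input format for its splits before submitting and publish the count in the Configuration:

import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class SplitCountDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "split-count-example");
        job.setJarByClass(SplitCountDriver.class);
        job.setInputFormatClass(TextInputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));

        // Compute the splits the same way the framework will, and publish the
        // count so every map task can read it from its Configuration.
        List<InputSplit> splits = new TextInputFormat().getSplits(job);
        job.getConfiguration().setInt("example.num.map.tasks", splits.size());

        // ... set mapper, output classes and output path, then submit as usual ...
        // In the mapper: context.getConfiguration().getInt("example.num.map.tasks", -1)
    }
}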

Related

How does MapReduce process multiple input files?

So I'm writing an MR job to read hundreds of files from an input folder. Since all the files are compressed, instead of using the default TextInputFormat I am using a WholeFileReadFormat from an online code source.
So my question is: does the Mapper process multiple input files in sequence? I mean, if I have three files A, B and C, and I'm reading the whole file content as the map input value, will MapReduce process the files in the order A -> B -> C, meaning that the Mapper starts processing B only after it is done with A?
Actually, I'm somewhat confused about the concepts of map job and map task. In my understanding, the map job is the same thing as the Mapper, and a map job contains several map tasks; in my case each map task reads in a single file. What I don't understand is that map tasks are supposed to execute in parallel, so all the input files should be processed in parallel, which seems like a contradiction...
Can anyone please explain this to me?
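For context, a typical "whole file" input format (of which the WholeFileReadFormat mentioned above is presumably a variant) boils down to marking files as non-splittable, so each file becomes exactly one InputSplit and therefore exactly one map task. A minimal sketch with the new API; the record reader body is omitted here:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class WholeFileInputFormat extends FileInputFormat<NullWritable, BytesWritable> {

    @Override
    protected boolean isSplitable(JobContext context, Path file) {
        // never split a file, even if it spans several HDFS blocks:
        // one file -> one InputSplit -> one map task
        return false;
    }

    @Override
    public RecordReader<NullWritable, BytesWritable> createRecordReader(
            InputSplit split, TaskAttemptContext context) {
        // a real implementation returns a reader that emits the entire file
        // content as a single value; omitted in this sketch
        throw new UnsupportedOperationException("record reader omitted in this sketch");
    }
}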

How output files (part-m-0001/part-r-0001) are created in MapReduce

I understand that the MapReduce output is stored in files named part-r-* for reducers and part-m-* for mappers.
When I run a MapReduce job, sometimes I get the whole output in a single file (around 150 MB), and sometimes, for almost the same data size, I get two output files (one of 100 MB and one of 50 MB). This seems random to me, and I can't find a reason for it.
I want to know how it is decided whether the data goes into a single output file or multiple output files, and whether there is any way to control this.
Thanks
Unlike what is stated in the answer by Jijo here, the number of files depends on the number of Reducers/Mappers.
It has nothing to do with the number of physical nodes in the cluster.
The rule is: one part-r-* file per Reducer. The number of Reducers is set by job.setNumReduceTasks().
If there are no Reducers in your job, then there is one part-m-* file per Mapper. There is one Mapper per InputSplit (usually, unless you use a custom InputFormat implementation, there is one InputSplit per HDFS block of your input data).
The number of output files part-m-* and part-r-* is set according to the number of map tasks and the number of reduce tasks respectively.
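A minimal driver sketch of that rule (class and job names are placeholders): the reducer count directly determines how many part-r-* files appear.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class OutputFileCountExample {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "output-file-count-example");

        // One part-r-* file per reducer: with 4 reduce tasks the output
        // directory will contain part-r-00000 .. part-r-00003.
        job.setNumReduceTasks(4);

        // With job.setNumReduceTasks(0) the job becomes map-only and you get
        // one part-m-* file per map task (i.e. per InputSplit) instead.

        // ... mapper, reducer, key/value classes and input/output paths as usual ...
    }
}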

Why does my Hadoop job get map task num = 1 and generate 300+ result files?

I have a Hadoop job like this:
The MR job has only a map phase, no reduce, so I set job.setNumReduceTasks(0).
There are 300+ input files.
When I run the job, I can see only 1 map task running, and it takes about an hour to finish.
When I check the result, I see 300+ result files in the output folder.
Is there anything wrong with this, or is it the expected behaviour?
I really expected the number of map tasks to equal the number of input files (not 1). I also don't understand why the number of output files is the same as the number of input files.
The Hadoop job is submitted from Oozie.
Thank you very much for your kind help.
Xinsong
When you set the number of reducers to 0, the output that is generated corresponds to the output of the map tasks alone.
A large number of files may be generated in the output, corresponding to the splits of your data. Each split of your data spawns a new map task.
Going by the execution time, I assume your input is quite large. So it is perfectly fine for a large number of files to be generated.
The number of mappers is controlled by the number of InputSplits. If you are using the default FileInputFormat, it will create an InputSplit for each file.
So if you have 300+ input files, it is expected to run 300+ map tasks. You cannot explicitly control this (the number of mappers).
Since the number of reducers is set to 0, all output from the mappers is written straight to the output directory using the configured output format. That's why you are getting 300+ output files.
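As a way to double-check how many map tasks actually ran (rather than relying on the UI), the driver can read the job counters after completion. A sketch assuming the Hadoop 2.x mapreduce API; the rest of the job setup is elided:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Counters;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.JobCounter;

public class LaunchedMapsCheck {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "launched-maps-check");
        // ... input/output paths, mapper class, job.setNumReduceTasks(0), etc. ...

        if (job.waitForCompletion(true)) {
            Counters counters = job.getCounters();
            long launchedMaps =
                    counters.findCounter(JobCounter.TOTAL_LAUNCHED_MAPS).getValue();
            System.out.println("map tasks actually launched: " + launchedMaps);
        }
    }
}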

How to go through the OutputFormat.RecordWriter write(key,value) twice in Hadoop

I have a situation where I need to go through the key/value pairs of my OutputFormat twice. In essence:
OutputFormat.getRecordWriter() // returns RecordWriteType1
... and when all those are complete across all machines
OutputFormat.getRecordWriter() // return RecordWriterType2
The key/value types of RecordWriterType1 and RecordWriterType2 are the same. Is there a way to do this?
Thank you,
Marko.
Unfortunately you cannot simply run over the reducer data twice.
You do have some options to work around this:
Use an identity reducer to output the sorted data to HDFS, then run two jobs over that data with identity mappers - wasteful but simple if you don't have that much data (a rough driver sketch of this option follows the list).
As above, but you could use map-only jobs and the key comparator to emulate the reducer function, since you know the input is already sorted (you'll need to make sure the split size is set sufficiently large to ensure all data from the first reducer output file is processed in a single mapper and not split over 2+ mapper instances).
You could write the reducer key/values to local disk in your reducer, and then, in the cleanup method of the reducer, open the local file and process it as detailed in the second option (using the group comparator to determine the key boundaries).
If you dig through the source for ReduceTask, you may even be able to 'abuse' the merged sorted segments on local disk and run over the data again, but this option is pure unadulterated hackery...
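A rough driver sketch of the first option, assuming the Hadoop 2.x mapreduce API. FirstPassOutputFormat and SecondPassOutputFormat stand in for the two custom output formats, and the paths and class wiring are placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TwoPassDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path intermediate = new Path(args[1]);

        // Job 1: the real map/reduce job, writing its reduced (sorted) output to HDFS.
        Job first = Job.getInstance(conf, "reduce-to-hdfs");
        // ... mapper, reducer, key/value classes for the real job ...
        FileInputFormat.addInputPath(first, new Path(args[0]));
        FileOutputFormat.setOutputPath(first, intermediate);
        if (!first.waitForCompletion(true)) System.exit(1);

        // Jobs 2 and 3: identity, map-only passes over the reduced data, each one
        // configured with a different OutputFormat (owning RecordWriterType1/2).
        for (int pass = 1; pass <= 2; pass++) {
            Job job = Job.getInstance(conf, "output-pass-" + pass);
            job.setMapperClass(Mapper.class);   // default Mapper = identity
            job.setNumReduceTasks(0);
            // job.setOutputFormatClass(pass == 1 ? FirstPassOutputFormat.class
            //                                    : SecondPassOutputFormat.class);
            FileInputFormat.addInputPath(job, intermediate);
            FileOutputFormat.setOutputPath(job, new Path(args[2] + "/pass" + pass));
            if (!job.waitForCompletion(true)) System.exit(1);
        }
    }
}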

Writing to single file from mappers

I am working on a MapReduce job that generates a CSV file out of some data read from HBase. Is there a way to write to a single file from the mappers without a reduce phase (or to merge the multiple files generated by the mappers at the end of the job)? I know that I can set the output format to write to a file at the job level; is it possible to do a similar thing for the mappers?
Thanks
It is possible (and not uncommon) to have a MapReduce job without a reduce phase (example). For that you just use job.setNumReduceTasks(0).
However, I am not sure how the job output is handled in this case. Usually you get one result file per reducer. Without reducers, I could imagine that you either get one file per mapper or that you cannot produce job output at all. You will have to try/research that.
If the above does not work for you, you could still use the default Reducer implementation, which just forwards the mapper output (an identity function).
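A minimal sketch of that last suggestion: keep the job's default (identity) Reducer but force a single reduce task, so everything ends up in one part-r-00000 file. Class and job names are placeholders.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;

public class SingleOutputFileExample {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "single-output-file");
        // ... mapper, key/value classes, input/output formats and paths as usual ...

        // The default Reducer just forwards the mapper output (identity function);
        // with exactly one reduce task all records funnel into one part-r-00000.
        job.setReducerClass(Reducer.class);
        job.setNumReduceTasks(1);
    }
}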
Seriously, this is not how MapReduce works.
Why do you even need a job for that? Write a simple Java application that does the same for you. There are also command-line utilities that do the same.
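One such command-line utility is hadoop fs -getmerge; its programmatic counterpart (available up to Hadoop 2.x) is FileUtil.copyMerge. A sketch for merging the part files of a finished map-only job into one file:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class MergeJobOutput {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Concatenate every part file in the job output directory into a single file.
        FileUtil.copyMerge(fs, new Path(args[0]),   // source: job output directory
                           fs, new Path(args[1]),   // destination: one merged file
                           false,                   // do not delete the part files
                           conf, null);             // no separator string between files
    }
}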
