Hadoop streaming: ensuring one key per reducer

I have a mapper that, while processing data, classifies output into 3 different types (type is the output key). My goal is to create 3 different csv files via the reducers, each with all of the data for one key with a header row.
The key values can change and are text strings.
Now, ideally, I would like to have 3 different reducers and each reducer would get only one key with its entire list of values.
Except, this doesn't seem to work because the keys don't get mapped to specific reducers.
The answer to this in other places has been to write a custom partitioner class that would map each desired key value to a specific reducer. This would be great, except that I need to use streaming with Python and I am not able to include a custom streaming jar in my job, so that does not seem to be an option.
I see in the Hadoop docs that there is an alternate partitioner class available that can enable secondary sorts, but it isn't immediately obvious to me that it is possible, using either the default or the key-field-based partitioner, to ensure that each key ends up on its own reducer without writing a Java class and using a custom streaming jar.
Any suggestions would be much appreciated.
Examples:
mapper output:
csv2\tfieldA,fieldB,fieldC
csv1\tfield1,field2,field3,field4
csv3\tfieldRed,fieldGreen
...
The problem is that if I have 3 reducers I end up with a key distribution like this:
reducer1      reducer2      reducer3
csv1, csv2    csv3          (nothing)
One reducer gets two different key types and one reducer gets no data sent to it at all. This is because hash(csv1) mod 3 and hash(csv2) mod 3 result in the same value.

I'm pretty sure MultipleOutputFormat [1] can be used under streaming. That'll solve most of your problems.
http://hadoop.apache.org/common/docs/r0.20.1/api/org/apache/hadoop/mapred/lib/MultipleOutputFormat.html
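For reference, a rough sketch of how MultipleOutputFormat is typically used with the old mapred API: subclass MultipleTextOutputFormat and derive the output file name from the key. The class name KeyBasedOutput is just a placeholder, and with streaming you would still need to get the compiled class onto the job's classpath (for example via a jar passed with -libjars), which the question notes may not be possible.

    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

    // Hypothetical output format: routes each record to a file named after its key,
    // so csv1/csv2/csv3 records end up in separate files regardless of which
    // reducer handled them.
    public class KeyBasedOutput extends MultipleTextOutputFormat<Text, Text> {

        @Override
        protected String generateFileNameForKeyValue(Text key, Text value, String name) {
            // e.g. key "csv1" and default name "part-00000" -> "csv1-part-00000"
            return key.toString() + "-" + name;
        }

        @Override
        protected Text generateActualKey(Text key, Text value) {
            // Suppress the key in the written records so only the csv fields remain.
            return null;
        }
    }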

If you are stuck with streaming, and can't include any external jars for a custom partitioner, then this is probably not going to work the way you want it to without some hacks.
If these are absolute requirements, you can get around this, but it's messy.
Here's what you can do:
Hadoop, by default, uses a hashing partitioner, like this:
key.hashCode() % numReducers
So you can pick three keys whose hash codes come out to 0, 1, and 2 modulo 3, so that each one lands on a different reducer. This is a nasty hack, and I wouldn't suggest it unless you have no other options.
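If you go that route, you can at least check up front which reducer each candidate key would land on. A small sketch, assuming the keys reach the default HashPartitioner as Text (the partition is computed as (hashCode & Integer.MAX_VALUE) % numReduceTasks):

    import org.apache.hadoop.io.Text;

    public class KeyPartitionCheck {
        public static void main(String[] args) {
            int numReducers = 3;
            String[] candidateKeys = {"csv1", "csv2", "csv3"}; // keys you are considering
            for (String k : candidateKeys) {
                // Same arithmetic as the default HashPartitioner.getPartition()
                int partition = (new Text(k).hashCode() & Integer.MAX_VALUE) % numReducers;
                System.out.println(k + " -> reducer " + partition);
            }
        }
    }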

If you want custom output to different csv files you can write directly to HDFS (with the API). As you know, Hadoop passes a key and its associated value list to a single reduce task. In the reduce code, while the key is the same, write to the same file; if another key comes, create a new file manually and write into it. It does not matter how many reducers you have.
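A minimal sketch of that approach, assuming the new mapreduce API and Text keys/values; the output directory, file naming and header row are purely illustrative:

    import java.io.IOException;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    public class PerKeyFileReducer extends Reducer<Text, Text, Text, Text> {
        @Override
        protected void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            FileSystem fs = FileSystem.get(context.getConfiguration());
            // One file per key; the task ID suffix keeps files from different
            // reduce tasks from colliding.
            Path out = new Path("/output/" + key.toString() + "-"
                    + context.getTaskAttemptID().getTaskID().getId() + ".csv");
            FSDataOutputStream stream = fs.create(out);
            try {
                stream.writeBytes("header1,header2,header3\n"); // illustrative header row
                for (Text value : values) {
                    stream.writeBytes(value.toString() + "\n");
                }
            } finally {
                stream.close();
            }
        }
    }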

Related

How output files (part-m-0001/part-r-0001) are created in map reduce

I understand that the map reduce output is stored in files named like part-r-* for reducers and part-m-* for mappers.
When I run a mapreduce job, sometimes I get the whole output in a single file (around 150MB in size), and sometimes for almost the same data size I get two output files (one 100MB and the other 50MB). This seems very random to me; I can't find any reason for it.
I want to know how it's decided whether that data goes into a single output file or multiple output files, and whether there is any way we can control it.
Thanks
Unlike what is stated in the answer by Jijo here, the number of files depends on the number of Reducers/Mappers.
It has nothing to do with the number of physical nodes in the cluster.
The rule is: one part-r-* file per Reducer. The number of Reducers is set by job.setNumReduceTasks(int).
If there are no Reducers in your job, then there is one part-m-* file per Mapper. There is one Mapper per InputSplit (usually, unless you use a custom InputFormat implementation, there is one InputSplit per HDFS block of your input data).
The number of output files part-m-* and part-r-* is set according to the number of map tasks and the number of reduce tasks respectively.
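As an illustration, a minimal driver sketch (new API, default identity mapper/reducer, placeholder paths): with setNumReduceTasks(1) the output directory contains exactly one part-r-00000 file, while a higher number yields one part-r-* file per reduce task.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class OutputFileCountDemo {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "output file count demo");
            job.setJarByClass(OutputFileCountDemo.class);
            FileInputFormat.addInputPath(job, new Path("/input"));    // illustrative
            FileOutputFormat.setOutputPath(job, new Path("/output")); // illustrative
            job.setNumReduceTasks(1); // one reduce task -> exactly one part-r-00000 file
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }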

How to go through the OutputFormat.RecordWriter write(key,value) twice in Hadoop

I have a situation where I need to go through the key/value pairs of my OutputFormat twice. In essence:
OutputFormat.getRecordWriter() // returns RecordWriteType1
... and when all those are complete across all machines
OutputFormat.getRecordWriter() // return RecordWriterType2
The types of RecordWriterType1/2 are the same. Is there a way to do this?
Thank you,
Marko.
Unfortunately you cannot simply run over the reducer data twice.
You do have some options to possibly work around this:
Use an identity reducer to output the sorted data to HDFS, then run two jobs over the data with identity mappers - wasteful but simple if you don't have that much data
As above, but you could use map-only jobs and the key comparator to emulate the reducer function, since you know the input is already sorted (you'll need to make sure the split size is set sufficiently large to ensure all data from the first reducer output file is processed in a single mapper and not split over 2+ mapper instances).
You could write the reducer key/values to local disk in your reducer, and then in the cleanup method of the reducer, open the local file and process it as detailed in the second option (using the group comparator to determine key boundaries); see the sketch after this list.
If you dig through the source for ReduceTask, you may even be able to 'abuse' the merged sorted segments on local disk and run over the data again, but this option is pure unadulterated hackery...
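If it helps, here is a rough sketch of the third option (new mapreduce API assumed; the spill file handling and the second-pass output are purely illustrative):

    import java.io.BufferedReader;
    import java.io.File;
    import java.io.FileReader;
    import java.io.IOException;
    import java.io.PrintWriter;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    public class TwoPassReducer extends Reducer<Text, Text, Text, Text> {
        private File spill;
        private PrintWriter spillWriter;

        @Override
        protected void setup(Context context) throws IOException {
            spill = File.createTempFile("reduce-spill", ".tmp"); // local task disk
            spillWriter = new PrintWriter(spill);
        }

        @Override
        protected void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            // First pass: do the normal reduce work here, and also spill the
            // key/value pairs to local disk for the second pass.
            for (Text value : values) {
                spillWriter.println(key + "\t" + value);
            }
        }

        @Override
        protected void cleanup(Context context) throws IOException, InterruptedException {
            spillWriter.close();
            // Second pass: re-read everything this reducer saw.
            BufferedReader reader = new BufferedReader(new FileReader(spill));
            String line;
            while ((line = reader.readLine()) != null) {
                String[] parts = line.split("\t", 2);
                context.write(new Text(parts[0]), new Text(parts[1]));
            }
            reader.close();
            spill.delete();
        }
    }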

Control the Reducer Result Output File/Bucket

I have an application where I would like to make my reducers (I have several of them for a map/reduce job) record their outputs into different files on HDFS depending on the key coming to them for processing. So if a reducer sees a key of, say, type A, it applies the reduce logic but tells Hadoop to put the result into the HDFS file belonging to the type A result, and so on. Obviously multiple reducers can end up outputting different portions of the type A result, and each reducer can end up working on any type like A or B, but it tells Hadoop to write the result into the type A bucket or something like that.
Is this possible?
MultipleOutputs is almost what you are looking for (assuming you are at least at version 0.21). In my own work I have used a clone of this class modified to be more flexible about naming conventions to send output to different folders/files based on anything I want, including aspects of the input records (keys or values). As is, the class has some draconian restrictions on what names you can give to the outputs.
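For reference, a minimal sketch of the stock MultipleOutputs class (new API, 0.21+), writing each record to an output file named after its key; the class name is illustrative, and anything fancier than simple key-derived names is where the modified clone mentioned above comes in.

    import java.io.IOException;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

    public class BucketReducer extends Reducer<Text, Text, Text, Text> {
        private MultipleOutputs<Text, Text> mos;

        @Override
        protected void setup(Context context) {
            mos = new MultipleOutputs<Text, Text>(context);
        }

        @Override
        protected void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            for (Text value : values) {
                // Third argument is the base output name; using the key sends
                // type A records to files prefixed "A", type B to "B", etc.
                mos.write(key, value, key.toString());
            }
        }

        @Override
        protected void cleanup(Context context) throws IOException, InterruptedException {
            mos.close();
        }
    }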

Writing to single file from mappers

I am working on a mapreduce job that generates a CSV file out of some data read from HBase. Is there a way to write to a single file from the mappers without a reduce phase (or to merge the multiple files generated by the mappers at the end of the job)? I know that I can set the output format to write to a file at the Job level; is it possible to do a similar thing for the mappers?
Thanks
It is possible (and not uncommon) to have a Map/Reduce job without a reduce phase (example). For that you just use job.setNumReduceTasks(0).
However, I am not sure how the job output is handled in this case. Usually you get one result file per reducer. Without reducers I could imagine that you either get one file per mapper or that you cannot produce job output. You will have to try/research that.
If the above does not work for you, you could still use the default Reducer implementation, that just forwards the mapper output (identity function).
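For what it's worth, a short driver sketch of both suggestions (new API; the file input/output paths are placeholders, and reading from HBase would use its TableInputFormat instead of the file input shown here):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class MapOnlyOrSingleFile {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "map-only vs single file");
            job.setJarByClass(MapOnlyOrSingleFile.class);
            FileInputFormat.addInputPath(job, new Path("/input"));    // illustrative
            FileOutputFormat.setOutputPath(job, new Path("/output")); // illustrative

            job.setNumReduceTasks(0);   // map-only: one part-m-* file per mapper
            // job.setNumReduceTasks(1); // or: one identity reducer -> a single part-r-00000

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }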
Seriously, this is not how MapReduce works.
Why do you even need a Job for that? Write a simple Java application that does the same for you. There are also command line utils that does the same for you.

Hadoop streaming and AMAZON EMR

I have been attempting to use Hadoop streaming in Amazon EMR to do a simple word count over a bunch of text files. To get a handle on Hadoop streaming and on Amazon's EMR, I used a very simplified data set: each text file has only one line of text in it (the line can contain an arbitrarily large number of words).
The mapper is an R script that splits the line into words and spits them back to the stream.
cat(wordList[i],"\t1\n")
I decided to use the LongValueSum Aggregate reducer for adding the counts together, so I had to prefix my mapper output with LongValueSum:
cat("LongValueSum:",wordList[i],"\t1\n")
and specify the reducer to be "aggregate"
The questions I have now are the following:
The intermediate stage between mapper and reducer just sorts the stream; it does not really combine by key. Am I right? I ask this because if I do not use "LongValueSum" as a prefix to the words output by the mapper, at the reducer I just receive the stream sorted by key, but not aggregated. That is, I just receive records ordered by K, as opposed to (K, list(values)) at the reducer. Do I need to specify a combiner in my command?
How are other aggregate reducers used? I see a lot of other reducers/aggregates/combiners available at http://hadoop.apache.org/mapreduce/docs/r0.21.0/api/org/apache/hadoop/mapred/lib/aggregate/package-summary.html
How are these combiners and reducers specified in an Amazon EMR setup?
I believe an issue of this kind has been filed and fixed in Hadoop streaming for a combiner, but I am not sure what version Amazon EMR is hosting, or which version the fix is available in.
How about custom input formats and record readers and writers? There are a bunch of libraries written in Java. Is it sufficient to specify the Java class name for each of these options?
The intermediate stage between mapper and reducer, just sorts the stream. It does not really combine by the keys. Am I right?
The aggregate reducer in streaming does implement the relevant combiner interfaces, so Hadoop will use it as a combiner if it sees fit [1].
That is I just receive ordered by K, as opposed to (K, list(Values)) at the reducer.
With the streaming interface you always receive K,V pairs; you'll never receive (K, list(values)).
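To make that concrete, here is an illustrative streaming-style reducer (written in Java here, but any executable reading stdin works the same way with streaming): the input arrives as sorted key\tvalue lines, so the reducer itself has to notice when the key changes and emit the aggregated result.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;

    public class StreamingSumReducer {
        public static void main(String[] args) throws Exception {
            BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
            String currentKey = null;
            long sum = 0;
            String line;
            while ((line = in.readLine()) != null) {
                String[] parts = line.split("\t", 2);
                String key = parts[0];
                long count = Long.parseLong(parts[1].trim());
                if (currentKey != null && !currentKey.equals(key)) {
                    System.out.println(currentKey + "\t" + sum); // key boundary reached
                    sum = 0;
                }
                currentKey = key;
                sum += count;
            }
            if (currentKey != null) {
                System.out.println(currentKey + "\t" + sum);
            }
        }
    }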
How are other aggregate reducers used?
Which of them are you unsure about? The link you posted has a quick summary of the behaviour of each.
I believe an issue of this kind has been filed and fixed
What issue are you thinking of?
not sure what version AMAZON EMR is hosting
EMR is based on Hadoop 0.20.2
Is it sufficient to specify the java class name for each of these options?
Do you mean in the context of streaming? or the aggregate framework?
[1] http://hadoop.apache.org/mapreduce/docs/r0.21.0/api/org/apache/hadoop/mapred/lib/aggregate/package-summary.html
