Input/output flow in MapReduce chaining - Hadoop

I need help with MapReduce chaining. I have a chain like this:
map -> reduce -> map
I want the output of the reducer to be used in the last mapper.
For example, in my reducer I compute the maximum salary of an employee, and that value is supposed to be used in the next mapper to find the record with that maximum salary. So obviously my last mapper should get both the output of the reducer and the contents of the original file. Is that possible? How can I fix the problem? Is there a better solution?

I'm not sure I understood the problem, but I'll try to help.
You have reduced some input containing employee salaries (let's call it input1) into output (let's call it output1) that looks like this:
Key: someEmployee Value: max_salary
and now you want another mapper to map the data from both input1 and output1?
If so, you have a few options; choose one according to your needs.
Manipulate the first reducer's output: instead of writing output1 as Key: someEmployee Value: max_salary, write it as
Key: someEmployee Value: max_salary##salary_1,salary_2,salary_3...salary_n
and then create a new job and set the new mapper's input to output1.
Alternatively, try reading this question, which explains how to get multiple inputs into one mapper.
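To make the flow concrete, here is a minimal in-memory sketch of the chain in plain Java - not Hadoop API code, and the class name and "name,salary" record layout are made up for illustration. The first job's reduce step finds the max salary, and the second job's map step filters the original input against it:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical records of the form "name,salary".
public class MaxSalaryChain {

    // "Reducer" of job 1: find the maximum salary over all records.
    static int maxSalary(List<String> records) {
        return records.stream()
                .mapToInt(r -> Integer.parseInt(r.split(",")[1]))
                .max()
                .orElseThrow(IllegalStateException::new);
    }

    // "Mapper" of job 2: keep only the records whose salary equals the max.
    static List<String> recordsWithSalary(List<String> records, int salary) {
        return records.stream()
                .filter(r -> Integer.parseInt(r.split(",")[1]) == salary)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> input = Arrays.asList("alice,300", "bob,500", "carol,500");
        int max = maxSalary(input);                        // output1 of job 1
        System.out.println(recordsWithSalary(input, max)); // output of job 2
    }
}
```

In real Hadoop this would be two Job objects run back to back, with the second job reading the original input and picking up the max either from the first job's output directory or via the Configuration/distributed cache.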

Related

Comparing data from same file in reducer function of a map reduce program

In my MapReduce program, the mapper function emits two key-value pairs:
1) (person1, age)
2) (person2, age)
(I have kept only 2 pairs for simplicity; it would be nice if you could explain it for n lines.)
Now I want to write a reducer that compares the two ages and says who is older.
The thing I cannot understand is that the mapper's output will be on different lines in the file, and since the reducer works line by line over a file, how will it compare them?
Thanks in advance.
See if either of the following approaches serves your purpose:
A.
Emit (age, person_name) from your map.
Use only one reducer: you will get all (age, person) pairs in sorted order, so simply emitting them gives the youngest first and the oldest last.
If you don't want to print all the values, just keep two references in the reducer task - youngest and oldest - set them in the reduce method and emit whichever you want in the cleanup of the reducer task.
B.
Have the mapper emit (name, age) as you said.
In the reducer task:
a. use setup() to create a TreeMap;
b. in reduce(), add (age, person) to the TreeMap;
c. your map will be sorted by age, which you can use in cleanup() to do something with it.
Essentially, you can store every key-value pair in internal objects in reduce(); in cleanup() you have access to all of those values and can apply any logic you want.
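A rough in-memory sketch of option B, in plain Java rather than the Hadoop API (class and method names are made up): reduce() calls fill the TreeMap, and cleanup() reads the extremes. One caveat: a TreeMap keyed by age keeps only one name per age, so use a TreeMap<Integer, List<String>> if ages can repeat.

```java
import java.util.Map;
import java.util.TreeMap;

// Models the reducer's state: setup() would create the TreeMap,
// each reduce() call adds one (age, name) pair, cleanup() reads the extremes.
public class OldestYoungest {

    static TreeMap<Integer, String> collect(Map<String, Integer> nameToAge) {
        TreeMap<Integer, String> byAge = new TreeMap<>();
        // Caveat: ties on age overwrite each other; use a List value type if ages repeat.
        nameToAge.forEach((name, age) -> byAge.put(age, name));
        return byAge;
    }

    static String youngest(TreeMap<Integer, String> byAge) { return byAge.firstEntry().getValue(); }
    static String oldest(TreeMap<Integer, String> byAge)  { return byAge.lastEntry().getValue(); }
}
```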
I think your use case fits the secondary sorting technique straight away.
Secondary sorting is a technique introduced to sort the "values" emitted by the mapper; primary sorting is done on the "key" emitted by the mapper.
If you try to sort all the values at the reducer level, you may run out of memory. Secondary sorting should instead be done at the mapper/shuffle level.
Have a look at this article.
In that article's example, just replace "year" with "person" and "temperature" with "age".
Solution:
Create a custom partitioner to send all values for a particular key to a single reducer.
Sorting should be done on the key-value combination emitted by the mapper: create a composite key of key + value, and write a comparator that sorts first by key and then by value.
In the reduce method, all you get is a key and a list of values, so you can find the min or max among the values for that key. However, if you need to compare across keys, then you should perhaps think of a single reducer: get all the records from the mappers and handle that logic in your reducer class with the help of a reference variable rather than a local one, updating it with every min/max value for each key.
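The composite-key idea can be sketched without any Hadoop classes (all names here are invented for illustration): the comparator orders by the natural key and then by the value, while the partitioner hashes only the natural key, so that one person's records still reach the same reducer despite the composite sort key.

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Composite-key sorting sketch: records are {person, age} string pairs.
public class SecondarySortSketch {

    // Sort first by the natural key (person), then by the value (age).
    static final Comparator<String[]> COMPOSITE =
            Comparator.comparing((String[] r) -> r[0])
                      .thenComparingInt(r -> Integer.parseInt(r[1]));

    // A custom partitioner would hash only the natural key, so a person's
    // records land in one partition regardless of the composite sort key.
    static int partition(String[] record, int numReducers) {
        return (record[0].hashCode() & Integer.MAX_VALUE) % numReducers;
    }

    static List<String[]> sorted(List<String[]> records) {
        return records.stream().sorted(COMPOSITE).collect(Collectors.toList());
    }
}
```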

Purpose Of NullWritable

I want to count the number of students who have booked a movie ticket, and I want only one output after the reduce phase. I'd like the mapper to emit the count of students, not the keys.
Can I use NullWritable as the output key so that nothing is emitted from the map side as the key to the reduce side, as shown below?
context.write(NullWritable.get(), new IntWritable(1));
The data will be emitted to the reducer, and the reducer will perform further aggregation.
Please suggest if anybody has a better alternative.
Thank you in advance!
Instead, you could emit the map output as
context.write(new Text("number of students"), new IntWritable(1));
with the number of reducers set to 1 in the driver. Then you can sum up the values on the reducer side.
If you only need the value in the output file and don't need a key, you can use NullWritable:
context.write(NullWritable.get(), value);
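To see why this works, here is the data flow in plain Java (not Hadoop API code): with the reducer count set to 1, every emitted 1 arrives in a single reduce call regardless of what, if anything, the key carries.

```java
import java.util.Collections;
import java.util.List;

// Data-flow sketch: each map() call emits a single 1 under a key that
// carries no information; one reduce() call then sums them all.
public class StudentCount {

    // One "map()" call per student record.
    static List<Integer> map(String record) {
        return Collections.singletonList(1);
    }

    // The single "reduce()" call: sum every emitted 1 to get the total.
    static int reduce(Iterable<Integer> ones) {
        int total = 0;
        for (int one : ones) total += one;
        return total;
    }
}
```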

How to process large file with one record dependent on another in MapReduce

I have a scenario with a really large file where, say, the record on line 1 might depend on the data on line 1000, and lines 1 and 1000 can be part of separate splits. My understanding of the framework is that the record reader returns one key-value pair to the mapper, and each pair is independent of the others. Moreover, the file is divided into splits and I want to keep it that way (i.e. making it non-splittable is not an option). Can I handle this somehow, maybe by writing my own record reader, mapper, or reducer?
The dependency looks like this:
Row1: a,b,c,d,e,f
Row2: x,y,z,p,q,r
Now x in Row2 needs to be used with, say, d in Row1 to get my desired output.
Thanks.
I think what you need is to implement a reduce-side join. You can find a better explanation of it here: http://hadooped.blogspot.mx/2013/09/reduce-side-joins-in-java-map-reduce.html.
Both related values have to end up in the same reducer (determined by the key and the Partitioner), and they should be grouped together (GroupingComparator); you may also need a secondary sort to order the grouped values.
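Assuming the two rows share some join key that links them (the question doesn't show one, so field 0 here is invented for the sketch), the mapper side of a reduce-side join can be sketched in plain Java like this: tag each value with its source row type, key it by the shared join key, and the shuffle brings the linked fields together in one reduce call.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Reduce-side join sketch: rows of the hypothetical form "joinKey,field".
public class DependentRows {

    // "map()": key each row by the shared join key and tag the value with
    // its source row type so the reducer can tell them apart.
    static Map.Entry<String, String> map(String row, String tag) {
        String[] f = row.split(",");
        return new SimpleEntry<>(f[0], tag + ":" + f[1]);
    }

    // Models the shuffle: values with the same join key are grouped,
    // which is what brings the dependent fields into one reduce() call.
    static Map<String, List<String>> shuffle(List<Map.Entry<String, String>> pairs) {
        Map<String, List<String>> grouped = new HashMap<>();
        for (Map.Entry<String, String> p : pairs)
            grouped.computeIfAbsent(p.getKey(), k -> new ArrayList<>()).add(p.getValue());
        return grouped;
    }
}
```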

retrieving unique results from a column in a text file with Hadoop MapReduce

I have the data set below. I want a unique list of the first column as the output, e.g. {9719, 382, ...}. There are integers at the end of each line, so checking whether a token starts and ends with a number is not an option, and I couldn't think of a solution. Can you show me how to do it? I'd really appreciate a detailed answer (what to do in the map step and what to do in the reduce step).
id - - [date] "URL"
In your mapper, parse each line and write out the token you are interested in from the beginning of the line (e.g. 9719) as the key of a key-value pair (the value is irrelevant in this case). Since the keys are sorted and grouped before being sent to the reducer, each reduce call receives one unique key, so all the reducer needs to do is output each key once.
The WordCount example app that is packaged with Hadoop is very close to what you need.
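A plain-Java sketch of that flow (not Hadoop API code; the log format is assumed to be whitespace-separated with the id first): map emits the first token as the key, and the shuffle's grouping, modeled here with a set, does the deduplication.

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Distinct-ids sketch: map emits the first token of each line as the key;
// the shuffle's grouping (modeled by a set) then deduplicates the ids.
public class DistinctIds {

    // "map()": the id is everything before the first whitespace.
    static String mapLine(String line) {
        return line.split("\\s+", 2)[0];
    }

    // "reduce()" would emit each grouped key once; a set models that here.
    static Set<String> distinct(List<String> lines) {
        Set<String> ids = new LinkedHashSet<>();
        for (String line : lines) ids.add(mapLine(line));
        return ids;
    }
}
```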

Mapping through two data sets with Hadoop

Suppose I have two key-value data sets--Data Sets A and B, let's call them. I want to update all the data in Set A with data from Set B where the two match on keys.
Because I'm dealing with such large quantities of data, I'm using Hadoop MapReduce. My concern is that to do this key matching between A and B, I need to load all of Set A (a lot of data) into the memory of every mapper instance. That seems rather inefficient.
Would there be a recommended way to do this that doesn't require repeating the work of loading in A every time?
Some pseudocode to clarify what I'm currently doing:

Load in Data Set A            # This seems like the expensive step to always be doing
Foreach key/value in Data Set B:
    If key is in Data Set A:
        Update Data Set A
According to the documentation, the MapReduce framework includes the following steps:
Map
Sort/Partition
Combine (optional)
Reduce
You've described one way to perform your join: loading all of Set A into memory in each Mapper. You're correct that this is inefficient.
Instead, observe that a large join can be partitioned into arbitrarily many smaller joins if both sets are sorted and partitioned by key. MapReduce sorts the output of each Mapper by key in step (2) above. Sorted Map output is then partitioned by key, so that one partition is created per Reducer. For each unique key, the Reducer will receive all values from both Set A and Set B.
To finish your join, the Reducer needs only to output the key and either the updated value from Set B, if it exists; otherwise, output the key and the original value from Set A. To distinguish between values from Set A and Set B, try setting a flag on the output value from the Mapper.
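A sketch of that flag idea in plain Java (the "A:"/"B:" prefixes stand in for whatever flag you put on the map output value; all names are invented): one reduce call sees the tagged values for a key and prefers Set B's value when it exists.

```java
import java.util.List;

// Flagged-value join sketch: one reduce() call per key; "A:"/"B:" prefixes
// stand in for whatever flag the mappers put on their output values.
public class FlaggedJoin {

    static String reduce(List<String> taggedValues) {
        String fromA = null, fromB = null;
        for (String v : taggedValues) {
            if (v.startsWith("B:")) fromB = v.substring(2);
            else if (v.startsWith("A:")) fromA = v.substring(2);
        }
        // The update from Set B wins when both sets have the key;
        // otherwise Set A's original value passes through unchanged.
        return fromB != null ? fromB : fromA;
    }
}
```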
All of the answers posted so far are correct - this should be a Reduce-side join... but there's no need to reinvent the wheel! Have you considered Pig, Hive, or Cascading for this? They all have joins built-in, and are fairly well optimized.
This video tutorial by Cloudera gives a great description of how to do a large-scale Join through MapReduce, starting around the 12 minute mark.
Here are the basic steps he lays out for joining records from file B onto records from file A on key K, with pseudocode. If anything here isn't clear, I'd suggest watching the video as he does a much better job explaining it than I can.
In your Mapper:
K from file A:
    tag K to identify it as a Primary Key
    emit <K, value of K>
K from file B:
    tag K to identify it as a Foreign Key
    emit <K, record>
Write a Sorter and Grouper which will ignore the PK/FK tagging, so that your records are sent to the same Reducer regardless of whether they are a PK record or a FK record and are grouped together.
Write a Comparator which will compare the PK and FK keys and send the PK first.
The result of this step will be that all records with the same key will be sent to the same Reducer and be in the same set of values to be reduced. The record tagged with PK will be first, followed by all records from B which need to be joined. Now, the Reducer:
value_of_PK = values[0]              // First value is the value of your primary key
for value in values[1:]:
    value.replace(FK, value_of_PK)   // Replace the foreign key with the key's value
    emit <key, value>
The result of this will be file B, with all occurrences of K replaced by the value of K in file A. You can also extend this to effect a full inner join, or to write out both files in their entirety for direct database storage, but those are pretty trivial modifications once you get this working.
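That reducer loop can be written in plain Java as an in-memory sketch (not the actual Hadoop Reducer API), assuming the comparator has already placed the PK record first in each key's value list:

```java
import java.util.ArrayList;
import java.util.List;

// The reducer loop: values[0] is the PK record's value; the rest are
// FK records from file B whose key occurrences get replaced by that value.
public class PkFkReducer {

    static List<String> reduce(String key, List<String> values) {
        String valueOfPk = values.get(0);          // first value: the PK record's value
        List<String> out = new ArrayList<>();
        for (String rec : values.subList(1, values.size())) {
            // Replace each occurrence of the foreign key with the PK's value.
            out.add(rec.replace(key, valueOfPk));
        }
        return out;
    }
}
```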
