Hadoop - Get Split Ids in map function

I'm working on a project with map reduce.
My understanding of Hadoop is that it will separate my data into blocks, which are then turned into splits, where each split corresponds to a single map task.
It would be my assumption that each split would have an ID or number associated with it.
I'm wondering if there is any way to get this split Id/number or even the block Id/number as the key to the map function?
i.e.:
map(split_id, data)

The InputSplit toString() method returns a string describing the split. If we hash this string using MD5Hash, we get a unique ID identifying each input split:
InputSplit is = context.getInputSplit();
String splitId = MD5Hash.digest(is.toString()).toString();
We can then use splitId as the key in the map function.
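Outside a running job, the same idea can be sketched in plain Java: Hadoop's MD5Hash is a thin wrapper around java.security.MessageDigest, so hashing the split's string form (for a FileSplit it looks like "path:start+length") yields a stable per-split identifier. The class name and sample split strings below are illustrative:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class SplitIdSketch {
    // Derive a stable 32-char hex ID from a split's string form.
    public static String splitId(String splitDescription) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] digest = md5.digest(splitDescription.getBytes(StandardCharsets.UTF_8));
            return String.format("%032x", new BigInteger(1, digest));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // MD5 is always available on the JVM
        }
    }

    public static void main(String[] args) {
        // Two different splits of the same file hash to different IDs.
        System.out.println(splitId("file:/data/part-0:0+134217728"));
        System.out.println(splitId("file:/data/part-0:134217728+134217728"));
    }
}
```

Because the hash is derived from the split description, every map task attempt over the same split computes the same ID.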

Related

Joins using Custom File Format

If I wish to perform a reduce-side join using a custom file format, how should I implement the RecordReader? Say I have to fetch data from two datasets: one from a Customers table (customerId, fname, lname, age, profession) and one from a Transactions table (transId, transdate, customerId, itemPurchased1, itemPurchased2, city, state, methodOfPayment). In order to fetch data from two datasets, I need two mappers. Can I have two record readers for two mappers? If so, how?
Please explain along with the driver implementation. If not possible please suggest me a way to implement reduce side join using custom file format.
Thank you in Advance :)
You want to join two data sets with a reduce-side join.
You need two mappers, as the two inputs have different formats and need separate parsing. When writing output, each mapper should emit the join attribute (likely the customer ID in your case) as the key and the entire record as the value. You can also filter out unnecessary fields here to optimize. Importantly, you need to prepend a tag such as "set1:" to each value, so the reducer can identify which mapper a record came from.
In the reducer, you will have the customer ID as the key and a list containing records from both sets, and you can join them there as required.
Once the two mappers are written, you let the job know about them. This is configured on the Job via MultipleInputs, as below:
MultipleInputs.addInputPath(job, new Path("inputPath1"), TextInputFormat.class, com.abc.HBaseMapper1.class);
MultipleInputs.addInputPath(job, new Path("inputPath2"), TextInputFormat.class, com.abc.HBaseMapper2.class);
From performance point, if one of the table is small, you can use distributed cache to load that file and then send the other data set accordingly.
In Mapper 1, parse the customer ID from the row, then:
context.write(new Text(custId), new Text("##map1##|" + value));
In Mapper 2, likewise:
context.write(new Text(custId), new Text("##map2##|" + value));
In the reducer:
StringBuilder output = new StringBuilder();
for (Text txt : values) {
    if (txt.toString().startsWith("##map1##")) {
        // Append the customer fields to output
    } else if (txt.toString().startsWith("##map2##")) {
        // Append the transaction fields to output
    }
}
context.write(key, new Text(output.toString()));
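The tag-and-join logic can be exercised without a cluster. This plain-Java sketch (the tag strings and field layouts are illustrative) mimics one reduce() call: it separates the tagged values for a key into the customer record and its transactions, then emits one joined line per transaction:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ReduceSideJoinSketch {
    // Mimics reduce(key, values): values carry a tag saying which mapper emitted them.
    public static List<String> join(String key, List<String> values) {
        String customer = null;
        List<String> transactions = new ArrayList<>();
        for (String v : values) {
            if (v.startsWith("##map1##|")) customer = v.substring(9);        // customer record
            else if (v.startsWith("##map2##|")) transactions.add(v.substring(9)); // transaction record
        }
        List<String> out = new ArrayList<>();
        for (String t : transactions) out.add(key + "\t" + customer + "\t" + t);
        return out;
    }

    public static void main(String[] args) {
        List<String> joined = join("c42", Arrays.asList(
            "##map1##|Jane,Doe,34,engineer",
            "##map2##|t1,2014-01-05,book",
            "##map2##|t2,2014-02-11,lamp"));
        joined.forEach(System.out::println);
    }
}
```

In a real job the shuffle delivers exactly this grouping: all values sharing a customer ID arrive in one reduce() call, regardless of which mapper produced them.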

Comparing data from same file in reducer function of a map reduce program

In my map reduce program, the mapper function will give two key value pair:
1) (person1, age)
2) (person2, age)
(I have kept 2 pairs only for simplicity; it would be nice if you could explain for n lines.)
Now I want to write a reducer which will compare the ages of both and tell who is older.
The thing I cannot understand is that the output of the mapper will be on different lines of the file, and as the reducer works on a line-by-line basis over a file, how will it compare them?
Thanks in advance.
See if any of the following logic serves your purpose:
A.
emit (age, person_name) from your map
have only 1 reducer -
you will get all (age, person) pairs in sorted order, so simply emitting them gives the youngest first and the oldest last.
If you don't want to print all values, just keep two references in the reducer task - youngest and oldest - set them in the reduce method and emit whichever you want in the cleanup of the reducer task.
B.
Have a mapper emitting (name, age) as you said.
In the reducer task:
a. Use setup() to create a TreeMap
b. In reduce(), add (age, person) to the TreeMap
c. Your map will be sorted by age, which you can use in cleanup() to do something with it.
Essentially you can store all key-value pairs in internal object(s) in reduce(); in cleanup() you will have access to all of these values and can perform any logic you want on them.
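Approach B can be tried in plain Java. Here a TreeMap stands in for the structure built up across reduce() calls (the class and method names are illustrative), and cleanup() would simply read its two ends. Note that two people with the same age would overwrite each other in a plain TreeMap; a multimap would be needed for ties:

```java
import java.util.TreeMap;

public class OldestPersonSketch {
    // What cleanup() would do: read the two ends of the sorted map.
    public static String youngest(TreeMap<Integer, String> byAge) {
        return byAge.firstEntry().getValue();
    }

    public static String oldest(TreeMap<Integer, String> byAge) {
        return byAge.lastEntry().getValue();
    }

    public static void main(String[] args) {
        // What reduce() would do: byAge.put(age, person) for each incoming pair.
        TreeMap<Integer, String> byAge = new TreeMap<>();
        byAge.put(25, "person1");
        byAge.put(40, "person2");
        byAge.put(31, "person3");
        System.out.println("youngest: " + youngest(byAge)); // person1
        System.out.println("oldest: " + oldest(byAge));     // person2
    }
}
```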
I think your use case straight away fits into Secondary Sorting technique.
Secondary sorting is a technique introduced to sort the "values" emitted by the mapper; primary sorting is done on the "key" emitted by the mapper.
If you try to sort all values at the reducer level, you may run out of memory; secondary sorting instead pushes that work into the framework's sort phase via the map output key.
Have a look at this article
In the above example, just replace "year" with "person" and "temperature" with "age".
Solution:
Create a custom Partitioner to send all values for a particular key to a single reducer.
Sorting should be done on the key-value combination emitted by the mapper => create a composite key of key + value, and write a comparator which sorts first by key and then by value.
In the reduce method, all you get is a key and a list of values, so you can find the min or max among the values for that key. However, if you need to compare across keys, then you should consider a single reducer: collect all records from the mappers and handle the logic in your reducer class, using an instance variable rather than a local variable and updating it with the min/max for each key.
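The instance-variable idea in the single-reducer case can be sketched in plain Java (the class and method names are illustrative): the fields play the role of state on the Reducer object that reduce() updates and cleanup() emits:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class GlobalMaxSketch {
    // Instance state: survives across reduce() calls within one reducer task.
    private String oldestPerson;
    private int maxAge = Integer.MIN_VALUE;

    // Plays the role of reduce(key, values): called once per person.
    public void reduce(String person, List<Integer> ages) {
        int personMax = Collections.max(ages);
        if (personMax > maxAge) {
            maxAge = personMax;
            oldestPerson = person;
        }
    }

    // Plays the role of cleanup(context): called once at the end of the task.
    public String cleanup() {
        return oldestPerson + "\t" + maxAge;
    }

    public static void main(String[] args) {
        GlobalMaxSketch reducer = new GlobalMaxSketch();
        reducer.reduce("person1", Arrays.asList(25));
        reducer.reduce("person2", Arrays.asList(40));
        System.out.println(reducer.cleanup()); // person2	40
    }
}
```

This only works globally with a single reducer; with multiple reducers each task would emit its own local maximum.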

Sort reducer input iterator value before processing in Hadoop

I have some input data coming to the reducer with the value type Iterator .
How can I sort this list of values to be ascending order?
I need to sort them in order since they are time values, before processing all in the reducer.
To achieve sorting of reducer input values using Hadoop's built-in features, you can do this:
1. Modify the map output key - append the corresponding value to the map output key, and emit this composite key together with the value from the map. Since Hadoop uses the entire key for sorting by default, map output records will be sorted by (your old key + value).
2. Although sorting is done in step 1, you have manipulated the map output key in the process, and Hadoop does partitioning and grouping based on the key by default.
3. Since you have modified the original key, you need to modify the Partitioner and grouping comparator to work on the old key, i.e. only the first part of your composite key.
Partitioner - decides which key-value pairs land in the same Reducer instance.
GroupComparator - decides which key-value pairs, among the ones that landed in the Reducer, go to the same reduce method call.
4. Finally (and obviously), you need to extract the first part of the input key in the reducer to recover the old key.
For more (and better) detail, see Hadoop: The Definitive Guide, 3rd edition -> chapter 8 -> Sorting -> Secondary Sort.
What you asked for is called Secondary Sort. In a nutshell - you extend the key to add "value sort key" to it and make hadoop to group by only "real key" but sort by both.
Here is a very good explanation about the secondary sort:
http://pkghosh.wordpress.com/2011/04/13/map-reduce-secondary-sort-does-it-all/
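The split between the sort comparator and the grouping comparator can be illustrated in plain Java (the CompositeKey class and field names are illustrative): sorting uses the full (key, value) pair so time values arrive in order, while grouping and partitioning look only at the real key:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class SecondarySortSketch {
    static class CompositeKey {
        final String key;  // the "real key"
        final long time;   // the value we want the reducer to see in order
        CompositeKey(String key, long time) { this.key = key; this.time = time; }
    }

    // Sort comparator: orders by key, then by time -> values arrive sorted in reduce().
    static final Comparator<CompositeKey> SORT =
        Comparator.comparing((CompositeKey c) -> c.key).thenComparingLong(c -> c.time);

    // Grouping comparator: compares only the real key -> one reduce() call per key.
    static final Comparator<CompositeKey> GROUP =
        Comparator.comparing(c -> c.key);

    public static void main(String[] args) {
        List<CompositeKey> records = new ArrayList<>(Arrays.asList(
            new CompositeKey("sensorB", 17),
            new CompositeKey("sensorA", 99),
            new CompositeKey("sensorA", 3)));
        records.sort(SORT); // what the shuffle's sort phase does
        for (CompositeKey c : records) System.out.println(c.key + "\t" + c.time);
    }
}
```

In a real job SORT becomes the job's sort comparator, GROUP the grouping comparator, and the partitioner hashes only the key field.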

retrieving unique results from a column in a text file with Hadoop MapReduce

I have the data set below. I want to get a unique list of the first column as the output: {9719, 382, ...}. There are also integers at the end of each line, so checking whether a token is numeric is not enough, and I couldn't think of a solution. Can you show me how to do it? I'd really appreciate it if you show it in detail (what to do in the map step and what to do in the reduce step).
id - - [date] "URL"
In your mapper you should parse each line and write out the token you are interested in from the beginning of the line (e.g. 9719) as the key in a key-value pair (the value is irrelevant in this case). Since the keys are grouped and sorted before reaching the reducer, all you need to do in the reducer is output each key once - the reduce method is called once per unique key.
The WordCount example app that is packaged with Hadoop is very close to what you need.
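Both phases can be sketched in plain Java (the helper names and log format are illustrative): the map phase takes the leading token of each line as the key, and collecting keys into a TreeSet mimics the shuffle's deduplication and sorting:

```java
import java.util.Arrays;
import java.util.List;
import java.util.SortedSet;
import java.util.TreeSet;

public class UniqueIdsSketch {
    // Map phase: the key is the first whitespace-delimited token of the line.
    public static String mapLine(String line) {
        return line.split("\\s+", 2)[0];
    }

    // Shuffle + reduce phase: one reduce() call per unique key, keys sorted.
    public static SortedSet<String> uniqueIds(List<String> lines) {
        SortedSet<String> ids = new TreeSet<>();
        for (String line : lines) ids.add(mapLine(line));
        return ids;
    }

    public static void main(String[] args) {
        List<String> log = Arrays.asList(
            "9719 - - [date] \"URL\" 200",
            "382 - - [date] \"URL\" 404",
            "9719 - - [date] \"URL\" 200");
        System.out.println(uniqueIds(log)); // each id appears once
    }
}
```

Trailing integers on the line never matter because only the first token is ever emitted as a key.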

Mapping through two data sets with Hadoop

Suppose I have two key-value data sets--Data Sets A and B, let's call them. I want to update all the data in Set A with data from Set B where the two match on keys.
Because I'm dealing with such large quantities of data, I'm using Hadoop to MapReduce. My concern is that to do this key matching between A and B, I need to load all of Set A (a lot of data) into the memory of every mapper instance. That seems rather inefficient.
Would there be a recommended way to do this that doesn't require repeating the work of loading in A every time?
Some pseudocode to clarify what I'm currently doing:
Load in Data Set A  # This seems like the expensive step to always be doing
Foreach key/value in Data Set B:
    If key is in Data Set A:
        Update Data Set A
According to the documentation, the MapReduce framework includes the following steps:
Map
Sort/Partition
Combine (optional)
Reduce
You've described one way to perform your join: loading all of Set A into memory in each Mapper. You're correct that this is inefficient.
Instead, observe that a large join can be partitioned into arbitrarily many smaller joins if both sets are sorted and partitioned by key. MapReduce sorts the output of each Mapper by key in step (2) above. Sorted Map output is then partitioned by key, so that one partition is created per Reducer. For each unique key, the Reducer will receive all values from both Set A and Set B.
To finish your join, the Reducer needs only to output the key and either the updated value from Set B, if it exists; otherwise, output the key and the original value from Set A. To distinguish between values from Set A and Set B, try setting a flag on the output value from the Mapper.
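The flagging idea can be sketched in plain Java (the "A:"/"B:" prefixes are illustrative): for each key, the reducer prefers the Set B value when one exists and otherwise keeps the Set A value:

```java
import java.util.Arrays;
import java.util.List;

public class UpdateJoinSketch {
    // Mimics one reduce() call: values carry a flag saying which set they came from.
    public static String resolve(List<String> flaggedValues) {
        String fromA = null, fromB = null;
        for (String v : flaggedValues) {
            if (v.startsWith("A:")) fromA = v.substring(2);
            else if (v.startsWith("B:")) fromB = v.substring(2);
        }
        return fromB != null ? fromB : fromA; // B updates A when both are present
    }

    public static void main(String[] args) {
        System.out.println(resolve(Arrays.asList("A:old", "B:new"))); // the B value wins
        System.out.println(resolve(Arrays.asList("A:only")));         // A is kept as-is
    }
}
```

Nothing is loaded into memory beyond the handful of values for the current key, which is exactly what makes this scale where the map-side approach does not.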
All of the answers posted so far are correct - this should be a Reduce-side join... but there's no need to reinvent the wheel! Have you considered Pig, Hive, or Cascading for this? They all have joins built-in, and are fairly well optimized.
This video tutorial by Cloudera gives a great description of how to do a large-scale Join through MapReduce, starting around the 12 minute mark.
Here are the basic steps he lays out for joining records from file B onto records from file A on key K, with pseudocode. If anything here isn't clear, I'd suggest watching the video as he does a much better job explaining it than I can.
In your Mapper:
K from file A:
tag K to identify as Primary Key
emit <K, value of K>
K from file B:
tag K to identify as Foreign Key
emit <K, record>
Write a Sorter and Grouper which will ignore the PK/FK tagging, so that your records are sent to the same Reducer regardless of whether they are a PK record or a FK record and are grouped together.
Write a Comparator which will compare the PK and FK keys and send the PK first.
The result of this step will be that all records with the same key will be sent to the same Reducer and be in the same set of values to be reduced. The record tagged with PK will be first, followed by all records from B which need to be joined. Now, the Reducer:
value_of_PK = values[0]            // first value is the value of your primary key
for value in values[1:]:
    value.replace(FK, value_of_PK) // replace the foreign key with the key's value
    emit <key, value>
The result of this will be file B, with all occurrences of K replaced by the value of K in file A. You can also extend this to effect a full inner join, or to write out both files in their entirety for direct database storage, but those are pretty trivial modifications once you get this working.
