Join three files in single mapreduce program - hadoop

How can I join three large datasets in a single MapReduce program? None of the three files can fit in memory. File1 has key K1, File2 has key K2, and File3 has both K1 and K2. I want to join File1 and File2 by referencing File3. Please let me know if there is any technique to do this. Thanks in advance!

You cannot do it with a single MapReduce job, since you have to create two separate custom writable classes: one for input 1 and input 2 (where the key has to be field K1 or K2 plus a join identifier) and a separate one for input 3 (where the key has to be fields K1 and K2 plus a join identifier). So it requires two MapReduce jobs.
MR 1 (join input 1 and input 3 based on key K1):
Mapper 1
Map output:
(K,V) => ((K1, input1), value)
Mapper 2
Map output:
(K,V) => ((K1, input3), value)
Append/join the data sets in the reducer.
MR 2 (join input 2 and the output of MR 1 based on key K2):
Mapper 1
Map output:
(K,V) => ((K2, input2), value)
Mapper 2
Map output:
(K,V) => ((K2, output of MR 1), value)
Append/join the data sets in the reducer.
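As a rough illustration, here is a minimal sketch of what MR 1 could look like. For brevity it carries the join tag inside the value and groups on K1 alone, rather than using the composite-key custom writables described above; the class names, field positions, and tab-delimited layout are all assumptions.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class JoinInput1WithInput3 {

    // Assumes input 1 lines look like: k1<TAB>rest
    public static class Input1Mapper extends Mapper<Object, Text, Text, Text> {
        @Override
        protected void map(Object key, Text line, Context ctx)
                throws IOException, InterruptedException {
            String[] f = line.toString().split("\t", 2);
            if (f.length < 2) return;
            ctx.write(new Text(f[0]), new Text("input1\t" + f[1]));  // tag the record with its source
        }
    }

    // Assumes input 3 lines look like: k1<TAB>k2<TAB>rest
    public static class Input3Mapper extends Mapper<Object, Text, Text, Text> {
        @Override
        protected void map(Object key, Text line, Context ctx)
                throws IOException, InterruptedException {
            String[] f = line.toString().split("\t", 2);
            if (f.length < 2) return;
            ctx.write(new Text(f[0]), new Text("input3\t" + f[1]));  // keep k2 and the rest as the payload
        }
    }

    public static class JoinReducer extends Reducer<Text, Text, Text, Text> {
        @Override
        protected void reduce(Text k1, Iterable<Text> values, Context ctx)
                throws IOException, InterruptedException {
            List<String> fromInput1 = new ArrayList<String>();
            List<String> fromInput3 = new ArrayList<String>();
            for (Text v : values) {
                String[] f = v.toString().split("\t", 2);
                if ("input1".equals(f[0])) fromInput1.add(f[1]); else fromInput3.add(f[1]);
            }
            // emit every input1 record joined with every input3 record sharing this k1
            for (String a : fromInput1)
                for (String b : fromInput3)
                    ctx.write(k1, new Text(a + "\t" + b));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance();
        job.setJarByClass(JoinInput1WithInput3.class);
        MultipleInputs.addInputPath(job, new Path(args[0]), TextInputFormat.class, Input1Mapper.class);
        MultipleInputs.addInputPath(job, new Path(args[1]), TextInputFormat.class, Input3Mapper.class);
        job.setReducerClass(JoinReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileOutputFormat.setOutputPath(job, new Path(args[2]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

MR 2 would follow the same pattern, keyed on K2, reading input 2 and the output of MR 1.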

Related

Number of Reducers and output order

When I call job.setNumReduceTasks(1), I get the output sorted by key. However, the output is not sorted by key when I remove this call.
So, should we expect to get sorted output from the reducer when we have more than one reducer task?
Thanks.
Output is sorted on the key within a single Reducer. However the default Partitioner is the result of a hash function, and so whilst each file will be sorted if using multiple Reducers, one file will not be a sorted continuation of the last. For example:
We have a word count job with three Reducers. The Mapper outputs:
(A,1)
(zebra,1)
(bat,1)
(zebra,1)
(frog,1)
(A,1)
The Partitioner looks like the following
public int getPartition(K key, V value, int numReduceTasks) {
return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
}
and so it could allocate the keys in the following way:
REDUCER 1    REDUCER 2    REDUCER 3
(A,1)        (frog,1)     (bat,1)
(A,1)
(zebra,1)
Notice that Reducer 1 doesn't contain A-F, Reducer 2 doesn't contain G-M and Reducer 3 doesn't contain N-Z, i.e. it's not splitting alphabetically. And that's why the overall output won't be sorted, but data will be sorted within each Reducer's output.
This makes sense as otherwise we could end up with a big skew. Say for example you're running a MapReduce job on some customer services data where the ID always starts with C - you wouldn't want everything to go to the same Reducer.
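For contrast, a partitioner that did split alphabetically might look like the hypothetical sketch below (this is not what Hadoop does by default). It would make the concatenated reducer outputs globally sorted, but it carries exactly the skew risk just described: every customer ID starting with 'C' would land on the same Reducer.

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Hypothetical range partitioner that buckets keys by their first letter.
public class AlphabetRangePartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numReduceTasks) {
        String k = key.toString();
        if (numReduceTasks <= 1 || k.isEmpty()) {
            return 0;
        }
        char first = Character.toLowerCase(k.charAt(0));
        int bucket = (first < 'a') ? 0 : (first > 'z') ? 25 : first - 'a';
        // spread the 26 letter buckets evenly over the available reducers
        return bucket * numReduceTasks / 26;
    }
}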

Can the Reducer read in the same order as the mapper output, instead of grouping unique keys with values?

I have a basic understanding of how Hadoop orders the data from Mapper to Reducer.
I have the following data written to the context in my Mapper. The data below is key, value pairs:
abc 1234
cde 2394
dec 8273
abc 2348
cde 8780
dec 6590
The keys abc, cde, dec repeat n times with the same or different values.
The reducer reads each key with a group of values, i.e.
abc {1234, 2348, ...} and so on for the other keys.
Question: is there a way to read the data into the reducer in the same order as the mapper output, instead of as unique keys grouped with values?
If you are required to process the data based on the header, then I think you can use the approach below:
Mapper:
Cut out the header, make it your key, and make the remaining data your value.
Now all of the data for that particular header will move to the reducer.
Reducer:
We will have these values in the reducer, without grouping:
abc 1234
cde 2394
dec 8273
abc 2348
cde 8780
dec 6590
Then we will be able to process the data individually.
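A minimal sketch of what that mapper and reducer could look like (the class names and the space-separated "header value" input layout are assumptions):

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class HeaderKeyJob {

    public static class HeaderMapper extends Mapper<Object, Text, Text, Text> {
        @Override
        protected void map(Object key, Text line, Context ctx)
                throws IOException, InterruptedException {
            String[] parts = line.toString().split("\\s+", 2);
            if (parts.length < 2) return;
            // the header becomes the key, the remaining data becomes the value
            ctx.write(new Text(parts[0]), new Text(parts[1]));
        }
    }

    public static class HeaderReducer extends Reducer<Text, Text, Text, Text> {
        @Override
        protected void reduce(Text header, Iterable<Text> values, Context ctx)
                throws IOException, InterruptedException {
            // every record for this header arrives here; process each value individually
            for (Text value : values) {
                ctx.write(header, value);
            }
        }
    }
}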

Key renumbering in map reduce

I am new to Hadoop and I am working on a program whose map-function input is a file whose keys look like this:
ID: value:
3 sd
37 g
5675 gk
68 oi
My file is about 10 gigabytes and I want to change these IDs and renumber them in descending order. I don't want to change the values.
My output must be like this:
ID: value:
5675 sd
68 g
37 gk
3 oi
I want to do this on a cluster of nodes. How can I do that?
I think I would need a global variable, and I can't have one on a cluster. What can I do?
You can do one map/reduce to order the IDs; then you'd have a file with the IDs in descending order.
You can then write a second map/reduce that joins that file with the unsorted file, where each mapper emits an enumerator (which can be calculated from the split size, to allow multiple maps), so that the mapper going over the first file emits "1 sd", "2 g", etc., and the mapper processing the IDs file emits "1 5675", "2 68". The reducer then joins the two files.
Here's an (untested) Pig 0.11 script that would do something along these lines:
A = load 'data' AS (id:chararray,value:chararray);
ID_RAW= FOREACH A GENERATE id;
DATA_RAW = FOREACH A GENERATE value;
ID_SORT= RANK ID_RAW BY id DESC DENSE;
DATA_SORT = RANK DATA_RAW DENSE;
ID_DATA = JOIN ID_SORT by $0, DATA_SORT by $0;
RESULT = FOREACH ID_DATA GENERATE ID_SORT::id, DATA_SORT::value;
STORE RESULT INTO 'output';
Before I say this: I like Arnon's answer of using Hadoop.
But since this is a small file (10G is not that big) and you only need to run it once, I would personally just write a small script.
Assuming a tab delimited file
sort -rn myfile.txt > myfile.sorted.text
paste myfile.sorted.text myfile.text | cut -f1,4 > newFile.txt
That might take a long time, certainly longer than using Hadoop, but it is simple and it works.

Join of two datasets in Mapreduce/Hadoop

Does anyone know how to implement the Natural-Join operation between two datasets in Hadoop?
More specifically, here's what I exactly need to do:
I have two sets of data:
Point information, stored as (tile_number, point_id:point_info). These are 1:n key-value pairs, meaning that for every tile_number there might be several point_id:point_info pairs.
Line information, stored as (tile_number, line_id:line_info). These are again 1:m key-value pairs, and for every tile_number there might be more than one line_id:line_info.
As you can see, the tile_numbers are the same in the two datasets. What I really need is to join these two datasets on each tile_number. In other words, for every tile_number we have n point_id:point_info and m line_id:line_info. What I want to do is to join all pairs of point_id:point_info with all pairs of line_id:line_info for every tile_number.
In order to clarify, here's an example:
For point pairs:
(tile0, point0)
(tile0, point1)
(tile1, point1)
(tile1, point2)
for line pairs:
(tile0, line0)
(tile0, line1)
(tile1, line2)
(tile1, line3)
what I want is as following:
for tile 0:
(tile0, point0:line0)
(tile0, point0:line1)
(tile0, point1:line0)
(tile0, point1:line1)
for tile 1:
(tile1, point1:line2)
(tile1, point1:line3)
(tile1, point2:line2)
(tile1, point2:line3)
Use a mapper that outputs tiles as keys and points/lines as values. You have to differentiate between the point output values and the line output values. For instance, you can use a special character (even though a binary approach would be much better).
So the map output will be something like:
tile0, _point0
tile1, _point0
tile2, _point1
...
tileX, *lineL
tileY, *lineK
...
Then, at the reducer, your input will have this structure:
tileX, [*lineK, ... , _pointP, ...., *lineM, ..., _pointR]
and you will have to take the values, separate the points from the lines, do a cross product, and output each pair of the cross product, like this:
tileX (lineK, pointP)
tileX (lineK, pointR)
...
If you can already easily differentiate between the point values and the line values (depending on your application specifications) you don't need the special characters (*,_)
Regarding the cross product you have to do in the reducer:
you first iterate through the entire values list and separate the values into two lists:
List<String> points;
List<String> lines;
Then do the cross-product using 2 nested for loops.
Then iterate through the resulting list and for each element output:
tile(current key), element_of_the_resulting_cross_product_list
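A sketch of that reducer-side cross product, assuming point values were prefixed with '_' and line values with '*' in the map output as described above:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class TileJoinReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text tile, Iterable<Text> values, Context ctx)
            throws IOException, InterruptedException {
        List<String> points = new ArrayList<String>();
        List<String> lines = new ArrayList<String>();
        for (Text v : values) {
            String s = v.toString();
            if (s.startsWith("_")) points.add(s.substring(1));
            else if (s.startsWith("*")) lines.add(s.substring(1));
        }
        // cross product: every point paired with every line of this tile
        for (String point : points) {
            for (String line : lines) {
                ctx.write(tile, new Text(point + ":" + line));
            }
        }
    }
}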
So basically you have two options here: a reduce-side join or a map-side join.
Here your group key is "tile". In a single reducer you are going to get all the output from the point pairs and the line pairs, but you will have to cache either the point pairs or the line pairs in an array. If neither of them can fit in a temporary in-memory array for a single group key (each unique tile), this method will not work for you. Remember that you don't have to hold both sides for a single group key ("tile") in memory; one will be sufficient.
If both sides for a single group key are large, then you will have to try a map-side join. It has some peculiar requirements, but you can fulfill them by pre-processing your data with some map/reduce jobs that run an equal number of reducers for both datasets.

How can I use the map datatype in Apache Pig?

I'd like to use Apache Pig to build a large key -> value mapping, look things up in the map, and iterate over the keys. However, there does not even seem to be syntax for doing these things; I've checked the manual, wiki, sample code, Elephant book, Google, and even tried parsing the parser source. Every single example loads map literals from a file... and then never uses them. How can you use Pig's maps?
First, there doesn't seem to be a way to load a 2-column CSV file into a map directly. If I have a simple map.csv:
1,2
3,4
5,6
And I try to load it as a map:
m = load 'map.csv' using PigStorage(',') as (M: []);
dump m;
I get three empty tuples:
()
()
()
So I try to load tuples and then generate the map:
m = load 'map.csv' using PigStorage(',') as (key:chararray, val:chararray);
b = foreach m generate [key#val];
ERROR 1000: Error during parsing. Encountered " "[" "[ "" at line 1, column 24.
...
Many variations on the syntax also fail (e.g., generate [$0#$1]).
OK, so I munge my map into Pig's map literal format as map.pig:
[1#2]
[3#4]
[5#6]
And load it up:
m = load 'map.pig' as (M: []);
Now let's load up some keys and try lookups:
k = load 'keys.csv' as (key);
dump k;
3
5
1
c = foreach k generate m#key; /* Or m[key], or... what? */
ERROR 1000: Error during parsing. Invalid alias: m in {M: map[ ]}
Hrm, OK, maybe since there are two relations involved, we need a join:
c = join k by key, m by /* ...um, what? */ $0;
dump c;
ERROR 1068: Using Map as key not supported.
c = join k by key, m by m#key;
dump c;
Error 1000: Error during parsing. Invalid alias: m in {M: map[ ]}
Fail. How do I refer to the key (or value) of a map? The map schema syntax doesn't seem to let you even name the key and value (the mailing list says there's no way to assign types).
Finally, I'd just like to be able to find all the keys in my map:
d = foreach m generate ...oh, forget it.
Is Pig's map type half-baked? What am I missing?
Currently Pig maps need the key to be a chararray (string) literal that you supply, not a variable which contains a string. So in map#key, the key has to be a constant string that you supply (e.g. map#'keyvalue').
The typical use case for this is to load a complex data structure where one of the elements is a key-value pair; later, in a FOREACH statement, you can refer to a particular value based on the key you are interested in.
http://pig.apache.org/docs/r0.9.1/basic.html#map-schema
In Pig version 0.10.0 there is a new function available called "TOMAP" (http://pig.apache.org/docs/r0.10.0/func.html#tomap) that converts its odd (chararray) parameters to keys and even parameters to values. Unfortunately I haven't found it to be that useful, though, since I typically deal with arbitrary dicts of varying lengths and keys.
I would find a TOMAP function that took a tuple as a single argument, instead of a variable number of parameters, to be much more useful.
This isn't a complete solution to your problem, but the availability of TOMAP gives you some more options for constructing a real solution.
Great question!
I personally do not like maps in Pig. They have a place in traditional programming languages like Java, C#, etc., where it's really handy and fast to look up a key in a map. On the other hand, maps in Pig have very limited features.
As you rightly pointed, one can not lookup variable key in the Map in Pig. The key needs to be Constant. e.g. myMap#'keyFoo' is allowed but myMap#$SOME_VARIABLE is not allowed.
If you think about it, you do not need a map in Pig. One usually loads the data from some source, transforms it, joins it with some other dataset, filters it, transforms it, and so on. JOIN actually does a good job of looking up variable keys in the data.
e.g. data1 has 2 columns A and B, and data2 has 3 columns X, Y, Z. If you join data1 BY A with data2 BY Z, JOIN does the work of a map (from a traditional language), mapping a value of column Z to a value of column B (via column A). So data1 essentially represents a map A -> B.
So why do we need Map in Pig?
Usually Hadoop data are dumps of different data sources from traditional applications. If the original data sources contain maps, the HDFS data will contain a corresponding map.
How can one handle the Map data?
There are really 2 use cases:
Map keys are constants.
e.g. HttpRequest header data contains time, server, clientIp as the keys in a map. To access the value of a particular key, one can access it with a constant key,
e.g. header#'clientIp'.
Map keys are variables.
In these cases, you would most probably want to JOIN the map keys with some other dataset. I usually convert the map to a bag using a UDF MapToBag, which converts the map data into a bag of 2-field tuples (key, value). Once the map data is converted to a bag of tuples, it's easy to join it with other datasets.
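The MapToBag UDF itself isn't shown in this answer; as a rough sketch under those assumptions, such a UDF could look something like this:

import java.io.IOException;
import java.util.Map;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.BagFactory;
import org.apache.pig.data.DataBag;
import org.apache.pig.data.Tuple;
import org.apache.pig.data.TupleFactory;

// Turns a Pig map into a bag of (key, value) tuples so it can be FLATTENed and joined.
public class MapToBag extends EvalFunc<DataBag> {
    private static final BagFactory BAG_FACTORY = BagFactory.getInstance();
    private static final TupleFactory TUPLE_FACTORY = TupleFactory.getInstance();

    @Override
    public DataBag exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0 || input.get(0) == null) {
            return null;
        }
        @SuppressWarnings("unchecked")
        Map<String, Object> map = (Map<String, Object>) input.get(0);
        DataBag bag = BAG_FACTORY.newDefaultBag();
        for (Map.Entry<String, Object> entry : map.entrySet()) {
            Tuple t = TUPLE_FACTORY.newTuple(2);
            t.set(0, entry.getKey());
            t.set(1, entry.getValue());
            bag.add(t);
        }
        return bag;
    }
}

After registering the jar, usage would look something like B = FOREACH A GENERATE FLATTEN(MapToBag(my_map)); and the resulting (key, value) tuples can then be joined with other relations.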
I hope this helps.
1) If you want to load map data, it should look like "[programming#SQL,rdbms#Oracle]".
2) If you want to load tuple data, it should look like "(first_name_1234,middle_initial_1234,last_name_1234)".
3) If you want to load bag data, it should look like "{(project_4567_1),(project_4567_2),(project_4567_3)}".
My file pigtest.csv looks like this:
1234|emp_1234#company.com|(first_name_1234,middle_initial_1234,last_name_1234)|{(project_1234_1),(project_1234_2),(project_1234_3)}|[programming#SQL,rdbms#Oracle]
4567|emp_4567#company.com|(first_name_4567,middle_initial_4567,last_name_4567)|{(project_4567_1),(project_4567_2),(project_4567_3)}|[programming#Java,OS#Linux]
my schema:
a = LOAD 'pigtest.csv' using PigStorage('|') AS (employee_id:int, email:chararray, name:tuple(first_name:chararray, middle_name:chararray, last_name:chararray), project_list:bag{project: tuple(project_name:chararray)}, skills:map[chararray]) ;
b = FOREACH a GENERATE employee_id, email, name.first_name, project_list, skills#'programming' ;
dump b;
I think you need to think in terms of relations, where the map is just one field of one record. Then you can apply operations to the relations, such as joining the two sets, data and mapping:
Input
$ cat data.txt
1
2
3
4
5
$ cat mapping.txt
1 2
2 4
3 6
4 8
5 10
Pig
mapping = LOAD 'mapping.txt' AS (key:CHARARRAY, value:CHARARRAY);
data = LOAD 'data.txt' AS (value:CHARARRAY);
-- list keys
mapping_keys =
FOREACH mapping
GENERATE key;
DUMP mapping_keys;
-- join mapping to data
mapped_data =
JOIN mapping BY key, data BY value;
DUMP mapped_data;
Output
> # keys
(1)
(2)
(3)
(4)
(5)
> # mapped data
(1,2,1)
(2,4,2)
(3,6,3)
(4,8,4)
(5,10,5)
This answer could also help you if you just want to do a simple look up:
pass-a-relation-to-a-pig-udf-when-using-foreach-on-another-relation
You can load up any data, then convert and store it in key-value format to read back later:
data = LOAD 'somedata.csv' USING PigStorage(',');
STORE data INTO 'folder' USING PigStorage('#');
You can then read it back as map data.
