Let's say I have a set of values in file.txt:
a,b,c
a,b,d
k,l,m
k,l,n
k,l,o
And my code is:
file = LOAD 'file.txt' using PigStorage(',');
events = foreach file generate session_id, user_id, code, type;
gr = group events by (session_id, user_id);
and after grouping I get this set of values:
((a,b),{(a,b,c),(a,b,d)})
((k,l),{(k,l,m),(k,l,n),(k,l,o)})
And I'd like to have:
(a,b,(c,d))
(k,l,(m,n,o))
Have you got any idea how to do it?
Regards
Pawel
Note: you are inconsistent in your question. Your FOREACH line references session_id, user_id, code, and type, but your LOAD uses PigStorage without an AS schema, so those field names are never defined. Also, that FOREACH has four fields, while your sample data only has three. I'll assume that type doesn't exist in order to answer your question.
After your gr relation, you are left with the group by key (in this case (session_id, user_id)) in an automatically generated tuple called group.
So, first step: gr2 = FOREACH gr GENERATE FLATTEN(group);
This will give you the tuples (a,b) and (k,l). You need to use FLATTEN because group is a tuple and you are asking for session_id and user_id to be individual columns. FLATTEN does that for you.
Ok, so now modify the gr2 line to also use a projection to tease out the third value:
gr2 = FOREACH gr GENERATE FLATTEN(group), events.code;
events.code creates a bag out of all the code values. events is the name of the bag of grouped tuples (it's named after the original relation).
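Putting it all together as one script (a minimal sketch; the field names in the AS clause are my assumption based on your sample data):
file = LOAD 'file.txt' USING PigStorage(',')
       AS (session_id:chararray, user_id:chararray, code:chararray);
events = FOREACH file GENERATE session_id, user_id, code;
gr = GROUP events BY (session_id, user_id);
gr2 = FOREACH gr GENERATE FLATTEN(group), events.code;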
This should give you:
(a,b,{(c),(d)})
(k,l,{(m),(n),(o)})
It's very important to note that the values in the list are in a bag, not a tuple as you asked for. Keeping them in a bag is the right idea, because a bag is a variable-length list while a tuple is not.
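That said, if you really do need a tuple in that position (as in your desired output), newer Pig versions ship a builtin BagToTuple that you could apply on top of gr2; a sketch:
gr3 = FOREACH gr2 GENERATE $0, $1, BagToTuple($2);
-- (a,b,(c,d))
-- (k,l,(m,n,o))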
Additional advice: Understanding how GROUP BY outputs data is something I see a lot of people struggle with when first using Pig. If you think my answer doesn't make much sense, I'd recommend spending some time to really get to understand GROUP BY. Understanding it, rather than treating it as magic, will pay off in the long run.
Related
I have a result which is in the form of a tuple of tuples. I need to put all the data from one tuple into one column and the data from the other tuples into their own columns. I don't know how to achieve that. Below is my data.
Sample:
((completed,completed,completed,completed,completed,completed,completed,completed,completed,completed,completed,completed,completed,completed,completed,completed,completed,completed,completed),(10160-0),(20140403,20151207,20160313,20101225,20100420,20110208,20100419,20110310,20100412,20120130,20110729),(20160306,20110822,20110822,20110822,20110321,20110608,20110822,20120326,20110822),(24,12,24,24,7,24,8,8,7,24,24,24,24,6),(h,h,h,h,d,h,h,h,d,h,h,h,h,h),(1,30,1,1,1,1,1,1,1,30,1,1,1,1,0.0,1,1,1),(1,Ointment,Tablet,Tablet,1,Tablet,1,1,Cream,Ointment,1,Tablet_ER_24HR,1,1,Tablet,Cream,Tablet_Disperse,Teaspoon(s))
Expected output:
completed,10160-0,20140403,20160306,24,h,1,1
completed,10160-0,20151207,20110822,12,h,30,Ointment
completed,10160-0,20160313,20110822,24,h,1,Tablet
completed,10160-0,20101225,20110822,24,h,1,1
completed,10160-0,20100420,20110321,7,h,1,1
completed,10160-0,20110208,20110608,24,h,1,Cream
completed,10160-0,20100419,20110822,8,h,1,Ointment
completed,10160-0,20110310,20120326,8,d,1,1
completed,10160-0,20100412,20110822,7,h,30,Tablet_ER_24HR
TOBAG should do it. Assuming you have a relation A with a tuple of tuples:
B = FOREACH A GENERATE TOBAG(*);
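As a small, simplified illustration of what that does (the two-field schema below is an assumption, not your full data):
-- suppose A has two tuple-typed fields:
-- A: (t1:tuple(s1:chararray, s2:chararray), t2:tuple(s3:chararray))
B = FOREACH A GENERATE TOBAG(*);
-- TOBAG(*) expands to TOBAG(t1, t2); each tuple becomes one element of the bag:
-- ({(completed,completed),(10160-0)})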
I have a Pig Latin related problem:
I have this data below (in one row):
A = LOAD 'records' AS (f1:chararray, f2:chararray,f3:chararray, f4:chararray,f5:chararray, f6:chararray);
DUMP A;
(FITKA,FINVA,FINVU,FEEVA,FETKA,FINVA)
Now I have another dataset:
B = LOAD 'values' AS (f1:chararray, f2:chararray);
Dump B;
(FINVA,0.454535)
(FITKA,0.124411)
(FEEVA,0.123133)
And I would like to join those two datasets: take the corresponding value from dataset B and place it beside each value from dataset A. The expected output is below:
FITKA 0.123133, FINVA 0.454535 and so on ..
(They can also be like: FITKA, 0.123133, FINVA, 0.454535 and so on .. )
And then I would be able to multiply the values (0.123133 x 0.454535 and so on) because they are on the same row now, and this is what I want.
Of course I can join column by column, but then the values end up at the end of the row and I have to clean them up with another FOREACH ... GENERATE. I want a simpler solution without too many joins, which may cause performance issues.
Dataset A is text (a sentence, in a way).
So what are my options to achieve this?
Any help would be nice.
A sentence can be represented as a tuple that contains a bag of (word, count) tuples.
Therefore, I suggest you change the way you store your data to the following format:
sentence:tuple(words:bag{wordcount:tuple(word, count)})
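One way to act on that suggestion, sketched here under the assumption that each word can be tagged with a sentence_id (the relation names, files, and the LEFT OUTER choice below are mine, not from your post): flatten to one word per row, join against the values once, then group back into a bag per sentence.
-- one word per row, tagged with a hypothetical sentence_id
A = LOAD 'words' AS (sentence_id:int, word:chararray);
B = LOAD 'values' AS (word:chararray, weight:double);
-- a single JOIN attaches the weight to every word
J = JOIN A BY word LEFT OUTER, B BY word;
-- group back so each sentence becomes one row with a bag of (word, weight) tuples
G = GROUP J BY A::sentence_id;
S = FOREACH G GENERATE group, J.(A::word, B::weight);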
I have the following data set:
f1,f2,f3,f4,f5,f6
I am looking for the count of f6 along with the rest of the fields:
f1,f2,f3,f4,f5,5
f1,f2,f3,f4,f5,3
and so on.
I tried this code, but it takes too long to execute:
A = LOAD 'file a';
B = GROUP A BY f6;
C = FOREACH B GENERATE FLATTEN(group) as f6, FLATTEN(f1), FLATTEN(f2), FLATTEN(f3), FLATTEN(f4), FLATTEN(f5), COUNT(f6);
Is there any better way to achieve what I am looking for ?
If I simply try to get the count without FLATTEN, the fields end up in a bag, but I want the final output as a tuple.
So trying this gives me the output as a bag:
C = FOREACH B GENERATE FLATTEN(group) as f6, A.f1, A.f2, A.f3, A.f4, A.f5, COUNT(f6);
All inputs are appreciated.
Cheers
It is also possible to flatten the grouped bag itself rather than the group key.
A = LOAD 'file a' AS (f1, f2, f3, f4, f5, f6);
B = GROUP A BY f6;
C = FOREACH B GENERATE FLATTEN(A), COUNT(A) as f6_count;
EDIT 1:
The key is using FLATTEN(A) instead of FLATTEN(group).
FLATTEN(A) will produce a tuple with all of the columns from the original relation (f1, f2, f3, f4, f5, f6), including those that were not used in the GROUP BY, and will get rid of the bag.
FLATTEN(group) will return only the columns used in the GROUP BY, in this case f6.
The advantage of this approach is that it is very efficient and requires a single MapReduce job to execute. Any solution that involves a JOIN adds an extra MapReduce job.
As a general rule of thumb in Pig, Hive, and MapReduce, GROUP BY and JOIN operations are usually executed as separate MR jobs, and reducing the number of MR jobs leads to improved performance.
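For a concrete end-to-end sketch of the single-job approach (the comma delimiter and the toy rows in the comments are assumptions, not from the original post):
-- input.txt:
-- x1,x2,x3,x4,x5,p
-- y1,y2,y3,y4,y5,p
-- z1,z2,z3,z4,z5,q
A = LOAD 'input.txt' USING PigStorage(',')
    AS (f1:chararray, f2:chararray, f3:chararray, f4:chararray, f5:chararray, f6:chararray);
B = GROUP A BY f6;
C = FOREACH B GENERATE FLATTEN(A), COUNT(A) AS f6_count;
-- yields: (x1,x2,x3,x4,x5,p,2), (y1,y2,y3,y4,y5,p,2), (z1,z2,z3,z4,z5,q,1)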
Let's say I have input_file.txt (user_id, event_code, event_date):
1,a,1
1,b,2
2,a,3
2,b,4
2,b,5
2,b,6
2,c,7
2,b,8
As you can see, user_id = 2 has events like this: abbbcb.
I'd like to have a result like this:
1,{(a,1),(b,2)}
2,{(a,2),(b,6),(c,7),(b,8)}
So when there are several events in a row with the same code, I'd like to take only the last one.
Can you please share any hints?
Regards
Pawel
The main thing you are describing is what GROUP BY does.
In this case:
B = GROUP A BY user_id;
This gets your records together by user_id. Your data will now look like this:
1,{(a,1),(b,2)}
2,{(a,2),(b,6),(c,7),(b,8)}
You say you only want the last one (I assume you mean the one with the greatest event_date). To do this, you can do a nested FOREACH with an ORDER BY to sort by date, and then take the first one with LIMIT. Note that this has arbitrary behavior when there are ties.
C = FOREACH B {
DA = ORDER A BY event_date DESC;
DB = LIMIT DA 1;
GENERATE FLATTEN(group), FLATTEN(DB.event_code), FLATTEN(DB.event_date);
}
Your data should now look like this:
1,b,2
2,b,8
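Putting that first approach together as one runnable sketch (the comma delimiter and the schema types are my assumptions based on your sample data):
A = LOAD 'input_file.txt' USING PigStorage(',')
    AS (user_id:int, event_code:chararray, event_date:int);
B = GROUP A BY user_id;
C = FOREACH B {
    DA = ORDER A BY event_date DESC;
    DB = LIMIT DA 1;
    GENERATE group AS user_id, FLATTEN(DB.event_code) AS event_code, FLATTEN(DB.event_date) AS event_date;
}
DUMP C;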
Another option would be to use a UDF to write some custom behavior on the groups given by GROUP BY:
B = GROUP A BY user_id;
C = FOREACH B GENERATE YourUDFThatYouBuilt(group, A);
In that UDF you'd write whatever custom behavior you want (in this case, return the tuple with the greatest date).
It seems like you could use the DistinctBy UDF from Apache DataFu to achieve this. This UDF, given a bag, returns the first instance found for a given field. In your case the field you care about is event_code. But we have to reverse the order, as you actually want the last instance.
One clarification though. Correct me if I'm wrong, but I think the intended output is:
1,{(a,1),(b,2)}
2,{(a,3),(b,6),(c,7),(b,8)}
That is, the a event for member 2 is (a,3); there is no (a,2) event for member 2 in the input (the event with date 2 belongs to member 1).
Here's how you can do it:
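-- note: this assumes the DataFu jar has already been REGISTERed in the script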
-- pass in 1 because we want distinct by event code (position 1)
define DistinctBy datafu.pig.bags.DistinctBy('1');
C = FOREACH (GROUP A BY user_id) {
-- reverse so we can take the last event code occurrence
A_reversed = ORDER A BY event_date DESC;
-- use DistinctBy to get the first tuple having an occurrence of a field value
A_distinct_by_code = DistinctBy(A_reversed);
-- put back in order again
A_ordered = ORDER A_distinct_by_code BY event_date ASC;
GENERATE group as user_id, A_ordered.(event_code,event_date);
}
I have a set of records that I am loading from a file, and the first thing I need to do is get the max and min of a column.
In SQL I would do this with subqueries like this:
select c.state, c.population,
(select max(c.population) from state_info c) as max_pop,
(select min(c.population) from state_info c) as min_pop
from state_info c
I assume there must be an easy way to do this in Pig as well, but I'm having trouble finding it. Pig has MAX and MIN functions, but when I tried the following it didn't work:
records=LOAD '/Users/Winter/School/st_incm.txt' AS (state:chararray, population:int);
with_max = FOREACH records GENERATE state, population, MAX(population);
This didn't work. I had better luck adding an extra column with the same value to each row, grouping on that column, and then getting the max of that group. This seems like a convoluted way of getting what I want, so I thought I'd ask if anyone knows a simpler way.
Thanks in advance for the help.
As you said, you need to group all the data together, but no extra column is required if you use GROUP ALL.
Pig
records = LOAD 'states.txt' AS (state:chararray, population:int);
records_group = GROUP records ALL;
with_max = FOREACH records_group
GENERATE
FLATTEN(records.(state, population)), MAX(records.population);
Input
CA 10
VA 5
WI 2
Output
(CA,10,10)
(VA,5,10)
(WI,2,10)
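Your SQL also asks for the minimum; MIN can be added in the same FOREACH over the grouped relation. A sketch, reusing the relations above:
with_max_min = FOREACH records_group
               GENERATE
               FLATTEN(records.(state, population)),
               MAX(records.population) AS max_pop,
               MIN(records.population) AS min_pop;
-- (CA,10,10,2)
-- (VA,5,10,2)
-- (WI,2,10,2)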