I've been stuck on a problem for a long time. Any help would be appreciated.
I have a dataset file in the /home/hadoop/pig directory. I can view that file, so there is no permissions issue.
The dataset has 4 columns separated by "::" as the delimiter.
I'm running Pig in local mode from inside the /home/hadoop/pig directory.
ratingsData = LOAD 'ratings.dat' AS (line:chararray);
ratings = FOREACH ratingsData GENERATE FLATTEN(REGEX_EXTRACT_ALL(line,'(.*?)::(.*?)::(.*?)::(.*?)')) AS (uid:int, mid:int, rating:int, timestamp:long);
grouped_mid = GROUP ratings BY mid;
dump grouped_mid;
The above script fails. I can successfully dump the ratingsData and ratings relations, but not grouped_mid. Here's the bizarre part: the script below runs successfully.
ratingsData = LOAD 'ratings.dat' AS (line:chararray);
ratings = FOREACH ratingsData GENERATE FLATTEN(REGEX_EXTRACT_ALL(line,'(.*?)::(.*?)::(.*?)::(.*?)')) AS (uid:int, mid:int, rating:int, timestamp:long);
STORE ratings INTO 'ratingInfo.txt';
X = LOAD 'ratingInfo.txt' AS (uid:int, mid:int, rating:int, timestamp:long);
grouped_mid = GROUP X BY mid;
dump grouped_mid;
Obviously, the second script has a redundant step: I'm simply storing a relation and reloading it. I want to avoid this.
Any clarification or explanation would be highly appreciated.
Thanks much.
For reference, see this question: pig join with java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Integer. REGEX_EXTRACT_ALL returns a tuple of chararrays, and the AS clause only declares the schema without converting the underlying values, so the later GROUP on mid:int fails with a cast error. You can modify your script to cast the tuple explicitly:
ratingsData = LOAD 'ratings.dat' AS (line:chararray);
ratings = FOREACH ratingsData GENERATE FLATTEN((tuple(int, int, int, long))REGEX_EXTRACT_ALL(line,'(.*?)::(.*?)::(.*?)::(.*?)')) AS (uid:int, mid:int, rating:int, timestamp:long);
grouped_mid = GROUP ratings BY mid;
dump grouped_mid;
Tested.
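If you prefer not to cast the whole tuple at once, an equivalent sketch (assuming the same four-field ratings.dat layout) is to declare the extracted fields as chararrays and cast them in a second FOREACH:
ratingsData = LOAD 'ratings.dat' AS (line:chararray);
-- declare the regex output as chararrays; AS alone does not convert the values
extracted = FOREACH ratingsData GENERATE FLATTEN(REGEX_EXTRACT_ALL(line,'(.*?)::(.*?)::(.*?)::(.*?)')) AS (uid:chararray, mid:chararray, rating:chararray, timestamp:chararray);
-- cast each field explicitly to its target type
ratings = FOREACH extracted GENERATE (int)uid AS uid, (int)mid AS mid, (int)rating AS rating, (long)timestamp AS timestamp;
grouped_mid = GROUP ratings BY mid;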
Here is my problem today. I have created a relation as the result of a couple of transformations after reading it from Hive. The thing is that I want to store the final relation back into Hive after the analysis, but I can't. My code makes this clearer.
First, I LOAD from Hive and transform the result:
july = LOAD 'POC.july' USING org.apache.hive.hcatalog.pig.HCatLoader();
july_cl = FOREACH july GENERATE GetDay(ToDate(start_date)) AS day:int, start_station, duration;
jul_cl_fl = FILTER july_cl BY day == 31;
july_gr = GROUP jul_cl_fl BY (day,start_station);
july_result = FOREACH july_gr {
total_dura = SUM(jul_cl_fl.duration);
avg_dura = AVG(jul_cl_fl.duration);
qty_trips = COUNT(jul_cl_fl);
GENERATE FLATTEN(group),total_dura,avg_dura,qty_trips;
};
Now, when I try to store the relation july_result, I can't, because the schema has changed and I suppose it's no longer compatible with Hive:
STORE july_result INTO 'poc.july_analysis' USING org.apache.hive.hcatalog.pig.HCatStorer();
I have also tried to set an explicit schema for the final relation, but I haven't figured it out:
july_result = FOREACH july_gr {
total_dura = SUM(jul_cl_fl.duration);
avg_dura = AVG(jul_cl_fl.duration);
qty_trips = COUNT(jul_cl_fl);
GENERATE FLATTEN(group) as (day:int),total_dura as (total_dura:int),avg_dura as (avg_dura:int),qty_trips as (qty_trips:int);
};
After some research in the Hortonworks community, I found the solution for defining an output schema for a grouped relation in Pig. My new code looks like this:
july_result = FOREACH july_gr {
total_dura = SUM(jul_cl_fl.duration);
avg_dura = AVG(jul_cl_fl.duration);
qty_trips = COUNT(jul_cl_fl);
GENERATE FLATTEN(group) AS (day, code_station), (int)total_dura AS (total_dura:int), (float)avg_dura AS (avg_dura:float), (int)qty_trips AS (qty_trips:int);
};
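One caveat worth noting: HCatStorer requires the target Hive table (poc.july_analysis here) to already exist with a compatible schema. A quick, hedged check before storing is to compare Pig's view of the relation against the Hive table definition:
-- show the schema Pig will hand to HCatStorer
DESCRIBE july_result;
STORE july_result INTO 'poc.july_analysis' USING org.apache.hive.hcatalog.pig.HCatStorer();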
Thanks guys.
I have data that I need to group by two columns, then sum two other fields.
Suppose the names of these four columns are OS, device, view, and click. I basically want to know, for each OS and device combination, how many views and how many clicks there are.
(2,3346,1,)
(3,3953,1,1)
(25,4840,1,1)
(2,94840,1,1)
(14,0526,1,1)
(37,4864,1,)
(2,7353,1,)
This is what I have so far
A is the data: OS, device, view, click
B = GROUP A BY (OS,device);
Result = FOREACH B {
GENERATE group AS OS,device, SUM(view) AS visits, SUM(click) AS clicks;};
dump Result;
This doesn't work; the error message is: Projected field [OS] does not exist in schema: group:tuple(OS:int,device:long),B:bag{:tuple(OS:int,device:long,view:int,click:int)}.
Here is tested code; you were missing FLATTEN:
A = LOAD '/user/root/pig_data' using PigStorage(',') AS (OS, device, view, click);
B = GROUP A BY (OS, device);
RESULT = FOREACH B GENERATE FLATTEN(group) AS (OS, device), SUM(A.view) as views, SUM(A.click) as clicks;
dump RESULT;
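Note that rows like (2,3346,1,) load click as null, and SUM ignores nulls (a group whose clicks are all null sums to null, not 0). If you'd rather treat missing clicks as zero, a minimal sketch, assuming int/long field types:
A = LOAD '/user/root/pig_data' USING PigStorage(',') AS (OS:int, device:long, view:int, click:int);
-- replace null clicks with 0 before grouping
A_clean = FOREACH A GENERATE OS, device, view, (click IS NULL ? 0 : click) AS click;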
I think you meant B in your example instead of J2 or J3, which may be in your actual code. Note also that inside FOREACH B the bag keeps the name of the relation that was grouped (A here), so the sums must reference A, and SUM returns a wider type (long for int input), so declaring :int in AS won't hold. Try:
B = GROUP A BY (OS, device);
Result = FOREACH B GENERATE
    group.OS AS OS,
    group.device AS device,
    SUM(A.view) AS visits,
    SUM(A.click) AS clicks;
dump Result;
My data currently comes out as shown below, but I want the output to include a RANK that resets whenever the Product_id changes. My script is below; I have tried the RANK and DENSE RANK operators but still get no desired output.
trans_c1 = LOAD '/mypath/data_file.csv' using PigStorage(',') as (date,Product_id);
(date, Product_id)
(2015-01-13T18:00:40.622+05:30,B00XT)
(2015-01-13T18:00:40.622+05:30,B00XT)
(2015-01-13T18:00:40.622+05:30,B00XT)
(2015-01-13T18:00:40.622+05:30,B00XT)
(2015-01-13T18:00:40.622+05:30,B00OZ)
(2015-01-13T18:00:40.622+05:30,B00OZ)
(2015-01-13T18:00:40.622+05:30,B00OZ)
(2015-01-13T18:00:40.622+05:30,B00VB)
(2015-01-13T18:00:40.622+05:30,B00VB)
(2015-01-13T18:00:40.622+05:30,B00VB)
(2015-01-13T18:00:40.622+05:30,B00VB)
The final output should look like this, where the rank sequence changes with each change in Product_id and resets to 1. Is it possible to do that in Pig?
(1,2015-01-13T18:00:40.622+05:30,B00XT)
(2,2015-01-13T18:00:40.622+05:30,B00XT)
(3,2015-01-13T18:00:40.622+05:30,B00XT)
(4,2015-01-13T18:00:40.622+05:30,B00XT)
(1,2015-01-13T18:00:40.622+05:30,B00OZ)
(2,2015-01-13T18:00:40.622+05:30,B00OZ)
(3,2015-01-13T18:00:40.622+05:30,B00OZ)
(1,2015-01-13T18:00:40.622+05:30,B00VB)
(2,2015-01-13T18:00:40.622+05:30,B00VB)
(3,2015-01-13T18:00:40.622+05:30,B00VB)
(4,2015-01-13T18:00:40.622+05:30,B00VB)
This can be solved with the Piggybank functions Stitch and Over, or with DataFu's Enumerate function.
Script using the Piggybank functions:
REGISTER <path to piggybank folder>/piggybank.jar;
DEFINE Stitch org.apache.pig.piggybank.evaluation.Stitch;
DEFINE Over org.apache.pig.piggybank.evaluation.Over('int');
input_data = LOAD 'data_file.csv' USING PigStorage(',') AS (date:chararray, pid:chararray);
group_data = GROUP input_data BY pid;
rank_grouped_data = FOREACH group_data GENERATE FLATTEN(Stitch(input_data, Over(input_data, 'row_number')));
display_data = FOREACH rank_grouped_data GENERATE stitched::result AS rank_number, stitched::date AS date, stitched::pid AS pid;
DUMP display_data;
Script using DataFu's Enumerate function:
REGISTER <path to pig libraries>/datafu-1.2.0.jar;
DEFINE Enumerate datafu.pig.bags.Enumerate('1');
input_data = LOAD 'data_file.csv' USING PigStorage(',') AS (date:chararray, pid:chararray);
group_data = GROUP input_data BY pid;
data = FOREACH group_data GENERATE FLATTEN(Enumerate(input_data));
display_data = FOREACH data GENERATE $2, $0, $1;
DUMP display_data;
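One caveat (worth checking against your data): Enumerate numbers tuples in whatever order they happen to appear in the bag, and Pig does not guarantee that order after a GROUP. If the ranking must follow a specific order, a hedged sketch is to sort the bag in a nested FOREACH first:
data = FOREACH group_data {
    -- sort each pid's bag by date so row numbers are deterministic
    ordered = ORDER input_data BY date;
    GENERATE FLATTEN(Enumerate(ordered));
};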
The DataFu jar can be downloaded from the Maven repository: http://search.maven.org/#search%7Cga%7C1%7Cg%3a%22com.linkedin.datafu%22
Output:
(1,2015-01-13T18:00:40.622+05:30,B00OZ)
(2,2015-01-13T18:00:40.622+05:30,B00OZ)
(3,2015-01-13T18:00:40.622+05:30,B00OZ)
(1,2015-01-13T18:00:40.622+05:30,B00VB)
(2,2015-01-13T18:00:40.622+05:30,B00VB)
(3,2015-01-13T18:00:40.622+05:30,B00VB)
(4,2015-01-13T18:00:40.622+05:30,B00VB)
(1,2015-01-13T18:00:40.622+05:30,B00XT)
(2,2015-01-13T18:00:40.622+05:30,B00XT)
(3,2015-01-13T18:00:40.622+05:30,B00XT)
(4,2015-01-13T18:00:40.622+05:30,B00XT)
References:
Implementing row number function in apache pig
Usage of Apache Pig rank function
I am trying to calculate maximum values for different groups in a relation in Pig. The relation has three columns: patientid, featureid, and featurevalue (all int).
I group the relation by featureid and want to calculate the max feature value of each group. Here's the code:
grpd = GROUP features BY featureid;
DUMP grpd;
temp = FOREACH grpd GENERATE $0 as featureid, MAX($1.featurevalue) as val;
It's giving me an 'Invalid scalar projection: grpd' exception. I read on different forums that MAX takes a bag as input for such functions, but when I dump grpd, it does show a bag. Here's a small part of the output from the dump:
(5662,{(22579,5662,1)})
(5663,{(28331,5663,1),(2624,5663,1)})
(5664,{(27591,5664,1)})
(5665,{(30217,5665,1),(31526,5665,1)})
(5666,{(27783,5666,1),(30983,5666,1),(32424,5666,1),(28064,5666,1),(28932,5666,1)})
(5667,{(31257,5667,1),(27281,5667,1)})
(5669,{(31041,5669,1)})
What's the issue?
The issue was with column addressing: inside FOREACH grpd, the grouping key is named group and the bag keeps the name of the grouped relation (features), so they should be addressed by those aliases. Here's the correct working code:
grpd = GROUP features BY featureid;
temp = FOREACH grpd GENERATE group as featureid, MAX(features.featurevalue) as val;
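A quick way to see which aliases are available inside the FOREACH is DESCRIBE; the exact output depends on your load statement, but it looks roughly like this:
DESCRIBE grpd;
-- grpd: {group: int, features: {(patientid: int, featureid: int, featurevalue: int)}}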
I think I already know the answer to this, but I just wanted to check here before I give up and do something ugly.
I have a query that needs to count total clicks, and also total distinct users. Total clicks would just be this code without the DISTINCT:
report = FOREACH report GENERATE user, genre, title;
report = DISTINCT report;
report = GROUP report BY (genre, title);
My question is essentially: is there any way to write a conditional statement that would skip the DISTINCT step in this process? Pseudocode:
report = FOREACH report GENERATE user, genre, title;
if $report_type == 'users':
report = DISTINCT report;
end if
report = GROUP report BY (genre, title);
I'd rather not have two separate files, and so far the only solutions I can find involve a wrapper (Python, etc.) to handle it dynamically. I'd rather keep everything in a single .pig file, but I can't find a way to do it.
One option could be something like this; can you check it against your input?
input:
user1,action,aa
user2,comedy,cc
user3,drama,dd
user1,action,aa
user1,action,aa
user2,comedy,cc
PigScript:
A = LOAD 'input' USING PigStorage(',') AS (user, genre, title);
B = FOREACH A GENERATE user, genre, title;
C = GROUP B BY (genre, title);
D = FOREACH C {
    noDistValue = FOREACH B GENERATE user, genre, title;
    distValue = DISTINCT B;
    GENERATE $0 AS grp, noDistValue, distValue;
};
E = FOREACH D GENERATE grp,(('$report_type' == 'users')?distValue:noDistValue) AS mybag;
DUMP E;
Output1:
>>pig -x local -param "report_type=users" test.pig
((action,aa),{(user1,action,aa)})
((comedy,cc),{(user2,comedy,cc)})
((drama,dd),{(user3,drama,dd)})
Output2:
>>pig -x local -param "report_type=nonusers" test.pig
((action,aa),{(user1,action,aa),(user1,action,aa),(user1,action,aa)})
((comedy,cc),{(user2,comedy,cc),(user2,comedy,cc)})
((drama,dd),{(user3,drama,dd)})
If you want to calculate the count, project relation E further; you can also modify the above script to suit your needs.
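For example, a minimal sketch of counting from E (COUNT over the conditional bag yields distinct users or total rows per group, depending on the parameter):
F = FOREACH E GENERATE grp, COUNT(mybag) AS cnt;
DUMP F;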