Grouping a databag by identical values in pig - hadoop
I created the following Pig script to filter the sentences from a collection of web documents (Common Crawl) that mention a movie title (from a predefined data file of movie titles), apply sentiment analysis to those sentences, and group the resulting sentiments by movie.
register ../commoncrawl-examples/lib/*.jar;
set mapred.task.timeout= 1000;
register ../commoncrawl-examples/dist/lib/commoncrawl-examples-1.0.1-HM.jar;
register ../dist/lib/movierankings-1.jar;
register ../lib/piggybank.jar;
register ../lib/stanford-corenlp-full-2014-01-04/stanford-corenlp-3.3.1.jar;
register ../lib/stanford-corenlp-full-2014-01-04/stanford-corenlp-3.3.1-models.jar;
register ../lib/stanford-corenlp-full-2014-01-04/ejml-0.23.jar;
register ../lib/stanford-corenlp-full-2014-01-04/joda-time.jar;
register ../lib/stanford-corenlp-full-2014-01-04/jollyday.jar;
register ../lib/stanford-corenlp-full-2014-01-04/xom.jar;
DEFINE IsNotWord com.moviereviewsentimentrankings.IsNotWord;
DEFINE IsMovieDocument com.moviereviewsentimentrankings.IsMovieDocument;
DEFINE ToSentenceMoviePairs com.moviereviewsentimentrankings.ToSentenceMoviePairs;
DEFINE ToSentiment com.moviereviewsentimentrankings.ToSentiment;
DEFINE MoviesInDocument com.moviereviewsentimentrankings.MoviesInDocument;
DEFINE SequenceFileLoader org.apache.pig.piggybank.storage.SequenceFileLoader();
-- LOAD pages, movies and words
pages = LOAD '../data/textData-*' USING SequenceFileLoader as (url:chararray, content:chararray);
movies_fltr_grp = LOAD '../data/movie_fltr_grp_2/part-*' as (group: chararray,movies_fltr: {(movie: chararray)});
-- FILTER pages containing movie
movie_pages = FILTER pages BY IsMovieDocument(content, movies_fltr_grp.movies_fltr);
-- SPLIT pages containing movie in sentences and create movie-sentence pairs
movie_sentences = FOREACH movie_pages GENERATE flatten(ToSentenceMoviePairs(content, movies_fltr_grp.movies_fltr)) as (content:chararray, movie:chararray);
-- Calculate sentiment for each movie-sentence pair
movie_sentiment = FOREACH movie_sentences GENERATE flatten(ToSentiment(movie, content)) as (movie:chararray, sentiment:int);
-- GROUP movie-sentiment pairs by movie
movie_sentiment_grp_tups = GROUP movie_sentiment BY movie;
-- Reformat and print movie-sentiment pairs
movie_sentiment_grp = FOREACH movie_sentiment_grp_tups GENERATE group, movie_sentiment.sentiment AS sentiments:{(sentiment: int)};
describe movie_sentiment_grp;
Test runs on a small subset of the web crawl successfully gave me pairs of a movie title with a databag of integers (from 1 to 5, representing very negative, negative, neutral, positive and very positive). As a last step I would like to transform this data into pairs of a movie title and a databag containing tuples with all distinct integers existing for this movie title and their counts. The describe movie_sentiment_grp at the end of the script returns:
movie_sentiment_grp: {group: chararray,sentiments: {(sentiment: int)}}
So basically I probably need to FOREACH over each element of movie_sentiment_grp, GROUP the sentiments databag into groups of identical values, and then use the COUNT() function to get the number of elements in each group. However, I was not able to find anything on how to group a databag of integers into groups of identical values. Does anyone know how to do this?
Dummy solution:
movie_sentiment_grp_cnt = FOREACH movie_sentiment_grp{
sentiments_grp = GROUP sentiments BY ?;
}
Check out the CountEach UDF from Apache DataFu. Given a bag it will produce a new bag of the distinct tuples, with the count appended to each corresponding tuple.
This example from the documentation should make it clear:
DEFINE CountEachFlatten datafu.pig.bags.CountEach('flatten');
-- input:
-- ({(A),(A),(C),(B)})
input = LOAD 'input' AS (B: bag {T: tuple(alpha:CHARARRAY, numeric:INT)});
-- output_flatten:
-- ({(A,2),(C,1),(B,1)})
output_flatten = FOREACH input GENERATE CountEachFlatten(B);
For your case:
DEFINE CountEachFlatten datafu.pig.bags.CountEach('flatten');
movie_sentiment_grp_cnt = FOREACH movie_sentiment_grp GENERATE
group,
CountEachFlatten(sentiments);
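Note that the DataFu jar itself also needs to be registered before the DEFINE, alongside the other register statements at the top of your script. The path and version below are an assumption; point it at wherever your copy of DataFu lives:
register ../lib/datafu-1.2.0.jar;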
You were on the right track. movie_sentiment_grp is in the right format, and a nested FOREACH would be correct, except that you cannot use a GROUP inside it. The solution is to use a UDF. Something like this:
myudfs.py
#!/usr/bin/python
@outputSchema('sentiments: {(sentiment:int, count:int)}')
def count_sentiments(bag):
    # Each element of a Pig bag arrives in Jython as a tuple, e.g. (4,)
    res = {}
    for t in bag:
        s = t[0]
        res[s] = res.get(s, 0) + 1
    # items() returns (sentiment, count) pairs, matching the declared output schema
    return res.items()
This UDF is used like:
Register 'myudfs.py' using jython as myfuncs;
movie_sentiment_grp_cnt = FOREACH movie_sentiment_grp
GENERATE group, myfuncs.count_sentiments(sentiments);
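As a quick sanity check, the UDF's declared output schema means a DESCRIBE on the result should report something along these lines (derived from the schema annotation, not from an actual run):
describe movie_sentiment_grp_cnt;
-- movie_sentiment_grp_cnt: {group: chararray,sentiments: {(sentiment: int,count: int)}}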
Related
How to get a SQL like GROUP BY using Apache Pig?
I have the following input called movieUserTagFltr:
(260,{(260,starwars),(260,George Lucas),(260,sci-fi),(260,cult classic),(260,Science Fiction),(260,classic),(260,supernatural powers),(260,nerdy),(260,Science Fiction),(260,critically acclaimed),(260,Science Fiction),(260,action),(260,script),(260,"imaginary world),(260,space),(260,Science Fiction),(260,"space epic),(260,Syfy),(260,series),(260,classic sci-fi),(260,space adventure),(260,jedi),(260,awesome soundtrack),(260,awesome),(260,coming of age)})
(858,{(858,Katso Sanna!)})
(924,{(924,slow),(924,boring)})
(1256,{(1256,Marx Brothers)})
It follows the schema:
(movieId:int, tags:bag{(movieId:int, tag:chararray)})
Basically the first number represents a movie id, and the subsequent bag holds all the keywords associated with that movie. I would like to group those keywords in such a way that the output would look something like this:
(260,{(1,starwars),(1,George Lucas),(1,sci-fi),(1,cult classic),(4,Science Fiction),(1,classic),(1,supernatural powers),(1,nerdy),(1,critically acclaimed),(1,action),(1,script),(1,"imaginary world),(1,space),(1,"space epic),(1,Syfy),(1,series),(1,classic sci-fi),(1,space adventure),(1,jedi),(1,awesome soundtrack),(1,awesome),(1,coming of age)})
(858,{(1,Katso Sanna!)})
(924,{(1,slow),(1,boring)})
(1256,{(1,Marx Brothers)})
Note that the tag Science Fiction appears 4 times for the movie with id 260. Using GROUP BY and COUNT I managed to count the distinct keywords for each movie using the following script:
sum = FOREACH group_data {
unique_tags = DISTINCT movieUserTagFltr.tags::tag;
GENERATE group, COUNT(unique_tags) as tag;
};
But that only returns a global count, and I want a local count. So the logic of what I was thinking was:
result = iterate over each tuple of group_data {
generate a tuple with $0, and a bag with {
foreach distinct tag that group_data has in its $1 variable do {
generate a tuple like: (tag_name, count of how many times that tag appeared on $1)
}
}
}
You can flatten out your original input so that each movieID and tag are their own record. Then group by movieID and tag to get a count for each combination. Finally, group by movieID so that you end up with a bag of tags and counts for each movie. Let's say you start with movieUserTagFltr with the schema you described:
A = FOREACH movieUserTagFltr GENERATE FLATTEN(tags) AS (movieID, tag);
B = GROUP A BY (movieID, tag);
C = FOREACH B GENERATE FLATTEN(group) AS (movieID, tag), COUNT(A) AS movie_tag_count;
D = GROUP C BY movieID;
Your final schema is:
D: {group: int,C: {(movieID: int,tag: chararray,movie_tag_count: long)}}
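If you want the inner tuples in the (count, tag) order shown in the desired output, a final projection over the bag should do it (a sketch, untested; the alias E and the field order are mine, not part of the answer above):
E = FOREACH D GENERATE group AS movieId, C.(movie_tag_count, tag) AS tags;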
Filter inner bag in Pig
The data looks like this:
22678, {(112),(110),(2)}
656565, {(110), (109)}
6676, {(2),(112)}
This is the data structure:
(id:chararray, event_list:{innertuple:(innerfield:chararray)})
I want to filter those rows where event_list contains 2. I thought initially to flatten the data and then filter those rows that have 2. Somehow flatten doesn't work on this dataset. Can anyone please help?
There might be a simpler way of doing this, like a bag lookup etc. Otherwise, with basic Pig one way of achieving this is:
data = load 'data.txt' AS (id:chararray, event_list:bag{});
-- flatten the bag, in order to transpose each element to a separate row.
flattened = foreach data generate id, flatten(event_list);
-- keep only those rows where the value is 2.
filtered = filter flattened by (int) $1 == 2;
-- keep only distinct ids.
dist = distinct (foreach filtered generate $0 as (id:chararray));
-- join the distinct ids back to the original relation.
jnd = join data by id, dist by id;
-- remove extra fields, keep the original fields.
result = foreach jnd generate data::id, data::event_list;
dump result;
(22678,{(112),(110),(2)})
(6676,{(2),(112)})
You can filter the bag and project a boolean which says whether 2 is present in the bag or not, then filter the rows where that projection is true. So:
input = LOAD 'data.txt' AS (id:chararray, event_list:bag{t:(val_0:chararray)});
input_filt = FOREACH input {
bag_filter = FILTER event_list BY (val_0 matches '2');
GENERATE id, event_list, (IsEmpty(bag_filter) ? false : true) AS is_2_present:boolean;
};
output = FILTER input_filt BY is_2_present;
Pig: is it possible to write a loop over variables in a list?
I have to loop over 30 variables in a list [var1, var2, ..., var30] and for each variable I use some Pig GROUP BY statement such as:
grouped = GROUP data by var1;
data_var1 = FOREACH grouped {
GENERATE group as mygroup, COUNT(data) as count;
};
Is there a way to loop over the list of variables, or am I forced to repeat the code above manually 30 times in my code? Thanks!
I think what you're looking for is the Pig macro. Create a relation for your 30 variables, iterate over them with a FOREACH, and call a macro which takes 2 params: your data relation and the variable you want to group by. Just check the example in the link; the macro is really similar to what you'd like to do.
UPDATE & code
So here's the macro you can use:
DEFINE my_cnt(data, group_field) RETURNS C {
$C = FOREACH (GROUP $data by $group_field) GENERATE group AS mygroup, COUNT($data) AS count;
};
Use the macro:
IMPORT 'cnt.macro';
data = LOAD 'data.txt' USING PigStorage(',') AS (field:chararray, value:chararray);
DESCRIBE data;
e = my_cnt(data,'the_field_you_group_by');
DESCRIBE e;
DUMP e;
I'm still thinking about how you can iterate through the fields you'd like to group by. My original suggestion to FOREACH through a relation that contains the field names is not correct. (Creating a UDF for this would always work.) Let me think about it. But this macro works as is if you call it with each field name you want to group by.
Pig: Counting the occurrence of a grouped column
In this raw data we have info on baseball players. The schema is:
name:chararray, team:chararray, position:bag{t:(p:chararray)}, bat:map[]
Using the following script we are able to list out players and the different positions they have played. How do we get a count of how many players have played a particular position? E.g. how many players were in the 'Designated_hitter' position? A single position can't appear multiple times in the position bag for a player. The Pig script and output for the sample data are listed below.
--pig script
players = load 'baseball' as (name:chararray, team:chararray, position:bag{t:(p:chararray)}, bat:map[]);
pos = foreach players generate name, flatten(position) as position;
groupbyposition = group pos by position;
dump groupbyposition;
--dump groupbyposition (output of one position, i.e. Designated_hitter)
(Designated_hitter,{(Michael Young,Designated_hitter)})
From what I can tell you've already done all of the 'grunt' (ha!, Pig joke) work. All that's left to do is use COUNT on the output of the GROUP BY. Something like:
groupbyposition = group pos by position;
pos_count = FOREACH groupbyposition GENERATE group AS position, COUNT(pos);
Note: Using UDFs you may be able to get a more efficient solution. If you only care about counting a certain few positions then it should be more efficient to filter the position bag beforehand (this is why I said UDF; I forgot you could just use a nested FILTER). For example:
pos = FOREACH players {
-- you can also add the DISTINCT that alexeipab points out here
-- (make sure to change position in the FILTER to dist!)
-- dist = DISTINCT position ;
filt = FILTER position BY p MATCHES 'Designated_hitter|etc.' ;
GENERATE name, FLATTEN(filt) ;
}
If none of the positions you want appears in position then it will create an empty bag. When empty bags are FLATTENed the row is discarded. This means you'll be FLATTENing bags of N or fewer elements (where N is the number of positions you want) instead of 7-15 (didn't really look at the data that closely), and the GROUP will be on significantly less data.
Notes: I'm not sure if this will be significantly faster (if at all). Also, using a UDF to perform the nested FILTER may be faster.
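For the sample dump in the question, where only Michael Young appears at Designated_hitter, the count relation should end up containing a row along the lines of:
(Designated_hitter,1)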
You can use a nested DISTINCT to get the list of players and then count it.
players = load 'baseball' as (name:chararray, team:chararray, position:bag{t:(p:chararray)}, bat:map[]);
pos = foreach players generate name, flatten(position) as position;
groupbyposition = group pos by position;
pos_count = foreach groupbyposition {
players = DISTINCT pos.name;
generate group, COUNT(players) as num, pos;
};
How would I make a pig script that only returns fields with entries over a certain length?
The data I have is already fielded. I just want a document that contains two of the fields, and even then it should only contain an entry if the title field is over a certain length. This is what I have so far:
records = LOAD '$INPUT' USING PigStorage('\t') AS (url:chararray, title:chararray, meta:chararray, copyright:chararray, aboutUSLink:chararray, aboutTitle:chararray, aboutMeta:chararray, contactUSLink:chararray, contactTitle:chararray, contactMeta:chararray, phones:chararray);
E = FOREACH records IF SIZE(title)>10 GENERATE url,title;
STORE E INTO '$OUTPUT/phoneNumbersAndTitles';
Why does the code exit at IF?
Pig Latin has no IF statement for selecting rows (which is why the script dies at IF). You should use FILTER, which selects tuples from a relation based on some condition:
filtered = FILTER records BY SIZE(title) > 10;
E = FOREACH filtered GENERATE url,title;
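Since Pig 0.9 you can also nest relational operators inline (the same trick as the distinct (foreach ...) line in one of the answers above), so the filter and the projection can be collapsed into a single statement if you prefer. A sketch, with the STORE line carried over unchanged from the question:
E = FOREACH (FILTER records BY SIZE(title) > 10) GENERATE url, title;
STORE E INTO '$OUTPUT/phoneNumbersAndTitles';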