I am new to Hadoop and all its derivatives, and I am really getting intimidated by the abundance of information available.
But one thing I have realized is that to start implementing/using Hadoop or distributed code, one basically has to change the way one thinks about a problem.
I was wondering if someone could help me with the following.
So, basically (like anyone else) I have some raw data. I want to parse it, extract some information, then run some algorithm and save the results.
Let's say I have a text file "foo.txt" where the data looks like:
id,$value,garbage_field,time_string\n
1, 200, grrrr,2012:12:2:13:00:00
2, 12.22,jlfa,2012:12:4:15:00:00
1, 2, ajf, 2012:12:22:13:56:00
As you can see, the id can be repeated. Think of the value as how much money a customer has spent.
What I want to do is save the result in a file which contains how much money each customer has spent in the "morning", "afternoon", "evening" and "night".
(You can define your own time buckets for what counts as morning and so on.)
For example, the output here would probably be
1,0,202,0,0
where 1 is the id, 0 means $0 spent in the morning, 202 in the afternoon, and 0 in the evening and night.
I already have Python code for this, but I have to implement it in Pig to get started.
If anyone can write this out or guide me through it, that's all I need to get started.
Thanks
I'd start like this:
foo = LOAD 'foo.txt' USING PigStorage(',') AS (
    CUSTOMER_ID:int,
    DOLLARS_SPENT:float,
    GARBAGE_FIELD,
    TIME_STRING:chararray
);
foo_with_timeslots = FOREACH foo {
    GENERATE
        CUSTOMER_ID,
        DOLLARS_SPENT,
        /* DO TIME SLOT CALCULATION HERE */ AS TIME_SLOT
    ;
}
I don't have much knowledge of date/time handling in Pig, so I'll leave the conversion from the time string to a time slot to you.
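If it helps, here is one possible way to fill in that block, assuming Pig 0.11+ (for the TRIM, ToDate and GetHour builtins), that the time strings follow the yyyy:MM:d:HH:mm:ss pattern in the sample data, and bucket boundaries you would adjust to your own definitions:
foo_with_timeslots = FOREACH foo GENERATE
    CUSTOMER_ID,
    DOLLARS_SPENT,
    -- TRIM guards against stray spaces; the nested bincond buckets the hour of day
    (GetHour(ToDate(TRIM(TIME_STRING), 'yyyy:MM:d:HH:mm:ss')) <  6 ? 'Night'     :
    (GetHour(ToDate(TRIM(TIME_STRING), 'yyyy:MM:d:HH:mm:ss')) < 12 ? 'Morning'   :
    (GetHour(ToDate(TRIM(TIME_STRING), 'yyyy:MM:d:HH:mm:ss')) < 18 ? 'Afternoon' : 'Evening'))) AS TIME_SLOT;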
id_grouped_foo_with_timeslots = GROUP foo_with_timeslots BY (
    CUSTOMER_ID,
    TIME_SLOT
);
-- Calculate how much each customer spent at time slots
spent_per_customer_per_timeslot = FOREACH id_grouped_foo_with_timeslots {
    GENERATE
        group.CUSTOMER_ID AS CUSTOMER_ID,
        group.TIME_SLOT AS TIME_SLOT,
        SUM(foo_with_timeslots.DOLLARS_SPENT) AS TOTAL_SPENT
    ;
}
You'll have an output like below in spent_per_customer_per_timeslot
1,Morning,200
1,Evening,100
2,Afternoon,30
At this point it should be trivial to re-group the data and put it in the shape you want.
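For instance, one possible way to pivot into the fixed morning/afternoon/evening/night columns from the question (a sketch that sums conditional columns per customer, starting from foo_with_timeslots instead of the grouped relation above; the slot names assume whatever TIME_SLOT values you generated):
slot_cols = FOREACH foo_with_timeslots GENERATE
    CUSTOMER_ID,
    (TIME_SLOT == 'Morning'   ? DOLLARS_SPENT : 0.0F) AS MORNING,
    (TIME_SLOT == 'Afternoon' ? DOLLARS_SPENT : 0.0F) AS AFTERNOON,
    (TIME_SLOT == 'Evening'   ? DOLLARS_SPENT : 0.0F) AS EVENING,
    (TIME_SLOT == 'Night'     ? DOLLARS_SPENT : 0.0F) AS NIGHT;
by_customer = GROUP slot_cols BY CUSTOMER_ID;
spent_per_customer = FOREACH by_customer GENERATE
    group AS CUSTOMER_ID,
    SUM(slot_cols.MORNING)   AS MORNING,
    SUM(slot_cols.AFTERNOON) AS AFTERNOON,
    SUM(slot_cols.EVENING)   AS EVENING,
    SUM(slot_cols.NIGHT)     AS NIGHT;
STORE spent_per_customer INTO 'spent_per_customer' USING PigStorage(',');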
I am having trouble understanding why a for loop construction does not work. I am not really used to for loops, so I apologize if I am missing something basic. Anyhow, I appreciate any advice you might have.
I am using a party-level dataset from the parlgov project. I am trying to create a variable which captures how many times a party has been in government before the current observation. Time matters: the counter should be zero if a party has not been in government before, even if it entered government multiple times after the observation period. Parties are nested in countries and in cabinet dates.
The code is as follows:
use "http://eborbath.github.io/stackoverflow/loop.dta", clear //to get the data
If this does not work, I have also uploaded the data in CSV format; try:
import delimited "http://eborbath.github.io/stackoverflow/loop.csv", bindquote(strict) encoding(UTF-8) clear
The loop should go through each country-specific cabinet date, identify the previous observation and check if the party has already been in government. This is how far I have got:
gen date2=cab_date
gen gov_counter=0
levelsof country, local(countries) // to get to the unique values in countries
foreach c of local countries {
    preserve  // I think I need this to "re-map" the unique cabinet dates in each country
    keep if country==`c'
    levelsof cab_date, local(dates)  // to get to the unique cabinet dates in individual countries
    restore
    foreach i of local dates {
        egen min_date=min(date2)  // this is to identify the previous cabinet date
        sort country party_id date2
        bysort country party_id: replace gov_counter=gov_counter+1 if date2==min_date & cabinet_party[_n-1]==1  // this should be the counter
        bysort country: replace date2=. if date2==min_date  // this is to drop the observation which was counted
        drop min_date  // before I restart the nested loop, so that it again gets to the minimum value in `dates'
    }
}
The code works without an error, but it does not do the job. Evidently there's a mistake somewhere, I am just not sure where.
BTW, it's a specific application of a problem I super often encounter: how do you count frequencies of distinct values in a multilevel data structure? This is slightly more specific, to the extent that "time matters", and it should not just sum all encounters. Let me know if you have an easier solution for this.
Thanks!
The problem with your loop is that it does not keep the replaced gov_counter after the loop. However, there is a much easier solution I'd recommend:
sort country party_id cab_date
by country party_id: gen gov_counter=sum(cabinet_party[_n-1])
This sorts the data into groups and then creates a sum by group, always up to (but not including) the current observation.
I would start here. I have stripped the comments so that we can look at the code. I have made some tiny cosmetic alterations.
foreach i of local dates {
    egen min_date = min(date2)
    sort country party_id date2
    bysort country party_id: replace gov_counter=gov_counter+1 ///
        if date2 == min_date & cabinet_party[_n-1] == 1
    bysort country: replace date2 = . if date2 == min_date
    drop min_date
}
This loop includes no reference to the loop index i defined in the foreach statement. So, the code is the same and completely unaffected by the loop index. The variable min_date is just a constant for the dataset and the same each time around the loop. What does depend on how many times the loop is executed is how many times the counter is incremented.
The fallacy here appears to be a false analogy with constructs in other software, in which a loop automatically spawns separate calculations for different values of a loop index.
It's not illegal for loop contents never to refer to the loop index, as is easy to see:
forval j = 1/3 {
    di "Hurray"
}
produces
Hurray
Hurray
Hurray
But if you want different calculations for different values of the loop index, that has to be explicit.
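For example, a minimal sketch (with made-up variable names) in which the loop index is referenced explicitly, so each pass genuinely does something different:
forval j = 1/3 {
    gen double x`j' = runiform() * `j'  // creates x1, x2, x3, each scaled by the pass number
}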
I would like to store custom purchase tags on each transaction; for example, if a user bought shoes, the tags might be "SPORTS", "NIKE", "SHOES", "COLOUR_BLACK", "SIZE_12", and so on.
These are the tags the seller is interested in querying later to understand the sales.
My idea is that whenever a new tag comes in, I create a new code for it (something like a hash code, but sequential): the codes start with the 26 letters "a" to "z", then continue with "aa, ab, ac ... zz", and so on. I then keep all the tags given for one transaction in a single column called tag (varchar), separated by "|".
Let us assume mapping is (at application level)
"SPORTS" = a
"TENNIS" = b
"CRICKET" = c
...
...
"NIKE" = z //Brands company
"ADIDAS" = aa
"WOODLAND" = ab
...
...
SHOES = ay
...
...
COLOUR_BLACK = bc
COLOUR_RED = bd
COLOUR_BLUE = be
...
SIZE_12 = cq
...
So for the purchase above, the stored value would be tag="|a|z|ay|bc|cq|", and the seller can count the number of SHOES sold by adding the WHERE condition tag LIKE '%|ay|%'. The problem is that I cannot use an index (a sort key in Redshift) for a LIKE pattern that starts with %. How can I solve this, given that I might have 100 million records? I don't want a full table scan.
Is there any solution to fix this?
Update_1:
I have not followed the bridge table concept (cross-reference table), since I want to perform a GROUP BY on the results after searching for the specified tags. My solution gives only one row when two tags match in a single transaction, but a bridge table would give me two rows, so my SUM() would be doubled.
I got a suggestion like the one below: add
EXISTS (SELECT 1 FROM transaction_tag WHERE tag_id = 'zz' AND trans_id = tr.trans_id)
to the WHERE clause once for each tag (note: this assumes tr is an alias for the transaction table in the surrounding query).
I have not followed this either, since I have to combine AND and OR conditions on the tags, for example ("SPORTS" AND "ADIDAS"), or "SHOES" AND ("NIKE" OR "ADIDAS").
Update_2:
I have not followed the bit field approach, since I don't know whether Redshift supports it. Also, I am assuming my system will have a minimum of 3,500 tags, and allocating one bit for each results in about 437 bytes per transaction, even though at most 5 tags can be given for a transaction. Any optimisation here?
Solution_1:
I have thought of adding min (SMALLINT) and max (SMALLINT) values alongside the tags column, and applying the sort key on those.
So something like this:
"SPORTS" = a = 1
"TENNIS" = b = 2
"CRICKET" = c = 3
...
...
"NIKE" = z = 26
"ADIDAS" = aa = 27
So my column values are
`tag="|a|z|ay|bc|cq|"` //sorted?
`minTag=1`
`maxTag=95` //for cq
And the query for searching for shoes (ay = 51) would be
minTag <= 51 AND maxTag >= 51 AND tag LIKE '%|ay|%'
And the query for searching for shoes (ay = 51) AND SIZE_12 (cq = 95) would be
minTag <= 51 AND maxTag >= 95 AND tag LIKE '%|ay|%|cq|%'
Will this give any benefit? Kindly suggest any alternatives.
You can implement auto-tagging while the files are being loaded to S3. Tagging at the DB level is too late in the process; it is tedious and involves a lot of hard-coding.
1. While loading to S3, tag the object using the AWS s3api, for example:
aws s3api put-object-tagging --bucket <bucket> --key <key> --tagging "TagSet=[{Key=Adidas,Value=AY}]"
and capture the tag key and value dynamically as parameters.
2. Load the tags to DynamoDB as a metadata store.
3. Load the data to Redshift using the S3 COPY command.
You can store the tags column as a varchar bit mask, i.e. a strictly defined sequence of 1s and 0s, where a purchase marked by a tag has a 1 at that tag's position and a 0 otherwise. For every row you will have a sequence of 0s and 1s with the same length as the number of tags you have. This sequence is sortable; you would still need to look into the middle of the string, but since you know exactly which position to check, you don't need LIKE, just SUBSTRING. For further optimization, you could convert this bit mask to integer values (it will be unique for each sequence) and match on that, but AFAIK Redshift doesn't support that out of the box yet, so you would have to define the rules yourself.
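For illustration, a rough sketch of that position lookup, with hypothetical table and column names (transactions, tag_mask, amount) and assuming position 51 is the slot assigned to SHOES:
-- tag_mask holds one character per known tag: '1' if the tag applies, '0' otherwise
SELECT COUNT(*)    AS shoes_orders,
       SUM(amount) AS shoes_revenue
FROM transactions
WHERE SUBSTRING(tag_mask, 51, 1) = '1';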
UPD: It looks like the best option here is to keep the tags in a separate table and create an ETL process that unwraps them into a tabular structure of (order_id, tag_id), distributed by order_id and sorted by tag_id. Optionally, you can create a view that joins this table with the order table. Lookups for orders with a particular tag, and further aggregations of those orders, should then be efficient. There is no silver bullet for optimizing this in a flat table, at least none I know of that would not bring a lot of unnecessary complexity compared to the "relational" solution.
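A rough sketch of that layout, using hypothetical names (order_tag, orders, amount) and the tag codes from the question (SHOES = 51, SIZE_12 = 95); filtering with EXISTS for each required tag keeps one row per order, so a SUM() over the matching orders is not doubled:
-- cross-reference table produced by the ETL step
CREATE TABLE order_tag (
    order_id BIGINT NOT NULL,
    tag_id   SMALLINT NOT NULL
)
DISTSTYLE KEY
DISTKEY (order_id)
SORTKEY (tag_id);

-- total spent on orders tagged with both SHOES (51) and SIZE_12 (95)
SELECT SUM(o.amount) AS total_spent
FROM orders o
WHERE EXISTS (SELECT 1 FROM order_tag t WHERE t.order_id = o.order_id AND t.tag_id = 51)
  AND EXISTS (SELECT 1 FROM order_tag t WHERE t.order_id = o.order_id AND t.tag_id = 95);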
Let's say I have a set of values in file.txt:
a,b,c
a,b,d
k,l,m
k,l,n
k,l,o
And my code is:
file = LOAD 'file.txt' using PigStorage(',');
events = foreach file generate session_id, user_id, code, type;
gr = group events by (session_id, user_id);
and after grouping I have this set of values:
((a,b),{(a,b,c),(a,b,d)})
((k,l),{(k,l,m),(k,l,n),(k,l,o)})
And I'd like to have:
(a,b,(c,d))
(k,l,(m,n,o))
Have you got any idea how to do it?
Regards
Pawel
Note: you are inconsistent in your question. You refer to session_id, user_id, code, type in the FOREACH line, but your PigStorage LOAD does not provide a schema with those names. Also, that FOREACH has 4 values, while your sample data only has 3. I'll assume that type doesn't exist in order to answer your question.
After your gr relation, you are left with the group-by key (in this case (session_id, user_id)) in an automatically generated tuple called group.
So, first step: gr2 = FOREACH gr GENERATE FLATTEN(group);
This will give you the tuples (a,b) and (k,l). You need to use FLATTEN because group is a tuple and you are asking for session_id and user_id to be individual columns. FLATTEN does that for you.
Ok, so now modify the gr2 line to also use a projection to tease out the third value:
gr2 = FOREACH gr GENERATE FLATTEN(group), events.code;
events.code creates a bag out of all the code values. events is the name of the bag of grouped tuples (it's named after the original relation).
This should give you:
(a, b, {c, d})
(k, l, {m, n, o})
It's very important to note that the values in the list are in a bag, not a tuple like you asked for. Keeping them in a bag is the right idea, because a bag is a variable-length list while a tuple is not.
Additional advice: Understanding how GROUP BY outputs data is something I see a lot of people struggle with when first using Pig. If you think my answer doesn't make much sense, I'd recommend spending some time to really get to understand GROUP BY. Understanding versus thinking it is magic will pay off in the long run.
I'm trying to develop a sample program using Pig to analyze some log files. I want to analyze the running time of different jobs. When I read in the log file of a job, I get its start time and end time, like this:
(Wed,03/20/13,01:03:37,EDT)
(Wed,03/20/13,01:05:00,EDT)
Now, to calculate the elapsed time, I need to subtract these two timestamps, but since both timestamps are in the same bag, I'm not sure how to compare them. So I'm looking for ideas on how to do this. Thanks!
Is there a unique ID for the job that appears in both log lines? Also, is there something to indicate which event is the start and which is the end?
If so, you could read the dataset twice, once for start events and once for end events, and join the two together. Then you'll have one record with both events in it.
so:
A = FOREACH logline GENERATE id, type, timestamp;
START = FILTER A BY (type == 'start');
END = FILTER A BY (type == 'end');
JOINED = JOIN START BY id, END BY id;
DIFF = FOREACH JOINED GENERATE (END::timestamp - START::timestamp); -- or whatever
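If the timestamps come through as strings like the ones in the question, the subtraction itself could be done with Pig's built-in datetime functions. A sketch, assuming Pig 0.11+ and that each side's timestamp is a single chararray such as '03/20/13,01:03:37':
ELAPSED = FOREACH JOINED GENERATE
    START::id AS id,
    -- MilliSecondsBetween(later, earlier) returns the gap in milliseconds
    MilliSecondsBetween(
        ToDate(END::timestamp, 'MM/dd/yy,HH:mm:ss'),
        ToDate(START::timestamp, 'MM/dd/yy,HH:mm:ss')
    ) / 1000 AS elapsed_seconds;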
Is there an easy way to compare dates, ignoring the year, using LINQ and Entity Framework?
Say I have the following:
var result = context.SomeEntity.Where(e => e.SomeDate > startDate);
This assumes that SomeDate and startDate are .NET DateTime values.
What I would like to do is compare these dates without comparing the year; SomeDate can be in any year. Is there an easy way to do this? The only way I could think of would be to use the following:
var result = context.SomeEntity.Where(e =>
    e.SomeDate.Month > startDate.Month ||
    (e.SomeDate.Month == startDate.Month && e.SomeDate.Day >= startDate.Day));
This method quickly gets more complicated if I want an endDate as well, since I will have to account for cases where the start date is at the end of the year and the end date is at the beginning.
Is there any easy way to go about this?
Update:
I ended up just going about it the way I had initially thought in the post... a heck of a lot of code for something conceptually simple. Basically I just had to find whether a date fell within a range, ignoring the year, and wrapping around the calendar if startDate > endDate. If anyone knows an easier way, please post, as I am still interested.
If you really need to compare only dates (not times), then the DateTime.DayOfYear property might help. But you should be careful regarding leap years in that case. Other than this, I cannot imagine anything simpler than your approach of comparing months and days.
If all you care about is that this method will become more complicated after introducing a second comparison, then simple method extraction should help.
Another approach might be to create an extension method which returns a number suitable for your comparison. For example, let's call this method GetYearIgnoringOrdinal():
public static int GetYearIgnoringOrdinal(this DateTime date)
{
return date.Month*100 + date.Day;
}
And then use it like this:
var result = context.SomeEntity.Where(e => e.SomeDate.GetYearIgnoringOrdinal() > startDate.GetYearIgnoringOrdinal());
A slightly simpler-looking way:
var result = context.SomeEntity.Where(e =>
    e.SomeDate.Month * 100 + e.SomeDate.Day > startDate.Month * 100 + startDate.Day
);
You could also create a user-defined function (assuming SQL Server is used) and use that function in the query.
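For example, a minimal sketch of such a function (hypothetical name dbo.MonthDayKey; mapping it so that Entity Framework can translate calls to it is a separate step):
-- returns month and day packed into a single comparable number (MMDD)
CREATE FUNCTION dbo.MonthDayKey (@d DATETIME)
RETURNS INT
AS
BEGIN
    RETURN MONTH(@d) * 100 + DAY(@d);
END;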