I have a database table with a datetime column, and I simply want to count how many records there are per day, going back three months. I am currently using this query:
var minDate = DateTime.Now.AddMonths(-3);
var stats = from t in TestStats
            where t.Date > minDate
            group t by EntityFunctions.TruncateTime(t.Date) into g
            orderby g.Key
            select new
            {
                date = g.Key,
                count = g.Count()
            };
That works fine, but the problem is that if there are no records for a day then that day is not in the results at all. For example:
3/21/2008 = 5
3/22/2008 = 2
3/24/2008 = 7
In that short example I want 3/23/2008 = 0 to appear as well. In the real query, every day between three months ago and today that has no records should show a zero.
Fabricating missing data is not straightforward in SQL. I would recommend getting the data that is in SQL, then joining it to an in-memory list of all relevant dates:
var stats = (from t in TestStats
             where t.Date > minDate
             group t by EntityFunctions.TruncateTime(t.Date) into g
             orderby g.Key
             select new
             {
                 date = g.Key,
                 count = g.Count()
             }).ToList(); // hydrate so we only query the DB once
// TruncateTime returns DateTime?, so unwrap the min/max values
var firstDate = stats.Min(s => s.date).Value;
var lastDate = stats.Max(s => s.date).Value;

// +1 so the range includes lastDate itself
var allDates = Enumerable.Range(0, (lastDate - firstDate).Days + 1)
                         .Select(i => firstDate.AddDays(i));

stats = (from d in allDates
         join s in stats
            on (DateTime?)d equals s.date into dates
         from ds in dates.DefaultIfEmpty()
         select new
         {
             date = (DateTime?)d, // cast keeps the same anonymous type so the reassignment compiles
             count = ds == null ? 0 : ds.count
         }).ToList();
You could also get a list of dates not in the data and concatenate them.
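For instance, a minimal sketch of that variant, replacing the join step above and reusing allDates and the sparse stats list (the zero rows must use the same anonymous type so Concat compiles):

// Zero-count rows for the dates that have no entry in the sparse results
var missing = allDates
    .Select(d => (DateTime?)d)
    .Except(stats.Select(s => s.date))
    .Select(d => new { date = d, count = 0 });

stats = stats.Concat(missing)
             .OrderBy(s => s.date)
             .ToList();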
I agree with D Stanley's answer but want to throw an additional consideration into the mix. What are you doing with this data? Is it getting processed by the caller? Is it rendered in a UI? Is it getting transferred over a network?
Consider the size of the data, and why you need the gaps filled in. If it is going to be returned over a network, for instance, I'd advise against filling in the gaps: all you're doing is increasing the data size, which then has to be serialised, transferred, and deserialised.
If you are going to loop over the data to render it in a UI, then why do you need the gaps at all? Why not loop from min date to max date (like D Stanley's join) and substitute a default when no value is found, as in the sketch below.
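A rough sketch of that idea on the consuming side, assuming the hydrated sparse stats list and minDate from the question above (Render is a hypothetical stand-in for your UI code):

// Index the sparse day counts for O(1) lookup (date is DateTime?, hence .Value)
var byDay = stats.ToDictionary(s => s.date.Value, s => s.count);

for (var d = minDate.Date; d <= DateTime.Today; d = d.AddDays(1))
{
    int count;
    byDay.TryGetValue(d, out count); // count stays 0 for days with no records
    Render(d, count);                // hypothetical UI call
}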
If you ARE transferring over a network and you still NEED a single collection, consider applying D Stanley's solution on the other side of the wire.
Just things to consider...
I am trying to join two tables and divide a number from one table by a number from another table. I have attempted to do the division in the original relation and by generating a new relation with the same values, but I get the same error both times, which is extra confusing to me.
--get the data
lines = LOAD '/historicaldata.csv' USING PigStorage(' ') AS (ticker:chararray, date:long, open:long, high:long, low:long, close:long, volume:long);
--limit it between the dates we want
specDates = FILTER lines BY (date<=20000103 and date>=19900101);
--group by ticker symbol
companies = GROUP specDates BY ticker;
--sort DESC and get the top to get the ending date
sorted_end = FOREACH companies {
    sorted1 = ORDER specDates BY date DESC;
    endDate = LIMIT sorted1 1;
    GENERATE endDate.ticker AS ticker, endDate.open AS open, endDate.close AS close;
};
--sort ASC and get the top to get the starting date
sorted_begin = FOREACH companies {
    sorted2 = ORDER specDates BY date ASC;
    startDate = LIMIT sorted2 1;
    GENERATE startDate.ticker AS ticker, startDate.open AS open, startDate.close AS close;
};
joined = JOIN sorted_end BY ticker, sorted_begin BY ticker;
final = FOREACH joined GENERATE sorted_end::ticker as ticker, sorted_begin::open as open, sorted_end::close as close;
final2 = FOREACH final GENERATE ticker as ticker, (float)(close/open) as growth_factor;
The error I keep getting is:
(Name: Divide Type: null Uid: null)incompatible types in Divide Operator left hand side:bag :tuple(close:float) right hand side:bag :tuple(open:float)
Both are floats, so I am not sure why they are "incompatible types" other than that they come from different bags, but adding them to "final" and trying the division from there doesn't work either.
The data is in the form:
AA,20140131,11.60,11.80,11.45,11.48,33014100
AA,20140130,12.05,12.07,11.83,11.92,23223500
AA,20140129,11.64,12.23,11.58,11.96,44433000
Every entry includes all columns, and all values are well-formatted, non-zero numbers.
Based on your query, I tried to create a dummy table on my system and generate the result. I found no issue, and the division operation completed successfully. Please find below some sample queries that I ran in Pig:
A = LOAD '/home/training/716391/pig/pigdata.csv' USING PigStorage(',') AS (ID:int, name:chararray, GPC:float);
B = LOAD '/home/training/716391/pig/pigdata2.csv' USING PigStorage(',') AS (ID:int, name:chararray, GPC:float);
C = JOIN A BY ID, B BY ID;
D = FOREACH C GENERATE A::ID AS IDA, A::name AS NAMEA, A::GPC AS GPCA, B::ID AS IDB, B::name AS NAMEB, B::GPC AS GPCB;
E = FOREACH D GENERATE IDA, (float)(GPCA/GPCB) AS VALUE;
Can you please confirm that the divisor values in your case contain no nulls or zeros?
Could you please share the load statements for sorted_end and sorted_begin?
I have data that I have grouped and aggregated; it looks like this:
Date Country Browser Count
---- ------- ------- -----
2015-07-11,US,Chrome,13
2015-07-11,US,Opera Mini,1
2015-07-11,US,Firefox,2
2015-07-11,US,IE,1
2015-07-11,US,Safari,1
...
2015-07-11,UK,Chrome Mobile,1026
2015-07-11,UK,IE,455
2015-07-11,UK,Mobile Safari,4782
2015-07-11,UK,Mobile Firefox,40
...
2015-07-11,DE,Android browser,1316
2015-07-11,DE,Opera Mini,3
2015-07-11,DE,PS4 Web browser,11
I want to get the top n browsers (by count) per country and aggregate the rest under 'Other'. I looked into Pig's built-in TOP function, but how would I do the grouping for 'Other'? The result I want, for example (n = 2):
2015-07-11,US,Chrome,13
2015-07-11,US,Firefox,2
2015-07-11,US,Other,3
What would be the best way to go about this?
OK, this is a nice requirement.
I am simply using your input in the LOAD statement of the Pig script.
Input:
2015-07-11,US,Chrome,13
2015-07-11,US,Opera Mini,1
2015-07-11,US,Firefox,2
2015-07-11,US,IE,1
2015-07-11,US,Safari,1
2015-07-11,UK,Chrome Mobile,1026
2015-07-11,UK,IE,455
2015-07-11,UK,Mobile Safari,4782
2015-07-11,UK,Mobile Firefox,40
2015-07-11,DE,Android browser,1316
2015-07-11,DE,Opera Mini,3
2015-07-11,DE,PS4 Web browser,11
Below is the code for this.
You can pass a value for the n parameter to the Pig script; here I have hardcoded n = 2 in the LIMIT statement itself.
records = LOAD '/user/cloudera/inputfiles/entries.txt' USING PigStorage(',') AS (dt:chararray, country:chararray, browser:chararray, count:int);
records_each = FOREACH (GROUP records BY (dt,country,browser)) GENERATE FLATTEN(group) AS (dt,country,browser), MAX(records.count) AS counts;
records_grp_order = ORDER records_each BY dt ASC, country ASC, counts DESC;
records_grp = GROUP records_grp_order BY (dt,country);
rec_each = FOREACH records_grp {
    top_2_recs = LIMIT records_grp_order 2;
    GENERATE MAX(top_2_recs.dt) AS temp_dt, MAX(top_2_recs.country) AS temp_country, FLATTEN(top_2_recs.browser) AS temp_browser;
};
rec_join = JOIN records_each BY (dt,country,browser) LEFT OUTER, rec_each BY (temp_dt,temp_country,temp_browser);
rec_join_each = FOREACH rec_join GENERATE dt, country, (temp_browser is not null ? browser : 'OTHERS') AS browser, counts AS counts;
rec_final_grp = GROUP rec_join_each BY (dt,country,browser);
final_output = FOREACH rec_final_grp GENERATE FLATTEN(group) AS (dt,country,browser), SUM(rec_join_each.counts) AS total_counts;
sorted_output = ORDER final_output BY dt ASC, country ASC, total_counts DESC;
DUMP sorted_output;
Output:
(2015-07-11,DE,Android browser,1316)
(2015-07-11,DE,PS4 Web browser,11)
(2015-07-11,DE,OTHERS,3)
(2015-07-11,UK,Mobile Safari,4782)
(2015-07-11,UK,Chrome Mobile,1026)
(2015-07-11,UK,OTHERS,495)
(2015-07-11,US,Chrome,13)
(2015-07-11,US,OTHERS,3)
(2015-07-11,US,Firefox,2)
New to Cascading; I'm trying to find a way to get the top N tuples based on a sort/order. For example, I'd like to know the top 100 first names people are using.
Here's how I would do something similar in Teradata SQL:
select top 100 first_name, num_records
from
(select first_name, count(1) as num_records
from table_1
group by first_name) a
order by num_records DESC
Here's the equivalent in Hadoop Pig:
a = load 'table_1' as (first_name:chararray, last_name:chararray);
b = foreach (group a by first_name) generate group as first_name, COUNT(a) as num_records;
c = order b by num_records DESC;
d = limit c 100;
It seems very easy to do in SQL or Pig, but I'm having a hard time finding a way to do it in Cascading. Please advise!
Assuming you just need the Pipe setup for how to do this, in Cascading 2.1.6:
Pipe firstNamePipe = new GroupBy("topFirstNames", InPipe,
    new Fields("first_name"));
firstNamePipe = new Every(firstNamePipe, new Fields("first_name"),
    new Count(new Fields("num_records")), Fields.ALL);
firstNamePipe = new GroupBy(firstNamePipe,
    new Fields("first_name"),
    new Fields("num_records"),
    true); // where true is descending order
firstNamePipe = new Every(firstNamePipe, new Fields("first_name", "num_records"),
    new First(Fields.ARGS, 100), Fields.RESULTS);
Here InPipe is formed from your incoming Tap that holds the tuple data you are referencing above, namely "first_name"; "num_records" is created when new Count() is called.
If you have the "num_records" and "first_name" data in separate taps (tables or files) then you can set up two pipes that point to those two Tap sources and join them using CoGroup.
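For instance, a rough sketch of such a join, where namesPipe, countsPipe, and the declared output field names are all illustrative (declared fields are needed here because both sides carry a first_name column):

import cascading.pipe.CoGroup;
import cascading.pipe.Pipe;
import cascading.tuple.Fields;

// Pair tuples from the two sources that share the same first_name key;
// the declared fields rename the colliding columns on output.
Pipe joined = new CoGroup(namesPipe, new Fields("first_name"),
                          countsPipe, new Fields("first_name"),
                          new Fields("first_name", "first_name2", "num_records"));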
The definitions I used are from Cascading 2.1.6:
GroupBy(String groupName, Pipe pipe, Fields groupFields, Fields sortFields, boolean reverseOrder)
Count(Fields fieldDeclaration)
First(Fields fieldDeclaration, int firstN)
Method 1
Use a GroupBy and group on the required columns, making use of the secondary sorting that Cascading provides. By default it sorts in ascending order; if you want descending order, set the reverseOrder flag on the GroupBy.
To get the top N tuples or rows:
It's quite simple: keep a counter in a custom Filter and increment it by one for each tuple, checking whether it has exceeded N. Return true (removing the tuple) once the count exceeds N, otherwise return false. This yields output containing only the first N tuples, as in the sketch below.
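A minimal sketch of such a filter, with illustrative names; note the counter is per task instance, so this matches SQL-style TOP N exactly only when the preceding GroupBy funnels everything through a single reducer:

import cascading.flow.FlowProcess;
import cascading.operation.BaseOperation;
import cascading.operation.Filter;
import cascading.operation.FilterCall;

// Removes every tuple after the first N seen by this task.
public class FirstNFilter extends BaseOperation implements Filter {
    private final int n;
    private int seen = 0; // per-task counter, not a cluster-wide count

    public FirstNFilter(int n) { this.n = n; }

    @Override
    public boolean isRemove(FlowProcess flowProcess, FilterCall filterCall) {
        seen++;
        return seen > n; // returning true removes the tuple
    }
}

It would be applied after the descending GroupBy from the earlier answer, e.g. firstNamePipe = new Each(firstNamePipe, new FirstNFilter(100));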
Method 2
Cascading provides a built-in assembly, Unique, which is implemented with a FirstNBuffer; see the link below:
http://docs.cascading.org/cascading/2.2/javadoc/cascading/pipe/assembly/Unique.html
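For instance, a rough sketch along those lines, assuming counts have already been generated as num_records (countedPipe is illustrative; grouping on Fields.NONE with a reverse sort funnels all tuples into one ordered group):

import cascading.operation.buffer.FirstNBuffer;
import cascading.pipe.Every;
import cascading.pipe.GroupBy;
import cascading.pipe.Pipe;
import cascading.tuple.Fields;

// Sort everything by num_records descending within a single group...
Pipe top = new GroupBy(countedPipe, Fields.NONE, new Fields("num_records"), true);
// ...then keep only the first 100 tuples of that group.
top = new Every(top, new FirstNBuffer(100));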
In Hadoop I have many files with records that look like this:
(item_id, owner_id, counter) - there could be duplicates, but a given item_id ALWAYS has the same owner_id!
I want to get the SUM of the counter for each item_id so I have the following script:
alldata = LOAD '/path/to/data/*' USING D; -- D describes the structure
known_items = FILTER alldata BY owner_id > 0L;
group_by_item = GROUP known_items BY (item_id);
data = FOREACH group_by_item GENERATE group AS item_id, OWNER_ID_COLUMN_SOMEHOW, SUM(known_items.counter) AS items_count;
The problem is that in the FOREACH, if I take known_items.owner_id, I get a bag containing the owner_id of every row in the group. What would be the most efficient way to get just the first of the owners?
The simplest solution gives you the right answer if your assumption that each item_id has the same owner_id is correct, and will let you know if it is not: include the owner_id as part of the group.
alldata = LOAD '/path/to/data/*' USING D; -- D describes the structure
known_items = FILTER alldata BY owner_id > 0L;
group_by_item = GROUP known_items BY (item_id, owner_id);
data = FOREACH group_by_item GENERATE FLATTEN(group), SUM(known_items.counter) AS items_count;