Hibernate Criteria - get sum(hours between two dates) - spring
I have the following table:
start_date | end_date | age
---------------------+---------------------+-----------------
2020-11-30 09:00:00 | 2020-12-05 23:00:00 | 5 days 14:00:00
2020-11-30 09:00:00 | 2020-11-30 10:00:00 | 01:00:00
2020-11-30 09:00:00 | 2020-12-03 19:00:00 | 3 days 10:00:00
2020-11-30 09:00:00 | 2020-12-03 19:00:00 | 3 days 10:00:00
2020-11-30 09:00:00 | 2020-12-03 19:00:00 | 3 days 10:00:00
2020-12-01 09:00:00 | 2020-12-03 19:00:00 | 2 days 10:00:00
2020-12-03 09:00:00 | 2020-12-03 19:00:00 | 10:00:00
2020-12-04 09:00:00 | 2020-12-04 19:00:00 | 10:00:00
from the following query:
select start_date, end_date, age(end_date, start_date) from event;
How can I get the sum of hours between start and end date, grouped by day, using Hibernate Criteria in Java?
What I have so far:
public List<StatsDto> lastXDaysEvents(LocalDateTime xDaysBefore, LocalDateTime xDaysAfter) {
    Session session = openSession();
    Criteria cr = session.createCriteria(Event.class);
    cr.setResultTransformer(Criteria.DISTINCT_ROOT_ENTITY);
    cr.add(Restrictions.ge("startDate", xDaysBefore));
    cr.add(Restrictions.lt("endDate", xDaysAfter));
    // cr.createAlias("date_part('hour', age(end_date, start_date))", "sumDate");
    // cr.add(Expression.sql("datediff('Hour', start_date, end_date)"));
    ProjectionList projectionList = Projections.projectionList();
    projectionList.add(Projections.alias(
            Projections.sqlGroupProjection("date(start_date) as sDate", "sDate",
                    new String[] { "sDate" }, new Type[] { StandardBasicTypes.DATE }), "x"));
    projectionList.add(Projections.alias(Projections.rowCount(), "y"));
    // projectionList.add(Projections.alias(Projections.sum("sum(sumDate)"), "y"));
    cr.addOrder(Order.asc("x"));
    cr.setProjection(projectionList);
    cr.setResultTransformer(Transformers.aliasToBean(StatsDto.class));
    List<StatsDto> result = cr.list();
    return result;
}
The rowCount() 'y' projection gives the number of events grouped by date, but not all events last exactly one hour. How can I get sum(hour_diff(start_date, end_date)) in the 'y' projection instead?
I think I need to:
create an alias for "date_part('hour', age(end_date, start_date))"
create a sum projection for it
but I don't know how.
The AGE function returns the time between two timestamps as an INTERVAL. The DATE_PART function extracts a specific portion of a timestamp or interval; see the documentation, 9.9. Date/Time Functions and Operators. If you want the total duration in terms of a specific unit (in this case hours), you must combine the extracted parts yourself to calculate the value.
with test_dates (start_date,end_date) as
(values ('2020-11-30 09:00:00'::timestamp, '2020-12-05 23:00:00'::timestamp)
, ('2020-11-30 09:00:00'::timestamp, '2020-11-30 10:00:00'::timestamp)
, ('2020-11-30 09:00:00'::timestamp, '2020-12-03 19:00:00'::timestamp)
, ('2020-11-30 09:00:00'::timestamp, '2020-12-03 19:00:00'::timestamp)
, ('2020-11-30 09:00:00'::timestamp, '2020-12-03 19:00:00'::timestamp)
, ('2020-12-01 09:00:00'::timestamp, '2020-12-03 19:00:00'::timestamp)
, ('2020-12-03 09:00:00'::timestamp, '2020-12-03 19:00:00'::timestamp)
, ('2020-12-04 09:00:00'::timestamp, '2020-12-04 19:00:00'::timestamp)
)
select start_date
, end_date
, 24*date_part('day',diff)+date_part('hour',diff) num_hours
from ( select start_date,end_date,age(end_date,start_date) diff
from test_dates
) d
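As a sanity check outside the database, the same whole-hours computation can be sketched in plain Java with java.time (this is an illustration, not part of the original answer). Duration.toHours() truncates leftover minutes exactly as 24*date_part('day', diff) + date_part('hour', diff) does:

```java
import java.time.Duration;
import java.time.LocalDateTime;

public class HoursBetween {
    // Mirrors 24*date_part('day', diff) + date_part('hour', diff):
    // the number of whole hours between two timestamps, minutes truncated.
    static long numHours(LocalDateTime start, LocalDateTime end) {
        return Duration.between(start, end).toHours();
    }

    public static void main(String[] args) {
        LocalDateTime start = LocalDateTime.of(2020, 11, 30, 9, 0);
        LocalDateTime end = LocalDateTime.of(2020, 12, 5, 23, 0);
        System.out.println(numHours(start, end)); // 5 days 14:00:00 -> 134
    }
}
```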
You actually do not need the AGE function, as direct subtraction of timestamps produces the same interval. So (see the fiddle for both):
with test_dates (start_date,end_date) as
(values ('2020-11-30 09:00:00'::timestamp, '2020-12-05 23:00:00'::timestamp)
, ('2020-11-30 09:00:00'::timestamp, '2020-11-30 10:00:00'::timestamp)
, ('2020-11-30 09:00:00'::timestamp, '2020-12-03 19:00:00'::timestamp)
, ('2020-11-30 09:00:00'::timestamp, '2020-12-03 19:00:00'::timestamp)
, ('2020-11-30 09:00:00'::timestamp, '2020-12-03 19:00:00'::timestamp)
, ('2020-12-01 09:00:00'::timestamp, '2020-12-03 19:00:00'::timestamp)
, ('2020-12-03 09:00:00'::timestamp, '2020-12-03 19:00:00'::timestamp)
, ('2020-12-04 09:00:00'::timestamp, '2020-12-04 19:00:00'::timestamp)
)
select start_date
, end_date
, 24*date_part('day', end_date-start_date) + date_part('hour', end_date-start_date) num_hours
from test_dates;
Finally, to get daily total hours, just sum the result grouped by date:
with test_dates (start_date,end_date) as
(values ('2020-11-30 09:00:00'::timestamp, '2020-12-05 23:00:00'::timestamp)
, ('2020-11-30 09:00:00'::timestamp, '2020-11-30 10:00:00'::timestamp)
, ('2020-11-30 09:00:00'::timestamp, '2020-12-03 19:00:00'::timestamp)
, ('2020-11-30 09:00:00'::timestamp, '2020-12-03 19:00:00'::timestamp)
, ('2020-11-30 09:00:00'::timestamp, '2020-12-03 19:00:00'::timestamp)
, ('2020-12-01 09:00:00'::timestamp, '2020-12-03 19:00:00'::timestamp)
, ('2020-12-03 09:00:00'::timestamp, '2020-12-03 19:00:00'::timestamp)
, ('2020-12-04 09:00:00'::timestamp, '2020-12-04 19:00:00'::timestamp)
)
select start_date::date for_date
, sum( (24*date_part('day',diff)+date_part('hour',diff))) daily_num_hours
from ( select start_date,end_date,age(end_date,start_date) diff
from test_dates
) d
group by start_date::date
order by start_date::date;
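Back in the Hibernate Criteria from the question, one way to wire this in is to push the whole expression into a SQL projection, since neither interval arithmetic nor date_part is expressible with the standard Projections helpers. This is a sketch only, assuming a PostgreSQL dialect and the question's column names; it has not been run against a live schema:

```java
// Group by day, as in the question's existing projection.
ProjectionList projectionList = Projections.projectionList();
projectionList.add(Projections.alias(
        Projections.sqlGroupProjection(
                "date(start_date) as sDate", "sDate",
                new String[] { "sDate" }, new Type[] { StandardBasicTypes.DATE }), "x"));
// Replace rowCount() with the summed whole-hours expression from the SQL above.
projectionList.add(Projections.alias(
        Projections.sqlProjection(
                "sum(24*date_part('day', end_date - start_date)"
                        + " + date_part('hour', end_date - start_date)) as hrs",
                new String[] { "hrs" }, new Type[] { StandardBasicTypes.DOUBLE }), "y"));
cr.setProjection(projectionList);
```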
Related
Laravel Eloquent multiple sums
The contact table has these columns: client_id, room_id, stats (a number), and date. I need to calculate stats for a year on a weekly, monthly, and quarterly basis: many sums of the stats column. I made this SQL for the older system; now I need to sum the stats column, not count rows like before.

SELECT SUM( date > '2021-01-11 00:00:00' AND date < '2021-01-18 00:00:00' AND room_id = 6 AND client_id = 1 ) as week1,
       SUM( date > '2021-12-20 00:00:00' AND date < '2021-12-27 00:00:00' AND room_id = 6 AND client_id = 1 ) as week51,
       SUM( date > '2021-01-01 00:00:00' AND date < '2021-01-31 23:59:00' AND room_id = 6 AND client_id = 1 ) as month1,
       SUM( date > '2021-12-01 00:00:00' AND date < '2021-12-31 23:59:00' AND room_id = 6 AND client_id = 1 ) as month12
FROM contact;

Is it possible to do this with Eloquent, or do I need to use the DB facade? If there is an easier way with Eloquent, I'd like to use it.
You can do something like this:

$conditions = [
    'week1' => [
        ['date', '>', '2021-01-11 00:00:00'],
        ['date', '<', '2021-01-18 00:00:00'],
        ['room_id', 6],
        ['client_id', 1],
    ],
    'month1' => [
        ['date', '>', '2021-01-01 00:00:00'],
        ['date', '<', '2021-01-31 23:59:00'],
        ['room_id', 6],
        ['client_id', 1],
    ],
];

$output = [];
foreach ($conditions as $k => $condition) {
    $output[$k] = Contact::where($condition)->sum('stats');
}

Or, using a union so the database is hit only once:

$base_query = false;
foreach ($conditions as $k => $condition) {
    $query = Contact::selectRaw("SUM(stats) as total, '$k' as title")->where($condition);
    if ($base_query === false) {
        $base_query = $query;
    } else {
        $base_query->union($query);
    }
}
$results = $base_query->get();
foreach ($results as $result) {
    dump($result->title, $result->total);
}
hive get each month end date
I want each month's last date, like Jan - 31, Feb - 28, and so on. I tried the below with current_date and it works, but when I use my date column it returns NULL:

SELECT datediff(CONCAT(y, '-', (m + 1), '-', '01'), CONCAT(y, '-', m, '-', '01'))
FROM (SELECT month(from_unixtime(unix_timestamp(C_date, 'yyyyMMdd'),'yyyy-MM-dd')) as m,
             year(from_unixtime(unix_timestamp(C_date, 'yyyyMMdd'),'yyyy-MM-dd')) as y,
             day(from_unixtime(unix_timestamp(C_date, 'yyyyMMdd'),'yyyy-MM-dd'))
      FROM table2) t

returns:

_c0
NULL

The inner query on its own:

SELECT month(from_unixtime(unix_timestamp(C_date, 'yyyyMMdd'),'yyyy-MM-dd')) as m,
       year(from_unixtime(unix_timestamp(C_date, 'yyyyMMdd'),'yyyy-MM-dd')) as y,
       day(from_unixtime(unix_timestamp(C_date, 'yyyyMMdd'),'yyyy-MM-dd'))
FROM table2

returns:

m | y | _c2|
3 |2017| 21|

Thanks in advance.
Use the last_day() and day() functions:

hive> select day(last_day(current_date));
OK
31
Time taken: 0.079 seconds, Fetched: 1 row(s)

Apply to_date() to convert your column before applying last_day().
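For reference, the same month-end computation outside Hive, in plain Java with java.time (an illustration, not a Hive answer):

```java
import java.time.LocalDate;
import java.time.YearMonth;

public class MonthEnd {
    // Day number of the last day of the month containing d,
    // analogous to day(last_day(...)) in Hive.
    static int lastDayOfMonth(LocalDate d) {
        return YearMonth.from(d).lengthOfMonth();
    }

    public static void main(String[] args) {
        System.out.println(lastDayOfMonth(LocalDate.of(2017, 3, 21))); // 31
        System.out.println(lastDayOfMonth(LocalDate.of(2017, 2, 1)));  // 28
    }
}
```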
How to fill out grouped data by day?
From the events table I get grouped event counts by day, for which I use:

SELECT event_date, COUNT(event_id) event_count
FROM events
WHERE event_date >= TRUNC(SYSDATE-1, 'DD')
GROUP BY event_date
ORDER BY event_date

My problem is that this returns only the days that have events:

2017-04-03 , 4
2017-04-05 , 2

but I need all consecutive days from yesterday through the next 30 days, filled out with my grouped event data, like this:

2017-03-31 ,
2017-04-01 ,
2017-04-02 ,
2017-04-03 , 4
2017-04-04 ,
2017-04-05 , 2
2017-04-06 ,
... next 30 days (with event counts where events exist) ...

How can I do that? Thank you for any help.
So, you can generate the next 30 days from yesterday (see the first subquery) and then simply left join it to your existing query. Try this:

select all_days.days
     , certain_days.event_count
from ( select TRUNC(sysdate + level - 2, 'DD') as days
       from DUAL
       connect by level <= 30 ) all_days
left join ( SELECT event_date, COUNT(event_id) event_count
            FROM events
            WHERE event_date >= TRUNC(SYSDATE-1, 'DD')
            GROUP BY event_date ) certain_days
  on all_days.days = certain_days.event_date
order by all_days.days
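The calendar-plus-left-join idea is not Oracle-specific; here is the same pattern sketched in plain Java with hypothetical data, just to illustrate the shape of the result:

```java
import java.time.LocalDate;
import java.util.LinkedHashMap;
import java.util.Map;

public class FillDays {
    // Generate `days` consecutive dates starting at `from` and pull counts
    // from `grouped`, defaulting missing days to 0: the "left join" step.
    static Map<LocalDate, Integer> fill(LocalDate from, int days,
                                        Map<LocalDate, Integer> grouped) {
        Map<LocalDate, Integer> out = new LinkedHashMap<>();
        for (int i = 0; i < days; i++) {
            LocalDate d = from.plusDays(i);
            out.put(d, grouped.getOrDefault(d, 0));
        }
        return out;
    }

    public static void main(String[] args) {
        Map<LocalDate, Integer> grouped = Map.of(
                LocalDate.of(2017, 4, 3), 4,
                LocalDate.of(2017, 4, 5), 2);
        fill(LocalDate.of(2017, 3, 31), 30, grouped)
                .forEach((d, c) -> System.out.println(d + " , " + (c == 0 ? "" : String.valueOf(c))));
    }
}
```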
What is the meaning of "unable to open iterator for an alias" in pig?
I was trying to use the union operator as shown below:

uni_b = UNION A, B, C, D, E, F, G, H;

Here all the relations A, B, C, ..., H have the same schema. Whenever I use the dump operator, it runs fine until about 85%; after that it shows the following error:

ERROR 1066: Unable to open iterator for alias uni_b

What is this? Where is the problem? How should I debug it? This is my Pig script:

ip = load '/jee/jee_data.txt' USING PigStorage(',') as (id:Biginteger, fname:chararray, lname:chararray, board:chararray, eid:chararray, gender:chararray, math:double, phy:double, chem:double, jeem:double, jeep:double, jeec:double, cat:chararray, dob:chararray);
todate_ip = foreach ip generate id, fname, lname, board, eid, gender, math, phy, chem, jeem, jeep, jeec, cat, ToDate(dob,'dd/MM/yyyy') as dob;
jnbresult1 = foreach todate_ip generate id, fname, lname, board, eid, gender, math, phy, chem, jeem, jeep, jeec, ROUND_TO(AVG(TOBAG(math, phy, chem)),3) as bresult, ROUND_TO(SUM(TOBAG(jeem, jeep, jeec)),3) as jresult, cat, dob;
rankjnbres = rank jnbresult1 by jresult DESC, bresult DESC, jeem DESC, math DESC, jeep DESC, phy DESC, jeec DESC, chem DESC, gender ASC, dob ASC, fname ASC, lname ASC DENSE;
rankjnbres1 = rank jnbresult1 by bresult DESC, jeem DESC, math DESC, jeep DESC, phy DESC, jeec DESC, chem DESC, gender ASC, dob ASC, fname ASC, lname ASC DENSE;
allper = foreach rankjnbres generate id, rank_jnbresult1, fname, lname, board, eid, gender, math, phy, chem, jeem, jeep, jeec, bresult, jresult, cat, dob, ROUND_TO(((double)((10000-rank_jnbresult1)/100.000)),3) as aper;
allper1 = foreach rankjnbres1 generate id, rank_jnbresult1, fname, lname, board, eid, gender, math, phy, chem, jeem, jeep, jeec, bresult, jresult, cat, dob, ROUND_TO(((double)((10000-rank_jnbresult1)/100.000)),3) as a1per;
SPLIT allper into cbseB if board=='CBSE', anbB if board=='Andhra Pradesh', apB if board=='Arunachal Pradesh', bhB if board=='Bihar', gjB if board=='Gujarat', jnkB if board=='Jammu and Kashmir', mpB if board=='Madhya Pradesh', mhB if board=='Maharashtra', rjB if board=='Rajasthan', ngB if board=='Nagaland', tnB if board=='Tamil Nadu', wbB if board=='West Bengal', upB if board=='Uttar Pradesh';
rankcbseB = rank cbseB by jresult DESC, bresult DESC, jeem DESC, math DESC, jeep DESC, phy DESC, jeec DESC, chem DESC, gender ASC, dob ASC, fname ASC, lname ASC DENSE;
grp = group rankcbseB all;
maxno = foreach grp generate MAX(rankcbseB.rank_cbseB) as max1;
cbseper = foreach rankcbseB generate id, rank_cbseB, fname, lname, board, eid, gender, math, phy, chem, jeem, jeep, jeec, bresult, jresult, cat, dob, ROUND_TO(((double)((maxno.max1-rank_cbseB)*100.000/maxno.max1)),3) as per, aper;
rankBcbseB = rank cbseB by bresult DESC, jeem DESC, math DESC, jeep DESC, phy DESC, jeec DESC, chem DESC, gender ASC, dob ASC, fname ASC, lname ASC DENSE;
grp = group rankBcbseB all;
maxno = foreach grp generate MAX(rankBcbseB.rank_cbseB) as max1;
Bcbseper = foreach rankBcbseB generate id, rank_cbseB, fname, lname, board, eid, gender, math, phy, chem, jeem, jeep, jeec, bresult, jresult, cat, dob, ROUND_TO(((double)((maxno.max1-rank_cbseB)*100.000/maxno.max1)),3) as bper, aper;
rankanbB = rank anbB by jresult DESC, bresult DESC, jeem DESC, math DESC, jeep DESC, phy DESC, jeec DESC, chem DESC, gender ASC, dob ASC, fname ASC, lname ASC DENSE;
grp = group rankanbB all;
maxno = foreach grp generate MAX(rankanbB.rank_anbB) as max1;
anbper = foreach rankanbB generate id, rank_anbB, fname, lname, board, eid, gender, math, phy, chem, jeem, jeep, jeec, bresult, jresult, cat, dob, ROUND_TO(((double)((maxno.max1-rank_anbB)*100.000/maxno.max1)),3) as per, aper;
rankBanbB = rank anbB by bresult DESC, jeem DESC, math DESC, jeep DESC, phy DESC, jeec DESC, chem DESC, gender ASC, dob ASC, fname ASC, lname ASC DENSE;
grp = group rankBanbB all;
maxno = foreach grp generate MAX(rankBanbB.rank_anbB) as max1;
Banbper = foreach rankanbB generate id, rank_anbB, fname, lname, board, eid, gender, math, phy, chem, jeem, jeep, jeec, bresult, jresult, cat, dob, ROUND_TO(((double)((maxno.max1-rank_anbB)*100.000/maxno.max1)),3) as bper, aper;
joinall = join cbseper by (per), Bcbseper by (bper);
joinall = foreach joinall generate Bcbseper::id as id, cbseper::jresult as b1;
A = cross Bcbseper, allper;
A1 = foreach A generate Bcbseper::id as id, Bcbseper::rank_cbseB as rank, Bcbseper::fname as fname, Bcbseper::lname as lname, Bcbseper::board as board, Bcbseper::eid as eid, Bcbseper::gender as gender, Bcbseper::bresult as bresult, Bcbseper::jresult as jresult, Bcbseper::cat as cat, Bcbseper::dob as dob, Bcbseper::bper as bper, Bcbseper::aper as aper, allper::jresult as b2, allper::aper as a1per;
B = filter A1 by bper > a1per;
C = group B by id;
Dcbse = foreach C { E = order B by a1per DESC; F = limit E 1; generate FLATTEN(F.id), FLATTEN(F.b2); };
joincbse = join joinall by id, Dcbse by id;
joincbse = foreach joincbse generate joinall::id as id, joinall::b1 as b1, Dcbse::null::b2 as b2;
joinall = join anbper by (per), Banbper by (bper);
joinall = foreach joinall generate Banbper::id as id, anbper::jresult as b1;
A = cross Banbper, allper;
A1 = foreach A generate Banbper::id as id, Banbper::rank_anbB as rank, Banbper::fname as fname, Banbper::lname as lname, Banbper::board as board, Banbper::eid as eid, Banbper::gender as gender, Banbper::bresult as bresult, Banbper::jresult as jresult, Banbper::cat as cat, Banbper::dob as dob, Banbper::bper as bper, Banbper::aper as aper, allper::jresult as b2, allper::aper as a1per;
B = filter A1 by bper > a1per;
C = group B by id;
Danb = foreach C { E = order B by a1per DESC; F = limit E 1; generate FLATTEN(F.id), FLATTEN(F.b2); };
joinanb = join joinall by id, Danb by id;
joinanb = foreach joinanb generate joinall::id as id, joinall::b1 as b1, Danb::null::b2 as b2;
uni_b = UNION joincbse, joinanb;
I found the solution to this. What I did was the following. First, I stored all the relations A, B, C, ... using the STORE operator:

STORE A into '/opA/' using PigStorage(',');

Then I loaded the input for each relation using the LOAD operator:

ipA = load '/opA/part-r-00000' USING PigStorage(',') as (id:Biginteger, b1:double, b2:double);

And at last I did the union:

uni_b = UNION ipA, ipB, ipC, ipD, ipE;

I got the answer without any error.
Schema growth in Oracle 11g
What is the best way to get schema growth in Oracle? I tried using many of the dba_hist_ tables and none of them seem to give me the size and growth of a schema for a specific time period (7 days). These tables seem to hold tablespace growth information, not information at the schema level. Can somebody help? I tried the following tables: dba_hist_tablespace_stat, dba_hist_seg_stat, dba_hist_seg_stat_obj, dba_hist_snapshot, etc.
Check this query:

SELECT b.tsname tablspc_name
     , MAX(b.used_size_mb) cur_used_size_mb
     , ROUND(AVG(inc_used_size_mb),2) avg_incr_mb
FROM (SELECT a.days, a.tsname
           , used_size_mb
           , used_size_mb - LAG (used_size_mb,1) OVER (PARTITION BY a.tsname ORDER BY a.tsname, a.days) inc_used_size_mb
      FROM (SELECT TO_CHAR(sp.begin_interval_time,'MM-DD-YYYY') days
                 , ts.tsname
                 , MAX(ROUND((tsu.tablespace_usedsize * dt.block_size)/(1024*1024),2)) used_size_mb
            FROM dba_hist_tbspc_space_usage tsu
               , dba_hist_tablespace_stat ts
               , dba_hist_snapshot sp
               , dba_tablespaces dt
            WHERE tsu.tablespace_id = ts.ts#
              AND tsu.snap_id = sp.snap_id
              AND ts.tsname = dt.tablespace_name
              AND sp.begin_interval_time > sysdate-7
            GROUP BY TO_CHAR(sp.begin_interval_time,'MM-DD-YYYY'), ts.tsname
            ORDER BY ts.tsname, days
           ) a
     ) b
GROUP BY b.tsname
ORDER BY b.tsname;

And grouped by owner:

SELECT ds.owner as owner
     , MAX(b.used_size_mb) cur_used_size_mb
     , ROUND(AVG(inc_used_size_mb),2) avg_incr_mb
FROM (SELECT a.days, a.tsname
           , used_size_mb
           , used_size_mb - LAG (used_size_mb,1) OVER (PARTITION BY a.tsname ORDER BY a.tsname, a.days) inc_used_size_mb
      FROM (SELECT TO_CHAR(sp.begin_interval_time,'MM-DD-YYYY') days
                 , ts.tsname
                 , MAX(ROUND((tsu.tablespace_usedsize * dt.block_size)/(1024*1024),2)) used_size_mb
            FROM dba_hist_tbspc_space_usage tsu
               , dba_hist_tablespace_stat ts
               , dba_hist_snapshot sp
               , dba_tablespaces dt
            WHERE tsu.tablespace_id = ts.ts#
              AND tsu.snap_id = sp.snap_id
              AND ts.tsname = dt.tablespace_name
              AND sp.begin_interval_time > sysdate-7
            GROUP BY TO_CHAR(sp.begin_interval_time,'MM-DD-YYYY'), ts.tsname
            ORDER BY ts.tsname, days
           ) a
     ) b
JOIN dba_segments ds on ds.tablespace_name = b.tsname
GROUP BY ds.owner
ORDER BY ds.owner;