I have a query that I run against a table TXN_DEC(id, resourceid, usersid, date, eventdesc). It returns the distinct count of users for a given date range and resourceid, grouped by date and eventdesc (each resource can have 4 to 5 eventdesc values).
If there is no distinct-user count for a given date in the range for an eventdesc, that date row is skipped in the result set.
I need all date rows in my result set or collection, so that if there is no count for a date/eventdesc combination, its value is set to 0 but that date still exists in the collection.
How do I go about getting such a collection?
I know getting the final dataset entirely from the query result would be too complicated, but I can use collections in Groovy to modify and populate my map/list to get the data in the required format.
Something similar to the following: if
input date range = 5th Feb to 3 March 2011
DataMap = [dateval: '02/05/2011', eventdesc: 'Read', dist_ucnt: 23,
dateval: '02/06/2011', eventdesc: 'Read', dist_ucnt: 23,
dateval: '02/07/2011', eventdesc: 'Read', dist_ucnt: 0, -> this row was not present in the query result set, but it exists in the map with value 0
...and so on till 3 March 2011, and then the whole range repeated for each eventdesc
]
If you want all dates (including those with no entries in your TXN_DEC table) for a given range, you could use Oracle to generate your date range and then use an outer join to your existing query. Then you would just need to fill in null values. Something like:
select
d.dateInRange as dateval,
'Read' as eventdesc,
nvl(td.dist_ucnt, 0) as dist_ucnt
from (
select
to_date('02-FEB-2011','dd-mon-yyyy') + rownum - 1 as dateInRange
from all_objects
where rownum <= to_date('03-MAR-2011','dd-mon-yyyy') - to_date('02-FEB-2011','dd-mon-yyyy') + 1
) d
left join (
select
date,
count(distinct usersid) as dist_ucnt
from
txn_dec
where eventDesc = 'Read'
group by date
) td on td.date = d.dateInRange
That's my purely Oracle solution since I'm not a Groovy guy (well, actually, I am a pretty groovy guy...)
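The query above hard-codes a single eventdesc ('Read'). If you want the whole range repeated for every eventdesc, as the question asks, one option is to cross join the generated dates with the distinct event list before the outer join. A sketch, reusing the table and column names from the question, untested (note that a column actually named date would need a quoted identifier in a real Oracle schema):
select
    d.dateInRange as dateval,
    e.eventdesc,
    nvl(td.dist_ucnt, 0) as dist_ucnt
from (
    select
        to_date('05-FEB-2011','dd-mon-yyyy') + rownum - 1 as dateInRange
    from all_objects
    where rownum <= to_date('03-MAR-2011','dd-mon-yyyy') - to_date('05-FEB-2011','dd-mon-yyyy') + 1
) d
cross join (
    select distinct eventdesc from txn_dec
) e
left join (
    select
        date,
        eventdesc,
        count(distinct usersid) as dist_ucnt
    from txn_dec
    group by date, eventdesc
) td on td.date = d.dateInRange and td.eventdesc = e.eventdesc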
EDIT: Here's the same version wrapped in a stored procedure. It should be easy to call if you know the API....
create or replace procedure getDateRange (
p_begin_date IN DATE,
p_end_date IN DATE,
p_event IN txn_dec.eventDesc%TYPE,
p_recordset OUT SYS_REFCURSOR)
AS
BEGIN
OPEN p_recordset FOR
select
d.dateInRange as dateval,
p_event as eventdesc,
nvl(td.dist_ucnt, 0) as dist_ucnt
from (
select
p_begin_date + rownum - 1 as dateInRange
from all_objects
where rownum <= p_end_date - p_begin_date + 1
) d
left join (
select
date,
count(distinct usersid) as dist_ucnt
from
txn_dec
where eventDesc = p_event
group by date
) td on td.date = d.dateInRange;
END getDateRange;
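For completeness, a minimal sketch of fetching from the returned ref cursor in an anonymous PL/SQL block (the variable names are illustrative):
DECLARE
    l_rc      SYS_REFCURSOR;
    l_dateval DATE;
    l_event   txn_dec.eventDesc%TYPE;
    l_cnt     NUMBER;
BEGIN
    getDateRange(DATE '2011-02-05', DATE '2011-03-03', 'Read', l_rc);
    LOOP
        FETCH l_rc INTO l_dateval, l_event, l_cnt;
        EXIT WHEN l_rc%NOTFOUND;
        dbms_output.put_line(to_char(l_dateval, 'mm/dd/yyyy') || ' ' || l_event || ' ' || l_cnt);
    END LOOP;
    CLOSE l_rc;
END;
/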
I have the same code in a procedure, with only one difference (a date variable).
In my code the p_dt variable appears in 27 places. The code below is not complete.
When I run the procedure with p_dt it takes more than 10 hours, but when I write to_date('01.01.2020','dd.mm.yyyy') instead of p_dt it takes 300 seconds.
create or replace procedure ru.t_maha(p_dt in date default trunc(sysdate) - 1) as
begin
delete from t_maha_1
where dt = to_date(add_months(trunc(p_dt,'MONTH'),-1),'dd.mm.yyyy');
commit;
insert into t_maha_1
with scheta_Snt as (
select
inn,
add_months(trunc(p_dt,'MONTH'),-1) || last_Day(add_months(trunc(p_dt,'MONTH'),-1)) interval,
sum(case when t.dt_open between add_months(trunc(p_dt,'MONTH'),-1) and last_Day(add_months(trunc(p_dt,'MONTH'),-1)) then value_nat end ) scheta_snt
from fct_Carry t
)
select * from scheta_snt
join scheta_pop (same subquery, but with a different calculation)
join dep_snt (same subquery, but with a different calculation)
join dep_pop (same subquery, but with a different calculation)
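Since add_months(trunc(p_dt,'MONTH'),-1) is repeated throughout, one thing worth trying is to compute the month boundaries once into PL/SQL variables. This will not by itself explain the bind-versus-literal gap (that usually points at bind peeking or a cardinality misestimate), but it removes the 27 repeated expressions and makes the two plans easier to compare. A sketch, assuming the rest of the body stays the same; the local variable names are made up:
create or replace procedure ru.t_maha(p_dt in date default trunc(sysdate) - 1) as
    -- first and last day of the month before p_dt, computed once
    l_month_start date := add_months(trunc(p_dt, 'MONTH'), -1);
    l_month_end   date := last_day(l_month_start);
begin
    delete from t_maha_1
    where dt = l_month_start;
    commit;
    -- ... the INSERT then references l_month_start / l_month_end
    -- instead of repeating add_months(trunc(p_dt,'MONTH'),-1)
end;
/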
I need to aggregate date ranges, allowing for gaps of at most 2 days in between, for each id. Any help would be much appreciated.
create table tt ( id int, startdate date, stopdate date);
insert into TT values (1,  to_date('24/05/2010','dd/mm/yyyy'), to_date('29/05/2010','dd/mm/yyyy'));
insert into TT values (1,  to_date('30/05/2010','dd/mm/yyyy'), to_date('22/06/2010','dd/mm/yyyy'));
insert into TT values (10, to_date('26/06/2012','dd/mm/yyyy'), to_date('28/06/2012','dd/mm/yyyy'));
insert into TT values (10, to_date('29/06/2012','dd/mm/yyyy'), to_date('30/06/2012','dd/mm/yyyy'));
insert into TT values (10, to_date('01/07/2012','dd/mm/yyyy'), to_date('30/07/2012','dd/mm/yyyy'));
insert into TT values (10, to_date('03/08/2012','dd/mm/yyyy'), to_date('30/12/2012','dd/mm/yyyy'));
insert into TT values (90, to_date('08/03/2002','dd/mm/yyyy'), to_date('16/03/2002','dd/mm/yyyy'));
insert into TT values (90, to_date('31/01/2002','dd/mm/yyyy'), to_date('15/02/2002','dd/mm/yyyy'));
insert into TT values (90, to_date('15/02/2002','dd/mm/yyyy'), to_date('28/02/2002','dd/mm/yyyy'));
insert into TT values (90, to_date('31/01/2002','dd/mm/yyyy'), to_date('15/02/2004','dd/mm/yyyy'));
insert into TT values (90, to_date('15/02/2004','dd/mm/yyyy'), to_date('15/04/2004','dd/mm/yyyy'));
insert into TT values (90, to_date('01/03/2002','dd/mm/yyyy'), to_date('07/03/2002','dd/mm/yyyy'));
expected output would be:
1 24/05/2010 22/06/2010
10 26/06/2012 30/07/2012
10 03/08/2012 30/12/2012
90 31/01/2002 15/04/2004
If you're on 12c, you can use one of my favourite SQL features: pattern matching (match_recognize).
With this you need to define a pattern variable. This is where you'll check that the start date of the current row is within two days of the stop date for the previous row. Which is:
startdate <= prev ( stopdate ) + 2
The pattern you're searching for is any row, followed by zero or more rows that meet this criterion.
So you have an "always true" strt variable, followed by * (regular expression zero-or-more quantifier) occurrences of the within2 variable:
( strt within2* )
I'm guessing you also need to split the ranges up by ID. So I've added a partition by for this.
Put it all together and you get:
select *
from tt match_recognize (
partition by id
order by startdate, stopdate
measures
first ( startdate ) startdate,
last ( stopdate ) stopdate
pattern ( strt within2* )
define
within2 as startdate <= prev ( stopdate ) + 2
);
ID STARTDATE STOPDATE
1 24/05/2010 22/06/2010
10 26/06/2012 30/07/2012
10 03/08/2012 30/12/2012
If you want to know more about this, you can find several match_recognize examples here.
I'm working in Oracle 12.2.
I've got a complex query the results of which I would like to receive as a CLOB in JSON format. I've looked into json_object, but this means completely rewriting the query.
Is there a way to simply pass the ref cursor or result set and receive a JSON array with each row being a JSON object inside?
My query:
SELECT
*
FROM
(
SELECT
LABEL_USERS.*,
ROWNUM AS RANK ,
14 AS TOTAL
FROM
(
SELECT DISTINCT
SEC_VS_USER_T.USR_ID,
SEC_VS_USER_T.USR_FIRST_NAME,
SEC_VS_USER_T.USR_LAST_NAME,
SEC_USER_ROLE_PRIV_T.ROLE_ID,
SEC_ROLE_DEF_INFO_T.ROLE_NAME,
1 AS IS_LABEL_MANAGER,
LOWER(SEC_VS_USER_T.USR_FIRST_NAME ||' '||SEC_VS_USER_T.USR_LAST_NAME) AS
SEARCH_STRING
FROM
SEC_VS_USER_T,
SEC_USER_ROLE_PRIV_T,
SEC_ROLE_DEF_INFO_T
WHERE
SEC_VS_USER_T.USR_ID = SEC_USER_ROLE_PRIV_T.USR_ID
AND SEC_VS_USER_T.USR_SITE_GRP_ID IS NULL
ORDER BY
UPPER(USR_FIRST_NAME),
UPPER(USR_LAST_NAME)) LABEL_USERS) LABEL_USER_LIST
WHERE
LABEL_USER_LIST.RANK >= 0
AND LABEL_USER_LIST.RANK < 30
I couldn't find a procedure which I could use to generate the JSON, but I was able to use the new 12.2 functions to create the JSON I needed.
SELECT JSON_ARRAYAGG( --Used to aggregate all rows into single scalar value
JSON_OBJECT( --Creating an object for each row
'USR_ID' VALUE USR_ID,
'USR_FIRST_NAME' VALUE USR_FIRST_NAME,
'USR_LAST_NAME' VALUE USR_LAST_NAME,
'IS_LABEL_MANAGER' VALUE IS_LABEL_MANAGER,
'SEARCH_STRING' VALUE SEARCH_STRING,
'USR_ROLES' VALUE USR_ROLES
) returning CLOB) AS JSON --Need to specify CLOB, otherwise the result is limited to VARCHAR2(4000)
FROM
(
SELECT * FROM (
SELECT LABEL_USERS.*, ROWNUM AS RANK, 14 AS TOTAL from
(SELECT
SEC_VS_USER_T.USR_ID,
SEC_VS_USER_T.USR_FIRST_NAME,
SEC_VS_USER_T.USR_LAST_NAME,
1 AS IS_LABEL_MANAGER,
LOWER(SEC_VS_USER_T.USR_FIRST_NAME ||' '||SEC_VS_USER_T.USR_LAST_NAME) AS SEARCH_STRING,
(
SELECT --It is much easier to create the JSON here and simply use this column in the outer JSON_OBJECT select
JSON_ARRAYAGG(JSON_OBJECT('ROLE_ID' VALUE ROLE_ID,
'ROLE_NAME' VALUE ROLE_NAME)) AS USR_ROLES
FROM
(
SELECT DISTINCT
prv.ROLE_ID,
def.ROLE_NAME
FROM
SEC_user_ROLE_PRIV_T prv
JOIN
SEC_ROLE_DEF_INFO_T def
ON
prv.ROLE_ID = def.ROLE_ID
ORDER BY
ROLE_ID DESC)) AS USR_ROLES
FROM
SEC_VS_USER_T,
SEC_USER_ROLE_PRIV_T,
SEC_ROLE_DEF_INFO_T
WHERE
SEC_VS_USER_T.USR_ID = SEC_USER_ROLE_PRIV_T.USR_ID
AND SEC_USER_ROLE_PRIV_T.ROLE_PRIV_ID = SEC_ROLE_DEF_INFO_T.ROLE_ID
AND SEC_VS_USER_T.USR_SITE_GRP_ID IS NULL
ORDER BY UPPER(USR_FIRST_NAME),
UPPER(USR_LAST_NAME))LABEL_USERS)) LABEL_USER_LIST
WHERE LABEL_USER_LIST.RANK >= 0--:bv_Min_Rows
AND LABEL_USER_LIST.RANK < 30--:bv_Max_Rows
How do I loop an Oracle query through dates? I have to put the variable in 4 places. My query starts with WITH ... AS, so I can't use the 'Oracle SQL Loop through Date Range' solution.
I also can't create a temporary table.
Here is my attempt:
WITH d
AS (
SELECT DATE'2015-06-22' + LEVEL - 1 AS current_d
FROM dual
CONNECT BY DATE'2015-06-22' + LEVEL - 1 < DATE'2015-10-04'
),
OrderReserve
AS (
SELECT cvwarehouseid
,lproductid
,SUM(lqty) lqty
FROM ABBICS.iOrdPrdQtyDate
GROUP BY cvwarehouseid
,lproductid
)
SELECT
...
WHERE IORDREFILL.DNCONFIRMEDDELDATE < CAST(TO_CHAR(d.current_d , 'YYYYMMDD') AS NUMBER(38))
...
If I understand you correctly, you assume that you can only use one inline table per query. That is not true: you can use multiple inline tables and expand the existing WITH clause with another one to loop through dates:
with OrderReserve as (
SELECT cvwarehouseid
,lproductid
,SUM(lqty) lqty
FROM ABBICS.iOrdPrdQtyDate
GROUP BY cvwarehouseid
,lproductid
), date_range as (
select sysdate + level as current_d
from dual
connect by level <= 30
)
select *
from OrderReserve, date_range
... -- expand with date_range as you see fit
;
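To stay closer to the question, the date_range CTE can reuse the same date literals, and the numeric comparison value can be derived from it. A runnable sketch using only the tables named in the question (the cross join is just for illustration; the real query would join on the appropriate keys):
with OrderReserve as (
    select cvwarehouseid, lproductid, sum(lqty) lqty
    from ABBICS.iOrdPrdQtyDate
    group by cvwarehouseid, lproductid
), date_range as (
    select date '2015-06-22' + level - 1 as current_d
    from dual
    connect by date '2015-06-22' + level - 1 < date '2015-10-04'
)
select o.cvwarehouseid,
       o.lproductid,
       o.lqty,
       cast(to_char(d.current_d, 'YYYYMMDD') as number(38)) as date_key  -- comparable to DNCONFIRMEDDELDATE
from OrderReserve o
cross join date_range d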
I have a source table with a timestamp column (YYYY.MM.DD HH24:MI:SS) and a target table with rows aggregated on a daily basis (date column: YYYY.MM.DD).
My problem is: how do I bring new data from source to target and aggregate it?
I tried:
select
a.Sales,
trunc(a.timestamp,'DD') as TIMESTAMP,
count(1) as COUNT
from
tbl_Source a
where trunc(a.timestamp,'DD') > nvl((select MAX(b.TIME_TO_DAY)from tbl_target b), to_date('01.01.1975 00:00:00','dd.mm.yyyy hh24:mi:ss'))
group by a.sales,
trunc(a.Timestamp,'DD')
The problem with that: when I have a row with timestamp '2013.11.15 00:01:32' and the max day in the target is the 14th of November, it will only aggregate the 15th (late-arriving rows for the 14th are missed). If I used >= instead of >, some rows would get loaded twice.
It looks like you are looking for a MERGE statement: if the day is already present in tbl_target then update the count, else insert the record. Because matched days are updated in place rather than inserted again, using >= no longer loads rows twice.
merge into tbl_target dest
using
(
select sales, trunc(timestamp) as theday , count(*) as sales_count
from tbl_Source
where trunc(timestamp) >= ( select nvl(max(time_to_day),to_date('01.01.1975','dd.mm.yyyy')) from tbl_target )
group by sales, trunc(timestamp)
) src
on (src.theday = dest.time_to_day)
when matched then update set
dest.sales_count = src.sales_count
when not matched then
insert (time_to_day, sales_count)
values (src.theday, src.sales_count)
;
As far as I can understand your question: you need to get everything since the last reload of the target table.
The problem here: you need that reload date, but it is truncated during the update.
If my guesses are correct, you cannot do anything except store the date of the reload as an additional column, because there is no way to get it back from the data presented here.
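A minimal sketch of that idea, assuming you can alter tbl_target and that newly arriving source rows always carry a timestamp later than the previous load (the load_ts column name is made up):
alter table tbl_target add (load_ts date);

-- filter the source on the untruncated load timestamp, so the
-- boundary day can no longer be skipped or loaded twice
select a.sales,
       trunc(a.timestamp, 'DD') as day_val,
       count(*) as cnt
from tbl_source a
where a.timestamp > (select nvl(max(t.load_ts), to_date('01.01.1975', 'dd.mm.yyyy'))
                     from tbl_target t)
group by a.sales, trunc(a.timestamp, 'DD')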
About your query:
count(*) and count(1) are the same in performance (proved many times, at least in versions 10 and 11); do not write count(1), it looks really ugly
do not use nvl, use coalesce instead; it is much faster
I would write your query like this:
with t as (select max(b.time_to_day) mx from tbl_target b)
select a.sales,trunc(a.timestamp,'dd') as timestamp,count(*) as count
from tbl_source a,t
where trunc(a.timestamp,'dd') > t.mx or t.mx is null
group by a.sales,trunc(a.timestamp,'dd')
Does this fit your needs?
WHERE trunc(a.timestamp,'DD') > nvl((select MAX(b.TIME_TO_DAY) + 1 - 1/(24*60*60) from tbl_target b), to_date('01.01.1975 00:00:00','dd.mm.yyyy hh24:mi:ss'))
i.e. instead of comparing against midnight of the max day (2013-11-14 00:00:00), compare against the last second of that day (2013-11-14 23:59:59)
Update:
This one?
WHERE trunc(a.timestamp,'DD') BETWEEN nvl((select MAX(b.TIME_TO_DAY) from ...), to_date('01.01.1975','dd.mm.yyyy')) AND nvl((select MAX(b.TIME_TO_DAY) + 1 - 1/(24*60*60) from ...), sysdate)