How to calculate longest period between two specific dates in SQL? - oracle

I have a problem with the following task. I have a table Warehouse containing a list of items that a company has in stock. This table contains the columns ItemID, ItemTypeID, InTime and OutTime, where InTime (OutTime) specifies the point in time at which a respective item entered (left) the warehouse. I have to calculate the longest period that the company has gone without an item entering or leaving the warehouse. I am trying to solve it this way:
select MAX(OutTime-InTime) from Warehouse where OutTime is not null
Is my understanding correct? I believe it is not ;)

You want the greatest gap between any two consecutive actions (an item entering or leaving the warehouse). One method is to unpivot the in and out times to rows, then use lag() to get the time of the "previous" action. The final step is aggregation:
select max(x_time - lag_x_time) as max_time_diff
from (
    -- unpivot to one row per in/out event, then fetch the previous event's time
    select x.x_time,
           lag(x.x_time) over (order by x.x_time) as lag_x_time
    from warehouse w
    cross apply (
        select w.in_time as x_time from dual
        union all
        select w.out_time from dual
    ) x
) t
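Note that cross apply requires Oracle 12c or later. On an older release, a plain union all unpivot does the same job; a minimal sketch:
select max(x_time - lag_x_time) as max_time_diff
from (
    select x_time,
           lag(x_time) over (order by x_time) as lag_x_time
    from (
        -- null out_times (items still in the warehouse) are simply skipped
        select in_time as x_time from warehouse
        union all
        select out_time from warehouse where out_time is not null
    )
)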

You can perform date arithmetic directly in Oracle: subtracting one DATE from another yields the difference in days. If you want the result in hours, multiply it by 24.
To calculate the duration in days, alongside all the information in the table:
SELECT round(OutTime - InTime) AS periodDay, Warehouse.*
FROM Warehouse
WHERE OutTime is not null
ORDER BY periodDay DESC
To calculate the duration in hours:
SELECT round((OutTime - InTime)*24) AS periodHour, Warehouse.*
FROM Warehouse
WHERE OutTime is not null
ORDER BY periodHour DESC
round() rounds away the fractional digits; use trunc() if you want to simply drop them instead.
To select only the record with the maximum period:
SELECT *
FROM Warehouse
WHERE (OutTime - InTime) =
( SELECT MAX(OutTime - InTime) FROM Warehouse)
To select only the record with the maximum period, with the period included:
SELECT (OutTime - InTime) AS period, Warehouse.*
FROM Warehouse
WHERE (OutTime - InTime) =
( SELECT MAX(OutTime - InTime) FROM Warehouse)
When finding the longest period this way, the WHERE OutTime is not null condition is not needed: a null OutTime yields a null difference, which MAX() ignores and which never satisfies the equality.
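On Oracle 12c or later, the row-limiting clause can do the same in a single scan of the table; a sketch (the NULLS LAST matters, because Oracle sorts nulls first in descending order):
SELECT (OutTime - InTime) AS period, Warehouse.*
FROM Warehouse
ORDER BY period DESC NULLS LAST
FETCH FIRST 1 ROWS WITH TIES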

SQL Server has DATEDIFF; in Oracle you can simply subtract one date from the other.
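For instance, the difference between two DATE values is a number of days:
select date '2024-02-01' - date '2024-01-01' as days_diff
from dual;  -- 31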
Your code looks OK. Oracle has a Live SQL tool where you can test queries in your browser, which should help you:
https://livesql.oracle.com/

Related

Type-wise summation and subtraction in Oracle

I have two tables for my store and am working on Oracle (see image). The first table describes the transactions in the store; there are two types of transaction (MR & SR): MR means adding products to the store, SR means removing products from storage. What I want is the final closing stock of my storage: the final quantity of every product after the transactions, as shown in the image. I have tried many solutions but couldn't finish it, so I cannot show one now. Please help me to solve this problem. Thanks
You can use a CASE expression, as below, to increase or decrease the quantity based on the type, then group by name and sum the quantity derived from the CASE expression to get your desired result.
select row_number() over (order by a.name) as sl, a.name, sum(a.qntity) as qntity
from (
    -- MR adds stock; anything else (SR) removes it
    select t2.name,
           case when t1.type = 'MR' then t2.qntity else -t2.qntity end as qntity
    from table1 t1
    join table2 t2 on t1.oid = t2.table01_oid
) a
group by a.name;
This query will produce the following result:
SL  NAME    QNTITY
1   Balls   0
2   Books   6
3   Pencil  13

Seeking hints to simplify complex query

I'm building a web page to show summary data and a chart. The query to obtain my summary data appears to be overly complex, and there must be a simpler way to accomplish it. I'm mainly experienced with SQL Server, where getting row- and column-level totals is done within the main query; no unions or subqueries are required unless you are doing something more complex. Under Oracle 10g, however, the query below appears to be the way to accomplish the same thing.
The resulting data is put into a JSON array and populates a v1.10 DataTable.
The source data has a row containing the date, the item and a count of items.
The ending table uses a pivot, becoming 8 columns: 6 for the items, a date, and a row-level total. (I trimmed 2 columns to reduce the clutter in the question.) The final row has the column-level totals and the grand total. Any suggestions welcome.
Here is the query:
SELECT *
FROM (
SELECT TO_CHAR("DATE", 'MM/DD/YYYY') AS "DATE"
, ITEM_NAME
, SUM(ITEM_COUNT) AS TOTAL
FROM MY_VIEW
WHERE 1=1
AND "DATE" > ADD_MONTHS(TO_DATE(SYSDATE, 'DD-MM-RR'), -1)
AND ITEM_NAME IN ('ITEM-01','ITEM-02','ITEM-03','ITEM-04')
GROUP BY "DATE", ITEM_NAME
UNION ALL
SELECT TO_CHAR("DATE", 'MM/DD/YYYY') AS "DATE"
, 'ROW_TOTAL' AS ITEM_NAME
, SUM(ITEM_COUNT) AS TOTAL
FROM MY_VIEW
WHERE 1=1
AND "DATE" > ADD_MONTHS(TO_DATE(SYSDATE, 'DD-MM-RR'), -1)
AND ITEM_NAME IN ('ITEM-01','ITEM-02','ITEM-03','ITEM-04')
GROUP BY "DATE"
)
PIVOT
(
MAX(TOTAL) FOR ITEM_NAME IN ('ITEM-01','ITEM-02','ITEM-03','ITEM-04','ROW_TOTAL')
)
UNION ALL
SELECT *
FROM (
SELECT 'GRAND TOTAL' AS "DATE"
, ITEM_NAME
, SUM(ITEM_COUNT) AS TOTAL
FROM MY_VIEW
WHERE 1=1
AND "DATE" > ADD_MONTHS(TO_DATE(SYSDATE, 'DD-MM-RR'), -1)
AND ITEM_NAME IN ('ITEM-01','ITEM-02','ITEM-03','ITEM-04')
GROUP BY ITEM_NAME
UNION ALL
SELECT 'GRAND TOTAL' AS "DATE"
, 'ROW_TOTAL' AS ITEM_NAME
, SUM(ITEM_COUNT) AS TOTAL
FROM MY_VIEW
WHERE 1=1
AND "DATE" > ADD_MONTHS(TO_DATE(SYSDATE, 'DD-MM-RR'), -1)
AND ITEM_NAME IN ('ITEM-01','ITEM-02','ITEM-03','ITEM-04')
)
PIVOT
(
MAX(TOTAL) FOR ITEM_NAME IN ('ITEM-01','ITEM-02','ITEM-03','ITEM-04', 'ROW_TOTAL')
)
ORDER BY 1
And the end results should look like this:
DATE ITEM-01 ITEM-02 ITEM-03 ITEM-04 ROW_TOTAL
======================================================
4/18/17 1,063,008 460,436 106,715 97,532 1,829,364
4/19/17 1,061,819 479,338 103,946 108,179 1,859,825
4/20/17 1,095,853 536,835 107,437 101,949 1,944,677
4/21/17 1,153,345 642,364 108,940 106,988 2,121,068
4/22/17 1,075,849 633,873 102,459 99,999 2,012,710
4/23/17 913,952 591,783 95,291 100,144 1,794,358
4/24/17 1,036,377 626,043 115,105 98,339 1,977,043
4/25/17 1,079,163 602,237 118,189 100,478 2,001,529
4/26/17 1,110,499 639,640 109,793 103,360 2,069,311
4/27/17 1,119,696 620,081 105,781 108,276 2,061,452
4/28/17 1,125,676 618,763 113,234 96,326 2,057,169
4/29/17 1,026,974 620,059 102,856 96,150 1,940,394
4/30/17 903,913 539,694 83,531 97,073 1,716,114
5/1/17 1,043,598 590,027 100,272 96,519 1,932,843
5/2/17 1,074,912 623,392 101,793 97,724 2,000,981
5/3/17 1,078,865 620,662 101,699 102,900 2,010,014
5/4/17 1,090,501 628,785 110,248 103,593 2,040,658
5/5/17 1,125,984 686,945 128,657 105,356 2,150,037
5/6/17 1,031,267 625,189 117,290 99,358 1,967,819
5/7/17 921,467 551,497 97,482 93,520 1,752,940
5/8/17 1,064,291 624,366 93,463 98,860 1,979,863
5/9/17 1,085,062 661,509 97,791 98,083 2,039,114
5/10/17 1,103,794 634,868 94,364 102,345 2,033,911
5/11/17 1,107,449 617,931 94,420 103,717 2,024,126
5/12/17 1,130,463 647,744 97,616 102,684 2,079,009
5/13/17 1,056,653 621,182 96,743 99,801 1,974,710
5/14/17 970,969 583,865 87,953 97,682 1,831,516
5/15/17 1,075,979 633,102 95,356 101,336 2,003,830
5/16/17 1,094,805 634,421 96,802 99,533 2,026,891
GRAND TOTAL 30,822,183 17,596,631 2,985,226 2,917,804 57,233,276
It might go faster if you use 'analytic queries' to perform the totalling without needing to run separate grouping queries. An example analytic expression might be:
select sum(item_count) over (partition by "DATE")  -- btw, "DATE" is a poor name choice for a column
from my_view
where item_name in (...)
Or alternatively, use 'grouping sets', 'cube' or 'rollup'.
The difference? Analytics establish grouping characteristics that add an extra column to a report, carrying an aggregate over the row's group. Grouping sets, cubes and rollups add extra rows to a report, carrying aggregates of a column.
Apologies for not giving a tested example; grouping sets are an extensive topic requiring in-depth discussion, so a full treatment is partly beyond the scope of this answer, and I have no way to test a query right now. Essentially a grouping set is an instruction akin to "here's a single data set; iterate it once and perform these N different GROUP BY aggregations as you go": here, one grouping would be by date and name (so the single detail lines are output) and the other would be by name (so totals for each name are output).
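A rough, untested sketch of that idea against MY_VIEW from the question (the date filter is assumed; TRUNC(SYSDATE) replaces the question's TO_DATE(SYSDATE, ...), which needlessly converts a DATE twice):
SELECT "DATE", ITEM_NAME, SUM(ITEM_COUNT) AS TOTAL
FROM MY_VIEW
WHERE "DATE" > ADD_MONTHS(TRUNC(SYSDATE), -1)
AND ITEM_NAME IN ('ITEM-01','ITEM-02','ITEM-03','ITEM-04')
GROUP BY GROUPING SETS (
    ("DATE", ITEM_NAME),  -- the detail cells
    ("DATE"),             -- per-date row totals
    (ITEM_NAME),          -- per-item column totals
    ()                    -- the grand total
)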
Then do your pivot. For more info, the 'phrases in quotes' are what you'd look up in the manual or on the web.
All this is a little bit dirty, by the way: your reporting tool front end should really be building this summary rather than Oracle, though doing the grouping (but not the pivoting) in the DB helpfully reduces network traffic.

Oracle tuning for a query with a nested query

I am trying to improve a query. I have a dataset of opened tickets. Every ticket has several rows; every row represents an update of the ticket. There is a field (dt_update) that distinguishes the rows.
I have these indexes on st_remedy_full_light:
IDX_ASSIGNMENT (ASSIGNMENT)
IDX_REMEDY_INC_ID (REMEDY_INC_ID)
IDX_REMDULL_LIGHT_DTUPD (DT_UPDATE)
Right now the query runs in 8 seconds, which is too slow for me.
WITH last_ticket AS
( SELECT *
FROM st_remedy_full_light a
WHERE a.dt_update IN
( SELECT MAX(dt_update)
FROM st_remedy_full_light
WHERE remedy_inc_id = a.remedy_inc_id
)
)
SELECT remedy_inc_id, ASSIGNMENT FROM last_ticket
This is the plan: [execution plan image]
How could I improve this query?
P.S. This is just part of a bigger query.
Additional information:
- The table st_remedy_full_light contains 529,507 rows
You could try:
WITH last_ticket AS
( SELECT remedy_inc_id, ASSIGNMENT,
rank() over (partition by remedy_inc_id order by dt_update desc) rn
FROM st_remedy_full_light a
)
SELECT remedy_inc_id, ASSIGNMENT FROM last_ticket
where rn = 1;
The best alternative query, which is also much cheaper to execute, is this:
select remedy_inc_id
, max(assignment) keep (dense_rank last order by dt_update)
from st_remedy_full_light
group by remedy_inc_id
This will use only one full table scan and a (hash/sort) GROUP BY; no self joins. Don't bother with indexed access, as you'll probably find a full table scan is most appropriate here, unless the table is really wide and a composite index on all the columns used (remedy_inc_id, dt_update, assignment) would be significantly quicker to read than the table.
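If you want to try that, a sketch of such a covering index (the index name is just illustrative):
-- lets the query be answered from the index alone, without touching the table
create index idx_remdull_covering
    on st_remedy_full_light (remedy_inc_id, dt_update, assignment);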

How to avoid expensive Cartesian product using row generator

I'm working on a query (Oracle 11g) that does a lot of date manipulation. Using a row generator, I'm examining each date within a range of dates for each record in another table. Through another query, I know that my row generator needs to generate 8500 dates, and this amount will grow by 365 days each year. Also, the table that I'm examining has about 18000 records, and this table is expected to grow by several thousand records a year.
The problem comes when joining the row generator to the other table to get the range of dates for each record. SQL Tuning Advisor says that there's an expensive Cartesian product, which makes sense given that the query currently could generate up to 8500 x 18000 records. Here's the query in its stripped-down form, without all the date logic etc.:
with n as (
select level n
from dual
connect by level <= 8500
)
select t.id, t.origdate + n origdate
from (
select id, origdate, closeddate
from my_table
) t
join n on origdate + n - 1 <= closeddate -- here's the problem join
order by t.id, t.origdate;
Is there an alternate way to join these two tables without the Cartesian product?
I need to calculate the elapsed time for each of these records, excluding weekends and federal holidays, so that I can sort on the elapsed time. Also, pagination for the table is done server-side, so we can't just load everything into the table and sort client-side.
The maximum age of a record in the system right now is 3656 days, and the average is 560, so it's not quite as bad as 8500 x 18000; but it's still bad.
I've just about resigned myself to adding a field to store the open days, computing the elapsed time once and storing it, and creating a scheduled task to update all open records every night.
I think that you would get better performance if you rewrite the join condition slightly:
with n as (
select level n
from dual
connect by level <= 8500
)
select t.id, t.origdate + n origdate
from (
select id, origdate, closeddate
from my_table
) t
join n on closeddate - origdate + 1 >= n -- here's the rewrite; you could even create a function-based index
order by t.id, t.origdate;
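For reference, a sketch of the function-based index hinted at in that comment (the index name is hypothetical):
-- indexes the expression used in the join predicate
create index idx_my_table_duration on my_table (closeddate - origdate);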

Oracle Daily count/average over a year

I'm pulling two pieces of information over a specific time period, but I would like to fetch the daily average of one tag and the daily count of another tag. I'm not sure how to do daily averages over a specific time period; can anyone provide some advice? Below were my first ideas on how to handle this, however changing every date by hand would be annoying. Any help is appreciated, thanks.
SELECT COUNT(distinct chargeno), to_char(chargetime, 'mmddyyyy') AS chargeend
FROM batch_index WHERE plant=1 AND chargetime>to_date('2012-06-18:00:00:00','yyyy-mm-dd:hh24:mi:ss')
AND chargetime<to_date('2012-07-19:00:00:00','yyyy-mm-dd:hh24:mi:ss')
group by chargetime;
The working version of the daily sum:
SELECT to_char(bi.chargetime, 'mmddyyyy') as chargtime, SUM(cv.val)*0.0005
FROM Charge_Value cv, batch_index bi WHERE cv.ValueID =97
AND bi.chargetime<=to_date('2012-07-19','yyyy-mm-dd')
AND bi.chargeno = cv.chargeno AND bi.typ=1
group by to_char(bi.chargetime, 'mmddyyyy')
Seems like in the first one you want to group by the day, not the time... (plus I don't think you need to specify all those zeros for the seconds):
SELECT COUNT(distinct chargeno), to_char(chargetime, 'mmddyyyy') AS chargeend
FROM batch_index WHERE plant=1 AND chargetime>to_date('2012-06-18','yyyy-mm-dd')
AND chargetime<to_date('2012-07-19','yyyy-mm-dd')
group by to_char(chargetime, 'mmddyyyy') ;
I'm not 100% sure I'm following your question, but if you just want aggregates (sums, averages), then do just that. I threw in the rollup just in case that is what you were looking for:
with fakedata as (
    select trunc(level * .66667) nr          -- these truncs just make the doubles ints
         , trunc(2 * level * .33478) lvl
         , trunc(sysdate + trunc(level * .263784123)) dte  -- note the trunc: it drops the time, so no to_char is needed
    from dual
    connect by level < 600
) -- the CTE just creates fake data
-- below are some aggregates that may help you
select sum(nr) daily_sum_of_nr
     , avg(nr) daily_avg_of_nr
     , count(distinct lvl) distinct_lvls_per_day
     , count(lvl) count_of_nonnull_lvls_per_day
     , dte days
from fakedata
group by rollup(dte)
-- if you want the query to supply a total for the range, use rollup ( http://psoug.org/reference/rollup.html )
