I have an Oracle DB View like:
DATE | PRODUCT_NUMBER | PRODUCT_COUNT | PRODUCT_FACTOR
2018-01-01 | 1 | 10 | 3
2018-03-15 | 1 | 8 | 3
2019-02-11 | 1 | 11 | 3
2019-08-01 | 1 | 5 | 3
2019-08-01 | 2 | 20 | 5
2019-08-02 | 2 | 15 | 5
2019-06-01 | 2 | 5 | 5
2020-07-01 | 2 | 30 | 5
2018-07-07 | 3 | 100 | 2
Where:
DATE is the date of the entry
PRODUCT_NUMBER is a unique product number
PRODUCT_COUNT is the number of items of that product in the storage facility
PRODUCT_FACTOR is the number of products that fit into a storage rack
I now need to know, for every product number, how much the count changed since the previous entry.
Since the first entry of a product has no earlier date to compare to, its change is undefined; NULL, NONE, 0 or the like is fine as long as I can filter those rows out later.
Products with only one entry should be ignored (there is nothing to calculate a difference from).
End result should be:
DATE | PRODUCT_NUMBER | PRODUCT_COUNT | PRODUCT_FACTOR | PRODUCT_CHANGE | CHANGE_FACTOR
2018-01-01 | 1 | 10 | 3 | NULL | NULL
2018-03-15 | 1 | 8 | 3 | 2 # 10-8 | 6 # 2*3
2019-02-11 | 1 | 11 | 3 | -3 # 8-11 | -9 # 3*-3
2019-08-01 | 1 | 5 | 3 | 6 # 11-5 | 18 # 6*3
2019-08-01 | 2 | 20 | 5 | -15 # 5-20 | -75 # -15*5
2019-08-02 | 2 | 15 | 5 | 5 # 20-15 | 25 # 5*5
2019-06-01 | 2 | 5 | 5 | NULL | NULL
2020-07-01 | 2 | 30 | 5 | -15 # 15-30 | -75 # -15*5
How can I achieve this within Oracle SQL?
The end result is a bit unclear:
Why are the values 15 and 5 compared for product_number 2? 2019-06-01 is earlier than 2019-08-01 and should be the first row.
Why is change_factor for product 1 on the first row 3, while for product 2 it is NULL?
Why is change_factor for 2019-02-11 calculated as 11 * 0 instead of 0 * 3?
Assuming all of these are typos (I changed 2019-06-01 to 2019-09-01), you can use something like the below:
select dt, product_number, product_count, product_factor,
       product_change, product_change * product_factor change_factor
from (
    select "DATE" dt, product_number, product_count, product_factor,
           greatest(lag(product_count) over (partition by product_number order by "DATE") - product_count, 0) product_change
    from test_tab t1
    where (select count(1) from test_tab t2 where t1.product_number = t2.product_number and rownum < 3) > 1
)
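If the clamping of negative differences by greatest(..., 0) is not wanted and the negative values in the expected output should be reproduced, a variant that keeps the sign and uses an analytic COUNT instead of the correlated subquery to drop single-entry products could look like this (a sketch against the same test_tab):

select dt, product_number, product_count, product_factor,
       product_change, product_change * product_factor as change_factor
from (
    select "DATE" dt, product_number, product_count, product_factor,
           -- previous count minus current count; NULL for the first row of each product
           lag(product_count) over (partition by product_number order by "DATE") - product_count as product_change,
           -- number of rows per product, used to drop products with a single entry
           count(*) over (partition by product_number) as entries
    from test_tab
)
where entries > 1

The first row of each product keeps NULL in both change columns (LAG returns NULL there), so those rows can still be filtered out later as required.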
fiddle
See also the LAG documentation.
I have 2 tables, duty_sheets:
centerId | centerName | p1 | p2 | p3 | p4 | ...p22 | examId
1 | xyz | 1 | 5 | 8 | 7 | 1 | 1
2 | abc | 9 | 1 | 6 | 6 | 1 | 1
and feedback:
id | centerId | inspectorId | A | B | C | examId
1 | 1 | 1 | 1 | 5 | 8 | 1
2 | 2 | 9 | 9 | 1 | 6 | 1
Here is my code:
$center = DutySheet::select('duty_sheets.centerId', 'duty_sheets.centerName','feedback.id')
->leftJoin('feedback', function ($leftJoin) {
$leftJoin->on('duty_sheets.examId', 'feedback.examId')
->where("duty_sheets.centerId", 'feedback.centerId')
->where("feedback.inspectorId", 1);
})
->where("duty_sheets.examId", 1)
->where("p20", 1)
->get();
dd($center);
to retrieve "All rows from DutySheet where p20 = 1 and dutysheet.examId = 1, and relevant rows from feedback depend on centerId, inspectorId and examId.
The problem is that the query return feedback.id as null while the record exist in feedback table with the ids.
Laravel version = 9
The problem is in the left join:
->where("duty_sheets.centerId", 'feedback.centerId')
This builds a WHERE condition against the literal string value 'feedback.centerId':
duty_sheets.centerId = 'feedback.centerId'
You need to use
->on("duty_sheets.centerId",'=', 'feedback.centerId')
Or
->whereColumn("duty_sheets.centerId", 'feedback.centerId')
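To make the difference visible, this is roughly the SQL each variant compiles to (hypothetical, reconstructed from the builder calls above, not the exact output of the query grammar):

-- Original closure: centerId is compared to the literal string 'feedback.centerId',
-- so no feedback row ever matches and feedback.id comes back NULL
select duty_sheets.centerId, duty_sheets.centerName, feedback.id
from duty_sheets
left join feedback
       on duty_sheets.examId = feedback.examId
      and duty_sheets.centerId = 'feedback.centerId'
      and feedback.inspectorId = 1
where duty_sheets.examId = 1
  and duty_sheets.p20 = 1;

-- With on(...) or whereColumn(...): a real column-to-column comparison
select duty_sheets.centerId, duty_sheets.centerName, feedback.id
from duty_sheets
left join feedback
       on duty_sheets.examId = feedback.examId
      and duty_sheets.centerId = feedback.centerId
      and feedback.inspectorId = 1
where duty_sheets.examId = 1
  and duty_sheets.p20 = 1;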
I have an Oracle query that uses DUAL to produce a list of dates in a subquery, and a case when to identify business days:
SELECT DATES
     , case when to_char(DATES, 'd') in (1, 7)
            then 0
            else 1 end as business_day
FROM (
    SELECT to_date('1/1/2020', 'MM/DD/YYYY') + (LEVEL - 1) AS DATES
    FROM DUAL
    connect by level <= (to_date('1/1/2021', 'MM/DD/YYYY') - to_date('1/1/2020', 'MM/DD/YYYY'))
) L1
So far so good. Now when I nest this in a subquery, and add a row_number() function, all my business_day values become 0. If I remove the row_number() function, business_day goes back to normal.
SELECT L2.DATES
     , L2.business_day
     , row_number() OVER (PARTITION BY L2.business_day ORDER BY L2.DATES ASC) as dateindex
FROM (
    SELECT DATES
         , case when to_char(DATES, 'd') in (1, 7)
                then 0
                else 1 end as business_day
    FROM (
        SELECT to_date('1/1/2020', 'MM/DD/YYYY') + (LEVEL - 1) AS DATES
        FROM DUAL
        connect by level <= (to_date('1/1/2021', 'MM/DD/YYYY') - to_date('1/1/2020', 'MM/DD/YYYY'))
    ) L1
) L2
Any idea how adding a new column causes another's values to change?
I suspect you aren't paying attention to the actual dates; running your code in this db<>fiddle, the first query returns:
DATES | BUSINESS_DAY
:-------- | -----------:
01-JAN-20 | 1
02-JAN-20 | 1
03-JAN-20 | 1
04-JAN-20 | 1
05-JAN-20 | 0
06-JAN-20 | 0
07-JAN-20 | 1
08-JAN-20 | 1
09-JAN-20 | 1
10-JAN-20 | 1
11-JAN-20 | 1
...
while the second returns:
DATES | BUSINESS_DAY | DATEINDEX
:-------- | -----------: | --------:
05-JAN-20 | 0 | 1
06-JAN-20 | 0 | 2
12-JAN-20 | 0 | 3
13-JAN-20 | 0 | 4
19-JAN-20 | 0 | 5
20-JAN-20 | 0 | 6
26-JAN-20 | 0 | 7
27-JAN-20 | 0 | 8
02-FEB-20 | 0 | 9
03-FEB-20 | 0 | 10
...
All the business_day values are indeed zero... or at least they are if you only look at the start of the result set. If you look further down:
...
27-DEC-20 | 0 | 103
28-DEC-20 | 0 | 104
01-JAN-20 | 1 | 1
02-JAN-20 | 1 | 2
03-JAN-20 | 1 | 3
04-JAN-20 | 1 | 4
...
You don't have an order-by clause, and the analytic processing internally happens to return the rows in an order you aren't expecting. If you add an order-by then it looks more sensible, as in this db<>fiddle:
DATES | BUSINESS_DAY | DATEINDEX
:-------- | -----------: | --------:
01-JAN-20 | 1 | 1
02-JAN-20 | 1 | 2
03-JAN-20 | 1 | 3
04-JAN-20 | 1 | 4
05-JAN-20 | 0 | 1
06-JAN-20 | 0 | 2
07-JAN-20 | 1 | 5
08-JAN-20 | 1 | 6
09-JAN-20 | 1 | 7
10-JAN-20 | 1 | 8
11-JAN-20 | 1 | 9
12-JAN-20 | 0 | 3
...
Incidentally, the 'd' format element is NLS-sensitive, so someone else running this code in a session with different settings could see different results. It would be safer to do:
when to_char(DATES, 'Dy', 'NLS_DATE_LANGUAGE=ENGLISH') in ('Sat', 'Sun')
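Putting both points together, a sketch of the full query with an explicit ORDER BY and the NLS-safe weekend test:

SELECT L2.DATES
     , L2.business_day
     , row_number() OVER (PARTITION BY L2.business_day ORDER BY L2.DATES ASC) as dateindex
FROM (
    SELECT DATES
         , case when to_char(DATES, 'Dy', 'NLS_DATE_LANGUAGE=ENGLISH') in ('Sat', 'Sun')
                then 0
                else 1 end as business_day
    FROM (
        SELECT to_date('1/1/2020', 'MM/DD/YYYY') + (LEVEL - 1) AS DATES
        FROM DUAL
        connect by level <= (to_date('1/1/2021', 'MM/DD/YYYY') - to_date('1/1/2020', 'MM/DD/YYYY'))
    ) L1
) L2
ORDER BY L2.DATES;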
How do I use 2 for loops in Hive?
I have input data as below:
1 a 3
15 b 4
1 b 2
25 a 5
15 c 3
1 a 3
15 c 2
25 b 4
Intermediate output: for key 1, sum the counts per letter (a, b); do the same for 15 and 25.
1 a 6
1 b 2
15 b 4
15 c 5
25 a 5
25 b 4
Final output: for each key (1, 15, 25), keep only the letter with the maximum count.
1 a 6
15 c 5
25 a 5
You can use window functions to get the results. Check this out:
> select * from shailesh;
INFO : OK
+----------------+----------------+----------------+--+
| shailesh.col1 | shailesh.col2 | shailesh.col3 |
+----------------+----------------+----------------+--+
| 1 | a | 3 |
| 15 | b | 4 |
| 1 | b | 2 |
| 25 | a | 5 |
| 15 | c | 3 |
| 1 | a | 3 |
| 15 | c | 2 |
| 25 | b | 4 |
+----------------+----------------+----------------+--+
8 rows selected (0.359 seconds)
> create table shailesh2 as select col1, col2, max(col3s) col3s2 from (select col1,col2,sum(col3) over(partition by col1,col2) col3s from shailesh ) t group by col1, col2;
INFO : OK
+-----------------+-----------------+-------------------+--+
| shailesh2.col1 | shailesh2.col2 | shailesh2.col3s2 |
+-----------------+-----------------+-------------------+--+
| 1 | a | 6 |
| 1 | b | 2 |
| 15 | b | 4 |
| 15 | c | 5 |
| 25 | a | 5 |
| 25 | b | 4 |
+-----------------+-----------------+-------------------+--+
6 rows selected (0.36 seconds)
> select col1, col2, col3s2 from (select col1,col2,col3s2, rank() over(partition by col1 order by col3s2 desc) as rk from shailesh2) t2 where rk=1;
INFO : OK
+-------+-------+---------+--+
| col1 | col2 | col3s2 |
+-------+-------+---------+--+
| 1 | a | 6 |
| 15 | c | 5 |
| 25 | a | 5 |
+-------+-------+---------+--+
3 rows selected (37.224 seconds)
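The intermediate table isn't strictly necessary; the same result can come from a single query that aggregates with GROUP BY and then ranks the aggregated counts. A minimal sketch, assuming the same shailesh table (rank() keeps all ties for the top count):

select col1, col2, col3s
from (
    select col1, col2, col3s,
           rank() over (partition by col1 order by col3s desc) as rk
    from (
        -- total count per (col1, col2) pair
        select col1, col2, sum(col3) as col3s
        from shailesh
        group by col1, col2
    ) agg
) ranked
where rk = 1;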
I have a table as shown below:
Date | Customer | Count | Daily_Count | ITD_Count
d1 | A | 3 | 3 |
d2 | B | 4 | 4 |
d3 | A | 7 | 16 |
d3 | B | 9 | 16 |
d4 | A | 8 | 9 |
d4 | B | 1 | 9 |
Description of fields:
Date: date
Customer: name of the customer
Count: # of customers
Daily_Count: # of customers per day, calculated as
SUM(Count) OVER (PARTITION BY Date) AS Daily_Count
Question:
How do I calculate the running total (rolling total) for the ITD_Count column?
The output should look like
Date | Customer | Count | Daily_Count | ITD_Count
d1 | A | 3 | 3 | 3
d2 | B | 4 | 4 | 7
d3 | A | 7 | 16 | 23
d3 | B | 9 | 16 | 23
d4 | A | 8 | 9 | 31
d4 | B | 1 | 9 | 31
I have tried several variations using the window functionality, but hit a road-block in all my attempts.
Attempt 1:
SUM(daily_COunt) OVER (partition BY date order by date rows between unbounded preceding and current row ) as ITD_account_linking
Attempt 2:
SUM(daily_COunt) OVER (partition BY date, daily_count order by date rows between unbounded preceding and current row ) as ITD_account_linking
and several more attempts following this. :(
Any possible suggestions to guide me in the right direction are welcome.
Please let me know if you need more details.
Use Hive Windowing and Analytics functions.
SELECT Date, Customer, Count, Daily_Count,
SUM(Daily_Count) OVER (ORDER BY Date ROWS UNBOUNDED PRECEDING) AS ITD_Count
FROM table;
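One caveat: with several rows per date, as in the sample data, a ROWS frame adds each row's Daily_Count separately, so tied dates get different, inflated running totals (the two d3 rows would come out as 23 and 39). If every row of a date should show the same ITD_Count, one option is a RANGE frame over Count; a sketch assuming the same (placeholder) table and column names:

SELECT Date, Customer, Count, Daily_Count,
       -- RANGE groups all rows with the same Date into one frame step,
       -- so both d3 rows get 3 + 4 + 7 + 9 = 23
       SUM(Count) OVER (ORDER BY Date RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS ITD_Count
FROM table;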
If the ID is even I must sort the values that belong to that ID DESC; if the ID is odd I must sort them ASC. This is the table, called Grades:
ID|COL1|COL2|COL3|COL4|COL5|COL6|COL7|
1 | 6 | 3 | 8 | 4 | 7 | 8 | 4 |
2 | 5 | 7 | 9 | 2 | 1 | 7 | 8 |
3 | 2 | 7 | 4 | 8 | 1 | 5 | 9 |
4 | 8 | 4 | 7 | 9 | 4 | 1 | 4 |
5 | 7 | 5 | 2 | 5 | 2 | 6 | 4 |
The result must be this:
ID|COL1|COL2|COL3|COL4|COL5|COL6|COL7|
1 | 3 | 4 | 4 | 6 | 7 | 8 | 8 |
2 | 9 | 8 | 7 | 7 | 5 | 2 | 1 |
3 | 1 | 2 | 4 | 5 | 7 | 8 | 9 |
4 | 9 | 8 | 7 | 4 | 4 | 4 | 1 |
5 | 2 | 2 | 4 | 5 | 5 | 6 | 7 |
As you can see, ID=1 is an odd number, so its values must be sorted ASC.
This is the code so far:
declare
    type grades_array is table of grades%rowtype index by pls_integer;
    grades_a grades_array;
    cnt number;
begin
    Select count(id) into cnt from grades;
    For i in 1..cnt loop
        -- I used an associative array
        Select * into grades_a(i) from grades where grades.id = i;
    end loop;
    For i in grades_a.FIRST..grades_a.LAST loop
        if (mod(grades_a(i).id, 2) = 1) then .......
            -- I don't know how to sort the specific rows, in this case ASC
            -- dbms_output.put_line(grades_a(i).col1);
        end if;
    end loop;
    -- Also it is specified in the exercise that the table can change, e.g add more columns
end;
I would simply use PIVOT/UNPIVOT for this.
First UNPIVOT the table and assign a rank to each column value in ascending/descending order.
SQL Fiddle
Query 1:
SELECT id,
colval,
ROW_NUMBER () OVER (
PARTITION BY id
ORDER BY CASE MOD (id, 2) WHEN 1 THEN colval END,
CASE MOD (id, 2) WHEN 0 THEN colval END DESC) r
FROM x UNPIVOT (colval FOR colname
IN (col1 AS 'col1', col2 AS 'col2', col3 AS 'col3', col4 AS 'col4',
col5 AS 'col5', col6 AS 'col6', col7 AS 'col7')
)
Results:
| ID | COLVAL | R |
|----|--------|---|
| 1 | 3 | 1 |
| 1 | 4 | 2 |
| 1 | 4 | 3 |
| 1 | 6 | 4 |
| 1 | 7 | 5 |
| 1 | 8 | 6 |
| 1 | 8 | 7 |
| 2 | 9 | 1 |
| 2 | 8 | 2 |
| 2 | 7 | 3 |
| 2 | 7 | 4 |
| 2 | 5 | 5 |
| 2 | 2 | 6 |
| 2 | 1 | 7 |
| 3 | 1 | 1 |
| 3 | 2 | 2 |
| 3 | 4 | 3 |
| 3 | 5 | 4 |
| 3 | 7 | 5 |
| 3 | 8 | 6 |
| 3 | 9 | 7 |
| 4 | 9 | 1 |
| 4 | 8 | 2 |
| 4 | 7 | 3 |
| 4 | 4 | 4 |
| 4 | 4 | 5 |
| 4 | 4 | 6 |
| 4 | 1 | 7 |
| 5 | 2 | 1 |
| 5 | 2 | 2 |
| 5 | 4 | 3 |
| 5 | 5 | 4 |
| 5 | 5 | 5 |
| 5 | 6 | 6 |
| 5 | 7 | 7 |
Then PIVOT the result based on the rank.
Query 2:
WITH pivoted AS (
SELECT id,
colval,
ROW_NUMBER () OVER (
PARTITION BY id
ORDER BY CASE MOD (id, 2) WHEN 1 THEN colval END,
CASE MOD (id, 2) WHEN 0 THEN colval END DESC) r
FROM x UNPIVOT (colval FOR colname
IN (col1 AS 'col1', col2 AS 'col2', col3 AS 'col3', col4 AS 'col4',
col5 AS 'col5', col6 AS 'col6', col7 AS 'col7')
)
)
SELECT * FROM pivoted
PIVOT (MAX (colval)
FOR r
IN (1 AS col1, 2 AS col2, 3 AS col3, 4 AS col4,
5 AS col5, 6 AS col6, 7 AS col7))
Results:
| ID | COL1 | COL2 | COL3 | COL4 | COL5 | COL6 | COL7 |
|----|------|------|------|------|------|------|------|
| 1 | 3 | 4 | 4 | 6 | 7 | 8 | 8 |
| 2 | 9 | 8 | 7 | 7 | 5 | 2 | 1 |
| 3 | 1 | 2 | 4 | 5 | 7 | 8 | 9 |
| 4 | 9 | 8 | 7 | 4 | 4 | 4 | 1 |
| 5 | 2 | 2 | 4 | 5 | 5 | 6 | 7 |
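Since the exercise says the table can change (e.g. more columns can be added), note that the IN lists of UNPIVOT and PIVOT above are static, so both would have to be edited whenever a column is added. One way around that is to build the statement dynamically from the data dictionary; a rough PL/SQL sketch, assuming the table is named GRADES and every column except ID takes part in the sort:

declare
    l_unpivot_list varchar2(4000);
    l_pivot_list   varchar2(4000);
    l_sql          varchar2(32767);
    l_cur          sys_refcursor;
    l_pos          pls_integer := 0;
begin
    -- build "COL1 as 'COL1', COL2 as 'COL2', ..." and "1 as COL1, 2 as COL2, ..."
    for c in (select column_name
              from user_tab_columns
              where table_name = 'GRADES'
                and column_name <> 'ID'
              order by column_id) loop
        l_pos := l_pos + 1;
        l_unpivot_list := l_unpivot_list || case when l_pos > 1 then ', ' end
                          || c.column_name || ' as ''' || c.column_name || '''';
        l_pivot_list   := l_pivot_list || case when l_pos > 1 then ', ' end
                          || l_pos || ' as ' || c.column_name;
    end loop;

    -- same UNPIVOT/ranking/PIVOT logic as above, but with generated column lists
    l_sql := 'with ranked as (
                select id, colval,
                       row_number() over (
                         partition by id
                         order by case mod(id, 2) when 1 then colval end,
                                  case mod(id, 2) when 0 then colval end desc) r
                from grades
                unpivot (colval for colname in (' || l_unpivot_list || '))
              )
              select * from ranked
              pivot (max(colval) for r in (' || l_pivot_list || '))';

    open l_cur for l_sql;  -- hand the cursor back to the caller / client
end;
/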