Merge Columns into One Column in Oracle PL/SQL

I have the following script and tables; running the script produces output with the columns LOG_ID, YEAR, WA.SUB_DIVISION, AI.SUB_DIVISION, EA.SUB_DIVISION and FI.SUB_DIVISION.
Is it possible to merge the four columns WA.SUB_DIVISION, AI.SUB_DIVISION, EA.SUB_DIVISION and FI.SUB_DIVISION into a single SUB_DIVISION column? I'm not sure how to proceed.
I have created a sample SQL fiddle:
https://dbfiddle.uk/?rdbms=oracle_18&fiddle=3c4abb924462dcf5e5f8b0f91019b6b6
select distinct L.LOG_ID,
       FC.LOG_YR as YEAR,
       WA.SUB_DIVISION,
       AI.SUB_DIVISION AS SUB_DIV,
       EA.SUB_DIVISION AS SUB_DIV3,
       FI.SUB_DIVISION AS SUB_DIV4
FROM FINAL_CALENDAR FC
JOIN LOG L
  ON TO_DATE(TO_CHAR(L.LOG_DATE, 'MM/DD/YYYY'), 'MM/DD/YYYY') = FC.CAL_DATE
LEFT OUTER JOIN LOG_WATER WA
  ON WA.LOG_ID = L.LOG_ID
LEFT OUTER JOIN LOG_AIR AI
  ON AI.LOG_ID = L.LOG_ID
LEFT OUTER JOIN LOG_EARTH EA
  ON EA.LOG_ID = L.LOG_ID
LEFT OUTER JOIN LOG_FIRE FI
  ON FI.LOG_ID = L.LOG_ID
Actual output (the issue):
LOG_ID YEAR SUB_DIVISION SUB_DIV SUB_DIV3 SUB_DIV4
990741 2020 NULL NULL NULL NULL
990742 2020 NULL NULL NULL NULL
991122 2020 NULL NULL NULL NULL
991123 2020 NULL NULL NULL NULL
994461 2020 NULL 4 NULL NULL
994468 2020 NULL 2 NULL NULL
994466 2020 NULL 2 NULL NULL
994480 2020 8 NULL NULL NULL
994479 2020 8 NULL NULL NULL
994476 2020 6 NULL NULL NULL
994478 2020 6 NULL NULL NULL
994440 2020 NULL NULL NULL NULL
994432 2020 NULL NULL NULL NULL
994450 2020 NULL NULL NULL NULL
994154 2020 NULL NULL NULL NULL
Desired output:
LOG_ID YEAR SUB_DIVISION DISPLAY_NAME
990741 2020 NULL NULL
990742 2020 NULL NULL
991122 2020 NULL NULL
991123 2020 NULL NULL
994461 2020 4 Triangle
994468 2020 2 Circle
994466 2020 2 Circle
994480 2020 8 Rhombus
994479 2020 8 Rhombus
994476 2020 6 Dot
994478 2020 6 Dot
994440 2020 NULL NULL
994432 2020 NULL NULL
994450 2020 NULL NULL
994154 2020 NULL NULL
Table LOG;
LOG_ID, LOG_DATE,
990741, to_date('21-JAN-20','DD-MON-RR')
990742 21-JAN-20
991122 24-JAN-20
991123 25-JAN-20
994461 25-JAN-20
994468 25-JAN-20
994466 25-JAN-20
994480 25-JAN-20
994479 25-JAN-20
994476 25-JAN-20
994478 25-JAN-20
994440 25-JAN-20
994432 25-JAN-20
994450 25-JAN-20
994154 25-JAN-20
TABLE FINAL_CALENDAR;
CAL_DATE CAL_MONTH LOG_YR
21-JAN-20 1 2020
21-JAN-20 1 2020
24-JAN-20 1 2020
25-JAN-20 1 2020
25-JAN-20 1 2020
25-JAN-20 1 2020
25-JAN-20 1 2020
25-JAN-20 1 2020
25-JAN-20 1 2020
25-JAN-20 1 2020
25-JAN-20 1 2020
25-JAN-20 1 2020
25-JAN-20 1 2020
25-JAN-20 1 2020
25-JAN-20 1 2020
TABLE LOG_AIR;
ID LOG_ID SUB_DIVISION
134 994468 2
132 994461 4
133 994466 2
TABLE LOG_WATER;
ID LOG_ID SUB_DIVISION
9345 994480 8
9344 994479 8
9342 994476 6
9343 994478 6
TABLE LOG_EARTH;
ID LOG_ID SUB_DIVISION
0118 994440 null
0117 994432 null
TABLE LOG_FIRE;
ID LOG_ID SUB_DIVISION
706 994450 null
705 994154 null
TABLE Z_SUB_DIVISION_TYPE;
SUB_DIVISION DISPLAY_NAME
1 Parallelogram
2 Circle
3 Square
4 Triangle
5 Tangent
6 Dot
7 Line
8 Rhombus
9 Trapezium

If that's the final result you want, you can merge the last four (sub-division) columns, assuming at most one of them has a value per row and the rest are NULL:
SELECT log_id,
       year,
       CASE
         WHEN sub_division IS NULL AND sub_div3 IS NULL AND sub_div4 IS NULL THEN sub_div
         WHEN sub_division IS NULL AND sub_div IS NULL AND sub_div4 IS NULL THEN sub_div3
         WHEN sub_division IS NULL AND sub_div IS NULL AND sub_div3 IS NULL THEN sub_div4
         ELSE sub_division
       END AS sub_division,
       display_name
FROM (SELECT DISTINCT L.log_id,
             FC.log_yr AS year,
             WA.sub_division,
             AI.sub_division AS sub_div,
             EA.sub_division AS sub_div3,
             FI.sub_division AS sub_div4,
             (SELECT display_name
                FROM z_sub_division_type a
               WHERE a.sub_division = WA.sub_division
                  OR a.sub_division = AI.sub_division
                  OR a.sub_division = EA.sub_division
                  OR a.sub_division = FI.sub_division) AS display_name
        FROM final_calendar FC
        JOIN log L
          ON To_date(To_char(L.log_date, 'MM/DD/YYYY'), 'MM/DD/YYYY') = FC.cal_date
        LEFT OUTER JOIN log_water WA
          ON WA.log_id = L.log_id
        LEFT OUTER JOIN log_air AI
          ON AI.log_id = L.log_id
        LEFT OUTER JOIN log_earth EA
          ON EA.log_id = L.log_id
        LEFT OUTER JOIN log_fire FI
          ON FI.log_id = L.log_id)
Edit 1: This SQL works assuming only one column has a value and the rest are NULL.
Edit 2: You can replace the CASE expression with COALESCE:
coalesce(sub_division,sub_div,sub_div3,sub_div4) as sub_division
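For completeness, here is a minimal sketch of the whole simplified query using COALESCE, with the scalar subquery swapped for a left join to z_sub_division_type (my rewrite, not tested against the fiddle; it assumes, as above, that at most one of the four sub-division columns is populated per log):
SELECT DISTINCT L.log_id,
       FC.log_yr AS year,
       -- first non-NULL sub_division across the four detail tables
       COALESCE(WA.sub_division, AI.sub_division, EA.sub_division, FI.sub_division) AS sub_division,
       Z.display_name
FROM final_calendar FC
JOIN log L
  ON To_date(To_char(L.log_date, 'MM/DD/YYYY'), 'MM/DD/YYYY') = FC.cal_date
LEFT OUTER JOIN log_water WA ON WA.log_id = L.log_id
LEFT OUTER JOIN log_air AI ON AI.log_id = L.log_id
LEFT OUTER JOIN log_earth EA ON EA.log_id = L.log_id
LEFT OUTER JOIN log_fire FI ON FI.log_id = L.log_id
LEFT OUTER JOIN z_sub_division_type Z
  ON Z.sub_division = COALESCE(WA.sub_division, AI.sub_division, EA.sub_division, FI.sub_division)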

Related

How to fill in null values with previous values (Snowflake db)

I have CTEs that result in a table similar to this:
US_DATE_TIME             US_PRICE  NON_US_DATE_TIME         NON_US_PRICE
NULL                     NULL      2022-06-08 14:40:13.762  NULL
2022-03-03 15:02:05.963  11        NULL                     NULL
NULL                     NULL      2022-06-28 21:58:43.558  14
2022-03-03 15:42:08.203  41        NULL                     NULL
2022-06-08 21:57:07.909  10        2022-03-03 15:00:21.814  14
NULL                     NULL      NULL                     38
I would like to show the changes in price for the two columns US_PRICE and NON_US_PRICE. For example, in the US_PRICE column the price was originally NULL, then became 11 and stayed at 11 until it changed to 41, so row 3 of US_PRICE should show 11 instead of NULL and, similarly, the last row should show 10. I would like to apply the same to the NON_US_PRICE column, where the price was NULL until it became 14, so the 4th and 5th rows should also show 14 (instead of NULL) until it became 38. Is there a way I can do this? Would it be something like lag()? The desired result is:
US_DATE_TIME             US_PRICE  NON_US_DATE_TIME         NON_US_PRICE
NULL                     NULL      2022-06-08 14:40:13.762  NULL
2022-03-03 15:02:05.963  11        NULL                     NULL
NULL                     11        2022-06-28 21:58:43.558  14
2022-03-03 15:42:08.203  41        NULL                     14
2022-06-08 21:57:07.909  10        2022-03-03 15:00:21.814  14
NULL                     10        NULL                     38
As Greg is trying to point out, to "get the last value" you must have something that sorts the data.
For your data I have skipped asking "how are you going to sort it?" and simply added a row_order column (you can work out that part yourself). So, with this data:
WITH data(row_order, STOCK_id, US_DATE_TIME, US_PRICE, NON_US_DATE_TIME, NON_US_PRICE) as (
SELECT *
FROM VALUES
(1, 1, NULL, NULL, '2022-06-08 14:40:13.762', NULL),
(2, 1, '2022-03-03 15:02:05.963', 11, NULL, NULL),
(3, 1, NULL, NULL, '2022-06-28 21:58:43.558', 14),
(4, 1, '2022-03-03 15:42:08.203', 41, NULL, NULL),
(5, 1, '2022-06-08 21:57:07.909', 10, '2022-03-03 15:00:21.814', 14),
(6, 1, NULL, NULL, NULL, 38)
)
we can now do the fill per stock_id, ordered by row_order, thus:
select
row_order
,US_DATE_TIME
,NVL(US_PRICE, LAG(US_PRICE) IGNORE NULLS OVER (partition by stock_id order by row_order)) as US_PRICE
,NON_US_DATE_TIME
,NVL(NON_US_PRICE, LAG(NON_US_PRICE) IGNORE NULLS OVER (partition by stock_id order by row_order)) as NON_US_PRICE
from data
order by row_order;
which gives:
ROW_ORDER  US_DATE_TIME             US_PRICE  NON_US_DATE_TIME         NON_US_PRICE
1          null                     null      2022-06-08 14:40:13.762  null
2          2022-03-03 15:02:05.963  11        null                     null
3          null                     11        2022-06-28 21:58:43.558  14
4          2022-03-03 15:42:08.203  41        null                     14
5          2022-06-08 21:57:07.909  10        2022-03-03 15:00:21.814  14
6          null                     10        null                     38
This uses two functions: NVL (functionally the same as a two-argument COALESCE) and LAG with the IGNORE NULLS clause.
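As a side note, the same fill can be written with LAST_VALUE and an explicit cumulative frame. This is only a sketch against the data CTE above, and it assumes Snowflake accepts an explicit ROWS frame on LAST_VALUE ... IGNORE NULLS here, so verify it in your environment:
select
    row_order
    ,US_DATE_TIME
    -- last non-null price seen so far within the stock_id partition
    ,LAST_VALUE(US_PRICE) IGNORE NULLS OVER (
        partition by stock_id order by row_order
        rows between unbounded preceding and current row) as US_PRICE
    ,NON_US_DATE_TIME
    ,LAST_VALUE(NON_US_PRICE) IGNORE NULLS OVER (
        partition by stock_id order by row_order
        rows between unbounded preceding and current row) as NON_US_PRICE
from data
order by row_order;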

rownum in Oracle sql with group by

I need to build a query to retrieve information grouped by member and expiration date, but I need a sequence number for every member.
So for example:
If Member "A" has 3 records to expire, "B" has only 1 and "C" has 2, I need a result like this:
Number Member ExpDate
1 A 01/01/2020
2 A 02/01/2020
3 A 03/01/2020
1 B 01/01/2020
1 C 01/01/2020
2 C 02/01/2020
My query now is:
SELECT ROW_NUMBER() OVER (ORDER BY TRUNC(EXPIRATION_DT) ASC) AS SEQUENCE,
       MEMBER_ID AS MEMBER,
       SUM(ACCRUALED_VALUE) - SUM(USED_VALUE) AS POINTS,
       TRUNC(EXPIRATION_DT) AS EXPDATE
FROM TABLE1
WHERE EXPIRATION_DT > SYSDATE AND EXPIRATION_DT < SYSDATE + 90
GROUP BY MEMBER_ID, TRUNC(EXPIRATION_DT)
HAVING SUM(ACCRUALED_VALUE) - SUM(USED_VALUE) > 0
ORDER BY 4 ASC;
But I can't "group" the sequence number... The result now is:
Seq Mem Points Date
1 1-O 188 2018-03-01 00:00:00
2 1-C 472 2018-03-01 00:00:00
3 1-A 485 2018-03-01 00:00:00
4 1-1 267 2018-03-01 00:00:00
5 1-E 500 2018-03-01 00:00:00
6 1-P 55 2018-03-01 00:00:00
7 1-E 14 2018-03-01 00:00:00
I think you need the DENSE_RANK window function. Try this (partitioning by MEMBER_ID rather than the MEMBER alias, which cannot be referenced elsewhere in the same SELECT):
SELECT DENSE_RANK() OVER (PARTITION BY MEMBER_ID ORDER BY TRUNC(EXPIRATION_DT) ASC) AS SEQUENCE
,MEMBER_ID AS MEMBER
,SUM(ACCRUALED_VALUE) - SUM(USED_VALUE) AS POINTS
,trunc(EXPIRATION_DT) AS EXPDATE
FROM TABLE1
WHERE EXPIRATION_DT > SYSDATE AND EXPIRATION_DT < SYSDATE + 90
GROUP BY MEMBER_ID
,TRUNC(EXPIRATION_DT)
HAVING SUM(ACCRUALED_VALUE) - SUM(USED_VALUE) > 0
ORDER BY 4 ASC;
with g as (
  select MEMBER_ID,
         TRUNC(EXPIRATION_DT) as EXPDATE,
         SUM(ACCRUALED_VALUE) - SUM(USED_VALUE) as POINTS
  From TABLE1
  group by MEMBER_ID
          ,TRUNC(EXPIRATION_DT)
  HAVING SUM(ACCRUALED_VALUE) - SUM(USED_VALUE) > 0 ---- etc
)
select rownum, g.* From g
This SELECT returns the sequence number as the first column.
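For the exact Number / Member / ExpDate shape shown in the question, a ROW_NUMBER variant of the DENSE_RANK answer above would look roughly like this (a sketch against the same assumed TABLE1 columns; SEQ restarts at 1 for each member):
SELECT ROW_NUMBER() OVER (PARTITION BY MEMBER_ID
                          ORDER BY TRUNC(EXPIRATION_DT) ASC) AS SEQ,
       MEMBER_ID AS MEMBER,
       SUM(ACCRUALED_VALUE) - SUM(USED_VALUE) AS POINTS,
       TRUNC(EXPIRATION_DT) AS EXPDATE
FROM TABLE1
WHERE EXPIRATION_DT > SYSDATE AND EXPIRATION_DT < SYSDATE + 90
GROUP BY MEMBER_ID, TRUNC(EXPIRATION_DT)
HAVING SUM(ACCRUALED_VALUE) - SUM(USED_VALUE) > 0
ORDER BY MEMBER_ID, EXPDATE;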

Oracle: Combine Two Tables with Different Columns

This is table 1:
col_1 col_2 date_1
----- ----- ------
1 3 2016
2 4 2015
And this is table 2:
col_3 col_4 date_2
----- ----- ------
5 8 2014
6 9 2012
I want a result like this:
col_1 col_2 col_3 col_4 date_1 date_2
----- ----- ----- ----- ------ ------
1 3 NULL NULL 2016 NULL
2 4 NULL NULL 2015 NULL
NULL NULL 5 8 NULL 2014
NULL NULL 6 9 NULL 2012
Any solutions?
Using UNION ALL with NULL placeholders for the missing columns:
SELECT col_1, col_2, NULL as col_3, NULL as col_4,
date_1, NULL as date_2
FROM table_1
Union All
SELECT NULL, NULL, col_3, col_4, NULL, date_2
FROM table_2
Use union all:
select col_1, col_2, NULL as col_3, NULL as col_4, date_1, NULL as date_2
from table1
union all
select NULL, NULL, col_3, col_4, NULL, date_2
from table2;
Using a full join:
select t1.col_1,t1.col_2,t2.col_3,t2.col_4,t1.date_1,t2.date_2
from t1
full join t2
on t1.col_1=t2.col_3
order by t1.col_1;

Convert columns to rows in oracle [duplicate]

This question already has answers here: Oracle SQL pivot query (4 answers). Closed 8 years ago.
This is my table in Oracle 11g:
date      qty1  qty2  qty3  qty4
2-Feb-14  61    64    52    54
2-Mar-14  124   130   149   156
I want to convert it into the following table, i.e. add 7 days to the date for each successive qty column and transpose the quantities. I have such columns up to qty52.
date       qty
2-Feb-14   61
9-Feb-14   64
16-Feb-14  52
23-Feb-14  54
2-Mar-14   124
9-Mar-14   130
16-Mar-14  149
23-Mar-14  156
have a try:
WITH t(my_date, val, val2, val3, val4)
AS (
SELECT to_date('01/01/2014 12:00:00 AM', 'dd/mm/yyyy hh:mi:ss am'), 1,2,3,4 from dual
UNION ALL
SELECT to_date('01/02/2014 12:00:00 AM', 'dd/mm/yyyy hh:mi:ss am'), 5,6,7,8 FROM dual
)
SELECT (my_date-7) + (row_number() OVER (partition by my_date ORDER BY my_date)*7) my_date, value as qty
FROM (
( SELECT my_date, val, val2, val3, val4 FROM t
) unpivot ( value FOR value_type IN (val, val2, val3, val4) ) );
output:
MY_DATE QTY
----------------------- ----------
01/01/2014 12:00:00 AM 1
08/01/2014 12:00:00 AM 2
15/01/2014 12:00:00 AM 3
22/01/2014 12:00:00 AM 4
01/02/2014 12:00:00 AM 5
08/02/2014 12:00:00 AM 6
15/02/2014 12:00:00 AM 7
22/02/2014 12:00:00 AM 8
select date,qty from
(select date,qty1 as qty
from tbl
union
select date+7 as date,qty2 as qty
from tbl
union
select date+14 as date,qty3 as qty
from tbl
union
select date+21 as date,qty4 as qty
from tbl)
order by date
If you've got Oracle 11g, I'd look at doing it with UNPIVOT.
select
start_date + to_number(week_number) * 7,
qty
from (
select *
from quantity_data
unpivot (qty for week_number
in (qty1 as '0', qty2 as '1', qty3 as '2', qty4 as '3'))
)
This is an alternative to the example from ajmalmhd04, using to_number instead of the row_number analytic function. The answer from ajmalmhd04 is probably more generic, though.
If you haven't got Oracle 11g then try this for an option:
with pivot_data as (
select 0 as pivot_col from dual union all
select 1 from dual union all
select 2 from dual union all
select 3 from dual
)
select
start_date + (7 * pivot_col) as start_date,
case
when pivot_col = 0 then qty1
when pivot_col = 1 then qty2
when pivot_col = 2 then qty3
when pivot_col = 3 then qty4 end as qty
from
quantity_data cross join pivot_data
order by 1
Try this
with tab(date_d,qty1,qty2,qty3,qty4) as (
select '2-Feb-14',61,64,52,54 from dual union all
select '2-Mar-14',124,130,149,156 from dual),
tab2(dd, ss) as (select date_d, qty1||','||qty2||','||qty3||','||qty4 from tab)
select to_date(dd) + ((level-1) * 7) "DATE", regexp_substr(ss, '[^(,)]+', 1, level) "QTY"
from tab2
connect by level <= length(ss) - length(replace(ss, ',')) + 1
and prior ss = ss
and prior sys_guid() is not null
output
| DATE | QTY |
|---------------------------------|-----|
| March, 02 2014 00:00:00+0000 | 124 |
| March, 09 2014 00:00:00+0000 | 130 |
| March, 16 2014 00:00:00+0000 | 149 |
| March, 23 2014 00:00:00+0000 | 156 |
| February, 02 2014 00:00:00+0000 | 61 |
| February, 09 2014 00:00:00+0000 | 64 |
| February, 16 2014 00:00:00+0000 | 52 |
| February, 23 2014 00:00:00+0000 | 54 |
Let me know if it meets your requirement.

Oracle SELECT query: collapsing null values when pairing up dates

I have the following Oracle query:
SELECT id,
DECODE(state, 'Open', state_in, NULL) AS open_in,
DECODE(state, 'Not Open', state_in, NULL) AS open_out
FROM (
SELECT id,
CASE WHEN state = 'Open'
THEN 'Open'
ELSE 'Not Open'
END AS state,
TRUNC(state_time) AS state_in
FROM ...
)
This gives me data like the following:
id  open_in              open_out
1   2009-03-02 00:00:00
1                        2009-03-05 00:00:00
1   2009-03-11 00:00:00
1                        2009-03-26 00:00:00
1                        2009-03-24 00:00:00
1                        2009-04-13 00:00:00
What I would like is data like this:
id open_in open_out
1 2009-03-02 00:00:00 2009-03-05 00:00:00
1 2009-03-11 00:00:00 2009-03-24 00:00:00
That is, keep all the unique pairs of id/open_in and pair with them the earliest open_out that follows open_in. There can be any number of unique open_in values for a given id, and any number of unique open_out values. It is possible that a unique id/open_in will not have a matching open_out value, in which case open_out should be null for that row.
I feel like some analytic function, maybe LAG or LEAD, would be useful here. Perhaps I need MIN used with a PARTITION.
It can be done a little bit simpler. First let's create a sample table:
SQL> create table mytable (id,state,state_time)
2 as
3 select 1, 'Open', date '2009-03-02' from dual union all
4 select 1, 'Closed', date '2009-03-05' from dual union all
5 select 1, 'Open', date '2009-03-11' from dual union all
6 select 1, 'Shut down', date '2009-03-26' from dual union all
7 select 1, 'Wiped out', date '2009-03-24' from dual union all
8 select 1, 'Demolished', date '2009-04-13' from dual
9 /
Table created.
The data equals the output of your select statement:
SQL> SELECT id,
2 DECODE(state, 'Open', state_in, NULL) AS open_in,
3 DECODE(state, 'Not Open', state_in, NULL) AS open_out
4 FROM (
5 SELECT id,
6 CASE WHEN state = 'Open'
7 THEN 'Open'
8 ELSE 'Not Open'
9 END AS state,
10 TRUNC(state_time) AS state_in
11 FROM mytable
12 )
13 /
        ID OPEN_IN             OPEN_OUT
---------- ------------------- -------------------
         1 02-03-2009 00:00:00
         1                     05-03-2009 00:00:00
         1 11-03-2009 00:00:00
         1                     26-03-2009 00:00:00
         1                     24-03-2009 00:00:00
         1                     13-04-2009 00:00:00
6 rows selected.
And here is the slightly easier query:
SQL> select id
2 , min(case when state = 'Open' then state_time end) open_in
3 , min(case when state != 'Open' then state_time end) open_out
4 from ( select id
5 , state
6 , state_time
7 , max(x) over (partition by id order by state_time) grp
8 from ( select id
9 , state
10 , state_time
11 , case state
12 when 'Open' then
13 row_number() over (partition by id order by state_time)
14 end x
15 from mytable
16 )
17 )
18 group by id
19 , grp
20 order by id
21 , open_in
22 /
ID OPEN_IN OPEN_OUT
---------- ------------------- -------------------
1 02-03-2009 00:00:00 05-03-2009 00:00:00
1 11-03-2009 00:00:00 24-03-2009 00:00:00
2 rows selected.
Regards,
Rob.
I think Stack Overflow must be inspirational, or at least it helps me think more clearly. After struggling with this all day, I finally got it:
SELECT id,
open_in,
open_out
FROM (
SELECT id,
open_in,
LAG(open_out, times_opened) OVER (PARTITION BY id
ORDER BY open_out DESC
NULLS LAST) AS open_out
FROM (
SELECT id,
open_in,
open_out,
COUNT(DISTINCT open_in) OVER (PARTITION BY id)
AS times_opened
FROM (
SELECT id,
DECODE(state, 'Open', state_in, NULL) AS open_in,
DECODE(state, 'Not Open', state_in, NULL)
AS open_out
FROM (
SELECT id,
CASE WHEN state = 'Open'
THEN 'Open'
ELSE 'Not Open'
END AS state,
TRUNC(au_time) AS state_in
FROM ...
)
)
)
)
WHERE open_in IS NOT NULL
Update: looks like this doesn't completely work. It works fine with the example in my question, but when there are multiple unique id's, the LAG stuff gets shifted and dates don't always align. :(
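For reference, here is a minimal sketch of another way to pair each Open row with the earliest later non-Open time, using MIN over a window frame. It is written against the mytable layout from Rob's example above (not the original poster's solution), and because it partitions by id it should not suffer from the shifting issue mentioned in the update:
select id, open_in, open_out
from (
  select id,
         state,
         state_time as open_in,
         -- earliest non-Open state_time at or after this row
         min(case when state <> 'Open' then state_time end)
           over (partition by id
                 order by state_time
                 rows between current row and unbounded following) as open_out
  from mytable
)
where state = 'Open'
order by id, open_in;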
