I am joining two tables and getting bad performance.
Table1:

Period      Zone  Country  company    product
01/01/2020  EMEA  DE       WKDM2      Product1
01/02/2020  EMEA  DE       PRL56      Product1
01/03/2020  EMEA  UK       ORD56      Product2
01/04/2020  EMEA  DE       GFDS       Product3
01/05/2020  EMEA  FR       24GFDSGF2  Product1
01/06/2020  EMEA  DE       2GFSDG37   Product3
01/07/2020  EMEA  IT       2GFDSG35   Product1
01/08/2020  EMEA  DE       23GSFDG6   Product4
01/09/2020  EMEA  DE       23GSFDG5   Product6
01/10/2020  EMEA  IT       24GSFD1    Product1
01/11/2020  EMEA  DE       23GSDF6    Product3
01/12/2020  EMEA  FI       24GFSDG1   Product8
Table2:

Period      Zone  Country  Quarter     Year        company    product
01/01/2020  EMEA  DE       01/01/2020  01/01/2020  WKDM2      Product1
01/02/2020  EMEA  DE       01/01/2020  01/01/2020  PRL56      Product1
01/03/2020  EMEA  UK       01/01/2020  01/01/2020  ORD56      Product2
01/04/2020  EMEA  DE       01/04/2020  01/01/2020  GFDS       Product3
01/05/2020  EMEA  FR       01/04/2020  01/01/2020  24GFDSGF2  Product1
01/06/2020  EMEA  DE       01/04/2020  01/01/2020  2GFSDG37   Product3
01/07/2020  EMEA  IT       01/07/2020  01/01/2020  2GFDSG35   Product1
01/08/2020  EMEA  DE       01/07/2020  01/01/2020  23GSFDG6   Product4
01/09/2020  EMEA  DE       01/07/2020  01/01/2020  23GSFDG5   Product6
01/10/2020  EMEA  IT       01/10/2020  01/01/2020  24GSFD1    Product1
01/11/2020  EMEA  DE       01/10/2020  01/01/2020  23GSDF6    Product3
01/12/2020  EMEA  FI       01/10/2020  01/01/2020  24GFSDG1   Product8
In my example the data is the same, but in the production environment Company in Table1 is the source of truth and Product in Table2 is the source of truth.
Table1 has 6M rows and Table2 has 600K rows.
When I join them like this the performance is poor. How can I improve it?
SELECT R."Period",R."Zone",R."Country",S."Quarter",S."Year",R."Company",S."Product"
FROM table1 AS R,
table2 AS S
WHERE R."Period" = S."Period"
AND R."Zone" = S."Zone"
AND R."Country"=S."Country"
GROUP BY 1,2,3,4,5,6,7
UPDATE:
TEST DATASET:
CREATE OR REPLACE TEMPORARY TABLE "TMP_TEST1" (
"Period" TIMESTAMP,
"Country" VARCHAR,
"Quarter" TIMESTAMP,
"Year" TIMESTAMP,
"Company" VARCHAR,
"Product" VARCHAR
);
INSERT INTO "TMP_TEST1"
VALUES
('01/01/2020','DE','01/01/2020 ','01/01/2020 ','WKDM2 ','Product1'),
('01/01/2020','DE','01/01/2020 ','01/01/2020 ','2GFSDG37 ','Product1'),
('01/02/2020','DE','01/01/2020 ','01/01/2020 ','ORD56 ','Product2'),
('01/03/2020','DE','01/01/2020 ','01/01/2020 ','GFDS ','Product3'),
('01/03/2020','DE','01/01/2020 ','01/01/2020 ','24GFDSGF2 ','Product1'),
('01/03/2020','DE','01/01/2020 ','01/01/2020 ','24GSFD1 ','Product1'),
('01/04/2020','DE','01/04/2020 ','01/01/2020 ','2GFSDG37 ','Product4'),
('01/04/2020','DE','01/04/2020 ','01/01/2020 ','23GSFDG5 ','Product6'),
('01/05/2020','DE','01/04/2020 ','01/01/2020 ','23GSDF6 ','Product3'),
('01/06/2020','DE','01/04/2020 ','01/01/2020 ','24GSFD1 ','Product8');
CREATE OR REPLACE TEMPORARY TABLE "TMP_TEST2" (
"Period" TIMESTAMP,
"Country" VARCHAR,
"Quarter" TIMESTAMP,
"Year" TIMESTAMP,
"Company" VARCHAR,
"Product" VARCHAR
);
INSERT INTO "TMP_TEST2"
VALUES
('01/01/2020','DE','01/01/2020 ','01/01/2020 ','WKDM2 ','Product1'),
('01/01/2020','DE','01/01/2020 ','01/01/2020 ','2GFSDG37 ','Product1'),
('01/02/2020','DE','01/01/2020 ','01/01/2020 ','ORD56 ','Product2'),
('01/03/2020','DE','01/01/2020 ','01/01/2020 ','GFDS ','Product3'),
('01/03/2020','DE','01/01/2020 ','01/01/2020 ','24GFDSGF2 ','Product1'),
('01/03/2020','DE','01/01/2020 ','01/01/2020 ','2GFSDG37 ','Product3'),
('01/03/2020','DE','01/01/2020 ','01/01/2020 ','24GSFD1 ','Product1'),
('01/04/2020','DE','01/04/2020 ','01/01/2020 ','2GFSDG37 ','Product4'),
('01/04/2020','DE','01/04/2020 ','01/01/2020 ','23GSFDG5 ','Product6'),
('01/04/2020','DE','01/04/2020 ','01/01/2020 ','24GSFD1 ','Product1'),
('01/05/2020','DE','01/04/2020 ','01/01/2020 ','23GSDF6 ','Product3'),
('01/05/2020','DE','01/04/2020 ','01/01/2020 ','23GSDF6 ','Product9'),
('01/06/2020','DE','01/04/2020 ','01/01/2020 ','24GSFD1 ','Product8');
QUERY:
SELECT DISTINCT T1."Period",
T1."Country",
T1."Quarter",
T1."Year",
T1."Company",
T2."Product"
FROM TMP_TEST1 AS T1 INNER JOIN TMP_TEST2 AS T2
ON T1."Period" = T2."Period"
AND T1."Country"=T2."Country"
GROUP BY 1,2,3,4,5,6
With this test dataset you will see that I lose Products when there is no matching Company. I don't know how to break this relation. I hope I am clear enough.
Firstly, given the data in the 2 source tables you have shown, please also provide the result you want to see.
For the query you have given, the correct way to write it, using ANSI SQL, is as follows:
SELECT R."Period",R."Zone",R."Country",S."Quarter",S."Year",R."Company",S."Product"
FROM table1 AS R
INNER JOIN table2 AS S ON
R."Period" = S."Period"
AND R."Zone" = S."Zone"
AND R."Country"=S."Country"
GROUP BY 1,2,3,4,5,6,7
To repeat my questions (and add some more):
How long is the query currently taking?
How long, roughly, should it take for you to consider the performance to be acceptable?
Why are you using GROUP BY when you have no aggregate functions in your query? If you want a distinct list (and your query is definitely producing duplicates) then use SELECT DISTINCT...
What do you mean by "cartesian calculations"? Do you mean cartesian joins and, if you do, why have you mentioned them as you don't have a cartesian join?
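To illustrate the GROUP BY vs SELECT DISTINCT point: grouping by every selected column with no aggregates only deduplicates, which is exactly what DISTINCT expresses directly. A minimal sketch against an in-memory SQLite table (table and data invented for the demo):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (period TEXT, country TEXT)")
con.executemany("INSERT INTO t VALUES (?, ?)",
                [("01/01/2020", "DE"), ("01/01/2020", "DE"), ("01/02/2020", "FR")])

# GROUP BY with no aggregate functions only deduplicates...
grouped = con.execute(
    "SELECT period, country FROM t GROUP BY period, country ORDER BY period").fetchall()
# ...which is exactly what SELECT DISTINCT states directly.
distinct = con.execute(
    "SELECT DISTINCT period, country FROM t ORDER BY period").fetchall()

print(grouped == distinct)  # True
print(grouped)              # [('01/01/2020', 'DE'), ('01/02/2020', 'FR')]
```

DISTINCT makes the intent obvious to the reader; most engines plan both forms the same way, so this is about readability rather than speed.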
Response to Comments
ANSI SQL joins are much easier to read (and debug) as all the join information is in the JOIN statements and all the filtering conditions are in the WHERE statements - plus it is the industry standard so it would be a good idea for you, as a beginner, to get used to using it now rather than learning bad practices. Imagine if you were joining 20 tables with a mixture of inner/outer/left/right joins - the syntax you are using would be pretty incomprehensible whereas ANSI SQL join syntax would be simple to understand.
You still haven't provided the output that you are expecting to see, based on your source tables - so anyone trying to help you is left guessing what it is you are trying to achieve.
You also haven't provided the Explain Plan so no-one can see how your query is executing and therefore what the problem might be. In Snowflake, go to History, click on the relevant Query ID, click on Profile and then attach a screenshot showing all the steps being run, the execution time, the statistics, etc.
So I built some tables with 6M rows of data and 600K rows to show how joining via text is bad compared with using IDs:
CREATE TABLE table1 AS
with periods AS (
SELECT dateadd('day', SEQ8(), '1999-01-01'::date) as period
FROM TABLE(GENERATOR(rowcount=>1000))
), zones AS (
SELECT column1 as zone
,seq8() as zone_id
FROM VALUES ('EMEA')
), countries AS (
SELECT column1 as country
,seq8() as country_id
FROM VALUES ('DE'),('UK'),('FR'),('IT'),('NZ'),('AU'),('US'),('CA'),('xx'),('yy')
), company AS (
SELECT seq8() as comp_id
,hash(comp_id) as h_comp_id
,h_comp_id::text as t_h_comp_id
FROM TABLE(GENERATOR(rowcount=>600))
)
SELECT p.*, z.*, c.*, co.*
FROM periods p
JOIN zones z ON true
JOIN countries c ON true
JOIN company co ON true
;
CREATE TABLE table2 AS
SELECT *, YEAR(period) as year, QUARTER(period) as quarter
FROM table1
LIMIT 600000;
and then I ran the SQL:
SELECT R.Period,R.Zone,R.Country,S.Quarter,S.Year,R.comp_id
FROM table1 AS R
JOIN table2 AS S
ON R.Period = S.Period
AND R.Zone = S.Zone
AND R.Country=S.Country;
-- 1m27s
And it took ages, so I ran my "fast" SQL:
SELECT R.Period,R.Zone,R.Country,S.Quarter,S.Year,R.comp_id
FROM table1 AS R
JOIN table2 AS S
ON R.Period = S.Period
AND R.Zone_id = S.Zone_id
AND R.Country_id=S.Country_id;
-- 1m18s
and it took ages also.
Looking at the profile, 90% of the time was spent returning the results.
SELECT S.Year, count(*) as c
FROM table1 AS R
JOIN table2 AS S
ON R.Period = S.Period
AND R.Zone = S.Zone
AND R.Country=S.Country
GROUP BY 1;
-- 9s
Swapping to an aggregate to avoid the fetch, the "bad" text joins take 9 seconds,
SELECT S.Year, count(*) as c
FROM table1 AS R
JOIN table2 AS S
ON R.Period = S.Period
AND R.Zone_id = S.Zone_id
AND R.Country_id=S.Country_id
GROUP BY 1;
-- 6s
and the "good" ID joins take 6 seconds. So text is "still bad", but the real cost is that fetching 3.6 million rows of text is really slow.
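The fetch-versus-compute difference can be sketched locally as well; this in-memory SQLite example (tables and data invented here, not the Snowflake setup above) shows that an aggregate sends one row back to the client while the detail join materialises every row:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (period INTEGER, country TEXT)")
con.executemany("INSERT INTO t1 VALUES (?, ?)",
                [(i % 365, "DE") for i in range(100_000)])
con.execute("CREATE TABLE t2 (period INTEGER, year INTEGER)")
con.executemany("INSERT INTO t2 VALUES (?, ?)", [(i, 2020) for i in range(365)])

# Fetching the full join materialises every matching row on the client...
detail = con.execute(
    "SELECT t1.period, t1.country, t2.year "
    "FROM t1 JOIN t2 ON t1.period = t2.period").fetchall()

# ...while an aggregate sends a single row back, however big the join is.
total = con.execute(
    "SELECT count(*) FROM t1 JOIN t2 ON t1.period = t2.period").fetchone()[0]

print(len(detail), total)  # 100000 100000
```

Same join, same row count; only the amount of data crossing back to the client differs, which is where the 90% "getting the results" time in the profile went.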
I want to distribute the values of my summed quantities over multiple date ranges which overlap.
The end result of this query is simply 2 records with overlapping date ranges, but the current query just duplicates the values (quantities) in each. I want to place the data in the first record only, as it is the first record within the same date range as the 2nd record, so the second record's quantity (AllocQty) should be 0. Right now it comes up with equal quantities for both records, and they are in the same date range. I am modifying existing code, so I'm wondering if there is a way to just modify the code to somehow subtract the sum from the first group so it won't be placed in the 2nd group. Is there a way to do that without rewriting this substantially?
from c in com
join b in all on c.SKU equals b.SKU into ps
from b in ps.DefaultIfEmpty()
.Where(x => x == null || (c.StartDate <= x.FinishDate && c.EndDate >= x.FinishDate)).DefaultIfEmpty()
select new
{
ComId = c.ComId,
metadatafromtablec,
etc...
...
AllocQty = b != null ? b.VDC_Alloc : 0, //Here I want this value to be zero in the 2nd overlapping group (startdate)
} into x
// It groups by every field except the calculated sums of the VAlloc and KAlloc quantities
group x by new { x.ComId, x.ComType, x.CustType, x.CustrId, x.SKU, x.StartDate, x.EndDate} into grp
select new CommitmentView
{
CommId = grp.Key.ComId,
...
...
AllocQty = grp.Sum(r => r.AllocQty),
TotalAllocQty = grp.Sum(r => r.AllocQty) + grp.Sum(r => r.KAllocQty),
} into cv
orderby cv.SKU ascending
select cv
table c (com):

COMID  COM_TYPE  CUSTTYPE  CUSTRID  SKU     START_DATE  END_DATE   VDC_CQ  KDC_CQ  COMSTATUS  CREATED_BY  CREATED_DATE         UPDATED_BY  UPDATED_DATE
108    RETAIL    BCL       0        111872  2/1/2021    4/12/2021  2400    1560    APPROVED   humak       2/9/2021 3:26:18 PM  chrj        2/23/2021 11:43:41 AM
107    RETAIL    BCL       0        111872  2/7/2021    4/13/2021  288     84      DRAFT      chrj        2/8/2021 3:28:24 PM  chrj        2/24/2021 6:27:51 PM
table b (all); the ERROR_CODE, ERROR_DESCRIPTION and UPDATED_BY columns are empty in every row shown:

EVENT_ID  LINE_ID  SKU     CUSTRID  ALLOCQTY  SUBMISSION_STATUS  CREATED_BY  CREATED_DATE          UPDATED_DATE          FinishDate  UOM  CUSTOMER_TYPE
100150    5344     111872  3        12        SCHEDULED          chrj        2/23/2021 6:11:04 PM  2/23/2021 6:11:04 PM  4/13/2021   C    BCL
100148    5342     111872  3        12        SCHEDULED          chrj        2/8/2021 3:23:27 PM   2/8/2021 3:23:27 PM   2/9/2021    C    BCL
100149    5343     111872  3        12        SCHEDULED          chrj        2/9/2021 1:58:30 PM   2/9/2021 1:58:30 PM   2/9/2021    C    BCL
100139    4952     111872  160      12        SCHEDULED          chrj        2/5/2021 4:38:46 PM   2/5/2021 4:38:46 PM   2/11/2021   C    BCL
100139    4954     111872  129      24        SCHEDULED          chrj        2/5/2021 4:38:46 PM   2/5/2021 4:38:46 PM   2/11/2021   C    BCL
100139    4956     111872  228      60        SCHEDULED          chrj        2/5/2021 4:38:46 PM   2/5/2021 4:38:46 PM   2/11/2021   C    BCL
100139    4958     111872  218      12        SCHEDULED          chrj        2/5/2021 4:38:46 PM   2/5/2021 4:38:46 PM   2/11/2021   C    BCL
100139    4960     111872  167      36        SCHEDULED          chrj        2/5/2021 4:38:46 PM   2/5/2021 4:38:46 PM   2/11/2021   C    BCL
100139    4961     111872  158      120       SCHEDULED          chrj        2/5/2021 4:38:46 PM   2/5/2021 4:38:46 PM   2/11/2021   C    BCL
100139    4964     111872  76       36        SCHEDULED          chrj        2/5/2021 4:38:46 PM   2/5/2021 4:38:46 PM   2/11/2021   C    BCL
100139    4966     111872  163      24        SCHEDULED          chrj        2/5/2021 4:38:46 PM   2/5/2021 4:38:46 PM   2/11/2021   C    BCL
100139    4969     111872  174      12        SCHEDULED          chrj        2/5/2021 4:38:46 PM   2/5/2021 4:38:46 PM   2/11/2021   C    BCL
Current Output:

COMID  COM_TYPE  CUSTTYPE  CUSTRID  SKU     START_DATE  END_DATE   VDC_CQ  KDC_CQ  COMSTATUS  TOTALALLOCQTY  CREATED_BY  CREATED_DATE         UPDATED_BY  UPDATED_DATE
107    RETAIL    BCL       0        111872  2/7/2021    4/13/2021  288     84      DRAFT      372            chrj        2/8/2021 3:28:24 PM  chrj        2/24/2021 6:27:51 PM
108    RETAIL    BCL       0        111872  2/1/2021    4/12/2021  2400    1560    APPROVED   360            humak       2/9/2021 3:26:18 PM  chrj        2/23/2021 11:43:41 AM
You can see the quantity of 360 is 'duplicated' in both records. There was a change desired by management to allow these records to overlap by date (you can see the start date to end date ranges overlap from 2/7/2021 to 4/12/2021). So now the 360 should be in the record with comid 108, and the remaining quantity of 12 should be in the record with comid 107, like this:
Desired Output:

COMID  COM_TYPE  CUSTTYPE  CUSTRID  SKU     START_DATE  END_DATE   VDC_CQ  KDC_CQ  COMSTATUS  TOTALALLOCQTY  CREATED_BY  CREATED_DATE         UPDATED_BY  UPDATED_DATE
107    RETAIL    BCL       0        111872  2/7/2021    4/13/2021  288     84      DRAFT      12             chrj        2/8/2021 3:28:24 PM  chrj        2/24/2021 6:27:51 PM
108    RETAIL    BCL       0        111872  2/1/2021    4/12/2021  2400    1560    APPROVED   360            humak       2/9/2021 3:26:18 PM  chrj        2/23/2021 11:43:41 AM
It should distribute the quantity between the overlapping records based on the FinishDates that fall within each record's start and end date. You can see there is one record from table b dated 4/13/2021, so that's the only record that should count towards the output for comid == 107.
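As a sketch of the allocation rule being asked for (not the existing LINQ code), and assuming "first record" means the overlapping commitment with the earliest start date, each table-b quantity goes only to that commitment. Field names here are simplified stand-ins for the real columns:

```python
from datetime import date

# Simplified commitments from table c: (comid, start_date, end_date).
commitments = [
    (108, date(2021, 2, 1), date(2021, 4, 12)),
    (107, date(2021, 2, 7), date(2021, 4, 13)),
]

# Simplified allocations from table b: (finish_date, alloc_qty).
allocations = [
    (date(2021, 4, 13), 12),
    (date(2021, 2, 9), 12), (date(2021, 2, 9), 12),
    (date(2021, 2, 11), 12), (date(2021, 2, 11), 24), (date(2021, 2, 11), 60),
    (date(2021, 2, 11), 12), (date(2021, 2, 11), 36), (date(2021, 2, 11), 120),
    (date(2021, 2, 11), 36), (date(2021, 2, 11), 24), (date(2021, 2, 11), 12),
]

totals = {comid: 0 for comid, _, _ in commitments}
for finish, qty in allocations:
    # Commitments whose date range contains this FinishDate...
    matches = [c for c in commitments if c[1] <= finish <= c[2]]
    if matches:
        # ...and of those, only the earliest-starting one receives the quantity.
        first = min(matches, key=lambda c: c[1])
        totals[first[0]] += qty

print(totals)  # {108: 360, 107: 12}
```

This reproduces the desired output above (360 on comid 108, the remaining 12 on comid 107); translating it back to the LINQ means deciding the winning commitment per table-b row before the group-by, rather than summing per overlapping pair.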
I need to rank a table with two columns transID & travel_date
Here is my data:
transID travel_date
2341 2018-04-04 10:00:00
2341 2018-04-04 11:30:00
2891 2018-04-04 12:30:00
2891 2018-04-04 18:30:00
2341 2018-04-05 11:30:00
2891 2018-04-05 22:30:00
This is the query I have tried:
select transID,travel_date,rn,
dense_rank () over (partition by transID order by EarliestDate,transID) as rn2
from
(SELECT transID,travel_date,
ROW_NUMBER() OVER (PARTITION BY transID ORDER BY travel_date) AS rn,
max(travel_date) OVER (partition by travel_date) as EarliestDate
FROM travel_log_info
) t
order by transID;
Current Output from the above query
transID travel_date rn2
2341 2018-04-04 10:00:00 1
2341 2018-04-04 11:30:00 2
2341 2018-04-05 11:30:00 3
2891 2018-04-04 12:30:00 1
2891 2018-04-04 18:30:00 2
2891 2018-04-05 22:30:00 3
Expected Output
transID travel_date rn2
2341 2018-04-04 10:00:00 1
2341 2018-04-04 11:30:00 2
2341 2018-04-05 11:30:00 1
2891 2018-04-04 12:30:00 1
2891 2018-04-04 18:30:00 2
2891 2018-04-05 22:30:00 1
With this output I can then filter with the condition rn2 = 1 to get the desired rows based on travel date and transID.
I am not getting the desired output as shown above. Kindly provide suggestions to achieve the correct output.
Thanks for your time.
The main problem with what you have now is:
max(travel_date) OVER (partition by travel_date)
which includes the time part of each date in the partition - so you're really getting the max of every individual date/time, which is that date/time. You seem to want maximum date/time within each day, so you could partition by each day by using trunc() in the partition-by clause:
max(travel_date) OVER (partition by trunc(travel_date))
Just that change gives you:
TRANSID TRAVEL_DATE RN RN2
---------- ------------------- ---------- ----------
2341 2018-04-04 10:00:00 1 1
2341 2018-04-04 11:30:00 2 1
2341 2018-04-05 11:30:00 3 2
2891 2018-04-04 12:30:00 1 1
2891 2018-04-04 18:30:00 2 1
2891 2018-04-05 22:30:00 3 2
The partitioning in the outer query is also wrong, though: you need to partition by that 'earliest' date (actually the latest, but that doesn't matter here):
select transID,travel_date,rn,
dense_rank () over (partition by transID,EarliestDate order by travel_date) as rn2
from
(SELECT transID,travel_date,
ROW_NUMBER() OVER (PARTITION BY transID ORDER BY travel_date) AS rn,
max(travel_date) OVER (partition by trunc(travel_date)) as EarliestDate
FROM travel_log_info
) t
order by transID;
TRANSID TRAVEL_DATE RN RN2
---------- ------------------- ---------- ----------
2341 2018-04-04 10:00:00 1 1
2341 2018-04-04 11:30:00 2 2
2341 2018-04-05 11:30:00 3 1
2891 2018-04-04 12:30:00 1 1
2891 2018-04-04 18:30:00 2 2
2891 2018-04-05 22:30:00 3 1
But you don't really need that max, or the outer query you currently have; if you include that truncated day in the row_number() partition (which you currently aren't really using) you get:
SELECT transID,travel_date,
ROW_NUMBER() OVER (PARTITION BY transID, trunc(travel_date) ORDER BY travel_date) AS rn
FROM travel_log_info;
TRANSID TRAVEL_DATE RN
---------- ------------------- ----------
2341 2018-04-04 10:00:00 1
2341 2018-04-04 11:30:00 2
2341 2018-04-05 11:30:00 1
2891 2018-04-04 12:30:00 1
2891 2018-04-04 18:30:00 2
2891 2018-04-05 22:30:00 1
and you can then wrap that in an outer query to filter on rn:
SELECT transID,travel_date
FROM (
SELECT transID,travel_date,
ROW_NUMBER() OVER (PARTITION BY transID, trunc(travel_date) ORDER BY travel_date) AS rn
FROM travel_log_info
)
WHERE rn = 1
ORDER BY transID,travel_date;
TRANSID TRAVEL_DATE
---------- -------------------
2341 2018-04-04 10:00:00
2341 2018-04-05 11:30:00
2891 2018-04-04 12:30:00
2891 2018-04-05 22:30:00
You could also do this without a subquery; this gets the same result using first:
SELECT transID,
min(travel_date) keep (dense_rank first order by travel_date) as travel_date
FROM travel_log_info
GROUP BY transID, trunc(travel_date)
ORDER BY transID, travel_date;
This is the presentation table:
ID PRESENTATIONDAY PRESENTATIONSTART PRESENTATIONEND PRESENTATIONSTARTDATE PRESENTATIONENDDATE
622 Monday 12:00:00 02:00:00 01-05-2016 04-06-2016
623 Tuesday 12:00:00 02:00:00 01-05-2016 04-06-2016
624 Wednesday 08:00:00 10:00:00 01-05-2016 04-06-2016
625 Thursday 10:00:00 12:00:00 01-05-2016 04-06-2016
I would like to insert availabledate into the schedule table. This is my current query:
insert into SCHEDULE (studentID,studentName,projectTitle,supervisorID,
supervisorName,examinerID,examinerName,exavailableID,
availableday,availablestart,availableend,
availabledate) -- PROBLEM STARTS HERE
values (?,?,?,?,?,?,?,?,?,?,?,?);
The value of availabledate is retrieved based on the exavailableID. For example, if exavailableID = 2, then availableday = Monday, availablestart = 12pm and availableend = 2pm.
The dates may only be chosen between PRESENTATIONSTARTDATE and PRESENTATIONENDDATE from the presentation table.
The presentation table matches PRESENTATIONDAY, PRESENTATIONDATESTART and PRESENTATIONDATEEND with availableday, availablestart and availableend to get a list of all possible dates.
This is the query to get the list of all possible dates for particular days:
select
A.PRESENTATIONID,
A.PRESENTATIONDAY,
A.PRESENTATIONDATESTART+delta LIST_DATE
from
PRESENTATION A,
(
select level-1 as delta
from dual
connect by level-1 <= (
select max(PRESENTATIONDATEEND- PRESENTATIONDATESTART) from PRESENTATION
)
)
where A.PRESENTATIONDATESTART+delta <= A.PRESENTATIONDATEEND
and
a.presentationday = trim(to_char(A.PRESENTATIONDATESTART+delta, 'Day'))
order by 1,2,3;
This query result is:
622 Monday 02-05-2016 12:00:00
...
622 Monday 30-05-2016 12:00:00
623 Tuesday 03-05-2016 12:00:00
...
623 Tuesday 31-05-2016 12:00:00
624 Wednesday 04-05-2016 12:00:00
...
624 Wednesday 01-06-2016 12:00:00
625 Thursday 05-05-2016 12:00:00
...
625 Thursday 02-06-2016 12:00:00
It will automatically assign dates from the SELECT query to be inserted into the schedule table. However, each date can only be used 4 times; once a date has been used 4 times it proceeds to the next date (for example, for Mondays, from '02-05-2016' on to '09-05-2016').
How can I combine these two queries (INSERT and SELECT) to get a result like this:
StudentName projectTitle SupervisorID ExaminerID availableday availablestart availableend availabledate
abc Hello 1024 1001 MONDAY 12.00pm 2.00pm 02-05-2016
def Hi 1024 1001 MONDAY 12.00pm 2.00pm 02-05-2016
ghi Hey 1002 1004 MONDAY 12.00pm 2.00pm 02-05-2016
xxx hhh 1020 1011 MONDAY 12.00pm 2.00pm 02-05-2016
jkl hhh 1027 1010 MONDAY 12.00pm 2.00pm 09-05-2016
try ttt 1001 1011 MONDAY 12.00pm 2.00pm 09-05-2016
654 bbb 1007 1012 MONDAY 12.00pm 2.00pm 09-05-2016
gyg 888 1027 1051 MONDAY 12.00pm 2.00pm 09-05-2016
yyi 333 1004 1022 TUESDAY 12.00pm 2.00pm 03-05-2016
fff 111 1027 1041 TUESDAY ..
ggg 222 1032 1007 TUESDAY .. .. .. ..
hhh 444 1007 1001 TUESDAY 12.00pm 2.00pm 03-05-2016
and so on :)
In short, I would like to use the list of dates from the presentation table, based on the day, start time and end time, in the insertion query, where each date is used only 4 times. Thank you!
I am not sure this kind of syntax works with Oracle (and I have no good way to check), but changing the SELECT part of the INSERT like this may or may not work:
select
A.PRESENTATIONID,
A.PRESENTATIONDAY,
A.PRESENTATIONDATESTART+delta LIST_DATE
from
PRESENTATION A,
(
select level-1 as delta
from dual
connect by level-1 <= (
select max(PRESENTATIONDATEEND - PRESENTATIONDATESTART) from PRESENTATION
)
),
--MIGHT NEED ADDITIONAL LOGIC FOR THE EXAVAILABLEID COMPARISON
(SELECT count(*) as counter FROM SCHEDULE S WHERE S.EXAVAILABLEID=A.ID) C
where A.PRESENTATIONDATESTART+delta <= A.PRESENTATIONDATEEND
and
a.presentationday = trim(to_char(A.PRESENTATIONDATESTART+delta, 'Day'))
and
C.counter<4
order by 1,2,3;
EDIT: Changed the operator (it was >= before), placed the WHERE check in the right place and deleted the aliases.
EDIT2: Changed the syntax so that the counter SELECT statement is part of the FROM clause.
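Separately from the SQL, the "each date used at most 4 times" rule can be sketched procedurally in Python (names here are illustrative, not the real SCHEDULE columns): walk the candidate dates and hand each one to at most 4 students before moving on.

```python
CAPACITY = 4  # each date may be used at most 4 times

def assign_dates(students, candidate_dates, capacity=CAPACITY):
    """Pair each student with a date, using each date at most `capacity` times."""
    assignments = []
    for i, student in enumerate(students):
        slot = i // capacity  # which candidate date this student falls into
        if slot >= len(candidate_dates):
            raise ValueError("not enough dates for all students")
        assignments.append((student, candidate_dates[slot]))
    return assignments

# Illustrative data modelled on the example output above.
students = ["abc", "def", "ghi", "xxx", "jkl", "try", "654", "gyg"]
mondays = ["02-05-2016", "09-05-2016", "16-05-2016"]

for student, day in assign_dates(students, mondays):
    print(student, day)
# abc..xxx get 02-05-2016, jkl..gyg get 09-05-2016
```

In SQL terms this is the same idea as the counter < 4 check in the answer: the first 4 rows consume a date, and the 5th row rolls over to the next one.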