Select where number range does not overlap - Oracle
I have two tables that contain records about road construction activities:
table_a is the master list.
table_b is a legacy list.
For each road, in each year, I want to select the records from table_b that do not already exist in table_a.
Also, the records should not overlap spatially along the road. More specifically, the from_m and to_m of the records in table_b should not overlap the from_m and to_m in table_a.
How can I do this? I do not have Oracle Spatial.
Here is what the data looks like in Excel (screenshot not reproduced here). The records highlighted in green should be selected by the query; the records highlighted in red should not.
The DDL:
Table A:
create table table_a
(
id number(4,0),
road_id number(4,0),
year number(4,0),
from_m number(4,0),
to_m number(4,0)
);
insert into table_a (id,road_id,year,from_m,to_m) values (1,1,2000,0,100);
insert into table_a (id,road_id,year,from_m,to_m) values (2,1,2005,0,25);
insert into table_a (id,road_id,year,from_m,to_m) values (3,1,2005,50,75);
insert into table_a (id,road_id,year,from_m,to_m) values (4,1,2005,75,100);
insert into table_a (id,road_id,year,from_m,to_m) values (5,1,2010,10,50);
insert into table_a (id,road_id,year,from_m,to_m) values (6,1,2010,50,90);
insert into table_a (id,road_id,year,from_m,to_m) values (7,1,2015,40,100);
insert into table_a (id,road_id,year,from_m,to_m) values (8,2,2020,0,40);
insert into table_a (id,road_id,year,from_m,to_m) values (9,2,2020,0,40);
insert into table_a (id,road_id,year,from_m,to_m) values (10,3,2025,90,150);
commit;
select * from table_a;
        ID    ROAD_ID       YEAR     FROM_M       TO_M
---------- ---------- ---------- ---------- ----------
         1          1       2000          0        100
         2          1       2005          0         25
         3          1       2005         50         75
         4          1       2005         75        100
         5          1       2010         10         50
         6          1       2010         50         90
         7          1       2015         40        100
         8          2       2020          0         40
         9          2       2020          0         40
        10          3       2025         90        150
Table B:
create table table_b
(
id number(4,0),
road_id number(4,0),
year number(4,0),
from_m number(4,0),
to_m number(4,0)
);
insert into table_b (id,road_id,year,from_m,to_m) values (1,1,1995,0,100);
insert into table_b (id,road_id,year,from_m,to_m) values (2,1,2001,0,50);
insert into table_b (id,road_id,year,from_m,to_m) values (3,1,2005,20,80);
insert into table_b (id,road_id,year,from_m,to_m) values (4,1,2005,0,100);
insert into table_b (id,road_id,year,from_m,to_m) values (5,1,2010,0,10);
insert into table_b (id,road_id,year,from_m,to_m) values (6,1,2010,90,100);
insert into table_b (id,road_id,year,from_m,to_m) values (7,1,2010,5,85);
insert into table_b (id,road_id,year,from_m,to_m) values (8,1,2015,40,100);
insert into table_b (id,road_id,year,from_m,to_m) values (9,1,2015,0,40);
insert into table_b (id,road_id,year,from_m,to_m) values (10,2,2020,0,41);
insert into table_b (id,road_id,year,from_m,to_m) values (11,3,2025,155,200);
insert into table_b (id,road_id,year,from_m,to_m) values (12,3,2025,199,300);
insert into table_b (id,road_id,year,from_m,to_m) values (13,4,2024,5,355);
commit;
select * from table_b;
        ID    ROAD_ID       YEAR     FROM_M       TO_M
---------- ---------- ---------- ---------- ----------
         1          1       1995          0        100
         2          1       2001          0         50
         3          1       2005         20         80
         4          1       2005          0        100
         5          1       2010          0         10
         6          1       2010         90        100
         7          1       2010          5         85
         8          1       2015         40        100
         9          1       2015          0         40
        10          2       2020          0         41
        11          3       2025        155        200
        12          3       2025        199        300
        13          4       2024          5        355
A NOT EXISTS sub-select can help here:
SELECT *
FROM   table_b b
WHERE  NOT EXISTS (SELECT *
                   FROM   table_a a
                   WHERE  a.road_id = b.road_id
                   AND    a.year = b.year
                   AND    a.to_m > b.from_m
                   AND    a.from_m < b.to_m)
Let's look at overlapping ranges (f = from, t = to):
a -------------------f=======================t-----------------
b1a -----f=============t-----------------------------------------
b1b --f=============t--------------------------------------------
b2a -------------------------------------------f======t----------
b2b -----------------------------------------------f======t------
b3 ---------------f=========t-----------------------------------
b4 ------------------------f===========t------------------------
b5 ---------------------------------------f===========t---------
The ranges b3, b4 and b5 overlap. For all of them the following is true:
a.to > b.from && a.from < b.to
For b1a, b1b, b2a and b2b, which don't overlap, this condition is false. For b1a, a.from == b.to; for b1b, a.from > b.to; therefore the condition a.from < b.to is false.
For b2a, a.to == b.from; for b2b, a.to < b.from; therefore the condition a.to > b.from is false.
The trick is to compare the from of one range with the to of the other one and vice-versa.
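For illustration, against the sample data above the NOT EXISTS query should return these table_b rows (a sketch of the expected result: ids 1 and 2 have no table_a record for that road and year; ids 5, 6 and 9 only touch an existing range at a single point; ids 11 and 12 lie beyond the table_a range on road 3; id 13 is on a road with no table_a records at all):

        ID    ROAD_ID       YEAR     FROM_M       TO_M
---------- ---------- ---------- ---------- ----------
         1          1       1995          0        100
         2          1       2001          0         50
         5          1       2010          0         10
         6          1       2010         90        100
         9          1       2015          0         40
        11          3       2025        155        200
        12          3       2025        199        300
        13          4       2024          5        355

Note that ranges that merely touch end to end (e.g. table_b 0-10 against table_a 10-50) are treated as non-overlapping; if touching should also count as an overlap, change the comparisons to a.to_m >= b.from_m and a.from_m <= b.to_m.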
See: http://sqlfiddle.com/#!4/85883/3/0
This will work:
SELECT *
FROM   table_b b
WHERE  EXISTS (SELECT *
               FROM   table_a a
               WHERE  a.year = b.year
               AND    a.from_m <> b.from_m
               AND    a.to_m <> b.to_m);
Related
insert records into a table - column as rows based on conditions
I have a table - Base_table:

create table base_table
(
  ID number,
  FACTOR_1 number,
  FACTOR_2 number,
  FACTOR_3 number,
  FACTOR_4 number,
  TOTAL number,
  J_CODE varchar2(10)
)

insert into base_table values (1,10,52,5,32,140,'M1');
insert into base_table values (2,null,32,24,12,311,'M2');
insert into base_table values (3,12,null,53,null,110,'M3');
insert into base_table values (4,43,45,42,3,133,'M1');
insert into base_table values (5,432,24,null,68,581,'M2');
insert into base_table values (6,null,7,98,null,196,'M1');

ID  FACTOR_1  FACTOR_2  FACTOR_3  FACTOR_4  TOTAL  J_CODE
1   10        52        5         32        140    M1
2   null      32        24        12        311    M2
3   12        null      53        null      110    M3
4   43        45        42        3         133    M1
5   432       24        null      68        581    M2
6   null      7         98        null      196    M1

I need to insert this data into another table (FCT_T) based on certain criteria. Also, I am trying to avoid the use of UNPIVOT, as there are several other columns that I need to insert and manage as part of the insert.

create table fct_t (id number, p_code varchar2(21), p_value number);

Logic to use: the values below are not part of any table, but need to be used (hard-coded) in the logic/criteria (perhaps CASE statements).

M_VAL  FACT_1_CODE  FACT_2_CODE  FACT_3_CODE  FACT_4_CODE
M1     R1           R2           R3           R4
M2     R21          R65          R6           R245
M3     R1           R01          R212         R365

What I need is something similar to this (or any better approach available):

insert into FCT_T values
select id,
       case when FACTOR_1 > 0 and J_CODE = 'M1' then 'R1' end,
       factor_1
from base_table;

So far I have not been able to figure out how to move the factor columns into rows, given that an ID can have from 1 to 4 rows depending on the criteria. Appreciate help here.

Partial final/expected output (FCT_T):

ID  P_CODE  P_VALUE
1   R1      10
1   R2      52
1   R3      5
1   R4      32
2   R65     32
2   R6      24
2   R245    12
You can join the table to your codes and then UNPIVOT to convert columns into rows:

INSERT INTO fct_t (id, p_code, p_value)
WITH codes (M_VAL, FACT_1_CODE, FACT_2_CODE, FACT_3_CODE, FACT_4_CODE) AS (
  SELECT 'M1', 'R1', 'R2', 'R3', 'R4' FROM DUAL UNION ALL
  SELECT 'M2', 'R21', 'R65', 'R6', 'R245' FROM DUAL UNION ALL
  SELECT 'M3', 'R1', 'R01', 'R212', 'R365' FROM DUAL
)
SELECT id, p_code, p_value
FROM   base_table b
       INNER JOIN codes c ON (b.j_code = c.m_val)
UNPIVOT (
  (p_code, p_value) FOR factor IN (
    (fact_1_code, factor_1) AS 1,
    (fact_2_code, factor_2) AS 2,
    (fact_3_code, factor_3) AS 3,
    (fact_4_code, factor_4) AS 4
  )
)
WHERE p_value IS NOT NULL;

db<>fiddle here
Retrieving based on specific condition
I have a little complex requirement on a couple of tables which I am finding hard to crack. There are 2 tables, TableA and TableB.

TableA has a structure like:

-------------------------------------
ID    COL1     COL2     CAT
-------------------------------------
1     RecAA    RecAB    3
2     RecBA    RecBB    3
3     RecCA    RecCB    2
4     RecDA    RecDB    2
5     RecEA    RecEB    1
-------------------------------------

TableB has a structure like:

-----------------
COL3     TYPE
-----------------
RecAA    10
RecAA    11
RecAA    12
RecAB    10
RecAB    11
RecAB    12
RecAB    13
RecAB    14
RecBA    10
RecBA    11
RecBA    14
RecBA    15
RecBB    10
-----------------

Requirements:

Records in TableA should have CAT = 3.
Either COL1 or COL2 of TableA should be available in COL3 of TableB.
COL3 should definitely have TYPE in 10,11,12 and should have only those TYPEs.

i.e. as per the above requirements:

Of the records available in TableA, the records with ID 1 and 2 have CAT = 3 in TableA.
Both records have at least one value in COL3 of TableB (the record with ID 1 in TableA has both COL1 and COL2 in TableB, and the record with ID 2 in TableA has COL1 in TableB).
The RecAA record has TYPE 10,11,12 and only 10,11,12, so it doesn't matter whether RecAB has 10,11,12 or not. But neither RecBA nor RecBB has only the 10,11,12 types.

Therefore the result should be:

-------------------------------------
ID    COL1     COL2     CAT
-------------------------------------
1     RecAA    RecAB    3
-------------------------------------

What I tried:

WITH TEMP AS
  (SELECT COL3
   FROM TableB
   GROUP BY COL3
   HAVING SUM(CASE WHEN TYPE IN ('10','11','12') THEN 1 ELSE 0 END) = 0)
SELECT S.ID, S.COL1, S.COL2, S.CAT
FROM TableA S
INNER JOIN TEMP T ON S.COL1 = T.COL3
WHERE S.CAT = 3;

Can someone please help on achieving this?
I think you're almost there; it's just your row selection in the CTE that seems problematic, and I think you need an OR:

WITH TEMP AS (
  SELECT COL3
  FROM TableB
  GROUP BY COL3
  HAVING SUM(POWER(2, TYPE - 10)) = 7
     AND COUNT(*) = 3
)
SELECT S.ID, S.COL1, S.COL2, S.CAT
FROM TableA S
INNER JOIN TEMP T ON S.COL1 = T.COL3 OR S.COL2 = T.COL3
WHERE S.CAT = 3;

I've subtracted 10 from each of your TYPEs to turn your 10,11,12 into 0,1,2 and then used POWER to turn them into 1, 2 and 4, which uniquely sum to 7 (in other words your 10,11,12 became 2^(10-10), 2^(11-10) and 2^(12-10), which are 1, 2 and 4, which must then sum to 7). I also mandate that there be a count of 3; the only way to get to 7 with three numbers that are powers of 2 is to have 1+2+4, which guarantees that 10,11,12 are present initially. If anything was missing, extra or repeated, it wouldn't be 3 numbers that sum to 7.

I think RecAB is excluded because even though it has 10,11,12 it also has 13,14, which cause it to be excluded. You also seemed to be saying that COL3 should be present in either COL1 or COL2 of TableA.
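To see what the HAVING clause is doing, here is a quick check of the per-COL3 sums and counts (a sketch; the output assumes the sample TableB data above):

SELECT COL3,
       SUM(POWER(2, TYPE - 10)) AS bit_sum,
       COUNT(*) AS cnt
FROM   TableB
GROUP  BY COL3;

COL3       BIT_SUM        CNT
------- ---------- ----------
RecAA            7          3
RecAB           31          5
RecBA           51          4
RecBB            1          1

Only RecAA has bit_sum = 7 with exactly 3 rows, so it is the only COL3 the CTE keeps.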
You can use the analytic version of LISTAGG to turn the TYPE column into a type_in_list column, like below:

With temp_TableB (COL3, type_in_list) as (
  SELECT distinct COL3,
         listagg(TYPE, ',') within group (order by TYPE) over (partition by COL3)
  FROM TableB
)
select tA.* --, tB.*
from tableA tA
INNER JOIN temp_TableB tB on (tA.COL1 = tB.COL3 or tA.COL2 = tB.COL3)
Where tA.CAT = 3
  AND tB.TYPE_IN_LIST = '10,11,12';
Oracle update from random on another table
I have some fields in table1 to update with random values from some fields in table2. I want to pick random rows from table2 and update each row of table1 with the values from those table2 rows. Here is my SQL code, but it doesn't work:

update owner.table1 t1
set (t1.adress1, t1.zip_code, t1.town) =
    (select t2.adress, t2.zip_code, t2.town
     from table1 t2
     where id = trunc(dbms_random.value(1,20000)))

Result: all rows are updated with the same values, as if no random row were being picked from the table2 rows.
How about switching to the analytic ROW_NUMBER function? It doesn't really create a random value, but might be good enough. Here's an example: first, create test tables and insert some data:

SQL> create table t1 (id number, address varchar2(20), town varchar2(10));

Table created.

SQL> create table t2 (id number, address varchar2(20), town varchar2(10));

Table created.

SQL> insert into t1
  2  select 1, 'Ilica 20', 'Zagreb' from dual union all
  3  select 2, 'Petrinjska 30', 'Sisak' from dual union all
  4  select 3, 'Stradun 12', 'Dubrovnik' from dual;

3 rows created.

SQL> insert into t2
  2  select 1, 'Pavelinska 15', 'Koprivnica' from dual union all
  3  select 2, 'Baščaršija 11', 'Sarajevo' from dual union all
  4  select 3, 'Riva 22', 'Split' from dual;

3 rows created.

SQL> select * From t1 order by id;

        ID ADDRESS              TOWN
---------- -------------------- ----------
         1 Ilica 20             Zagreb
         2 Petrinjska 30        Sisak
         3 Stradun 12           Dubrovnik

SQL> select * From t2 order by id;

        ID ADDRESS              TOWN
---------- -------------------- ----------
         1 Pavelinska 15        Koprivnica
         2 Baščaršija 11        Sarajevo
         3 Riva 22              Split

Update t1 with rows from t2:

SQL> update t1 set
  2    (t1.address, t1.town) =
  3    (select x.address, x.town
  4     from (select row_number() over (order by address) id, t2.address, t2.town
  5           from t2
  6          ) x
  7     where x.id = t1.id);

3 rows updated.

SQL> select * From t1 order by id;

        ID ADDRESS              TOWN
---------- -------------------- ----------
         1 Baščaršija 11        Sarajevo
         2 Pavelinska 15        Koprivnica
         3 Riva 22              Split

SQL>
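If you do need actual randomness, one variation (a sketch along the same lines, not tested against the original tables) is to order the ROW_NUMBER by dbms_random.value, so the t2 rows are shuffled differently on each run; it still assumes the t1.id values run from 1 upwards and that t2 has at least as many rows:

update t1 set
  (t1.address, t1.town) =
  (select x.address, x.town
   from (select row_number() over (order by dbms_random.value) id,
                t2.address, t2.town
         from t2
        ) x
   where x.id = t1.id);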
Insert or update if already exists
merge into bonuses
using( select * from bonuses)s
ON s.employee_id = '111'
WHEN MATCHED THEN update set bonus='555'
WHEN NOT MATCHED THEN insert insert into BONUSES (employee_id) values(115)

The table's insert queries are:

insert into BONUSES (employee_id) values(111)
insert into BONUSES (employee_id) values(112)
insert into BONUSES (employee_id) values(113)
insert into BONUSES (employee_id) values(114)
insert into BONUSES (employee_id) values(115)

If employee_id=111 already exists it should update, else it should insert. Kindly help if someone knows how.
Something like:

MERGE INTO bonuses dst
USING (
  SELECT '111' AS employee_id, '555' AS bonus FROM DUAL
) src
ON ( dst.employee_id = src.employee_id )
WHEN MATCHED THEN
  UPDATE SET bonus = src.bonus
WHEN NOT MATCHED THEN
  INSERT ( employee_id, bonus ) VALUES ( src.employee_id, src.bonus );
Your statement has two syntax errors:

You have repeated the insert keyword.
You have missed the brackets around the on clause conditions. These are mandatory, unlike the join conditions in a normal from clause.

So your code should look like this:

merge into bonuses b
using( select * from bonuses) s
ON (s.employee_id = 115)
WHEN MATCHED THEN update set bonus='555'
WHEN NOT MATCHED THEN insert(employee_id) values(115)
/

However, it doesn't make sense to have the target table in the using clause. It doesn't produce the results you think it's going to...

SQL> select * from bonuses;

EMPLOYEE_ID      BONUS
----------- ----------
        111
        112
        113
        114
        115

5 rows selected.

SQL> merge into bonuses b
  2  using( select * from bonuses) s
  3  ON (s.employee_id = 115)
  4  WHEN MATCHED THEN update set bonus='555'
  5  WHEN NOT MATCHED THEN insert (employee_id) values(115)
  6  /

9 rows merged.

SQL> select * from bonuses;

EMPLOYEE_ID      BONUS
----------- ----------
        111        555
        112        555
        113        555
        114        555
        115        555
        115
        115
        115
        115

9 rows selected.

SQL>

Maybe something like this would suit you?

merge into bonuses b
using( select * from employees) e
ON ( b.employee_id = e.employee_id )
WHEN MATCHED THEN update set bonus= 555
WHEN NOT MATCHED THEN insert (employee_id) values (e.id)

If you don't have a source of employee IDs distinct from the BONUSES table you can use the DUAL table to fake it:

SQL> merge into bonuses b
  2  using( select 115 as employee_id, 555 as bonus from dual union all
  3         select 116 as employee_id, 555 as bonus from dual) e
  4  ON ( b.employee_id = e.employee_id )
  5  WHEN MATCHED THEN
  6    update set bonus= e.bonus
  7  WHEN NOT MATCHED THEN
  8    insert (employee_id) values (e.employee_id)
  9  /

2 rows merged.

SQL> select * from bonuses;

EMPLOYEE_ID      BONUS
----------- ----------
        111
        112
        113
        114
        115        555
        116

6 rows selected.

SQL>
I think what you're after is something like:

merge into bonuses tgt
using (select '111' employee_id, '555' bonus from dual) src
on (tgt.employee_id = src.employee_id)
WHEN MATCHED THEN
  update set tgt.bonus = src.bonus
WHEN NOT MATCHED THEN
  insert (tgt.employee_id, tgt.bonus)
  values (src.employee_id, src.bonus);

As an aside, why are you inserting strings into columns which usually have a datatype of some form of NUMBER? Do these columns really have string datatypes (e.g. VARCHAR2, CHAR, etc.)?
What is the best way to perform a bulk insert in Oracle?
By this, I mean inserting millions of records into tables. I know how to insert data using loops, but for inserting millions of records that won't be a good approach.

I have two tables:

CREATE TABLE test1
(
  col1 NUMBER,
  valu VARCHAR2(30),
  created_Date DATE,
  CONSTRAINT pk_test1 PRIMARY KEY (col1)
)
/

CREATE TABLE test2
(
  col2 NUMBER,
  fk_col1 NUMBER,
  valu VARCHAR2(30),
  modified_Date DATE,
  CONSTRAINT pk_test2 PRIMARY KEY (col2),
  FOREIGN KEY (fk_col1) REFERENCES test1(col1)
)
/

Please suggest a way to insert some dummy records, up to 1 million, without loops.
As a fairly simplistic approach, which may be enough for you based on your comments, you can generate dummy data using a hierarchical query. Here I'm using bind variables to control how many rows are created, and to make some of the logic slightly clearer, but you could use literals instead.

First, parent rows:

var parent_rows number;
var avg_children_per_parent number;

exec :parent_rows := 5;
exec :avg_children_per_parent := 3;

-- create dummy parent rows
insert into test1 (col1, valu, created_date)
select level,
       dbms_random.string('a', dbms_random.value(1, 30)),
       trunc(sysdate) - dbms_random.value(1, 365)
from dual
connect by level <= :parent_rows;

which might generate rows like:

      COL1 VALU                           CREATED_DA
---------- ------------------------------ ----------
         1 rYzJBVI                        2016-11-14
         2 KmSWXfZJ                       2017-01-20
         3 dFSTvVsYrCqVm                  2016-07-19
         4 iaHNv                          2016-11-08
         5 AvAxDiWepPeONGNQYA             2017-01-20

Then child rows, with a random fk_col1 in the range generated for the parents:

-- create dummy child rows
insert into test2 (col2, fk_col1, valu, modified_date)
select level,
       round(dbms_random.value(1, :parent_rows)),
       dbms_random.string('a', dbms_random.value(1, 30)),
       trunc(sysdate) - dbms_random.value(1, 365)
from dual
connect by level <= :parent_rows * :avg_children_per_parent;

which might generate:

select * from test2;

      COL2    FK_COL1 VALU                           MODIFIED_D
---------- ---------- ------------------------------ ----------
         1          2 AqRUtekaopFQdCWBSA             2016-06-30
         2          4 QEczvejfTrwFw                  2016-09-23
         3          4 heWMjFshkPZNyNWVQG             2017-02-19
         4          4 EYybXtlaFHkAYeknhCBTBMusGAkx   2016-03-18
         5          4 ZNdJBQxKKARlnExluZWkHMgoKY     2016-06-21
         6          3 meASktCpcuyi                   2016-10-01
         7          4 FKgmf                          2016-09-13
         8          3 JZhk                           2016-06-01
         9          2 VCcKdlLnchrjctJrMXNb           2016-05-01
        10          5 ddL                            2016-11-27
        11          4 wbX                            2016-04-20
        12          1 bTfa                           2016-06-11
        13          4 QP                             2016-08-25
        14          3 RgmIahPL                       2016-03-04
        15          2 vhinLUmwLwZjczYdrPbQvJxU       2016-12-05

where the number of children varies for each parent:

select fk_col1, count(*) from test2 group by fk_col1 order by fk_col1;

   FK_COL1   COUNT(*)
---------- ----------
         1          1
         2          3
         3          3
         4          7
         5          1

To insert a million rows instead, just change the bind variables.

If you need more of a relationship between the children and parents, e.g. so the modified date is always after the created date, you can modify the query; for example:

insert into test2 (col2, fk_col1, valu, modified_date)
select * from (
  select level,
         round(dbms_random.value(1, :parent_rows)) as fk_col1,
         dbms_random.string('a', dbms_random.value(1, 30)),
         trunc(sysdate) - dbms_random.value(1, 365) as modified_date
  from dual
  connect by level <= :parent_rows * :avg_children_per_parent
) t2
where not exists (
  select null
  from test1 t1
  where t1.col1 = t2.fk_col1
  and t1.created_date > t2.modified_date
);

You may also want non-midnight times (I set everything to midnight via the trunc() call, based on the column name being 'date' not 'datetime'), or some column values null; so this might just be a starting point for you.
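For instance, keeping the same structure, here is a sketch of a parent insert with non-midnight times and with roughly 10% of the valu values left null (the 0.1 threshold is just an illustrative choice):

insert into test1 (col1, valu, created_date)
select level,
       -- roughly 10% of values become null
       case when dbms_random.value < 0.1 then null
            else dbms_random.string('a', dbms_random.value(1, 30))
       end,
       -- no trunc(), so the time-of-day portion is kept
       sysdate - dbms_random.value(1, 365)
from dual
connect by level <= :parent_rows;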