I am trying to insert data from another table using an INSERT/SELECT combo. I also need the inserted rows to carry an incrementing value produced by a specific calculation. However, I can't figure out why it is not working.
I have the table (temp_business_area) like this:
----------
| bname |
----------
| London |
| Sydney |
| Kiev |
----------
I would like to have this in the enum table:
-----------------------------------------------------------------
| identifier | language_id | code | data | company_limit |
-----------------------------------------------------------------
| BUSINESS_UNIT | 0 | 100 | London | 126 |
| BUSINESS_UNIT | 0 | 200 | Sydney | 126 |
| BUSINESS_UNIT | 0 | 300 | Kiev | 126 |
-----------------------------------------------------------------
But what I get is this:
-----------------------------------------------------------------
| identifier | language_id | code | data | company_limit |
-----------------------------------------------------------------
| BUSINESS_UNIT | 0 | 100 | London | 126 |
| BUSINESS_UNIT | 0 | 100 | Sydney | 126 |
| BUSINESS_UNIT | 0 | 100 | Kiev | 126 |
| BUSINESS_UNIT | 0 | 200 | London | 126 |
| BUSINESS_UNIT | 0 | 200 | Sydney | 126 |
| BUSINESS_UNIT | 0 | 200 | Kiev | 126 |
| BUSINESS_UNIT | 0 | 300 | London | 126 |
| BUSINESS_UNIT | 0 | 300 | Sydney | 126 |
| BUSINESS_UNIT | 0 | 300 | Kiev | 126 |
-----------------------------------------------------------------
And here is my loop.
BEGIN
  FOR x IN 1 .. 3 LOOP
    INSERT INTO enum (identifier, language_id, code, data, company_limit)
    SELECT 'BUSINESS_UNIT', 0, x*100, bname, 126 FROM temp_business_area;
  END LOOP;
END;
I can't figure out where I am making a mistake. Help?
Your loop runs three times, and each INSERT ... SELECT copies every row of temp_business_area, so you wind up with 3 x 3 = 9 rows.
From your description of what you want to achieve you don't need the loop at all.
Just use a single insert:
INSERT INTO enum (identifier, language_id, code, data, company_limit)
SELECT 'BUSINESS_UNIT',
       0,
       row_number() over (order by null) * 100,
       bname, 126
FROM temp_business_area;
The SELECT statement will return 3 rows, and each row will be inserted into the enum table. The row_number() function returns an incrementing value for each row (1, 2, 3), which multiplied by 100 yields the code that you want.
Edit
(after David's comments):
The use of the windowing function adds a bit of overhead to the statement. If you don't need the additional control over the numbering, using ROWNUM instead will be a bit more efficient (although it won't matter for only three rows).
INSERT INTO enum (identifier, language_id, code, data, company_limit)
SELECT 'BUSINESS_UNIT',
       0,
       rownum * 100,
       bname, 126
FROM temp_business_area;
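For completeness, here is a self-contained sketch you can run end to end; the table definitions are assumptions inferred from the question:
create table temp_business_area (bname varchar2(50));
insert into temp_business_area values ('London');
insert into temp_business_area values ('Sydney');
insert into temp_business_area values ('Kiev');

create table enum (
  identifier    varchar2(30),
  language_id   number,
  code          number,
  data          varchar2(50),
  company_limit number
);

insert into enum (identifier, language_id, code, data, company_limit)
select 'BUSINESS_UNIT', 0, rownum * 100, bname, 126
from   temp_business_area;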
You could also use either of these two other variants:
declare
  i integer := 1;
BEGIN
  FOR x IN (select distinct bname from temp_business_area) LOOP
    -- use VALUES here: selecting FROM temp_business_area would insert one row per table row on every iteration
    INSERT INTO enum (identifier, language_id, code, data, company_limit)
    VALUES ('BUSINESS_UNIT', 0, i*100, x.bname, 126);
    i := i + 1;
  END LOOP;
END;
variant 2
BEGIN
  FOR x IN 1..3 LOOP
    INSERT INTO enum (identifier, language_id, code, data, company_limit)
    -- rownum has to be aliased in an inline view; a plain "WHERE rownum = x" returns no rows for x > 1
    SELECT 'BUSINESS_UNIT', 0, x*100, bname, 126
    FROM   (SELECT bname, rownum rn FROM temp_business_area)
    WHERE  rn = x;
  END LOOP;
END;
Related
I'm facing a seemingly unsolvable performance drop when using UNION ALL with two sub-queries in one cursor (at least I think that's the problem). PL/SQL Developer just freezes when opening the cursor results in a test window.
If I comment out either one of the sub-queries, everything works fine.
If I take the whole query out of the cursor and run it in a regular SQL query window, everything is okay without removing any parts.
The procedure structure is below; I'm looking forward to any help:
procedure p_proc(p_param varchar2,
outcur out sys_refcursor) is
begin
open outcur for
select *
from (select -- visible cols
si.item_full_name
, si.final_price
, si.full_price
, si.receipt_num
, si.receipt_date
, si.vendor_code
, case when det.br_summary is null and mr.motiv_rate_value is not null then mr.motiv_rate_value
when det.br_summary is not null then det.br_summary
end personal_bonus_amount
, case when det.br_summary is null and mr.motiv_rate_value is not null then 1
when det.br_summary is not null then det.cross_sale_kt
end personal_bonus_koeff
-- service cols
, case when det.br_summary is null and mr.motiv_rate_value is not null then 'approximate'
when det.br_summary is not null then 'definite'
end personal_bonus_type
, coalesce(det.sale_stream, mr.sale_stream, 'Not defined') item_group_name
, si.operation_type
, si.src
-- pagination
, row_number() over (order by si.receipt_date desc) rn
from (-- curr day
select b.cost final_price
, case when b.discount = 0 then null else b.price
end full_price
, b.doc_number receipt_num
, b.receipt_date receipt_date
, i.item_code vendor_code
, i.full_name item_full_name
, b.subsite code_op
, b.operator_id
, to_char(b.businessday, 'yyyymm') sale_period
, b.oper_type operation_type
, 'bill' src
from scheme.bills b
join scheme.items i on i.item_code = b.item
where b.businessday = trunc(p_date_to)
and b.subsite = p_office_id
and b.operator_id = p_emp_id
union all
-- prev days
select l.txn_amount final_price
, case when l.disc = 0 then null else l.price
end full_price
, t.receipt_num receipt_num
, t.ts receipt_date
, i.item_code vendor_code
, i.full_name item_full_name
, s.office_code code_op
, e.emp_code operator_id
, to_char(l.dt,'yyyymm') sale_period
, l.txn_type operation_type
, 'txn' src
from scheme.txn t
join scheme.txn_lines l on t.rtl_txn_id = l.rtl_txn_id
join scheme.items i on l.item_id = i.item_id
join scheme.offices s on t.subsite_id = s.subsite_id
join scheme.employees e on t.employee_id = e.employee_id
where t.ts between trunc(p_date_from) and trunc(p_date_to)
and t.subsite_id = v_op_id
and t.employee_id = v_emp_id
) si
/* fact */
left join scheme.sales_details det on si.sale_period = det.period
and si.code_op = det.op_code
and ltrim(si.operator_id,'0') = ltrim(det.tab_num,'0')
and si.receipt_num = det.rcpt_num
and si.vendor_code = det.item_article
/* prognosis */
left join scheme.rates mr on si.sale_period = mr.motiv_rate_period
and si.code_op = mr.code_op
and si.vendor_code = mr.code_1c
where 1 = 1
and si.final_price between nvl(p_price_from, si.final_price) and nvl(p_price_to, si.final_price)
/* if no filters */
and (item_group_cnt = 0 or coalesce(det.sale_stream, mr.sale_stream, 'Not defined') in (select * from table(p_item_group)))
and si.receipt_num = nvl(p_receipt_num, si.receipt_num)
)
where rn between p_page_num * p_page_size + 1 and (p_page_num + 1) * p_page_size;
end;
UPD: Explain plan for the whole query used in the cursor:
----------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost | Time |
----------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 10 | 32810 | 62 | 00:00:01 |
| * 1 | VIEW | | 10 | 32810 | 62 | 00:00:01 |
| * 2 | WINDOW SORT PUSHED RANK | | 2 | 2956 | 62 | 00:00:01 |
| 3 | NESTED LOOPS OUTER | | 2 | 2956 | 61 | 00:00:01 |
| 4 | NESTED LOOPS OUTER | | 2 | 2826 | 53 | 00:00:01 |
| 5 | VIEW | | 2 | 2728 | 46 | 00:00:01 |
| 6 | UNION-ALL | | | | | |
| 7 | NESTED LOOPS | | 1 | 138 | 32 | 00:00:01 |
| 8 | NESTED LOOPS | | 1 | 138 | 32 | 00:00:01 |
| 9 | PARTITION RANGE SINGLE | | 1 | 66 | 29 | 00:00:01 |
| * 10 | TABLE ACCESS BY LOCAL INDEX ROWID BATCHED | F003_BILL | 1 | 66 | 29 | 00:00:01 |
| * 11 | INDEX RANGE SCAN | IX_SUBSITE_DOCNUM_BUSINDAY_SEQ | 1 | | 5 | 00:00:01 |
| * 12 | INDEX RANGE SCAN | IX_D001_CODE_1C_ITEM_ID | 1 | | 2 | 00:00:01 |
| 13 | TABLE ACCESS BY INDEX ROWID | D001_ITEM | 1 | 72 | 3 | 00:00:01 |
| 14 | NESTED LOOPS | | 1 | 183 | 14 | 00:00:01 |
| 15 | NESTED LOOPS | | 1 | 183 | 14 | 00:00:01 |
| 16 | NESTED LOOPS | | 1 | 104 | 12 | 00:00:01 |
| 17 | NESTED LOOPS | | 1 | 70 | 7 | 00:00:01 |
| 18 | NESTED LOOPS | | 1 | 30 | 4 | 00:00:01 |
| 19 | TABLE ACCESS BY INDEX ROWID | D005_EMPLOYEE | 1 | 18 | 3 | 00:00:01 |
| * 20 | INDEX UNIQUE SCAN | PK_D005 | 1 | | 2 | 00:00:01 |
| 21 | TABLE ACCESS BY INDEX ROWID | D018_SUBSITE | 1 | 12 | 1 | 00:00:01 |
| * 22 | INDEX UNIQUE SCAN | PK_D018 | 1 | | 0 | 00:00:01 |
| 23 | PARTITION RANGE ITERATOR | | 1 | 40 | 3 | 00:00:01 |
| 24 | PARTITION HASH SINGLE | | 1 | 40 | 3 | 00:00:01 |
| * 25 | TABLE ACCESS FULL | F007_RTL_TXN | 1 | 40 | 3 | 00:00:01 |
| * 26 | TABLE ACCESS BY GLOBAL INDEX ROWID BATCHED | F008_RTL_TXN_LI | 1 | 34 | 5 | 00:00:01 |
| * 27 | INDEX RANGE SCAN | IX_F008_RTL_TXN_ID | 7 | | 3 | 00:00:01 |
| * 28 | INDEX UNIQUE SCAN | PK_D001 | 1 | | 1 | 00:00:01 |
| 29 | TABLE ACCESS BY INDEX ROWID | D001_ITEM | 1 | 79 | 2 | 00:00:01 |
| * 30 | TABLE ACCESS BY INDEX ROWID BATCHED | T_OP_MOTIVATION_RATE_MYRTK | 1 | 49 | 7 | 00:00:01 |
| * 31 | INDEX RANGE SCAN | IDX02_CODE_OP_1C | 3 | | 3 | 00:00:01 |
| * 32 | TABLE ACCESS BY INDEX ROWID BATCHED | DET_SALES_PPT_DWH | 1 | 65 | 4 | 00:00:01 |
| * 33 | INDEX RANGE SCAN | IDX_03_RCPT_NUM | 3 | | 2 | 00:00:01 |
----------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
------------------------------------------
* 1 - filter("RN">=1 AND "RN"<=10)
* 2 - filter(ROW_NUMBER() OVER ( ORDER BY INTERNAL_FUNCTION("SI"."RECEIPT_DATE") DESC )<=10)
* 10 - filter("F003"."OPERATOR_ID"='000189513' AND "F003"."COST">=TO_NUMBER(TO_CHAR("F003"."COST")) AND "F003"."COST"<=TO_NUMBER(TO_CHAR("F003"."COST")))
* 11 - access("F003"."SUBSITE"='S165' AND "F003"."BUSINESSDAY"=TO_DATE(' 2021-11-23 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
* 11 - filter("F003"."BUSINESSDAY"=TO_DATE(' 2021-11-23 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "F003"."DOC_NUMBER" IS NOT NULL)
* 12 - access("I"."D001_CODE_1C"="F003"."ITEM")
* 12 - filter("I"."D001_CODE_1C" IS NOT NULL)
* 20 - access("E"."EMPLOYEE_ID"=3561503543)
* 22 - access("S"."SUBSITE_ID"=29260)
* 25 - filter("T"."EMPLOYEE_ID"=3561503543 AND "T"."SUBSITE_ID"=29260 AND "T"."F007_TS"<=TO_DATE(' 2021-11-23 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "T"."F007_RCPT_NUM_1C" IS NOT NULL)
* 26 - filter("L"."F008_AMOUNT">=TO_NUMBER(TO_CHAR("L"."F008_AMOUNT")) AND "L"."F008_AMOUNT"<=TO_NUMBER(TO_CHAR("L"."F008_AMOUNT")))
* 27 - access("T"."RTL_TXN_ID"="L"."RTL_TXN_ID")
* 28 - access("L"."ITEM_ID"="I"."ITEM_ID")
* 30 - filter("SI"."SALE_PERIOD"="MR"."MOTIV_RATE_PERIOD"(+))
* 31 - access("SI"."CODE_OP"="MR"."CODE_OP"(+) AND "SI"."VENDOR_CODE"="MR"."CODE_1C"(+))
* 32 - filter("SI"."CODE_OP"="DET"."OP_CODE"(+) AND "SI"."VENDOR_CODE"="DET"."ITEM_ARTICLE"(+) AND "DET"."ITEM_ARTICLE"(+) IS NOT NULL AND "DET"."PERIOD"(+)=TO_NUMBER("SI"."SALE_PERIOD") AND
LTRIM("SI"."OPERATOR_ID",'0')=LTRIM("DET"."TAB_NUM_RTK"(+),'0'))
* 33 - access("SI"."RECEIPT_NUM"="DET"."RCPT_NUM"(+))
* 33 - filter("DET"."RCPT_NUM"(+) IS NOT NULL)
Actual solution
Managed to get the procedure execution plan from the DBA. The problem was that the optimizer chose a different index for joining the scheme.sales_details table when executing the query inside the procedure. Adding an INDEX hint naming the same index that was used in the regular query made everything work just fine.
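For reference, an INDEX hint names the query-block alias and the index to use. The sketch below only illustrates the syntax on made-up objects, since the post does not name the actual index that the standalone query used:
-- made-up table, index and alias, purely to show the hint syntax
create table demo_sales_details (rcpt_num number, br_summary number);
create index ix_demo_sales_rcpt on demo_sales_details (rcpt_num);

select /*+ index(det ix_demo_sales_rcpt) */ det.br_summary
from   demo_sales_details det
where  det.rcpt_num = 12345;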
Deprecated ideas below
As far as I understood, the problem is the Oracle optimizer, which "thought" that doing the UNION ALL first was better than pushing the predicate into the sub-query. Separating this union into two single queries makes it push the predicate without any hesitation.
Probably this can be fixed by playing with hints; that's a work in progress for now.
A temporary workaround is to regroup the query, going from this structure
select *
from (select row_number() rn
, u.*
from (select *
from first_query
union all
select *
from second_query) u
-- some joins
join first_table ft
join second_table st
-- predicate block
where 1=1
and a = b
)
where rn between c and d;
to this
select *
from (select row_number() rn
, u.*
from (select *
from first_query) u
-- some joins
join first_table ft
join second_table st
-- predicate block
where 1=1
and a = b
union all
select row_number() rn
, u.*
from (select *
from second_query) u
-- some joins
join first_table ft
join second_table st
-- predicate block
where 1=1
and a = b
)
where rn between c and d;
That's not a perfect solution because it doubles the JOIN section, but at least it works.
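As for the "playing with hints" idea above, join predicate pushdown into a UNION ALL view can be requested with the PUSH_PRED hint. The sketch below only illustrates the idea on made-up tables; whether it helps the actual procedure is untested:
-- made-up tables, kept minimal so the example is self-contained
create table t_bills  (id number, val varchar2(10));
create table t_txns   (id number, val varchar2(10));
create table t_lookup (id number primary key, descr varchar2(10));
create index ix_t_bills_id on t_bills (id);
create index ix_t_txns_id  on t_txns  (id);

-- ask the optimizer to push the join predicate lk.id = u.id into each UNION ALL branch
select /*+ push_pred(u) */ u.id, u.val, lk.descr
from   t_lookup lk
join   (select id, val from t_bills
        union all
        select id, val from t_txns) u
       on u.id = lk.id
where  lk.descr = 'X';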
In the example below, the Oracle optimizer's estimated row count is off by two orders of magnitude. How do I improve the estimate?
Table A has rows with numbers 1 through 1,000 for each of the 10 letters A through J.
Table C has 100 copies of table A.
So, table A has a cardinality of 10K and table C has a cardinality of 1M.
A given single-valued predicate on the number in table A will yield 1/1000 of the rows in table A (same for table C).
A given single-valued predicate on the letter in table A will yield 1/10 of the rows in table A (same for table C).
Setup script.
drop table C;
drop table A;
create table A
( num NUMBER
, val VARCHAR2(3 byte)
, pad CHAR(40 byte)
)
;
insert /*+ append enable_parallel_dml parallel (auto) */
into A (num, val, pad)
select mod(level-1, 1000) +1
, chr(mod(ceil(level/1000) - 1, 10) + ascii('A'))
, ' '
from dual
connect by level <= 10*1000
;
create table C
( id NUMBER
, num NUMBER
, val VARCHAR2(3 byte)
, pad CHAR(40 byte)
)
;
insert /*+ append enable_parallel_dml parallel (auto) */
into C (id, num, val, pad)
with
"D1" as
( select /*+ materialize */ null from dual connect by level <= 100 --320
)
, "D" as
( select /*+ materialize */
level rn
, mod(level-1, 1000) + 1 num
, chr(mod(ceil(level/1000) - 1, 10) + ascii('A')) val
, ' ' pad
from dual
connect by level <= 10*1000
order by 1 offset 0 rows
)
select rownum id
, num num
, val val
, pad pad
from "D1", "D"
;
commit;
exec dbms_stats.gather_table_stats(OwnName => null, TabName => 'A', cascade => true);
exec dbms_stats.gather_table_stats(OwnName => null, TabName => 'C', cascade => true);
Consider the explain plan for the following query.
select *
from A
join C
on A.num = C.num
and A.val = C.val
where A.num = 1
and A.val = 'A'
;
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 100 | 9900 | 2209 (1)| 00:00:01 |
|* 1 | HASH JOIN | | 100 | 9900 | 2209 (1)| 00:00:01 |
|* 2 | TABLE ACCESS FULL| A | 1 | 47 | 23 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL| C | 100 | 5200 | 2185 (1)| 00:00:01 |
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("A"."NUM"="C"."NUM" AND "A"."VAL"="C"."VAL")
2 - filter("A"."NUM"=1 AND "A"."VAL"='A')
3 - filter("C"."NUM"=1 AND "C"."VAL"='A')
The row cardinality of each step makes sense to me.
ID=2 --> (1/1,000) * (1/10) * 10,000 = 1
ID=3 --> (1/1,000) * (1/10) * 1,000,000 = 100
ID=1 --> 100 is correct. The predicates in ID=2 and ID=3 are equivalent, and each of the 100 rows from ID=3 has exactly one match in the single row from ID=2, so the join returns 100 rows.
Now consider the explain plan for the slightly modified query below.
select *
from A
join C
on A.num = C.num
and A.val = C.val
where A.num in(1,2)
and A.val = 'A'
;
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 198 | 2209 (1)| 00:00:01 |
|* 1 | HASH JOIN | | 2 | 198 | 2209 (1)| 00:00:01 |
|* 2 | TABLE ACCESS FULL| A | 2 | 94 | 23 (0)| 00:00:01 |
|* 3 | TABLE ACCESS FULL| C | 200 | 10400 | 2185 (1)| 00:00:01 |
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("A"."NUM"="C"."NUM" AND "A"."VAL"="C"."VAL")
2 - filter("A"."VAL"='A' AND ("A"."NUM"=1 OR "A"."NUM"=2))
3 - filter("C"."VAL"='A' AND ("C"."NUM"=1 OR "C"."NUM"=2))
The row cardinalities of steps ID=2 and ID=3 make sense to me, but now ID=1 is off by two orders of magnitude.
ID=2 --> (1/1,000)(1/10) * 10,000 = 1
ID=3 --> (1/1,000)(1/10) * 1,000,000 = 100
ID=1 --> The optimizer's estimate of 2 rows is two orders of magnitude below the actual count of 200 (each of the 2 rows from A matches 100 rows in C).
Adding unique and foreign constraints and extended statistics did not improve the estimated row counts.
create unique index IU_A on A (num, val);
alter table A add constraint UK_A unique (num, val) rely using index IU_A enable validate;
alter table C add constraint R_C foreign key (num, val) references A (num, val) rely enable validate;
create index IR_C on C (num, val);
select dbms_stats.create_extended_stats(null,'A','(num, val)') from dual;
select dbms_stats.create_extended_stats(null,'C','(num, val)') from dual;
exec dbms_stats.gather_table_stats(OwnName => null, TabName => 'A', cascade => true);
exec dbms_stats.gather_table_stats(OwnName => null, TabName => 'C', cascade => true);
---------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 198 | 10 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | | | | |
| 2 | NESTED LOOPS | | 2 | 198 | 10 (0)| 00:00:01 |
| 3 | INLIST ITERATOR | | | | | |
| 4 | TABLE ACCESS BY INDEX ROWID| A | 2 | 94 | 5 (0)| 00:00:01 |
|* 5 | INDEX UNIQUE SCAN | IU_A | 2 | | 3 (0)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | IR_C | 1 | | 2 (0)| 00:00:01 |
| 7 | TABLE ACCESS BY INDEX ROWID | C | 1 | 52 | 3 (0)| 00:00:01 |
---------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
5 - access(("A"."NUM"=1 OR "A"."NUM"=2) AND "A"."VAL"='A')
6 - access("A"."NUM"="C"."NUM" AND "C"."VAL"='A')
filter("C"."NUM"=1 OR "C"."NUM"=2)
What do I need to do to make the estimated rows better match reality?
Using Oracle Enterprise Edition 19c.
Thanks in advance.
Edit
After ensuring the most recent optimizer_features_enable was used and modifying one of the predicates, we still have an explain plan whose estimated row count is short by two orders of magnitude.
ID=6 ought to have an estimated row count of 100. It seems the optimizer is applying the predicate selectivity twice: once for the access predicate and again for the filter.
select /*+ optimizer_features_enable('19.1.0') */
*
from A
join C
on A.num = C.num
and A.val = C.val
where A.num in(1,2)
and A.val in('A','B')
;
-----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 4 | 396 | 16 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | 4 | 396 | 16 (0)| 00:00:01 |
| 2 | NESTED LOOPS | | 4 | 396 | 16 (0)| 00:00:01 |
| 3 | INLIST ITERATOR | | | | | |
| 4 | TABLE ACCESS BY INDEX ROWID BATCHED| A | 4 | 188 | 7 (0)| 00:00:01 |
|* 5 | INDEX RANGE SCAN | IU_A | 4 | | 3 (0)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | IR_C | 1 | | 2 (0)| 00:00:01 |
| 7 | TABLE ACCESS BY INDEX ROWID | C | 1 | 52 | 3 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
5 - access("A"."NUM"=1 or "A"."NUM"=2)
filter("A"."VAL"='A' or "A"."VAL"='B')
6 - access("A"."NUM"="C"."NUM" and "A"."VAL"="C"."VAL")
filter(("C"."NUM"=1 or "C"."NUM"=2) and ("C"."VAL"='A' or "C"."VAL"='B'))
I am wondering about the following strange behaviour.
This function should log the selected data to a table ps_cs_corr_data_tb (this table is empty):
create or replace function cs_corr_data(i_id in varchar2,
i_key1 in varchar2,
i_key2 in varchar2,
i_key3 in varchar2,
i_key4 in varchar2,
i_key5 in varchar2)
return number as pragma autonomous_transaction;
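  -- the autonomous transaction lets this function perform DML and COMMIT even though it is called from inside a SELECT (otherwise ORA-14551)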
begin
insert into ps_cs_corr_data_tb
(descr,
cs_key_id_01,
cs_key_id_02,
cs_key_id_03,
cs_key_id_04,
cs_key_id_05)
values
(i_id, i_key1, i_key2, i_key3, i_key4, i_key5);
commit;
return 1; /* insert successful */
exception
when dup_val_on_index then
return 0;
end;
Test a)
The test with the following select statement is successful (as expected):
select b.id, b.key1, b.key2, b.key3, b.key4, b.key5
from (select a.id, a.key1, a.key2, a.key3, a.key4, a.key5
from ( -- test data
select '1' as id,'1' as key1,' ' as key2,' ' as key3,' ' as key4,' ' as key5 from dual union all
select '1' as id,'2' as key1,' ' as key2,' ' as key3,' ' as key4,' ' as key5 from dual union all
select '1' as id,'3' as key1,' ' as key2,' ' as key3,' ' as key4,' ' as key5 from dual union all
select '1' as id,'4' as key1,' ' as key2,' ' as key3,' ' as key4,' ' as key5 from dual union all
select '1' as id,'5' as key1,' ' as key2,' ' as key3,' ' as key4,' ' as key5 from dual
) a
-- some conditions
where a.id = '1'
and a.key1 = '4') b
-- log the results of selection
where cs_corr_data(b.id, b.key1, b.key2, b.key3, b.key4, b.key5) = 1;
result of selection:
ID KEY1 KEY2 KEY3 KEY4 KEY5
1 4
result in logging table:
select * from ps_cs_corr_data_tb d;
DESCR CS_KEY_ID_01 CS_KEY_ID_02 CS_KEY_ID_03 CS_KEY_ID_04 CS_KEY_ID_05
1 4
So far the expected result!
Explain Plan:
Plan hash value: 334628103
-------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 5 | 90 | 2 (0)| 00:00:01 |
| 1 | VIEW | | 5 | 90 | 2 (0)| 00:00:01 |
| 2 | UNION-ALL | | | | | |
|* 3 | FILTER | | | | | |
| 4 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
|* 5 | FILTER | | | | | |
| 6 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
|* 7 | FILTER | | | | | |
| 8 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
|* 9 | FILTER | | | | | |
| 10 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
|* 11 | FILTER | | | | | |
| 12 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
-------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter(NULL IS NOT NULL AND "CS_CORR_DATA"('1','1',' ',' ',' ','
')=1)
5 - filter(NULL IS NOT NULL AND "CS_CORR_DATA"('1','2',' ',' ',' ','
')=1)
7 - filter(NULL IS NOT NULL AND "CS_CORR_DATA"('1','3',' ',' ',' ','
')=1)
9 - filter("CS_CORR_DATA"('1','4',' ',' ',' ',' ')=1)
11 - filter(NULL IS NOT NULL AND "CS_CORR_DATA"('1','5',' ',' ',' ','
')=1)
Test b)
Now the same test with different test data preparation (but the same test data):
select b.id, b.key1, b.key2, b.key3, b.key4, b.key5
from (select a.id, a.key1, a.key2, a.key3, a.key4, a.key5
from (select '1' as id,
to_char(level) as key1,
' ' as key2,
' ' as key3,
' ' as key4,
' ' as key5
from dual
connect by level <= 5) a
where a.id = '1'
and a.key1 = '4') b
where cs_corr_data(b.id, b.key1, b.key2, b.key3, b.key4, b.key5) = 1;
result of selection:
ID KEY1 KEY2 KEY3 KEY4 KEY5
1 4
result in logging table:
select * from ps_cs_corr_data_tb d;
DESCR CS_KEY_ID_01 CS_KEY_ID_02 CS_KEY_ID_03 CS_KEY_ID_04 CS_KEY_ID_05
1 1
1 2
1 3
1 4
1 5
Explain Plan:
Plan hash value: 2403765415
--------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 37 | 2 (0)| 00:00:01 |
|* 1 | VIEW | | 1 | 37 | 2 (0)| 00:00:01 |
|* 2 | CONNECT BY WITHOUT FILTERING| | | | | |
| 3 | FAST DUAL | | 1 | | 2 (0)| 00:00:01 |
--------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("CS_CORR_DATA"("A"."ID","A"."KEY1","A"."KEY2","A"."KEY3","A"."KE
Y4","A"."KEY5")=1 AND "A"."ID"='1' AND "A"."KEY1"='4')
2 - filter(LEVEL<=5)
Any ideas what is going on here?
Oracle (along with just about any relational database) is free to evaluate predicates in whatever order it expects would be most efficient. In either query, it is free to evaluate the function predicate first or to evaluate the a.id = '1' and a.key1 = '4' predicates first or to evaluate the function predicate between those two predicates. It appears that the actual plan the optimizer chose in the second case (at least this time) was to evaluate the function first while it chose to evaluate the function last in the first case. Of course, the optimizer is free to change its mind tomorrow in both cases so you shouldn't depend on a particular query plan.
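If the function must only be called for the rows that survive the filters, one common workaround (a sketch; test it on your version, since the optimizer's freedom is the root issue) is to add a ROWNUM predicate to the inner query, which stops Oracle from merging the view or pushing the function predicate into it:
select b.id, b.key1, b.key2, b.key3, b.key4, b.key5
from (select a.id, a.key1, a.key2, a.key3, a.key4, a.key5
      from (select '1' as id,
                   to_char(level) as key1,
                   ' ' as key2,
                   ' ' as key3,
                   ' ' as key4,
                   ' ' as key5
            from dual
            connect by level <= 5) a
      where a.id = '1'
        and a.key1 = '4'
        and rownum >= 1  -- ROWNUM blocks view merging and predicate pushing, so these filters run first
     ) b
where cs_corr_data(b.id, b.key1, b.key2, b.key3, b.key4, b.key5) = 1;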
I have a select statement that generates a set of values, and I then want to insert that set of values into another table. My concern: inside the select I use another subquery, (select max(org_id)+1 from org), to get the max value incremented by one, but instead of an incrementing value I get the same value on every row; you can see it in the id_limit column.
select abc,abc1,abc3,abc4,(select max(org_id)+1 from org) as id_limit from xyz
Current output:
-----------------------------------------------------------------
| abc | abc1 | abc3 | abc4 | id_limit |
-----------------------------------------------------------------
| BUSINESS_UNIT | 0 | 100 | London | 6 |
| BUSINESS_UNIT | 0 | 200 | Sydney | 6 |
| BUSINESS_UNIT | 0 | 300 | Kiev | 6 |
-----------------------------------------------------------------
This is the expected output I'm trying to get:
-----------------------------------------------------------------
| abc | abc1 | abc3 | abc4 | id_limit |
-----------------------------------------------------------------
| BUSINESS_UNIT | 0 | 100 | London | 6 |
| BUSINESS_UNIT | 0 | 200 | Sydney | 7 |
| BUSINESS_UNIT | 0 | 300 | Kiev | 8 |
-----------------------------------------------------------------
Yes, in Oracle 12 you can use an identity column:
create table foo (
id number generated by default on null as identity
);
https://oracle-base.com/articles/12c/identity-columns-in-oracle-12cr1
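For example (a quick sketch of how such an identity column behaves):
insert into foo (id) values (null);   -- NULL is replaced by the next generated value
insert into foo (id) values (42);     -- explicit values are still allowed with BY DEFAULT ON NULL
insert into foo (id) values (null);   -- generated again
select id from foo;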
In previous versions you can use a sequence and trigger, as explained here:
How to create id with AUTO_INCREMENT on Oracle?
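If all you need is the incrementing id_limit column from the question above (rather than a generated key column), a sketch using row_number() works too; the ORDER BY column here is an assumption, so pick whatever defines your row order:
select abc, abc1, abc3, abc4,
       (select max(org_id) from org) + row_number() over (order by abc3) as id_limit
from   xyz;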
I am working on a trigger which needs INSERT INTO with WHERE logic.
I have three tables.
Absence_table:
-----------------------------
| user_id | absence_reason |
-----------------------------
| 1234567 | 40 |
| 1234567 | 50 |
| 1213 | 40 |
| 1314 | 20 |
| 1111 | 20 |
-----------------------------
company_table:
-----------------------------
| user_id | company_id |
-----------------------------
| 1234567 | 10201 |
| 1213 | 10200 |
| 1314 | 10202 |
| 1111 | 10200 |
-----------------------------
employment_table:
--------------------------------------
| user_id | emp_type | emp_no |
--------------------------------------
| 1234567 | Int | 1 |
| 1213 | Int | 2 |
| 1314 | Int | 3 |
| 1111 | Ext | 4 |
--------------------------------------
And finally I have the table out, which should only receive rows for users who have emp_type = Int in employment_table and company_id = 10200 in company_table.
out:
--------------------------------
| employee_id | absence_reason |
--------------------------------
| 1 | 40 |
| 1 | 50 |
| 2 | 40 |
| 3 | 20 |
--------------------------------
Here is my trigger:
CREATE OR REPLACE TRIGGER "INOUT"."ABSENCE_TRIGGER"
AFTER INSERT ON absence_table
FOR EACH ROW
DECLARE
BEGIN
CASE
WHEN INSERTING THEN
INSERT INTO out (absence_reason, employee_id)
VALUES (:NEW.absence_reason, (SELECT employee_id FROM employment_table WHERE user_id = :NEW.user_id)
WHERE user_id IN
(SELECT user_id FROM employment_table WHERE employment_type = 'INT')
AND user_id IN
(SELECT user_id FROM company_table WHERE company_id = '10200');
END CASE;
END absence_trigger;
It is obviously not working and I can't figure out what I should do to make it work. Any suggestions?
change the insert to this:
insert into out (absence_reason, employee_id)
select :NEW.absence_reason, e.emp_no
from employment_table e
inner join company_table c
on c.user_id = e.user_id
where e.user_id = :NEW.user_id
and e.emp_type = 'INT'
and c.company_id = '10200';
This should work. Note that you had emp_no in your sample structure but employee_id in the trigger insert; I've assumed emp_no is right. The same goes for emp_type vs employment_type.
Finally, in your trigger you have company_id in quotes. Is it really a varchar2? If so, OK; if not, don't use quotes.
The parentheses are not balanced. The one for values is not closed. This is the cause of your specific error, but #DazzaL's answer looks like the correct solution.
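Putting it all together, the corrected trigger might look like the sketch below (based on DazzaL's insert; the CASE WHEN INSERTING wrapper is not needed in an AFTER INSERT trigger, and the emp_type literal is written as 'Int' to match the sample data, since string comparisons are case-sensitive):
create or replace trigger "INOUT"."ABSENCE_TRIGGER"
  after insert on absence_table
  for each row
begin
  insert into out (absence_reason, employee_id)
  select :new.absence_reason, e.emp_no
  from   employment_table e
  join   company_table    c on c.user_id = e.user_id
  where  e.user_id    = :new.user_id
  and    e.emp_type   = 'Int'
  and    c.company_id = 10200;   -- keep the quotes only if company_id is really a varchar2
end absence_trigger;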