Reasonable SELECT ... INTO Oracle solution for case of multiple OR no rows

I just want to SELECT values into variables from inside a procedure.
SELECT blah1,blah2 INTO var1_,var2_
FROM ...
Sometimes a large, complex query will have no rows; sometimes it will have more than one -- both cases lead to exceptions. I would love to replace the exception behavior with implicit behavior similar to:
No rows = no value change, Multiple rows = use last
I can constrain the result set easily enough for the "multiple rows" case, but "no rows" is much more difficult in situations where you can't use an aggregate function in the SELECT.
Are there any special workarounds or suggestions? I'm looking to avoid significantly rewriting queries or executing twice to get a row count before running the SELECT INTO.
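A minimal sketch of that implicit behavior, using a cursor FOR loop; some_table and sort_col are hypothetical names standing in for the real query. No matching rows leaves the variable untouched, and a descending ORDER BY plus an immediate EXIT turns "use last" into "take the first row fetched":
declare
  var1_ varchar2(100) := 'unchanged when no rows match';
begin
  for r in (select blah1
            from some_table               -- hypothetical source of the big query
            order by sort_col desc) loop  -- "last" row first, by some ordering
    var1_ := r.blah1;
    exit;  -- stop after one row, so multiple rows cannot raise TOO_MANY_ROWS
  end loop;
end;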

What's wrong with using an exception block?
create or replace
procedure p(v_job VARCHAR2) IS
  v_ename VARCHAR2(255);
begin
  select ename into v_ename
  from (
    select ename
    from scott.emp
    where job = v_job
    order by ename desc )
  where rownum = 1;
  DBMS_OUTPUT.PUT_LINE('Found Rows Logic Here -> Found ' || v_ename);
EXCEPTION WHEN NO_DATA_FOUND THEN
  DBMS_OUTPUT.PUT_LINE('No Rows found logic here');
end;
SQL> begin
  2    p('FOO');
  3    p('CLERK');
  4  end;
  5  /
No Rows found logic here
Found Rows Logic Here -> Found SMITH
PL/SQL procedure successfully completed.
SQL>

You could use a FOR loop. A FOR loop does nothing when no rows are returned, and its body runs for every row returned if there are multiples. You could adjust your SELECT so that it only returns the last row.
begin
for ARow in (select *
from tableA ta
Where ta.value = ???) loop
-- do something to ARow
end loop;
end;

Related

Insert into not working in PL/SQL in Oracle

declare
vquery long;
cursor c1 is
select * from temp_name;
begin
for i in c1
loop
vquery :='INSERT INTO ot.temp_new(id)
select '''||i.id||''' from ot.customers';
dbms_output.put_line(i.id);
end loop;
end;
/
The output of select * from temp_name is:
ID
--------------------------------------------------------------------------------
customer_id
1 row selected.
I have a customers table which has a customer_id column. I want to insert all the customer_id values into the temp_new table, but they are not being inserted. The PL/SQL block executes successfully but the temp_new table is empty.
The output of dbms_output.put_line(i.id); is
customer_id
What is wrong there?
The main problem is that you generate a dynamic statement that you never execute; at some point you need to do:
execute immediate vquery;
But there are other problems. If you output the generated vquery string you'll see it contains:
INSERT INTO ot.temp_new(id)
select 'customer_id' from ot.customers
which means that for every row in customers you'll get one row in temp_new with ID set to the same fixed literal 'customer_id'. It's unlikely that's what you want; if customer_id is a column name from customers then it shouldn't be in single quotes.
As @mathguy suggested, long is not a sensible data type to use; you could use a CLOB but really only need a varchar2 here. So something more like this, where I've also switched to an implicit cursor:
declare
l_stmt varchar2(4000);
begin
for i in (select id from temp_name)
loop
l_stmt := 'INSERT INTO temp_new(id) select '||i.id||' from customers';
dbms_output.put_line(i.id);
dbms_output.put_line(l_stmt);
execute immediate l_stmt;
end loop;
end;
/
db<>fiddle
The loop doesn't really make sense though; if your temp_name table had multiple rows with different column names, you'd try to insert the corresponding values from those columns in the customers table into multiple rows in temp_new, all in the same id column, as shown in this db<>fiddle.
I guess this is the starting point for something more complicated, but still seems a little odd.
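For reference, if temp_name really does hold a single column name, the whole thing could be done without a loop; a sketch under that assumption (not tested against your schema):
declare
  l_col  temp_name.id%type;
  l_stmt varchar2(4000);
begin
  select id into l_col from temp_name;  -- e.g. 'customer_id'
  l_stmt := 'insert into temp_new (id) select ' || l_col || ' from customers';
  execute immediate l_stmt;
end;
/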

Bulk inserting in Oracle PL/SQL

I have around 5 million records which need to be copied from a table in one schema to a table in another schema (in the same database). I have prepared a script but it gives me the below error.
ORA-06502: PL/SQL: numeric or value error: Bulk bind: Error in define
Following is my script
DECLARE
TYPE tA IS TABLE OF varchar2(10) INDEX BY PLS_INTEGER;
TYPE tB IS TABLE OF SchemaA.TableA.band%TYPE INDEX BY PLS_INTEGER;
TYPE tD IS TABLE OF SchemaA.TableA.start_date%TYPE INDEX BY PLS_INTEGER;
TYPE tE IS TABLE OF SchemaA.TableA.end_date%TYPE INDEX BY PLS_INTEGER;
rA tA;
rB tB;
rD tD;
rE tE;
f number :=0;
BEGIN
SELECT col1||col2||col3 as main_col, band, effective_start_date as start_date, effective_end_date as end_date
BULK COLLECT INTO rA, rB, rD, rE
FROM schemab.tableb;
FORALL i IN rA.FIRST..rE.LAST
insert into SchemaA.TableA(main_col, BAND, user_type, START_DATE, END_DATE, roll_no)
values(rA(i), rB(i), 'C', rD(i), rE(i), 71);
f:=f+1;
if (f=10000) then
commit;
end if;
end;
Could you please help me in finding where the error lies?
Why not a simple
insert into SchemaA.TableA (main_col, BAND, user_type, START_DATE, END_DATE, roll_no)
SELECT col1||col2||col3 as main_col, band, 'C', effective_start_date, effective_end_date, 71
FROM schemab.tableb;
This
f:=f+1;
if (f=10000) then
commit;
end if;
does not make any sense. f becomes 1 - that's it. f=10000 will never be true, thus you don't make a COMMIT.
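For what it's worth, a counter that actually advances would have to accumulate and reset, roughly like this (though see the warnings further down about committing inside loops):
f := f + sql%rowcount;  -- rows affected by the last statement
if f >= 10000 then
  commit;
  f := 0;               -- reset so the next batch can trigger another commit
end if;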
The following script worked for me and I was able to load around 5 million rows within 15 minutes.
ALTER SESSION ENABLE PARALLEL DML
/
DECLARE
cursor c_p1 is
SELECT col1||col2||col3 as main_col, band, effective_start_date as start_date, effective_end_date as end_date
FROM schemab.tableb;
TYPE TY_P1_FULL is table of c_p1%rowtype
index by pls_integer;
v_P1_FULL TY_P1_FULL;
v_seq_num number;
BEGIN
open c_p1;
loop
fetch c_p1 BULK COLLECT INTO v_P1_FULL LIMIT 10000;
exit when v_P1_FULL.count = 0;
FOR i IN 1..v_P1_FULL.COUNT loop
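-- note: the APPEND hint is generally ignored for single-row VALUES inserts
-- (direct-path VALUES needs APPEND_VALUES, available from 11gR2)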
INSERT /*+ APPEND */ INTO schemaA.tableA VALUES (v_P1_FULL(i));
end loop;
commit;
end loop;
close c_P1;
dbms_output.put_line('Load completed');
end;
-- Disable parallel mode for this session
ALTER SESSION DISABLE PARALLEL DML
/
ORA-06502: PL/SQL: numeric or value error: Bulk bind: Error in define
You get that error because you have a literal in the VALUES clause of the INSERT. The FORALL expects everything to be bound to an array.
Your program has a massive problem, literally. You have no LIMIT on the BULK COLLECT clause, so it will try to load all five million records from TableB into your collections. That will blow your session's memory limit.
The point of using BULK COLLECT and FORALL is to bite off chunks of a bigger data set and process them in batches. For that you need a loop. The loop has no FOR condition: instead, test whether the fetch returned anything and exit when the array has zero records.
DECLARE
  TYPE recA IS RECORD (
      main_col   SchemaA.TableA.main_col%TYPE
    , band       SchemaA.TableA.band%TYPE
    , user_type  SchemaA.TableA.user_type%TYPE
    , start_date date
    , end_date   date
    , roll_no    number);
  TYPE recsA IS TABLE OF recA;
  nt_a recsA;
  f number := 0;
  CURSOR cur_b IS
    SELECT col1||col2||col3 as main_col,
           band,
           'C' as user_type,
           effective_start_date as start_date,
           effective_end_date as end_date,
           71 as roll_no
    FROM schemab.tableb;
BEGIN
  open cur_b;
  loop
    fetch cur_b bulk collect into nt_a limit 1000;
    exit when nt_a.count() = 0;
    FORALL i IN 1..nt_a.count
      insert into SchemaA.TableA (main_col, band, user_type, start_date, end_date, roll_no)
      values nt_a(i);
    f := f + sql%rowcount;
    if (f >= 10000) then
      commit;
      f := 0;
    end if;
  end loop;
  commit;
  close cur_b;
end;
Please note that issuing commits inside a loop is contraindicated. You lay yourself open to runtime errors such as ORA-01002 and ORA-01555. If your program does crash half-way through, you will have great difficulty resuming it without problems. By all means persist if you have problems with UNDO tablespace, but the correct answer is to get the DBA to enlarge the UNDO tablespace, not to weaken your code.
"i am using bulk insert because it gives better performance"
It is true that BULK COLLECT and FORALL ... INSERT performs better than a CURSOR FOR loop with row-by-row single inserts. It is not more efficient than a pure SQL INSERT INTO ... SELECT. The value of the construct is that it allows us to manipulate the contents of the array before inserting it. This is handy if we have complex business rules which can only be applied programmatically.
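To illustrate, a sketch reusing the nt_a collection from the block above; some_complex_check is a hypothetical business-rule function, not from the original:
for i in 1..nt_a.count loop
  if some_complex_check(nt_a(i).band) then      -- a rule plain SQL can't express
    nt_a(i).main_col := upper(nt_a(i).main_col);
  end if;
end loop;
FORALL i IN 1..nt_a.count
  insert into SchemaA.TableA (main_col, band, user_type, start_date, end_date, roll_no)
  values nt_a(i);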
Please try after changing the first two lines of your code as below:
DECLARE
TYPE tA IS TABLE OF SchemaA.TableA.main_col%TYPE INDEX BY PLS_INTEGER;
...
...
This may be because of a data type/length mismatch. In the declaration section you missed declaring one type to inherit its type from the table.
Also, as mentioned, the f logic for commit will not do the magic for you. Better to use LIMIT with BULK COLLECT.

how to generate a table of random data from existing database table through oracle procedure

I have to generate a table (containing two columns) of random data from a database table through an Oracle procedure. The user can indicate the number of rows required, and we have to use the table data with ID values from 1001 to 1060. I am trying to use a cursor loop and am not sure which dbms_random method I should use.
I am using the following code to create the procedure:
create or replace procedure a05_random_plant(p_count in number)
as
v_count number := p_count;
cursor c is
select plant_id, common_name
from ppl_plants
where rownum = v_count
order by dbms_random.value;
begin
delete from a05_random_plants_table;
for c_table in c
loop
insert into a05_random_plants_table(plant_id, plant_name)
values (c_table.plant_id, c_table.common_name);
end loop;
end;
/
It compiled successfully. Then I executed it with the following code:
set serveroutput on
exec a05_random_plant(5);
it shows anonymous block completed
but when I run the following code, I do not get any records:
select * from a05_random_plants_table;
The rownum = value predicate will never match any rows for a value greater than 1: ROWNUM is assigned as each candidate row is fetched, so the first row gets ROWNUM 1, fails a test like rownum = 5, and is discarded; the next row then gets ROWNUM 1 again, and so on.
Hence try the below:
create or replace procedure a05_random_plant(p_count in number)
as
v_count number := p_count;
cursor c is
select plant_id, common_name
from ppl_plants
where rownum <= v_count
order by dbms_random.value;
begin
delete from a05_random_plants_table;
for c_table in c
loop
insert into a05_random_plants_table(plant_id, plant_name)
values (c_table.plant_id, c_table.common_name);
end loop;
end;
/
This query by Tom Kyte will generate almost 75K rows:
select trunc(sysdate,'year')+mod(rownum,365) TRANS_DATE,
mod(rownum,100) CUST_ID,
abs(dbms_random.random)/100 SALES_AMOUNT
from all_objects
/
You can use this example to write your query and add a where clause to it (where id between 1001 and 1060, for example), as sketched below.
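Applied to the question's procedure, the cursor might be restricted like this (a sketch using the column names from the question):
cursor c is
  select plant_id, common_name
  from ppl_plants
  where plant_id between 1001 and 1060  -- the required ID range
  order by dbms_random.value;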
I don't think you should use a cursor (which is naturally slower); do a direct insert from a select instead:
insert into table (col1, col2)
select colx, coly from other_table...
And isn't a COMMIT missing at the end of your procedure?
So all the code in your procedure would be a DELETE, an INSERT with that SELECT, and then a COMMIT.
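A sketch of that set-based version, assuming the question's table and column names carry over:
create or replace procedure a05_random_plant(p_count in number)
as
begin
  delete from a05_random_plants_table;
  insert into a05_random_plants_table (plant_id, plant_name)
  select plant_id, common_name
  from (select plant_id, common_name
        from ppl_plants
        where plant_id between 1001 and 1060
        order by dbms_random.value)
  where rownum <= p_count;
  commit;
end;
/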

trigger after insert on table

create or replace
trigger addpagamento
after insert on marcacoes_refeicoes
for each row
declare
nmarcacaoa number;
ncartaoa number;
begin
select nmarcacao into nmarcacaoa from marcacoes_refeicoes where rownum < (select count(*) from marcacoes_refeicoes);
select ncartao into ncartaoa from marcacoes_refeicoes where rownum < (select count(*) from marcacoes_refeicoes);
insert_pagamentos(nmarcacaoa, ncartaoa); --this is a procedure
exception when others then
raise_application_error(-20001, 'Error in Trigger!!!');
end addpagamento;
When I try to run an insert statement on the table "marcacoes_refeicoes", I get an error saying the table is mutating. This is the procedure being called:
create or replace
procedure insert_pagamentos
(nmarcacaoa in number, ncartaoa in number)
AS
BEGIN
insert into pagamentos (nmarcacao, datapagamento, ncartao) values (nmarcacaoa, sysdate, ncartaoa);
commit;
END INSERT_PAGAMENTOS;
Short (oversimplified) answer:
You can't modify a table in a trigger that changes the table.
Long answer:
http://www.oracle-base.com/articles/9i/mutating-table-exceptions.php has a more in-depth explanation, including suggestions how to work around the problem.
You're hitting the mutating-table problem because you're selecting from the same table the trigger is on, but what you seem to be trying to do doesn't make sense. Your queries to get a value for nmarcacaoa and ncartaoa will return a no_data_found or too_many_rows error unless the table had exactly 2 rows in it before your insert:
select nmarcacao from marcacoes_refeicoes
where rownum < (select count(*) from marcacoes_refeicoes);
will return all rows except one, and the one that's excluded will be somewhat random as you have no ordering. The table state is undetermined within the trigger, so you can't really tell how many rows there are, and it won't let you run this query anyway. You won't normally be able to get a single value, and it's not obvious which value you actually want.
I can only imagine that you're trying to use the values from the row you are currently inserting, and putting them in your separate payments (pagamentos) table. If so there is a built-in mechanism to do that using correlation names, which lets you refer to the newly inserted row as :new (by default):
create or replace
trigger addpagamento
after insert on marcacoes_refeicoes
for each row
begin
insert_pagamentos(:new.nmarcacaoa, :new.ncartaoa);
end addpagamento;
The :new is referring to the current row, so :new.nmarcacaoa is the nmarcacaoa being inserted. You don't need to (and can't) get that value from the table itself. (Even with the suggested autonomous pragma, that would be a separate transaction and would not be able to see your newly inserted and uncommitted data).
Or you can just do the insert directly; not sure what the procedure is adding here really:
create or replace
trigger addpagamento
after insert on marcacoes_refeicoes
for each row
begin
insert into pagamentos(nmarcacao, datapagamento, ncartao)
values (:new.nmarcacaoa, sysdate, :new.ncartaoa);
end addpagamento;
I've removed the exception handler as all it was doing was masking the real error, which is rather unhelpful.
Editing this answer in view of the comments below:
You can use PRAGMA AUTONOMOUS_TRANSACTION to get rid of the error, but DO NOT USE it, as it will not serve any purpose.
Use PRAGMA AUTONOMOUS_TRANSACTION:
create or replace
trigger addpagamento
after insert on marcacoes_refeicoes
for each row
declare
PRAGMA AUTONOMOUS_TRANSACTION;
nmarcacaoa number;
ncartaoa number;
begin
select nmarcacao into nmarcacaoa from marcacoes_refeicoes where rownum < (select count(*) from marcacoes_refeicoes);
select ncartao into ncartaoa from marcacoes_refeicoes where rownum < (select count(*) from marcacoes_refeicoes);
insert_pagamentos(nmarcacaoa, ncartaoa); --this is a procedure
exception when others then
raise_application_error(-20001, 'Error in Trigger!!!');
end addpagamento;
However, even in this case, select count(*) from marcacoes_refeicoes will not see the row you are currently inserting: the autonomous transaction cannot read your uncommitted change, which is another reason this approach serves no purpose here.

Oracle Ref Cursor Vs Select into with Exception handling

I have a couple of scenarios:
Need to read the value of a column from three different tables in a predefined order and only 1 table will have the data
Read data from table1 if records are present for criteria given else read data from Table2 for given criteria
In Oracle Stored Procedures
The way these are being handled right now is to first get the count for a given query into a variable, and if the count > 0, execute the same query to read the actual data, as in:
select count(*) into v_count from table1;
if v_count > 0
then
  select data into v_data from table1;
end if;
return v_data;
This is being done to avoid the no_data_found exception; otherwise I would need three exception handler blocks to catch the no_data_found exception for each table access.
Currently I am reimplementing this with Cursors so that I have something like this:
cursor C1 is
  select data from table1;
open C1;
fetch C1 into v_data;
if C1%FOUND
then
  close C1;
  return v_data;
end if;
I wanted to find out which one is better from a performance point of view: the one with cursors, or the one which does a SELECT into a variable and has three no_data_found exception blocks. I don't want to use the two-stage query process we have currently.
I don't know why you are so keen to avoid the exception. What is wrong with:
begin
  begin
    select data into v_data from table1;
  exception
    when no_data_found then
      begin
        select data into v_data from table2;
      exception
        when no_data_found then
          begin
            select data into v_data from table3;
          exception
            when no_data_found then
              v_data := null;
          end;
      end;
  end;
  return v_data;
end;
I believe this will perform better than your other solution because it does the minimum possible work to achieve the desired result.
See How bad is ignoring Oracle DUP_VAL_ON_INDEX exception? where I demonstrate that using exceptions performs better than counting to see if there is any data.
select count(*) into v_count from table1;
if v_count > 0 then
  select data into v_data from table1;
else
  v_data := null;
end if;
return v_data;
is NOT equivalent to
begin
select data into v_data from table1;
return v_data;
exception
when no_data_found then
return null;
end;
in a multi-user environment. In the first case, someone could update the table between the points where you check for existence and when you read the data.
Performance-wise, I have no idea which is better, but I know that the first option makes two context switches to the SQL engine while the second one makes only one.
The way you're handling scenario 1 now is not good. Not only are you doing two queries when one will suffice, but as Erik pointed out, it opens up the possibility of data changing between the two queries (unless you use a read-only or serializable transaction).
Given that you say that in this case the data will be in exactly one of three tables, how about this?
SELECT data
INTO v_data FROM
(SELECT data FROM table1
UNION ALL
SELECT data FROM table2
UNION ALL
SELECT data FROM table3
)
Another "trick" you can use to avoid writing multiple no-data-found handlers would be:
SELECT MIN(data) INTO v_data FROM table1;
IF v_data IS NOT NULL THEN
return v_data;
END IF;
SELECT MIN(data) INTO v_data FROM table2;
...etc...
but I don't really see any reason that's better than having three exception handlers.
For your second scenario, I think what you mean is that there may be data in both tables and you want to use the data from table1 if present, otherwise use the data from table 2. Again you could do this in a single query:
SELECT data
INTO v_data FROM
(SELECT data FROM
(SELECT 1 sort_key, data FROM table1
UNION ALL
SELECT 2 sort_key, data FROM table2
)
ORDER BY sort_key ASC
)
WHERE ROWNUM = 1
An enhanced version of Dave Costa's MIN option...
SELECT COUNT(1), MIN(data) INTO v_rowcount, v_data FROM table2;
Now v_rowcount can be checked for 0 or for greater than 1, the cases where a plain SELECT INTO would throw a NO_DATA_FOUND or TOO_MANY_ROWS exception. A value of 1 indicates that exactly one row exists, which serves our purpose.
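A sketch of how that check might read, assuming the same data column as in the question:
select count(1), min(data) into v_rowcount, v_data from table2;
if v_rowcount = 0 then
  null;             -- no rows: fall through to the next table
elsif v_rowcount > 1 then
  null;             -- more than one row: decide which value should win
else
  return v_data;    -- exactly one row
end if;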
DECLARE
A VARCHAR(35);
B VARCHAR(35);
BEGIN
WITH t AS
(SELECT OM_MARCA, MAGAZIA FROM ifsapp.AKER_EFECTE_STOC WHERE (BARCODE = 1000000491009))
SELECT
(SELECT OM_MARCA FROM t) OM_MARCA,
(SELECT MAGAZIA FROM t) MAGAZIA
INTO A, B
FROM DUAL;
IF A IS NULL THEN
dbms_output.put_line('A este null');
END IF;
dbms_output.put_line(A);
dbms_output.put_line(B);
END;
/
Use the "for row in cursor" form of a loop, and the loop simply will not execute if there is no data:
declare
  cursor t1Cur is
    select ... from table1;
  cursor t2Cur is
    select ... from table2;
  cursor t3Cur is
    select ... from table3;
  t1Flag boolean := FALSE;
  t2Flag boolean := FALSE;
  t3Flag boolean := FALSE;
begin
  for t1Row in t1Cur loop
    ... processing, set t1Flag := TRUE
  end loop;
  for t2Row in t2Cur loop
    ... processing, set t2Flag := TRUE
  end loop;
  for t3Row in t3Cur loop
    ... processing, set t3Flag := TRUE
  end loop;
  ... conditional processing based on flags
end;
