How to get better performance from a stored procedure in Oracle

I wrote a stored procedure, but I don't think it performs well at all. How can I make it faster? Thanks.
Table A has 800k records and table B has 36k records. This is test data; there will be more records in the production environment.
Table B has an index defined on its customer_no column.
I ran the procedure once and it took 22 minutes.
create or replace procedure SP_KVKK
is
    TYPE json_data_type IS RECORD
    (
        del_data CLOB
    );
    customer_no   number(10);
    r_del_data    C%ROWTYPE;
    l_deleted_cur SYS_REFCURSOR;
    l_deleted_rec json_data_type;
    l_sel_sql     VARCHAR2(500);

    cursor mbb_list is
        select customer_no from A;
begin
    open mbb_list;
    loop
        fetch mbb_list into customer_no;
        exit when mbb_list%notfound;

        l_sel_sql := 'SELECT JSON_OBJECT(* RETURNING CLOB) AS DEL_DATA
                      FROM B where customer_no=' || customer_no;

        open l_deleted_cur for l_sel_sql;
        loop
            fetch l_deleted_cur into l_deleted_rec;
            exit when l_deleted_cur%notfound;

            r_del_data.DELETED_DOCUMENT_JSON := l_deleted_rec.del_data;
            r_del_data.DELETE_DATE := SYSTIMESTAMP;

            insert into C
            values r_del_data;
        end loop;
        close l_deleted_cur;
    end loop;
    close mbb_list;
end;

The best thing you can do is handle this as a single SQL statement. Stop thinking in terms of row-by-row nested loops, which will always be slow, and use a set-based SQL operation instead:
begin
-- insert your two columns into table C using the data
-- from table B returned as a json_object and systimestamp,
-- where only records from table B that have a customer_no
-- from table A will be selected
insert into C (deleted_document_json, delete_date)
select json_object(B.* returning clob), systimestamp
from B
inner join A using (customer_no);
commit;
end;
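If a few bad rows in B could abort that single statement (say, a constraint violation on C), DML error logging is a natural companion. Here is a minimal sketch, assuming your version supports DBMS_ERRLOG for these column types and using the default ERR$_ log table name:

-- One-off setup: creates an ERR$_C log table alongside table C
BEGIN
    DBMS_ERRLOG.CREATE_ERROR_LOG(dml_table_name => 'C');
END;
/

INSERT INTO C (deleted_document_json, delete_date)
SELECT JSON_OBJECT(* RETURNING CLOB), SYSTIMESTAMP
FROM B
WHERE customer_no IN (SELECT customer_no FROM A)
LOG ERRORS INTO ERR$_C ('kvkk batch') REJECT LIMIT UNLIMITED;

COMMIT;

Failed rows land in ERR$_C together with the Oracle error code and message, so the batch completes and the rejects can be inspected afterwards.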

Related

Oracle Stored procedure - execute for all the select result

Say I have a stored procedure which accepts 2 varchars, does some processing and updates my business tables. Is there a way that I can run the stored procedure for the results from a select query?
Like,
execute my_stored_proc select varchar_1,varchar_2 from an_ip_table;
You can iterate over the results with a loop:
BEGIN
    FOR RECS IN (SELECT varchar_1, varchar_2 FROM an_ip_table)
    LOOP
        my_stored_proc(RECS.varchar_1, RECS.varchar_2);
    END LOOP;
END;
This could be a simple way:
begin
    for i in (
        select varchar_1, varchar_2
        from an_ip_table
    )
    loop
        my_stored_proc(i.varchar_1, i.varchar_2);
    end loop;
end;
Initially I thought of just leaving a comment, but this needs some explanation, so I'm writing an answer. You are actually doing it the wrong way. Ideally, you should be passing a cursor to my_stored_proc and fetching the cursor inside the procedure. Your method causes a separate procedure call for every row of the query result. The processing will be very slow if you have a huge volume of data, and it is a bad idea even if there are only a few rows.
Here is a sample procedure that does a DML operation using FORALL. It is just a sample, but you should be able to convert your select query so that you can do your DML this way.
CREATE OR REPLACE PROCEDURE my_stored_proc (
    p_iptab_cur SYS_REFCURSOR
) AS
    TYPE iprec IS RECORD ( col1 an_ip_table.col1%TYPE,
                           col2 an_ip_table.col2%TYPE );
    TYPE iptype IS TABLE OF iprec;
    ips iptype;
BEGIN
    FETCH p_iptab_cur BULK COLLECT INTO ips;

    FORALL i IN 1 .. ips.COUNT
        -- Your DML goes here, using the collection of records.
        -- (the target table name below is only an illustration)
        INSERT INTO your_target_table VALUES ips(i);
END;
/
-- Calling the procedure by passing the cursor
DECLARE
    x SYS_REFCURSOR;
BEGIN
    OPEN x FOR
        select col1, col2
        from an_ip_table;
    my_stored_proc(x);
END;
/

Bulk inserting in Oracle PL/SQL

I have around 5 million records which need to be copied from a table in one schema to a table in another schema (in the same database). I have prepared a script, but it gives me the error below.
ORA-06502: PL/SQL: numeric or value error: Bulk bind: Error in define
Following is my script:
DECLARE
    TYPE tA IS TABLE OF varchar2(10) INDEX BY PLS_INTEGER;
    TYPE tB IS TABLE OF SchemaA.TableA.band%TYPE INDEX BY PLS_INTEGER;
    TYPE tD IS TABLE OF SchemaA.TableA.start_date%TYPE INDEX BY PLS_INTEGER;
    TYPE tE IS TABLE OF SchemaA.TableA.end_date%TYPE INDEX BY PLS_INTEGER;
    rA tA;
    rB tB;
    rD tD;
    rE tE;
    f number := 0;
BEGIN
    SELECT col1||col2||col3 as main_col, band, effective_start_date as start_date, effective_end_date as end_date
    BULK COLLECT INTO rA, rB, rD, rE
    FROM schemab.tableb;

    FORALL i IN rA.FIRST..rE.LAST
        insert into SchemaA.TableA(main_col, BAND, user_type, START_DATE, END_DATE, roll_no)
        values(rA(i), rB(i), 'C', rD(i), rE(i), 71);

    f := f + 1;
    if (f = 10000) then
        commit;
    end if;
end;
Could you please help me in finding where the error lies?
Why not a simple
insert into SchemaA.TableA (main_col, BAND, user_type, START_DATE, END_DATE, roll_no)
SELECT col1||col2||col3 as main_col, band, 'C', effective_start_date, effective_end_date, 71
FROM schemab.tableb;
This
f := f + 1;
if (f = 10000) then
    commit;
end if;
does not make any sense: f becomes 1, and that's it. f = 10000 will never be true, so you never COMMIT.
The following script worked for me, and I was able to load around 5 million rows within 15 minutes.
ALTER SESSION ENABLE PARALLEL DML
/
DECLARE
    cursor c_p1 is
        SELECT col1||col2||col3 as main_col, band, effective_start_date as start_date, effective_end_date as end_date
        FROM schemab.tableb;

    TYPE TY_P1_FULL is table of c_p1%rowtype index by pls_integer;
    v_P1_FULL  TY_P1_FULL;
    v_seq_num  number;
BEGIN
    open c_p1;
    loop
        fetch c_p1 BULK COLLECT INTO v_P1_FULL LIMIT 10000;
        exit when v_P1_FULL.count = 0;
        FOR i IN 1..v_P1_FULL.COUNT loop
            INSERT /*+ APPEND */ INTO schemaA.tableA VALUES (v_P1_FULL(i));
        end loop;
        commit;
    end loop;
    close c_P1;
    dbms_output.put_line('Load completed');
end;
/
-- Disable parallel mode for this session
ALTER SESSION DISABLE PARALLEL DML
/
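A side note on that script: the APPEND hint is ignored for single-row VALUES inserts (direct-path for VALUES inserts needs the APPEND_VALUES hint on 11.2+), and parallel DML only kicks in for set-based statements. A sketch of the set-based equivalent that can actually use both, with the same tables as the question:

ALTER SESSION ENABLE PARALLEL DML
/
INSERT /*+ APPEND PARALLEL(t) */ INTO SchemaA.TableA t
    (main_col, band, user_type, start_date, end_date, roll_no)
SELECT col1 || col2 || col3, band, 'C',
       effective_start_date, effective_end_date, 71
FROM schemab.tableb;

COMMIT;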
ORA-06502: PL/SQL: numeric or value error: Bulk bind: Error in define
You get that error because you have a literal in the VALUES clause of the INSERT; the FORALL expects everything to be bound to an array.
Your program also has a massive problem, literally: there is no LIMIT on the BULK COLLECT clause, so it will try to load all five million records from TableB into your collections at once. That will blow your session's memory limit.
The point of using BULK COLLECT and FORALL is to bite off chunks of a bigger data set and process it in batches, and for that you need a loop. The loop has no FOR condition: instead, test whether the fetch returned anything and exit when the array contains zero records.
DECLARE
    TYPE recA IS RECORD (
          main_col   SchemaA.TableA.main_col%TYPE
        , band       SchemaA.TableA.band%TYPE
        , user_type  SchemaA.TableA.user_type%TYPE
        , start_date date
        , end_date   date
        , roll_no    number);
    TYPE recsA is table of recA;
    nt_a recsA;
    f number := 0;
    CURSOR cur_b is
        SELECT col1||col2||col3 as main_col,
               band,
               'C' as user_type,
               effective_start_date as start_date,
               effective_end_date as end_date,
               71 as roll_no
        FROM schemab.tableb;
BEGIN
    open cur_b;
    loop
        fetch cur_b bulk collect into nt_a limit 1000;
        exit when nt_a.count() = 0;
        FORALL i IN 1..nt_a.count()
            insert into SchemaA.TableA(main_col, BAND, user_type, START_DATE, END_DATE, roll_no)
            values nt_a(i);
        f := f + sql%rowcount;
        if (f >= 10000) then
            commit;
            f := 0;
        end if;
    end loop;
    commit;
    close cur_b;
end;
Please note that issuing commits inside a loop is contraindicated. You lay yourself open to runtime errors such as ORA-01002 and ORA-01555, and if your program crashes half-way through you will have great difficulty resuming it cleanly. By all means persist if you have problems with UNDO tablespace, but the correct answer is to get the DBA to enlarge the UNDO tablespace, not to weaken your code.
"i am using bulk insert because it gives better performance"
It is true that BULK COLLECT and FORALL ... INSERT is more performant than a CURSOR FOR loop with row-by-row single inserts. It is not more efficient than a pure SQL INSERT INTO ... SELECT. The value of the construct is that it allows us to manipulate the contents of the array before inserting it. This is handy if we have complex business rules which can only be applied programmatically.
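For example, here is a minimal sketch of that pattern; the table and column names (source_tab, target_tab, col1, col2) are illustrative, not from the question. A per-row rule is applied to the array between the fetch and the FORALL insert:

DECLARE
    CURSOR cur_src IS
        SELECT col1, col2 FROM source_tab;   -- illustrative source table
    TYPE src_tt IS TABLE OF cur_src%ROWTYPE;
    l_rows src_tt;
BEGIN
    OPEN cur_src;
    LOOP
        FETCH cur_src BULK COLLECT INTO l_rows LIMIT 500;
        EXIT WHEN l_rows.COUNT = 0;
        -- stand-in for a business rule too complex to express in plain SQL
        FOR i IN 1 .. l_rows.COUNT LOOP
            l_rows(i).col2 := UPPER(l_rows(i).col2);
        END LOOP;
        FORALL i IN 1 .. l_rows.COUNT
            INSERT INTO target_tab VALUES l_rows(i);  -- target must match the record shape
    END LOOP;
    CLOSE cur_src;
    COMMIT;
END;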
Please try again after changing the first two lines of your code as below:
DECLARE
TYPE tA IS TABLE OF SchemaA.TableA.main_col%TYPE INDEX BY PLS_INTEGER;
...
...
This may be because of a data type/length mismatch. In the declaration section you missed declaring one collection to inherit its type from the table column.
Also, as mentioned, the f logic for the commit will not do the magic for you. Better to use LIMIT with BULK COLLECT, as sketched below.
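A compact sketch combining both suggestions (every field anchored to the cursor's row type, and a LIMIT on the fetch), using the question's table names:

DECLARE
    CURSOR c_src IS
        SELECT col1||col2||col3 AS main_col,
               band,
               'C' AS user_type,
               effective_start_date AS start_date,
               effective_end_date AS end_date,
               71 AS roll_no
        FROM schemab.tableb;
    TYPE src_tt IS TABLE OF c_src%ROWTYPE;
    l_rows src_tt;
BEGIN
    OPEN c_src;
    LOOP
        FETCH c_src BULK COLLECT INTO l_rows LIMIT 10000;
        EXIT WHEN l_rows.COUNT = 0;
        FORALL i IN 1 .. l_rows.COUNT
            INSERT INTO SchemaA.TableA
                (main_col, band, user_type, start_date, end_date, roll_no)
            VALUES l_rows(i);
        COMMIT;   -- per-batch commit; see the caveats about mid-loop commits above
    END LOOP;
    CLOSE c_src;
END;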

Migrate records with INSERT INTO x SELECT FROM y statement and loop

I need to migrate all the records from tableA to tableB. At the moment I'm simply using the following statement:
INSERT INTO table1 (id, name, description) SELECT id, name, descriptionOld
FROM table2;
COMMIT;
The problem is that if there is a high number of records, the temporary tablespace might not have enough space to handle this statement. For this reason I would like to know if there is any way to keep this statement but run it in a loop that commits, for example, 1000 records at a time.
Thank you!
For huge data processing, one must consider the context switching between the SQL and PL/SQL engines. One approach is to let the insert run from tableA to tableB and handle the error records after the insertion is completed. You create an error table C, with the same structure as your destination table, to hold the error records. Once the copying of data from tableA is completed, you can look at the error records and insert them into tableB directly after making corrections. See below how you can do it.
declare
    cursor c is
        select *
        from table_a;
    type array is table of c%rowtype;
    l_data array;
    dml_errors EXCEPTION;
    PRAGMA exception_init(dml_errors, -24381);
    l_errors number;
    l_idx    number;
begin
    open c;
    loop
        -- A limit of around 100 gives a good balance between context switches and memory
        fetch c bulk collect into l_data limit 100;
        begin
            forall i in 1 .. l_data.count SAVE EXCEPTIONS
                insert into table_b
                values l_data(i);
        exception
            when DML_ERRORS then
                l_errors := sql%bulk_exceptions.count;
                for i in 1 .. l_errors
                loop
                    l_idx := sql%bulk_exceptions(i).error_index;
                    -- Insert error records into error table_c so that they can be handled later
                    insert into table_c
                    values l_data(l_idx);
                end loop;
                commit;
        end;
        exit when c%notfound;
    end loop;
    close c;
    commit;
end;
/
Say you have these tables:
create table sourceTab( a number, b number, c number);
create table targetTab( a number, b number, c number, d number);
and you want to copy records from sourceTab to targetTab, filling both columns c and d of the target table with the value of column c in the source.
This is a way to copy the records not in a single statement, but in blocks of a given number of rows.
DECLARE
CURSOR sourceCursor IS SELECT a, b, c, c as d FROM sourceTab;
TYPE tSourceTabType IS TABLE OF sourceCursor%ROWTYPE;
vSourceTab tSourceTabType;
vLimit number := 10; /* here you decide how many rows you insert in one shot */
BEGIN
OPEN sourceCursor;
LOOP
FETCH sourceCursor
BULK COLLECT INTO vSourceTab LIMIT vLimit;
forall i in 1 .. vSourceTab.count  -- avoids NULL bounds when the final fetch returns no rows
    insert into targetTab values vSourceTab(i);
commit;
EXIT WHEN vSourceTab.COUNT < vLimit;
END LOOP;
CLOSE sourceCursor;
END;
If you follow this approach, you may get an error after some records, but not all, have already been copied (and committed), so you have to consider the best way to handle this case, depending on your needs.
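One simple way to make the copy restartable after such a partial failure is to skip rows that already made it into the target; a sketch, assuming column a identifies a row in both tables:

INSERT INTO targetTab (a, b, c, d)
SELECT s.a, s.b, s.c, s.c
FROM sourceTab s
WHERE NOT EXISTS (SELECT 1
                  FROM targetTab t
                  WHERE t.a = s.a);
COMMIT;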

How to generate a table of random data from an existing database table through an Oracle procedure

I have to generate a table (containing two columns) of random data from a database table through an Oracle procedure. The user can indicate the number of rows required, and we have to use the table data with ID values from 1001 to 1060. I am trying to use a cursor loop and am not sure which dbms_random method I should use.
I am using the following code to create the procedure:
create or replace procedure a05_random_plant(p_count in number)
as
    v_count number := p_count;
    cursor c is
        select plant_id, common_name
        from ppl_plants
        where rownum = v_count
        order by dbms_random.value;
begin
    delete from a05_random_plants_table;
    for c_table in c
    loop
        insert into a05_random_plants_table(plant_id, plant_name)
        values (c_table.plant_id, c_table.common_name);
    end loop;
end;
/
It compiled successfully. Then I executed it with the following code:
set serveroutput on
exec a05_random_plant(5);
It shows "anonymous block completed",
but when I run the following query, I do not get any records:
select * from a05_random_plants_table;
The rownum = value predicate will never match for a value greater than 1,
hence try the below:
create or replace procedure a05_random_plant(p_count in number)
as
    v_count number := p_count;
    cursor c is
        select plant_id, common_name
        from ppl_plants
        where rownum <= v_count
        order by dbms_random.value;
begin
    delete from a05_random_plants_table;
    for c_table in c
    loop
        insert into a05_random_plants_table(plant_id, plant_name)
        values (c_table.plant_id, c_table.common_name);
    end loop;
end;
/
This query by Tom Kyte will generate almost 75K rows:
select trunc(sysdate,'year')+mod(rownum,365) TRANS_DATE,
mod(rownum,100) CUST_ID,
abs(dbms_random.random)/100 SALES_AMOUNT
from all_objects
/
You can use this example to write your query and add a where clause to it (where id between 1001 and 1060, for example), as sketched below.
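For instance, a sketch of that idea against the question's ppl_plants table, picking 5 random rows from the required ID range (the inline view matters: the random ordering must happen before rownum cuts the result):

select plant_id, common_name
from (select plant_id, common_name
      from ppl_plants
      where plant_id between 1001 and 1060
      order by dbms_random.value)
where rownum <= 5;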
I don't think you should use a cursor (which is naturally slow); instead, do a direct insert from a select:
insert into table (col1, col2)
select colx, coly from other_table...
Also, isn't a COMMIT missing at the end of your procedure?
So all the code in your procedure would be a DELETE, an INSERT with that SELECT, and then a COMMIT, as sketched below.
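A sketch of the whole procedure as described (DELETE, one INSERT ... SELECT, then COMMIT), reusing the question's table names:

create or replace procedure a05_random_plant(p_count in number)
as
begin
    delete from a05_random_plants_table;

    insert into a05_random_plants_table (plant_id, plant_name)
    select plant_id, common_name
    from (select plant_id, common_name
          from ppl_plants
          order by dbms_random.value)
    where rownum <= p_count;

    commit;
end;
/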

How to Calculate Procedure Performance

Hi everyone, please suggest how to determine which of the following three methods is best and gives the best performance.
Thanks in advance.
CREATE OR REPLACE PACKAGE PKG_P
AS
G_SAL NUMBER(7,2):=&G;
END;
1) NORMAL CURSOR FOR LOOP METHOD:
CREATE OR REPLACE PROCEDURE P(
    v_deptno NUMBER,
    v_dname  VARCHAR2
)
AS
    CURSOR c_emp(c_deptno NUMBER, c_dname VARCHAR2)
    IS
        SELECT E.Ename, D.Dname, E.Sal
        FROM Emp E, Dept D
        WHERE E.Deptno = D.Deptno
          AND E.Deptno = c_deptno
          AND D.Dname = c_dname
          AND E.Sal = PKG_P.G_SAL;
BEGIN
    FOR i IN c_emp(v_deptno, v_dname)
    LOOP
        DBMS_OUTPUT.PUT_LINE(i.Ename || ' ' || i.Dname || ' ' || i.Sal);
    END LOOP;
END;
2) NORMAL CURSOR FOR LOOP WITH IF CONDITION:
CREATE OR REPLACE PROCEDURE P(
    v_deptno NUMBER,
    v_dname  VARCHAR2
)
AS
    CURSOR c_emp(c_deptno NUMBER, c_dname VARCHAR2)
    IS
        SELECT E.Ename, D.Dname, E.Sal
        FROM Emp E, Dept D
        WHERE E.Deptno = D.Deptno
          AND E.Deptno = c_deptno
          AND D.Dname = c_dname;
BEGIN
    FOR i IN c_emp(v_deptno, v_dname)
    LOOP
        IF i.sal = PKG_P.G_SAL THEN
            DBMS_OUTPUT.PUT_LINE(i.Ename || ' ' || i.Dname || ' ' || i.Sal);
        END IF;
    END LOOP;
END;
3) USING ASSOCIATIVE ARRAY:
CREATE OR REPLACE PROCEDURE P(
    v_deptno NUMBER,
    v_dname  VARCHAR2
)
AS
    CURSOR c_emp(c_deptno NUMBER, c_dname VARCHAR2)
    IS
        SELECT E.Ename, D.Dname, E.Sal
        FROM Emp E, Dept D
        WHERE E.Deptno = D.Deptno
          AND E.Deptno = c_deptno
          AND D.Dname = c_dname;

    TYPE t is RECORD
    (
        v_ename VARCHAR2(30),
        v_dname VARCHAR2(30),
        v_sal   NUMBER(7,2)
    );
    TYPE t1 IS TABLE OF t;
    t2 t1;
BEGIN
    OPEN c_emp(v_deptno, v_dname);
    FETCH c_emp BULK COLLECT INTO t2;
    CLOSE c_emp;
    FOR i in t2.FIRST .. t2.LAST
    LOOP
        IF t2(i).v_sal = PKG_P.G_SAL THEN
            DBMS_OUTPUT.PUT_LINE(t2(i).v_ename || ' ' || t2(i).v_dname || ' ' || t2(i).v_sal);
        END IF;
    END LOOP;
END;
Easy answer: IT DEPENDS
With some time I could probably create some tables for you, where each of the three versions could be faster, because skewed data can confuse the optimizer...
But as general guidelines with not-too-strange data:
With newer Oracle versions the first will be the fastest, easiest to read, and overall best. With older versions you may have to incorporate BULK COLLECT into the first (see the sketch after these guidelines):
You should always give the optimizer as much information as you can, so all conditions should be in the WHERE clause, where it can be used to find the best plan. Also, you don't want to read data from disk just to throw it away afterwards (the loop with IF).
The second performance killer is PL/SQL context switches. If you process one row at a time you get an overhead for fetching each row individually, so you should use BULK COLLECT to read a bunch of rows into memory and process them at once. But Oracle automatically bulk-collects cursor FOR loops since 10g. See this link: CURSORS IN PLSQL
You should RARELY use BULK COLLECT without a limit. What if the cursor returns 10M rows? Without a limit your whole memory will be flooded and your server will choke. So fetch in batches of about 100-500 rows (which newer Oracle versions do automatically with the FOR ... IN loop).
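For older versions, here is a sketch of method 1 rewritten with an explicit BULK COLLECT ... LIMIT, so memory stays bounded no matter how many rows the cursor returns:

CREATE OR REPLACE PROCEDURE P(
    v_deptno NUMBER,
    v_dname  VARCHAR2
)
AS
    CURSOR c_emp(c_deptno NUMBER, c_dname VARCHAR2)
    IS
        SELECT E.Ename, D.Dname, E.Sal
        FROM Emp E, Dept D
        WHERE E.Deptno = D.Deptno
          AND E.Deptno = c_deptno
          AND D.Dname = c_dname
          AND E.Sal = PKG_P.G_SAL;
    TYPE emp_tt IS TABLE OF c_emp%ROWTYPE;
    l_emps emp_tt;
BEGIN
    OPEN c_emp(v_deptno, v_dname);
    LOOP
        -- batches of a few hundred rows keep memory bounded
        FETCH c_emp BULK COLLECT INTO l_emps LIMIT 200;
        EXIT WHEN l_emps.COUNT = 0;
        FOR i IN 1 .. l_emps.COUNT LOOP
            DBMS_OUTPUT.PUT_LINE(l_emps(i).Ename || ' ' || l_emps(i).Dname || ' ' || l_emps(i).Sal);
        END LOOP;
    END LOOP;
    CLOSE c_emp;
END;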
