How can you profile how long a trigger takes in OracleDB? - oracle

There are some AFTER INSERT triggers in the application I'm working on, and I'm trying to optimise them. However, I need to know whether what I'm doing is actually improving performance. I'm looking in:
SELECT * from v$SQLAREA;
But I can't find anything that relates exactly to triggers. Is there some Oracle magic that will tell me how long my triggers are taking?

You can do something like this around the statement that fires the trigger (or move the same timing calls into the trigger body itself):
declare
t1 timestamp;
t2 timestamp;
begin
t1 := systimestamp;
-- do whatever here that will fire the trigger
t2 := systimestamp;
dbms_output.put_line('Start: '||t1);
dbms_output.put_line('  End: '||t2);
dbms_output.put_line('Elapsed: '||(t2 - t1));
end;
/
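For example, to see how much time the trigger itself adds, you could time the same insert with the trigger enabled and then disabled and compare the two runs. The table, column and trigger names below are hypothetical:
declare
  t1 timestamp;
  t2 timestamp;
begin
  t1 := systimestamp;
  insert into my_table (val) values ('test');  -- fires the hypothetical AFTER INSERT trigger
  t2 := systimestamp;
  dbms_output.put_line('Insert took: '||(t2 - t1));
  rollback;  -- discard the test row
end;
/
-- then disable the trigger, re-run the block above, and compare:
alter trigger my_after_insert_trg disable;
-- ... re-run the timing block ...
alter trigger my_after_insert_trg enable;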

Related

How to handle exceptions in a FOR loop that inserts rows using a stored procedure? Oracle PL/SQL

I'm coding a complex PL/SQL block (complex for me, haha) to insert rows using information from a FOR LOOP cursor and to pass parameters to a stored procedure that does the insert. The current problem is that there are around 200 rows to be inserted, but when a single row fails to insert, all the inserted rows break and Oracle executes a ROLLBACK (I think so). So... how can I handle exceptions so that I successfully insert all the rows I can, and show on screen any rows that fail? Thanks
FOR i IN c_mig_saldos LOOP
IF i.tipo_comprobante = 'P' THEN -- Nota de debito (positivo)
v_cmp_p.prn_codigo := 'VIV';
v_cmp_p.tcm_codigo := 'NRA';
v_cmp_p.cmp_fecha_emision := TRUNC(SYSDATE);
v_cmp_p.cmp_fecha_contable := TRUNC(SYSDATE);
v_cmp_p.cmp_observacion := 'GENERACION AUTOMATICA DE SALDOS';
v_cmp_p.cli_codigo := i.cli_codigo;
v_tab_dco_p(1).cnc_codigo := 'VIA';
v_tab_dco_p(1).dco_precio_unitario := i.total_final;
v_tab_dco_p(1).dco_cantidad := 1;
v_tab_dco_p(1).dco_importe := i.total_final;
-- Insert a new row using the stored procedure, but when one iteration fails, no rows end up inserted in the table
PKG_COMPROBANTES.PRC_INSERTAR_COMPROBANTE(v_cmp_p, v_tab_dco_p, v_tab_pgc_p, v_tab_apl_p, v_tab_mar_p);
COMMIT;
END IF;
END LOOP;
-- Insert a new row using the stored procedure, but when one iteration fails, no rows end up inserted in the table
begin
PKG_COMPROBANTES.PRC_INSERTAR_COMPROBANTE(v_cmp_p, v_tab_dco_p, v_tab_pgc_p, v_tab_apl_p, v_tab_mar_p);
exception
when others then --this could be made more specific but you didn't say what type of error you were getting
-- Log to a table so you can export failed inserts later.
-- Make sure log table cols are large enough to store everything that can possibly be inserted here...
ErrorLogProc(the, cols, you, want, to, see, and, SQLERRM);
end;
In the ErrorLogProc() I'd recommend a couple of things. Here is a snippet of some things I do in my error logging proc that you may find helpful (it's just a few fragments, not intended to fully work, but you should get the idea)...
oname varchar2(100);
pname varchar2(100);
lnumb varchar2(100);
callr varchar2(100);
g1B CHAR(2) := chr(13)||chr(10);
PRAGMA AUTONOMOUS_TRANSACTION; --important!
begin
owa_util.who_called_me(oname, pname, lnumb, callr);
--TRIM AND FORMAT FOR ERRORLOG
lv_errLogText := 'Package: '||pname||' // Version: '||version_in||' // Line Number: '||lnumb||' // Error: ';
lv_string1 := substr(errStr_in, 1, 4000 - length(lv_errLogText));
lv_errLogText := lv_errLogText||lv_string1;
lv_errLogText := lv_errLogText||g1B||'Error Backtrace: '||replace(dbms_utility.format_error_backtrace,'ORA-', g1b||'ORA-');
insertIntoYourErrorLogTable(lv_errLogText, and, whatever, else, you, need);
commit;
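If you don't already have a logging procedure, a minimal ErrorLogProc could look something like this; the table name and parameter list are just placeholders, not the poster's actual objects:
create table error_log (
  logged_at   timestamp default systimestamp,
  source_name varchar2(128),
  error_text  varchar2(4000)
);

create or replace procedure errorlogproc (p_source in varchar2,
                                          p_error  in varchar2) is
  pragma autonomous_transaction;  -- the log row survives the caller's rollback
begin
  insert into error_log (source_name, error_text)
  values (p_source, substr(p_error, 1, 4000));
  commit;
end;
/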
To keep this simple, since there's not enough information to know the what and why of the question, this will just print some text about each failure, as requested.
SQL> set serveroutput on
Then here's an anonymous PL/SQL block:
BEGIN
FOR i IN c_mig_saldos
LOOP
-- not relevant
BEGIN
PKG_COMPROBANTES.PRC_INSERTAR_COMPROBANTE(v_cmp_p, v_tab_dco_p, v_tab_pgc_p, v_tab_apl_p, v_tab_mar_p);
EXCEPTION
-- The goal of this is to ignore but output information about your failures
WHEN OTHERS THEN DBMS_OUTPUT.PUT_LINE('whatever you want about v_cmp_p, v_tab_dco_p, v_tab_pgc_p, v_tab_apl_p, v_tab_mar_p or SQLERRM/SQLCODE or the error stack - not enough info in the question');
END;
END LOOP;
END;
/
Note: I have removed the commit from the execution.
Then if you like what you see...
SQL> commit;
Ideally, if I knew more about why the insert failures were expected to occur and what I wanted to do about them, I would not insert them in the first place.
Agree with the comment that more information is needed, but here are a couple of things to consider:
Does this need to be done as a loop? If you can write your query as a SELECT statement, then you can do a simple INSERT without the need for PL/SQL, which would be simpler and likely quicker (set-based SQL vs row-by-row PL/SQL); see the sketch after this list.
You say a ROLLBACK is occurring - you have a COMMIT inside your IF statement, so any records which make it into your IF statement and successfully make it through your insert procedure will be committed, i.e. they will not be rolled back. You should consider whether some records you think are being rolled back are actually not making it into the IF statement at all.
Can you provide example data, or an example of the error you are receiving?
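To illustrate the first point, a set-based version would look roughly like the statement below. The target table and column list are hypothetical, since the real insert here goes through PKG_COMPROBANTES.PRC_INSERTAR_COMPROBANTE:
-- one INSERT ... SELECT instead of a row-by-row loop (hypothetical names)
insert into comprobantes (prn_codigo, tcm_codigo, cli_codigo, importe, fecha_emision)
select 'VIV', 'NRA', s.cli_codigo, s.total_final, trunc(sysdate)
from mig_saldos s
where s.tipo_comprobante = 'P';
commit;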

How to get the row that fired the trigger in PL/SQL?

This is my code:
CREATE OR REPLACE TRIGGER TR_DEL
AFTER INSERT OR UPDATE
ON DIZIONARIO_CHIAVI_DA_ESCLUDERE
FOR EACH ROW
DECLARE
CURSOR c_cursore_prendi_riga IS
SELECT CURRICULUM_CHIAVE_RICERCA
FROM PERSONA;
myCursor PERSONA.CURRICULUM_CHIAVE_RICERCA%TYPE;
BEGIN
OPEN c_cursore_prendi_riga;
LOOP
FETCH c_cursore_prendi_riga INTO myCursor;
EXIT WHEN c_cursore_prendi_riga%NOTFOUND;
dbms_output.put_line('oo='|| myCursor );
-- Here I need it
END LOOP;
CLOSE c_cursore_prendi_riga;
END;
I need the row that started the trigger, thanks
Nicholas Krasnov is right, to put this in an answer:
:new and :old are used as pseudo-records. Documentation is here:
https://docs.oracle.com/database/121/TDDDG/tdddg_triggers.htm#TDDDG50000
and here the explanation of these pseudo-records:
https://docs.oracle.com/database/121/LNPLS/triggers.htm#LNPLS99955
Most important: pseudo-records cannot be used like "normal" records. You have to name each column...
For example, you cannot use:
my_special_function(:new);
You will have to create a "real" record:
declare
myrec mytable%rowtype;
begin
myrec.id := :new.id;
myrec.name := :new.name;
myrec.birthdate := :new.birthdate;
-- ...and so on for the remaining columns
my_special_function(myrec);
end;
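Applied to the trigger in the question, :new already holds the DIZIONARIO_CHIAVI_DA_ESCLUDERE row that fired it, so there is no need to open a cursor over PERSONA just to see it. The column name below is hypothetical:
CREATE OR REPLACE TRIGGER TR_DEL
AFTER INSERT OR UPDATE
ON DIZIONARIO_CHIAVI_DA_ESCLUDERE
FOR EACH ROW
BEGIN
  -- :new.<column> is the value of that column in the row being inserted/updated
  dbms_output.put_line('row that fired the trigger: ' || :new.chiave);  -- hypothetical column
END;
/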

Preview the order of the unsorted cursor query in oracle

I have this simple oracle plsql procedure:
declare
cursor A is
select column_A
from A_TAB; -- no order by
begin
for rec_ in A loop
procedure_A(rec_.column_A);
end loop;
end;
And this has now been running for ages.
When I look into sys.v_$sql_bind_capture, value_string column, I can see the current value of bound column_A, and thankfully, that value keeps changing every few minutes.
As the cursor was not sorted by anything, is there a way to see how many more records there are to go (until this is finished)?
In other words I would need to see the currently fetched values of the query from that cursor. Where to look for it?
This is Oracle 12 database.
You can use dbms_application_info.set_session_longops to do this. The results are visible in V$SESSION_LONGOPS.
In your example, that could look something like this:
DECLARE
rindex BINARY_INTEGER;
slno BINARY_INTEGER;
totalwork number;
sofar number;
obj BINARY_INTEGER;
cursor A is
select column_A,
COUNT(*) OVER () cnt
from A_TAB; -- no order by
begin
rindex := dbms_application_info.set_session_longops_nohint;
sofar := 0;
for rec_ in A loop
totalwork := rec_.cnt;
sofar := sofar + 1;
-- named notation: the fourth positional parameter (target) expects a numeric object id, not the table name
dbms_application_info.set_session_longops(rindex      => rindex,
                                          slno        => slno,
                                          op_name     => 'Process a_tab',
                                          sofar       => sofar,
                                          totalwork   => totalwork,
                                          target_desc => 'A_TAB table',
                                          units       => 'rows');
procedure_A(rec_.column_A);
end loop;
end;
Note that in order to get the totalwork value, I've used the analytic COUNT() function to get the total number of rows within the resultset. You could run a separate query to get the count before looping through your original cursor, if that is faster. You'd have to test both methods to work out which would be fastest for your data etc.
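Progress can then be watched from another session with a plain query against V$SESSION_LONGOPS; the WHERE clause below assumes the op_name used in the block above:
select opname, target_desc, sofar, totalwork,
       round(100 * sofar / nullif(totalwork, 0), 1) as pct_done,
       time_remaining
from   v$session_longops
where  opname = 'Process a_tab';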
Of course, depending on what procedure_a does, you might be able to avoid the need to monitor the progress if you can refactor things so that all the work is being done in a single SQL statement. My answer above assumes that it's not possible to do that. If it is, I highly recommend you refactor your code instead!
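For instance, if procedure_A turned out to do nothing more than insert its argument into another table, the whole loop could collapse into a single statement. This is purely illustrative, since the question doesn't say what procedure_A does:
insert into target_tab (col)  -- hypothetical target table and column
select column_A from A_TAB;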

How to execute script conditionally in oracle?

I have a long DDL data migration script which has 15 transaction blocks. For each block, it takes data from one table and then inserts into another table.
But now I have come across a scenario in which the database does not have that table. In this case my script does not even compile.
What I actually want is to check and return early if the table does not exist; only if it exists should execution continue further.
But the script does not even compile, since it contains a statement that references a table which does not exist in that particular database.
After some googling, I found that such statements can be wrapped in an EXECUTE IMMEDIATE block. I have tried this, but haven't been able to get it to compile.
Here is the block -
DECLARE
V_OBJECT_NAME1 VARCHAR2(50);
V_STRING VARCHAR2(1000);
V_CONTINUE Boolean;
BEGIN
V_CONTINUE := true;
SELECT TABLE_NAME INTO V_OBJECT_NAME1 FROM USER_TABLES WHERE TABLE_NAME = 'OOLD_SFWID_CHG_LOG_HEADER';
EXCEPTION WHEN NO_DATA_FOUND THEN
V_CONTINUE := false;
IF V_CONTINUE then
--scubbber code
V_STRING := 'DECLARE
CURSOR C1 IS
SELECT * FROM OOLD_SFWID_CHG_LOG_HEADER;
BEGIN
OPEN C1;
CLOSE C1;
END;';
EXECUTE IMMEDIATE V_STRING
END IF;
END;
Please suggest how we can write it.
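One way to structure this so that the block always compiles is to keep the table name out of static SQL entirely. A minimal sketch, using the same table name as above:
DECLARE
  V_COUNT PLS_INTEGER;
BEGIN
  SELECT COUNT(*) INTO V_COUNT
  FROM USER_TABLES
  WHERE TABLE_NAME = 'OOLD_SFWID_CHG_LOG_HEADER';

  IF V_COUNT > 0 THEN
    -- the table name only ever appears inside dynamic SQL,
    -- so this outer block compiles even when the table is missing
    EXECUTE IMMEDIATE '
      DECLARE
        CURSOR C1 IS SELECT * FROM OOLD_SFWID_CHG_LOG_HEADER;
      BEGIN
        OPEN C1;
        CLOSE C1;
      END;';
  END IF;
END;
/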

Best practice for performing inserts in a cursor

I need to do some inserts in a cursor over about 300000 rows; this is, however, running slowly. Any ideas on how I can make it run faster? Can I speed it up by batching the commits? For example, I would perform a commit after every 1000th row.
DECLARE
CURSOR test_cursor IS
SELECT a from database.mytable;
BEGIN
FOR curRow IN test_cursor LOOP
insert into tableb (testval)
values ('something');
commit;
END LOOP;
END;
300000 rows is not that many rows. Unless the rows are each extremely large, you should not commit in the middle of the batch.
Intermediate commits will only achieve:
additional overhead because each commit creates additional work,
loss of restartability in case of error (and loss of transactional integrity),
greater chance of running into ORA-1555
If your process is really a cursor with a single insert inside the loop, you should run a single statement:
BEGIN
INSERT INTO tableb (col1..coln) (SELECT col1..coln FROM database.mytable);
END;
If you still need extra performance, you could look into direct-path inserts and parallel operations, but that might be over-optimization with "only" 300k rows.
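For reference, a direct-path, parallel version of the single statement might look roughly like this; the hint degree is arbitrary and parallel DML has to be enabled for the session:
alter session enable parallel dml;

insert /*+ append parallel(4) */ into tableb (testval)
select 'something' from database.mytable;

commit;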
By far the single greatest optimization available to you is to think in terms of sets instead of the traditional procedural approach that consists of batches of single-row statements.
Or you can try this:
DECLARE
CURSOR test_cursor IS
SELECT col1 from table_a;
TYPE fetch_array IS TABLE OF table_a.col1%TYPE; -- scalar element type: many Oracle versions reject references to record fields inside FORALL
test_array fetch_array;
l_errors PLS_INTEGER;
l_dml_errors EXCEPTION;
PRAGMA EXCEPTION_INIT(l_dml_errors, -24381);
BEGIN
open test_cursor;
loop
fetch test_cursor bulk collect into test_array limit 10000;
forall i in 1..test_array.count save exceptions
insert into table_b(col1)
values(test_array(i));
exit when test_cursor%notfound;
end loop;
close test_cursor;
commit;
EXCEPTION
WHEN l_dml_errors THEN
l_errors := SQL%BULK_EXCEPTIONS.COUNT;
dbms_output.put_line('Number of INSERT statements that failed: ' || l_errors);
FOR i IN 1 .. l_errors
LOOP
dbms_output.put_line('Error #' || i || ' at '|| 'iteration #' || SQL%BULK_EXCEPTIONS(i).ERROR_INDEX);
dbms_output.put_line('Error message is ' || SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
END LOOP;
END;
I would not recommend a cursor approach for this. I use APPEND and PARALLEL hints for situations like this. Most of the time your query literally runs N times as fast, where N is the parallel degree. It is occasionally also worth skipping redo generation with NOLOGGING (or running in NOARCHIVELOG mode), at the cost of recoverability.
For truly large migrations (dozens to hundreds of GB), I've found it's a good idea to batch on a table's natural key (date, usually). Some small amount of state around it can let you cancel + resume the migration at will if necessary.
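A sketch of what that batching could look like, with a tiny state table to make the run resumable; all table and column names here are hypothetical:
declare
  v_from date;
begin
  -- migration_state holds the last day that was already copied
  select last_done_day + 1 into v_from from migration_state;

  for d in (select distinct trunc(created_on) as batch_day
            from source_table
            where created_on >= v_from
            order by 1) loop
    insert /*+ append */ into target_table (id, payload, created_on)
    select id, payload, created_on
    from source_table
    where created_on >= d.batch_day
    and created_on < d.batch_day + 1;

    update migration_state set last_done_day = d.batch_day;
    commit;  -- one commit per day keeps the run restartable
  end loop;
end;
/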
Maybe this will help you, please try this:
DECLARE
i number := 0;
CURSOR test_cursor IS
SELECT a from database.mytable;
BEGIN
FOR curRow IN test_cursor LOOP
insert into tableb (testval)
values ('something');
i := i + 1;
if mod(i,1000)=0 then
commit;
end if;
END LOOP;
commit;
END;
