Deadlock detected due to Pragma Autonomous Transaction Procedure - oracle

I have the procedure below, which has the pragma autonomous_transaction clause. This procedure is called from Java code after validating some business logic. After the procedure executes, the Java side continues with its own work...
create or replace procedure UPDATE_INSTRUMENT
(
  p_AUTHSTATUS   in abc.AUTHSTATUS%type,
  p_STATUS       in abc.STATUS%type,
  p_USERID       in abc.USERID%type,
  p_LASTUPDATED  in varchar2,
  p_USERDATETIME in varchar2,
  p_TRANSACNO    in abc.TRANSACNO%type
)
is
  pragma autonomous_transaction;
begin
  update abc
     set AUTHSTATUS   = p_AUTHSTATUS,
         STATUS       = p_STATUS,
         USERID       = p_USERID,
         LASTUPDATED  = TO_DATE(p_LASTUPDATED, 'DD/MM/YYYY'),
         USERDATETIME = TO_DATE(p_USERDATETIME, 'DD/MM/YYYY')
   where TRANSACNO = p_TRANSACNO;
  commit;

  update xyz
     set AUTHSTATUS  = p_AUTHSTATUS,
         USERID      = p_USERID,
         AUTHDATE    = TRUNC(SYSDATE),
         LASTUPDATED = TRUNC(SYSDATE)
   where TRANSACNO = p_TRANSACNO;
  commit;
end UPDATE_INSTRUMENT;
Table 'xyz' has three triggers: one fires on INSERT and two are BEFORE UPDATE triggers.
P.S.: Table 'xyz' is not updated or locked anywhere before this procedure is called.
I am getting the below errors.
ORA-00060: deadlock detected while waiting for resource
ORA-06512: at "ADTTRG_xyz", line 277
ORA-04088: error during execution of trigger 'ADTTRG_xyz'
The table abc is getting updated properly, but it is failing to update table xyz.
Please explain why this deadlock is occurring.

"How deadlock is occurring."
Deadlocks occur when two sessions simultaneously attempt to change a common resource - such as a table or unique index - in such a way that neither session can proceed until the other releases its lock. This is almost always an application design flaw: deadlocks are the result of complicated flow and poorly implemented locking strategies.
There are a few clues that this is the case here.
ORA-06512: at "ADTTRG_xyz", line 277
For a start, a trigger with several hundred lines of code is a code smell. That's a lot of code to have in a trigger. It seems like there's an opportunity for contention there. Especially as...
Table 'xyz' has three triggers and ... 2 are on Before update event.
You have two BEFORE UPDATE triggers on the 'xyz' table and the event which generates the deadlock is an update of 'xyz'. This may not be a coincidence.
You must investigate these two triggers and establish which tables and indexes they need, so that you can spot whether they are in contention.
pragma autonomous_transaction;
The PL/SQL documentation says "If an autonomous transaction tries to access a resource held by the main transaction, a deadlock can occur." Autonomous transactions are another code smell. There are very few valid use cases for autonomous transactions; more often they are used to wrangle a bad data model into submission.
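As an illustration of how that documented case plays out (a hypothetical sketch, not your actual code - table name and values are placeholders): if the main transaction has already locked the row that the autonomous transaction then tries to update, the autonomous transaction can never acquire the lock, because the holder is suspended until the autonomous block completes. Oracle detects the stalemate and raises ORA-00060.

```sql
-- Hypothetical sketch of an autonomous-transaction self-deadlock.
update xyz set authstatus = 'A' where transacno = 1;  -- main txn locks the row

declare
  pragma autonomous_transaction;
begin
  -- Needs the same row lock, but the holder (the main transaction)
  -- is suspended until this block finishes: ORA-00060 results.
  update xyz set authstatus = 'B' where transacno = 1;
  commit;
end;
/
```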
So you have a lot of things to investigate. Oracle offers diagnostics to help with this.
When a deadlock happens, Oracle produces a trace file, written to an OS directory on the database server. If you don't know where that is, query the V$DIAG_INFO view. The trace file will tell you the two rowids which generated the deadlock; you can find the object id using dbms_rowid.rowid_object() and plug that into select object_name from all_objects where object_id = :oid;. Depending on how your organisation arranges things you may not have access to the trace file directory, in which case you will need to ask a DBA for help.
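Sketching those diagnostic steps (the rowid literal below is a placeholder you would copy out of your own trace file):

```sql
-- 1. Locate the trace file directory.
select name, value
from   v$diag_info
where  name in ('Diag Trace', 'Default Trace File');

-- 2. Translate a rowid from the trace file into an object id.
--    'AAAWbzAAKAAAAEDAAB' is a placeholder rowid.
select dbms_rowid.rowid_object('AAAWbzAAKAAAAEDAAB') as object_id
from   dual;

-- 3. Turn the object id into an object name.
select object_name
from   all_objects
where  object_id = :oid;
```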
Once you know which table is in deadlock, you know what you must change in your application logic. Potentially that's quite a big change, as your code has a number of red flags (long trigger bodies, two triggers on the same event, an autonomous transaction). Good luck!

Related

Trigger can't read the table after being fired by the same table

Let's say I have a table as follows:
create table employees
(
eno number(4) not null primary key,
ename varchar2(30),
zip number(5) references zipcodes,
hdate date
);
I've created a trigger using the following code block
create or replace TRIGGER COPY_LAST_ONO
AFTER INSERT ON ORDERS
FOR EACH ROW
DECLARE
  ID_FROM_ORDER_TABLE VARCHAR2(10);
BEGIN
  SELECT MAX(ORDERS.ONO) INTO ID_FROM_ORDER_TABLE FROM ORDERS;
  DBMS_OUTPUT.PUT_LINE(ID_FROM_ORDER_TABLE);
  INSERT INTO BACKUP_ONO VALUES (VALUE1, VALUE2, VALUE3, ID_FROM_ORDER_TABLE);
END;
The trigger fires after insertion and attempts to read from the table that fired it (logically, duh!), but Oracle gives me an error asking me to modify the trigger so that it doesn't read the table. Error code:
Error report -
SQL Error: ORA-04091: table TEST1.ORDERS is mutating, trigger/function may not see it
ORA-06512: at "TEST1.COPY_LAST_ONO", line 8
ORA-04088: error during execution of trigger 'TEST1.LOG_INSERT'
04091. 00000 - "table %s.%s is mutating, trigger/function may not see it"
*Cause: A trigger (or a user defined plsql function that is referenced in
this statement) attempted to look at (or modify) a table that was
in the middle of being modified by the statement which fired it.
*Action: Rewrite the trigger (or function) so it does not read that table.
What I'm trying to achieve with this trigger is to copy the last inserted ONO (which is a primary key for the ORDERS table) to a different table immediately after it is inserted. What I don't get is why Oracle is complaining - the trigger is attempting to read AFTER the insertion!
Ideas? Solution?
MANY THANKS
If you are trying to log the ONO you just inserted, use :new.ono and skip the select altogether:
INSERT INTO BACKUP_ONO VALUES( VALUE1, VALUE2,VALUE3, :new.ono);
I don't believe you can select from the table you are in the middle of inserting into as the commit has not been issued yet, hence the mutating table error.
P.S. Consider not abbreviating. Make it clear for the next developer and call it ORDER_NUMBER or at least a generally accepted abbreviation like ORDER_NBR, whatever your company's naming standards are. :-)
FYI - If you are updating, you can access :OLD.column as well, the value before the update (of course if the column is not a primary key column).
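Putting that together, the trigger reduces to something like this (a sketch; the first three VALUE* columns are placeholders carried over from the original post):

```sql
create or replace trigger copy_last_ono
after insert on orders
for each row
begin
  -- :new.ono is the ONO of the row just inserted; no SELECT against
  -- ORDERS is needed, so the mutating-table error disappears.
  insert into backup_ono values (value1, value2, value3, :new.ono);
end;
/
```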
Amplifying @Gary_W's answer:
Oracle does not allow a row trigger (one with FOR EACH ROW in it) to access the table on which the trigger is defined in any way - you can't issue a SELECT, INSERT, UPDATE, or DELETE against that table from within the trigger or anything it calls (so, no, you can't dodge around this by calling a stored procedure which does the dirty work for you - but good thinking! :-).
My understanding is that this is done to prevent what you might call a "trigger loop": the triggering condition is satisfied and the trigger's PL/SQL block is executed; that block then does something which causes the trigger to be fired again; the trigger's PL/SQL block is invoked; the trigger's code modifies another row; and so on, ad infinitum.
Generally, this should be taken as a warning that your logic is either really ugly or implemented in the wrong place. (See here for info on the evil of business logic in triggers.) If you find that you really, seriously need to do this (in years of working with Oracle and other databases I've had to do it exactly once - and may Cthulhu have mercy upon my soul :-), you can use a compound trigger, which lets you work around these issues - but seriously, if you're in a hole like this your best option is to rework the data model so you don't have to.
Best of luck.
Modify your trigger to use PRAGMA AUTONOMOUS_TRANSACTION
create or replace TRIGGER COPY_LAST_ONO
AFTER INSERT ON ORDERS
FOR EACH ROW
DECLARE
ID_FROM_ORDER_TABLE VARCHAR2(10);
PRAGMA AUTONOMOUS_TRANSACTION; -- Modification
BEGIN
.
.
.

Can a table be locked by another table?

We're encountering issues on our Oracle 11g database regarding a table lock.
We have a procedure, executed via SQL*Plus, which truncates a table, let's say table1.
We sometimes get an ORA-00054: resource busy and acquire with NOWAIT error during execution of the procedure, at the point where the table is to be truncated. We have a webapp in a Tomcat server; once it is restarted (to kill its sessions to the database), the procedure can be re-executed successfully.
table1 isn't used anywhere in the webapp's source code, not even in a SELECT, but many of table1's parent tables are.
So is it possible that an uncommitted update to one of its parent tables is causing the lock? If so, any suggestions on how I can test it?
I've checked with the DBA during times when we encounter the issue, but he can't identify the session that is blocking the procedure or the statement that caused the lock.
Yes, an update of a parent table will take a lock on the child table. Below is a test case demonstrating that it is possible.
Finding and tracing a specific intermittent locking issue can be painful. Even if you can't trace down the specific conditions it would be a good idea to modify any code to avoid concurrent DML and DDL. Not only can it cause locking issues, it can also break SELECT statements.
If removing the concurrency is out of the question, you may at least want to enable DDL_LOCK_TIMEOUT so that the truncate statement will wait for the lock instead of instantly failing: alter session set ddl_lock_timeout = 100000;
--Create parent/child tables, with or without an indexed foreign key.
create table parent_table(a number primary key);
insert into parent_table values(1);
insert into parent_table values(2);
create table child_table(a number references parent_table(a));
insert into child_table values(1);
commit;
--Session 1: Update parent table.
begin
loop
update parent_table set a = 2 where a = 2;
commit;
end loop;
end;
/
--Session 2: Truncate child table. Eventually it will throw this error:
--ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
begin
loop
execute immediate 'truncate table child_table';
end loop;
end;
/
any suggestions on how I can test it?
You can check the blocking sessions when you get the ORA-00054: resource busy and acquire with NOWAIT error.
Blocking sessions occur when one session holds an exclusive lock on an object and doesn't release it before another session wants to update the same data. This blocks the second session until the first issues a COMMIT or ROLLBACK.
SELECT
s.blocking_session,
s.sid,
s.serial#,
s.seconds_in_wait
FROM
v$session s
WHERE
blocking_session IS NOT NULL;
For example, see my similar answer here and here.

Oracle Stored Procedure - does raise_application_error disable the procedure if not trapped?

I have a stored procedure in Oracle doing this:
SELECT Count(*) INTO v_count_of_rows_bad FROM SCHEMA.TABLE WHERE
KEY1 = v_key1
AND KEY2 = v_key2
AND STATUS IN ('0', 'P', 'N');
IF v_count_of_rows_bad > 0 THEN
raise_application_error( -20001, 'Status is not ready' );
END IF;
The purpose is to prevent the SP from updating the record if the status is one of those three. There's a batch process that will run against the rows and update the status to Y when complete. Only after this should the SP be allowed to update the row. (I know there's probably a better way to do this, but I'm stuck with some legacy stuff, and another team controls the batch process).
I'm not catching the exception in the stored proc, but rather using it to transfer the text "Status is not ready" to my .NET webservice (written in C#) which is calling the procedure.
The problem is, once the error is encountered, it seems to get stuck. Even if I manually update the status to Y in the table, and see that it really did update, when I run my .NET service again it gives me the error again. But if I rerun the SQL that does the create or replace of the stored procedure and then run the webservice, it's no longer "stuck." Or merely recompiling the SP fixes it. Does anyone know what this could be?
No, raising an exception in a stored procedure doesn't cause Oracle to "get stuck". But the exception will be returned to the caller, and the caller will need to handle the exception appropriately, for example, ROLLBACK the current transaction, and close the connection (or return it to the connection pool.)
It's not clear why re-compiling the stored procedure would cause the application to become "unstuck". The information provided is insufficient to definitively explain the behavior you are observing. (As Justin points out in his comments, there are several possibilities: transaction isolation level, uncommitted transactions, implicit commits, cached resultsets, et al.)
You don't have to raise an exception in this case unless you really want to; you can just code the UPDATE so it doesn't do anything if the status is one of the undesirable values:
UPDATE A_SCHEMA.SOME_TABLE
SET SOME_COL = vVALUE,
SOME_OTHER_COL = vOTHER_VALUE,
ETC = vBLAH
WHERE KEY1 = v_key1 AND
KEY2 = v_key2 AND
STATUS NOT IN ('0', 'P', 'N');
If you need to know if any rows were updated you can use %ROWCOUNT, as in
IF SQL%ROWCOUNT > 0 THEN
DBMS_OUTPUT.PUT_LINE(SQL%ROWCOUNT || ' rows updated');
ELSE
DBMS_OUTPUT.PUT_LINE('No rows updated! Run! Run for your lives!!');
END IF;
This is almost certainly faster than doing a SELECT COUNT(*)... (which can be surprisingly slow) and then turning around and doing the UPDATE.
Share and enjoy.

Keep a record of all deletes with a trigger, including rollbacks

Let's say I have two tables, EMPLOYEE and EMP_BAK. I need to back up all the data deleted from EMPLOYEE, even rows whose deletes are rolled back.
My trigger:
CREATE OR REPLACE TRIGGER emp_del_bak_trg
before delete ON employee
FOR EACH row
DECLARE
oldname department.department_name%type;
newname department.department_name%type;
BEGIN
INSERT INTO emp_bak
VALUES (:OLD.employee_id, :OLD.employee_name, :OLD.job
,:OLD.hire_date,:OLD.department_id, sysdate);
--commit;
end;
Now, if I rollback, the backup data is deleted too; if I uncomment the commit, I get an error on delete. The idea is to keep the record and keep track of the system updates.
Any ideas how to get around this?
There is almost never a good reason to commit inside a trigger.
Now that that's out of the way: your situation seems to be extra special - you need to track deletes even if the transaction is rolled back. This is a fairly unusual requirement, but I'm going to assume you have a very good reason for it.
If you really, really want to do this, you need an autonomous transaction, which enables an independent transaction to be committed within another. As a trigger body is a PL/SQL block, you can do this in a trigger.
The Oracle documentation has a separate section on autonomous transactions in triggers, with plenty of examples. Wherever you need one, the syntax is as follows, and it is always placed in the DECLARE section:
PRAGMA AUTONOMOUS_TRANSACTION;
Your trigger, therefore, would look as follows:
create or replace trigger emp_del_bak_trg
before delete on employee
for each row
declare
PRAGMA AUTONOMOUS_TRANSACTION;
begin
insert into emp_bak
values ( :old.employee_id, :old.employee_name, :old.job
, :old.hire_date, :old.department_id, sysdate);
commit;
end;
/
As a little note I always case this differently so it's obvious you're doing something that you shouldn't be. Autonomous transactions are dangerous and should be used with extreme care.

PL/SQL Oracle Error Handling

I've created a trigger that only allows a user to have 10 orders placed at a time. So now, when the customer tries to place order number 11, the Oracle database throws back an error. Well, three errors.
ORA-20000: You currently have 10 or more orders processing.
ORA-06512: at "C3283535.TRG_ORDER_LIMIT", line 12
ORA-04088: error during execution of trigger
'C3283535.TRG_ORDER_LIMIT'
The top error is one I've created using:
raise_application_error(-20000, 'You currently have 10 or more orders
processing.');
I just wondered, after searching and trying many ways, how to change the error messages for the other two errors, or even not show them to the user at all?
Here is the code I've used
create or replace trigger trg_order_limit
before insert on placed_order for each row
declare
v_count number;
begin
-- Get current order count
select count(order_id)
into v_count
from placed_order
where fk1_customer_id = :new.fk1_customer_id;
-- Raise an exception if there are too many
if v_count >= 10 then
raise_application_error(-20000, 'You currently have 10 or more orders processing.');
end if;
end;
Thanks a lot
Richard
Exception propagation goes from the internal to the external block, as opposed to variable scope, which goes from the external to the internal block. For more on this, read McLaughlin's "Programming with PL/SQL", Chapter 5.
What you are getting here is an exception stack - exceptions raised from the innermost blocks to the outermost blocks.
When you raise an exception from a trigger, your raise_application_error statement returns an error.
It then propagates to the trigger block, which reports ORA-06512: at "C3283535.TRG_ORDER_LIMIT", line 12. This is because the trigger treats the raised exception as an error and halts execution.
The error then propagates to the session, which raises ORA-04088: error during execution of trigger 'C3283535.TRG_ORDER_LIMIT'. This error tells us where - in which part of the program - the error was raised.
If you are using a front-end program like Java Server Pages or PHP, you will catch the raised error - 20000 - first, so you can display it to your end user.
EDIT :
About the first error, ORA-20000: you can change it in the RAISE_APPLICATION_ERROR statement itself.
If you want to handle the ORA-06512, you can use Uday Shankar's answer which is helpful in taking care of this error and showing an appropriate error message.
But you will still get the last ORA-04088. If I were in your place I wouldn't worry: after getting the ORA-20000 I would raise an application error at the front end itself, hiding all the other details from the user.
In fact, this is the nature of Oracle's error stack: all the errors from the innermost to the outermost block are reported, which often helps us identify the exact source of an error.
In the trigger you can add the exception handling part as shown below:
EXCEPTION
WHEN OTHERS THEN
raise_application_error(-20000, 'You currently have 10 or more orders processing.');
I see that this is quite an old post, but I think readers should be aware that this does not really enforce the business rule (max 10 orders). If it is just "some" number to avoid excessive amounts, and you don't care if people occasionally end up with 12 orders, this may be fine. But if not, consider a scenario where a customer already has 9 orders and then orders for that customer are inserted from two different sessions/transactions simultaneously: you end up with 11 orders without ever detecting the overflow. So you cannot actually rely on this trigger.
Besides that, you might need the trigger to fire on UPDATE too, if fk1_customer_id can be updated (I have seen implementations where a NULL is first put into the FK column and later updated to the actual value). Consider whether this scenario is realistic for you.
There is a fundamental flaw in the trigger: you are inside a transaction, and inside a statement that is currently being executed but is not complete yet. So what if the insert is not a single-row insert but something like
insert into placed_order (select ... from waiting_orders ...)
- what do you expect the trigger to see?
This kind of business rule is not easy to enforce. But if you choose to do it in a trigger, you had better do it in an AFTER statement trigger (thus, not in a BEFORE row trigger). An after-statement trigger still will not see the results of other uncommitted transactions, but at least the current statement is in a defined state.
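A sketch of that after-statement variant (table and column names are taken from the question; note it still cannot see other sessions' uncommitted rows):

```sql
create or replace trigger trg_order_limit
after insert or update on placed_order
declare
  v_overlimit number;
begin
  -- The triggering statement is complete at this point, so reading
  -- PLACED_ORDER no longer raises a mutating-table error.
  select count(*)
  into   v_overlimit
  from   (select fk1_customer_id
          from   placed_order
          group  by fk1_customer_id
          having count(*) > 10);

  if v_overlimit > 0 then
    raise_application_error(-20000, 'You currently have 10 or more orders processing.');
  end if;
end;
/
```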
In fact the business rule CAN fundamentally only be enforced at commit time; but there is not a thing like an ON-COMMIT trigger in the Oracle database.
What you can do is denormalise the count of records into the customers table (add an ORDER_COUNT column) and place a deferred constraint (ORDER_COUNT <= 10) on that table.
But then you are still relying on correctly maintaining this field throughout your code.
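For example (a sketch; the CUSTOMERS table is an assumption, and your own code must keep ORDER_COUNT in step with every insert/delete on PLACED_ORDER):

```sql
-- Denormalise the order count onto the customer row...
alter table customers add (order_count number default 0 not null);

-- ...and enforce the limit with a deferred check constraint,
-- validated at commit time rather than per statement.
alter table customers add constraint chk_order_count
  check (order_count <= 10) deferrable initially deferred;
```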
A fully reliable, if somewhat cumbersome, alternative is to create a materialized view (something like select fk1_customer_id, count(*) order_count from placed_order group by fk1_customer_id) with FAST REFRESH ON COMMIT on the placed_order table, and create a check constraint order_count <= 10 on the materialized view.
This is about the only way to reliably enforce this type of constraints, without having to think of all possible situations like concurrent sessions, updates etc.
Note however that FAST REFRESH ON COMMIT will slow down your commits, so this solution is not usable for high volumes (sigh... why does Oracle not just provide an ON COMMIT trigger...).
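A sketch of that materialized-view approach (a fast-refreshable MV needs a materialized view log on the base table; the MV and constraint names are made up for illustration):

```sql
-- Fast refresh on commit requires a materialized view log on the base table.
create materialized view log on placed_order
  with rowid (fk1_customer_id) including new values;

create materialized view mv_orders_per_customer
  refresh fast on commit
  as
  select fk1_customer_id, count(*) order_count
  from   placed_order
  group  by fk1_customer_id;

-- The constraint is checked when the MV refreshes, i.e. at commit time,
-- so concurrent transactions cannot sneak past the limit.
alter table mv_orders_per_customer
  add constraint chk_max_10_orders check (order_count <= 10);
```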