ORA-00001: unique constraint violated error in Oracle JDBC

I have these two JDBC statements (Oracle 10g, isolation level READ COMMITTED):
delete from emp where emp_id = 1;
insert into emp (emp_id, address, .....) values (1, 'newyork', .....);
(emp_id, address) is the primary key.
Unfortunately, in a multithreaded environment I get ORA-00001: unique constraint violated while inserting the record. When I run in a single thread I don't see any issue. After investigation I found this sequence:
1. The first session deletes and then inserts a record, but has not yet committed.
2. The second session deletes it.
3. The first session now commits.
4. The second session now tries to insert it and throws the error.
Let me know if I am missing anything. How can I solve this problem in Oracle? I don't like the single-threaded solution, and I don't like the retry solution either. Is there any clean solution?

Make sure you have enabled autocommit in your program, then check whether the row exists before inserting it, to prevent ORA-00001. Here's the PL/SQL block:
declare
  v_rows_count number;
begin
  select count(*) into v_rows_count from emp where emp_id = 1;
  if v_rows_count = 0 then
    -- Insert the row
    null;
  else
    -- Do not insert the row
    null;
  end if;
end;
/
Don't forget to commit the change later.
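If you want the existence check and the insert in a single atomic statement instead, the same logic can be expressed with MERGE (a sketch, assuming emp_id alone identifies the row and abbreviating the column list to the ones named in the question):
merge into emp e
using (select 1 as emp_id, 'newyork' as address from dual) src
on (e.emp_id = src.emp_id)
when not matched then
  insert (emp_id, address) values (src.emp_id, src.address);
Note that under concurrent sessions a unique-key collision is still possible, so this narrows the race rather than eliminating it.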

Related

locks when updating a table with a trigger

I have a table (T1) with an AFTER UPDATE trigger. This trigger looks up some values and then updates another table (T2):
CREATE OR REPLACE TRIGGER t1_AIR AFTER UPDATE ON t1 FOR EACH ROW
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
  v_dF1 DATE;
  v_dF2 DATE;
  v_nDays NUMBER;
  v_cReg VARCHAR2(50);
BEGIN
  IF :NEW.BN_BE_ESTADO not in ('PEN', 'ERR', 'BAJ') THEN
    v_nDays := F_GET_PARAM(NULL, NULL, 'X_DAYS');
    PCK_AUX.PR_GET_F1(:NEW.F_A, v_dF1, v_cReg);
    PCK_AUX.PR_GET_F2(:NEW.F_A, v_dF2);
    UPDATE T2
       SET F_F1 = TRUNC(v_dF1),
           F_F2 = TRUNC(v_dF2),
           F_F_END = PCK_AUX.FU_GET_F_END(v_dF2 + 1, v_nDays),
           F_N_REG = v_cReg
     WHERE F_ID = :NEW.F_B;
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    NULL;
END;
But when I update T1, locks are generated on both tables (t1 and t2)... does anyone know why?
Thanks to all
When you update a table, a row-level (TX) lock and table (TM) locks in row share (RS) and row exclusive (RX) mode are acquired.
The row-level lock is there so other sessions can't modify the row(s) being updated. The table locks in their different modes are there to protect the table structure from modification while your DML statement (an update, in your case) is in progress.
So yes, in your situation the rows being updated will be locked in both table 1 and table 2, plus table locks will be held in row share and row exclusive mode.
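You can watch those locks appear while the update is running by querying v$lock from another session (a sketch; TM rows are the table locks, TX is the transaction lock, and LMODE 3 means row exclusive):
select sid, type, id1, lmode, request
  from v$lock
 where type in ('TM', 'TX');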
A bit about your trigger...
Why is the autonomous transaction pragma there? Unless you are using the trigger for something like logging - a situation where you have to commit whether or not the main transaction succeeds - you really do not need this pragma. As William Robertson quite correctly pointed out, if you use an autonomous transaction you have to commit that autonomous transaction, otherwise ORA-06519 will be raised.
Also reconsider the use of the WHEN OTHERS THEN NULL statement - it would be better not to hide exceptions but to re-raise anything that might be generated.
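For example, a minimal version of that handler which doesn't swallow errors (a sketch):
EXCEPTION
  WHEN OTHERS THEN
    -- log the error here if you need to, then let it propagate
    RAISE;
END;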

Can a table be locked by another table?

We're encountering issues on our Oracle 11g database regarding table locks.
We have a procedure, executed via SQL*Plus, which truncates a table, let's say table1.
We sometimes get an ORA-00054: resource busy and acquire with NOWAIT error during execution of the procedure, at the point where the table is to be truncated. We have a webapp running in a Tomcat server; when it is restarted (to kill its sessions to the database), the procedure can be re-executed successfully.
table1 isn't used, not even in a select, in the source code of the webapp, but many parent tables of table1 are.
So is it possible that an uncommitted update to one of its parent tables is causing the lock? If so, any suggestions on how I can test it?
I've checked with the DBA during times when we encounter the issue, but he can't find the session that is blocking the procedure or the statement that caused the lock.
Yes, an update of a parent table will get a lock on the child table. Below is a test case demonstrating it is possible.
Finding and tracing a specific, intermittent locking issue can be painful. Even if you can't track down the specific conditions, it would be a good idea to modify the code to avoid concurrent DML and DDL. Not only can the combination cause locking issues, it can also break SELECT statements.
If removing the concurrency is out of the question, you may at least want to enable DDL_LOCK_TIMEOUT so that the truncate statement will wait for the lock instead of instantly failing: alter session set ddl_lock_timeout = 100000;
--Create parent/child tables, with or without an indexed foreign key.
create table parent_table(a number primary key);
insert into parent_table values(1);
insert into parent_table values(2);
create table child_table(a number references parent_table(a));
insert into child_table values(1);
commit;
--Session 1: Update parent table.
begin
loop
update parent_table set a = 2 where a = 2;
commit;
end loop;
end;
/
--Session 2: Truncate the child table. Eventually it will throw this error:
--ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
begin
loop
execute immediate 'truncate table child_table';
end loop;
end;
/
any suggestions on how I can test it?
You can check for blocking sessions when you get the ORA-00054: resource busy and acquire with NOWAIT error.
Blocking sessions occur when one session holds an exclusive lock on an object and doesn't release it before another session wants to update the same data. The second session is blocked until the first one does a COMMIT or ROLLBACK.
SELECT s.blocking_session,
       s.sid,
       s.serial#,
       s.seconds_in_wait
  FROM v$session s
 WHERE s.blocking_session IS NOT NULL;
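If you also want to know who the blocker is and what it last executed, you can join v$session to itself (a sketch; prev_sql_id is only the blocker's most recent statement, which is not necessarily the one that took the lock):
select ws.sid as waiting_sid,
       bs.sid as blocking_sid,
       bs.username,
       sq.sql_text
  from v$session ws
  join v$session bs on bs.sid = ws.blocking_session
  left join v$sqlarea sq on sq.sql_id = bs.prev_sql_id
 where ws.blocking_session is not null;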
For example, see my similar answer here and here.

"ORA-14450: attempt to access a transactional temp table already in use" in a compound trigger

I have a table which can hold many records for one account: different amounts.
ACCOUNTID | AMOUNT
id1 | 1
id1 | 2
id2 | 3
id2 | 4
Every time a record in this table is inserted, updated, or deleted, we need to evaluate an overall amount in order to decide whether or not to trigger an event (by inserting data into another table). The amount is computed from the sum of the records (per account) present in this table.
The computation of the amount should use the new values of the records, but we also need the old values in order to check some conditions (e.g. the old value was X and the new value is Y: if X <= threshold and Y > threshold, then trigger the event by inserting a record into another table).
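For example, with a hypothetical threshold of 100 (all names here are made up for illustration), that check would look like:
if v_old_sum <= 100 and v_new_sum > 100 then
  insert into amount_events (accountid, event_date)
  values (v_accountid, sysdate);
end if;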
So in order to compute the amount and trigger the event, we created a trigger on this table, something like this:
CREATE OR REPLACE TRIGGER <trigger_name>
AFTER INSERT OR UPDATE OR DELETE OF AMOUNT ON <table_name>
FOR EACH ROW
DECLARE
  varSumAmounts NUMBER;
  varAmount NUMBER;
BEGIN
  -- (1)
  SELECT SUM(AMOUNT) INTO varSumAmounts FROM <table_name> WHERE accountid = :NEW.accountid;
  -- (2)
  varAmount := stored_procedure(varSumAmounts);
END <trigger_name>;
The issue is that statement (1) throws the following error: ORA-04091: table is mutating, trigger/function may not see it.
We tried the following (selecting all records with a rowid different from the current one), but without success (same error):
SELECT SUM(AMOUNT)
  INTO varSumAmounts
  FROM <table_name>
 WHERE accountId = :NEW.accountid
   AND rowid <> :NEW.rowid;
in order to compute the amount as the sum of the amounts of all rows besides the current row, plus the amount of the current row (which we have in the context of the trigger).
We searched for other solutions and found some, but I don't know which of them is better and what the downside of each one is (although they are somewhat similar):
Use a compound trigger:
http://www.oracle-base.com/articles/9i/mutating-table-exceptions.php
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551198119097816936
To avoid the 'table is mutating' error, based on solutions 1 and 2 I used a combination of compound triggers and global temporary tables.
Now we have a compound trigger which uses some global temporary tables to store the relevant data from the :OLD and :NEW pseudo-records. Basically we do the following:
CREATE OR REPLACE TRIGGER trigger-name
FOR trigger-action ON table-name
COMPOUND TRIGGER
  -------------------
  BEFORE STATEMENT IS
  BEGIN
    -- Delete data from the global temporary table (GTT) for which the source is this trigger
    -- (we use the same global temporary tables for multiple triggers).
  END BEFORE STATEMENT;
  -------------------
  AFTER EACH ROW IS
  BEGIN
    -- Here we have access to the :OLD and :NEW objects.
    -- :NEW and :OLD are defined only inside row-level sections.
    -- Save the relevant data from :NEW and :OLD into the GTT to use it later.
  END AFTER EACH ROW;
  --------------------
  AFTER STATEMENT IS
  BEGIN
    -- In this block DML operations can be made safely on table-name (the same table
    -- on which the trigger is created).
    -- The 'table is mutating' error no longer appears, because this block is not row-specific.
    -- But we can't access :OLD and :NEW here; this is why we saved them to the GTT in AFTER EACH ROW.
    -- Because we saved the :OLD and :NEW data earlier, we can now continue with the business logic:
    -- if (oldAmount <= threshold and newAmount > threshold) then
    --   trigger the event by inserting a record into another table
  END AFTER STATEMENT;
END trigger-name;
/
The global temporary tables used are created with the option ON COMMIT DELETE ROWS; this way I make sure that the data in these tables is cleaned up at the end of the transaction.
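For reference, the kind of GTT used here is created along these lines (a sketch; the table and column names are made up for illustration):
create global temporary table trg_amounts_gtt (
  accountid  varchar2(30),
  old_amount number,
  new_amount number
) on commit delete rows;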
Yet this error occurred: ORA-14450: attempt to access a transactional temp table already in use.
The problem is that the application uses distributed transactions, and the Oracle documentation mentions that:
"A variety of internal errors can be reported when using Global Temporary Tables (GTTs) in conjunction with Distributed or XA transactions.
...
Temporary tables are not supported in any distributed, and therefore XA, coordinated transaction.
The safest option is to not use temporary tables within distributed or XA transactions as their use in this context is not officially supported.
...
A global temporary table can be safely used if there is only single branch transaction at the database using it, but if there are loopback database links or XA transactions involving multiple branches, then problems can occur including block corruption as per Bug 5344322.
"
It's worth mentioning that I can't avoid XA transactions or avoid doing DML on the same table that is the subject of the trigger (fixing the data model is not a feasible solution). Instead of the global temporary table I've tried using a trigger variable - a collection (table of objects) - but I am not sure about this approach. Is it safe with regard to distributed transactions?
Which other solutions would be suitable in this case to fix either the initial issue, ORA-04091: table name is mutating, trigger/function may not see it, or the second one, ORA-14450: attempt to access a transactional temp table already in use?
You should carefully check that your code doesn't use autonomous transactions to access the temporary table data:
SQL> create global temporary table t (x int) on commit delete rows
  2  /
SQL> insert into t values(1)
  2  /
SQL> declare
  2    pragma autonomous_transaction;
  3  begin
  4    insert into t values(1);
  5    commit;
  6  end;
  7  /
declare
*
ERROR at line 1:
ORA-14450: attempt to access a transactional temp table already in use
ORA-06512: at line 4
If you do a DELETE FROM <temp-table-name> in BEFORE STATEMENT and AFTER STATEMENT, it should not matter whether your GTT is defined with ON COMMIT PRESERVE ROWS or ON COMMIT DELETE ROWS.
Alternatively, you can define a RECORD/TABLE variable in your trigger. You can initialize this variable in the BEFORE STATEMENT block and loop over it in the AFTER STATEMENT block.
It would be something like this:
CREATE OR REPLACE TRIGGER TRIGGER-NAME
FOR TRIGGER-action ON TABLE-NAME
COMPOUND TRIGGER
  TYPE GTT_RECORD_TYPE IS RECORD (ID NUMBER, PRICE NUMBER, AFFECTED_ROW ROWID);
  TYPE GTT_TABLE_TYPE IS TABLE OF GTT_RECORD_TYPE;
  GTT_TABLE GTT_TABLE_TYPE;
  -------------------
  BEFORE STATEMENT IS
  BEGIN
    GTT_TABLE := GTT_TABLE_TYPE(); -- initialize the table variable
  END BEFORE STATEMENT;
  -------------------
  AFTER EACH ROW IS
  BEGIN
    GTT_TABLE.EXTEND;
    -- records have no constructor, so assign the fields individually
    GTT_TABLE(GTT_TABLE.LAST).ID := :OLD.ID;
    GTT_TABLE(GTT_TABLE.LAST).PRICE := :OLD.PRICE;
    GTT_TABLE(GTT_TABLE.LAST).AFFECTED_ROW := :OLD.ROWID;
  END AFTER EACH ROW;
  --------------------
  AFTER STATEMENT IS
  BEGIN
    FOR i IN 1 .. GTT_TABLE.COUNT LOOP
      -- do something with the values
      NULL;
    END LOOP;
  END AFTER STATEMENT;
END TRIGGER-NAME;
/

how to make a trigger like primary key constraint?

I need to define a trigger to apply to a column of a table. The trigger should prevent the user from inputting duplicate and null values. Or, you could say, I need to know the logic behind a primary key.
Just because you seem intent on seeing this fail, and not to take anything away from APC's points, this appears to work at first glance as long as it's a before trigger:
create table t42 (id number);
create trigger trig42
before insert or update on t42
for each row
declare
  c number;
begin
  if :new.id is null then
    raise_application_error(-20001, 'ID is null');
  end if;
  select count(*) into c from t42 where id = :new.id;
  if c > 0 then
    raise_application_error(-20002, 'ID is not unique');
  end if;
end;
/
It compiles and if you insert data you get the behaviour you seem to want:
insert into t42 values (1);
1 rows inserted.
insert into t42 values (1);
Error starting at line 20 in command:
insert into t42 values (1)
Error report:
SQL Error: ORA-20002: ID is not unique
ORA-06512: at "STACKOVERFLOW.TRIG42", line 9
ORA-04088: error during execution of trigger 'STACKOVERFLOW.TRIG42'
insert into t42 values (null);
Error starting at line 22 in command:
insert into t42 values (null)
Error report:
SQL Error: ORA-20001: ID is null
ORA-06512: at "STACKOVERFLOW.TRIG42", line 5
ORA-04088: error during execution of trigger 'STACKOVERFLOW.TRIG42'
select * from t42;
ID
----------
1
Which seems to do what you want. But not if you have more than one session. I haven't committed in this session; in another session I can do:
insert into t42 values (1);
1 row created.
select * from t42;
ID
----------
1
1 row selected.
Hmm, that's strange. Well, maybe it's deferred... let's commit them both:
commit;
select * from t42;
ID
----------
1
1
2 rows selected.
Oops. One session can't see another session's uncommitted data, so this will never work.
Also, the mutating table problem exhibits itself when we insert multiple rows in a single statement:
SQL> insert into t42 select level+1 from dual connect by level <= 5;
insert into t42 select level+1 from dual connect by level <= 5
*
ERROR at line 1:
ORA-04091: table STACKOVERFLOW.T42 is mutating, trigger/function may not see it
ORA-06512: at "STACKOVERFLOW.TRIG42", line 7
ORA-04088: error during execution of trigger 'STACKOVERFLOW.TRIG42'
SQL>
Double oops.
Even with an after trigger and a package to work around the mutating table issue, you'd still have this problem (I think), unless you locked the whole table for every insert or update. As APC said, the constraint is implemented deep in the bowels of the database, not at this level.
is it not possible to define a trigger, which checks the value before
insertion that it should not be null and unique as well?
Not when you have more than one session, no. And even within one session, unless you have an index on the column, the performance won't scale, as the count(*) will get progressively slower. And if you do have an index, well, why not make it a unique index in the first place?
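In other words, for the t42 example above the declarative version is a one-liner (a sketch; the index name is arbitrary):
create unique index t42_id_uq on t42 (id);
The index then does the cross-session serialization that the trigger cannot.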
Finally, from the trigger design guidelines:
Do not create triggers that duplicate database features.
For example, do not create a trigger to reject invalid data if you can
do the same with constraints (see "How Triggers and Constraints
Differ").
" i want to learn, how primary key is made(it is a trigger of course)"
There is no "of course" about it. A constraint is not a trigger. It is an internal process which uses an index and a lot of low level activity to enforce relational constraints in a reliable and efficient manner.
If you want to learn the rules are quite straightforward: not null, uniqueness, serialization. So just try to implement a primary key in triggers. You'll find you can't (spoiler alert!) because of the "mutating table" problem. And if you don't understand what that means, well there's a good topic to read about.
there is a question "is it not possible to define a trigger, which
checks the value before insertion that it should not be null and
unique as well? "
The answer to that question is: no. Well, you could code a trigger-based implementation, but like other "mutating table" workarounds it would require a package and AFTER statement triggers (so, technically, the check would not happen before insertion).
But seriously, what would be the point? You won't learn anything about how primary keys actually work. And mutating tables almost always point to a poor data model, and that would certainly be the case here.
A primary key is not a trigger. It is a key: it identifies the whole row, which is why it must be unique (and implicitly not null). It is "primary" because it is the candidate key that is most appropriate - by your decision - to be the main reference key for your table. You can add it with ALTER TABLE your_table_name ADD CONSTRAINT PK_your_table_name PRIMARY KEY (your_key_column).
If you do not want to add a primary key like that (which is a bad idea), but want to add a unique index to the table: CREATE UNIQUE INDEX UQ_IX_your_table_your_column ON your_table_name (unique_column_name).
The NOT NULL constraint should be put on the column itself.
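That is done directly on the column (a sketch using the placeholder names above):
alter table your_table_name modify (unique_column_name not null);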

ORA-04091: table [blah] is mutating, trigger/function may not see it

I recently started working on a large complex application, and I've just been assigned a bug due to this error:
ORA-04091: table SCMA.TBL1 is mutating, trigger/function may not see it
ORA-06512: at "SCMA.TRG_T1_TBL1_COL1", line 4
ORA-04088: error during execution of trigger 'SCMA.TRG_T1_TBL1_COL1'
The trigger in question looks like:
create or replace TRIGGER TRG_T1_TBL1_COL1
BEFORE INSERT OR UPDATE OF t1_appnt_evnt_id ON TBL1
FOR EACH ROW
WHEN (NEW.t1_prnt_t1_pk is not null)
DECLARE
  v_reassign_count number(20);
BEGIN
  select count(t1_pk) INTO v_reassign_count from TBL1
   where t1_appnt_evnt_id = :new.t1_appnt_evnt_id and t1_prnt_t1_pk is not null;
  IF (v_reassign_count > 0) THEN
    RAISE_APPLICATION_ERROR(-20013, 'Multiple reassignments not allowed');
  END IF;
END;
The table has a primary key "t1_pk", an "appointment event id"
t1_appnt_evnt_id and another column "t1_prnt_t1_pk" which may or may
not contain another row's t1_pk.
It appears the trigger is trying to make sure that, if this row refers to another row, no other row with the same t1_appnt_evnt_id also has such a referral.
The comment on the bug report from the DBA says "remove the trigger, and perform the check in the code", but unfortunately they have a proprietary code-generation framework layered on top of Hibernate, so I can't even figure out where the check would actually get written. So I'm hoping there is a way to make this trigger work. Is there?
I think I disagree with your description of what the trigger is trying to do. It looks to me like it is meant to enforce this business rule: for a given value of t1_appnt_event, only one row can have a non-NULL value of t1_prnt_t1_pk at a time. (It doesn't matter whether they have the same value in the second column or not.)
Interestingly, it is defined for UPDATE OF t1_appnt_event but not for the other column, so I think someone could break the rule by updating the second column, unless there is a separate trigger for that column.
There might be a way to create a function-based index that enforces this rule, so you could get rid of the trigger entirely. I came up with one way, but it requires some assumptions:
1. The table has a numeric primary key
2. The primary key and t1_prnt_t1_pk are both always positive numbers
If these assumptions are true, you could create a function like this:
dev> create or replace function f( a number, b number ) return number deterministic as
  2  begin
  3    if a is null then return 0-b; else return a; end if;
  4  end;
  5  /
and an index like this:
CREATE UNIQUE INDEX my_index ON my_table
( t1_appnt_event, f( t1_prnt_t1_pk, primary_key_column) );
So rows where t1_prnt_t1_pk is NULL would appear in the index with the inverse of the primary key as the second value, and so would never conflict with each other. Rows where it is not NULL would use the actual (positive) value of the column. The only way you could get a constraint violation would be if two rows had the same non-NULL values in both columns.
This is perhaps overly "clever", but it might help you get around your problem.
Update from Paul Tomblin: I went with the update to the original idea that igor put in the comments:
CREATE UNIQUE INDEX cappec_ccip_uniq_idx
ON tbl1 (t1_appnt_event,
CASE WHEN t1_prnt_t1_pk IS NOT NULL THEN 1 ELSE t1_pk END);
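As a quick sanity check of that index (a sketch using the answer's column names, with made-up values):
insert into tbl1 (t1_pk, t1_appnt_event, t1_prnt_t1_pk) values (1, 123, 456);
insert into tbl1 (t1_pk, t1_appnt_event, t1_prnt_t1_pk) values (2, 123, 789);
-- the second insert fails with ORA-00001: both rows index as (123, 1)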
I agree with Dave that the desired result can and probably should be achieved using built-in constraints such as unique indexes (or unique constraints).
If you really need to get around the mutating table error, the usual way is to create a package containing a package-scoped variable: a table of something that can be used to identify the changed rows (I think ROWID is possible; otherwise you have to use the PK - I don't use Oracle currently, so I can't test it). The FOR EACH ROW trigger fills in this variable with all the rows modified by the statement, and an AFTER statement trigger then reads the rows and validates them.
Something like (syntax is probably wrong, I haven't worked with Oracle for a few years)
CREATE OR REPLACE PACKAGE trigger_pkg AS
  PROCEDURE before_stmt_trigger;
  PROCEDURE for_each_row_trigger(p_rowid IN ROWID);
  PROCEDURE after_stmt_trigger;
END trigger_pkg;
/
CREATE OR REPLACE PACKAGE BODY trigger_pkg AS
  TYPE rowid_tbl IS TABLE OF ROWID;
  modified_rows rowid_tbl := rowid_tbl();

  PROCEDURE before_stmt_trigger IS
  BEGIN
    modified_rows := rowid_tbl();
  END before_stmt_trigger;

  PROCEDURE for_each_row_trigger(p_rowid IN ROWID) IS
  BEGIN
    modified_rows.EXTEND;
    modified_rows(modified_rows.COUNT) := p_rowid;
  END for_each_row_trigger;

  PROCEDURE after_stmt_trigger IS
  BEGIN
    FOR i IN 1 .. modified_rows.COUNT LOOP
      SELECT ... INTO ... FROM the_table WHERE rowid = modified_rows(i);
      -- do whatever you want to
    END LOOP;
  END after_stmt_trigger;
END trigger_pkg;
/
CREATE OR REPLACE TRIGGER before_stmt_trigger
BEFORE INSERT OR UPDATE ON mytable
BEGIN
  trigger_pkg.before_stmt_trigger;
END;
/
CREATE OR REPLACE TRIGGER after_stmt_trigger
AFTER INSERT OR UPDATE ON mytable
BEGIN
  trigger_pkg.after_stmt_trigger;
END;
/
CREATE OR REPLACE TRIGGER for_each_row_trigger
BEFORE INSERT OR UPDATE ON mytable
FOR EACH ROW
WHEN (new.mycolumn IS NOT NULL)
BEGIN
  trigger_pkg.for_each_row_trigger(:new.rowid);
END;
/
With any trigger-based (or application-code-based) solution you need to put in locking to prevent data corruption in a multi-user environment. Even if your trigger worked, or was rewritten to avoid the mutating table issue, it would not prevent two users from simultaneously updating t1_appnt_evnt_id to the same value on rows where t1_prnt_t1_pk is not null. Assume there are currently no rows where t1_appnt_evnt_id = 123 and t1_prnt_t1_pk is not null:
Session 1> update tbl1
set t1_appnt_evnt_id=123
where t1_prnt_t1_pk =456;
/* OK, trigger sees count of 0 */
Session 2> update tbl1
set t1_appnt_evnt_id=123
where t1_prnt_t1_pk =789;
/* OK, trigger sees count of 0 because
session 1 hasn't committed yet */
Session 1> commit;
Session 2> commit;
You now have a corrupted database!
The way to avoid this (in trigger or application code) would be to lock
the parent row in the table referenced by t1_appnt_evnt_id=123 before performing the check:
select appe_id
into v_app_id
from parent_table
where appe_id = :new.t1_appnt_evnt_id
for update;
Now session 2's trigger must wait for session 1 to commit or rollback before it performs the check.
It would be much simpler and safer to implement Dave Costa's index!
Finally, I'm glad no one has suggested adding PRAGMA AUTONOMOUS_TRANSACTION to your trigger: this is often suggested on forums, and it "works" inasmuch as the mutating table issue goes away - but it makes the data integrity problem even worse! So just don't...
I had a similar error with Hibernate, and flushing the session by using
getHibernateTemplate().saveOrUpdate(o);
getHibernateTemplate().flush();
solved the problem for me. (I'm not posting my code block, as I was sure everything was written properly and should work - but it did not until I added the flush() call.) Maybe this can help someone.
