ORA-02291: parent key not found when inserting multiple rows - oracle

I am having a problem executing a stored procedure that does multiple inserts.
I am copying 30 tables from one server instance to another over a DBLINK:
INSERT INTO table#dblink (column1)
SELECT column1
FROM table;
But it results in:
ORA-02291: integrity constraint (string.string) violated - parent key not found
There is only one commit at the end of the procedure.
The 4th table that I'm inserting into has an FK to the first one, and it is not recognizing the inserts into the first one (I have tried with deferred constraints and get the same problem: ORA-02291).

The problem here is that you are modifying data (DML) through a db-link. This can be handled poorly by Oracle and cause unexpected behavior. You should do it the other way around: instead of pushing data, pull data through the db-link and do the inserts locally. Of course, you may not technically be able to do what you want on the destination database...
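If pulling is an option, it is a single statement per table, run on the destination. This assumes a db-link defined there that points back at the source (dblink_to_source below is a hypothetical name), which may not exist in your setup:
-- Run on the destination database; load parents first, children after.
INSERT INTO table1 (column1)
SELECT column1 FROM table1@dblink_to_source;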
The other solution is to disable the FK before your inserts, then re-enable it afterwards.
However, I am not sure this DDL is possible directly through a db-link... You may need to create a procedure on the destination database that disables the FKs, and call it via the db-link.
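If you go the remote-procedure route, here is a minimal, untested sketch (procedure, table, and constraint names are hypothetical; it is created on the destination database and called from the source over the db-link, and may still run into distributed-transaction restrictions, as noted above):
-- Created on the DESTINATION database. Enables or disables a named FK
-- constraint; p_action is 'ENABLE' or 'DISABLE'. The autonomous transaction
-- keeps the DDL's implicit commit out of the caller's distributed transaction.
CREATE OR REPLACE PROCEDURE set_fk_state (
  p_table      IN VARCHAR2,
  p_constraint IN VARCHAR2,
  p_action     IN VARCHAR2
) AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  EXECUTE IMMEDIATE 'ALTER TABLE ' || p_table || ' ' || p_action ||
                    ' CONSTRAINT ' || p_constraint;
END;
/
-- Called from the SOURCE database, before and after the pushes:
-- EXEC set_fk_state@dblink('TABLE4', 'FK_TABLE4_TABLE1', 'DISABLE');
-- INSERT INTO table4@dblink (column1) SELECT column1 FROM table4;
-- COMMIT;
-- EXEC set_fk_state@dblink('TABLE4', 'FK_TABLE4_TABLE1', 'ENABLE');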

Related

Oracle deadlock output when caused by foreign keys

We have a multi-threaded batch job ending up in deadlock. I am getting conflicting answers from our DBAs as to what actually causes the deadlock.
Caused by: java.sql.SQLException: ORA-00060: deadlock detected while waiting for resource
The error output references the SQL for inserting into table A. Every row going into table A should be unique. Table A has foreign keys to two other tables, both of which are indexed and have primary keys made up of two columns. Many rows in Table A can point to the same FK in the parent tables. Our code handles FK errors by trying to insert into the parent tables and then attempting the insert into Table A again.
The SQL in the trace log refers to the Table A insert SQL (it does not show parameter binding values). Does this mean definitively that there are two identical SQL statements trying to insert into Table A, in which case our prior logic is not thread-safe somewhere? Or could it really be that there are two inserts both referencing an unsatisfied FK, and the deadlock occurs from our error handling trying to insert into the parent table? If so, would the SQL in the trace not then reference the parent table SQL?
Or, perversely, does the original insert attempt put a lock on the row, and then, after handling the error, does the second attempt at the insert cause the deadlock? Any further debugging assistance would be appreciated.
There's not much info to work with, but my guess would be that two threads are trying to insert the same rows into one of the 'two other tables' at the same time. An idea for debugging is below.
Use a trigger on table A and on the other two tables (3 triggers in total) that writes the inserted data to logging tables in an autonomous transaction that commits. This way you can see the uncommitted inserts that were rolled back due to the deadlock (the rows that exist in the logging tables but not in the actual tables are the ones that were rolled back).
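A minimal sketch of one such logging trigger, with hypothetical table and column names (repeat the pattern for the two parent tables):
-- Shadow table that keeps every attempted insert, even ones later rolled back.
CREATE TABLE table_a_log (id NUMBER, fk1 NUMBER, fk2 NUMBER, logged_at TIMESTAMP);

CREATE OR REPLACE TRIGGER table_a_ins_log
AFTER INSERT ON table_a
FOR EACH ROW
DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO table_a_log (id, fk1, fk2, logged_at)
  VALUES (:new.id, :new.fk1, :new.fk2, SYSTIMESTAMP);
  COMMIT;  -- the autonomous commit survives the deadlock rollback
END;
/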
HTH, KR

Unique Constraint Violated on empty table

I recently received a case in which my client came across the ORA-00001: unique constraint violated error. This happened when a program tried to truncate two tables and then insert data into them.
According to the error log file, the truncate step completed:
delete from INTERNET_GROUP
delete from INTERNET_ITEM
BUT right after this, the insertion into the Internet_group table triggered the ORA-00001 error. I am wondering if there are any database settings related to this error. I have never used Oracle and am wondering if Oracle puts a lock on a row with a SELECT statement, in which case the row would be locked and somehow not deleted? Any help is appreciated.
Please know that there is a difference between truncate and delete. You say you truncated the table, but you mention "delete from". That is entirely different.
If you're sure you want to empty the tables, try replacing with
truncate table internet_group reuse storage;
Mind you that a commit is not necessary with the truncate statement, as it is considered a DDL (data definition language) statement and not a DML (data manipulation language) statement like updates and deletes.
Also, there is no row locking on selects. But changes are only applied and visible to other sessions in the database once they are committed.
I guess that is what happened: you deleted the records but did not execute a commit (yet) and subsequently inserted new records.
edit:
I now realize you're probably inserting multiple records...
The other option might be that the data itself causes a violation. Can you please provide the constraints on the table? There must be a primary key or unique constraint; you might want to check your dataset against it.
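For example, something along these lines would list the primary key and unique constraints on the table and the columns they cover, which you can then compare against your dataset:
SELECT c.constraint_name, c.constraint_type, cc.column_name, cc.position
  FROM user_constraints c
  JOIN user_cons_columns cc ON cc.constraint_name = c.constraint_name
 WHERE c.table_name = 'INTERNET_GROUP'
   AND c.constraint_type IN ('P', 'U')
 ORDER BY c.constraint_name, cc.position;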

Make an Oracle foreign key constraint referencing USER_SEQUENCES(SEQUENCE_NAME)?

I want to create a table with a column that references the name of a sequence I've also created. Ideally, I'd like to have a foreign key constraint that enforces this. I've tried
create table testtable (
  sequence_name varchar2(128),
  constraint testtableconstr
    foreign key (sequence_name)
    references user_sequences (sequence_name)
    on delete set null
);
but I'm getting a SQL Error: ORA-01031: insufficient privileges. I suspect either this just isn't possible, or I need to add something like on update cascade. What, if anything, can I do to enforce this constraint when I insert rows into this table?
I assume you're trying to build some sort of deployment management system to keep track of your schema objects including sequences.
To do what you ask, you might explore one of the following options:
Run a report after each deployment that compares the values in your table vs. the data dictionary view, and lists any discrepancies.
Create a DDL trigger which does the insert automatically whenever a sequence is created.
Add a trigger to the table which queries the sequences view and raises an exception if the name is not found (a sketch follows below).
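A minimal sketch of that last option (trigger and table names are hypothetical, reusing the testtable definition from the question):
CREATE OR REPLACE TRIGGER testtable_seq_chk
BEFORE INSERT OR UPDATE OF sequence_name ON testtable
FOR EACH ROW
DECLARE
  l_count PLS_INTEGER;
BEGIN
  -- Reject rows whose sequence_name does not exist in the data dictionary.
  SELECT COUNT(*)
    INTO l_count
    FROM user_sequences
   WHERE sequence_name = :new.sequence_name;

  IF l_count = 0 THEN
    RAISE_APPLICATION_ERROR(-20001,
      'Sequence ' || :new.sequence_name || ' does not exist');
  END IF;
END;
/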
I'm somewhat confused about what you are trying to achieve here - a sequence (effectively) only has a single value, the next number to be allocated, not all the values that have been previously allocated.
If you simply want to ensure that an attribute in the relation is populated from the sequence, then a trigger would be the right approach.
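For that reading of the question, a minimal sketch (sequence and column names are hypothetical; the direct NEXTVAL assignment needs 11g or later):
CREATE OR REPLACE TRIGGER testtable_bi
BEFORE INSERT ON testtable
FOR EACH ROW
BEGIN
  -- On pre-11g versions use: SELECT testtable_seq.NEXTVAL INTO :new.id FROM dual;
  :new.id := testtable_seq.NEXTVAL;
END;
/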

Is it possible to compare other tables within a trigger?

I have a database with tables that are chained together with foreign keys, and the last one in the chain also has a foreign key to itself. I want to delete them with cascade on, except for the last one in the chain. That one should be set to null, unless its parent record has a certain value. I figured I would do that with a trigger: whenever the last table is updated, if the foreign key to itself has been set to null, check the field in the parent record, and if it has the value "default", delete the record in the last table.
However, I haven't found any help online indicating whether comparing against a parent record in another table is even possible.
Is this possible?
In general, a row-level trigger on table A cannot query table A. Doing so would generally raise a mutating table exception (ORA-04091). So a trigger is generally not the right solution.
Presumably, you have some sort of API (i.e. a stored procedure) to delete records from the parent table. That API should query this last table before issuing the DELETE against the parent table. It should take care of updating the last table in the chain as well as deleting the data from the parent table.
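A rough, untested sketch of what such an API might look like (table and column names are hypothetical, matching the trigger example further down):
CREATE OR REPLACE PROCEDURE delete_parent (p_pk IN parent_table.pk%TYPE) AS
  l_parent_field parent_table.parent_field%TYPE;
BEGIN
  SELECT parent_field
    INTO l_parent_field
    FROM parent_table
   WHERE pk = p_pk;

  IF l_parent_field = 'default' THEN
    -- The parent carries the special value: remove the dependent rows outright.
    DELETE FROM last_table WHERE parent_fk = p_pk;
  ELSE
    -- Otherwise detach them so the parent delete does not violate the FK.
    UPDATE last_table SET parent_fk = NULL WHERE parent_fk = p_pk;
  END IF;

  DELETE FROM parent_table WHERE pk = p_pk;
END;
/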
If you really wanted a trigger-based solution, life would get substantially more complicated. You could work around the mutating table exception by
Creating a package with a collection of primary keys from the parent table
Creating a before statement trigger that initializes this collection
Creating a row-level trigger that populates the collection with the primary keys that were modified by the SQL statement
Creating an after statement trigger that iterates over the collection and issues whatever DML is necessary (unlike row-level triggers, statement-level triggers on table A can query or modify table A).
If you're using 11g, you can simplify this a bit with a compound trigger with before statement, after row, and after statement sections. But you've still got a number of moving pieces to try to coordinate.
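A minimal sketch of the 11g compound-trigger variant, with hypothetical table and column names (pk, parent_fk, self_ref_fk, parent_field), following the question's logic of deleting the row when the parent field is 'default':
CREATE OR REPLACE TRIGGER last_table_cmp_trg
FOR UPDATE ON last_table
COMPOUND TRIGGER

  -- Collect the keys of rows whose self-reference was just nulled out.
  TYPE t_pk_tab IS TABLE OF last_table.pk%TYPE INDEX BY PLS_INTEGER;
  g_pks t_pk_tab;

  AFTER EACH ROW IS
  BEGIN
    IF :new.self_ref_fk IS NULL AND :old.self_ref_fk IS NOT NULL THEN
      g_pks(g_pks.COUNT + 1) := :new.pk;
    END IF;
  END AFTER EACH ROW;

  -- Statement-level sections may query/modify last_table without ORA-04091.
  AFTER STATEMENT IS
  BEGIN
    FOR i IN 1 .. g_pks.COUNT LOOP
      DELETE FROM last_table t
       WHERE t.pk = g_pks(i)
         AND EXISTS (SELECT 1
                       FROM parent_table p
                      WHERE p.pk = t.parent_fk
                        AND p.parent_field = 'default');
    END LOOP;
  END AFTER STATEMENT;

END last_table_cmp_trg;
/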
AFAIK you won't be able to really delete the record in the last table (mutating table problem), but you could update a status field indicating the record has been logically deleted (untested):
create or replace trigger last_table_trig
  before update on last_table
  for each row
declare
  l_parentField varchar2(100);
begin
  if :new.self_ref_fk is null then
    select p.parent_field
      into l_parentField
      from parent_table p
     where p.pk = :new.parent_fk;
    if l_parentField = 'default' then
      :new.status := 'DELETED';
    end if;
  end if;
end;

Find all Foreign Key errors with Oracle SET CONSTRAINTS ALL DEFERRED

I am using:
set constraints all deferred;
(lots of deletes and inserts)
commit;
This works as expected. If there are any broken relationships then the commit fails and an error is raised listing ONE of the FKs that it fails on.
The user fixes the offending data and runs again, then hits another FK issue and repeats the process.
What I really want is a list of ALL FKs in one go that would cause the commit to fail.
I can of course write something to check every FK relationship through a select statement (one select per FK), but the beauty of using the deferred session is that this is all handled for me.
The "set constraint ... immediate" answer worked fine for me.
Taking the A,B,C example from the other post:
SQL> create table a (id number primary key);
Table created.
SQL> create table b (id number primary key, a_id number, constraint fk_b_to_a foreign key (a_id) references a deferrable initially immediate);
Table created.
SQL> create table c (id number primary key, b_id number, constraint fk_c_to_b foreign key (b_id) references b deferrable initially immediate);
Table created.
SQL> insert into a values (1);
1 row created.
SQL> insert into b values (1,1);
1 row created.
SQL> insert into c values (1,1);
1 row created.
SQL> commit;
Commit complete.
I have a consistent set of data.. my starting point.
Now I start a session to update some of the data - which is what I was trying to describe in my post.
SQL> set constraints all deferred;
Constraint set.
SQL> delete from a;
1 row deleted.
SQL> delete from b;
1 row deleted.
SQL> insert into b values (10,10);
1 row created.
SQL> set constraint fk_b_to_a immediate;
set constraint fk_b_to_a immediate
*
ERROR at line 1:
ORA-02291: integrity constraint (GW.FK_B_TO_A) violated - parent key not found
SQL> set constraint fk_c_to_b immediate;
set constraint fk_c_to_b immediate
*
ERROR at line 1:
ORA-02291: integrity constraint (GW.FK_C_TO_B) violated - parent key not found
That tells me about both broken constraints (C to B and B to A) without rolling back the transaction, which is exactly what I wanted.
Thanks
First option: you can look into DML error logging. That way you leave your constraints active and do the inserts, with the failing rows going into error tables. You can find them there, fix them, re-insert them, and delete them from the error table.
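A minimal sketch of that, assuming a hypothetical child table loaded from a staging table:
-- One-off: create the default error table (ERR$_CHILD_TABLE).
EXEC DBMS_ERRLOG.CREATE_ERROR_LOG('CHILD_TABLE');

-- The load itself: rows violating a constraint land in the error table
-- instead of failing the whole statement.
INSERT INTO child_table (id, parent_id)
SELECT id, parent_id FROM staging_child
LOG ERRORS INTO err$_child_table ('nightly load') REJECT LIMIT UNLIMITED;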
The second option is, rather than trying the commit, attempt to re-enable each deferred constraint dynamically, catching the errors. You'd be looking at a PL/SQL procedure that queries ALL_CONSTRAINTS for the deferrable ones, then does an EXECUTE IMMEDIATE to make that constraint immediate. That last bit would be in a block with an exception handler so you can catch the ones that fail, but continue on.
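A sketch of that second option (untested; it assumes the deferrable constraints belong to the current schema and reports failures via DBMS_OUTPUT):
BEGIN
  FOR c IN (SELECT owner, constraint_name
              FROM all_constraints
             WHERE owner = USER
               AND constraint_type = 'R'
               AND deferrable = 'DEFERRABLE') LOOP
    BEGIN
      EXECUTE IMMEDIATE
        'SET CONSTRAINT ' || c.owner || '.' || c.constraint_name || ' IMMEDIATE';
    EXCEPTION
      WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE(c.constraint_name || ' failed: ' || SQLERRM);
    END;
  END LOOP;
END;
/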
Personally, I'd go for option (1) as you do get the individual records that fail, plus you don't have to worry about deferrable constraints and remembering to make them immediate again after this script so it doesn't break something later.
I guess somewhere in memory Oracle must be maintaining a list, at least of constraints that failed if not the actual rows. I'm pretty confident it doesn't rescan the whole set of tables at commit time to check all the constraints.
Oracle's error mechanism is too primitive to return a collection of all errors that COULD occur. I mean, it's a cool thought, but think about what you'd have to do if you wrote the code. Your standard error handling would need to be thwarted: instead of returning an error as soon as you encounter it, you'd have to continue somehow to see if you could proceed if the first error you caught wasn't an error at all. And even if you did all of this, you'd still face the possibility that the row you add to fix the first error actually causes the second error.
For example:
A is the parent of B, which is the parent of C. I defer and insert C. I get an error that there's no parent B record. I defer and add B. Now I get another error that there's no A.
This is certainly possible and there's no way to tell in advance.
I think you're looking for the server to do some work that's really your responsibility. At the risk of mod -1, you're using a technique for firing an automatic weapon called "Spray and Pray" -- Fire as many rounds as possible and hope you kill the target. That approach just can't work here. You disable your RI and do a bunch of steps and hope that all the RI works out in the end when the database "grades" your DML.
I think you have to write the code.
