I have Oracle Database 11g Enterprise Edition Release 11.2.0.1.0.
I have a parent table t1 and a child table t2 with a foreign key that references t1(col1).
What I'm wondering is why the locking occurs.
Please check what I've done...
session 1
SQL> create table t1(col1 char(1), primary key(col1));
Table created.
SQL> insert into t1 values('1');
1 row created.
SQL> insert into t1 values('2');
1 row created.
SQL> insert into t1 values('3');
1 row created.
SQL> insert into t1 values('4');
1 row created.
SQL> insert into t1 values('5');
1 row created.
SQL> commit;
Commit complete.
SQL> create table t2(col1 char(1), col2 char(2), foreign key(col1) references t1(col1));
Table created.
SQL> insert into t2 values('1','0');
1 row created.
SQL> commit;
Commit complete.
SQL> update t2 set col2='9'; --not committed yet!
1 row updated.
session 2
SQL> delete from t1; -- Lock happens here!!!
session 1
SQL> commit;
Commit complete.
session 2
delete from t1 -- The error occurs after I commit the update in session 1.
*
ERROR at line 1:
ORA-02292: integrity constraint (KMS_USER.SYS_C0013643) violated - child record found
Could anyone explain why this happens?
delete from t1; tries to lock the child table, t2. While the session is waiting for that table-level lock, it can't even attempt to delete anything yet.
This unusual locking behavior occurs because you have an unindexed foreign key.
If you create an index, create index t2_idx on t2(col1);, you will get the ORA-02292 error immediately instead of the lock, as sketched below.
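A minimal sketch of the indexed case, rebuilding the scenario from scratch with the index created before the uncommitted update (your constraint name will differ):
SQL> create index t2_idx on t2(col1);
Index created.
session 1
SQL> update t2 set col2='9'; --not committed yet!
1 row updated.
session 2
SQL> delete from t1; -- fails immediately, no waiting
delete from t1
*
ERROR at line 1:
ORA-02292: integrity constraint (KMS_USER.SYS_C0013643) violated - child record found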
The wait is caused by your uncommitted transaction on t2 (update t2 set col2='9'); the insert into t2 values('1','0') was already committed. Because t2.col1 is an unindexed foreign key column, the delete from t1 in session 2 needs a table-level lock on t2, and it has to wait until session 1 commits or rolls back.
Think about it. Once that row exists in t2, there is a reference from t2.col1 back to t1.col1. The foreign key has been validated at that point and Oracle knows that the '1' exists in t1. If session 2 could delete that row from t1 straight away, the child row in t2 would be left referencing a parent that no longer exists, which doesn't make any sense. That is also why, once session 1 commits and the wait ends, the delete fails with ORA-02292: the committed child record is still there.
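As an aside, a quick way to find candidates for this problem across a schema is to list foreign key columns that have no matching index. A minimal sketch against the standard USER_ dictionary views (simplified to single-column keys; composite keys need a position-aware comparison):
select cc.table_name, cc.column_name, cc.constraint_name
from user_cons_columns cc
join user_constraints c on c.constraint_name = cc.constraint_name
where c.constraint_type = 'R' -- referential (foreign key) constraints
and not exists (select null
                from user_ind_columns ic
                where ic.table_name = cc.table_name
                and ic.column_name = cc.column_name);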
Related
I am trying to come up with a way to make sure that when a table is updated, a certain condition is met. Can this be done in a trigger? I have made the following two tables, storeTable and employeeTable.
I need to make sure that when storeManager is updated in the storeTable, the employee has a storeID that matches the store in which I am trying to update the storeManager (an employee cannot be the manager of a store he does not work at).
In addition, I need to make sure that the employee exists in the employeeTable. I was thinking some sort of CASE statement would be best, but I don't know how this could be enforced by a trigger.
I was thinking about morphing the "Foreign Key Trigger for Child Table" trigger example from https://docs.oracle.com/cd/B12037_01/appdev.101/b10795/adfns_tr.htm#1007172 but I could not figure out how to change this to fit my specific need. Any help is much appreciated.
For context, the current keys are:
storeTable:
storeID PRIMARY KEY
employeeTable:
empID PRIMARY KEY
storeID FOREIGN KEY REFERS TO storeTable.storeID
To me, it seems that constraints can do the job; you don't need triggers.
Here's how. First, create the tables without any constraints. Then add the constraints: the primary keys, and foreign keys that are deferrable (otherwise you wouldn't be able to insert any rows, as the parent keys wouldn't exist yet).
SQL> create table employee
2 (empid number,
3 fname varchar2(10),
4 storeid number
5 );
Table created.
SQL> create table store
2 (storeid number,
3 storename varchar2(20),
4 storemanager number
5 );
Table created.
SQL>
SQL> alter table employee add constraint pk_employee primary key (empid, storeid);
Table altered.
SQL> alter table store add constraint pk_store primary key (storeid);
Table altered.
SQL>
SQL> alter table store add constraint fk_store_emp foreign key (storemanager, storeid)
2 references employee (empid, storeid)
3 deferrable initially deferred;
Table altered.
SQL> alter table employee add constraint fk_emp_store foreign key (storeid)
2 references store (storeid)
3 deferrable initially deferred;
Table altered.
SQL>
Now let's add some data: initial insert into employee will be OK until I commit - then it'll fail because its store doesn't exist yet:
SQL> insert into employee values (1, 'John' , 1);
1 row created.
SQL> commit;
commit
*
ERROR at line 1:
ORA-02091: transaction rolled back
ORA-02291: integrity constraint (SCOTT.FK_EMP_STORE) violated - parent key not
found
SQL>
But, if I don't commit and pay attention to what I'm entering (i.e. that referential integrity is maintained), it'll be OK:
SQL> insert into employee values (1, 'John' , 1);
1 row created.
SQL> insert into employee values (2, 'Matt' , 2);
1 row created.
SQL> insert into store values (1, 'Santa Clara', 1);
1 row created.
SQL> insert into store values (2, 'San Francisco', 2); --> note 2 as STOREID
1 row created.
SQL> commit;
Commit complete.
SQL> select * From employee;
EMPID FNAME STOREID
---------- ---------- ----------
1 John 1
2 Matt 2
SQL> select * From store;
STOREID STORENAME STOREMANAGER
---------- -------------------- ------------
1 Santa Clara 1
2 San Francisco 2
SQL>
See? So far so good.
Now I'll try to modify the STORE table and set its manager to John, who works in storeid = 1, which means that it should fail:
SQL> update store set storemanager = 1
2 where storeid = 2;
1 row updated.
SQL> commit;
commit
*
ERROR at line 1:
ORA-02091: transaction rolled back
ORA-02291: integrity constraint (SCOTT.FK_STORE_EMP) violated - parent key not
found
SQL>
As expected.
Let's now add employee ID = 6, Jimmy, who works in storeid = 2, and set him to be the manager in San Francisco (storeid = 2):
SQL> insert into employee values (6, 'Jimmy', 2);
1 row created.
SQL> update store set storemanager = 6
2 where storeid = 2;
1 row updated.
SQL> commit;
Commit complete.
SQL>
Yay! It works!
As you can see, no triggers needed.
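A related convenience of deferrable constraints: you don't have to wait for the commit to discover a violation. SET CONSTRAINTS ALL IMMEDIATE checks the deferred constraints right away and, unlike a failed commit, it does not roll the transaction back, so you can fix the data and carry on. A sketch with a made-up employee whose store doesn't exist:
SQL> insert into employee values (3, 'Anne', 9); --> no storeid = 9
1 row created.
SQL> set constraints all immediate;
set constraints all immediate
*
ERROR at line 1:
ORA-02291: integrity constraint (SCOTT.FK_EMP_STORE) violated - parent key not
found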
Note that if you want to drop either of those tables, you'll fail, as they reference each other:
SQL> drop table store;
drop table store
*
ERROR at line 1:
ORA-02449: unique/primary keys in table referenced by foreign keys
SQL> drop table employee;
drop table employee
*
ERROR at line 1:
ORA-02449: unique/primary keys in table referenced by foreign keys
SQL>
It means that you'd first have to drop foreign key constraints, then drop tables:
SQL> alter table employee drop constraint fk_emp_store;
Table altered.
SQL> alter table store drop constraint fk_store_emp;
Table altered.
SQL> drop table store;
Table dropped.
SQL> drop table employee;
Table dropped.
SQL>
That's all, I guess.
First, my Oracle version:
SQL> select * from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE 11.2.0.4.0 Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production
I create a table and insert two rows:
create table test_table
(
objectId VARCHAR2(40) not null,
dependId VARCHAR2(40) not null
);
insert into test_table values(1, 10000);
insert into test_table values(2, 20000);
commit;
Then open two sessions and execute the following commands in order.
Case 1:
session 1:
update test_table set dependId=100000 where objectid in (2);
session 2:
update test_table set dependId=200000 where objectid in (1,2);
session 1:
update test_table set dependId=100000 where objectid in (1);
and session 2 shows ORA-00060: deadlock detected while waiting for resource.
Case 2:
session 1:
update test_table set dependId=100000 where objectid in (1);
session 2:
update test_table set dependId=200000 where objectid in (2,1);
session 1:
update test_table set dependId=100000 where objectid in (2);
and no deadlock occurs.
Please explain the reason. How does update ... where objectid in (1,2) hold its locks?
This comes down to the order the database tries to acquire locks on the rows.
In your example objectid = 1 is "first" in the table. You can verify this by sorting the data by rowid:
create table test_table
(
objectId VARCHAR2(40) not null,
dependId VARCHAR2(40) not null
);
insert into test_table values(1, 99);
insert into test_table values(2, 0);
commit;
select rowid, t.* from test_table t
order by rowid;
ROWID OBJECTID DEPENDID
AAAT9kAAMAAAdMVAAA 1 99
AAAT9kAAMAAAdMVAAB 2 0
If in session 1 you now run:
update test_table set dependId=100000 where objectid in (2);
You're updating the "second" row in the table. When session 2 runs:
update test_table set dependId=200000 where objectid in (2,1);
It reads the data block, then tries to acquire locks on the rows in the order they're stored. So it looks at the first row (objectid = 1), asks "is this locked?", finds that the answer is no, and locks the row.
It then repeats this process for the second row, which is locked by session 1. When querying v$lock, you should see two sessions holding 'TX' locks in lmode = 6, one for each session:
select sid from v$lock
where type = 'TX'
and lmode = 6;
SID
75
60
So at this stage both sessions have one row locked. And session 2 is waiting for session 1.
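You can see the wait itself by including the request column: the waiting session holds its own 'TX' lock in lmode = 6 and shows a second 'TX' row with lmode = 0 and request = 6 against the blocker's transaction. A sketch:
select sid, lmode, request
from v$lock
where type = 'TX';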
In session 1 you now run:
update test_table set dependId=100000 where objectid in (1);
BOOOOM! Deadlock!
OK, but how can we be sure that this is due to the order rows are stored?
Using attribute clustering (a 12c feature), we can change the order rows are stored in the blocks, so objectid = 2 is "first":
alter table test_table
add clustering
by linear order ( dependId );
alter table test_table move;
select rowid, t.* from test_table t
order by rowid;
ROWID OBJECTID DEPENDID
AAAT9lAAMAAAdM7AAA 2 0
AAAT9lAAMAAAdM7AAB 1 99
Repeat the test. In session 1:
update test_table set dependId=100000 where objectid in (2);
So this has locked the "first" row. In session 2:
update test_table set dependId=200000 where objectid in (2,1);
This tries to lock the "first" row. But can't because session 1 has it locked. So at this point only session 1 holds any locks.
Check v$lock to be sure:
select sid from v$lock
where type = 'TX'
and lmode = 6;
SID
60
And, sure enough, when you run the second update in session 1 it completes:
update test_table set dependId=100000 where objectid in (1);
NOTE
This doesn't mean that update is guaranteed to lock rows in the order they're stored in the table blocks. Adding or removing indexes could affect this behaviour. As could changes between Oracle Database versions.
The key point is that update has to lock rows in some order. It can't instantly acquire locks on all the rows it'll change.
So if you have two or more sessions running multiple updates, deadlock is possible. To avoid it, start your transaction by locking all the rows you intend to change with select ... for update, as sketched below.
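A sketch of that approach for this example: each session locks every row it intends to change in a single statement, in a deterministic order, before issuing any updates, so the second session simply waits instead of deadlocking:
select * from test_table
where objectid in (1, 2)
order by objectid
for update;

update test_table set dependId=100000 where objectid = 1;
update test_table set dependId=100000 where objectid = 2;
commit;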
CREATE TABLE t1 (
t1pk NUMBER PRIMARY KEY NOT NULL
,t1val NUMBER
);
CREATE TABLE t2 (
t2pk NUMBER PRIMARY KEY NOT NULL
,t2fk NUMBER
,t2val NUMBER
,CONSTRAINT t2fk FOREIGN KEY (t2fk)
REFERENCES t1 (t1pk) ON DELETE CASCADE
);
INSERT INTO t1 (t1pk, t1val)
VALUES (1, 1);
INSERT INTO t2 (t2pk, t2fk, t2val)
VALUES (1, 1, 1);
COMMIT;
CREATE SEQUENCE seq1
MINVALUE 1
MAXVALUE 999999999999999999999999999;
CREATE OR REPLACE TRIGGER trg1
BEFORE
INSERT -- Problematic code.
OR UPDATE
OR DELETE ON t1
FOR EACH ROW
BEGIN
IF (INSERTING)
THEN
-- Problematic code.
:NEW.t1pk := seq1.NEXTVAL;
END IF;
END trg1;
/
Session 1:
UPDATE t2 -- Table 2!
SET t2val = t2val;
Session 2:
UPDATE t1 -- Table 1!
SET t1val = t1val;
In session 2 the update does not return; it waits until session 1 ends its transaction with a commit or rollback. This is not what I expect. The cause seems to be the trigger code that generates the primary key from the sequence: if I remove that sequence code, the update in session 2 does not wait and returns while session 1's transaction is still open. What is wrong?
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
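While session 2 is stuck, you can confirm from a third session who is blocking whom; a minimal diagnostic sketch using v$session:
select sid, blocking_session, event, seconds_in_wait
from v$session
where blocking_session is not null;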
1/ Do FLASHBACK and SELECT ... AS OF / VERSIONS BETWEEN use the same source of history to fall back to? This question is related to the second one.
2/ I am aware that FLASHBACK cannot go back before a DDL change.
My question is: would SELECT ... AS OF be able to select something from before a DDL change?
Take for example
CREATE TABLE T
(col1 NUMBER, col2 NUMBER);
INSERT INTO T(col1, col2) VALUES(1, 1);
INSERT INTO T(col1, col2) VALUES(2, 2);
COMMIT;
-- wait ~15 seconds, e.g. EXEC DBMS_LOCK.SLEEP(15)
ALTER TABLE T DROP COLUMN col2;
SELECT * FROM T
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' SECOND);
Would the select return 2 columns or 1?
Pardon me, I do not have a database at hand to test.
Any DDL that alters the structure of a table invalidates any existing undo data for the table, so you will get ORA-01466: unable to read data - table definition has changed.
Here is a simple test
CREATE TABLE T
(col1 NUMBER, col2 NUMBER);
INSERT INTO T(col1, col2) VALUES('1', '1');
INSERT INTO T(col1, col2) VALUES('2', '2');
COMMIT;
-- wait ~15 seconds, e.g. EXEC DBMS_LOCK.SLEEP(15)
ALTER TABLE T DROP COLUMN col2;
SELECT * FROM T
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '60' SECOND);
Executing the above select statement raises ORA-01466.
However, DDL operations that alter only the storage attributes of a table do not invalidate undo data, so you can still use flashback query across them.
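For example (a fresh scenario, assuming no structural DDL has happened inside the flashback window): PCTFREE is a storage attribute, so a flashback query across this change still works:
ALTER TABLE T PCTFREE 30;
SELECT * FROM T
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '60' SECOND);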
1) FLASHBACK TABLE and SELECT ... AS OF use the same source: UNDO. There is also FLASHBACK DATABASE - although it uses a similar mechanism, it relies on a separate source, flashback logs, which are optional and must be configured separately.
2) Flashback table and flashback queries can go back before a DDL change if you enable a flashback archive.
To use that feature, add a few statements to the sample code:
CREATE FLASHBACK ARCHIVE my_flashback_archive TABLESPACE users RETENTION 10 YEAR;
...
ALTER TABLE t FLASHBACK ARCHIVE my_flashback_archive;
Now this statement will return 1 column:
SELECT * FROM T;
And this statement will return 2 columns:
SELECT * FROM T AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' SECOND);
Is there a way to force-delete all dependent (child) rows when you delete the parent row of a table?
I have a table with many referential integrity constraints on it. I was wondering what the easiest way to achieve this in Oracle is.
I appreciate your support.
You can declare foreign key constraints that cascade deletes so that child rows are automatically deleted when the parent row is deleted.
SQL> create table parent (
2 parent_key number primary key
3 );
Table created.
SQL> create table child (
2 child_key number primary key,
3 parent_key number,
4 constraint fk_child_parent foreign key( parent_key )
5 references parent( parent_key )
6 on delete cascade
7 );
Table created.
SQL> insert into parent values( 1 );
1 row created.
SQL> insert into child values( 10, 1 );
1 row created.
SQL> commit;
Commit complete.
SQL> delete from parent where parent_key = 1;
1 row deleted.
SQL> select * from child;
no rows selected
I'm personally not a fan of this sort of cascading delete; I'd rather see the delete against the child table as part of the procedure that deletes from the parent, so that the flow of the program is all in one place. Cascading foreign keys are like triggers in that they can seriously complicate program flow by adding actions that are hard for a developer reading through the code to notice and track.
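For completeness, a sketch of that explicit alternative using the same tables: delete the child rows first, then the parent, in a single transaction:
delete from child where parent_key = 1;
delete from parent where parent_key = 1;
commit;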