First, my Oracle version:
SQL> select * from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE 11.2.0.4.0 Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production
I create a table and insert two rows:
create table test_table
(
objectId VARCHAR2(40) not null,
dependId VARCHAR2(40) not null
);
insert into test_table values(1, 10000);
insert into test_table values(2, 20000);
commit;
Then open two sessions and execute the following commands in order.
Case1:
session1:
update test_table set dependId=100000 where objectid in (2);
session2:
update test_table set dependId=200000 where objectid in (1,2);
session1:
update test_table set dependId=100000 where objectid in (1);
and session2 shows ORA-00060: deadlock detected while waiting for resource
Case2:
session1:
update test_table set dependId=100000 where objectid in (1);
session2:
update test_table set dependId=200000 where objectid in (2,1);
session1:
update test_table set dependId=100000 where objectid in (2);
and no deadlock occurs.
Please explain the reason. How does the update ... where objectid in (1,2) acquire and hold its locks?
This comes down to the order the database tries to acquire locks on the rows.
In your example objectid = 1 is "first" in the table. You can verify this by sorting the data by rowid:
create table test_table
(
objectId VARCHAR2(40) not null,
dependId VARCHAR2(40) not null
);
insert into test_table values(1, 99);
insert into test_table values(2, 0);
commit;
select rowid, t.* from test_table t
order by rowid;
ROWID OBJECTID DEPENDID
AAAT9kAAMAAAdMVAAA 1 99
AAAT9kAAMAAAdMVAAB 2 0
If in session 1 you now run:
update test_table set dependId=100000 where objectid in (2);
You're updating the "second" row in the table. When session 2 runs:
update test_table set dependId=200000 where objectid in (2,1);
It reads the data block, then tries to acquire locks on the rows in the order they're stored. So it looks at the first row (objectid = 1), asks "is this locked?", finds the answer is no, and locks the row.
It then repeats this process for the second row, which is locked by session 1. When querying v$lock, you should see two entries holding 'TX' locks in lmode = 6, one for each session:
select sid from v$lock
where type = 'TX'
and lmode = 6;
SID
75
60
So at this stage both sessions have one row locked. And session 2 is waiting for session 1.
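As a side note, you can see this wait chain directly in v$session (the SIDs below are the ones from this run; yours will differ). The blocked session typically reports the event enq: TX - row lock contention:
select sid, blocking_session, event
from   v$session
where  sid in (60, 75);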
In session 1 you now run:
update test_table set dependId=100000 where objectid in (1);
BOOOOM! Deadlock!
OK, but how can we be sure that this is due to the order rows are stored?
Using attribute clustering (a 12c feature), we can change the order rows are stored in the blocks, so objectid = 2 is "first":
alter table test_table
add clustering
by linear order ( dependId );
alter table test_table move;
select rowid, t.* from test_table t
order by rowid;
ROWID OBJECTID DEPENDID
AAAT9lAAMAAAdM7AAA 2 0
AAAT9lAAMAAAdM7AAB 1 99
Repeat the test. In session 1:
update test_table set dependId=100000 where objectid in (2);
So this has locked the "first" row. In session 2:
update test_table set dependId=200000 where objectid in (2,1);
This tries to lock the "first" row. But can't because session 1 has it locked. So at this point only session 1 holds any locks.
Check v$lock to be sure:
select sid from v$lock
where type = 'TX'
and lmode = 6;
SID
60
And, sure enough, when you run the second update in session 1 it completes:
update test_table set dependId=100000 where objectid in (1);
NOTE
This doesn't mean that update is guaranteed to lock rows in the order they're stored in the table blocks. Adding or removing indexes could affect this behaviour. As could changes between Oracle Database versions.
The key point is that update has to lock rows in some order. It can't instantly acquire locks on all the rows it'll change.
So if you have two or more sessions running multiple updates, deadlock is possible. To reduce the risk, start your transaction by locking all the rows you intend to change with select ... for update; a sketch of that approach follows.
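A minimal sketch for the example above (each session locks both rows up front, then runs its updates):
select *
from   test_table
where  objectid in (1, 2)
for update;

update test_table set dependId=100000 where objectid in (1);
update test_table set dependId=100000 where objectid in (2);
commit;
In this example, whichever session issues the select ... for update second simply waits for the first to commit; neither session ends up holding some of the rows while waiting for the rest.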
CREATE TABLE t1 (
t1pk NUMBER PRIMARY KEY NOT NULL
,t1val NUMBER
);
CREATE TABLE t2 (
t2pk NUMBER PRIMARY KEY NOT NULL
,t2fk NUMBER
,t2val NUMBER
,CONSTRAINT t2fk FOREIGN KEY (t2fk)
REFERENCES t1 (t1pk) ON DELETE CASCADE
);
INSERT INTO t1 (t1pk, t1val)
VALUES (1, 1);
INSERT INTO t2 (t2pk, t2fk, t2val)
VALUES (1, 1, 1);
COMMIT;
CREATE SEQUENCE seq1
MINVALUE 1
MAXVALUE 999999999999999999999999999;
CREATE OR REPLACE TRIGGER trg1
BEFORE
INSERT -- Problematic code.
OR UPDATE
OR DELETE ON t1
FOR EACH ROW
BEGIN
IF (INSERTING)
THEN
-- Problematic code.
:NEW.t1pk := seq1.NEXTVAL;
END IF;
END trg1;
/
Session 1:
UPDATE t2 -- Table 2!
SET t2val = t2val;
Session 2:
UPDATE t1 -- Table 1!
SET t1val = t1val;
In session 2 the update does not return; it waits until session 1 ends its transaction with commit or rollback. This is not what I expect. The cause seems to be the trigger code that generates the primary key from the sequence: if I remove that sequence code, the update in session 2 does not wait and returns while session 1's transaction is still open. What is wrong?
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
1/ Do FLASHBACK and SELECT ... AS OF / VERSIONS BETWEEN use the same source of history to fall back to? This question is related to the second question.
2/ I am aware that FLASHBACK cannot go back before a DDL change.
My question is: for SELECT ... AS OF, would it be able to select something from before a DDL change?
Take for example
CREATE TABLE T
(col1 NUMBER, col2 NUMBER);
INSERT INTO T(col1, col2) VALUES('1', '1');
INSERT INTO T(col1, col2) VALUES('2', '2');
COMMIT;
SLEEP(15)
ALTER TABLE T DROP COLUMN col2;
SELECT * FROM T
AS OF SYSTIMESTAMP - INTERVAL '10' SECOND;
Would the select return 2 columns or 1?
Pardon me, I do not have a database at hand to test.
Any DDL that alters the structure of a table invalidates any existing undo data for the table, so you will get the error ORA-01466: unable to read data - table definition has changed.
Here is a simple test
CREATE TABLE T
(col1 NUMBER, col2 NUMBER);
INSERT INTO T(col1, col2) VALUES('1', '1');
INSERT INTO T(col1, col2) VALUES('2', '2');
COMMIT;
SLEEP(15)
ALTER TABLE T DROP COLUMN col2;
SELECT * FROM T
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '60' SECOND);
ERROR: ORA-01466 upon executing the above select statement.
However, DDL operations that alter only the storage attributes of a table do not invalidate undo data, so you can still use flashback query.
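For example, a quick sketch on a fresh table (the table name is illustrative). Changing PCTFREE is a storage-attribute change only, so the flashback query still works:
CREATE TABLE t_demo (col1 NUMBER, col2 NUMBER);
INSERT INTO t_demo(col1, col2) VALUES (1, 1);
INSERT INTO t_demo(col1, col2) VALUES (2, 2);
COMMIT;
-- (wait a minute or so, as with SLEEP(15) above)
ALTER TABLE t_demo PCTFREE 20;
SELECT * FROM t_demo
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '60' SECOND);
-- returns both rows; no ORA-01466 this time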
1) FLASHBACK TABLE and SELECT ... AS OF use the same source: undo. There is also FLASHBACK DATABASE; although it works on a similar principle, it uses a separate source, flashback logs, which are optional and must be configured explicitly.
2) Flashback table and flashback queries can go back before a DDL change if you enable a flashback archive.
To use that feature, add a few statements to the sample code:
CREATE FLASHBACK ARCHIVE my_flashback_archive TABLESPACE users RETENTION 10 YEAR;
...
ALTER TABLE t FLASHBACK ARCHIVE my_flashback_archive;
Now this statement will return 1 column:
SELECT * FROM T;
And this statement will return 2 columns:
SELECT * FROM T AS OF SYSTIMESTAMP - INTERVAL '10' SECOND;
I am facing an issue while inserting multiple rows in one go into a table, because the id column is a primary key and its value is created based on a sequence.
For example:
create table test (
iD number primary key,
name varchar2(10)
);
insert into test values (123, 'xxx');
insert into test values (124, 'yyy');
insert into test values (125, 'xxx');
insert into test values (126, 'xxx');
The following statement creates a constraint violation error:
insert into test
(
select (SELECT MAX (id) + 1 FROM test) as id,
name from test
where name='xxx'
);
This query should insert 3 rows into table test (the rows having name='xxx').
You're saying that your query inserts rows with a primary key ID based on a sequence. Yet, in your insert/select there is select (SELECT MAX (id) + 1 FROM test) as id, which clearly is not based on a sequence. It may be the case that you are not using the term "sequence" in the usual Oracle way.
Anyway, there are two options for you ...
Option 1: Create a sequence, e.g. seq_test_id, starting just above the current select max(id) from test, and use it (i.e. seq_test_id.nextval) in your query instead of the select max(id)+1 from test; see the sketch below.
Option 2: Fix the actual subselect to nvl((select max(id) from test),0)+rownum instead of (select max(id)+1 from test).
Please note, however, that option 2 (as well as your original solution) will cause you huge trouble whenever your code runs in multiple concurrent database sessions. So, option 1 is strongly recommended.
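For option 1, a minimal sketch (the sequence name and the START WITH value are illustrative; start the sequence above your current max(id), which is 126 in the sample data):
create sequence seq_test_id start with 127;

insert into test (id, name)
select seq_test_id.nextval, name
from test
where name = 'xxx';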
Use
insert into test (
select (SELECT MAX (id) FROM test) + rownum as id,
name from test
where name='xxx'
);
as a workaround.
Of course, you should be using sequences for integer primary keys.
If you want to insert an ID/primary key value generated by a sequence, you should use the sequence instead of selecting max(ID)+1.
Usually this is done using a trigger on your table which is executed for each row. See the sample below:
CREATE TABLE "MY_TABLE"
(
"MY_ID" NUMBER(10,0) CONSTRAINT PK_MY_TABLE PRIMARY KEY ,
"MY_COLUMN" VARCHAR2(100)
);
/
CREATE SEQUENCE "S_MY_TABLE"
MINVALUE 1 MAXVALUE 999999999999999999999999999
INCREMENT BY 1 START WITH 10 NOCACHE ORDER NOCYCLE NOPARTITION ;
/
CREATE OR REPLACE TRIGGER "T_MY_TABLE"
BEFORE INSERT
ON
MY_TABLE
REFERENCING OLD AS OLDEST NEW AS NEWEST
FOR EACH ROW
WHEN (NEWEST.MY_ID IS NULL)
DECLARE
IDNOW NUMBER;
BEGIN
SELECT S_MY_TABLE.NEXTVAL INTO IDNOW FROM DUAL;
:NEWEST.MY_ID := IDNOW;
END;
/
ALTER TRIGGER "T_MY_TABLE" ENABLE;
/
insert into MY_TABLE (MY_COLUMN) values ('DATA1');
insert into MY_TABLE (MY_COLUMN) values ('DATA2');
insert into MY_TABLE (MY_ID, MY_COLUMN) values (S_MY_TABLE.NEXTVAL, 'DATA3');
insert into MY_TABLE (MY_ID, MY_COLUMN) values (S_MY_TABLE.NEXTVAL, 'DATA4');
insert into MY_TABLE (MY_COLUMN) values ('DATA5');
/
select * from MY_TABLE;
I have Oracle Database 11g Enterprise Edition Release 11.2.0.1.0.
I have a parent table t1 and a table t2 with a foreign key which references t1(col1).
What I'm wondering is why the locking happens.
Please check what I've done...
session 1
SQL> create table t1(col1 char(1), primary key(col1));
Table created.
SQL> insert into t1 values('1');
1 row created.
SQL> insert into t1 values('2');
1 row created.
SQL> insert into t1 values('3');
1 row created.
SQL> insert into t1 values('4');
1 row created.
SQL> insert into t1 values('5');
1 row created.
SQL> commit;
Commit complete.
SQL> create table t2(col1 char(1), col2 char(2), foreign key(col1) references t1(col1));
Table created.
SQL> insert into t2 values('1','0');
1 row created.
SQL> commit;
Commit complete.
SQL> update t2 set col2='9'; --not committed yet!
1 row updated.
session 2
SQL> delete from t1; -- Lock happens here!!!
session 1
SQL> commit;
Commit complete.
session 2
delete from t1 -- The error occurs after I commit the update in session 1.
*
ERROR at line 1:
ORA-02292: integrity constraint (KMS_USER.SYS_C0013643) violated - child record found
Could anyone explain to me why this happens?
delete from t1; tries to lock the child table, T2. While the session is waiting on that full table lock, it can't even try to delete anything yet.
This unusual locking behavior occurs because you have an unindexed foreign key.
If you create an index, create index t2_idx on t2(col1);, you will get the ORA-02292 error instead of the lock, as in the sketch below.
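A rough sketch of that variation, assuming the same tables and data as in the question and a fresh run of the test:
create index t2_idx on t2(col1);

-- session 1, left uncommitted as before:
update t2 set col2='9';

-- session 2: no table-level wait this time; the delete fails immediately
-- because a committed child row still references t1.col1 = '1'
delete from t1;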
The lock is coming from your line: insert into t2 values('1','0'); The lock does not happen when you delete from t1 in session 2.
Think about it. Once you insert this row in session 1, there is a reference from t2.col1 back to t1.col1. The foreign key has been validated at that point and Oracle knows that the '1' exists in t1. If session 2 could delete that row from t1, then session 2 would have an uncommitted row in t2 that had an invalid reference to t1, which doesn't make any sense.
I'm having a weird problem with an index-organized table. I'm running Oracle 11g Standard.
I have a table src_table:
SQL> desc src_table;
Name Null? Type
--------------- -------- ----------------------------
ID NOT NULL NUMBER(16)
HASH NOT NULL NUMBER(3)
........
SQL> select count(*) from src_table;
COUNT(*)
----------
21108244
Now let's create another table and copy 2 columns from src_table:
set timing on
SQL> create table dest_table(id number(16), hash number(20), type number(1));
Table created.
Elapsed: 00:00:00.01
SQL> insert /*+ APPEND */ into dest_table (id,hash,type) select id, hash, 1 from src_table;
21108244 rows created.
Elapsed: 00:00:15.25
SQL> ALTER TABLE dest_table ADD ( CONSTRAINT dest_table_pk PRIMARY KEY (HASH, id, TYPE));
Table altered.
Elapsed: 00:01:17.35
It took Oracle < 2 min.
Now the same exercise but with an IOT table:
SQL> CREATE TABLE dest_table_iot (
id NUMBER(16) NOT NULL,
hash NUMBER(20) NOT NULL,
type NUMBER(1) NOT NULL,
CONSTRAINT dest_table_iot_PK PRIMARY KEY (HASH, id, TYPE)
) ORGANIZATION INDEX;
Table created.
Elapsed: 00:00:00.03
SQL> INSERT /*+ APPEND */ INTO dest_table_iot (HASH,id,TYPE)
SELECT HASH, id, 1
FROM src_table;
"insert" into IOT takes 18 hours !!! I have tried it on 2 different instances of Oracle running on win and linux and got same results.
What is going on here ? Why is it taking so long ?
The APPEND hint is only useful for a heap-organized table.
When you insert into an IOT, I suspect that each row has to be inserted into the real index structure separately, causing a lot of re-balancing of the index.
When you build the index on a heap table, a temp segment is used and I'm guessing that this allows it to reduce the re-balancing overhead that would otherwise take place.
I suspect that if you created an empty, heap-organized table with the primary key, and did the same insert without the APPEND hint, it would take more like the 18 hours.
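A rough sketch of that experiment (dest_table_heap is just an illustrative name): heap table, primary key in place before the load, conventional-path insert.
CREATE TABLE dest_table_heap (
  id   NUMBER(16) NOT NULL,
  hash NUMBER(20) NOT NULL,
  type NUMBER(1)  NOT NULL,
  CONSTRAINT dest_table_heap_pk PRIMARY KEY (hash, id, type)
);

-- no APPEND hint, so the index is maintained row by row during the load
INSERT INTO dest_table_heap (hash, id, type)
SELECT hash, id, 1
FROM src_table;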
You might try putting an ORDER BY on your SELECT and see how that affects the performance of the insert into the IOT. It's not guaranteed to be an improvement by any means, but it might be.
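Something along these lines, sorting the source rows to match the IOT's primary key column order so they arrive roughly in index order:
INSERT INTO dest_table_iot (hash, id, type)
SELECT hash, id, 1
FROM src_table
ORDER BY hash, id;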