I've got two Oracle queries running in different sessions, which are deadlocking, and I'm having trouble seeing why that's happening.
The query in session 1 is this:
UPDATE REFS R SET R.REFS_NAME = :B2 WHERE R.REFS_CODE = :B1
The query in session 2 is this:
UPDATE REFS R SET R.STATUS_CODE = :B3, R.STATUS_TYPE = :B2 WHERE R.REFS_CODE = :B1
Each query is wrapped in a cursor that loops through a selection of primary key values. When these queries run at the same time, they deadlock. REFS_CODE is the primary key, and the Oracle trace shows that they're updating different rowids. The primary key is indexed, obviously, and there are some foreign key constraints, which are supported by indexes, as unindexed foreign keys have been a problem for us in the past.
Moving into the realm of desperation, I've tried disabling triggers on the table, and that didn't help. Also tried using autonomous transactions, but that made things much worse.
Is there something I'm missing? Thanks for any help!
If a commit happens only after the entire cursor batch is updated, then it may just be a straightforward deadlock scenario where the two cursors are operating on the same rows but in a different order.
Assume session 1 has cursor set 1 and is updating refs_code 1 and refs_code 2 in that order before attempting a commit.
Assume session 2 has cursor set 2 and is updating refs_code 2 and refs_code 1 in that order before attempting a commit.
Then, interleaving the updates:
time   cursor set 1   cursor set 2
====   ============   ============
t1     refs_code 1    -
t2     -              refs_code 2
t3     refs_code 2    -
t4     -              refs_code 1
at t3, cursor set 1 is waiting on cursor set 2 to commit refs_code 2
at t4, cursor set 2 is waiting on cursor set 1 to commit refs_code 1
This is also consistent with the trace showing the two transactions waiting on different rowids. If that is the case, you may be able to add an ORDER BY (in the same direction) to both cursors to help avoid this.
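A minimal sketch (the selection criterion is hypothetical; the point is simply that both sessions visit the rows in the same ascending REFS_CODE order):

DECLARE
  CURSOR c_refs IS
    SELECT refs_code
    FROM   refs
    WHERE  status_type = 'PENDING'   -- each session's own selection criteria
    ORDER  BY refs_code;             -- same direction in both sessions
BEGIN
  FOR r IN c_refs LOOP
    UPDATE refs
    SET    refs_name = r.refs_code || '-renamed'   -- session 2 would set status_code/status_type instead
    WHERE  refs_code = r.refs_code;
  END LOOP;
  COMMIT;
END;
/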
Related
Is there a checksum-like function in Oracle SQL? I would like to store its result in a table and use it as a key/value for update statements. One of my current processes involves 15 columns, and I need to check whether anything has changed between source and destination. Instead of comparing the columns one by one to see whether a change happened, I would like a single column in the table that tells me whether any of the 15 columns changed.
Databases are good at keeping track of changes; that's what they are for.
Maybe the built-in ORA_ROWSCN is good enough for your change tracking? By default it tracks changes only per block, but you can make a table track the change for each row with the - oddly named - clause ROWDEPENDENCIES:
CREATE TABLE t (
a NUMBER,
b NUMBER,
c NUMBER
) ROWDEPENDENCIES;
INSERT INTO t VALUES (1,1,1);
INSERT INTO t VALUES (2,2,2);
SELECT current_scn FROM v$database;
2380496
After a while:
UPDATE t SET b=20, c=20 WHERE a=2;
SELECT current_scn FROM v$database;
2380665
Now you can query the system change number (SCN) at which each row was last changed:
SELECT ORA_ROWSCN AS scn, a,b,c FROM t;
SCN      A  B   C
2380496  1  1   1
2380665  2  20  20
For roughly the last 5 days, Oracle can translate an SCN into an approximate real time (to within ± 5 minutes in Oracle 10, and ± 3 seconds from Oracle 11 onward):
SELECT ORA_ROWSCN AS SCN, SCN_TO_TIMESTAMP(ORA_ROWSCN) AS ts, a,b,c FROM t;
SCN      TS                   A  B   C
2380496  2020-05-15 20:20:38  1  1   1
2380665  2020-05-15 20:23:06  2  20  20
It's documented in Oracle's SQL Language Reference.
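For your change check, one hedged sketch: record the SCN at the time of your last sync (e.g. from v$database.current_scn, as above) and then pick up only rows whose ORA_ROWSCN is newer, instead of comparing all 15 columns. Using the SCN captured earlier in this example:

SELECT a, b, c
FROM   t
WHERE  ORA_ROWSCN > 2380496;   -- only the row updated since that snapshot comes back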
I'm confused about how long Oracle 10g XE takes to perform an insert. I implemented a bulk insert from an XML file into several tables, with programmatic transaction management. Why does one insert complete in a moment while another takes more than 10 minutes? I can't wait any longer and have to stop it. I think there's something more complex going on that I haven't paid attention to yet.
Update:
I found a lock using Monitor.
Waits
Event enq: TX - row lock contention
name|mode 1415053316
usn<<16 | slot 327711
sequence 162
SQL
INSERT INTO ESKD$SERVICESET (ID, TOUR_ID, CURRENCY_ID) VALUES (9, 9, 1)
What does it mean and how should I resolve it?
TX enqueues are well known, and a quick Google search will give you a clear answer.
From that article:
1) Waits for TX in mode 6 occurs when a session is waiting for a row level lock that is already held by another session. This occurs when one user is updating or deleting a row, which another session wishes to update or delete. This type of TX enqueue wait corresponds to the wait event enq: TX - row lock contention.
If you have lots of simultaneous inserts and updates to a table, you want each transaction to be as short as possible. Get in, get out... the longer things sit in between, the longer the delays for OTHER transactions.
PURE GUESS:
I have a feeling that your mention of "programmatic transaction management" means you're trying to use a table like a QUEUE: inserting a start record, updating it frequently to change the status, and then deleting the 'finished' ones. That is always trouble.
This question will be really hard to answer with so little specific information. All that I can tell you is why this could be.
If you are doing an INSERT ... SELECT ... bulk insert, then perhaps your SELECT query is performing poorly. There may be a large number of table joins, inefficient use of inline views, or other factors negatively impacting the performance of your INSERT.
Try executing your SELECT query through an Explain Plan to see how the optimizer is deriving the plan and to evaluate the COST of the query.
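For example (the staging table in the SELECT is hypothetical; substitute your real source query):

EXPLAIN PLAN FOR
  INSERT INTO eskd$serviceset (id, tour_id, currency_id)
  SELECT id, tour_id, currency_id FROM staging_serviceset;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);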
The other thing that you mentioned was a possible lock. This could be the case; however, you will need to analyze it with the OEM tool to tell for sure.
Another thing to consider may be that you do not have indexes on your tables OR the statistics on these tables may be out of date. Out of date statistics can GREATLY impact the performance of queries on large tables.
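If in doubt, regathering the statistics is cheap to try; a minimal sketch using the table from the question:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,
    tabname => 'ESKD$SERVICESET',
    cascade => TRUE);   -- also refresh the statistics on its indexes
END;
/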
see sites.google.com/site/embtdbo/wait-event-documentation/oracle-enqueues
The locking wait indicates a conflict that could easily be the cause of your performance issues. On the surface it looks likely that the problem is inserting a duplicate key value while the first insert of that key value had not yet committed. The lock you see, "enq: TX - row lock contention", happens because one session is trying to modify uncommitted data from another session. There are 4 common reasons for this particular lock wait event:
update/delete of the same row
inserting the same unique key
modifying the same bitmap index chunk
deleting/updating a parent value to a foreign key
We can eliminate the first and last cases, as you are doing an insert.
You should be able to identify the second case if there are no bitmap indexes involved. If you have both bitmap indexes and unique keys involved, then you could investigate easily if you had Active Session History (ASH) data, but unfortunately Oracle XE doesn't have it. On the other hand, you can collect it yourself with S-ASH, see: http://ashmasters.com/ash-simulation/ . With ASH or S-ASH you can run a query like
col event for a22
col block_type for a18
col objn for a18
col otype for a10
col fn for 99
col sid for 9999
col bsid for 9999
col lm for 99
col p3 for 99999
col blockn for 99999
select
to_char(sample_time,'HH:MI') st,
substr(event,0,20) event,
ash.session_id sid,
mod(ash.p1,16) lm,
ash.p2,
ash.p3,
nvl(o.object_name,ash.current_obj#) objn,
substr(o.object_type,0,10) otype,
CURRENT_FILE# fn,
CURRENT_BLOCK# blockn,
ash.SQL_ID,
BLOCKING_SESSION bsid
--,ash.xid
from v$active_session_history ash,
all_objects o
where event like 'enq: TX %'
and o.object_id (+)= ash.CURRENT_OBJ#
Order by sample_time
/
Which would output something like:
ST EVENT SID LM P2 P3 OBJ OTYPE FN BLOCKN SQL_ID BSID
10:41 enq: TX - row lock c 143 4 966081 4598 I1 INDEX 0 0 azav296xxqcjx 144
10:41 enq: TX - row lock c 143 4 966081 4598 I1 INDEX 0 0 azav296xxqcjx 144
10:41 enq: TX - row lock c 143 4 966081 4598 I1 INDEX 0 0 azav296xxqcjx 144
10:41 enq: TX - row lock c 143 4 966081 4598 I1 INDEX 0 0 azav296xxqcjx 144
showing the object name ("OBJ") and object type ("OTYPE") involved in the contention, and that the type is an INDEX. From there you could look up the index to verify that it is a bitmap index.
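For example, a quick dictionary query to check the index types on the table from the question:

SELECT index_name, index_type, uniqueness
FROM   all_indexes
WHERE  table_name = 'ESKD$SERVICESET';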
IF the problem is a bitmap index, then you should probably re-evaluate using bitmap indexes or revisit the way that data is loaded and/or modify to reduce conflicts.
If the problem isn't BITMAP indexes, then it's trying to insert a duplicate key. Some other process had inserted the same key value and not yet committed. Then your process tries to insert the same key value and has to wait for the first session to commit or rollback.
For more information see this link: lock waits
It means your sequence cache is too small. Increase it.
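For example (the sequence name is hypothetical and 1000 is just a starting point to experiment with):

ALTER SEQUENCE serviceset_seq CACHE 1000;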
I am trying to execute a query like
select * from tableName where rownum=1
This query is basically to fetch the column names of the table. There are more than a million records in the table. When I add the above condition, it takes a very long time to fetch the first row. Is there any alternative way to get the first row?
This question has already been answered; I will just explain why a ROWNUM = 1 or ROWNUM <= 1 filter can sometimes result in a long response time.
When encountering a ROWNUM filter (on a single table), the optimizer will produce a FULL SCAN with COUNT STOPKEY. This means that Oracle will start to read rows until it encounters the first N rows (here N=1). A full scan reads blocks from the first extent to the high water mark. Oracle has no way to determine which blocks contain rows and which don't beforehand, all blocks will therefore be read until N rows are found. If the first blocks are empty, it could result in many reads.
Consider the following:
SQL> /* rows will take a lot of space because of the CHAR column */
SQL> create table example (id number, fill char(2000));
Table created
SQL> insert into example
2 select rownum, 'x' from all_objects where rownum <= 100000;
100000 rows inserted
SQL> commit;
Commit complete
SQL> delete from example where id <= 99000;
99000 rows deleted
SQL> set timing on
SQL> set autotrace traceonly
SQL> select * from example where rownum = 1;
Elapsed: 00:00:05.01
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=7 Card=1 Bytes=2015)
1 0 COUNT (STOPKEY)
2 1 TABLE ACCESS (FULL) OF 'EXAMPLE' (TABLE) (Cost=7 Card=1588 [..])
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
33211 consistent gets
25901 physical reads
0 redo size
2237 bytes sent via SQL*Net to client
278 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
As you can see, the number of consistent gets is extremely high (for a single row). This situation can arise when, for example, you insert rows with the /*+ APPEND */ hint (thus above the high water mark) and also delete the oldest rows periodically, resulting in a lot of empty space at the beginning of the segment.
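In the demo above, one hedged way to reclaim those empty leading blocks is to shrink the segment (this requires an ASSM tablespace) or rebuild it:

ALTER TABLE example ENABLE ROW MOVEMENT;
ALTER TABLE example SHRINK SPACE;   -- or: ALTER TABLE example MOVE; (then rebuild its indexes)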
Try this:
select * from tableName where rownum<=1
There are some weird ROWNUM bugs; sometimes changing the query very slightly will fix it. I've seen this happen before, but I can't reproduce it.
Here are some discussions of similar issues: http://jonathanlewis.wordpress.com/2008/03/09/cursor_sharing/ and http://forums.oracle.com/forums/thread.jspa?threadID=946740&tstart=1
Surely Oracle has meta-data tables that you can use to get column names, like the sysibm.syscolumns table in DB2?
And, after a quick web search, that appears to be the case: see ALL_TAB_COLUMNS.
I'd use those rather than go to the actual table, something like (untested):
SELECT COLUMN_NAME
FROM ALL_TAB_COLUMNS
WHERE TABLE_NAME = "MYTABLE"
ORDER BY COLUMN_NAME;
If you are hell-bent on finding out why your query is slow, you should revert to the standard method: asking your DBMS to explain the execution plan of the query for you. For Oracle, see section 9 of this document.
There's a conversation over at Ask Tom - Oracle that seems to suggest the row numbers are assigned after the select phase, which may mean the query is retrieving all rows anyway. The explain plan will probably help establish that: if it contains a FULL scan without COUNT STOPKEY, that may explain the performance.
Beyond that, my knowledge of Oracle specifics diminishes and you will have to analyse the explain further.
Your query is doing a full table scan and then returning the first row.
Try
SELECT * FROM table WHERE primary_key = primary_key_value;
The first row, particularly as it pertains to ROWNUM, is arbitrarily decided by Oracle. It may not be the same from query to query, unless you provide an ORDER BY clause.
So, picking a primary key value to filter by is as good a method as any to get a single row.
I think you're slightly missing the concept of ROWNUM - according to Oracle docs: "ROWNUM is a pseudo-column that returns a row's position in a result set. ROWNUM is evaluated AFTER records are selected from the database and BEFORE the execution of ORDER BY clause."
So it returns ANY row that it considers #1 in the result set, which in your case will contain 1M rows.
You may want to check out a ROWID pseudo-column: http://psoug.org/reference/pseudocols.html
I've recently had the same problem you're describing: I want one row from the very large table as a quick, dirty, simple introspection, and "where rownum=1" alone behaves very poorly. Below is a remedy which worked for me.
Select the max() of the first term of some index, and then use it to choose some small fraction of all rows with "rownum=1". Suppose my table has some index on the numerical column "group_id"; compare this:
select * from my_table where rownum = 1;
-- Elapsed: 00:00:23.69
with this:
select * from my_table where rownum = 1
and group_id = (select max(group_id) from my_table);
-- Elapsed: 00:00:00.01
I have a situation like the following join table:
A_ID B_ID
1 27
1 314
1 5
I need to put a constraint on the table that will prevent a duplicate group from being entered. In other words:
A_ID B_ID
2 27
2 314
2 5
should fail, but
A_ID B_ID
3 27
3 314
should succeed, because it's a distinct group.
The 2 ways I've thought of are:
Pivot the table into a materialized view based upon the ordering and put a unique key on the pivot fields. I don't like this because in Oracle I have to limit the number of rows in a group, due to both the pivoting rules and the 32-column index limitation (I thought of a way around this second problem, but still).
Create some unique hash value on the combination of the B_IDs and make that unique. Maybe I'm not enough of a mathematician, but I can't think of a way to do this that doesn't limit the number of values that I can use for B_ID.
I feel like there's something obvious I'm missing here, like I could just add some sort of an ordering column and set a different unique key, but I've done quite a bit of reading and haven't come up with anything. It might also be that the data model I inherited is flawed, but I can't think of anything that would give me similar flexibility.
Firstly a regular constraint can't work.
If the set with A_ID of 1 exists, and then session 1 inserts a record with A_ID 2 and B_ID of 27, session 2 inserts (2,314) and session 3 inserts (2,5), then none of those would see a conflict to cause a constraint violation. Triggers won't work either. Equally, if a set existed of (6,99), then it would be difficult for another session to create a new set of (6,99,300).
The MV with 'refresh on commit' could work, preventing the last session from successfully committing. I'd look more at the hashing option, summing up the hashed B_IDs for each A_ID:
select table_name, sum(ora_hash(column_id)), count(*)
from user_tab_columns
group by table_name
While hash collisions are possible, they are very unlikely.
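Applied to the join table in the question (here assumed to be called GROUP_MEMBERS), a sketch that flags any A_IDs whose member sets share the same signature, i.e. the same hash sum and the same member count:

SELECT b_hash, members, COUNT(*) AS groups_with_this_signature
FROM  (SELECT a_id,
              SUM(ORA_HASH(b_id)) AS b_hash,
              COUNT(*)            AS members
       FROM   group_members
       GROUP  BY a_id)
GROUP BY b_hash, members
HAVING COUNT(*) > 1;

A unique key on that (hash sum, member count) pair in the refresh-on-commit materialized view is what would turn the check into an enforced constraint.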
If you are on 11g, check out LISTAGG too:
select table_name, listagg(column_id||':') within group (order by column_id)
from user_tab_columns
group by table_name
I have multiple threads that are processing rows from the same table that is - in fact - a queue.
I want each row to be handled by one thread only. So I've added a column, "IsInProccess", and in the threads' SELECT statement I've added "WHERE IsInProccess = 0".
In addition, I use "SELECT FOR UPDATE" so that after a thread gets a row from the table, no other thread will get it before it puts 1 in "IsInProccess".
The problem is that I have many threads, and very often the following scenario happens:
Thread A selects with "SELECT FOR UPDATE" from the table and gets row no. 1. Before it changes IsInProccess to 1, thread B selects in the same way from the table and gets row no. 1 too. Oracle has locked row no. 1 for thread A's session, so thread B can't change this row and returns an error - "Fail to be fetched".
I want Oracle to return, when a thread selects from the table, only rows that are not locked by another open session.
Can I do that?
This is a sketch of a solution that I've seen used very successfully before:
Use the SELECT FOR UPDATE NOWAIT syntax so that if the session cannot get a lock on a row immediately it raises an exception instead of waiting for the lock. The exception handler could wait a few seconds (e.g. with dbms_lock.sleep); the whole block could be wrapped in a loop which tries again a certain number of times before it gives up.
Add WHERE ROWNUM<=n to the query so that it only tries to get a certain number of rows (e.g. 1) at a time; if it succeeds, update the rows as "in process".
A good way to mark rows as "in process" (that I've seen used successfully) is to have two columns - SID and SERIAL# - and update these columns with the SID and SERIAL# for the current session.
In case a session fails while it has the rows marked as "in process", another process could "clean up" the rows that were marked as "in process" by searching for any rows that have SID/SERIAL# that are not found as active sessions in v$session.
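A hedged sketch of the whole pattern, assuming a hypothetical QUEUE_TAB(id, payload, sid, serial#) table; the retry count and sleep time are arbitrary, and querying v$session and calling DBMS_LOCK need the corresponding grants:

DECLARE
  row_locked EXCEPTION;
  PRAGMA EXCEPTION_INIT(row_locked, -54);   -- ORA-00054: resource busy (raised by NOWAIT)
  l_id queue_tab.id%TYPE;
BEGIN
  FOR attempt IN 1 .. 3 LOOP
    BEGIN
      -- try to grab one unclaimed row without waiting on other sessions' locks
      SELECT id INTO l_id
      FROM   queue_tab
      WHERE  sid IS NULL
      AND    ROWNUM <= 1
      FOR UPDATE NOWAIT;

      -- mark it as "in process" with this session's identity
      UPDATE queue_tab
      SET    sid     = SYS_CONTEXT('USERENV', 'SID'),
             serial# = (SELECT serial# FROM v$session
                        WHERE  sid = SYS_CONTEXT('USERENV', 'SID'))
      WHERE  id = l_id;

      EXIT;   -- got a row, go process it
    EXCEPTION
      WHEN row_locked OR NO_DATA_FOUND THEN
        DBMS_LOCK.SLEEP(2);   -- back off briefly, then retry
    END;
  END LOOP;
END;
/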
Oracle have already solved this for you: use the Advanced Queueing API
If you have 11g, look at SKIP LOCKED
It is there, but undocumented (and therefore unsupported and maybe buggy) in 10g.
That way, when Session A locks the row, Session B can skip it and process the next.
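A hedged example against the same hypothetical QUEUE_TAB: rows locked by another session are simply passed over instead of raising an error. (Note that ROWNUM is applied before the lock attempt, so it's usually better to limit how many rows you fetch from the cursor than to put ROWNUM in the WHERE clause.)

SELECT id
FROM   queue_tab
WHERE  sid IS NULL
FOR UPDATE SKIP LOCKED;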
One solution:
Put the select and update queries into a transaction. This sort of problem is exactly why transactions were invented.
You also need to worry about "orphan" rows - e.g. a thread picks up the row and then dies without finishing the work. To solve that, one solution is to have 2 columns: "IsInProcess" and "StartprocessingTime".
isInProcess will have 3 values: 0 (not processed), 1 (picked up), 2 (done).
The original transaction will set the row's "isInProcess" to 1 and "StartprocessingTime" to the current time, and the SELECT will also add this to the WHERE clause (assuming you can specify a valid timeout period):
"WHERE isInProcess = 0 OR (isInProcess = 1 AND StartprocessingTime < getdate()-timeout)".
Please note that the syntax above is not ORACLE, just pseudo-code.
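An Oracle-flavoured version of the same predicate might look like this (assuming StartProcessingTime is a DATE and a 10-minute timeout):

WHERE IsInProcess = 0
   OR (IsInProcess = 1 AND StartProcessingTime < SYSDATE - INTERVAL '10' MINUTE)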