How to find locked rows in Oracle

We have an Oracle database, and the customer account table has about a million rows. Over the years, we've built four different UIs (two in Oracle Forms, two in .Net), all of which remain in use. We have a number of background tasks (both persistent and scheduled) as well.
Something is occasionally holding a long lock (say, more than 30 seconds) on a row in the account table, which causes one of the persistent background tasks to fail. The background task in question restarts itself once the update times out. We find out about it a few minutes after it happens, but by then the lock has been released.
We have reason to believe that it might be a misbehaving UI, but haven't been able to find a "smoking gun".
I've found some queries that list blocks, but that's for when you've got two jobs contending for a row. I want to know which rows have locks when there's not necessarily a second job trying to get a lock.
We're on 11g, but have been experiencing the problem since 8i.

Oracle's locking concept is quite different from that of other database systems.
When a row in Oracle gets locked, the record itself is updated with the new value (if any) and, in addition, a lock (which is essentially a pointer to the transaction lock that resides in the rollback segment) is placed right into the record.
This means that locking a record in Oracle means updating the record's metadata and issuing a logical page write. For instance, you cannot do SELECT FOR UPDATE on a read-only tablespace.
More than that, the records themselves are not updated after commit: instead, the rollback segment is updated.
This means that each record holds some information about the transaction that last updated it, even if that transaction has long since died. To find out whether the transaction is still alive (and, hence, whether the record is still locked), it is required to visit the rollback segment.
Oracle does not have a traditional lock manager, and this means that obtaining a list of all locks requires scanning all records in all objects. This would take too long.
You can obtain some special locks, like locked metadata objects (using v$locked_object), lock waits (using v$session) etc, but not the list of all locks on all objects in the database.

You can find the locked tables in Oracle with the following query:
select
    c.owner,
    c.object_name,
    c.object_type,
    b.sid,
    b.serial#,
    b.status,
    b.osuser,
    b.machine
from
    v$locked_object a,
    v$session b,
    dba_objects c
where
    b.sid = a.session_id
and a.object_id = c.object_id;

Rather than locks, I suggest you look at long-running transactions, using v$transaction. From there you can join to v$session, which should give you an idea about the UI (try the program and machine columns) as well as the user.
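In case it's useful, here is a minimal sketch of that join (the column list is just one reasonable choice, not prescriptive):
--Open transactions and the sessions that own them.
select s.sid,
       s.serial#,
       s.username,
       s.program,
       s.machine,
       t.start_time,
       t.used_ublk
from   v$transaction t
       join v$session s on s.taddr = t.addr;
start_time shows how long the transaction has been open, and used_ublk gives a rough measure of how much work it has done.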

Look at dba_blockers, dba_waiters and dba_locks for locking. The names should be self-explanatory.
You could create a job that runs, say, once a minute and logs the values in dba_blockers along with the current active sql_id for those sessions (via v$session and v$sqlstats), as sketched below.
You may also want to look at v$sql_monitor. By default this logs all SQL that takes longer than 5 seconds. It is also visible on the "SQL Monitoring" page in Enterprise Manager.
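A rough sketch of such a polling job. The table LOCK_LOG and job name LOG_BLOCKERS are hypothetical, and note that inside stored PL/SQL the owner needs direct grants on the V$ views, not just a role:
--Hypothetical log table for once-a-minute samples of blockers.
create table lock_log(
    sampled_at  timestamp,
    holding_sid number,
    sql_id      varchar2(13)
);

begin
    dbms_scheduler.create_job(
        job_name        => 'LOG_BLOCKERS',
        job_type        => 'PLSQL_BLOCK',
        job_action      => 'begin
                                insert into lock_log
                                    select systimestamp, b.holding_session, s.sql_id
                                    from   dba_blockers b
                                    join   v$session    s on s.sid = b.holding_session;
                                commit;
                            end;',
        repeat_interval => 'FREQ=MINUTELY',
        enabled         => true);
end;
/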

The PL/SQL block below finds all locked rows in a table. The other answers only find the blocking session; finding the actual locked rows requires reading and testing each row.
(However, you probably do not need to run this code. If you're having a locking problem, it's usually easier to find the culprit using GV$SESSION.BLOCKING_SESSION and other related data dictionary views. Please try another approach before you run this abysmally slow code.)
First, let's create a sample table and some data. Run this in session #1.
--Sample schema.
create table test_locking(a number);
insert into test_locking values(1);
insert into test_locking values(2);
commit;
update test_locking set a = a+1 where a = 1;
In session #2, create a table to hold the locked ROWIDs.
--Create table to hold locked ROWIDs.
create table locked_rowids(the_rowid rowid);
--Remove old rows if table is already created:
--delete from locked_rowids;
--commit;
In session #2, run this PL/SQL block to read the entire table, probe each row, and store the locked ROWIDs. Be warned, this may be ridiculously slow. In your real version of this query, change both references to TEST_LOCKING to your own table.
--Save all locked ROWIDs from a table.
--WARNING: This PL/SQL block will be slow and will temporarily lock rows.
--You probably don't need this information - it's usually good enough to know
--what other sessions are locking a statement, which you can find in
--GV$SESSION.BLOCKING_SESSION.
declare
    v_resource_busy exception;
    pragma exception_init(v_resource_busy, -00054);
    v_throwaway number;
    type rowid_nt is table of rowid;
    v_rowids rowid_nt := rowid_nt();
begin
    --Loop through all the rows in the table.
    for all_rows in
    (
        select rowid
        from test_locking
    ) loop
        --Try to lock each row.
        begin
            select 1
            into v_throwaway
            from test_locking
            where rowid = all_rows.rowid
            for update nowait;
        --If the row is already locked, record the ROWID.
        exception when v_resource_busy then
            v_rowids.extend;
            v_rowids(v_rowids.count) := all_rows.rowid;
        end;
        rollback;
    end loop;
    --Display the count:
    dbms_output.put_line('Rows locked: '||v_rowids.count);
    --Save all the ROWIDs.
    --(Row-by-row because the ROWID type is weird and doesn't work in types.)
    for i in 1 .. v_rowids.count loop
        insert into locked_rowids values(v_rowids(i));
    end loop;
    commit;
end;
/
Finally, we can view the locked rows by joining to the LOCKED_ROWIDS table.
--Display locked rows.
select *
from test_locking
where rowid in (select the_rowid from locked_rowids);
A
-
1

Given some table, you can find which rows are not locked with SELECT ... FOR UPDATE SKIP LOCKED.
For example, this query will lock (and return) every unlocked row:
SELECT * FROM mytable FOR UPDATE SKIP LOCKED
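By elimination, any row that SKIP LOCKED does not return is locked by someone else. A sketch of that idea (MYTABLE is a placeholder; note that it temporarily locks every row it does return):
declare
    v_unlocked number := 0;
begin
    for r in (select rowid rid from mytable for update skip locked) loop
        v_unlocked := v_unlocked + 1;  --these rows were free (and are now ours)
    end loop;
    dbms_output.put_line('Unlocked rows: ' || v_unlocked);
    rollback;  --release the locks we just took
end;
/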
References
Ask TOM "How to get ROWID for locked rows in oracle".

Related

Oracle parallel query returns before actual job finishes

A VB 6 program is processing records and inserting them into a temporary table; the records are then moved from this temporary table to the actual table with
connection.Execute "INSERT INTO MAIN_TABLE SELECT * FROM TEMP_TABLE"
The temporary table is then truncated once the records are moved:
connection.Execute "TRUNCATE TABLE TEMP_TABLE"
This works fine until I use the PARALLEL hint for the INSERT query. I then receive this error on TRUNCATE:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
It looks to me as though the parallel query returns before completing the job and the TRUNCATE command is issued, causing the lock.
I checked the number of records inserted, as below, and found that it is far less than the number of records in the temporary table:
connection.Execute "INSERT /*+ PARALLEL */ INTO MAIN_TABLE SELECT * FROM TEMP_TABLE", recordsAffected
Is there any way to wait for the INSERT to complete?
DELETE may be slower, but TRUNCATE is DDL, which you can't run at the same time as DML. In fact, TRUNCATE requires exclusive access to the table, while DML on a table requests a share-mode lock on it; that share-mode lock is what prevents DDL against the table at the same time.
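If you want to see that share-mode (TM) lock while the parallel INSERT's transaction is still open, something like this sketch should show it:
--Sessions holding locks on the two tables involved.
select o.object_name,
       l.locked_mode,
       s.sid,
       s.serial#
from   v$locked_object l
       join dba_objects o on o.object_id = l.object_id
       join v$session   s on s.sid = l.session_id
where  o.object_name in ('MAIN_TABLE', 'TEMP_TABLE');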
A possible alternative solution would be to use synonyms. Say you have your table A and a synonym S pointing to A:
create table B as select * from A where 1=0;
create or replace synonym S for B;
Your app now uses B (through S) instead of A, so you can do what you want with A.
Do this every time you want to "truncate".
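A sketch of the swap cycle, assuming the app only ever references the synonym S:
--Initially: S points to A, and the app writes through S into A.
--When you want to "truncate":
create or replace synonym S for B;  --new work now goes into the empty copy B
truncate table A;                   --safe once in-flight transactions on A finish
--Next time, swap back: repoint S to A and truncate B.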
This assumes you're using ADO - though I now notice you don't have
that tag in your question.
Can you monitor the connection state with a loop, waiting for the execution to finish?
Something like
EDIT - Fix Boolean Add to use + instead of "AND"
While Conn.State = (adStateOpen + adStateExecuting)
DoEvents
Sleep 500 ' uses Sleep API command to delay 1/2 second
Wend
Sleep API declare
Edit - Add Asynch Hint/Option
Also - it might help the ADO connection to give it a hint that it's running asynchronously, by adding adAsyncExecute to the end of your execute command,
i.e. change the execute SQL command to look like
conn.execute sqlString, recordsaffected, adAsyncExecute

Why does my cursor-based copying of a table leave the undo tablespace completely occupied even after the procedure finishes?

I am copying a lot of data from one table to another in a procedure, using a cursor to iterate over the data of the first table, holding the rows in an array, and filling the other table afterwards in a limited batch size.
I do realize that there are better ways to do this, but I developed it this way because of language restrictions. On to my code:
PROCEDURE copy_tableA_into_tableB IS
    TYPE tableA_array IS TABLE OF TABLE_A%ROWTYPE;
    tableA_initialized_array tableA_array;
    CURSOR table_a_cursor IS
        SELECT *
        FROM TABLE_A; --get all data from TABLE_A
BEGIN
    OPEN table_a_cursor;
    LOOP
        FETCH table_a_cursor BULK COLLECT
            INTO tableA_initialized_array LIMIT 10000;
        FORALL i IN 1 .. tableA_initialized_array.COUNT
            INSERT INTO TABLE_B
            VALUES tableA_initialized_array (i);
        COMMIT;
        EXIT WHEN table_a_cursor%NOTFOUND;
    END LOOP;
    CLOSE table_a_cursor;
END copy_tableA_into_tableB;
This code works just fine, except for one weird thing that happens:
when I execute it a couple of times with a bulk of data like 1.5 million table rows, the UNDO tablespace gets bigger and bigger, and even though the procedure is done, the space is still allocated by my procedure. Eventually my UNDO tablespace is full and I get an exception. In fact, I can only drop my UNDO tablespace and rebuild it to get it unoccupied and empty again.
As you can clearly see, I am committing every time I go through the array, so why would the UNDO tablespace still be allocated after the transaction is done?
I am not enough of an Oracle expert to understand the underlying concept, but I thought my cursor is deallocated when it's closed, so I don't think it is the culprit. I am using Oracle version 11g, if this is of any concern.
I expected that when I am done with the procedure, my UNDO tablespace would be deallocated again, yet checking it shows the space is still occupied.
EDIT, for questions asked in the comments:
I am checking how much UNDO tablespace I have left; the numbers add up to the amount of data processed, and I am the only one running programs.
The exception: ORA-01555: snapshot too old: rollback segment number with name "" too small
I am testing the procedure in PL/SQL Developer, under a stored procedure test.
I am not resetting anything, just emptying the tables I want to copy into with a truncate.
First off, I don't understand what you mean by "language restrictions". Why would that prevent you from doing a straight insert?
Secondly, you're committing across a fetch loop - seriously, a bad, bad idea. You could find yourself getting "snapshot too old" errors.
Thirdly, for your answer: Oracle doesn't immediately free up space in UNDO once you've finished with it - it's marked as no longer needed (something that you do with that commit, hence why fetching across commits is a bad idea!), and if another session needs the space then it's overwritten.
Tom Kyte, as ever, says it best
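If you want to watch this while the procedure runs, a sketch that shows how much undo each open transaction is currently holding:
--Undo blocks (used_ublk) and undo records (used_urec) per open transaction.
select s.sid,
       s.username,
       t.used_ublk,
       t.used_urec
from   v$transaction t
       join v$session s on s.taddr = t.addr;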
Is it not because the opening statement is:
CURSOR table_a_cursor IS
SELECT *
FROM TABLE_A; --get all data from TABLE_A
Undo is generated and retained so that this query can remain read-consistent, and it can't be released until the cursor has completed (by which time the UNDO tablespace is full).
Something like this should work (some modification to the loop may be required - personally, I would commit after every 'x' rows processed rather than only at the very end):
PROCEDURE copy_tableA_into_tableB AS
    TYPE tableA_array IS TABLE OF TABLE_A%ROWTYPE;
    tableA_initialized_array tableA_array;
    CURSOR table_a_cursor IS
        SELECT *
        FROM TABLE_A;
BEGIN
    OPEN table_a_cursor;
    LOOP
        FETCH table_a_cursor BULK COLLECT
            INTO tableA_initialized_array LIMIT 10000;
        FORALL i IN 1 .. tableA_initialized_array.COUNT
            INSERT INTO TABLE_B
            VALUES tableA_initialized_array (i);
        EXIT WHEN table_a_cursor%NOTFOUND;
    END LOOP;
    CLOSE table_a_cursor;
    COMMIT; --One commit after the copy (or commit every N batches if you must).
END copy_tableA_into_tableB;
Oracle will not reuse the space while there is an active transaction there; that explains why your tablespace appears completely occupied.
There is more info in this: Undo tablespace keeps growing.
If you want to know how much space is occupied:
select tablespace_name, sum(bytes) from dba_segments group by tablespace_name;
And this way you can find the actual size of the data files:
select tablespace_name, sum(bytes) from dba_data_files group by tablespace_name;

Can a table be locked by another table?

We're encountering issues on our Oracle 11g database regarding a table lock.
We have a procedure that is executed via SQL*Plus, which truncates a table, let's say table1.
We sometimes get an ORA-00054: resource busy and acquire with NOWAIT error during execution of the procedure, at the part where the table is to be truncated. We have a webapp in a Tomcat server; when it is restarted (to kill its sessions to the database), the procedure can be re-executed successfully.
table1 isn't used, not even in a select, in the source code for the webapp, but a lot of table1's parent tables are.
So is it possible that an uncommitted update to one of its parent tables is causing the lock? If so, any suggestions on how I can test it?
I've checked with the DBA during times when we encounter the issue, but he can't get the session that is blocking the procedure or the statement that caused the lock.
Yes, an update of a parent table will get a lock on the child table. Below is a test case demonstrating it is possible.
Finding and tracing a specific intermittent locking issue can be painful. Even if you can't trace down the specific conditions it would be a good idea to modify any code to avoid concurrent DML and DDL. Not only can it cause locking issues, it can also break SELECT statements.
If removing the concurrency is out of the question, you may at least want to enable DDL_LOCK_TIMEOUT so that the truncate statement will wait for the lock instead of instantly failing: alter session set ddl_lock_timeout = 100000;
--Create parent/child tables, with or without an indexed foreign key.
create table parent_table(a number primary key);
insert into parent_table values(1);
insert into parent_table values(2);
create table child_table(a number references parent_table(a));
insert into child_table values(1);
commit;
--Session 1: Update parent table.
begin
loop
update parent_table set a = 2 where a = 2;
commit;
end loop;
end;
/
--Session 2: Truncate child table. Eventually it will throw this error:
--ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
begin
loop
execute immediate 'truncate table child_table';
end loop;
end;
/
any suggestions on how I can test it?
You can check the blocking sessions when you get ORA-00054: resource busy and acquire with NOWAIT error.
Blocking sessions occur when one session holds an exclusive lock on an object and doesn't release it before another session wants to update the same data. This blocks the second session until the first one does a COMMIT or ROLLBACK.
SELECT
s.blocking_session,
s.sid,
s.serial#,
s.seconds_in_wait
FROM
v$session s
WHERE
blocking_session IS NOT NULL;
For example, see my similar answer here and here.

Showing rows that are locked in Oracle

Using Oracle, is it possible to indicate which rows are currently locked (and which are not) when performing a select statement (I don't want to lock any rows, just be able to display which are locked)?
For example, a pseudo column that would return the lock/transaction against the row:
SELECT lockname FROM emp;
One thing you could do is this - although it is not terribly efficient, and so I wouldn't want to use it for large data sets. Create a row-level function to try and lock the row. If it fails, then the row is already locked:
CREATE OR REPLACE FUNCTION is_row_locked (v_rowid ROWID, table_name VARCHAR2)
RETURN varchar2
IS
x NUMBER;
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
EXECUTE IMMEDIATE 'Begin
Select 1 into :x from '
|| table_name
|| ' where rowid =:v_rowid for update nowait;
Exception
When Others Then
:x:=null;
End;'
USING OUT x, v_rowid;
-- now release the lock if we got it.
ROLLBACK;
IF x = 1
THEN
RETURN 'N';
ELSIF x IS NULL
THEN
RETURN 'Y';
END IF;
END;
/
And then you could
Select field1, field2, is_row_locked(rowid, 'MYTABLE') from mytable;
It will work, but it isn't pretty or efficient.
Indeed, it has exactly one redeeming quality - it will work even if you don't have select privs on the various v$ tables required in the linked document. If you have the privs, though, definitely go the other route.
is it possible to indicate which rows are currently locked (and which are not) when performing a select statement
A SELECT statement will never lock any rows - unless you ask it to by using FOR UPDATE.
If you want to see locks that are held due to a SELECT ... FOR UPDATE (or a real update), you can query the v$lock system view.
See the link that OMG Ponies posted for an example of how to use that view.
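For instance, a sketch of the usual starting point - TX (transaction) locks and who holds or waits on them:
--TX locks: lmode > 0 means held, request > 0 means waiting, block = 1 means blocking others.
select s.sid,
       s.serial#,
       s.username,
       l.type,
       l.lmode,
       l.request,
       l.block
from   v$lock l
       join v$session s on s.sid = l.sid
where  l.type = 'TX';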
I think @Michael Broughton's answer is the only way that will always work. This is because V$LOCK is not accurate 100% of the time.
Sessions don't wait for a row, they wait for the end of the transaction that modified that row. Most of the time those two concepts are the same thing, but not when you start using savepoints.
For example:
1. Session 1 creates a savepoint and modifies a row.
2. Session 2 tries to modify that same row, but sees that session 1 already has it, and waits for session 1 to finish.
3. Session 1 rolls back to the savepoint. This removes its entry from the ITL but does not end the transaction. Session 2 is still waiting on session 1. According to V$LOCK, session 2 is still waiting on that row, but that's not really true, because now session 3 can modify that row. (And if session 1 executes a commit or rollback, session 2 will wait on session 3.)
Sorry if that's confusing. You may want to step through the link provided by OMG Ponies, and then try it again with savepoints, as sketched below.
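If you want to reproduce that scenario yourself, here is a sketch across three sessions, assuming a hypothetical table t with a row where id = 1:
--Session 1:
savepoint sp1;
update t set x = 1 where id = 1;

--Session 2 (this statement now waits on session 1):
update t set x = 2 where id = 1;

--Session 1 (the row lock is gone, but the transaction is still open):
rollback to savepoint sp1;

--Session 3 can now update the row, while session 2 keeps waiting on session 1.
update t set x = 3 where id = 1;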

How to delete large data from Oracle 9i DB?

I have a table that is 5 GB, and I was trying to delete from it like below:
delete from tablename
where to_char(screatetime,'yyyy-mm-dd') < '2009-06-01'
But it keeps running with no response. Meanwhile, I tried to check whether anybody is blocking it with the query below:
select l1.sid, ' IS BLOCKING ', l2.sid
from v$lock l1, v$lock l2
where l1.block =1 and l2.request > 0
and l1.id1=l2.id1
and l1.id2=l2.id2
But I didn't find any blocking also.
How can I delete this large data without any problem?
5GB is not a useful measurement of table size. The total number of rows matters. The number of rows you are going to delete as a proportion of the total matters. The average length of the row matters.
If the proportion of the rows to be deleted is tiny it may be worth your while creating an index on screatetime which you will drop afterwards. This may mean your entire operation takes longer, but crucially, it will reduce the time it takes for you to delete the rows.
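For example (the index name here is made up; note the delete must compare the bare column rather than to_char(screatetime) for the index to be usable):
--Temporary index to speed up the delete; drop it afterwards.
create index tmp_screatetime_ix on tablename(screatetime);

delete from tablename
where screatetime < to_date('2009-06-01','yyyy-mm-dd');

drop index tmp_screatetime_ix;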
On the other hand, if you are deleting a large chunk of rows you might find it better to
1. Create a copy of the table:
create table t1_copy as select * from t1
where screatetime >= to_date('2009-06-01','yyyy-mm-dd');
2. Swap the tables using the rename command, as shown below.
3. Re-apply the constraints and indexes to the new t1.
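The swap in step 2 might look like this (t1_old is a hypothetical name):
rename t1 to t1_old;
rename t1_copy to t1;
--Now re-create the constraints, indexes, grants and triggers on the new t1,
--and drop t1_old once you are satisfied.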
Another thing to bear in mind is that deletions eat more UNDO than other transactions, because they take more information to rollback. So if your records are long and/or numerous then your DBA may need to check the UNDO tablespace (or rollback segs if you're still using them).
Finally, have you done any investigation to see where the time is actually going? DELETE statements are just another query, and they can be tackled using the normal panoply of tuning tricks.
Use a query condition to export necessary rows
Truncate table
Import rows
If there is an index on screatetime your query may not be using it. Change your statement so that your where clause can use the index.
delete from tablename where screatetime < to_date('2009-06-01','yyyy-mm-dd')
It runs MUCH faster when you lock the table first. Also change the where clause, as suggested by Rene.
LOCK TABLE tablename IN EXCLUSIVE MODE;
DELETE FROM tablename
where screatetime < to_date('2009-06-01','yyyy-mm-dd');
EDIT: If the table cannot be locked, because it is constantly accessed, you can choose the salami tactic to delete those rows:
BEGIN
    LOOP
        DELETE FROM tablename
        WHERE screatetime < to_date('2009-06-01','yyyy-mm-dd')
        AND ROWNUM <= 10000;
        EXIT WHEN SQL%ROWCOUNT = 0;
        COMMIT;
    END LOOP;
END;
Overall, this will be slower, but it won't burst your rollback segment, and you can see the progress in another session (i.e. the number of rows in tablename goes down). And if you have to kill it for some reason, the rollback won't take forever, and you haven't lost all the work done so far.
