Why do we need the WHERE CURRENT OF clause in Oracle PL/SQL? We all know that FETCH retrieves only one row at a time, which is why FETCH is used in a LOOP to process all the rows of a cursor. So why do we specifically need a WHERE CURRENT OF clause? We can already lock the cursor rows using FOR UPDATE or FOR UPDATE OF.
Can rows that are locked by FOR UPDATE or FOR UPDATE OF be unlocked once we close the cursor? Or do we need to COMMIT or ROLLBACK the transaction to unlock the rows?
Have a look at this block:
DECLARE
  CURSOR c1 IS
    SELECT course_number, ROWID AS RID
      FROM courses_tbl
       FOR UPDATE;
BEGIN
  FOR aCourse IN c1 LOOP
    UPDATE courses_tbl SET course_number = aCourse.course_number + 1
     WHERE CURRENT OF c1;
    UPDATE courses_tbl SET course_number = aCourse.course_number + 1
     WHERE ROWID = aCourse.RID;
  END LOOP;
END;
The two UPDATE statements are equivalent: WHERE CURRENT OF ... is just a shortcut for WHERE ROWID = ..., so you can use either of them.
Actually, your question should be "Why do we need FOR UPDATE ...?" The reason is that the ROWID may be changed by other operations, e.g. ALTER TABLE ... SHRINK SPACE, a move to another tablespace, or big DMLs. FOR UPDATE locks the row, i.e. it ensures that the ROWID does not change until you finish your transaction.
No, you can release the lock only by finishing the transaction, i.e. with ROLLBACK or COMMIT.
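A minimal sketch to see this for yourself, assuming the courses_tbl table from the block above (FOR UPDATE acquires the row locks when the cursor is opened, and closing the cursor does not give them back):
-- Session 1: acquire the locks, then close the cursor without committing.
DECLARE
  CURSOR c1 IS SELECT course_number FROM courses_tbl FOR UPDATE;
BEGIN
  OPEN c1;   -- all rows of courses_tbl are locked here
  CLOSE c1;  -- cursor closed, but the row locks remain
END;
/
-- Session 2: this still blocks until session 1 issues COMMIT or ROLLBACK.
SELECT course_number FROM courses_tbl FOR UPDATE;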
Maybe this will convince you ...
When we want to update or delete the cursor-fetched row(s) from the database, we don't have to form an UPDATE or a DELETE statement with a primary-key mapping in its WHERE clause; instead, the WHERE CURRENT OF clause comes in handy.
Source:
http://www.dba-oracle.com/t_adv_plsql_for_update_where_current.htm
You must commit or rollback your transaction to release locks.
Related
I am copying a lot of data from one table to another in a procedure, using a cursor to iterate over the rows of the source table, holding them in an array, and filling the other table afterwards in a limited batch size.
I realize that there are better ways to do this, but I developed it this way because of language restrictions. On to my code:
PROCEDURE copy_tableA_into_tableB IS
  TYPE tableA_array IS TABLE OF table_a%ROWTYPE;
  tableA_initialized_array tableA_array;
  CURSOR table_a_cursor IS
    SELECT *
      FROM table_a; -- get all data from TABLE_A
BEGIN
  OPEN table_a_cursor;
  LOOP
    FETCH table_a_cursor BULK COLLECT
      INTO tableA_initialized_array LIMIT 10000;
    FORALL i IN 1 .. tableA_initialized_array.COUNT
      INSERT INTO table_b
      VALUES tableA_initialized_array (i);
    COMMIT;
    EXIT WHEN table_a_cursor%NOTFOUND;
  END LOOP;
  CLOSE table_a_cursor;
END copy_tableA_into_tableB;
This code works just fine, except for one weird thing: when I execute it a couple of times with a bulk of data like 1.5 million table rows, the UNDO tablespace gets bigger and bigger, and even though the procedure is done, the space is still allocated by my procedure. Eventually my UNDO tablespace is full and I get an exception. In fact, I can only drop my UNDO tablespace and rebuild it to get it unoccupied and empty again.
As you can clearly see, I am actually committing every time I go through the array, so why would the UNDO tablespace still be allocated after the transaction is done?
I am not an Oracle expert, so I may not understand the underlying concept, but I thought my cursor is deallocated when it is closed, so I don't think it is the culprit. I am using Oracle Version 11g, if that is of any concern.
I expected that when I am done with the procedure, my UNDO tablespace would be deallocated again, but when I check the UNDO tablespace it is still occupied.
EDIT, for the questions so far unanswered:
I am checking how much UNDO tablespace I have left; the occupied space adds up with the amount of data processed, and I am the only one running programs.
Exception: ORA-01555: snapshot too old: rollback segment number with name "" too small
I am testing the procedure in PL/SQL Developer, as a stored procedure test.
I am not resetting anything, just emptying the tables that I want to copy into with a TRUNCATE.
First off, I don't understand what you mean by "language restrictions". Why would that prevent you from doing a straight insert?
Secondly, you're committing across a loop - seriously, a bad, bad idea. You could find yourself getting "snapshot too old" errors.
Thirdly, to answer your question: Oracle doesn't immediately free up space in UNDO once you've finished with it - it's marked as no longer needed (something that you do with that commit, hence why fetching across a commit is a bad idea!), and if another session needs that space then it's overwritten.
Tom Kyte, as ever, says it best
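If you want to watch this happen, one quick check (assuming SELECT privileges on the dictionary views) is to break the undo extents down by status - space released by your commits shows up as UNEXPIRED or EXPIRED rather than being deallocated:
select status, round(sum(bytes)/1024/1024) as mb
  from dba_undo_extents
 group by status;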
Is it not because the opening statement is:
CURSOR table_a_cursor IS
SELECT *
FROM TABLE_A; --get all data from Table_A
Therefore undo is generated so that this query can succeed. The lock on these rows isn't released until after it has completed (by which time the UNDO tablespace is full).
Something like this should work (some modification to the loop may be required - personally, I would check against 'x' rows processed before committing rather than using this method):
PROCEDURE Copy_tableA_into_tableB AS
  TYPE tableA_array IS TABLE OF table_a%ROWTYPE;
  tableA_initialized_array tableA_array;
  l_count INTEGER;
  CURSOR table_a_cursor IS
    SELECT * FROM table_a;
BEGIN
  -- Count rows in TABLE_A
  SELECT COUNT(*) INTO l_count FROM table_a;
  OPEN table_a_cursor;
  WHILE l_count > 0
  LOOP
    FETCH table_a_cursor BULK COLLECT
      INTO tableA_initialized_array LIMIT 10000;
    FORALL i IN 1 .. tableA_initialized_array.COUNT
      INSERT INTO table_b
      VALUES tableA_initialized_array (i);
    COMMIT;
    l_count := l_count - tableA_initialized_array.COUNT;
  END LOOP;
  CLOSE table_a_cursor;
END Copy_tableA_into_tableB;
Oracle will not reuse the space while there is an active transaction there.
That explains why your tablespace is completely occupied.
More info in this answer: Undo tablespace keeps growing
If you want to know how much space is occupied:
select tablespace_name,sum(bytes) from dba_segments group by tablespace_name;
This way you can find the actual size:
select tablespace_name,sum(bytes) from dba_data_files group by tablespace_name;
I ran into an interesting and unexpected issue when processing records in Oracle (11g) using BULK COLLECT.
The following code was running great, processing all million-plus records without an issue:
-- Define cursor
cursor My_Data_Cur Is
Select col1
,col2
from My_Table_1;
…
-- Open the cursor
open My_Data_Cur;
-- Loop through all the records in the cursor
loop
-- Read the first group of records
fetch My_Data_Cur
bulk collect into My_Data_Rec
limit 100;
-- Exit when there are no more records to process
Exit when My_Data_Rec.count = 0;
-- Loop through the records in the group
for idx in 1 .. My_Data_Rec.count
loop
… do work here to populate the records to be inserted into My_Table_2 …
end loop;
-- Insert the records into the second table
forall idx in 1 .. My_Data_Rec.count
insert into My_Table_2…;
-- Delete the records just processed from the source table
forall idx in 1 .. My_Data_Rec.count
delete from My_Table_1 …;
commit;
end loop;
Since at the end of processing each group of 100 records (limit 100) we delete the records just read and processed, I thought it would be a good idea to add the "for update" syntax to the cursor definition so that another process couldn't update any of the records between the time the data was read and the time the record is deleted.
So, the only thing in the code I changed was…
cursor My_Data_Cur
is
select col1
,col2
from My_Table_1
for update;
When I ran the PL/SQL package after this change, the job only processed 100 records and then terminated. I confirmed this change was causing the issue by removing the "for update" from the cursor, and once again the package processed all of the records from the source table.
Any ideas why adding the "for update" clause would cause this change in behavior? Any suggestions on how to get around this issue? I'm going to try starting an exclusive transaction on the table at the beginning of the process, but this isn't an ideal solution because I really don't want to lock the entire table while processing the data.
Thanks in advance for your help,
Grant
The problem is that you're trying to do a fetch across a commit.
When you open My_Data_Cur with the for update clause, Oracle has to lock every row in the My_Table_1 table before it can return any rows. When you commit, Oracle has to release all those locks (the locks Oracle creates do not span transactions). Since the cursor no longer has the locks that you requested, Oracle has to close the cursor because it can no longer satisfy the for update clause. The second fetch, therefore, must return 0 rows.
The most logical approach would almost always be to remove the commit and do the entire thing in a single transaction. If you really, really, really need separate transactions, you would need to open and close the cursor for every iteration of the loop. Most likely, you'd want to do something to restrict the cursor to only return 100 rows every time it is opened (i.e. a rownum <= 100 clause) so that you wouldn't incur the expense of visiting every row to place the lock and then every row other than the 100 that you processed and deleted to release the lock every time through the loop.
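A rough sketch of that second approach, assuming the table and column names from the question (my_table_1, my_table_2, col1, col2) and using parallel collections; the cursor is opened and closed inside the loop, so every commit happens with no FOR UPDATE cursor open:
declare
  cursor my_data_cur is
    select rowid as rid, col1, col2
      from my_table_1
     where rownum <= 100
       for update;
  type rowid_nt is table of rowid;
  type col1_nt is table of my_table_1.col1%type;
  type col2_nt is table of my_table_1.col2%type;
  l_rids  rowid_nt;
  l_col1s col1_nt;
  l_col2s col2_nt;
begin
  loop
    open my_data_cur;   -- locks only these 100 rows
    fetch my_data_cur bulk collect into l_rids, l_col1s, l_col2s;
    close my_data_cur;
    exit when l_rids.count = 0;
    -- ... do work here on the fetched values before inserting ...
    forall idx in 1 .. l_rids.count
      insert into my_table_2 (col1, col2)
      values (l_col1s(idx), l_col2s(idx));
    forall idx in 1 .. l_rids.count
      delete from my_table_1 where rowid = l_rids(idx);
    commit;  -- safe: the for update cursor is already closed
  end loop;
end;
/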
Adding to Justin's explanation:
You should have seen the error message below (not sure if your exception handler suppressed it), and the message itself explains a lot!
For this kind of update, it is better to create a shadow copy of the main table and let the public synonym point to it, while the batch job creates a private synonym to the main table and performs the batch operations against that; this keeps things simpler for maintenance. (A sketch follows the error report below.)
Error report -
ORA-01002: fetch out of sequence
ORA-06512: at line 7
01002. 00000 - "fetch out of sequence"
*Cause: This error means that a fetch has been attempted from a cursor
which is no longer valid. Note that a PL/SQL cursor loop
implicitly does fetches, and thus may also cause this error.
There are a number of possible causes for this error, including:
1) Fetching from a cursor after the last row has been retrieved
and the ORA-1403 error returned.
2) If the cursor has been opened with the FOR UPDATE clause,
fetching after a COMMIT has been issued will return the error.
3) Rebinding any placeholders in the SQL statement, then issuing
a fetch before reexecuting the statement.
*Action: 1) Do not issue a fetch statement after the last row has been
retrieved - there are no more rows to fetch.
2) Do not issue a COMMIT inside a fetch loop for a cursor
that has been opened FOR UPDATE.
3) Reexecute the statement after rebinding, then attempt to
fetch again.
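A loose sketch of that synonym-swap idea; every object name here is made up for illustration:
-- Readers keep using the public synonym, which points at a stable
-- shadow copy while the batch works on the real table.
CREATE TABLE main_tab_shadow AS SELECT * FROM main_tab;
CREATE OR REPLACE PUBLIC SYNONYM main_tab_syn FOR main_tab_shadow;
-- The batch session addresses the real table via a private synonym.
CREATE SYNONYM main_tab_batch FOR main_tab;
-- ... run the batch DML against main_tab_batch ...
-- When done, point the public synonym back at the refreshed table.
CREATE OR REPLACE PUBLIC SYNONYM main_tab_syn FOR main_tab;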
Also, you can change your logic by using ROWID.
An example from the docs:
DECLARE
-- if "FOR UPDATE OF salary" is included on following line, an error is raised
CURSOR c1 IS SELECT e.*,rowid FROM employees e;
emp_rec employees%ROWTYPE;
BEGIN
OPEN c1;
LOOP
FETCH c1 INTO emp_rec; -- FETCH fails on the second iteration with FOR UPDATE
EXIT WHEN c1%NOTFOUND;
IF emp_rec.employee_id = 105 THEN
UPDATE employees SET salary = salary * 1.05 WHERE rowid = emp_rec.rowid;
-- this mimics WHERE CURRENT OF c1
END IF;
COMMIT; -- releases locks
END LOOP;
END;
/
You have to fetch the records row by row, update each one using the ROWID, COMMIT immediately, and then proceed to the next row!
But this way, you have to give up the bulk-binding option.
A PL/SQL job moves older versions of data from a transaction table to a history table of the same structure, archiving them for a certain period:
for each record
insert into tab_hist (select older_versions of current row);
delete from tab (select older_versions of current row);
END
PS: earlier we were not archiving (no insert), but after adding the insert the run time has doubled. So can we accomplish the insert and delete with a single select statement? There is a lot of data to be processed, across multiple tables.
This is a batch operation, right? In which case you should avoid Row By Row and use set processing. SQL is all about The Joy Of Sets.
Oracle has fantastic bulk SQL processing capabilities. The pseudo code you posted would look something like this:
declare
  cursor c_oldrecs is
    select * from your_table
    where criterion between some_date and some_other_date;
  type rec_nt is table of your_table%rowtype;
  oldrecs_coll rec_nt;
begin
  open c_oldrecs;
  loop
    fetch c_oldrecs bulk collect into oldrecs_coll limit 1000;
    exit when oldrecs_coll.count() = 0;
    forall i in oldrecs_coll.first() .. oldrecs_coll.last()
      insert into your_table_hist
      values oldrecs_coll(i);
    forall i in oldrecs_coll.first() .. oldrecs_coll.last()
      delete from your_table
      where pk_col = oldrecs_coll(i).pk_col;
  end loop;
  close c_oldrecs;
end;
/
This bulk processing is faster because it sends one thousand statements to the database at a time, instead of switching between PL/SQL and SQL one thousand times. The LIMIT 1000 clause is there to prevent a really huge selection blowing the PGA. This safeguard may not be necessary in your case, or perhaps you can work with a higher value.
I think your current implementation is wrong. It is better to keep only the current version in the live table, and to keep all the historical versions in a separate table from the off. Use triggers to maintain the history as part of every transaction.
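For instance, a minimal sketch of such a trigger, assuming a your_table_hist table with the same columns as your_table plus a timestamp (all names here are illustrative):
create or replace trigger your_table_hist_trg
  before update or delete on your_table
  for each row
begin
  -- archive the outgoing version of the row as part of the transaction
  insert into your_table_hist (pk_col, col1, archived_at)
  values (:old.pk_col, :old.col1, systimestamp);
end;
/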
It may be that the slowness you are seeing is due to the logic that selects which rows are to be moved. If so, you might get better results by doing the select once to get the rowids into a nested table in memory, then doing the insert and the delete based on that list; or alternatively, driving your loop with a query that selects the rows to be moved.
You might instead consider creating a trigger on insert that will move the existing rows that "match" the row being inserted. This will slow down the inserts somewhat, but would mean you don't need any process to move the old rows in bulk.
If you are on Enterprise edition with the partitioning option, look at partition exchange.
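Partition exchange swaps a partition's segment with a standalone table's segment as a near-instant metadata operation, so no rows are physically copied; a bare illustration (all names invented):
alter table tab
  exchange partition p_2010_old
  with table tab_hist_stage
  including indexes
  without validation;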
As simple as this:
CREATE TABLE BACKUP_TAB AS SELECT * FROM TAB;
If you are deleting a lot of rows, you will be hitting your undo tablespace, and a delete which removes, say, 100k rows can cause performance issues. You are better off deleting in batches of, say, 5k rows at a time and committing.
BEGIN
  -- The WHERE condition on the insert and the delete must be the same,
  -- and it must exclude rows that were already copied; otherwise each
  -- pass of the insert loop picks up the same first 4999 rows again.
  LOOP
    INSERT INTO backup_tab
      SELECT * FROM tab
       WHERE 1=1 -- your condition here
         AND rownum < 5000;
    EXIT WHEN SQL%ROWCOUNT < 4999;
    COMMIT;
  END LOOP;
  LOOP
    DELETE FROM tab
     WHERE 1=1 -- your condition here
       AND rownum < 5000;
    EXIT WHEN SQL%ROWCOUNT < 4999;
    COMMIT;
  END LOOP;
  COMMIT;
END;
Using Oracle, is it possible to indicate which rows are currently locked (and which are not) when performing a select statement (I don't want to lock any rows, just be able to display which are locked)?
For example, a pseudo column that would return the lock/transaction against the row:
SELECT lockname FROM emp;
One thing you could do is this - although it is not terribly efficient, so I wouldn't want to use it for large data sets. Create a row-level function that tries to lock the row. If it fails, then the row is already locked:
CREATE OR REPLACE FUNCTION is_row_locked (v_rowid ROWID, table_name VARCHAR2)
RETURN varchar2
IS
x NUMBER;
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
EXECUTE IMMEDIATE 'Begin
Select 1 into :x from '
|| table_name
|| ' where rowid =:v_rowid for update nowait;
Exception
When Others Then
:x:=null;
End;'
USING OUT x, v_rowid;
-- now release the lock if we got it.
ROLLBACK;
IF x = 1
THEN
RETURN 'N';
ELSIF x IS NULL
THEN
RETURN 'Y';
END IF;
END;
/
And then you could
Select field1, field2, is_row_locked(rowid, 'MYTABLE') from mytable;
It will work, but it is neither pretty nor efficient.
Indeed, it has exactly one redeeming quality - it will work even if you don't have SELECT privileges on the various v$ views required in the linked document. If you have the privileges, though, definitely go the other route.
is it possible to indicate which rows are currently locked (and which are not) when performing a select statement
A SELECT statement will never lock any rows - unless you ask it to by using FOR UPDATE.
If you want to see locks that are held due to a SELECT ... FOR UPDATE (or a real update), you can query the v$lock system view.
See the link that OMG Ponies posted for an example of how to use that view.
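For a quick look without following the link, a query along these lines (SELECT privileges on the V$ views assumed) shows which sessions hold, or are waiting for, row-level transaction (TX) locks:
select s.sid, s.serial#, s.username, l.type, l.lmode, l.request
  from v$lock l
  join v$session s on s.sid = l.sid
 where l.type = 'TX';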
I think @Michael Broughton's answer is the only way that will always work. This is because V$LOCK is not accurate 100% of the time.
Sessions don't wait for a row, they wait for the end of the transaction that modified that row. Most of the time those two concepts are the same thing, but not when you start using savepoints.
For example:
1. Session 1 creates a savepoint and modifies a row.
2. Session 2 tries to modify that same row, but sees that session 1 already has that row, and waits for session 1 to finish.
3. Session 1 rolls back to the savepoint. This removes its entry from the ITL but does not end the transaction. Session 2 is still waiting on session 1. According to V$LOCK, session 2 is still waiting on that row, but that's not really true, because now session 3 can modify that row. (And if session 1 executes a commit or rollback, session 2 will wait on session 3.)
Sorry if that's confusing. You may want to step through the link provided by OMG Ponies, and then try it again with savepoints.
We have an Oracle database, and the customer account table has about a million rows. Over the years, we've built four different UIs (two in Oracle Forms, two in .Net), all of which remain in use. We have a number of background tasks (both persistent and scheduled) as well.
Something is occasionally holding a long lock (say, more than 30 seconds) on a row in the account table, which causes one of the persistent background tasks to fail. The background task in question restarts itself once the update times out. We find out about it a few minutes after it happens, but by then the lock has been released.
We have reason to believe that it might be a misbehaving UI, but haven't been able to find a "smoking gun".
I've found some queries that list blocks, but that's for when you've got two jobs contending for a row. I want to know which rows have locks when there's not necessarily a second job trying to get a lock.
We're on 11g, but have been experiencing the problem since 8i.
Oracle's locking concept is quite different from that of other systems.
When a row in Oracle gets locked, the record itself is updated with the new value (if any) and, in addition, a lock (which is essentially a pointer to the transaction lock that resides in the rollback segment) is placed right into the record.
This means that locking a record in Oracle means updating the record's metadata and issuing a logical page write. For instance, you cannot do SELECT FOR UPDATE on a read only tablespace.
More than that, the records themselves are not updated after commit: instead, the rollback segment is updated.
This means that each record holds some information about the transaction that last updated it, even if the transaction itself has long since died. To find out if the transaction is alive or not (and, hence, if the record is alive or not), it is required to visit the rollback segment.
Oracle does not have a traditional lock manager, and this means that obtaining a list of all locks requires scanning all records in all objects. This would take too long.
You can obtain some special locks, like locked metadata objects (using v$locked_object), lock waits (using v$session) etc, but not the list of all locks on all objects in the database.
You can find the locked tables in Oracle by querying with the following query:
select
c.owner,
c.object_name,
c.object_type,
b.sid,
b.serial#,
b.status,
b.osuser,
b.machine
from
v$locked_object a ,
v$session b,
dba_objects c
where
b.sid = a.session_id
and
a.object_id = c.object_id;
Rather than locks, I suggest you look at long-running transactions, using v$transaction. From there you can join to v$session, which should give you an idea about the UI (try the program and machine columns) as well as the user.
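For example, something along these lines (standard V$ views; the join via s.taddr = t.addr links a session to its active transaction) lists transactions that have been open for more than 30 seconds, together with the program and machine they came from:
select s.sid, s.serial#, s.username, s.program, s.machine, t.start_time
  from v$transaction t
  join v$session s on s.taddr = t.addr
 where t.start_date < sysdate - 30/86400;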
Look at the dba_blockers, dba_waiters and dba_locks for locking. The names should be self explanatory.
You could create a job that runs, say, once a minute and logs the values in dba_blockers and the current active sql_id for those sessions (via v$session and v$sqlstats).
You may also want to look in v$sql_monitor. This will by default log all SQL that takes longer than 5 seconds. It is also visible on the "SQL Monitoring" page in Enterprise Manager.
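Note that V$SQL_MONITOR requires the Diagnostics and Tuning Pack licenses; assuming those are in place, a quick look at recently monitored long-running statements might be:
select sql_id, status, username, elapsed_time/1e6 as elapsed_secs
  from v$sql_monitor
 order by sql_exec_start desc;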
The below PL/SQL block finds all locked rows in a table. The other answers only find the blocking session, finding the actual locked rows requires reading and testing each row.
(However, you probably do not need to run this code. If you're having a locking problem, it's usually easier to find the culprit using GV$SESSION.BLOCKING_SESSION and other related data dictionary views. Please try another approach before you run this abysmally slow code.)
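For reference, that simpler check is just a direct query, no row probing required:
select inst_id, sid, serial#, username, blocking_session, row_wait_obj#
  from gv$session
 where blocking_session is not null;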
First, let's create a sample table and some data. Run this in session #1.
--Sample schema.
create table test_locking(a number);
insert into test_locking values(1);
insert into test_locking values(2);
commit;
update test_locking set a = a+1 where a = 1;
In session #2, create a table to hold the locked ROWIDs.
--Create table to hold locked ROWIDs.
create table locked_rowids(the_rowid rowid);
--Remove old rows if table is already created:
--delete from locked_rowids;
--commit;
In session #2, run this PL/SQL block to read the entire table, probe each row, and store the locked ROWIDs. Be warned, this may be ridiculously slow. In your real version of this query, change both references to TEST_LOCKING to your own table.
--Save all locked ROWIDs from a table.
--WARNING: This PL/SQL block will be slow and will temporarily lock rows.
--You probably don't need this information - it's usually good enough to know
--what other sessions are locking a statement, which you can find in
--GV$SESSION.BLOCKING_SESSION.
declare
v_resource_busy exception;
pragma exception_init(v_resource_busy, -00054);
v_throwaway number;
type rowid_nt is table of rowid;
v_rowids rowid_nt := rowid_nt();
begin
--Loop through all the rows in the table.
for all_rows in
(
select rowid
from test_locking
) loop
--Try to lock each row.
begin
select 1
into v_throwaway
from test_locking
where rowid = all_rows.rowid
for update nowait;
--If it doesn't lock, then record the ROWID.
exception when v_resource_busy then
v_rowids.extend;
v_rowids(v_rowids.count) := all_rows.rowid;
end;
rollback;
end loop;
--Display count:
dbms_output.put_line('Rows locked: '||v_rowids.count);
--Save all the ROWIDs.
--(Row-by-row because ROWID type is weird and doesn't work in types.)
for i in 1 .. v_rowids.count loop
insert into locked_rowids values(v_rowids(i));
end loop;
commit;
end;
/
Finally, we can view the locked rows by joining to the LOCKED_ROWIDS table.
--Display locked rows.
select *
from test_locking
where rowid in (select the_rowid from locked_rowids);
A
-
1
Given some table, you can find which rows are not locked with SELECT ... FOR UPDATE SKIP LOCKED.
For example, this query will lock (and return) every unlocked row:
SELECT * FROM mytable FOR UPDATE SKIP LOCKED
References
Ask Tom: "How to get ROWID for locked rows in Oracle".