ORA-00054 resource busy error - Oracle

I have a job running in production that abends randomly with error ORA-00054. When I checked the code, I could see that the presence of NOWAIT is causing the issue. I decided to test it and wrote an anonymous block as follows.
declare
  cursor c1 is
    select * from table
    where column_1 = 2 and column_2 = 2 and column_3 = 6
    for update of column_4 nowait;
  record_locked exception;
  pragma exception_init (record_locked, -54);
begin
  begin
    open c1;
  exception
    when record_locked then
      dbms_output.put_line('Faced a locked record. Waiting for 2 minutes..');
      dbms_lock.sleep(120);
      open c1;
  end;
exception
  when others then
    dbms_output.put_line('Exception Occurred ' || SQLCODE || SQLERRM);
end;
/
I opened one session and ran the following query:
select * from table
where column_1=2 and column_2=2 and column_3=6
for update of column_4 nowait;
I didn't commit or roll back and kept the session open. Then I ran the anonymous block above in another session. After waiting for 2 minutes, it failed with the ORA-00054 error, so I believe my assumption is correct.
The thing is, when I ran the entire job code containing the first anonymous block in the test environment in the same manner, it waited for the locked records for a long time without abending. When I released the lock by rolling back, it updated the records and completed successfully.
I would like to know why.

You get different results from test and production because your table contents are different. Your test table doesn't contain any rows that match the where clause, therefore your sessions don't block each other.
Try adding at least one row to your test table that matches the criterion, and you should get the same results as in production.
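For example, assuming a stand-in table name (my_table here, since the question redacts the real name) with the same columns, something like this in the test environment should reproduce the production behaviour:
-- session 1: create a row matching the WHERE clause and lock it
insert into my_table (column_1, column_2, column_3, column_4) values (2, 2, 6, 0);
commit;

select * from my_table
where column_1 = 2 and column_2 = 2 and column_3 = 6
for update of column_4 nowait;
-- leave this session open without committing or rolling back

-- session 2: run the job / anonymous block; it should now wait and then fail with ORA-00054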

Related

NHibernate TooManyRowsAffectedException Oracle

There is a trigger on a table that periodically performs an insert and throws a TooManyRowsAffectedException. In SQL Server, we can set NOCOUNT in the trigger to solve the issue. Any ideas in Oracle?
FluentNHibernate 2.12
.net 4.7.2
Oracle 11g
I think you are talking about two different things:
SET NOCOUNT ON
Stops the message that shows the count of the number of rows affected
by a Transact-SQL statement or stored procedure from being returned as
part of the result set. When SET NOCOUNT is ON, the count is not returned. When SET NOCOUNT is OFF, the count is returned.
I guess you are talking about the Too_Many_Rows exception in Oracle, as Hibernate's TooManyRowsAffectedException indicates that more rows were affected than we expected. This typically indicates the presence of duplicate "PK" values in the given table.
The TOO_MANY_ROWS Exception (ORA-01422) occurs when a SELECT INTO
statement returns more than one row.
An exception in Oracle is handled within a PL/SQL program in the exception section; its counterpart in SQL Server would be TRY...CATCH.
When a program in Oracle contains an exception block, you can handle one or many specific errors and change their outcome, thereby controlling what the program should do when an error happens.
An exception handler basically answers a simple question: when an error happens, what do you do with it?
exception
when ... then ...
Let me show you an example
SQL> create table t ( c1 number , c2 number ) ;
Table created.
SQL> alter table t add primary key (c1) ;
Table altered.
SQL> set timing off
SQL> declare
begin
insert into t values ( 1 , 1 );
commit ;
insert into t values ( 1 , 2 );
commit;
exception
when dup_val_on_index then null;
when others then raise;
end;
/
PL/SQL procedure successfully completed.
SQL> select * from t ;
C1 C2
---------- ----------
1 1
As you can see in that example, I just handled the dup_val_on_index exception to prevent the program from throwing an error. I could have done the same for a too_many_rows exception.
Basically, if you want to ignore the exception, you can change the code in your trigger so that no error is raised when the exception happens, as sketched below. You can also disable the trigger, but I guess that might not be an option.
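A minimal sketch of that idea, with hypothetical table, column and trigger names (the real trigger logic would replace the lookup shown here):
create or replace trigger trg_orders_audit
after insert on orders
for each row
declare
  l_name customers.name%type;
begin
  -- this SELECT INTO raises TOO_MANY_ROWS if customer_id is duplicated
  select name
  into l_name
  from customers
  where customer_id = :new.customer_id;

  insert into order_audit (order_id, customer_name)
  values (:new.order_id, l_name);
exception
  when too_many_rows then
    null;  -- swallow the error instead of letting it reach the client
end;
/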
You only need to realise that the two things are different. SET NOCOUNT ON prevents the affected-rows message from being delivered to the client program. An exception in Oracle PL/SQL is used to control and handle a specific error.

How to handle ORA-00028 "your session has been killed" in Oracle 12c (12.2.0.1.0)?

The problem is that handling error ORA-00028 is rather tricky. Please look at the code below.
If you run proc1 in session 1 and, while it's still running, you kill session 1 with ALTER SYSTEM KILL SESSION, then you get an ORA-00028 error message and no row in the llog table.
If you run proc1 and let it finish (1 min), then error handling works as expected and you get no error message and one row in the llog table. The funny thing is that if you then run proc1 again and kill that session, you get no error message (ORA-00028 is handled) and one more row in the llog table.
So for ORA-00028 to be handled in the exception clause, you need to catch some other error first. It seems to be a bug. Has anyone faced this problem?
/* creating simple table with logs */
create table llog(time timestamp, error varchar2(4000));
/
/* creating package */
create or replace package my_pack
is
  procedure proc1;
end;
/
/* creating package body */
create or replace package body my_pack
is
  e_session_killed exception;
  pragma exception_init(e_session_killed, -00028);

  procedure error_log (time llog.time%type, error llog.error%type) is
    pragma autonomous_transaction;
  begin
    insert into llog values (time, error);
    commit;
  end;

  procedure proc1 is
  begin
    dbms_lock.sleep(60);
    raise too_many_rows;
  exception
    when e_session_killed then
      error_log(systimestamp, sqlerrm);
    when others then
      error_log(systimestamp, sqlerrm);
  end;
end;
/
You can't catch a kill session. It interrupts the current operation (as much as it can - there might be some low-level operations that cause issues), rolling back the open transaction(s). Once the rollback is complete, the client is told that it is disconnected (assuming the client is still there) and the process goes away.
There are a couple of variants of kill session that affect the order of those steps, but you're not going to be able to insert anything into any table from a killed session.
The only exception might be through a database link or similar, where you actually have two separate sessions/processes going on at the same time.

Is there any alternative to SKIP LOCKED / FOR UPDATE in Oracle?

I have 5 rows in a table, and some of the rows are locked by other sessions.
I don't want to generate any error; I just want to wait until a row becomes free for further processing.
I tried NOWAIT and SKIP LOCKED:
NOWAIT: there is a problem with NOWAIT. The query is written in a cursor, and when I used NOWAIT in the cursor, the query returned nothing and control went out with a "resource busy" error.
SKIP LOCKED with FOR UPDATE: if the table contains 5 rows and all 5 rows are locked, then it gives an error.
CURSOR cur_name_test IS
SELECT def.id , def.name
FROM def_map def
WHERE def.id = In_id
FOR UPDATE skip locked;
Why not use SELECT FOR UPDATE only? The below was tested locally in PL/SQL Developer.
In the first session I run the following:
SELECT id , name
FROM ex_employee
FOR UPDATE;
In the second session I run the following; however, it hangs:
SET serveroutput ON size 2000
/
begin
  declare
    cursor cur_name_test is
      select id, name
      from ex_employee
      where id = 1
      for update;
  begin
    for i in cur_name_test loop
      dbms_output.put_line('inside cursor');
    end loop;
  end;
end;
/
commit
/
When I commit in the first session, the lock is released and the second session does its work. I guess that what you want is an infinite wait.
However, such a locking mechanism (pessimistic locking) can lead to deadlocks if it's not managed correctly and carefully (first session waiting on the second session, and second session waiting on the first).
As for NOWAIT, it's normal to get the "resource busy" error, because you are telling the query not to wait if there is a lock. You could instead use WAIT 30, which will wait 30 seconds and then raise an error, but that's not what you want (I guess).
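For reference, the WAIT variant mentioned above would look something like this on the cursor from the question (30 seconds is just an example timeout; ORA-30006 is raised when it expires):
CURSOR cur_name_test IS
  SELECT def.id, def.name
  FROM def_map def
  WHERE def.id = In_id
  FOR UPDATE WAIT 30;   -- wait up to 30 seconds for the lock, then raise ORA-30006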
As for SKIP LOCKED, the select will skip the locked data. For example, if you have 5 rows and one of them is locked, then the select will not read that row. That's why, when all the data is locked, it throws an error: there is nothing left that can be read. I guess that is not what you want in your scenario either.
This sounds like you need to think about transaction control.
If you are doing work in a transaction then the implication is that that unit of work needs to complete in order to be valid.
What you are saying is that some of the work in my update transaction doesn't need to complete in order for the transaction to be committed.
Not only that, but you have two transactions running at the same time performing operations against the same object. In itself that may be valid, but if it is, then you really need to go back to the first sentence, think hard about transaction control and process flow, and see if there's a way to have the second transaction only attempt to update rows that aren't being updated in the first transaction.
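If the requirement really is "wait until some row becomes free", one possible pattern (only a sketch, reusing the cursor from the question; In_id is hard-coded here and dbms_lock.sleep is assumed to be available) is to retry the SKIP LOCKED cursor until it returns something:
declare
  In_id constant number := 1;   -- would be a parameter in the real procedure
  cursor cur_name_test is
    select def.id, def.name
    from def_map def
    where def.id = In_id
    for update skip locked;
  l_processed boolean := false;
begin
  while not l_processed loop
    for rec in cur_name_test loop
      l_processed := true;
      -- ... process rec here ...
      null;
    end loop;
    if l_processed then
      commit;                  -- releases the row locks taken by FOR UPDATE
    else
      dbms_lock.sleep(5);      -- every matching row was locked; wait and retry
    end if;
  end loop;
end;
/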

bulk collect using "for update"

I ran into an interesting and unexpected issue when processing records in Oracle (11g) using BULK COLLECT.
The following code was running great, processing all million-plus records without an issue:
-- Define cursor
cursor My_Data_Cur Is
  Select col1
        ,col2
  from My_Table_1;
…
-- Open the cursor
open My_Data_Cur;

-- Loop through all the records in the cursor
loop
  -- Read the first group of records
  fetch My_Data_Cur
    bulk collect into My_Data_Rec
    limit 100;

  -- Exit when there are no more records to process
  Exit when My_Data_Rec.count = 0;

  -- Loop through the records in the group
  for idx in 1 .. My_Data_Rec.count
  loop
    … do work here to populate a records to be inserted into My_Table_2 …
  end loop;

  -- Insert the records into the second table
  forall idx in 1 .. My_Data_Rec.count
    insert into My_Table_2…;

  -- Delete the records just processed from the source table
  forall idx in 1 .. My_Data_Rec.count
    delete from My_Table_1 …;

  commit;
end loop;
Since at the end of processing each group of 100 records (limit 100) we are deleting the records just read and processed, I thought it would be a good idea to add the “for update” syntax to the cursor definition so that another process couldn’t update any of the records between the time the data was read and the time the record is deleted.
So, the only thing in the code I changed was…
cursor My_Data_Cur
is
select col1
,col2
from My_Table_1
for update;
When I ran the PL/SQL package after this change, the job only processes 100 records and then terminates. I confirmed this change was causing the issue by removing the “for update” from the cursor and once again the package processed all of the records from the source table.
Any ideas why adding the “for update” clause would cause this change in behavior? Any suggestions on how to get around this issue? I’m going to try starting an exclusive transaction on the table at the beginning of the process, but this isn’t an ideal solution because I really don’t want to lock the entire table while processing the data.
Thanks in advance for your help,
Grant
The problem is that you're trying to do a fetch across a commit.
When you open My_Data_Cur with the for update clause, Oracle has to lock every row in the My_Table_1 table before it can return any rows. When you commit, Oracle has to release all those locks (the locks Oracle creates do not span transactions). Since the cursor no longer has the locks that you requested, Oracle has to close the cursor, since it can no longer satisfy the for update clause. The second fetch, therefore, must return 0 rows.
The most logical approach would almost always be to remove the commit and do the entire thing in a single transaction. If you really, really, really need separate transactions, you would need to open and close the cursor for every iteration of the loop. Most likely, you'd want to do something to restrict the cursor to only return 100 rows every time it is opened (i.e. a rownum <= 100 clause) so that you wouldn't incur the expense of visiting every row to place the lock and then every row other than the 100 that you processed and deleted to release the lock every time through the loop.
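A sketch of that second approach (names taken from the question; the predicate that selects which rows to lock is up to you) would be to re-open a rownum-limited FOR UPDATE cursor on every pass, so the cursor is always closed before the commit:
declare
  cursor My_Data_Cur is
    select col1, col2
    from My_Table_1
    where rownum <= 100      -- lock and fetch at most one batch per transaction
    for update;
  type t_tab is table of My_Data_Cur%rowtype;
  My_Data_Rec t_tab;
begin
  loop
    open My_Data_Cur;
    fetch My_Data_Cur bulk collect into My_Data_Rec;
    close My_Data_Cur;
    exit when My_Data_Rec.count = 0;

    -- ... do the work, FORALL insert into My_Table_2, FORALL delete from My_Table_1 ...

    commit;                  -- safe: the FOR UPDATE cursor is already closed
  end loop;
end;
/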
Adding to Justin's explanation:
You should have seen the error message below. Not sure if your exception handler suppressed this.
And the message itself explains a lot!
For this kind of update, it is better to create a shadow copy of the main table and let the public synonym point to it, while the batch job creates a private synonym to the main table and performs the batch operations against that, to keep maintenance simpler.
Error report -
ORA-01002: fetch out of sequence
ORA-06512: at line 7
01002. 00000 - "fetch out of sequence"
*Cause: This error means that a fetch has been attempted from a cursor
which is no longer valid. Note that a PL/SQL cursor loop
implicitly does fetches, and thus may also cause this error.
There are a number of possible causes for this error, including:
1) Fetching from a cursor after the last row has been retrieved
and the ORA-1403 error returned.
2) If the cursor has been opened with the FOR UPDATE clause,
fetching after a COMMIT has been issued will return the error.
3) Rebinding any placeholders in the SQL statement, then issuing
a fetch before reexecuting the statement.
*Action: 1) Do not issue a fetch statement after the last row has been
retrieved - there are no more rows to fetch.
2) Do not issue a COMMIT inside a fetch loop for a cursor
that has been opened FOR UPDATE.
3) Reexecute the statement after rebinding, then attempt to
fetch again.
Also, you can change your logic by using ROWID.
An example from the docs:
DECLARE
  -- if "FOR UPDATE OF salary" is included on the following line, an error is raised
  CURSOR c1 IS SELECT e.*, rowid FROM employees e;
  emp_rec c1%ROWTYPE;
BEGIN
  OPEN c1;
  LOOP
    FETCH c1 INTO emp_rec;   -- FETCH fails on the second iteration with FOR UPDATE
    EXIT WHEN c1%NOTFOUND;
    IF emp_rec.employee_id = 105 THEN
      UPDATE employees SET salary = salary * 1.05 WHERE rowid = emp_rec.rowid;
      -- this mimics WHERE CURRENT OF c1
    END IF;
    COMMIT;                  -- releases locks
  END LOOP;
END;
/
You have to fetch the records row by row, update each one using the ROWID, and COMMIT immediately, then proceed to the next row.
But by doing this, you have to give up the bulk binding option.

Populating a database in PostgreSQL

The following page of the PostgreSQL documentation, http://www.postgresql.org/docs/8.3/interactive/populate.html, says that to disable autocommit in PostgreSQL you can simply place all insert statements within BEGIN; and COMMIT;.
However, I have difficulty capturing any exceptions that may happen between the BEGIN; and COMMIT;, and if an error occurs (like trying to insert a duplicate PK) I have no way to explicitly call the ROLLBACK or COMMIT commands. Although all insert statements are automatically rolled back, PostgreSQL still expects an explicit call to either COMMIT or ROLLBACK before it considers the transaction terminated. Otherwise, the script has to wait for the transaction to time out, and any statements executed thereafter will raise an error.
In a stored procedure you can use the EXCEPTION clause to do this, but the same does not apply in my circumstance of performing bulk inserts. I tried it, and the exception block did not work for me because the statements executed after the error takes place fail with the error:
ERROR: current transaction is aborted, commands ignored until end of transaction block
The transaction remains open as it has not been explicitly finalised with a call to COMMIT or ROLLBACK;
Here is a sample of the code I used to test this:
BEGIN;
SET search_path TO testing;
INSERT INTO friends (id, name) VALUES (1, 'asd');
INSERT INTO friends (id, name) VALUES (2, 'abcd');
INSERT INTO friends (id, nsame) VALUES (2, 'abcd'); /*note the deliberate mistake in attribute name and also the deliberately repeated pk value number 2*/
EXCEPTION /* this part does not work for me */
WHEN OTHERS THEN
ROLLBACK;
COMMIT;
When using such a technique, do I really have to guarantee that all statements will succeed? Why is this so? Isn't there a way to trap errors and explicitly call a rollback?
Thank you
If you do it between BEGIN and COMMIT, then everything is automatically rolled back in case of an exception.
Excerpt from the url you posted:
"An additional benefit of doing all insertions in one transaction is that if the insertion of one row were to fail then the insertion of all rows inserted up to that point would be rolled back, so you won't be stuck with partially loaded data."
When I initialize databases, i.e. create a series of tables/views/functions/triggers/etc. and/or load in the initial data, I always use psql and its variables to control the flow. I always add:
\set ON_ERROR_STOP
to the top of my scripts, so whenever I hit any exception, psql will abort. It looks like this might help in your case too.
And in cases when I need to do some exception handling, I use anonymous code blocks like this:
DO $$
DECLARE
  _rec record;
BEGIN
  FOR _rec IN SELECT * FROM information_schema.schemata WHERE schema_name != 'master' LOOP
    EXECUTE 'DROP SCHEMA '||_rec.schema_name||' CASCADE';
  END LOOP;
EXCEPTION WHEN others THEN
  NULL;
END;$$;
DROP SCHEMA master CASCADE;
