Alternative for 'ddl_lock_timeout' in Oracle 10g

In Oracle 11g you can set a session or system parameter called ddl_lock_timeout. It's very useful when you need to execute a statement while the resources it needs are heavily used (it lets you avoid the ORA-00054 exception).
But there is no such parameter in 10g.
Of course, I can use a construction such as:
DECLARE
  START_DATE DATE := SYSDATE;
BEGIN
  LOOP
    IF SYSDATE > START_DATE + 30/60/60/24 THEN
      EXIT;
    END IF;
    BEGIN
      <some statement>
      EXIT;
    EXCEPTION
      WHEN OTHERS THEN
        IF SQLCODE != -54 THEN
          RAISE;
        END IF;
    END;
  END LOOP;
END;
By using it, I keep trying to execute the statement in a loop for 30 seconds, but the problem is that the statement is executed many, many times and could cause some trouble (I'm not sure, but I have a feeling it might), whereas with ddl_lock_timeout the statement is executed only once and then simply waits for the resources, which is much nicer.
Any ideas?

This is an explanation of why LOCK TABLE does not necessarily work.
The LOCK TABLE trick may not work in all situations.
--Session 1:
create table table1(a number);
insert into table1 values(1);
commit;
update table1 set a = 2;

--Session 2:
lock table table1 in exclusive mode;
--<waits...>

--Session 1:
commit;

--Session 2:
--"Table(s) Locked." is returned.

--Session 1:
update table1 set a = 3;
--<-- notice session 1 goes into the queue now.

--Session 2:
--Issue the DDL: it fails with "resource busy with NOWAIT".
The reason is that the DDL first commits the previous transaction before executing. When that commit happens, session 1 gets the lock because it was already waiting in the queue, so the DDL itself then fails.
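Given that, a common 10g-era workaround is still a retry loop, just packaged so the DDL is attempted once per iteration and the whole thing gives up after a timeout. A minimal sketch of such a helper (the procedure name, the default timeout, and the DBMS_LOCK.SLEEP back-off are my choices, not an Oracle feature):

CREATE OR REPLACE PROCEDURE exec_ddl_with_retry (
  p_ddl     IN VARCHAR2,
  p_seconds IN NUMBER DEFAULT 30
) IS
  resource_busy EXCEPTION;
  PRAGMA EXCEPTION_INIT(resource_busy, -54);
  v_deadline DATE := SYSDATE + p_seconds / 86400;
BEGIN
  LOOP
    BEGIN
      EXECUTE IMMEDIATE p_ddl;
      EXIT;                     -- success: leave the retry loop
    EXCEPTION
      WHEN resource_busy THEN
        IF SYSDATE > v_deadline THEN
          RAISE;                -- give up and re-raise ORA-00054 after the timeout
        END IF;
        DBMS_LOCK.SLEEP(1);     -- back off for a second (needs EXECUTE on DBMS_LOCK)
    END;
  END LOOP;
END exec_ddl_with_retry;
/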

Oracle Transactions PLSQL

I am working on PL/SQL-based procedures in Oracle 11g and 12c.
I want to log the table name and row count whenever I issue a commit in one of my procedures/functions.
This is for audit logging.
Can you please suggest how I can accomplish this?
Your PL/SQL code will need to keep track of its activity and log it. There is no way to ask Oracle "how many rows are you committing right now and to which tables?"
So, e.g.,
DECLARE
  l_row_count NUMBER;
BEGIN
  UPDATE table_1 SET column_a = 'whatever' WHERE column_b = 'some condition';
  l_row_count := SQL%ROWCOUNT;

  INSERT INTO my_audit (action, cnt) VALUES ('Updated table_1', l_row_count);
  -- Notice the audit is part of the transaction; if I don't commit the UPDATE,
  -- I won't commit the log of the update.

  -- ... do other similar updates / inserts / deletes, using SQL%ROWCOUNT to
  -- determine the number of rows affected, and log each one ...

  COMMIT;
END;
Again, it is not practical to do a bunch of DML statements (inserts, updates, deletes) and then ask Oracle after the fact "what have I done so far in this transaction?" You need to record it as you go.
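If you do this in many places, it can be worth wrapping the audit INSERT in a small helper so each DML statement can log itself with one call. A minimal sketch (the procedure name log_dml is mine; my_audit and table_1 are the names from the example above). It is deliberately not an autonomous transaction, so the audit rows commit or roll back together with the DML they describe:

CREATE OR REPLACE PROCEDURE log_dml (p_action IN VARCHAR2, p_rows IN NUMBER) IS
BEGIN
  -- Part of the caller's transaction on purpose.
  INSERT INTO my_audit (action, cnt) VALUES (p_action, p_rows);
END log_dml;
/

-- Usage inside the transaction:
BEGIN
  UPDATE table_1 SET column_a = 'whatever' WHERE column_b = 'some condition';
  log_dml('Updated table_1', SQL%ROWCOUNT);
  COMMIT;
END;
/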

Can a table be locked by another table?

We're encountering issues on our Oracle 11g database regarding table locks.
We have a procedure, executed via SQL*Plus, which truncates a table, let's say table1.
We sometimes get an ORA-00054: resource busy and acquire with NOWAIT error during execution of the procedure, at the point where the table is to be truncated. We have a webapp running in a Tomcat server; when Tomcat is restarted (to kill its sessions to the database), the procedure can be re-executed successfully.
table1 isn't used, not even in a select, anywhere in the webapp's source code, but many of table1's parent tables are.
So is it possible that an uncommitted update to one of its parent tables is causing the lock? If so, any suggestions on how I can test it?
I've checked with the DBA during times when we encounter the issue, but he couldn't identify the session blocking the procedure or the statement that caused the lock.
Yes, an update of a parent table will get a lock on the child table. Below is a test case demonstrating it is possible.
Finding and tracing a specific intermittent locking issue can be painful. Even if you can't trace down the specific conditions it would be a good idea to modify any code to avoid concurrent DML and DDL. Not only can it cause locking issues, it can also break SELECT statements.
If removing the concurrency is out of the question, you may at least want to enable DDL_LOCK_TIMEOUT so that the truncate statement will wait for the lock instead of instantly failing: alter session set ddl_lock_timeout = 100000;
--Create parent/child tables, with or without an indexed foreign key.
create table parent_table(a number primary key);
insert into parent_table values(1);
insert into parent_table values(2);
create table child_table(a number references parent_table(a));
insert into child_table values(1);
commit;

--Session 1: Update parent table.
begin
  loop
    update parent_table set a = 2 where a = 2;
    commit;
  end loop;
end;
/

--Session 2: Truncate child table. Eventually it will throw this error:
--ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
begin
  loop
    execute immediate 'truncate table child_table';
  end loop;
end;
/
any suggestions on how I can test it?
You can check the blocking sessions when you get the ORA-00054: resource busy and acquire with NOWAIT error.
Blocking sessions occur when one session holds an exclusive lock on an object and doesn't release it before another session wants to update the same data. This blocks the second session until the first one either commits or rolls back.
SELECT s.blocking_session,
       s.sid,
       s.serial#,
       s.seconds_in_wait
FROM   v$session s
WHERE  s.blocking_session IS NOT NULL;
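Once you have a blocking SID from that query, it is often useful to see what the blocker is (or was) executing. A sketch using standard v$session / v$sql columns (this assumes you can select from the v$ views; the blocker may be idle, which is why the previous SQL is shown as well):

SELECT b.sid,
       b.serial#,
       b.username,
       cur.sql_text AS current_sql,
       prv.sql_text AS previous_sql
FROM   v$session b
       LEFT JOIN v$sql cur
              ON cur.sql_id = b.sql_id
             AND cur.child_number = b.sql_child_number
       LEFT JOIN v$sql prv
              ON prv.sql_id = b.prev_sql_id
             AND prv.child_number = b.prev_child_number
WHERE  b.sid IN (SELECT blocking_session
                 FROM   v$session
                 WHERE  blocking_session IS NOT NULL);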

Rollback from one database to another database in Oracle

I have two databases, DB1 and DB2, and I'm using the following stored procedure to archive data from one database to the other:
CREATE OR REPLACE PROCEDURE "DB1"."ARCHIVE" (FROM_ARCHIVE TIMESTAMP, TO_ARCHIVE TIMESTAMP)
AS
  v_err_num NUMBER;
  v_err_msg VARCHAR2(200);

  CURSOR MY_CURSOR IS
    SELECT ID, A, B, C
    FROM   TABLE1
    WHERE  A >= FROM_ARCHIVE AND A <= TO_ARCHIVE;
BEGIN
  FOR MY_LOOP IN MY_CURSOR
  LOOP
    BEGIN
      INSERT INTO DB2.TABLE2 (A, B, C)
      VALUES (MY_LOOP.A, MY_LOOP.B, MY_LOOP.C);
    END;
  END LOOP;

  FOR MY_LOOP IN MY_CURSOR
  LOOP
    BEGIN
      DELETE FROM TABLE1 WHERE A = MY_LOOP.ID;
    END;
  END LOOP;

  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK; -- if an exception occurs, the rollback happens for the DB1 database only, not for DB2
    COMMIT;
END;
Here, if an exception occurs in the TABLE1 delete statement, the rollback happens for the DB1 database only and not for DB2.
Is there any way to roll back the changes in the DB2 database as well?
Please help me solve this. Thanks in advance.
I created a savepoint before the insert on DB2 and executed my rollback to that specific savepoint; this solved the issue.
Special thanks to @Jorge Campos.
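For reference, a minimal sketch of what that looks like inside the procedure body (the savepoint name is mine, and the statements are written set-based for brevity; the cursor loops from the question work the same way):

BEGIN
  SAVEPOINT before_db2_insert;

  INSERT INTO DB2.TABLE2 (A, B, C)
    SELECT A, B, C
    FROM   TABLE1
    WHERE  A >= FROM_ARCHIVE AND A <= TO_ARCHIVE;

  DELETE FROM TABLE1
  WHERE  A >= FROM_ARCHIVE AND A <= TO_ARCHIVE;

  COMMIT;
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK TO SAVEPOINT before_db2_insert;  -- undoes both the DB2 insert and the delete
    RAISE;                                    -- or log/handle as needed
END;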
The 'Stop and rollback on error' option does not work when deploying multiple PL/SQL routines (stored procedures, user-defined functions, and packages) against an Oracle database server.
Do not deploy multiple PL/SQL routines against Oracle using the 'Stop and rollback on error' error-handling option.
If routines that were deployed need to be updated or dropped because of an encountered error, do so after the deployment process completes, by identifying the affected routines in the SQL results view and dropping and re-deploying them as required.

How to create a table and insert in the same statement

I'm new to Oracle and I'm struggling with this:
DECLARE
  cnt NUMBER;
BEGIN
  SELECT COUNT(*) INTO cnt FROM all_tables WHERE table_name LIKE 'Newtable';
  IF (cnt = 0) THEN
    EXECUTE IMMEDIATE 'CREATE TABLE Newtable ....etc';
  END IF;
  COMMIT;

  SELECT COUNT(*) INTO cnt FROM Newtable WHERE id = 'something';
  IF (cnt = 0) THEN
    EXECUTE IMMEDIATE 'INSERT INTO Newtable ....etc';
  END IF;
END;
This keeps crashing and gives me "PL/SQL: ORA-00942: table or view does not exist" on the insert line. How can I avoid this? Or what am I doing wrong? I want these two statements (in reality it's a lot more, of course) in a single transaction.
It isn't the insert that is the problem, it's the select two lines before. You have three statements within the block, not two. You're selecting from the same new table that doesn't exist yet. You've avoided that in the insert by making that dynamic, but you need to do the same for the select:
EXECUTE IMMEDIATE q'[SELECT COUNT(*) FROM Newtable where id='something']'
INTO cnt;
Creating a table at runtime seems wrong though. You said 'for safety issues the table can only exist if it's filled with the correct dataset', which doesn't entirely make sense to me - even if this block is creating and populating it in one go, anything that relies on it will fail or be invalidated until this runs. If this is part of the schema creation then making it dynamic doesn't seem to add much. You also said you wanted both to happen in one transaction, but the DDL will do an implicit commit, you can't roll back DDL, and your manual commit will start a new transaction for the insert(s) anyway. Perhaps you mean the inserts shouldn't happen if the table creation fails - but they would fail anyway, whether they're in the same block or not. It seems a bit odd, anyway.
Also, using all_tables for the check could still cause this to behave oddly. If that table exists in another schema, your create will be skipped, but your select and insert might still fail, as they might not be able to see, or won't look for, the other schema's version. Using user_tables or adding an owner check might be a bit safer.
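For example, the existence check could be tightened like this (a sketch that replaces the all_tables query inside the block; note that Oracle stores unquoted identifiers in upper case, so comparing against mixed-case 'Newtable' would not match anyway):

SELECT COUNT(*)
INTO   cnt
FROM   user_tables
WHERE  table_name = 'NEWTABLE';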
Try the following approach, i.e. put the create and the insert in two separate blocks:
DECLARE
  cnt NUMBER;
BEGIN
  SELECT COUNT(*)
  INTO   cnt
  FROM   all_tables
  WHERE  table_name LIKE 'Newtable';

  IF (cnt = 0) THEN
    EXECUTE IMMEDIATE 'CREATE TABLE Newtable(c1 varchar2(256))';
  END IF;
END;
/

DECLARE
  cnt2 NUMBER;
BEGIN
  SELECT COUNT(*)
  INTO   cnt2
  FROM   newtable
  WHERE  c1 = 'jack';

  IF (cnt2 = 0) THEN
    EXECUTE IMMEDIATE 'INSERT INTO Newtable values(''jill'')';
  END IF;
END;
/
Oracle handles the execution of a block in two steps:
First, it parses the block and compiles it into an internal representation (so-called "P-code").
It then runs the P-code (which may be interpreted or compiled to machine code, depending on your architecture and Oracle version).
To compile the code, Oracle must know the names (and the schema!) of the referenced tables. Your table doesn't exist yet, hence there is no schema information and the code does not compile.
As for your intention to create the tables in one big transaction: this will not work. Oracle always implicitly commits the current transaction before and after a DDL statement (CREATE TABLE, ALTER TABLE, TRUNCATE TABLE(!), etc.). So after each CREATE TABLE, Oracle commits the current transaction and starts a new one.
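A tiny demonstration of that implicit commit (the table names below are made up):

INSERT INTO some_existing_table VALUES (1);  -- pending change, not yet committed
CREATE TABLE demo_ddl_commit (x NUMBER);     -- DDL: implicitly commits the INSERT above
ROLLBACK;                                    -- too late, the INSERT is already committed
DROP TABLE demo_ddl_commit;                  -- clean up (this also commits implicitly)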

Oracle Forms - Commit Single SQL Statement Instead of Entire Form

I'm working on an Oracle Form (10g) that has two blocks on a single canvas. The top block is called QUERY_BLOCK, which the user fills out to populate PRICING_BLOCK with rows of data.
However, in QUERY_BLOCK I also have a checkbox which needs to perform an INSERT or a DELETE on the database, depending on whether it is checked. My WHEN-CHECKBOX-CHANGED trigger looks like this:
begin
  if :query_block.profile_code is not null then
    if :query_block.CHECKBOX_FLAG = 'Y' then
      begin
        INSERT INTO profile_table VALUES ('Y', :query_block.profile_code);
      end;
    else
      begin
        DELETE FROM profile_table
        WHERE profile_code = :query_block.profile_code
          and profile_type_code = 'FR';
      end;
    end if;
  end if;
end;
I know that I need to add some sort of commit statement in here, otherwise the record locks and nothing actually happens. However, if I do a COMMIT; then the entire form goes through validation and updates any changed rows.
How do I execute these one-line queries I have without the rest of my form updating as well?
Without commenting on the actual wisdom of this, you could create a procedure in the database that performed an autonomous transaction:
CREATE OR REPLACE FUNCTION my_fnc(p_flag IN VARCHAR2)
  RETURN VARCHAR2
IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  IF p_flag = 'Y' THEN
    INSERT...
  ELSE
    DELETE...
  END IF;
  COMMIT;
  RETURN 'SUCCESS';
EXCEPTION
  WHEN OTHERS THEN
    ROLLBACK;  -- end the autonomous transaction before leaving the function
    RETURN 'FAIL';
END;
Your Forms code could then look like:
declare
  stat varchar2(10);  -- holds the function's return value
begin
  if :query_block.profile_code is not null then
    stat := my_fnc(:query_block.CHECKBOX_FLAG);
  end if;
end;
This allows your function to commit independent of the calling transaction. Beware of this, however - if your outer transaction must roll back, the autonomous transaction will still be committed. I would think there should be a transactional way to do what you need done to solve your locking problem, which would likely be the superior approach. Without knowing the specifics of your process, I can't tell. Generally speaking, autonomous transactions are used when an update must occur regardless of whether the transaction commits or rolls back, e.g., logging.
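To illustrate that last point, a minimal sketch of an autonomous-transaction error logger (the table and procedure names are mine, not from the question):

CREATE OR REPLACE PROCEDURE log_error (p_msg IN VARCHAR2) IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO error_log (logged_at, message)
  VALUES (SYSDATE, p_msg);
  COMMIT;  -- commits only this autonomous transaction, not the caller's work
END log_error;
/

Rows written by log_error survive even if the calling transaction later rolls back, which is exactly the behaviour you usually do not want for ordinary business data.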
