Execute Oracle For Loop in parallel

I am looping through a list of tables and updating a list of columns in each table. Is it possible to execute the loop in parallel, that is, update more than one table at a time?
FOR Table_rec IN Table_list_cur
LOOP
   --Check if the table is partitioned
   IF Check_if_partitioned (Table_rec.Table_name, Table_rec.Owner)
   THEN
      --If yes, collect the partition names (ALL_TAB_PARTITIONS is used here
      --because USER_TAB_PARTITIONS has no owner column)
      EXECUTE IMMEDIATE
            'select partition_name from ALL_TAB_PARTITIONS where table_name = '''
         || Table_rec.Table_name
         || ''' and table_owner = '''
         || Table_rec.Owner
         || ''''
         BULK COLLECT INTO L_part;

      --Update each partition
      FOR I IN L_part.FIRST .. L_part.LAST
      LOOP
         V_sql_stmt :=
               'UPDATE /*+ PARALLEL(upd_tbl,4) */ '
            || Table_rec.Table_name
            || ' PARTITION ('
            || L_part (I)
            || ') upd_tbl'
            || ' SET '
            || V_sql_stmt_col_list;

         DBMS_OUTPUT.Put_line ('V_sql_stmt: ' || V_sql_stmt);
         EXECUTE IMMEDIATE V_sql_stmt;
      END LOOP;
   END IF;
END LOOP;

Not directly, no.
You could take the guts of your loop, factor that out into a stored procedure, and then submit a series of jobs that run the actual processing asynchronously. Using the dbms_job package (so that the job submission is part of the transaction), that would look something like:
CREATE OR REPLACE PROCEDURE do_update( p_owner      IN VARCHAR2,
                                       p_table_name IN VARCHAR2 )
AS
BEGIN
   <<your dynamic SQL>>
END;
and then run the loop to submit the jobs
DECLARE
   l_jobno BINARY_INTEGER;
BEGIN
   FOR Table_rec IN Table_list_cur
   LOOP
      --Check if the table is partitioned
      IF Check_if_partitioned (Table_rec.Table_name, Table_rec.Owner)
      THEN
         dbms_job.submit( l_jobno,
                          'begin do_update( ''' || table_rec.owner || ''', ''' || table_rec.table_name || ''' ); end;' );
      END IF;
   END LOOP;

   COMMIT;
END;
/
Once the commit runs, the individual table jobs will start running (how many will run is controlled by the job_queue_processes parameter) while the rest are queued up.
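For instance, a DBA could raise that limit like this (the value 10 is illustrative; this requires the ALTER SYSTEM privilege):

ALTER SYSTEM SET job_queue_processes = 10;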
Now, that said, your approach seems a bit off. First, it's almost never useful to specify a partition name explicitly. You almost certainly want to submit a single UPDATE statement, omit the partition name, and let Oracle update the various partitions in parallel, as in the sketch below. Running one update statement per partition rather defeats the purpose of partitioning.

Second, if you really want 4 parallel threads for each partition, you probably don't want many of those updates running simultaneously. The point of parallelism is that one statement is allowed to consume a large fraction of the system's resources. If you really want, say, 16 partition-level updates running simultaneously with 4 slaves each, it would make far more sense to let Oracle run 64 slaves for a single update (or however many slaves you really want to devote to this particular task, depending on how many resources you want to leave for everything else the system has to do).
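A minimal sketch of that single-statement approach, assuming a hypothetical table big_tbl with a status column (the degree of 64 is illustrative):

ALTER SESSION ENABLE PARALLEL DML;

UPDATE /*+ PARALLEL(upd_tbl, 64) */ big_tbl upd_tbl
   SET upd_tbl.status = 'PROCESSED';  -- your real column list goes here

COMMIT;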

Related

How to delete sequences and procedures during logoff trigger?

Could you please help me with a unique situation I am in? I am receiving "ORA-30511: invalid DDL operation in system triggers" when dropping sequences and procedures during a logoff trigger.
I need to delete tables, sequences and procedures of users before the logoff event happens. I am writing the table details into a DB_OBJECTS table upon creation, using a separate trigger. Below is my logoff trigger - could you please tell me where I am going wrong? Dropping tables works fine in the code below; only dropping sequences and procedures gives me the "ORA-30511: invalid DDL operation in system triggers" error.
CREATE OR REPLACE TRIGGER DELETE_BEFORE_LOGOFF
   BEFORE LOGOFF ON DATABASE
DECLARE
   USER_ID NUMBER := SYS_CONTEXT ('USERENV', 'SESSIONID');
BEGIN
   FOR O IN (SELECT USER, OBJECT_NAME, OBJECT_TYPE
               FROM DB_OBJECTS
              WHERE SID = USER_ID
                AND USERNAME = USER
                AND SYSDATE > CREATED_DTTM)
   LOOP
      IF O.OBJECT_TYPE = 'TABLE' THEN
         EXECUTE IMMEDIATE 'DROP TABLE ' || O.USER || '.' || O.OBJECT_NAME || ' CASCADE CONSTRAINTS';
      ELSIF O.OBJECT_TYPE = 'SEQUENCE' THEN
         EXECUTE IMMEDIATE 'DROP SEQUENCE ' || O.USER || '.' || O.OBJECT_NAME;
      ELSIF O.OBJECT_TYPE = 'PROCEDURE' THEN
         EXECUTE IMMEDIATE 'DROP PROCEDURE ' || O.USER || '.' || O.OBJECT_NAME;
      END IF;
   END LOOP;
EXCEPTION
   WHEN NO_DATA_FOUND THEN
      NULL;
END;
/
That's a simple one.
Error code: ORA-30511
Description: invalid DDL operation in system triggers
Cause: An attempt was made to perform an invalid DDL operation in a system trigger. Most DDL operations currently are not supported in system triggers. The only currently supported DDL operations are table operations and ALTER/COMPILE operations.
Action: Remove invalid DDL operations in system triggers.
That's why only dropping tables ("Dropping tables is working fine") succeeded.
Therefore, you can't do that using a trigger.
You asked (in a comment) how to drop these objects, then. Manually, as far as I can tell. Though that's quite unusual - what if someone accidentally logs off? You'd drop everything they created. If you use that schema for educational purposes (for example, every student gets their own schema), then you could create a "clean-up" script to run once class is over. Something like this:
SET SERVEROUTPUT ON;

DECLARE
   l_user VARCHAR2 (30) := 'SCOTT';
   l_str  VARCHAR2 (200);
BEGIN
   IF USER = l_user
   THEN
      FOR cur_r IN (SELECT object_name, object_type
                      FROM user_objects
                     WHERE object_name NOT IN ('EMP',
                                               'DEPT',
                                               'BONUS',
                                               'SALGRADE'))
      LOOP
         BEGIN
            l_str := 'drop ' || cur_r.object_type || ' "' || cur_r.object_name || '"';

            DBMS_OUTPUT.put_line (l_str);
            EXECUTE IMMEDIATE l_str;
         EXCEPTION
            WHEN OTHERS THEN
               NULL;
         END;
      END LOOP;
   END IF;
END;
/

PURGE RECYCLEBIN;
It is far from perfect; I use it to clean up the Scott schema I use to answer questions on various sites, so - once it becomes a mess - I run that PL/SQL code several times (repeated runs are needed because of possible foreign key constraints).
Another option is to keep a create user script (along with all the grant statements) and - once class is over - drop the existing user and simply recreate it.
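A minimal sketch of that drop-and-recreate option (user name, password, tablespace and privileges are placeholders):

DROP USER scott CASCADE;

CREATE USER scott IDENTIFIED BY tiger
   DEFAULT TABLESPACE users
   QUOTA UNLIMITED ON users;

GRANT CREATE SESSION, CREATE TABLE, CREATE SEQUENCE, CREATE PROCEDURE TO scott;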
Or, if that user contains some pre-built tables, keep export file (I mean, result of data pump export) and import it after the user is dropped.
There are various options - I don't know whether I managed to guess correctly, but now you have something to think about.

reset sequence by trigger

I am using a trigger to reset a sequence every year, but there is an issue when calling a procedure from the trigger:
CREATE OR REPLACE TRIGGER t_dmd_pk
   BEFORE INSERT
   ON S_DEMANDE
   FOR EACH ROW
BEGIN
   IF (TO_CHAR (SYSDATE, 'dd') = '16' AND TO_CHAR (SYSDATE, 'mm') = '12')
   THEN
      reset_seq ('SEQ_ID_DMD');
   END IF;

   SELECT SEQ_ID_DMD.NEXTVAL || TO_CHAR (SYSDATE, 'yyyy')
     INTO :new.DMD_ID
     FROM DUAL;
END;
/
and this is my procedure:
CREATE OR REPLACE PROCEDURE reset_seq (p_seq_name IN VARCHAR2)
IS
l_val NUMBER;
BEGIN
EXECUTE IMMEDIATE 'select ' || p_seq_name || '.nextval from dual'
INTO l_val;
EXECUTE IMMEDIATE
'alter sequence ' || p_seq_name || ' increment by -' || l_val;
END;
/
The trigger is executed inside an INSERT statement, and the trigger calls a procedure that tries to commit the transaction (ALTER SEQUENCE is a DDL statement, so it is auto-committed).
To ensure statement atomicity, the transaction can only be committed when the last statement is finalized, so it is not possible to commit the current transaction inside a trigger.
But you can execute your trigger or procedure as an autonomous transaction (Oracle opens a new transaction and executes the code of your trigger or procedure inside this new transaction); see the sketch after the caveats below.
See this link for more details: http://www.oracle-base.com/articles/misc/autonomous-transactions.php
But remember:
- the autonomous transaction cannot see your still uncommitted data, and
- if you finally roll back your current transaction (after the execution of the trigger and the commit of the autonomous transaction), the inserted tuples will be rolled back, but the autonomous transaction will not be.
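For illustration only (the caveats above and the design concerns below still apply), the procedure could be marked autonomous like this:

CREATE OR REPLACE PROCEDURE reset_seq (p_seq_name IN VARCHAR2)
IS
   PRAGMA AUTONOMOUS_TRANSACTION;  -- the DDL's implicit commit now ends only this inner transaction
   l_val NUMBER;
BEGIN
   EXECUTE IMMEDIATE 'select ' || p_seq_name || '.nextval from dual'
      INTO l_val;

   EXECUTE IMMEDIATE
      'alter sequence ' || p_seq_name || ' increment by -' || l_val;
END;
/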
Your procedure doesn't work the way you think it should. If your sequence's last value was 10, then you are altering the sequence to increment by -10 every time it is called. I am guessing that the first time you execute it, you get an ORA-08004, because your minvalue is probably 1 and the sequence would be trying to return 0 (which isn't allowed). Even if that didn't error, the next call would, as it would try to return -10 in my example. What I believe you really want is:
CREATE OR REPLACE PROCEDURE reset_seq (p_seq_name IN VARCHAR2)
IS
   l_val NUMBER;
BEGIN
   -- Get the current value of the sequence
   EXECUTE IMMEDIATE 'select ' || p_seq_name || '.nextval from dual'
      INTO l_val;

   -- Alter the sequence to allow it to reach 0, decreasing by the current value
   EXECUTE IMMEDIATE
      'alter sequence ' || p_seq_name || ' minvalue 0 increment by -' || l_val;

   -- Get a value from the sequence again (this sets the sequence to 0)
   EXECUTE IMMEDIATE 'select ' || p_seq_name || '.nextval from dual'
      INTO l_val;

   -- Alter the sequence to increment by 1 again
   EXECUTE IMMEDIATE
      'alter sequence ' || p_seq_name || ' increment by 1';
END;
/
This allows the sequence to be 0 (which you need if you want the next call to return 1), sets it to 0, then changes it back to increment by 1 with each successive call. However, it is probably much easier to just drop and recreate the sequence.
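For example (assuming you own the sequence and default settings; note that privileges granted on the sequence are lost and dependent objects are invalidated):

DROP SEQUENCE seq_id_dmd;
CREATE SEQUENCE seq_id_dmd START WITH 1 INCREMENT BY 1;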
The real question though, is why you would ever want to do this. This looks like bad design. A sequence is just supposed to return a unique number. Nothing more, nothing less. There should be no meaning behind the number, and it sure seems like you are trying to assign meaning here. Sequences don't guarantee your rows will be inserted in order and don't guarantee there won't be gaps, they just provide a unique number. Concatenating the year on the end of this unique number makes this design more suspect.
I found a solution: I used dbms_job instead of a trigger, and that works fine for me.

Building some dynamic query select and display its output immediately in Oracle PL/SQL

In my current job I very often need to read some tables and act on the results, sometimes updating the data manually.
So I built a PL/SQL block that creates my SELECT statements (yes, with the FOR UPDATE clause, just commented out).
As an example, this is just one of the queries I build:
phtr_QUERY := 'SELECT *
                 FROM ' || tabriabi_impianto || '.pdfhtr t
                WHERE t.k_abi = ''' || tabriabi_abi || ''' ';

if length(myNag) > 0 then
   phtr_QUERY := phtr_QUERY || 'and t.ndg like ''%' || myNag || '%'' ';
end if;

if length(myPrat) > 0 then
   phtr_QUERY := phtr_QUERY || ' and t.pratica like ''%' || myPrat || '%'' ';
end if;

phtr_QUERY := phtr_QUERY || crlf || ' order by 2 ';
phtr_QUERY := phtr_QUERY || crlf || '--for update';
phtr_QUERY := phtr_QUERY || crlf || ';';
Then I copy these statements from the Output window (obtained through dbms_output.put_line), paste them into a new SQL window, and execute them, obtaining the results in multiple tabs.
I was wondering if there is a better way - some commands I could use to get the (editable) results directly, without the need to cut and paste.
TIA.
F.
A very horrifying/hackish way to do what you want would be to store the resulting query in a temporary table; afterwards you could follow the process described here:
How can I use an SQL statement stored in a table as part of another statement?
Please Note: This is probably a bad idea.
select a.rowid, a.* from table_name a;
will open in edit mode in many tools.
I was wondering if there is a better way, some commands that I can use just to have the (editable) results directly without the need of cut&paste
You should understand that editing features are features of the database tool you are using. When you insert, update or delete a record in the results grid, the tool translates your actions into the respective SQL statements and executes them on the fly.
As a kind of workaround, I suggest you create a stored procedure which takes parameters such as the table name and the WHERE conditions, and then creates an updatable database view. After executing the procedure to prepare the view, you can run a SELECT ... FOR UPDATE against it and work with the returned data as you do now.
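A minimal sketch of that idea - the procedure name make_edit_view and the view name edit_v are hypothetical, the concatenated WHERE clause carries the usual SQL injection caveats, and the owner needs the CREATE VIEW privilege:

CREATE OR REPLACE PROCEDURE make_edit_view (p_table IN VARCHAR2,
                                            p_where IN VARCHAR2)
IS
BEGIN
   -- build a view over the requested table; the caller then edits through the view
   EXECUTE IMMEDIATE
         'CREATE OR REPLACE VIEW edit_v AS SELECT * FROM '
      || DBMS_ASSERT.sql_object_name (p_table)
      || ' WHERE ' || p_where;
END;
/

-- usage:
-- EXEC make_edit_view('PDFHTR', 'k_abi = ''12345''')
-- SELECT * FROM edit_v FOR UPDATE;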

first attempt at learning oracle triggers

I am just starting to learn triggers, so please bear with me. If the row being inserted has a gift that is the same as any gift already in the table, print a message saying that the gift was already given to the receiver by the donor.
create or replace TRIGGER Same_Gift_Given
   BEFORE INSERT ON GIVING
   FOR EACH ROW
DECLARE
   giftgiven varchar(255);
BEGIN
   SELECT giftname INTO giftgiven FROM GIVING;

   IF :new.giftname = giftgiven THEN
      dbms_output.put_line(giftgiven || ' has already been gifted to ' || giving.receiver || ' by ' || giving.donor);
   END IF;
END;
This is a really awful homework problem. You would never, ever, ever use a trigger to do anything like this in a real system. It will break most INSERT operations and it will fail if there are ever multiple users. In reality, you would use a constraint. In reality, if for some reason you were forced at gunpoint to use a trigger, you would need a series of three triggers, a package, and a collection to do it properly.
What the professor is probably looking for
Just to emphasize, though, you would never, ever consider doing this in a real system
create or replace trigger same_gift_given
   before insert on giving
   for each row
declare
   l_existing_row giving%rowtype;
begin
   select *
     into l_existing_row
     from giving
    where giftname = :new.giftname
      and rownum = 1;

   dbms_output.put_line( :new.giftname ||
                         ' has already been gifted to ' ||
                         l_existing_row.receiver ||
                         ' from ' ||
                         l_existing_row.donor );
exception
   when no_data_found
   then
      null;
end;
This does not prevent you from inserting duplicate rows. It will throw a mutating trigger error if you try to do anything other than an INSERT ... VALUES on the giving table. It is inefficient. It does not handle multiple sessions. In short, it is absolutely atrocious code that should never be used in any real system.
What you would do in reality
In reality, you would create a constraint
ALTER TABLE giving
ADD CONSTRAINT unique_gift UNIQUE( giftname );
That will work in a multi-user environment. It will not throw a mutating trigger exception. It is much more efficient. It is much less code. It actually prevents duplicate rows from being inserted.
Let's try something a bit different:
CREATE OR REPLACE TRIGGER GIVING_COMPOUND_INSERT
   FOR INSERT ON GIVING
   COMPOUND TRIGGER

   TYPE STRING_COL IS TABLE OF VARCHAR2(255) INDEX BY VARCHAR2(255);

   colGiftnames STRING_COL;
   aGiftname    VARCHAR2(255);
   nCount       NUMBER;

   -- Note that the way the associative array is used here is a bit of a cheat.
   -- In the BEFORE EACH ROW block I'm putting the string of interest into the
   -- collection as both the value *and* the index. Then, when iterating the
   -- collection, only the index is used - the value is never retrieved (but
   -- since it's the same as the index, who cares?). I do this because I'd
   -- rather not write code to call a constructor and maintain the collection's
   -- size - so I just use an associative array and let Oracle do the work for
   -- me.

   BEFORE EACH ROW IS
   BEGIN
      colGiftnames(:NEW.GIFTNAME) := :NEW.GIFTNAME;
   END BEFORE EACH ROW;

   AFTER STATEMENT IS
   BEGIN
      aGiftname := colGiftnames.FIRST;

      WHILE aGiftname IS NOT NULL LOOP
         SELECT COUNT(*)
           INTO nCount
           FROM GIVING
          WHERE GIFTNAME = aGiftname;

         IF nCount > 1 THEN
            DBMS_OUTPUT.PUT_LINE('Found ' || nCount || ' instances of gift ''' ||
                                 aGiftname || '''');
            RAISE_APPLICATION_ERROR(-20001, 'Found ' || nCount ||
                                            ' instances of gift ''' ||
                                            aGiftname || '''');
         END IF;

         aGiftname := colGiftnames.NEXT(aGiftname);
      END LOOP;
   END AFTER STATEMENT;
END GIVING_COMPOUND_INSERT;
Again, this is a LOUSY way to try to guarantee uniqueness. In practice the "right way" to do this is with a constraint (either UNIQUE or PRIMARY KEY). Just because you can do something doesn't mean you should.
Share and enjoy.

Best practice for performing inserts in a cursor

I need to do some inserts in a cursor over about 300000 rows. This is running slowly, however - any ideas on how I can make it run faster? Can I speed it up by batching the commits, for example performing a commit after every 1000th row?
DECLARE
   CURSOR test_cursor IS
      SELECT a FROM database.mytable;
BEGIN
   FOR curRow IN test_cursor LOOP
      INSERT INTO tableb (testval)
      VALUES ('something');
      COMMIT;
   END LOOP;
END;
300000 rows is not that many rows. Unless the rows are each extremely large, you should not commit in the middle of the batch.
Intermediate commits will only achieve:
- additional overhead, because each commit creates additional work,
- loss of restartability in case of error (and loss of transactional integrity),
- a greater chance of running into ORA-1555.
If your process is really a cursor with a single insert inside the loop, you should run a single statement:
BEGIN
INSERT INTO tableb (col1..coln) (SELECT col1..coln FROM database.mytable);
END;
If you still need extra performance, you could look into direct-path inserts and parallel operations, but it might be over-optimization with "only" 300k rows.
By far the single greatest optimization available to you is to think in terms of sets instead of the traditional procedural approach that consists of batches of single-row statements.
Or you can try this:
DECLARE
   CURSOR test_cursor IS
      SELECT col1 FROM table_a;

   TYPE fetch_array IS TABLE OF test_cursor%ROWTYPE;

   test_array   fetch_array;
   l_errors     PLS_INTEGER;
   l_dml_errors EXCEPTION;
   PRAGMA EXCEPTION_INIT (l_dml_errors, -24381);
BEGIN
   OPEN test_cursor;

   LOOP
      FETCH test_cursor BULK COLLECT INTO test_array LIMIT 10000;

      FORALL i IN 1 .. test_array.COUNT SAVE EXCEPTIONS
         INSERT INTO table_b (col1)
         VALUES (test_array (i).col1);

      EXIT WHEN test_cursor%NOTFOUND;
   END LOOP;

   CLOSE test_cursor;
   COMMIT;
EXCEPTION
   WHEN l_dml_errors THEN
      l_errors := SQL%BULK_EXCEPTIONS.COUNT;
      dbms_output.put_line ('Number of INSERT statements that failed: ' || l_errors);

      FOR i IN 1 .. l_errors LOOP
         dbms_output.put_line ('Error #' || i || ' at iteration #' || SQL%BULK_EXCEPTIONS (i).ERROR_INDEX);
         dbms_output.put_line ('Error message is ' || SQLERRM (-SQL%BULK_EXCEPTIONS (i).ERROR_CODE));
      END LOOP;
END;
I would not recommend a cursor approach for this. I use APPEND and PARALLEL hints for situations like this; most of the time the statement literally runs N times as fast, where N is the parallel degree. It is occasionally a good idea to bypass disaster recovery with NOLOGGING / NOARCHIVELOG (at the cost of recoverability).
For truly large migrations (dozens to hundreds of GB), I've found it's a good idea to batch on a table's natural key (usually a date). A small amount of state around it lets you cancel and resume the migration at will if necessary.
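A minimal sketch of that direct-path, parallel variant, using the table names from the question (the degree of 4 is illustrative):

ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(tableb, 4) */ INTO tableb (testval)
SELECT 'something' FROM database.mytable;

COMMIT;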
Maybe this will help you; please try this:
DECLARE
   i NUMBER := 0;

   CURSOR test_cursor IS
      SELECT a FROM database.mytable;
BEGIN
   FOR curRow IN test_cursor LOOP
      INSERT INTO tableb (testval)
      VALUES ('something');

      i := i + 1;

      IF MOD (i, 1000) = 0 THEN
         COMMIT;
      END IF;
   END LOOP;

   COMMIT;
END;
