Running PL/SQL in parallel - Oracle

Running a PL/SQL script to generate load
For various reasons (reproducing errors, ...) I would like to generate some load (with specific actions) in a PL/SQL script.
What I would like to do:
A) Insert 1,000,000 rows into Schema-A.Table-1
B) In a loop, and ideally in parallel (2 or 3 concurrent executions):
1) read one row from Schema-A.Table-1 with locking
2) insert it into Schema-B.Table-2
3) delete the row from Schema-A.Table-1
Is there a way to run task B in parallel from within a PL/SQL script when the script is called?
How would this look?

It's usually better to parallelize SQL statements inside a PL/SQL block, instead of trying to parallelize the entire PL/SQL block:
begin
    -- parallel DML must be enabled per session before the DML runs
    execute immediate 'alter session enable parallel dml';

    -- direct-path, parallel loads
    insert /*+ append parallel */ into schemaA.Table1 ...
    commit;

    insert /*+ append parallel */ into schemaB.Table2 ...
    commit;

    delete /*+ parallel */ from schemaA.Table1 where ...
    commit;

    -- regather stats in parallel after the large changes
    dbms_stats.gather_table_stats('SCHEMAA', 'TABLE1', degree => 8);
    dbms_stats.gather_table_stats('SCHEMAB', 'TABLE2', degree => 8);
end;
/
Large parallel DML statements usually require less code and run faster than creating your own parallelism in PL/SQL. Here are a few things to look out for:
You must have Enterprise Edition, large tables, decent hardware, and a sane configuration to run parallel SQL.
Setting the DOP is difficult. Using the hint /*+ parallel */ lets Oracle decide but you might want to play around with it by specifying a number, such as /*+ parallel(8) */.
Direct-path writes (the append hint) can be significantly faster. But they lock the entire table and the new results won't be recoverable until after the next backup.
Check the execution plan to ensure that direct-path writes are used - look for the operation LOAD AS SELECT instead of LOAD TABLE CONVENTIONAL. Tuning parallel SQL statements is best done with Real-Time SQL Monitoring reports, generated with select dbms_sqltune.report_sql_monitor(sql_id => 'SQL_ID') from dual;
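As a quick check of the plan, here is a minimal sketch (assuming access to the V$ views; run it in the same session right after the DML):
-- shows the plan of the last statement executed by this session;
-- look for LOAD AS SELECT to confirm a direct-path write
select * from table(dbms_xplan.display_cursor);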
You might want to read through the Parallel Execution Concepts chapter of the manual. Oracle parallelism can be tricky, but it can also make your processes run orders of magnitude faster if you're careful.

If the objective is a fast load, and parallel execution is just one attempt to get there, then consider a CTAS (create table as select) swap instead:
Create a new table with CREATE TABLE newtemp AS SELECT ... FROM the old table.
Then CREATE TABLE old_remaining AS SELECT the rows of the old table that do NOT EXIST in newtemp.
Then drop the old table and rename the new tables into place. These DDL operations will use the parallel options set at the database level.
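A minimal sketch of that pattern (the table names, the id join column, the <condition> placeholder, and the degree of 8 are all assumptions for illustration):
-- rows to move, loaded with a direct-path parallel CTAS
create table newtemp nologging parallel 8 as
    select * from old_table where <condition>;

-- rows to keep: everything not already copied (assumes an id key)
create table old_remaining nologging parallel 8 as
    select * from old_table o
    where  not exists (select 1 from newtemp n where n.id = o.id);

-- swap the remainder into place
drop table old_table;
alter table old_remaining rename to old_table;
-- recreate indexes, constraints, grants, and triggers afterwards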

Related

How to quickly delete a huge number of rows from an Oracle table using parallel sessions

I am using the query below to delete more than 250 million rows from my table, and it is taking too much time.
I have tried t_delete limits of up to 20000.
Deletion is still slow.
Please suggest a few optimisations to the same code to get my job done faster.
DECLARE
    TYPE tt_delete IS TABLE OF ROWID;
    t_delete tt_delete;
    CURSOR cIMAV IS
        SELECT ROWID FROM moc_attribute_value
        WHERE id IN (SELECT id FROM ORPHANS_MAV);
    total  NUMBER := 0;
    rcount NUMBER := 0;
    Stmt1  VARCHAR2(2000);
    Stmt2  VARCHAR2(2000);
BEGIN
    --- CREATE TABLE orphansInconsistenDelProgress (currentTable VARCHAR(100), deletedCount INT, totalToDelete INT);
    --- INSERT INTO orphansInconsistenDelProgress (currentTable, deletedCount, totalToDelete) VALUES ('', 0, 0);
    Stmt1 := 'ALTER SESSION SET parallel_degree_policy = AUTO';
    Stmt2 := 'ALTER SESSION FORCE PARALLEL DML';
    EXECUTE IMMEDIATE Stmt1;
    EXECUTE IMMEDIATE Stmt2;
    --- ALTER SESSION SET parallel_degree_policy = AUTO;
    --- ALTER SESSION FORCE PARALLEL DML;
    COMMIT;
    --- MOC_ATTRIBUTE_VALUE
    SELECT count(*) INTO total FROM ORPHANS_MAV;
    UPDATE orphansInconsistenDelProgress
    SET    currentTable = 'ORPHANS_MAV',
           totalToDelete = total;
    rcount := 0;
    OPEN cIMAV;
    LOOP
        FETCH cIMAV BULK COLLECT INTO t_delete LIMIT 2000;
        EXIT WHEN t_delete.COUNT = 0;
        FORALL i IN 1..t_delete.COUNT
            DELETE moc_attribute_value WHERE ROWID = t_delete(i);
        COMMIT;
        rcount := rcount + 2000;
        UPDATE orphansInconsistenDelProgress SET deletedCount = rcount;
    END LOOP;
    CLOSE cIMAV;
    COMMIT;
END;
/
A single Oracle parallel query can simplify the code and improve performance.
declare
    l_deleted pls_integer;
begin
    -- parallel DML must be enabled per session before the delete runs
    execute immediate 'alter session enable parallel dml';

    delete /*+ parallel */
    from   moc_attribute_value
    where  id in (select id from ORPHANS_MAV);

    -- cursor attributes cannot be referenced directly inside a SQL
    -- statement, so capture the deleted-row count in a variable first
    l_deleted := sql%rowcount;

    update OrphansInconsistenDelProgress
    set    currentTable  = 'ORPHANS_MAV',
           totalToDelete = l_deleted;
    commit;
end;
/
In general, we want to either let Oracle break the task into pieces or use our own custom chunking. The original code seems to be doing both - it reads the data in chunks, and then submits each chunk to be further divided into a parallel delete. That approach generates lots of tiny pieces, and Oracle likely wastes a lot of time on things like thread coordination.
Deleting a large number of rows is expensive because there's no way to avoid REDO and UNDO. You might want to look into using DDL options, such as truncating a partition, or dropping and recreating the table. (But be careful recreating objects, it's difficult to perfectly recreate complex objects. We tend to forget things like privileges and table options.)
Tuning parallelism and large jobs is complicated. It's important to use the best monitoring tools, to ensure that Oracle is requesting, allocating, and using the right number of parallel processes, and that the execution plan is correct. One strong advantage of using a single SQL statement is that you can use Real-Time SQL Monitoring reports to monitor progress. If you have the Diagnostics and Tuning Pack licenses, find the SQL_ID in GV$SQL and generate the report with dbms_sqltune.report_sql_monitor.
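A minimal sketch of those two steps (the SQL text filter and the SQL_ID literal are placeholders):
-- find the SQL_ID of the running statement
select sql_id, sql_text
from   gv$sql
where  sql_text like 'delete /*+ parallel */%';

-- generate the monitoring report for that statement
select dbms_sqltune.report_sql_monitor(sql_id => 'your SQL_ID here') from dual;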
Maybe use SQL TRUNCATE TABLE:
TRUNCATE TABLE is faster and uses fewer resources than the DELETE command.
If you are keeping only a fraction of the rows, it is likely to be much faster to copy over the rows to keep, then swap tables and drop the old one - see the sketch below.
(No, I don't know the threshold at which this is faster than DELETEing.)
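A minimal sketch of that copy-and-swap approach, using this question's table names (the _keep suffix and the list of dependent objects to recreate are assumptions):
-- copy only the rows to keep
create table moc_attribute_value_keep as
    select /*+ parallel */ *
    from   moc_attribute_value v
    where  not exists (select 1 from ORPHANS_MAV o where o.id = v.id);

-- swap the new table into place
drop table moc_attribute_value;
alter table moc_attribute_value_keep rename to moc_attribute_value;
-- then recreate indexes, constraints, grants, and triggers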

Explicit cursors using bulk collect vs. implicit cursors: any performance issues?

In an older article from Oracle Magazine (now online as On Cursor FOR Loops) Steven Feuerstein showed an optimization for explicit cursor for loops using bulk collect (listing 4 in the online article):
DECLARE
    CURSOR employees_cur IS SELECT * FROM employees;
    TYPE employee_tt IS TABLE OF employees_cur%ROWTYPE INDEX BY PLS_INTEGER;
    l_employees employee_tt;
BEGIN
    OPEN employees_cur;
    LOOP
        FETCH employees_cur BULK COLLECT INTO l_employees LIMIT 100;
        -- process l_employees using PL/SQL only
        EXIT WHEN employees_cur%NOTFOUND;
    END LOOP;
    CLOSE employees_cur;
END;
I understand that bulk collect enhances the performance because there are fewer context switches between SQL and PL/SQL.
My question is about implicit cursor for loops:
BEGIN
    FOR S IN (SELECT * FROM employees)
    LOOP
        -- process current record of S
    END LOOP;
END;
Is there a context switch in each loop iteration for each record? Is the problem the same as with explicit cursors, or is it somehow optimized "behind the scenes"? Would it be better to rewrite the code using explicit cursors with bulk collect?
Starting from Oracle 10g the optimizing PL/SQL compiler can automatically convert FOR LOOPs into BULK COLLECT loops with a default array size of 100.
So generally there's no need to convert implicit FOR loops into BULK COLLECT loops.
But sometimes you may want to use BULK COLLECT explicitly. For example, if the default array size of 100 rows per fetch does not satisfy your requirements, or if you prefer to update your data as a set (see the sketch below).
The same question was answered by Tom Kyte. You can check it here: Cursor FOR loops optimization in 10g
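For instance, a minimal set-based sketch with BULK COLLECT and FORALL (the department filter and the 10% raise are assumptions for illustration):
declare
    type emp_ids_t is table of employees.employee_id%type;
    l_ids emp_ids_t;
begin
    select employee_id bulk collect into l_ids
    from   employees
    where  department_id = 50;

    -- one context switch for the whole batch instead of one per row
    forall i in 1 .. l_ids.count
        update employees
        set    salary = salary * 1.1
        where  employee_id = l_ids(i);
    commit;
end;
/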
Yes, there is a context switch on each iteration: even if your -- process current record of S contains pure SQL and no PL/SQL, the FOR ... LOOP is PL/SQL but the query is SQL.
Whenever possible you should prefer to process your data with single SQL statements (consider also MERGE, not only DELETE, UPDATE and INSERT); in most cases they are faster than row-by-row processing.
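As an illustration, a minimal MERGE sketch (the employees_archive table and its columns are assumptions):
-- one upsert statement instead of row-by-row check-then-update/insert logic
merge into employees_archive a
using employees e
on (a.employee_id = e.employee_id)
when matched then
    update set a.salary = e.salary
when not matched then
    insert (a.employee_id, a.salary)
    values (e.employee_id, e.salary);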
Note that you will not gain any performance if you loop through l_employees and perform DML for each record.
LIMIT 100 is rather useless: processing only 100 rows at a time is almost the same as processing rows one by one - Oracle does not run on a Z80 with 64K of memory.

Parallel Hints in "Select Into" statement in PL/SQL

Parallel hints in normal SQL queries in Oracle can be used in the following fashion:
select /*+ PARALLEL (A,2) */ * from table A ;
Can parallel hints be used in similar fashion in PL/SQL SELECT INTO statements in Oracle?
select /*+ PARALLEL(A,2) */ A.* BULK COLLECT INTO g_table_a from Table A ;
If I use the above syntax, is there any way to verify whether the select statement was executed in parallel?
Edit: assume g_table_a is a collection whose elements are of the table's %ROWTYPE.
If the statement takes only a short elapsed time, you don't want to run it in parallel. Note that a query taking, say, 0.5 seconds in serial execution could take 2.5 seconds in parallel, as most of the overhead lies in setting up the parallel execution.
So, if the query takes a long time, you have enough time to check V$SESSION (use GV$SESSION in RAC) and see all sessions of the user running the query.
select * from gv$session where username = 'your_user'
For serial execution you see only one session; for parallel execution you see one coordinator and additional sessions, up to twice the chosen parallel degree.
Alternatively, use V$PX_SESSION, which connects the parallel worker sessions with the query coordinator.
select SID, SERIAL#, DEGREE, REQ_DEGREE
from v$px_session
where qcsid = <SID of the session running the parallel statement>;
Here you also see the requested degree of parallelism and the actual DOP used.
You can easily check this from the explain plan of the query. In the case of PL/SQL you can also trace the procedure and check the TKPROF file.
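For example, a minimal sketch of the plan check (some_table is an assumption):
explain plan for
    select /*+ parallel(a, 2) */ * from some_table a;

-- a parallel plan shows PX COORDINATOR, PX SEND, and PX RECEIVE operations
select * from table(dbms_xplan.display);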

Copy everything from one schema to another in Oracle, but only the first n rows

I want to recreate the complete structure of multiple very large schemas (size in GB/TB) in another schema, but when filling the tables I only want the first n rows.
Right now I am using the following statement to copy the tables but this works only if there are no foreign key constraints.
create table DEV_OWN.mytable as select * from TEST_OWN.mytable where rownum < 10
I want to make a script that loops through all tables and copies the first n rows (or more or fewer if required by a foreign key), and also the indexes, views, packages, stored procedures and preferably everything else, so that the resulting schema is a replica of the original but with only a limited number of records.
Since I have to run this script often I would like it to be as optimal as possible.
As @Aleksej has suggested, you can export the schema and then import it again.
Alternatively, you can use EXECUTE IMMEDIATE to do this.
You can access the system views, such as ALL_TABLES, ALL_INDEXES and ALL_TRIGGERS.
This will allow you to build dynamic SQL statements that you can run with the EXECUTE IMMEDIATE command, but this way is more complicated than exporting and importing the whole schema.
Here is a simple example for creating and filling the tables:
declare
    v_new_schema varchar2(100) := 'DEV_OWN';
begin
    -- filter on the source schema; without it, every table visible
    -- in ALL_TABLES would be copied
    for rec in (select owner, table_name
                from   all_tables
                where  owner = 'TEST_OWN')
    loop
        execute immediate 'create table ' || v_new_schema || '.' || rec.table_name ||
                          ' as select * from ' || rec.owner || '.' || rec.table_name ||
                          ' where rownum < 10';
    end loop;
end;
/
In this example, only the tables are created, without the constraints, triggers, or anything else belonging to them.
If you need it all, then it's actually easier to dump the whole schema.
I think the best solution for your case is to run expdp with a QUERY (where) clause and then run impdp.
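A minimal sketch of that approach (the credentials, directory, and dump file name are assumptions, and quoting of the QUERY clause varies by OS shell):
# export only the first rows of each table in the source schema
expdp system schemas=TEST_OWN directory=DATA_PUMP_DIR dumpfile=test_own.dmp query='"WHERE rownum < 10"'
# import into the target schema
impdp system remap_schema=TEST_OWN:DEV_OWN directory=DATA_PUMP_DIR dumpfile=test_own.dmp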

Running a SQL script on a slow network

I have a SQL script which creates my application's tables, sequences, triggers etc. and inserts around 10k rows of data.
I am on a slow network, and when I run this script from my local machine it takes a long time to finish.
I am wondering if there is any support in SQL*Plus (or SQL Developer) for running this script on the server, so that the entire script is first transported to the server, executed there, and then returns, say, a log file of the execution.
No, there is not. There are some things you can do that might make the data load go faster, such as using SQL*Loader if you are doing individual inserts, and increasing your commit interval. However, I would have to see the code to really help very much.
If you have access to the remote server on which the database is hosted, and you can execute sqlplus on that server, sure you can:
Log in or SSH (depending upon the OS - Windows or *nix) to the server.
Create your SQL script (myscript.sql) over there.
Log in to SQL*Plus and execute it using the command @myscript.sql.
There is rarely a need to run these kinds of scripts on the server. A few simple changes to batch commands can significantly improve performance. All of the changes below combine multiple statements, reducing the number of network round-trips. Some of them also decrease parsing time, which will significantly improve performance even if the scripts run on the server.
Combine INSERTs into a single statement
Replace individual inserts:
insert into some_table values(1);
insert into some_table values(2);
...
with combined inserts like this:
insert into some_table
select 1 from dual union all
select 2 from dual union all
...
Use PL/SQL blocks
Replace individual DDL:
create sequence sequence1;
create sequence sequence2;
with a PL/SQL block:
begin
execute immediate 'create sequence sequence1';
execute immediate 'create sequence sequence2';
end;
/
Use inline constraints
Combine DDL as much as possible. For example, use this statement:
create table some_table(a number not null);
Instead of this:
create table some_table(a number);
alter table some_table modify a not null;
