How to calculate context switching for an anonymous PL/SQL block - Oracle

How many context switches will happen for the PL/SQL block given below?
Declare
  ll_row_count number := 0;
begin
  for i in (select * from employee)
  loop
    ll_row_count := ll_row_count + 1;
    update employee
    set    emp_name = upper(emp_name)
    where  emp_id = i.emp_id;
    commit;
  end loop;
  dbms_output.put_line('Total rows updated' || ll_row_count);
end;
/

Context switches can happen for many reasons, including multitasking and interrupts. When speaking about Oracle database development, we generally mean only the switches between the PL/SQL engine and the SQL engine.
In your example, PL/SQL is calling SQL. There is one context switch when PL/SQL calls SQL and a second one when SQL returns to PL/SQL.
PL/SQL calls SQL to PARSE a statement, to EXECUTE a statement or to FETCH rows from a query. We can measure the number of calls using the SQL trace facility and the trace profiler called TKPROF.
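To illustrate, here is one way such a trace could be produced (a minimal sketch run from SQL*Plus; the tracefile identifier is just an arbitrary label):
alter session set tracefile_identifier = 'ctx_switch_test';
exec dbms_monitor.session_trace_enable(waits => false, binds => false)
-- run the anonymous block from the question here
exec dbms_monitor.session_trace_disable
-- then format the raw trace file on the database server, e.g.:
-- tkprof <trace_file>.trc ctx_switch_report.txt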
I created a table with 199 rows and traced the execution of your code:
There were 3 PARSE calls: 1 for the SELECT, 1 for the UPDATE and 1 for the COMMIT.
There were 2 FETCH calls. When you code for i in (select * from employee) loop, PL/SQL will automatically fetch 100 rows at a time (in order to reduce the number of context switches).
There were 399 EXECUTE calls: 1 for the SELECT, 199 for the UPDATE and 199 for the COMMIT.
So there were 404 calls from PL/SQL to SQL, and 404 returns from SQL to PL/SQL, making 808 context switches in all.
We can cut the number of context switches almost in half by committing once after the loop. It is strongly recommended to avoid committing too frequently. If you commit inside a loop over a SELECT, you can also get an exception (typically ORA-01555: snapshot too old) because the UNDO the query still needs is no longer available.
Generally, the best way to reduce context switches and enhance performance is to use set-based SQL. Failing that, we can process a bunch of rows at a time using BULK COLLECT and FORALL.
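For illustration, a set-based rewrite of the block in the question might look like this (a sketch; SQL%ROWCOUNT replaces the manual counter, and the UPDATE and COMMIT each cross the PL/SQL-to-SQL boundary only once, regardless of the number of rows):
declare
  ll_row_count number;
begin
  update employee
  set    emp_name = upper(emp_name);
  ll_row_count := sql%rowcount;   -- rows updated, no manual counting needed
  commit;
  dbms_output.put_line('Total rows updated: ' || ll_row_count);
end;
/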
Best regards,
Stew Ashton

Context switching
While executing a code block or query, if the executing engine needs to hand work over to the other engine, that handover is called a context switch. Here "engine" refers to the SQL engine and the PL/SQL engine.
In other words, while your code is executing in PL/SQL, each time a SQL statement is reached the PL/SQL engine has to pass that statement to the SQL engine; the SQL engine produces the result and passes it back to the PL/SQL engine. Hence two context switches happen.
Now, coming to your block, please see the inline comments. We will use N as the number of records in table employee.
Declare
  ll_row_count number := 0;
begin
  for i in (select * from employee)  -- (CEIL(N/100)*2) context switch
  loop
    ll_row_count := ll_row_count + 1;
    update employee
    set    emp_name = upper(emp_name)
    where  emp_id = i.emp_id;        -- 2*N context switch
    commit;                          -- 2*N context switch
  end loop;
  dbms_output.put_line('Total rows updated' || ll_row_count);
end;
/
Now, why do we divide N by 100?
Because in Oracle 10g and above the cursor FOR loop is optimized to fetch rows in bulk, 100 at a time (an implicit BULK COLLECT with LIMIT 100), to reduce context switching in the loop.
So finally, the number of context switches is: (CEIL(N/100)*2) + 2*N + 2*N
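For example, with N = 199 (as in the traced example above) that gives (CEIL(199/100)*2) + 2*199 + 2*199 = 4 + 398 + 398 = 800 context switches; the trace above arrived at 808 because it also counted the PARSE calls and the initial EXECUTE of the SELECT.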
Cheers!!

Related

Explicit cursors using bulk collect vs. implicit cursors: any performance issues?

In an older article from Oracle Magazine (now online as On Cursor FOR Loops) Steven Feuerstein showed an optimization for explicit cursor for loops using bulk collect (listing 4 in the online article):
DECLARE
  CURSOR employees_cur IS SELECT * FROM employees;
  TYPE employee_tt IS TABLE OF employees_cur%ROWTYPE INDEX BY PLS_INTEGER;
  l_employees employee_tt;
BEGIN
  OPEN employees_cur;
  LOOP
    FETCH employees_cur BULK COLLECT INTO l_employees LIMIT 100;
    -- process l_employees using pl/sql only
    EXIT WHEN employees_cur%NOTFOUND;
  END LOOP;
  CLOSE employees_cur;
END;
I understand that bulk collect enhances performance because there are fewer context switches between SQL and PL/SQL.
My question is about implicit cursor for loops:
BEGIN
  FOR S IN (SELECT * FROM employees)
  LOOP
    -- process current record of S
  END LOOP;
END;
Is there a context switch on each iteration for each record? Is the problem the same as with explicit cursors, or is it somehow optimized "behind the scenes"? Would it be better to rewrite the code using explicit cursors with bulk collect?
Starting from Oracle 10g the optimizing PL/SQL compiler can automatically convert cursor FOR loops into BULK COLLECT loops with a default array size of 100 (this happens at the default PLSQL_OPTIMIZE_LEVEL of 2 or higher).
So generally there's no need to convert implicit FOR loops into BULK COLLECT loops.
But sometimes you may want to use BULK COLLECT instead - for example, if the default array size of 100 rows per fetch does not satisfy your requirements, or if you want to process each fetched batch as a set (for example with FORALL).
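For illustration, an explicit version with a larger array size and set-based processing of each batch might look like this (a sketch; employees_archive is a hypothetical table with the same structure as employees):
DECLARE
  CURSOR employees_cur IS SELECT * FROM employees;
  TYPE employee_tt IS TABLE OF employees_cur%ROWTYPE INDEX BY PLS_INTEGER;
  l_employees employee_tt;
BEGIN
  OPEN employees_cur;
  LOOP
    FETCH employees_cur BULK COLLECT INTO l_employees LIMIT 1000;  -- larger than the implicit 100
    EXIT WHEN l_employees.COUNT = 0;
    FORALL i IN 1 .. l_employees.COUNT                             -- one context switch per batch
      INSERT INTO employees_archive VALUES l_employees(i);         -- hypothetical archive table
  END LOOP;
  CLOSE employees_cur;
END;
/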
The same question was answered by Tom Kyte. You can check it here: Cursor FOR loops optimization in 10g
Yes, even if your -- process current record of S contains pure SQL and no PL/SQL, you still have context switches, because the FOR ... LOOP is PL/SQL but the query is SQL.
Whenever possible you should prefer to process your data with single SQL statements (consider also MERGE, not only DELETE, UPDATE and INSERT); in most cases they are faster than row-by-row processing.
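For illustration, such a single-statement MERGE might look like this (a sketch; emp_backup and the column names are hypothetical):
MERGE INTO emp_backup b
USING (SELECT employee_id, salary FROM employees) e
ON (b.employee_id = e.employee_id)
WHEN MATCHED THEN
  UPDATE SET b.salary = e.salary
WHEN NOT MATCHED THEN
  INSERT (employee_id, salary) VALUES (e.employee_id, e.salary);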
Note, you will not gain any performance if you loop through l_employees and perform DML for each record.
A LIMIT of 100 is rather useless. Processing only 100 rows at a time is almost the same as processing rows one by one - Oracle does not run on a Z80 with 64 KB of memory.

INSERT INTO SELECT vs. INSERT from Cursor in PL/SQL

So I have this project I'm working on at work and I've noticed a lot of people using the INSERT INTO SELECT method:
INSERT INTO candy_tbl (candy_name,
candy_type,
candy_qty)
SELECT food_name,
food_type,
food_qty
FROM food_tbl WHERE food_type = 'C';
However, I use the following cursor method:
FOR rec IN ( SELECT food_name,
                    food_type,
                    food_qty
             FROM   food_tbl
             WHERE  food_type = 'C')
LOOP
  INSERT INTO candy_tbl(candy_name,
                        candy_type,
                        candy_qty)
  VALUES (rec.food_name,
          rec.food_type,
          rec.food_qty);
END LOOP;
This will be going into a PL/SQL package. My question is, which is usually the 'preferred' method and when? I usually choose the cursor method because it gives me a little more flexibility with exception handling. But, I could see how it might be a performance issue when inserting a whole lot of records.
The FOR LOOP requires a fetch for each row from the cursor, and the INSERTs in the loop happen one at a time. PL/SQL runs in the PL/SQL engine and SQL runs in the SQL engine, so the FOR LOOP:
- runs in the PL/SQL engine
- sends the query to the SQL engine to execute it and open a cursor, then switches back to the PL/SQL engine
- on each iteration does a FETCH from the cursor and then an INSERT, which means another switch to the SQL engine and back to the PL/SQL engine
Each switch between SQL and PL/SQL, as well as each FETCH, is expensive.
The INSERT INTO SELECT is sent to the SQL engine once, runs there until done, and then control returns to PL/SQL.
Other advantages exist, but that is the main PL/SQL difference between the two methods.
The first one is faster since it is basically a single set-based statement.
The latter operates row by row; for a very large table there will be a big difference in performance.
If you really need the flexibility of cursor processing but with better performance, there is a third, intermediate option available - BULK COLLECT and FORALL with the SAVE EXCEPTIONS option. However, the trade-off is more code complexity. The following is the basic structure.
declare
  error_in_forall exception;
  pragma exception_init (error_in_forall, -24381);
  cursor c_select is select ... ;
  type c_array_type is table of c_select%rowtype;
  v_select_data c_array_type;
begin
  open c_select;
  loop
    fetch c_select
      bulk collect
      into v_select_data
      limit 100;                       -- fetch in batches
    exit when v_select_data.count = 0;
    forall rdata in v_select_data.first .. v_select_data.last save exceptions
      insert into ( ... ) values (v_select_data(rdata).column ... );
  end loop;
  close c_select;
exception
  when error_in_forall then
    null;  -- <Process Oracle generated bulk error collection>
end;
When the FORALL completes, if any errors occurred during execution of the INSERT, the exception fires once; Oracle will have built a SQL%BULK_EXCEPTIONS collection containing the index value and the error code of each failed row. See the PL/SQL Language Reference for your version for details.
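For illustration, the error-processing placeholder in the handler might look something like this (a sketch using DBMS_OUTPUT; in real code you would probably log the errors instead):
exception
  when error_in_forall then
    for i in 1 .. sql%bulk_exceptions.count loop
      dbms_output.put_line(
           'Row '      || sql%bulk_exceptions(i).error_index
        || ' failed: ' || sqlerrm(-sql%bulk_exceptions(i).error_code));
    end loop;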

PL/SQL Append_Values Hint gives error message

I am having trouble doing a large number of inserts into an Oracle table using PL/SQL. My code goes row by row, and for each row it makes a calculation to determine the number of rows it needs to insert into another table. The conventional inserts work, but the code takes a long time to run for a large number of rows. To speed up the inserts I tried to use the APPEND_VALUES hint as in the following example:
BEGIN
  FOR iter IN 1..100 LOOP
    INSERT /*+ APPEND_VALUES */ INTO test_append_value_hint VALUES (iter);
  END LOOP;
END;
I get the following error message when doing this:
ORA-12838: cannot read/modify an object after modifying it in parallel
ORA-06512: at line 3
12838. 00000 - "cannot read/modify an object after modifying it in parallel"
*Cause: Within the same transaction, an attempt was made to add read or
modification statements on a table after it had been modified in parallel
or with direct load. This is not permitted.
*Action: Rewrite the transaction, or break it up into two transactions
one containing the initial modification and the second containing the
parallel modification operation.
Does anyone have ideas of how to make this code work, or how to quickly insert large numbers of rows into another table?
You get this error because each of your INSERTs executes as a separate DML statement, and the APPEND_VALUES hint makes each one a direct-path insert. Oracle prevents reads and writes on a table to which data have been added by direct-path insert until you commit.
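For illustration, the restriction can be demonstrated outside a loop (a sketch; t and source_t are hypothetical tables):
insert /*+ append */ into t select * from source_t;  -- direct-path insert
select count(*) from t;                               -- fails with ORA-12838 in the same transaction
commit;
select count(*) from t;                               -- works after the commit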
Technically you can use PL/SQL collections and FORALL instead:
declare
  type array_t is table of number index by pls_integer;
  a_t array_t;
begin
  for i in 1..100 loop
    a_t(i) := i;
  end loop;
  forall i in 1..100
    insert /*+ append_values */ into t values (a_t(i));
end;
/
But the question Justin asked still stands - where is your data coming from, and why can't you use the usual INSERT /*+ APPEND */ INTO ... SELECT FROM approach?
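For illustration, such a set-based direct-path load might look like this (a sketch; here the 100 values are simply generated from dual):
insert /*+ append */ into test_append_value_hint
select level from dual connect by level <= 100;
commit;  -- commit before reading or modifying the table again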
Hi. You can use a COMMIT after each INSERT, as below:
BEGIN
  FOR iter IN 1..100 LOOP
    INSERT /*+ APPEND_VALUES */ INTO test_append_value_hint VALUES (iter);
    COMMIT;
  END LOOP;
END;
You cannot execute a second DML statement against a table in the same transaction after modifying it with a direct-path insert, and hence this error is thrown.
So, commit the previous transaction on that table and then continue with the next one.

Why Oracle BULK DML operations are faster [duplicate]

Can you help me to understand this phrase?
Without the bulk bind, PL/SQL sends a SQL statement to the SQL engine
for each record that is inserted, updated, or deleted leading to
context switches that hurt performance.
Within Oracle, there is a SQL virtual machine (VM) and a PL/SQL VM. When you need to move from one VM to the other VM, you incur the cost of a context shift. Individually, those context shifts are relatively quick, but when you're doing row-by-row processing, they can add up to account for a significant fraction of the time your code is spending. When you use bulk binds, you move multiple rows of data from one VM to the other with a single context shift, significantly reducing the number of context shifts, making your code faster.
Take, for example, an explicit cursor. If I write something like this
DECLARE
  CURSOR c
  IS SELECT *
     FROM   source_table;
  l_rec source_table%rowtype;
BEGIN
  OPEN c;
  LOOP
    FETCH c INTO l_rec;
    EXIT WHEN c%notfound;
    INSERT INTO dest_table( col1, col2, ... , colN )
      VALUES( l_rec.col1, l_rec.col2, ... , l_rec.colN );
  END LOOP;
END;
then every time I execute the fetch, I am
- Performing a context shift from the PL/SQL VM to the SQL VM
- Asking the SQL VM to execute the cursor to generate the next row of data
- Performing another context shift from the SQL VM back to the PL/SQL VM to return my single row of data
And every time I insert a row, I'm doing the same thing. I am incurring the cost of a context shift to ship one row of data from the PL/SQL VM to the SQL VM, asking the SQL to execute the INSERT statement, and then incurring the cost of another context shift back to PL/SQL.
If source_table has 1 million rows, that's 4 million context shifts which will likely account for a reasonable fraction of the elapsed time of my code. If, on the other hand, I do a BULK COLLECT with a LIMIT of 100, I can eliminate 99% of my context shifts by retrieving 100 rows of data from the SQL VM into a collection in PL/SQL every time I incur the cost of a context shift and inserting 100 rows into the destination table every time I incur a context shift there.
If I rewrite my code to make use of bulk operations
DECLARE
  CURSOR c
  IS SELECT *
     FROM   source_table;
  TYPE nt_type IS TABLE OF source_table%rowtype;
  l_arr nt_type;
BEGIN
  OPEN c;
  LOOP
    FETCH c BULK COLLECT INTO l_arr LIMIT 100;
    EXIT WHEN l_arr.count = 0;
    FORALL i IN 1 .. l_arr.count
      INSERT INTO dest_table( col1, col2, ... , colN )
        VALUES( l_arr(i).col1, l_arr(i).col2, ... , l_arr(i).colN );
  END LOOP;
END;
Now, every time I execute the fetch, I retrieve 100 rows of data into my collection with a single set of context shifts. And every time I do my FORALL insert, I am inserting 100 rows with a single set of context shifts. If source_table has 1 million rows, this means that I've gone from 4 million context shifts to 40,000 context shifts. If context shifts accounted for, say, 20% of the elapsed time of my code, I've eliminated 19.8% of the elapsed time.
You can increase the size of the LIMIT to further reduce the number of context shifts but you quickly hit the law of diminishing returns. If you used a LIMIT of 1000 rather than 100, you'd eliminate 99.9% of the context shifts rather than 99%. That would mean that your collection was using 10x more PGA memory, however. And it would only eliminate 0.18% more elapsed time in our hypothetical example. You very quickly reach a point where the additional memory you're using adds more time than you save by eliminating additional context shifts. In general, a LIMIT somewhere between 100 and 1000 is likely to be the sweet spot.
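(To check the arithmetic: with a LIMIT of 100 the shifts drop from 4,000,000 to 40,000, saving 20% * 99% = 19.8% of the elapsed time; with a LIMIT of 1000 they drop to 4,000, saving 20% * 99.9% = 19.98%, i.e. only 0.18% more.)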
Of course, in this example, it would be more efficient still to eliminate all context shifts and do everything in a single SQL statement
INSERT INTO dest_table( col1, col2, ... , colN )
SELECT col1, col2, ... , colN
FROM source_table;
It would only make sense to resort to PL/SQL in the first place if you're doing some sort of manipulation of the data from the source table that you can't reasonably implement in SQL.
Additionally, I used an explicit cursor in my example intentionally. If you are using implicit cursors, in recent versions of Oracle, you get the benefits of a BULK COLLECT with a LIMIT of 100 implicitly. There is another StackOverflow question that discusses the relative performance benefits of implicit and explicit cursors with bulk operations that goes into more detail about those particular wrinkles.
As I understand this, there are two engines involved, the PL/SQL engine and the SQL engine. Executing code that makes use of one engine at a time is more efficient than switching between the two.
Example:
INSERT INTO t VALUES (1)
is processed by the SQL engine, while
FOR Lcntr IN 1..20 LOOP
  NULL;  -- PL/SQL-only work
END LOOP;
is executed by the PL/SQL engine.
If you combine the two statements above, putting the INSERT in the loop,
FOR Lcntr IN 1..20 LOOP
  INSERT INTO t VALUES (1);
END LOOP;
Oracle will switch between the two engines for each of the 20 iterations.
In this case a bulk insert (FORALL) is recommended; it builds the data in the PL/SQL engine and then sends all of it to the SQL engine in a single switch, as sketched below.
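For illustration, a bulk version of the loop above might look like this (a sketch; t is the same single-column table as in the example):
DECLARE
  TYPE num_tt IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
  l_vals num_tt;
BEGIN
  FOR Lcntr IN 1..20 LOOP        -- fill the collection in the PL/SQL engine
    l_vals(Lcntr) := 1;
  END LOOP;
  FORALL i IN 1..20              -- one switch sends all 20 rows to the SQL engine
    INSERT INTO t VALUES (l_vals(i));
END;
/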

