ORA-12839 when I run parallel DML in my ATP instance? - oracle

I am testing ATP with my application and get the following error:
ORA-12839 Cannot Modify An Object In Parallel After Modifying It.
Is there any way to disable the parallel DML on the ATP without making changes to the application code?
DROP TABLE objects PURGE;
CREATE TABLE objects
AS
SELECT *
FROM user_objects;
UPDATE /*+ parallel (objects) */ objects
SET
object_id = object_id + 1000;
SELECT *
FROM objects;

Do NOT use the HIGH or MEDIUM services; on those, parallelism is built in and enabled out of the box without you actively asking for it.
You should either use the transactional services (LOW, TP, TPURGENT) or you can disable parallel DML with "alter session disable parallel dml".
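For the second option, a minimal sketch of the session-level workaround (run it in the same session that issues the DML; the table mirrors the test case above, and no application SQL needs to change):
-- Disable parallel DML for this session only.
alter session disable parallel dml;
-- The update is now performed serially, so the follow-up query in the
-- same transaction no longer raises ORA-12839.
UPDATE /*+ parallel (objects) */ objects
SET
object_id = object_id + 1000;
SELECT *
FROM objects;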
Here is the same script, running on the LOW service -
select sys_context('userenv', 'service_name') from dual;
DROP TABLE objects PURGE;
CREATE TABLE objects
AS
SELECT *
FROM user_objects;
UPDATE /*+ parallel (objects) */ objects
SET
object_id = object_id + 1000;
SELECT *
FROM objects;
But wait, what are these 'LOW' or 'HIGH' services?
(Docs)
Note the word 'parallel' in the descriptions -
The basic characteristics of these consumer groups are:
HIGH: Highest resources, lowest concurrency. Queries run in parallel.
MEDIUM: Less resources, higher concurrency. Queries run in parallel.
You can modify the MEDIUM service concurrency limit. See Change MEDIUM
Service Concurrency Limit for more information.
LOW: Least resources, highest concurrency. Queries run serially.

Related

What is the difference between NOPARALLEL and PARALLEL 1 in Oracle?

What is the difference between NOPARALLEL and PARALLEL 1? If I create three tables like so:
CREATE TABLE t0 (i NUMBER) NOPARALLEL;
CREATE TABLE t1 (i NUMBER) PARALLEL 1;
CREATE TABLE t2 (i NUMBER) PARALLEL 2;
They show up in the data dictionary as
SELECT table_name, degree FROM user_tables WHERE table_name IN ('T0','T1','T2');
TABLE_NAME   DEGREE
T0           1       <==
T1           1       <==
T2           2
The documentation, however, states quite clearly
NOPARALLEL: Specify NOPARALLEL for serial execution. This is the default.
PARALLEL integer: Specification of integer indicates the degree of parallelism, which is the number of parallel threads used in the parallel operation. Each parallel thread may use one or two parallel execution servers.
So, NOPARALLEL is definitely serial, while PARALLEL 1 uses one thread, which may use one or two parallel servers??? But how can Oracle distinguish between both of them when the data dictionary stores the same value 1 for both?
BTW, the CREATE TABLE sys.tab$ statement in ?/rdbms/admin/dcore.bsq has the comment
/*
* Legal values for degree, instances:
* NULL (used to represent 1 on disk/dictionary and implies noparallel), or
* 2 thru EB2MAXVAL-1 (user supplied values), or
* EB2MAXVAL (implies use default value)
*/
degree number, /* number of parallel query slaves per instance */
instances number, /* number of OPS instances for parallel query */
There is no difference between NOPARALLEL and PARALLEL 1 - those options are stored the same way and behave the same way. This is a documentation bug because Oracle will never use two parallel execution servers for PARALLEL 1. We can test this situation by looking at V$PX_PROCESS and by understanding the producer/consumer model of parallelism.
How to Test Parallelism
There are many ways to measure the amount of parallelism, such as the execution plan or looking at GV$SQL.USERS_EXECUTING. But one of the best ways is to use the view GV$PX_PROCESS. The following query will show all the parallel servers currently being used:
select *
from gv$px_process
where status <> 'AVAILABLE';
Producer/Consumer Model
The Using Parallel Execution chapter of the VLDB and Partitioning Guide is worth reading if you want to fully understand Oracle parallelism. In particular, read the Producer/Consumer Model section of the manual to understand when Oracle will double the number of parallel servers.
In short - Each operation is executed in parallel separately, but the operations need to feed data into each other. A full table scan may use 4 parallel servers to read the data but the group by or order by operations need another 4 parallel servers to hash or sort the data. While the degree of parallelism is 4, the number of parallel servers is 8. This is what the SQL Language Reference means by the sentence "Each parallel thread may use one or two parallel execution servers."
Oracle doesn't just randomly double the number of servers. The doubling only happens for certain operations like an ORDER BY, which lets us test precisely when Oracle is enabling parallelism. The below tests demonstrate that Oracle will not double 1 parallel thread to 2 parallel servers.
Tests
Create these three tables:
create table table_noparallel noparallel as select level a from dual connect by level <= 1000000;
create table table_parallel_1 parallel 1 as select level a from dual connect by level <= 1000000;
create table table_parallel_2 parallel 2 as select level a from dual connect by level <= 1000000;
Run the below queries, and while they are running use a separate session to run the previous query against GV$PX_PROCESS. It may be helpful to use an IDE here, because you can fetch just the first N rows and keep the cursor open, which still counts as using the parallel servers.
--0 rows:
select * from table_noparallel;
--0 rows:
select * from table_noparallel order by 1;
--0 rows:
select * from table_parallel_1;
--0 rows:
select * from table_parallel_1 order by 1;
--2 "IN USE":
select * from table_parallel_2;
--4 "IN USE":
select * from table_parallel_2 order by 1;
Notice that the NOPARALLEL and the PARALLEL 1 table work exactly the same way and neither of them use any parallel servers. But the PARALLEL 2 table will cause the number of parallel execution servers to double when the results are ordered.
Why is PARALLEL 1 Even Allowed?
Why doesn't Oracle just force the PARALLEL clause to only accept numbers larger than one and avoid this ambiguity? After all, the compiler already enforces a limit; the clause PARALLEL 0 will raise the error "ORA-12813: value for PARALLEL or DEGREE must be greater than 0".
I would guess that allowing a numeric value to mean "no parallelism" can make some code simpler. For example, I've written programs where the DOP was calculated and passed as a variable. If only numbers are used, the dynamic SQL is as simple as:
v_sql := 'create table test1(a number) parallel ' || v_dop;
If we had to use NOPARALLEL, the code gets a bit uglier:
if v_dop = 1 then
    v_sql := 'create table test1(a number) noparallel';
else
    v_sql := 'create table test1(a number) parallel ' || v_dop;
end if;
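For completeness, a runnable sketch of that dynamic-DDL idea (v_dop and test1 are illustrative names only, not from any real application):
declare
    v_dop number := 4;  -- in a real program this would be calculated
    v_sql varchar2(200);
begin
    v_sql := 'create table test1(a number) parallel ' || v_dop;
    execute immediate v_sql;
end;
/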

Do the optimizer_use_sql_plan_baselines and resource_manager_cpu_allocation Oracle system parameters have an impact on SQL query performance?

Do the optimizer_use_sql_plan_baselines and resource_manager_cpu_allocation Oracle system parameters have an impact on SQL query performance?
We have two environments, say A and B. On environment A the query runs fine, but on environment B it is taking a long time. I compared the system parameters and found a difference in the values of optimizer_use_sql_plan_baselines and resource_manager_cpu_allocation.
SQL plan baselines and the resource manager certainly could have a huge impact on performance, and you should use the below two queries to confirm or deny that those parameters are related to your problem.
GV$SQL stores which SQL plan baseline is associated with each SQL statement. Compare the SQL_PLAN_BASELINE column from the below query on both environments; if the values match then your problem is not related to baselines:
select sql_plan_baseline, round(elapsed_time/1000000) elapsed_seconds, gv$sql.*
from gv$sql
order by elapsed_time desc;
The Active Session History (ASH) views can tell you if the resource manager is an issue. If your queries are being throttled then you will see an event
named "resmgr:cpu quantum" in the below query. (But pay attention to the counts - don't troubleshoot a wait event if it only happens a small number of times.)
select nvl(event, 'CPU') event, count(*)
from gv$active_session_history
group by event
order by count(*) desc;
Resource manager can have other potentially negative effects. If you're in a data warehouse, and using parallel queries, it's possible that the resource manager has downgraded the queries on one system. If you're using parallel queries, try comparing the SQL monitoring reports from both systems:
select dbms_sqltune.report_sql_monitor(sql_id => '&YOUR_SQL_ID') from dual;
However, I have a feeling that you're using the wrong approach for your problem. There are generally two approaches to Oracle database performance - database tuning and query tuning. If you're only interested in a single query, then you should probably focus on things like the execution plan and the wait events for the operations of that specific query.
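For example, a hedged starting point for single-query tuning, assuming you have already identified the SQL_ID of the slow statement (&YOUR_SQL_ID is a placeholder, as above):
-- Actual execution plan of the statement, from the shared pool:
select * from table(dbms_xplan.display_cursor(sql_id => '&YOUR_SQL_ID', format => 'TYPICAL'));
-- Wait events recorded for that statement only:
select nvl(event, 'CPU') event, count(*)
from gv$active_session_history
where sql_id = '&YOUR_SQL_ID'
group by event
order by count(*) desc;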

Parallel Hints in "Select Into" statement in PL/SQL

Parallel hints in normal SQL queries in Oracle can be used in the following fashion:
select /*+ PARALLEL (A,2) */ * from table A ;
In a similar fashion, can we use parallel hints in PL/SQL for SELECT INTO statements in Oracle?
select /*+ PARALLEL(A,2) */ A.* BULK COLLECT INTO g_table_a from Table A ;
If I use the above syntax, is there any way to verify whether the select statement is executed in parallel?
Edit: Assuming g_table_a is a collection (table type) of the table's ROWTYPE.
If the statement has a short elapsed time, you don't want to run it in parallel. Note that, for example, a query taking 0.5 seconds in serial execution could take 2.5 seconds in parallel, as most of the overhead is in setting up the parallel execution.
So, if the query takes a long time, you have enough time to check V$SESSION (use gv$session in RAC) and see all sessions for the user running the query.
select * from gv$session where username = 'your_user'
For serial execution you see only one session; for parallel execution you see one coordinator and additional sessions, up to twice the chosen parallel degree.
Alternatively, use v$px_session, which connects the parallel worker sessions with the query coordinator.
select SID, SERIAL#, DEGREE, REQ_DEGREE
from v$px_session
where qcsid = <SID of the session running the parallel statement>;
Here you also see the requested degree of parallelism (REQ_DEGREE) and the actual DOP used (DEGREE).
You can easily check this from the explain plan of the query. In the case of PL/SQL you can also trace the procedure and check the TKPROF output.
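For example, a rough sketch of both checks (my_table and its alias A are placeholders, not objects from the question):
-- Explain plan: look for parallel operations such as PX COORDINATOR / PX SEND.
explain plan for select /*+ PARALLEL(A,2) */ * from my_table A;
select * from table(dbms_xplan.display);
-- Trace the session that runs the PL/SQL, then format the trace file with tkprof:
exec dbms_monitor.session_trace_enable(waits => true, binds => false);
-- ... run the PL/SQL block with the SELECT ... BULK COLLECT INTO here ...
exec dbms_monitor.session_trace_disable;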

Running PLSQL in parallel

Running PLSQL script to generate load
For some reasons (reproducing errors, ...) I would like to generate some load (with specific actions) in a PL/SQL script.
What I would like to do:
A) Insert 1.000.000 rows in Schema A Table 1
B) In a loop, and ideally in parallel (2 or 3 times):
1) read from Schema-A.Table-1 one row with locking
2) insert it to Schema-B.Table-2
3) delete row from Schema-A.Table-1
Is there a way to run task B in parallel in a PL/SQL script when the script is called?
How would this look?
It's usually better to parallelize SQL statements inside a PL/SQL block, instead of trying to parallelize the entire PL/SQL block:
begin
execute immediate 'alter session enable parallel dml';
insert /*+ append parallel */ into schemaA.Table1 ...
commit;
insert /*+ append parallel */ into schemaB.Table2 ...
commit;
delete /*+ parallel */ from schemaA.Table1 where ...
commit;
dbms_stats.gather_table_stats('SCHEMAA', 'TABLE1', degree => 8);
dbms_stats.gather_table_stats('SCHEMAB', 'TABLE2', degree => 8);
end;
/
Large parallel DML statements usually require less code and run faster than creating your own parallelism in PL/SQL. Here are a few things to look out for:
You must have Enterprise Edition, large tables, decent hardware, and a sane configuration to run parallel SQL.
Setting the DOP is difficult. Using the hint /*+ parallel */ lets Oracle decide but you might want to play around with it by specifying a number, such as /*+ parallel(8) */.
Direct-path writes (the append hint) can be significantly faster. But they lock the entire table and the new results won't be recoverable until after the next backup.
Check the execution plan to ensure that direct-path writes are used - look for the operation LOAD AS SELECT instead of LOAD TABLE CONVENTIONAL. Tuning parallel SQL statements is best done with the Real-Time SQL Monitoring reports, found in select dbms_sqltune.report_sql_monitor(sql_id => 'SQL_ID') from dual;
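As a quick sketch of that plan check (target_table and source_table are placeholders; EXPLAIN PLAN predicts the plan, while the SQL monitoring report shows what actually ran):
explain plan for
    insert /*+ append parallel */ into target_table
    select * from source_table;
-- LOAD AS SELECT here means direct-path; LOAD TABLE CONVENTIONAL means it is not.
select * from table(dbms_xplan.display);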
You might want to read through the Parallel Execution Concepts chapter of the manual. Oracle parallelism can be tricky but it can also make your processes runs orders of magnitude faster if you're careful.
If the objective is a fast load, and parallel is just an attempt to get that, then:
Create the new table with CREATE TABLE newtemp AS SELECT ... FROM old.
Then CREATE TABLE old_remaining AS SELECT ... FROM old WHERE NOT EXISTS (... newtemp ...).
Then drop the old table and rename the new tables into place. These operations will use the parallel options set at the database level.
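A rough sketch of that approach, with illustrative names throughout (old_table, newtemp, old_remaining, the id column and the filter condition are all placeholders):
-- Copy the rows to be moved, with a parallel direct-path load:
create table newtemp parallel 4 nologging as
select * from old_table where status = 'DONE';   -- placeholder condition
-- Keep the rows that stay behind:
create table old_remaining parallel 4 nologging as
select o.* from old_table o
where not exists (select null from newtemp n where n.id = o.id);
-- Swap the new table into place:
drop table old_table purge;
rename old_remaining to old_table;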

Optimizing SQL by using temporary table in Oracle

I have a data cleanup procedure which clears the same data from the card column of two tables.
Both of these update statements use the same subquery for detecting which rows should be updated.
UPDATE table_1 SET card = NULL WHERE id in
(select id from sub_table WHERE /* complex clause here */);
UPDATE table_2 SET card = NULL WHERE id in
(select id from sub_table WHERE /* complex clause here */);
Is using an Oracle temporary table a good solution for optimizing my code?
CREATE TEMPORARY TABLE tmp_sub_table AS
select id from sub_table WHERE /* complex clause here */;
UPDATE table_1 SET card = NULL WHERE id in (select * from tmp_sub_table);
UPDATE table_2 SET card = NULL WHERE id in (select * from tmp_sub_table);
Should I use a local temporary table or a global temporary table?
Global Temporary Tables are persistent data structures. When we INSERT the data is written to disk, when we SELECT the data is read from disk. So that's quite a lot of Disk I/O: the cost saving from running the same query twice must be greater than the cost of all those writes and reads.
One thing to watch out for is that GTTs are built on a temporary tablespace, so you might get contention with other long-running processes which are doing sorts, etc. It's a good idea to have a separate temporary tablespace just for GTTs, but not many DBAs do this.
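If you do go the GTT route, here is a minimal sketch (ON COMMIT PRESERVE ROWS is one possible choice; the table and column names follow the question):
-- Created once as a permanent object; the rows are private to each session.
CREATE GLOBAL TEMPORARY TABLE tmp_sub_table (id NUMBER)
ON COMMIT PRESERVE ROWS;
-- Populate it once per run, then reuse it for both updates:
INSERT INTO tmp_sub_table (id)
select id from sub_table WHERE /* complex clause here */;
UPDATE table_1 SET card = NULL WHERE id in (select id from tmp_sub_table);
UPDATE table_2 SET card = NULL WHERE id in (select id from tmp_sub_table);
COMMIT;  -- with PRESERVE ROWS the temporary data survives until the session ends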
An alternative solution would be to use a collection to store subsets of the records in memory and use bulk processing.
declare
    l_ids sys.odcinumberlist;
    cursor l_cur is
        select id from sub_table WHERE /* complex clause here */
        order by id
        ;
begin
    open l_cur;
    loop
        -- Fetch and process the ids in batches of 5,000 rows at a time.
        fetch l_cur bulk collect into l_ids limit 5000;
        exit when l_ids.count() = 0;
        update table_1
        set card = null
        where id in (select column_value from table(l_ids));
        update table_2
        set card = null
        where id in (select column_value from table(l_ids));
    end loop;
    close l_cur;
end;
"updating many rows with one update statement ... works much faster than updating separately using Looping over cursor"
That is the normal advice. But this is a bulk operation: it is updating five thousand rows at a time, so it's faster than row-by-row. The size of the batch is governed by the BULK COLLECT ... LIMIT clause: you don't want to make the value too high, because the collection is held in session memory, but as you're only selecting one column - a number - maybe you can make it higher.
As always tuning is a matter of benchmarking. Have you established that running this sub-query twice is a high-cost operation?
select id from sub_table WHERE /* complex clause here */
If it seems too slow you need to test other approaches and see whether they're faster. Maybe a Global Temporary Table is faster than a bulk operation. Generally memory access is faster than disk access, but you need to see which works best for you.
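If you want a quick benchmark of just that subquery, a hedged sketch (DBMS_UTILITY.GET_TIME returns hundredths of a second; counting the rows is only an approximation of the cost of using the subquery inside the updates):
set serveroutput on
declare
    l_start pls_integer;
    l_cnt   pls_integer;
begin
    l_start := dbms_utility.get_time;
    select count(*) into l_cnt
    from (select id from sub_table WHERE /* complex clause here */);
    dbms_output.put_line('rows=' || l_cnt || '  elapsed=' ||
        (dbms_utility.get_time - l_start) / 100 || ' seconds');
end;
/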
