Users have noticed an "ORA-12838: cannot read/modify an object after modifying it in parallel" error in their ADB instance.
The low, medium and high services seem to have parallel DML enabled by default.
Do we have any recommendations to deal with parallel DML execution on ADB Shared?
On ADB Shared, parallel DML is enabled by default. If your application does not support parallel DML (P-DML), a workaround is to disable it at the session level as shown below. You can also apply this automatically via a LOGON trigger.
ALTER SESSION DISABLE PARALLEL DML;
Parallel DML brings performance benefits, but with a "touch-once" limitation: once an object has been modified in parallel, the same transaction cannot read or modify it again until you commit, which is exactly what ORA-12838 reports. In general, the best practice is to apply parallel DML where it brings benefits and to fall back to serial DML when truly required.
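If you want to disable parallel DML for every session of a particular schema, the LOGON trigger mentioned above could look roughly like this (a sketch; the trigger name and the APP_USER schema are placeholders):

```sql
-- Sketch: disable parallel DML automatically at logon for one schema.
-- Trigger and schema names (APP_USER) are placeholders for illustration.
CREATE OR REPLACE TRIGGER app_user.disable_pdml_trg
AFTER LOGON ON app_user.SCHEMA
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION DISABLE PARALLEL DML';
END;
/
```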
Related
We have an Oracle 19c database (19.0.0.0.ru-2021-04.rur-2021-04.r1) on AWS RDS, hosted on a 4 CPU / 32 GB RAM instance. The database is not big (35 GB); the PGA Aggregate Limit is 8 GB and the Target is 4 GB. Whenever the scheduled internal Oracle Auto Optimizer Stats Collection job (ORA$AT_OS_OPT_SY_nnn) runs, it consumes substantially high PGA memory (approx. 7 GB). Sometimes this makes the database unstable, AWS loses communication with the RDS instance, and it restarts the database.
We thought this might be linked to the existing Oracle bug 30846782 (19C+: Fast/Excessive PGA growth when using DBMS_STATS.GATHER_TABLE_STATS), but Oracle and AWS had already fixed it in the 19c version we are using. There are no application-level operations that consume this much PGA, and the database restarts have always happened while the Auto Optimizer Stats Collection job was running. There are a couple more databases on the same version where the same pattern was observed and the database was restarted by AWS. We have now disabled the job on those databases to avoid further occurrences, but we want to run it again, since disabling it may leave stale statistics in the database.
Any pointers on how to tackle this issue?
I ran into the same issue on my AWS RDS Oracle 18c and 19c instances, even though I am not on the same patch level as you.
In my case, I applied this workaround and it worked.
SQL> alter system set "_fix_control"='20424684:OFF' scope=both;
However, before applying this change, I strongly suggest that you test it on your non production environments, and if you can, try to consult with Oracle Support. Dealing with hidden parameters might lead to unexpected side effects, so apply it at your own risk.
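If you do try the workaround, you can verify the current state of the fix control before and after the change by querying V$SYSTEM_FIX_CONTROL (a quick check, not a fix in itself):

```sql
-- Check the current setting of fix control 20424684.
-- VALUE = 0 means the fix is disabled (OFF), 1 means enabled.
SELECT bugno, value, description
FROM   v$system_fix_control
WHERE  bugno = 20424684;
```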
Instead of completely abandoning automatic statistics gathering, try to find the specific objects that are causing the problem. If only a small number of tables are responsible for a large amount of statistics gathering, you can manually analyze those tables or change their preferences.
First, use the SQL below to see which objects are generating the most statistics gathering. According to the test case in bug 30846782, the problem seems to be related only to the number of times DBMS_STATS is called.
select *
from dba_optstat_operations
order by start_time desc;
In addition, you may be able to find specific SQL statements or sessions that allocate a lot of PGA memory with the below query. (However, if the database restarts, the in-memory ASH data may be lost before AWR persists it.)
select u.username, ash.event, ash.sql_id,
       ash.pga_allocated/1024/1024/1024 pga_allocated_gb,
       ash.*
from gv$active_session_history ash
join dba_users u on ash.user_id = u.user_id
where ash.pga_allocated/1024/1024/1024 >= 1
order by ash.sample_time desc;
If the problem is only related to a small number of tables with a large number of partitions, you can manually gather the stats on just that table in a separate session. Once the stats are gathered, the table won't be analyzed again until about 10% of the data is changed.
begin
dbms_stats.gather_table_stats(user, 'PGA_STATS_TEST');
end;
/
It's not uncommon for a database to spend a long time gathering statistics, but it is uncommon for a database to constantly analyze thousands of objects. Running into this bug implies there is something unusual about your database - are you constantly dropping and creating objects, or do you have a large number of objects that have 10% of their data modified every day? You may need to add a manual gather step to a few of your processes.
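To see which tables are approaching the staleness threshold, you can flush the in-memory DML monitoring data and look at DBA_TAB_MODIFICATIONS (a sketch; adjust the owner filter to your own schemas):

```sql
-- Flush in-memory DML monitoring data so the view is current.
EXEC dbms_stats.flush_database_monitoring_info;

-- Tables with the most tracked DML since their last stats gather.
SELECT table_owner, table_name, inserts, updates, deletes, truncated
FROM   dba_tab_modifications
WHERE  table_owner NOT IN ('SYS', 'SYSTEM')
ORDER  BY inserts + updates + deletes DESC;
```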
Turning off the automatic statistics job entirely will eventually cause many performance problems. Even if you can't add manual gathering steps, you may still want to keep the job enabled. For example, if tables are being analyzed too frequently, you may want to increase the table preference for the "STALE_PERCENT" threshold from 10% to 20%:
begin
dbms_stats.set_table_prefs
(
ownname => user,
tabname => 'PGA_STATS_TEST',
pname => 'STALE_PERCENT',
pvalue => '20'
);
end;
/
I would like to know if there is a connection parameter that I can use in the JDBC Thin Oracle connection URL to tell the Oracle DB that I want to use parallelism when processing queries.
The application that should use this parameter generates statements at runtime and fires them against the database, so I can't update or optimize them. Nearly every query runs into a timeout, and the user on the other side gets an error message.
If I take the generated statements and send them with a /*+ parallel */ hint from SQL Developer, I get much better performance.
Maybe someone has a hint how I can achieve better performance.
You could use a logon trigger to force parallel execution of all query statements in the session for which parallelization is possible. This would override any default parallelism property on individual objects.
CREATE OR REPLACE TRIGGER USER1.LOGON_TRG
AFTER LOGON ON SCHEMA
BEGIN
EXECUTE IMMEDIATE 'ALTER SESSION FORCE PARALLEL QUERY PARALLEL 4';
END;
/
https://docs.oracle.com/en/database/oracle/oracle-database/19/vldbg/parameters-parallel-exec.html#GUID-FEDED00B-57AF-4BB0-ACDB-73F43B71754A
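After the trigger fires, you can confirm from within the session that the setting took effect by checking V$SESSION (PQ_STATUS should show FORCED):

```sql
-- Check the parallel query/DML/DDL status of the current session.
SELECT pq_status, pdml_status, pddl_status
FROM   v$session
WHERE  sid = SYS_CONTEXT('USERENV', 'SID');
```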
I'm using Oracle database version - 12.1.0.2.0
I have an Oracle function that I need to ensure is executed serially; I want to restrict concurrent execution of this function by different sessions.
A couple of ways this can be achieved are:
Update a "static" row in a parameter table at the beginning of the function and commit before the function ends. Since no other session will be able to update the same row, this ensures concurrent access is restricted.
Implement the logic using user locks.
Is there any other way this control can be achieved? I've read about "latches", but I believe they are an internal mechanism for controlling access to Oracle data structures (mainly resources in the SGA).
Is there a way to implement a latch (or something similar) to fulfil my requirement?
I understand a latch is lightweight compared to a lock, which is heavier.
Thanks in advance.
Oracle implements DBMS_LOCK.ALLOCATE_UNIQUE for this purpose.
At the beginning of the procedure, allocate a unique lockhandle for a given lockname.
Then REQUEST the lock:
/* lock parallel executions */
DBMS_LOCK.ALLOCATE_UNIQUE( v_lockname, v_lockhandle);
v_res := DBMS_LOCK.REQUEST( lockhandle=>v_lockhandle, release_on_commit => TRUE);
Perform your serial stuff and at the end of the function RELEASE the lock.
v_res := DBMS_LOCK.RELEASE (v_lockhandle);
Do not forget to release the lock in the EXCEPTION section as well, so that an abnormal termination does not leave other sessions blocked.
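Putting the pieces together, the serialized function could look roughly like this (a sketch; the function name and lock name are placeholders, and note that DBMS_LOCK.REQUEST returns 0 on success):

```sql
CREATE OR REPLACE FUNCTION my_serial_func RETURN NUMBER AS
  v_lockhandle VARCHAR2(128);
  v_res        INTEGER;
BEGIN
  -- Map an arbitrary lock name to a handle (idempotent call).
  DBMS_LOCK.ALLOCATE_UNIQUE('MY_SERIAL_FUNC_LOCK', v_lockhandle);

  -- Block until we own the lock (0 = success, 1 = timeout, 2 = deadlock).
  v_res := DBMS_LOCK.REQUEST(lockhandle => v_lockhandle,
                             timeout    => DBMS_LOCK.MAXWAIT);
  IF v_res NOT IN (0, 4) THEN  -- 4 = this session already owns the lock
    RAISE_APPLICATION_ERROR(-20001, 'Could not acquire lock: ' || v_res);
  END IF;

  -- ... serial work goes here ...

  v_res := DBMS_LOCK.RELEASE(v_lockhandle);
  RETURN 0;
EXCEPTION
  WHEN OTHERS THEN
    v_res := DBMS_LOCK.RELEASE(v_lockhandle);  -- avoid leaving the lock held
    RAISE;
END;
/
```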
I have two questions.
First, is there a substantial difference between executing the ALTER SYSTEM FLUSH SHARED_POOL command on the server versus from a client? At my company they taught me that I have to execute that command directly on the server, but I think it is just a command that goes over the network; only a short message is sent, so it shouldn't make a substantial difference, unlike operations that transfer a lot of data. I'm talking about a system that takes about 5 minutes to flush.
Second, how can I flush one instance from another instance?
ALTER SYSTEM FLUSH SHARED_POOL; can be run from either the client or the server, it doesn't matter.
Many DBAs will run the command from the server, for two reasons. First, many DBAs run all commands from the server, usually because they never learned the importance of an IDE. Second, the command ALTER SYSTEM FLUSH SHARED_POOL; only affects one instance in a clustered database. Connecting directly to the server is usually an easy way of ensuring you connect to each database instance of a cluster.
But you can easily flush the shared pool from all instances without directly connecting to each instance, using the below code. (Thanks to berxblog for this idea.)
--Assumes you have elevated privileges, like DBA role or ALTER SYSTEM privilege.
create or replace function flush_shared_pool return varchar2 authid current_user as
begin
execute immediate 'alter system flush shared_pool';
return 'Done';
end;
/
select *
from table(gv$(cursor(
select instance_number, flush_shared_pool from v$instance
)));
INSTANCE_NUMBER FLUSH_SHARED_POOL
--------------- -----------------
1 Done
3 Done
2 Done
I partially disagree with #sstan - flushing the shared pool should be rare in production, but it may be relatively common in development. Flushing the shared pool and buffer cache can help imitate running queries "cold".
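For that kind of "cold" testing in development, the buffer cache is typically flushed alongside the shared pool (do not run this casually in production):

```sql
-- Remove cached SQL, PL/SQL, and dictionary data.
ALTER SYSTEM FLUSH SHARED_POOL;
-- Remove cached data blocks so the next run reads from disk.
ALTER SYSTEM FLUSH BUFFER_CACHE;
```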
What is the difference between the following statements?
ALTER SESSION FORCE PARALLEL QUERY;
ALTER SESSION ENABLE PARALLEL DDL;
ALTER SESSION DISABLE PARALLEL DML;
Which one is more suggested from an optimization point of view? Initially I was using DISABLE; later I tested with ENABLE, which performed better, and now FORCE is doing better still. Is there any chance that FORCE will backfire in some case?
Please refer to http://docs.oracle.com/cd/E11882_01/server.112/e41084/statements_2013.htm
PARALLEL DML | DDL | QUERY
The PARALLEL parameter determines whether all subsequent DML, DDL, or query statements in the session will be considered for parallel execution. This clause enables you to override the degree of parallelism of tables during the current session without changing the tables themselves. Uncommitted transactions must either be committed or rolled back prior to executing this clause for DML.
ENABLE Clause
Specify ENABLE to execute subsequent statements in the session in parallel. This is the default for DDL and query statements.
• DML: DML statements are executed in parallel mode if a parallel hint or a parallel clause is specified.
• DDL: DDL statements are executed in parallel mode if a parallel clause is specified.
• QUERY: Queries are executed in parallel mode if a parallel hint or a parallel clause is specified.
Restriction on the ENABLE clause: You cannot specify the optional PARALLEL integer with ENABLE.
DISABLE Clause
Specify DISABLE to execute subsequent statements in the session serially. This is the default for DML statements.
• DML: DML statements are executed serially.
• DDL: DDL statements are executed serially.
• QUERY: Queries are executed serially.
Restriction on the DISABLE clause: You cannot specify the optional PARALLEL integer with DISABLE.
FORCE Clause
FORCE forces parallel execution of subsequent statements in the session. If no parallel clause or hint is specified, then a default degree of parallelism is used. This clause overrides any parallel_clause specified in subsequent statements in the session but is overridden by a parallel hint.
• DML: Provided no parallel DML restrictions are violated, subsequent DML statements in the session are executed with the default degree of parallelism, unless a degree is specified in this clause.
• DDL: Subsequent DDL statements in the session are executed with the default degree of parallelism, unless a degree is specified in this clause. Resulting database objects will have associated with them the prevailing degree of parallelism. Specifying FORCE DDL automatically causes all tables created in this session to be created with a default level of parallelism. The effect is the same as if you had specified the parallel_clause (with the default degree) in the CREATE TABLE statement.
• QUERY: Subsequent queries are executed with the default degree of parallelism, unless a degree is specified in this clause.
PARALLEL integer: Specify an integer to explicitly specify a degree of parallelism:
• For FORCE DDL, the degree overrides any parallel clause in subsequent DDL statements.
• For FORCE DML and QUERY, the degree overrides the degree currently stored for the table in the data dictionary.
• A degree specified in a statement through a hint will override the degree being forced.
The following types of DML operations are not parallelized regardless of this clause:
• Operations on cluster tables
• Operations with embedded functions that either write or read database or package states
• Operations on tables with triggers that could fire
• Operations on tables or schema objects containing object types, or LONG or LOB data types
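As an illustration of the precedence rules above (BIG_TABLE is a placeholder table name): FORCE sets a session-wide default degree, but a statement-level hint still wins:

```sql
-- Session-wide default degree of 4 for queries.
ALTER SESSION FORCE PARALLEL QUERY PARALLEL 4;

-- Runs with DOP 4 (the forced default); no hint needed.
SELECT COUNT(*) FROM big_table;

-- A hint overrides the forced degree; this runs with DOP 8.
SELECT /*+ PARALLEL(t, 8) */ COUNT(*) FROM big_table t;
```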