I have both DML and DDL statements in my procedure and have enabled parallel execution for both. I want to run them in parallel mode using parallel hints, but neither of them executes in parallel. Is this a limitation of using dynamic SQL?
For example:
DECLARE
v_parallel_degree NUMBER := 8;
BEGIN
EXECUTE IMMEDIATE 'ALTER SESSION FORCE PARALLEL DML PARALLEL ' || v_parallel_degree;
EXECUTE IMMEDIATE 'ALTER SESSION FORCE PARALLEL QUERY PARALLEL ' || v_parallel_degree;
EXECUTE IMMEDIATE 'ALTER SESSION FORCE PARALLEL DDL PARALLEL ' || v_parallel_degree;
EXECUTE IMMEDIATE 'INSERT /*+PARALLEL(DEFAULT)*/ INTO '|| p_target_tabname || ' NOLOGGING
SELECT /*+PARALLEL(dmf,DEFAULT)*/*
FROM ' || p_source_tabname ||' PARTITION('|| p_part_name ||') dmf';
EXECUTE IMMEDIATE 'CREATE UNIQUE INDEX idx_pk ON TAB_HIST
(COL1,COL2,COL3)
LOCAL
NOLOGGING PARALLEL ' || v_parallel_degree;
END;
I even tried the block below, but it is not working either.
v_sql := 'BEGIN
EXECUTE IMMEDIATE ''ALTER SESSION FORCE PARALLEL DML PARALLEL ' || v_parallel_degree ||''';
EXECUTE IMMEDIATE ''ALTER SESSION FORCE PARALLEL QUERY PARALLEL ' || v_parallel_degree ||''';
INSERT /*+PARALLEL(DEFAULT)*/ INTO '|| p_target_tabname || ' NOLOGGING
SELECT /*+PARALLEL(dmf,DEFAULT)*/*
FROM ' || p_source_tabname ||' PARTITION('|| p_part_name ||') dmf;
DBMS_OUTPUT.PUT_LINE(''Inserted '' || SQL%ROWCOUNT || '' Rows into Table- ' || p_target_tabname || ' Partition - ' || p_part_name || ''');
COMMIT;
END;';
EXECUTE IMMEDIATE v_sql;
Oracle Version -
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
PL/SQL Release 12.1.0.2.0 - Production
CORE 12.1.0.2.0 Production
Soon will be upgraded to 19c.
Any suggestions are appreciated.
TIA
Venkat
TLDR
Most probably you forgot to enable parallel DML.
ALTER SESSION ENABLE PARALLEL DML;
Additionally, if you force parallel execution you typically do not use parallel hints, and vice versa.
Sample Setup (11.2)
create table TAB_HIST (
col1 int,
col2 int,
col3 varchar2(4000))
PARTITION BY RANGE (col1)
interval(1000000)
(
partition p_init values less than (1000000)
);
create table TAB_SRC (
col1 int,
col2 int,
col3 varchar2(4000)
)
PARTITION BY RANGE (col1)
interval(1000000)
(
partition p_init values less than (1000000)
);
insert into tab_src
select rownum, rownum, rpad('x',1000,'y') from dual connect by level <= 100000;
commit;
Insert
You must enable parallel DML as the first step:
ALTER SESSION ENABLE PARALLEL DML;
Note that, alternatively, a hint can be used:
INSERT /*+ ENABLE_PARALLEL_DML */ …
Additionally, if you force parallel DML and QUERY, you typically do not use parallel hints. I'm hinting a direct-path insert with APPEND, which is often used in this situation.
DECLARE
v_parallel_degree NUMBER := 2;
BEGIN
EXECUTE IMMEDIATE 'ALTER SESSION FORCE PARALLEL DML PARALLEL ' || v_parallel_degree;
EXECUTE IMMEDIATE 'ALTER SESSION FORCE PARALLEL QUERY PARALLEL ' || v_parallel_degree;
EXECUTE IMMEDIATE 'INSERT /*+ APPEND */ INTO TAB_HIST
SELECT *
FROM TAB_SRC PARTITION(P_INIT)';
END;
/
How to check if the table was inserted in parallel?
The simplest way is to query the table before committing: if you get the error below, it was a parallel direct-path insert.
select count(*) from TAB_HIST;
ORA-12838: cannot read/modify an object after modifying it in parallel
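Another way to check (a suggestion of mine, not from the original answer) is to look at the session statistics in v$pq_sesstat:

```sql
-- Run in the same session, right after the statement in question.
-- LAST_QUERY is the count for the most recent statement,
-- SESSION_TOTAL is the cumulative count for the session.
SELECT statistic, last_query, session_total
FROM   v$pq_sesstat
WHERE  statistic IN ('DML Parallelized', 'Queries Parallelized');
```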
Index
If you specify a parallel degree in the create index statement you need not enable or force anything.
DECLARE
v_parallel_degree NUMBER := 2;
BEGIN
EXECUTE IMMEDIATE 'CREATE UNIQUE INDEX idx_pk ON TAB_HIST
(COL1,COL2,COL3)
LOCAL
NOLOGGING PARALLEL ' || v_parallel_degree;
END;
/
The check is as simple as looking at the degree in the data dictionary:
select DEGREE from user_indexes where table_name = 'TAB_HIST';
DEGREE
---------
2
Note that after creating an index in parallel you often want to reset the DOP to one. Otherwise some simple nested-loop queries may get confused and open a parallel query.
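For example, to reset the degree after the parallel build:

```sql
-- Reset the index DOP so ordinary queries do not start parallel slaves.
ALTER INDEX idx_pk NOPARALLEL;
-- equivalent: ALTER INDEX idx_pk PARALLEL 1;
```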
Is this a limitation of using the Dynamic SQL?
No.
These links may be helpful:
parallel DML : https://docs.oracle.com/database/121/VLDBG/GUID-1D5C8D6C-0A0E-4CDB-8B32-16EC3C856ACC.htm#VLDBG1431
restriction PDML: https://docs.oracle.com/database/121/VLDBG/GUID-6626C70C-876C-47A4-8C01-9B66574062D8.htm
parallel DDL: https://docs.oracle.com/database/121/VLDBG/GUID-41774038-773B-40A5-BDCD-AB16A189C035.htm#VLDBG1411
You can start with it: https://stackoverflow.com/a/67377464/429100
Then you can check a real execution plan (its Note section) and a Real-Time SQL Monitor (RTSM) report; they should show more information about the DOP that was used:
select /*+ no_monitor */
       dbms_sqltune.report_sql_monitor(sql_id => '&1', report_level => 'ALL', type => 'TEXT') sqlmon
from dual;
And, finally, you can trace parallel execution using the following command:
alter session set "_px_trace"="compilation","execution","messaging";
More info: "Tracing Parallel Execution with _px_trace (Doc ID 444164.1)"
Related
I am trying to understand the reason why I get the below error:
`ORA-04021: timeout occurred while waiting to lock object`
This error is thrown from a procedure while running the command alter table <<T_NAME>> truncate subpartition <<SUBPARTITION_NAME>>.
v_dyncursor_stmt := 'with obj as (select /*+ materialize */ data_object_id, subobject_name from user_objects where object_name = UPPER(''' ||
p_table_name ||
''') and object_type = ''TABLE SUBPARTITION'') select '||p_hint||' distinct subobject_name from ' ||
p_table_name || ' t, obj where data_object_id = DBMS_MView.PMarker(t.rowid) and ' || p_where;
/* log */
log_text(v_unit_name, 'INFO', 'Open cursor', v_dyncursor_stmt);
/* loop over partitions which needs to be truncated */
v_counter := 0;
open c_subpartitions for v_dyncursor_stmt;
loop
FETCH c_subpartitions
INTO v_subpartition_name;
EXIT WHEN c_subpartitions%NOTFOUND;
v_statement := 'alter table ' || p_table_name || ' truncate subpartition "' || v_subpartition_name || '"';
execStmt(v_statement);
end loop;
close c_subpartitions;
The code calls the above procedure twice, and the first attempt is successful; it truncates the subpartition fine. On the second attempt it fails. The execStmt procedure is given below; the error is thrown from the EXECUTE IMMEDIATE line.
procedure execStmt(p_statement IN VARCHAR2) IS
v_unit_name varchar2(1024) := 'execStmt';
v_simulate varchar2(256);
begin
v_simulate := utilities.get_parameter('PART_PURGE_SIMULATE', '0');
if (v_simulate = '1') then
log_text(v_unit_name, 'INFO', 'Statement skipped. (PART_PURGE_SIMULATE=1)',
p_statement);
else
/* log */
log_text(v_unit_name, 'INFO', 'Executing statement', p_statement);
EXECUTE IMMEDIATE p_statement;
end if;
end;
As this happens mostly over the weekend, I do not get a chance to inspect the lock tables to see what has locked the object, but I know for sure that it is a table which has a lot of inserts happening.
So my question is: can an insert operation on a table prevent the above DDL?
From the Oracle docs, I see that an insert acquires an SX lock, which is explained as below:
A row exclusive lock (RX), also called a subexclusive table lock (SX), indicates that the transaction holding the lock has updated table rows or issued SELECT ... FOR UPDATE. An SX lock allows other transactions to query, insert, update, delete, or lock rows concurrently in the same table. Therefore, SX locks allow multiple transactions to obtain simultaneous SX and SS locks for the same table.
This error happens because the partition you are trying to truncate is in use at that time. As you mentioned, insert statements are running at that time, and they can block the DDL operation.
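If the inserts cannot be paused, one option (my suggestion, not part of the original answer) is to let the DDL wait for the lock instead of failing immediately; since 11g the ddl_lock_timeout session parameter controls this:

```sql
-- Make DDL in this session wait up to 60 seconds for the object
-- lock before raising ORA-04021 / ORA-00054.
ALTER SESSION SET ddl_lock_timeout = 60;

-- Hypothetical names, matching the pattern from the question:
ALTER TABLE my_table TRUNCATE SUBPARTITION my_subpart;
```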
Could you please help me with a unique situation I am in? I am receiving "ORA-30511: invalid DDL operation in system triggers" when dropping sequences and procedures from a logoff trigger.
I need to delete users' tables, sequences and procedures before the logoff event happens. I write the object details into a DB_OBJECTS table on creation, using a separate trigger. Below is my logoff trigger; could you please tell me where I am going wrong? Dropping tables works fine in the code below; only dropping sequences and procedures gives me the "ORA-30511: invalid DDL operation in system triggers" error.
CREATE OR REPLACE TRIGGER DELETE_BEFORE_LOGOFF
BEFORE LOGOFF ON DATABASE
DECLARE
USER_ID NUMBER := SYS_CONTEXT('USERENV', 'SESSIONID');
BEGIN
FOR O IN (SELECT USER, OBJECT_NAME, OBJECT_TYPE
FROM DB_OBJECTS WHERE SID = USER_ID
AND USERNAME = USER AND SYSDATE > CREATED_DTTM) LOOP
IF O.OBJECT_TYPE = 'TABLE' THEN
EXECUTE IMMEDIATE 'DROP TABLE ' || O.USER || '.' || O.OBJECT_NAME || ' CASCADE CONSTRAINTS';
ELSIF O.OBJECT_TYPE = 'SEQUENCE' THEN
EXECUTE IMMEDIATE 'DROP SEQUENCE ' || O.USER || '.' || O.OBJECT_NAME;
ELSIF O.OBJECT_TYPE = 'PROCEDURE' THEN
EXECUTE IMMEDIATE 'DROP PROCEDURE ' || O.USER || '.' || O.OBJECT_NAME;
END IF;
END LOOP;
EXCEPTION WHEN NO_DATA_FOUND THEN NULL;
END;
/
That's a simple one.
Error code: ORA-30511
Description: invalid DDL operation in system triggers
Cause: An attempt was made to perform an invalid DDL operation in a system trigger. Most DDL operations currently are not supported in system triggers. The only currently supported DDL operations are table operations and ALTER/COMPILE operations.
Action: Remove invalid DDL operations in system triggers.
That's why only
Dropping tables is working fine
succeeded.
Therefore, you can't do that using trigger.
You asked (in a comment) how to drop these objects, then. Manually, as far as I can tell. Though, that's quite unusual - what if someone accidentally logs off? You'd drop everything they created. If you use that schema for educational purposes (for example, every student gets their own schema), then you could create a "clean-up" script you'd run once class is over. Something like this:
SET SERVEROUTPUT ON;
DECLARE
l_user VARCHAR2 (30) := 'SCOTT';
l_str VARCHAR2 (200);
BEGIN
IF USER = l_user
THEN
FOR cur_r IN (SELECT object_name, object_type
FROM user_objects
WHERE object_name NOT IN ('EMP',
'DEPT',
'BONUS',
'SALGRADE'))
LOOP
BEGIN
l_str :=
'drop '
|| cur_r.object_type
|| ' "'
|| cur_r.object_name
|| '"';
DBMS_OUTPUT.put_line (l_str);
EXECUTE IMMEDIATE l_str;
EXCEPTION
WHEN OTHERS
THEN
NULL;
END;
END LOOP;
END IF;
END;
/
PURGE RECYCLEBIN;
It is far from perfect; I use it to clean up the Scott schema I use to answer questions on various sites, so once it becomes a mess I run that PL/SQL code several times (because of possible foreign key constraints).
Another option is to keep a create user script (along with all grant statements) and, once class is over, drop the existing user and simply recreate it.
Or, if that user contains some pre-built tables, keep an export file (I mean the result of a Data Pump export) and import it after the user is dropped.
There are various options - I don't know whether I managed to guess correctly, but now you have something to think about.
I have a stored procedure that takes 3.5 seconds to execute.
My stored procedure is below:
CREATE OR REPLACE PROCEDURE ProcTest (columnNumber IN VARCHAR2,
TG OUT VARCHAR2)
IS
stmt VARCHAR2 (1000);
BEGIN
TG := 't' || TO_CHAR (SYSDATE, 'YYYYMMDDHH24MISS') || columnNumber;
stmt :=
'CREATE GLOBAL TEMPORARY TABLE ' || TG
|| ' ON COMMIT PRESERVE ROWS AS (SELECT * FROM USER1.Tbl WHERE CHARGINGPARTY='
|| columnNumber
|| ')';
EXECUTE IMMEDIATE stmt;
END;
I execute this part (CREATE GLOBAL TEMPORARY TABLE ' || TG || ' ON COMMIT PRESERVE ROWS AS (SELECT * FROM USER1.Tbl WHERE CHARGINGPARTY=' || columnNumber || ')') in SQL Developer, and it takes 0.2 s, but when I execute the stored procedure, it takes 3.2 s.
I created an index on my table (USER1.Tbl). When I ran the create-table query in SQL Developer before creating that index, it took 3.2 s.
My question is:
Does the stored procedure use indexes? How can I force it to use an index?
In general, indexes are used by the Oracle optimizer regardless of where the query is executed from (a stored procedure vs. a query run from SQL Developer).
Depending on whether statistics gathering is enabled (it is by default, I believe), the optimizer decides whether or not to use the index based on the statistics for the table in the query. (There are cases where querying the table directly is faster than using the index, usually when the number of records is low.)
Without knowing which query was executed first, there is also the possibility that the query result was held in the buffer cache (from the first execution via the stored procedure), so when you re-executed the query from SQL Developer, performance was much better simply because there was no need to access the disk.
If you want to direct the query to use your index, you can use a hint.
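For example, with an INDEX hint (the index name below is a placeholder for whatever index you created on CHARGINGPARTY):

```sql
-- Directs the optimizer to use the named index for table alias t.
SELECT /*+ INDEX(t idx_chargingparty) */ *
FROM   USER1.Tbl t
WHERE  t.chargingparty = :columnNumber;
```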
I am working with an Oracle 11g database, release 11.2.0.3.0 - 64 bit production
I have several defined packages, procedures, functions and data types. After numerous intermediate calculations, largely done using collections, arrays and other data structures, I ultimately need to create a database table dynamically to output my final results. For the purpose of this question, I have the following:
TYPE ids_t IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
benefit_ids ids_t;
--Lots of other code which successfully populates benefit_ids.
--benefit_ids has several million rows, and is used successfully
--as the input to the following function:
FUNCTION find_max_ids(in_ids in ids_t)
RETURN ids_t
IS
str_sql varchar2(200);
return_ids ids_t;
BEGIN
str_sql := 'SELECT max(b.benefit_id)
FROM TABLE(:1) a
JOIN benefits b ON b.benefit_id = a.column_value
GROUP BY b.benefit_id';
EXECUTE IMMEDIATE str_sql BULK COLLECT INTO return_ids USING in_ids;
RETURN return_ids;
END;
The above works fine and clearly demonstrates that it is possible to pass an array as a parameter to a dynamic sql function or procedure.
However, when I try using EXECUTE IMMEDIATE and USING to create a database table as my final output I run into problems:
PROCEDURE create_output_table(in_ids in ids_t, in_tbl_nme in varchar2)
AUTHID CURRENT_USER
IS
str_sql varchar2(4000);
BEGIN
str_sql := 'CREATE TABLE Final_Results AS (
SELECT a.client_id, a.benefit_id
FROM ' || in_tbl_nme || ' a
LEFT JOIN TABLE(:1) b on b.column_value = a.benefit_id
WHERE b.column_value is NOT NULL)';
EXECUTE IMMEDIATE str_sql USING IN in_ids;
END;
Rather unhelpfully, the only error message I receive back is ORA-00933: SQL command not properly ended. However, I can't see anything wrong with the syntax per se, though I suspect the problem is with how I am applying EXECUTE IMMEDIATE in this instance.
Any advice would be gratefully received.
The code you've shown doesn't get ORA-00933, but it still isn't valid:
create type ids_t is table of number
/
create table test_table (client_id number, benefit_id number)
/
insert into test_table values (1, 1)
/
declare
str_sql varchar2(4000);
in_tbl_nme varchar2(30) := 'TEST_TABLE';
in_ids ids_t := ids_t(1, 2, 3);
begin
str_sql := 'CREATE TABLE Final_Results AS (
SELECT a.client_id, a.benefit_id
FROM ' || in_tbl_nme || ' a
LEFT JOIN TABLE(:1) b on b.column_value = a.benefit_id
WHERE b.column_value is NOT NULL)';
EXECUTE IMMEDIATE str_sql USING IN in_ids;
end;
/
Error report -
ORA-22905: cannot access rows from a non-nested table item
That error doesn't look right; let's cast it to see if it's happier, even though it shouldn't be necessary:
declare
str_sql varchar2(4000);
in_tbl_nme varchar2(30) := 'TEST_TABLE';
in_ids ids_t := ids_t(1, 2, 3);
begin
str_sql := 'CREATE TABLE Final_Results AS (
SELECT a.client_id, a.benefit_id
FROM ' || in_tbl_nme || ' a
LEFT JOIN TABLE(CAST(:1 AS ids_t)) b on b.column_value = a.benefit_id
WHERE b.column_value is NOT NULL)';
EXECUTE IMMEDIATE str_sql USING IN in_ids;
end;
/
Error report -
ORA-01027: bind variables not allowed for data definition operations
That error is described in this article.
So you need to create and populate the table in two steps:
declare
str_sql varchar2(4000);
in_tbl_nme varchar2(30) := 'TEST_TABLE';
in_ids ids_t := ids_t(1, 2, 3);
begin
str_sql := 'CREATE TABLE Final_Results AS
SELECT a.client_id, a.benefit_id
FROM ' || in_tbl_nme || ' a
WHERE 1=0'; -- or anything that always evaluates to false
EXECUTE IMMEDIATE str_sql;
str_sql := 'INSERT INTO Final_Results (client_id, benefit_id)
SELECT a.client_id, a.benefit_id
FROM ' || in_tbl_nme || ' a
LEFT JOIN TABLE(CAST(:1 AS ids_t)) b on b.column_value = a.benefit_id
WHERE b.column_value is NOT NULL';
EXECUTE IMMEDIATE str_sql USING IN in_ids;
end;
/
PL/SQL procedure successfully completed.
select * from final_results;
CLIENT_ID BENEFIT_ID
---------- ----------
1 1
Creating a table on the fly isn't generally a good idea; aside from schema management and maintainability considerations, you have to be sure that only one session is calling the procedure and that the table doesn't already exist. If you have a process that does this work, uses the results and then drops the table then you still have to be sure it cannot be run simultaneously, and can be restarted if it fails part way through.
If all the work is done in the same session then you could create a (permanent) global temporary table instead, as a one-off schema set-up task. The insert to populate it would still have to be dynamic as in_table_nme isn't known, but it would be a bit of an improvement. (I'm not sure why your query in find_max_ids is dynamic though, unless you're also creating benefits dynamically). Or depending on the amount of data involved, you could use another collection type, instead of a table.
The data in a GTT is only visible to that session, and is destroyed when it ends. If that isn't appropriate then a normal table could be created once, which would be better than creating/dropping it dynamically. You still need to prevent multiple sessions running the process simultaneous in that case though, as they might not see the data they expect.
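As a sketch of that GTT variant (column types assumed from the test table above), the one-off set-up would be:

```sql
-- One-off schema set-up: the table definition is permanent, but each
-- session sees only its own rows, kept until the session ends.
CREATE GLOBAL TEMPORARY TABLE final_results (
  client_id  NUMBER,
  benefit_id NUMBER
) ON COMMIT PRESERVE ROWS;
```

The procedure would then only need the dynamic INSERT with the collection bind into this pre-created table, with no DDL at run time.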
I am looping through a list of tables and updating a list of columns in each table. Is it possible to execute the loop in parallel, i.e. update more than one table at a time?
FOR Table_rec IN Table_list_cur
LOOP
--Check if the table is partitioned
IF Check_if_partitioned (Table_rec.Table_name, Table_rec.Owner)
THEN
--If Yes, loop through each parition
EXECUTE IMMEDIATE
'Select partition_name from ALL_TAB_PARTITIONS where table_name = '''
|| Table_rec.Table_name
|| ''' and table_owner = '''
|| Table_rec.Owner
|| ''''
BULK COLLECT INTO L_part;
FOR I IN L_part.FIRST .. L_part.LAST
LOOP
--Update each parition
V_sql_stmt :=
'UPDATE /*+ PARALLEL(upd_tbl,4) */ '
|| Table_rec.Table_name
|| ' PARTITION ('
|| L_part (I)
|| ') upd_tbl'
|| ' SET '
|| V_sql_stmt_col_list;
DBMS_OUTPUT.Put_line ('V_sql_stmt' || V_sql_stmt);
EXECUTE IMMEDIATE V_sql_stmt;
END LOOP;
END IF;
END LOOP;
Not directly, no.
You could take the guts of your loop, factor that out into a stored procedure call, and then submit a series of jobs to do the actual processing that would run asynchronously. Using the dbms_job package so that the job submission is part of the transaction, that would look something like
CREATE OR REPLACE PROCEDURE do_update( p_owner IN VARCHAR2,
p_table_name IN VARCHAR2 )
AS
BEGIN
<<your dynamic SQL>>
END;
and then run the loop to submit the jobs
FOR Table_rec IN Table_list_cur
LOOP
--Check if the table is partitioned
IF Check_if_partitioned (Table_rec.Table_name, Table_rec.Owner)
THEN
dbms_job.submit( l_jobno,
'begin do_update( ''' || table_rec.owner || ''', ''' || table_rec.table_name || '''); end;' );
END IF;
END LOOP;
commit;
Once the commit runs, the individual table jobs will start running (how many will run is controlled by the job_queue_processes parameter) while the rest are queued up.
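You can watch the queued jobs (and any failures) in the user_jobs view; a row disappears once its job completes successfully:

```sql
-- WHAT is the submitted PL/SQL block; FAILURES counts unsuccessful
-- runs, and BROKEN = 'Y' means the job will no longer be retried.
SELECT job, what, failures, broken
FROM   user_jobs;
```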
Now, that said, your approach seems a bit off. First, it's almost never useful to specify a partition name explicitly. You almost certainly want to submit a single UPDATE statement, omit the partition name, and let Oracle do the updates to the various partitions in parallel. Running one update statement per partition rather defeats the purpose of partitioning. And if you really want 4 parallel threads for each partition, you probably don't want many of those updates running in parallel. The point of parallelism is that one statement can be allowed to consume a large fraction of the system's resources. If you really want, say 16 partition-level updates to be running simultaneously and each of those to run 4 slaves, it would make far more sense to let Oracle run 64 slaves for a single update (or whatever number of slaves you really want to devote to this particular task depending on how many resources you want to leave for everything else the system has to do).