How to configure a dynamic value for parallel sessions in Liquibase - Oracle

There is an Oracle database in our project, which we update with Liquibase. In our case we need to run an update query through Liquibase. This query takes a lot of time, so we enabled parallel sessions for DML statements. We are looking for a way to make the number of parallel sessions dynamic; in the query below you can see 15 as a hardcoded value. Initially we thought we would put this value in the liquibase.properties file and pick it up from there, but that is not possible in our case due to a deployment constraint. Please let us know if you have any suggestions for us.
ALTER SESSION ENABLE PARALLEL DML;
ALTER SESSION FORCE PARALLEL DML PARALLEL 15;

You can try using dynamic SQL for this.
Below is a sample; v_dop can be fetched from some configuration table (the table and column names below are placeholders):
declare
  v_dop number;
begin
  -- fetch the degree of parallelism from a configuration table (hypothetical names)
  select dop into v_dop from parallel_config where config_key = 'UPDATE_DOP';
  execute immediate 'alter session enable parallel dml';
  execute immediate 'alter session force parallel dml parallel ' || v_dop;
end;
/
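If the block has to run from a Liquibase changelog, one way is a formatted SQL changeset; a minimal sketch, assuming the same hypothetical parallel_config table (the author/id and attribute values are illustrative):
--liquibase formatted sql
--changeset someauthor:run-parallel-update splitStatements:false
declare
  v_dop number;
begin
  select dop into v_dop from parallel_config where config_key = 'UPDATE_DOP';
  execute immediate 'alter session enable parallel dml';
  execute immediate 'alter session force parallel dml parallel ' || v_dop;
  -- the long-running update itself would follow here, in the same session
end;
Here splitStatements:false keeps Liquibase from splitting the block on semicolons, so the whole anonymous block is sent to the database as one statement.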

Related

Parameter for setting up parallel processing in Oracle Thin JDBC Connect String

I would like to know if there is a connection parameter that I can use in the JDBC Thin Oracle connection URL to tell the Oracle DB that I want to use parallelism in processing the queries.
The application that should use this parameter generates statements at runtime and fires them against the database, so I can't update or optimize them. Nearly every query I run ends in a timeout, and the user on the other side gets an error message.
If I take the generated statements and send them with a /*+ parallel */ hint from SQL Developer, I get much better performance.
Maybe someone has a hint so that I can achieve better performance.
You could use a logon trigger to force parallel execution of all query statements in the session for which parallelization is possible. This would override any default parallelism property on individual objects.
CREATE OR REPLACE TRIGGER USER1.LOGON_TRG
AFTER LOGON ON SCHEMA
BEGIN
  -- force a DOP of 4 for every parallelizable query in the new session
  EXECUTE IMMEDIATE 'ALTER SESSION FORCE PARALLEL QUERY PARALLEL 4';
END;
/
https://docs.oracle.com/en/database/oracle/oracle-database/19/vldbg/parameters-parallel-exec.html#GUID-FEDED00B-57AF-4BB0-ACDB-73F43B71754A
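To confirm the trigger took effect in a new session, you can check the session's parallel flags (a quick sanity check; PQ_STATUS reads FORCED once FORCE PARALLEL QUERY is in place):
-- current session's parallel query / parallel DML status
select sid, pq_status, pdml_status
from   v$session
where  sid = sys_context('userenv', 'sid');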

Running PLSQL in parallel

Running a PL/SQL script to generate load
For some reasons (reproducing errors, ...) I would like to generate some load (with specific actions) in a PL/SQL script.
What I would like to do:
A) Insert 1,000,000 rows into Schema-A.Table-1
B) In a loop, and ideally in parallel (2 or 3 times):
1) read one row from Schema-A.Table-1 with locking
2) insert it into Schema-B.Table-2
3) delete that row from Schema-A.Table-1
Is there a way to run this B task in parallel inside a PL/SQL script when calling the script?
How would this look?
It's usually better to parallelize SQL statements inside a PL/SQL block, instead of trying to parallelize the entire PL/SQL block:
begin
  execute immediate 'alter session enable parallel dml';

  insert /*+ append parallel */ into schemaA.Table1 ...
  commit;

  insert /*+ append parallel */ into schemaB.Table2 ...
  commit;

  delete /*+ parallel */ from schemaA.Table1 where ...
  commit;

  dbms_stats.gather_table_stats('SCHEMAA', 'TABLE1', degree => 8);
  dbms_stats.gather_table_stats('SCHEMAB', 'TABLE2', degree => 8);
end;
/
Large parallel DML statements usually require less code and run faster than creating your own parallelism in PL/SQL. Here are a few things to look out for:
You must have Enterprise Edition, large tables, decent hardware, and a sane configuration to run parallel SQL.
Setting the DOP is difficult. Using the bare hint /*+ parallel */ lets Oracle decide, but you might want to experiment by specifying a number, such as /*+ parallel(8) */.
Direct-path writes (the append hint) can be significantly faster. But they lock the entire table and the new results won't be recoverable until after the next backup.
Check the execution plan to ensure that direct-path writes are used - look for the operation LOAD AS SELECT instead of LOAD TABLE CONVENTIONAL. Tuning parallel SQL statements is best done with Real-Time SQL Monitoring reports, generated with select dbms_sqltune.report_sql_monitor(sql_id => 'SQL_ID') from dual;
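For example, a quick way to verify the plan before running the real load (a sketch; the table names are the placeholders from the block above):
explain plan for
insert /*+ append parallel */ into schemaB.Table2
select * from schemaA.Table1;

-- LOAD AS SELECT in the output confirms direct-path writes
select * from table(dbms_xplan.display);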
You might want to read through the Parallel Execution Concepts chapter of the manual. Oracle parallelism can be tricky, but it can also make your processes run orders of magnitude faster if you're careful.
If the objective is a fast load, and parallelism is just an attempt to get that, then do this instead:
Create table newtemp as select from the old table, to create the table.
Then create table old_remaining as select the rows from the old table that do not exist in newtemp.
Then drop the old table and rename the new ones. These CREATE TABLE AS SELECT operations will use the parallel options at the DB level.
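A minimal sketch of that approach; the table names, key column, and filter criteria are hypothetical:
-- copy the rows to be moved (parallel, direct-path CTAS)
create table newtemp parallel 8 nologging as
select * from old_table where status = 'DONE';

-- keep only the rows that were not moved
create table old_remaining parallel 8 nologging as
select o.*
from   old_table o
where  not exists (select 1 from newtemp n where n.id = o.id);

-- swap the tables
drop table old_table;
alter table old_remaining rename to old_table;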

Tracing all SQL queries executed when the application fires an order

I need to collect all the SQL queries (SELECT, UPDATE, DELETE, INSERT) that the application uses when any order is processed through it.
If I can get all the SQL for at least 50 orders processed through the application, then I can check which SELECT, UPDATE, and DELETE statements are frequently in use and which tables the application touches most often.
From that I can conclude which tables are candidates for partitioning, and since I would have the complete SQL with the WHERE clauses, I can also work out which type of partitioning would suit each particular table.
It may be a hectic exercise, as there could be lots of SQL that the application uses, but it helps me understand the application, and after this exercise I will have a scrutiny report of my application's behavior against the database that later employees can use.
So far I have used the DBMS_ADVISOR package, which gave me some tables of my database to be partitioned, and when I checked the EXPLAIN PLAN of the SQL I used in DBMS_ADVISOR, it turned out that the tables DBMS_ADVISOR told me to partition were the ones getting full table scans in the EXPLAIN PLAN.
The thing is that I cannot partition the tables based on this information alone, as it is application-level partitioning, and my manager will not be convinced by this little information, so I have come up with the above plan.
I need to do this to find the tables where I can perform table partitioning and other performance-tuning work, such as creating indexes, since I can get the WHERE clauses with the filters. It is essentially database tuning, and I want to do it because it will help me grow my career in database development.
Please help me out with this scenario.
Will this query give me the required information?
select st.sql_text
from v$sqltext_with_newlines st, sys.v_$sql s
where st.hash_value = s.hash_value
and s.parsing_schema_name = 'NETSERVICOS2CM'
and s.module = 'JDBC THIN CLIENT'
order by st.hash_value, st.piece;
Tracing for non-DBA users ----
GRANT SELECT ON SYS.V_$SESSION TO USER;
GRANT SELECT ON SYS.V_$MYSTAT TO USER;
To get the SID and SERIAL#:
SELECT sid, serial# FROM SYS.V_$SESSION
WHERE SID = (SELECT DISTINCT SID FROM SYS.V_$MYSTAT);
Then, as a DBA user, execute this --
EXEC DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION (sid=>3002, serial#=>31833,sql_trace=> true);
OR
as a non-DBA user I am using --
ALTER SESSION SET SQL_TRACE = TRUE;
OR
EXEC DBMS_SESSION.set_sql_trace(sql_trace => TRUE);
Trigger to trace a session for a particular user ----
CREATE OR REPLACE TRIGGER ON_MY_SCHEMA_LOGIN
AFTER LOGON ON DATABASE
WHEN ( USER = 'NETSERVICOS1CM' )
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET TRACEFILE_IDENTIFIER = "net1cm"';
  EXECUTE IMMEDIATE 'alter session set statistics_level=ALL';
  EXECUTE IMMEDIATE 'alter session set events ''10046 trace name context forever, level 12''';
EXCEPTION
  WHEN OTHERS THEN
    NULL;
END;
/
After that, to stop the trace, I am using
ALTER SESSION SET EVENTS '10046 trace name context off';
ALTER SYSTEM SET EVENTS '10046 trace name context off';
As suggested by Derek.
After this you may have multiple trace files; to make a consolidated trace file we can use the TRCSESS utility --
trcsess output=net1cm_trcsess.trc module="JDBC Thin Client" *net1cm.trc
It will create a single trace file, net1cm_trcsess.trc, from all the trace files generated in my case (with trace file identifier net1cm).
Now we can use the TKPROF utility to generate a report in human-readable form, using a command like this, for example ---
tkprof net1cm_trcsess.trc OUTPUT=net1cm_trcsess.txt EXPLAIN=netservicos1cm/netservicos1 SYS=NO
Thanks
So here is my advice.
You can use several different traces for application context actions, such as INSERT, DELETE, UPDATE, SELECT, or even all actions.
Say you have a PL/SQL program run by an application, or an OCI call to the database. You would have this Oracle code at the module/stored-procedure level:
dbms_application_info.set_module(<module_name>,'execute');
before you execute the entire code. (After the BEGIN in the code).
or
dbms_application_info.set_module(<module_name>,'UPDATE');
before you do an update SQL statement.
To turn off application context, you would use (before the END;):
dbms_application_info.set_module(NULL,NULL);
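Putting it together, a hypothetical stored procedure instrumented this way (the procedure, table, and module names are made up for illustration):
create or replace procedure process_order(p_order_id number) as
begin
  -- register this code path so DBMS_MONITOR can target it by module/action
  dbms_application_info.set_module('ORDER_PROC', 'execute');

  update orders
  set    status = 'PROCESSED'
  where  order_id = p_order_id;

  -- clear the application context on the way out
  dbms_application_info.set_module(null, null);
end;
/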
Then, when you execute the module or run the update statement you would like to trace, make sure you run this before the module runs (and the corresponding disable call after it finishes):
execute DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE( -
service_name => '<service_name>', -
module_name => '<module_name>', -
action_name => DBMS_MONITOR.ALL_ACTIONS, -
waits => TRUE, -
binds => TRUE);
All actions would be traced and you would know exactly where the statement ran and what action was executed.
To turn it off:
execute DBMS_MONITOR.SERV_MOD_ACT_TRACE_DISABLE( -
service_name => '<service_name>', -
module_name => '<module_name>', -
action_name => DBMS_MONITOR.ALL_ACTIONS);
To do this at the session level, you would do the following, where, for example, 9 is the SID and 190 is the serial number (check the syntax):
execute DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(9,190,TRUE);
To turn it off:
execute DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(9,190,FALSE);
At the database level, you have to be very careful with this, because it will generate a trace for the entire database and can fill up the diagnostic directory on your Oracle database. Disclaimer: USE WITH CAUTION.
execute DBMS_MONITOR.DATABASE_TRACE_ENABLE(waits=>TRUE, binds=>TRUE, instance_name=>'<Instance_name>');
execute DBMS_MONITOR.DATABASE_TRACE_DISABLE(instance_name=>'<instance_name>');
You can leverage v$sqltext_with_newlines, V$SESSION, and v$session_longops. You can google these views and see if they are useful for your requirements.
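For instance, v$session_longops shows the progress of long-running operations; a sample query over that view:
-- long-running operations still in progress, with percent done
select sid, opname, target, sofar, totalwork,
       round(100 * sofar / totalwork, 1) as pct_done
from   v$session_longops
where  totalwork > 0
and    sofar <> totalwork;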

Does NLS_SORT=BINARY_CI and NLS_COMP=BINARY make sense?

I hope this is not a dumb question:
I inherited a number of Oracle stored procedures, triggers, ... One of the triggers was written as follows:
create or replace trigger TRN_NIS_LOGON after logon on database
begin
execute immediate 'ALTER SESSION SET NLS_SORT=BINARY_CI';
execute immediate 'ALTER SESSION SET NLS_COMP=LINGUISTIC';
end;
/
This caused issues. So, I read http://docs.oracle.com/cd/B19306_01/server.102/b14225/ch5lingsort.htm
Then, I was told that I should just change the trigger like so:
create or replace trigger TRN_NIS_LOGON after logon on database
begin
execute immediate 'ALTER SESSION SET NLS_SORT=BINARY_CI';
execute immediate 'ALTER SESSION SET NLS_COMP=BINARY';
end;
/
However, I do not think that makes sense. It seems to me that NLS_SORT=BINARY_CI/AI needs to be combined with NLS_COMP=LINGUISTIC and a corresponding linguistic index. The goal is to make Oracle behave like SQL Server (I also skimmed through this: https://hoopercharles.wordpress.com/2010/06/04/sql-experimenting-with-case-insensitive-searches/).
It seems to me that a trigger like the one above is a really bad idea: NLS_COMP and NLS_SORT should be set together, per session, only while working with tables that have linguistic indexes, and NLS_SORT=BINARY_CI without linguistic indexes will cause full table scans. Is this correct?
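For reference, the linguistic index that would pair with those session settings is a function-based index on NLSSORT; a sketch with hypothetical table and column names:
-- matches case-insensitive comparisons under NLS_SORT=BINARY_CI / NLS_COMP=LINGUISTIC
create index emp_name_ci on employees (nlssort(last_name, 'NLS_SORT=BINARY_CI'));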

Running a sql script on a slow network

I have a SQL script which creates my application's tables, sequences, triggers, etc., and inserts around 10k rows of data.
I am on a slow network, and when I run this script from my local machine it takes a long time to finish.
I am wondering if there is any support in SQL*Plus (or SQL Developer) to run this script on the server, so the entire script is first transported to the server, executed there, and then returns, say, a log file of the execution.
No, there is not. There are some things you can do that might make the data load go faster, such as using SQL*Loader if you are doing individual inserts, and increasing your commit interval. However, I would have to see the code to really help very much.
If you have access to the remote server on which the database is hosted, and you can execute sqlplus on said server, sure you can:
1. Log in or SSH (depending on the OS - Windows or *nix) to the server.
2. Create your SQL script (myscript.sql) over there.
3. Log in to SQL*Plus and execute the script using the command @myscript.sql
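To also get the log file the question asks for, SQL*Plus's SPOOL command captures the output on the server side; a minimal sketch (the log path is illustrative):
-- inside the server-side SQL*Plus session
SET ECHO ON
SPOOL /tmp/myscript.log
@myscript.sql
SPOOL OFF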
There is rarely a need to run these kinds of scripts on the server. A few simple changes to batch the commands can significantly improve performance. All of the changes below combine multiple statements, reducing the number of network round-trips. Some of them also decrease parsing time, which will significantly improve performance even if the scripts run on the server.
Combine INSERTs into a single statement
Replace individual inserts:
insert into some_table values(1);
insert into some_table values(2);
...
with combined inserts like this:
insert into some_table
select 1 from dual union all
select 2 from dual union all
...
Use PL/SQL blocks
Replace individual DDL:
create sequence sequence1;
create sequence sequence2;
with a PL/SQL block:
begin
execute immediate 'create sequence sequence1';
execute immediate 'create sequence sequence2';
end;
/
Use inline constraints
Combine DDL as much as possible. For example, use this statement:
create table some_table(a number not null);
Instead of this:
create table some_table(a number);
alter table some_table modify a not null;
