How to see error logging done by Oracle's DBMS_PARALLEL_EXECUTE utility? - oracle

I'm running a procedure in parallel using Oracle's DBMS_PARALLEL_EXECUTE utility, which chunks the workload; but a few chunks are failing and I can't see any error logs. Is there a way to find out what went wrong?

Check this query. The last two columns, ERROR_CODE and ERROR_MESSAGE, record why each chunk failed.
SELECT *
FROM user_parallel_execute_chunks
WHERE task_name = '{task name}'
ORDER BY chunk_id
/
I'm not sure, but you can probably use the JOB_NAME column from that view and query the scheduler's views, user_scheduler_job_run_details and user_scheduler_job_log, for more information.
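Putting the two together, a single query can show each failed chunk alongside its scheduler run details (a sketch: 'MY_TASK' is a placeholder task name, and the join on JOB_NAME assumes the chunk's job name matches the scheduler's log, which may need adjusting on your version):

```sql
-- For each failed chunk, show the chunk-level error and the
-- scheduler-side details of the job that processed it.
SELECT c.chunk_id,
       c.status,
       c.error_code,
       c.error_message,            -- why this chunk failed
       d.status AS job_status,
       d.additional_info           -- scheduler-side error text, if any
FROM   user_parallel_execute_chunks c
       LEFT JOIN user_scheduler_job_run_details d
              ON d.job_name = c.job_name
WHERE  c.task_name = 'MY_TASK'
AND    c.status = 'PROCESSED_WITH_ERROR'
ORDER  BY c.chunk_id;
```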

Related

Submitted Oracle job using dbms_job.submit and it failed but I don't know where to look for an error message

We are initiating the rebuilding of many materialized views by using dbms_job.submit to execute a stored procedure that performs the rebuilding. However, I am having trouble figuring out how to determine whether a submitted job failed. The issue is that the job is failing but I cannot identify what the problem is. So, I am starting out with a simple example, which is probably failing on a permission issue, but I don't know where to look for the error message.
I have the following test procedure that I want to initiate using dbms_job.submit.
CREATE OR REPLACE PROCEDURE MYLANID.JUNKPROC
AS
  lv_msg varchar2(3000);
BEGIN
  INSERT INTO MYLANID.junk_log ( msg ) VALUES ('Hello World');
  commit;
EXCEPTION
  WHEN OTHERS THEN
    lv_msg := SUBSTR(sqlerrm, 1, 3000);
    INSERT INTO MYLANID.junk_log ( msg ) VALUES (lv_msg);
END;
/
Note that this table is used above:
CREATE TABLE MYLANID.JUNK_LOG (
  EVENT_TIME TIMESTAMP(6) DEFAULT systimestamp,
  MSG VARCHAR2(3000 BYTE));
To submit the above procedure as a job, I execute the following anonymous block.
declare
  l_jobid binary_integer;
BEGIN
  dbms_job.submit(job => l_jobid, what => 'BEGIN MYLANID.JUNKPROC; END;');
  DBMS_OUTPUT.PUT_LINE('l_jobid:' || l_jobid);
  commit;
END;
/
I then execute the following SQL...
select * from all_jobs;
...to see one record that represents my submitted job. When I re-query the all_jobs view, the record disappears within a few seconds, presumably when the job completes. All is happy so far. I would like to use the presence of a record in the all_jobs view to determine whether a submitted job is running or has failed. I expect to be able to tell that it failed when the ALL_JOBS.FAILURES column holds a non-null value greater than 0.
The problem, probably a permission issue, begins when I switch to another schema and replace all occurrences of "MYSCHEMA" in the above SQL with "ANOTHERSCHEMA", which I also have access to. For example, I create the following
Table: ANOTHERSCHEMA.JUNK_LOG
Procedure: ANOTHERSCHEMA.JUNKPROC
I am even able to execute the stored procedure successfully in a query window while logged in as MYSCHEMA:
EXEC ANOTHERSCHEMA.JUNKPROC
However, if I execute the following code to submit a job that involves running the same ANOTHERSCHEMA procedure but by submitting it as a JOB...
declare
  l_jobid binary_integer;
BEGIN
  dbms_job.submit(job => l_jobid, what => 'BEGIN ANOTHERSCHEMA.JUNKPROC; END;');
  DBMS_OUTPUT.PUT_LINE('l_jobid:' || l_jobid);
  commit;
END;
/
...then, when I query the jobs ALL_JOBS view...
select * from all_jobs;
...I see that the job has a positive value in the FAILURES column, and I have no record of what the error was. This failure count continues to increment over time as Oracle retries the job (up to 16 times, presumably) until it is marked BROKEN in the ALL_JOBS view.
But this is just a simple example, and I don't know where to look for the error message that would tell me why the job using the ANOTHERSCHEMA references failed.
Where do I look for the error log of failed jobs? I'm wondering if this is somewhere only the DBA can see...
Update:
The above is just a simple test example. In my actual real world situation, my log shows that the job was submitted but I never see anything in USER_JOBS or even DBA_JOBS, which should show everything. I don't understand why the dbms_job.submit procedure would return the job number of the submitted job indicating that it was submitted but no job is visible in the DBA_JOBS view! The job that I did submit should have taken a long time to run, so I don't expect that it completed faster than I could notice.
First off, you probably shouldn't be using dbms_job. That package has been superseded for some time by the dbms_scheduler package which is significantly more powerful and flexible. If you are using Oracle 19c or later, Oracle automatically migrates dbms_job jobs to dbms_scheduler.
If you are using an Oracle version prior to 19c and a dbms_job job fails, the error information is written to the database alert log. That tends to be a bit of a pain to query from SQL particularly if you're not a DBA. You can define an external table that reads the alert log to make it queryable. Assuming you're on 11g, there is a view, x$dbgalertext, that presents the alert log information in a way that you can query it but DBAs generally aren't going to rush to give users permission on x$ tables.
If you use dbms_scheduler instead (or if you are on 19c or later and your dbms_job jobs get converted to dbms_scheduler jobs), errors are written to dba_scheduler_job_run_details. dbms_scheduler in general gives you a lot more logging information than dbms_job does so you can see things like the history of successful runs without needing to add a bunch of instrumentation code to your procedures.
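For example, once a job runs under dbms_scheduler, its failures can be inspected with a query along these lines (a sketch: 'MY_JOB' is a placeholder job name, and you would query user_scheduler_job_run_details instead if you don't have access to the DBA views):

```sql
-- Most recent runs of a job, with the error stack for failed runs.
SELECT job_name,
       status,                 -- SUCCEEDED or FAILED
       error#,                 -- ORA- error number, 0 on success
       actual_start_date,
       run_duration,
       additional_info         -- full error text for failed runs
FROM   dba_scheduler_job_run_details
WHERE  job_name = 'MY_JOB'
ORDER  BY actual_start_date DESC;
```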

Parallel Hints in "Select Into" statement in PL/SQL

Parallel hints in normal DML/SQL queries in Oracle can be used in the following fashion:
select /*+ PARALLEL (A,2) */ * from table A ;
In similar fashion can we use parallel hints in PL/SQL for select into statements in oracle?
select /*+ PARALLEL(A,2) */ A.* BULK COLLECT INTO g_table_a from Table A ;
If I use the above syntax, is there any way to verify whether the select statement was actually executed in parallel?
Edit: assume g_table_a is a collection whose elements are of the table's %ROWTYPE.
If the statement takes a short elapsed time, you don't want to run it in parallel. Note that a query taking, say, 0.5 seconds in serial execution could take 2.5 seconds in parallel, as most of the overhead is in setting up the parallel execution.
So, if the query takes a long time, you have enough time to check V$SESSION (use gv$session in RAC) and see all sessions for the user running the query.
select * from gv$session where username = 'YOUR_USER';
For serial execution you see only one session; for parallel execution you see one coordinator plus additional sessions, up to twice the chosen degree of parallelism.
Alternatively, use v$px_session, which links the parallel worker sessions to the query coordinator.
select SID, SERIAL#, DEGREE, REQ_DEGREE
from v$px_session
where qcsid = <SID of the session running the parallel statement>;
Here you also see the requested degree of parallelism and the DOP actually used.
You can also easily check this from the explain plan of the query. In the case of PL/SQL, you can additionally trace the procedure and check the TKPROF output.
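As a sketch of that explain-plan check (the table name is a placeholder): run the statement, then display the actual plan of the last cursor with DBMS_XPLAN.DISPLAY_CURSOR; a parallel plan contains PX operations.

```sql
-- Execute the hinted statement first...
SELECT /*+ PARALLEL(a,2) */ COUNT(*) FROM some_table a;

-- ...then inspect the plan actually used by the last statement.
SELECT * FROM TABLE(dbms_xplan.display_cursor);
-- Parallel execution shows up as PX COORDINATOR / PX SEND / PX RECEIVE steps.
```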

Running PLSQL in parallel

Running PLSQL script to generate load
For various reasons (reproducing errors, ...) I would like to generate some load (with specific actions) in a PL/SQL script.
What I would like to do:
A) Insert 1,000,000 rows into Schema-A.Table-1
B) In a loop, ideally in parallel (2 or 3 concurrent runs):
1) read from Schema-A.Table-1 one row with locking
2) insert it to Schema-B.Table-2
3) delete row from Schema-A.Table-1
Is there a way to do task B in parallel in PL/SQL when calling the script?
How would this look?
It's usually better to parallelize SQL statements inside a PL/SQL block, instead of trying to parallelize the entire PL/SQL block:
begin
  execute immediate 'alter session enable parallel dml';

  insert /*+ append parallel */ into schemaA.Table1 ...
  commit;

  insert /*+ append parallel */ into schemaB.Table2 ...
  commit;

  delete /*+ parallel */ from schemaA.Table1 where ...
  commit;

  dbms_stats.gather_table_stats('SCHEMAA', 'TABLE1', degree => 8);
  dbms_stats.gather_table_stats('SCHEMAB', 'TABLE2', degree => 8);
end;
/
Large parallel DML statements usually require less code and run faster than creating your own parallelism in PL/SQL. Here are a few things to look out for:
You must have Enterprise Edition, large tables, decent hardware, and a sane configuration to run parallel SQL.
Setting the DOP is difficult. Using the hint /*+ parallel */ lets Oracle decide but you might want to play around with it by specifying a number, such as /*+ parallel(8) */.
Direct-path writes (the append hint) can be significantly faster. But they lock the entire table and the new results won't be recoverable until after the next backup.
Check the execution plan to ensure that direct-path writes are used - look for the operation LOAD AS SELECT instead of LOAD TABLE CONVENTIONAL. Tuning parallel SQL statements is best done with the Real-Time SQL Monitoring reports, generated with: select dbms_sqltune.report_sql_monitor(sql_id => 'SQL_ID') from dual;
You might want to read through the Parallel Execution Concepts chapter of the manual. Oracle parallelism can be tricky, but it can also make your processes run orders of magnitude faster if you're careful.
If the objective is a fast load, and parallel execution is just a means to that end, then:
1) Create table newtemp as a CREATE TABLE ... AS SELECT of the rows you want from the old table.
2) Create table old_remaining as a CREATE TABLE ... AS SELECT of the rows in the old table that do not exist in newtemp.
3) Drop the old table and rename the new tables into place.
These CTAS operations will use the parallel options set at the database level.
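A sketch of that approach (the table names, the join key id, and the filter are placeholders; PARALLEL and NOLOGGING on the CTAS let the load run in parallel with minimal redo):

```sql
-- Copy the rows to be moved.
CREATE TABLE newtemp PARALLEL 4 NOLOGGING AS
  SELECT * FROM old_table WHERE status = 'TO_MOVE';

-- Keep the rows that were not moved.
CREATE TABLE old_remaining PARALLEL 4 NOLOGGING AS
  SELECT o.*
  FROM   old_table o
  WHERE  NOT EXISTS (SELECT 1 FROM newtemp n WHERE n.id = o.id);

-- Swap the tables in.
DROP TABLE old_table;
ALTER TABLE old_remaining RENAME TO old_table;
```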

Running a sql script on a slow network

I have a SQL script which creates my application's tables, sequences, triggers, etc. and inserts around 10k rows of data.
I am on a slow network and when I run this script from my local machine it takes a long time to finish.
I am wondering if there is any support in SQL*Plus (or SQL Developer) to run this script on the server, so the entire script is first transported to the server, executed there, and then returns, say, a log file of the execution.
No, there is not. There are some things you can do that might make the data load go faster, such as using SQL*Loader if you are doing individual inserts, and increasing your commit interval. However, I would have to see the code to help very much.
If you have access to the remote server on which the database is hosted, and you have access to execute sqlplus on that server, sure you can:
Log in or SSH (depending on the OS - Windows or *nix) to the server.
Create your SQL script (myscript.sql) there.
Log in to SQL*Plus and execute the script with the command @myscript.sql
There is rarely a need to run these kinds of scripts on the server. A few simple changes to batch commands can significantly improve performance. All of the changes below combine multiple statements, reducing the number of network round-trips. Some of them also decrease parsing time, which will significantly improve performance even if the scripts run on the server.
Combine INSERTs into a single statement
Replace individual inserts:
insert into some_table values(1);
insert into some_table values(2);
...
with combined inserts like this:
insert into some_table
select 1 from dual union all
select 2 from dual union all
...
Use PL/SQL blocks
Replace individual DDL:
create sequence sequence1;
create sequence sequence2;
with a PL/SQL block:
begin
execute immediate 'create sequence sequence1';
execute immediate 'create sequence sequence2';
end;
/
Use inline constraints
Combine DDL as much as possible. For example, use this statement:
create table some_table(a number not null);
Instead of this:
create table some_table(a number);
alter table some_table modify a not null;

How to find the cost of a stored procedure in Oracle and optimize it

Can anybody let me know if there is any way to find out the cost of a stored procedure in Oracle? If there is no direct way, I would like to know of any substitutes.
The way I found the cost was to autotrace all the queries used in the stored procedure and then estimate the procedure's cost according to how frequently each query executes.
In addition to that I would like suggestions to optimize my stored procedure especially the query given below.
Logic of the procedure:
Below is the dynamic SQL query used as a cursor in my stored procedure. The cursor is opened and fetched inside a loop. I fetch the info into a varray, count the data, and then insert it into a table.
My objective is to find out the cost of the procedure as well as to optimize it.
SELECT DISTINCT acct_no
FROM raw
WHERE 1=1
AND code = ''' || code ||
''' AND qty < 0
AND acct_no
IN (SELECT acct_no FROM ' || table_name || ' WHERE counter =
(SELECT MAX(counter) FROM ' || table_name || '))
One of the best tools for analyzing SQL and PL/SQL performance is the native SQL trace.
Enable tracing in your session:
SQL> alter session set SQL_TRACE=TRUE;
Session altered
Run your procedure
Exit your session
Navigate to the server's udump directory and find your trace file (usually the latest one)
Run TKPROF on the trace file
This will produce a file containing a list of all statements with lots of information, including the number of times each was executed, its query plan and statistics. This is more detailed and precise than manually running the plan for each select.
If you want to optimize the performance of a procedure, you would usually sort the trace file by execution elapsed time (with sort=exeela) or fetch time, and try to optimize the queries that do the most work.
You can also make the trace file log wait events by using the following command at step 1:
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
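For reference, TKPROF is run from the operating-system shell against the trace file, something along these lines (the trace file name and the report name are placeholders that vary by instance and configuration):

```
tkprof orcl_ora_12345.trc report.txt sort=exeela sys=no
```

sys=no suppresses recursive SYS statements so the report focuses on your own SQL.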
The way to find out the cost (in execution of time) for a stored procedure is to employ a profiler. 11g introduced the Hierarchical Profiler which is highly neat. Find out more.
Prior to 11g there was only the DBMS_PROFILER, which is good enough, especially if your stored procedure doesn't use objects in other schemas. Find out more.
Trace is good for identifying poorly performing SQL. Profilers are good for identifying the cost of the PL/SQL elements of a stored procedure. If your procedure has some expensive computational elements which don't read or write tables, those won't show up in SQL trace.
Likewise, if you have a well-tuned SQL statement but use it badly, a profiler run is likely to be more help than trace. An example of what I mean is repeatedly executing the same SELECT statement inside a cursor loop; I know that's not quite what you're doing, but it's close enough.
Apparently the hierarchical profiler DBMS_HPROF is installed by default in 11g but a DBA has to grant some privileges to developers who want to use it. Find out more.
To install the DBMS_PROFILER in 10g (or earlier) a DBA has to run this script:
$ORACLE_HOME/rdbms/admin/proftab.sql
Be sure to get the reporting infrastructure as well:
$ORACLE_HOME/plsql/demo/profsum.sql
(The name or location of this script may vary in earlier versions).
The easy way is to execute the procedure and then query v$sql.
If you want a little tip to make your life easier (not just for packages), add a marker comment to the query inside the procedure, something like
select /* BIG DADDY */ * from dual;
and then query v$sql as follows
select * from v$sql where sql_text like '%BIG DADDY%';
The best way is definitely the one @Vincent Malgrat suggested.
good luck.
