DECLARE
  v_run_no          binary_integer;
  v_run_time_in_sec number;
BEGIN
  dbms_profiler.start_profiler('Profiling my proc', 'Tom', v_run_no);
  schema.my_package.my_proc();
  dbms_profiler.stop_profiler();
  -- Calculate the TOTAL_TIME for each unit in the run by summing the TOTAL_TIME
  -- from each data record in the unit and writing it to the TOTAL_TIME column
  -- in the plsql_profiler_units table.
  dbms_profiler.rollup_run(v_run_no);
  -- Grab total run time for display
  SELECT r.RUN_TOTAL_TIME / 1000000000
  INTO   v_run_time_in_sec
  FROM   ucms_crim_conv.plsql_profiler_runs r
  WHERE  r.RUNID = v_run_no;
  dbms_output.put_line('Profiled as run ' || v_run_no || ' in ' || v_run_time_in_sec || ' total sec.');
END;
I've run this same script to profile a different procedure call, by changing ONLY the schema.my_package.my_proc(); line to call a different procedure, and everything went fine.
This time, after the script completes, I can see a row with a value in the TOTAL_TIME column for the run id in the plsql_profiler_runs table.
Previously I would also see two rows in plsql_profiler_units: one for the anonymous calling block and one for the procedure being profiled, with associated rows in plsql_profiler_data for each unit. This time, however, the only unit in plsql_profiler_units is the anonymous block, and the only plsql_profiler_data records for this run id belong to that calling block, not to the procedure itself, which is obviously what I'm interested in.
Why might this happen? What can I do to fix it and see data for my procedure?
According to the DBMS_PROFILER reference:
Security Model
The profiler only gathers data for units for which a user has CREATE
privilege; you cannot use the package to profile units for which
EXECUTE ONLY access has been granted. In general, if a user can debug
a unit, the same user can profile it. However, a unit can be profiled
whether or not it has been compiled DEBUG. Oracle advises that modules
that are being profiled should be compiled DEBUG, since this provides
additional information about the unit in the database.
I was able to reproduce your issue when the profiled procedure was created in another schema and my profiling user lacked either the CREATE ANY PROCEDURE or the ALTER ANY PROCEDURE privilege. When the user had both, everything ran just fine. You are probably referencing another schema's package and hitting the same issue.
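For reference, a minimal sketch of the grants this refers to, to be run by a DBA (the grantee name profiling_user is a placeholder); recompiling the target package with DEBUG, as the quoted documentation advises, can also help:
GRANT CREATE ANY PROCEDURE TO profiling_user;
GRANT ALTER ANY PROCEDURE TO profiling_user;
-- Optional, per the documentation quoted above:
ALTER PACKAGE schema.my_package COMPILE DEBUG;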
We are initiating the rebuild of many materialized views by using dbms_job.submit to execute a stored procedure that performs the rebuilding. However, I am having trouble figuring out how to determine whether a submitted job has failed. The job is failing, but I cannot identify what the issue is. So I am starting with a simple example, which is probably failing on a permission issue, but I don't know where to look for the error message.
I have the following test procedure that I want to initiate using dbms_job.submit.
CREATE OR REPLACE PROCEDURE MYLANID.JUNKPROC
AS
  lv_msg varchar2(3000);
BEGIN
  INSERT INTO MYLANID.junk_log ( msg ) VALUES ('Hello World');
  commit;
EXCEPTION
  WHEN OTHERS THEN
    lv_msg := SUBSTR(sqlerrm, 1, 3000);
    INSERT INTO MYLANID.junk_log ( msg ) VALUES (lv_msg);
END;
/
Note that this table is used above:
CREATE TABLE MYLANID.JUNK_LOG (
EVENT_TIME TIMESTAMP(6) DEFAULT systimestamp,
MSG VARCHAR2(3000 BYTE))
To submit the above procedure as a job, I execute the following anonymous block.
declare l_jobid binary_integer;
BEGIN
dbms_job.submit(job => l_jobid, what => 'BEGIN MYLANID.JUNKPROC; END;');
DBMS_OUTPUT.PUT_LINE('l_jobid:' || l_jobid);
commit;
END;
I then execute the following SQL...
select * from all_jobs;
...to see one record that represents my submitted job. When I re-query the all_jobs view, I see that this record quickly disappears from the view within a few seconds, presumably when the job completes. All is happy so far. I would like to use the presence of a record in the all_jobs view to determine whether a submitted job is running or has failed. I expect to be able to tell if it failed by looking at the ALL_JOBS.FAILURES column having a non null value > 0.
The problem, probably a permission issue, begins when I switch to another schema: in all of the SQL above, I replace "MYSCHEMA" with "ANOTHERSCHEMA", another schema that I also have access to. For example, I create the following:
Table: ANOTHERSCHEMA.JUNK_LOG
Procedure: ANOTHERSCHEMA.JUNKPROC
I am even able to execute the stored procedure successfully in a query window while logged in as MYSCHEMA:
EXEC ANOTHERSCHEMA.JUNKPROC
However, if I execute the following code to submit the same ANOTHERSCHEMA procedure as a job...
declare l_jobid binary_integer;
BEGIN
dbms_job.submit(job => l_jobid, what => 'BEGIN ANOTHERSCHEMA.JUNKPROC; END;');
DBMS_OUTPUT.PUT_LINE('l_jobid:' || l_jobid);
commit;
END;
...then, when I query the jobs ALL_JOBS view...
select * from all_jobs;
...I see that the job has a positive value in the FAILURES column but no record of what the error was. This failures count continues to increment gradually over time as Oracle presumably retries (up to 16 times?) until the job is marked BROKEN in the ALL_JOBS view.
But this is just a simple example, and I don't know where to look for the error message that would tell me why the job using the ANOTHERSCHEMA references failed.
Where Do I look for the error log of failed jobs? I'm wondering if this will be somewhere only the DBA can see...
Update:
The above is just a simple test example. In my actual real world situation, my log shows that the job was submitted but I never see anything in USER_JOBS or even DBA_JOBS, which should show everything. I don't understand why the dbms_job.submit procedure would return the job number of the submitted job indicating that it was submitted but no job is visible in the DBA_JOBS view! The job that I did submit should have taken a long time to run, so I don't expect that it completed faster than I could notice.
First off, you probably shouldn't be using dbms_job. That package has been superseded for some time by the dbms_scheduler package which is significantly more powerful and flexible. If you are using Oracle 19c or later, Oracle automatically migrates dbms_job jobs to dbms_scheduler.
If you are using an Oracle version prior to 19c and a dbms_job job fails, the error information is written to the database alert log. That tends to be a bit of a pain to query from SQL, particularly if you're not a DBA. You can define an external table that reads the alert log to make it queryable. Assuming you're on 11g, there is a view, x$dbgalertext, that presents the alert log information in a queryable way, but DBAs generally aren't going to rush to give users permission on x$ tables.
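If your DBA does grant access, a rough sketch of such a query might look like the following (the column names are as commonly documented for 11g; treat them as an assumption and verify against your version):
SELECT originating_timestamp, message_text
FROM   x$dbgalertext
WHERE  message_text LIKE '%ORA-%'
ORDER  BY originating_timestamp DESC;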
If you use dbms_scheduler instead (or if you are on 19c or later and your dbms_job jobs get converted to dbms_scheduler jobs), errors are written to dba_scheduler_job_run_details. dbms_scheduler in general gives you a lot more logging information than dbms_job does so you can see things like the history of successful runs without needing to add a bunch of instrumentation code to your procedures.
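As a hedged sketch of what that looks like (the job name is a placeholder and the job action simply mirrors the dbms_job call above):
BEGIN
  dbms_scheduler.create_job(
    job_name   => 'JUNKPROC_JOB',
    job_type   => 'PLSQL_BLOCK',
    job_action => 'BEGIN ANOTHERSCHEMA.JUNKPROC; END;',
    enabled    => TRUE);
END;
/

-- Then check the run history and any error text:
SELECT job_name, status, error#, additional_info
FROM   dba_scheduler_job_run_details
WHERE  job_name = 'JUNKPROC_JOB'
ORDER  BY log_date DESC;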
We know that it is possible to dynamically figure out the name of the procedure or package that is currently executing as explained here and here. This generally applies to statements being executed from other stored procedures (compiled) in the database.
The problem:
We have been trying to log all UPDATE activity on a specific column (called STATE) by placing a trigger on the table and invoking who_called_me from within the trigger. The purpose is that, per the application design, the STATE column can be updated by multiple pieces of code residing in the database, based on certain business conditions. In addition, the column can also be updated by the application, which is Hibernate-based, and when the update comes from a Hibernate query the who_called_me function returns nothing. There are multiple parts of the application that can UPDATE the STATE column based on certain conditions.
The who_called_me strategy is working well for us in cases where a stored procedure (residing in the database) issues the UPDATE statement: who_called_me successfully captures the corresponding owner, name, line number, etc. of the stored procedure. But when the UPDATE comes from Hibernate, the function captures no details.
Is there a way to capture which hibernate query UPDATEd the row through the trigger? Or is there any other way?
Note: The trigger code is similar to the answer posted on this question.
You can track the query with the ora_sql_txt function; e.g., this is the function I use for that:
-- getting sql code, which is calling the current event, as clob
function getEventSQLtext return clob
is
  sqllob   clob;
  sql_text ora_name_list_t;
  dummy    integer;
begin
  dummy := ora_sql_txt(sql_text);
  dbms_lob.createtemporary(sqllob, false);
  for i in 1 .. sql_text.count loop
    dbms_lob.writeappend(sqllob, length(sql_text(i)), sql_text(i));
  end loop;
  if dummy is null then null; end if; -- removing warning of non-used variable :)
  return sqllob;
end;
This will be the query generated by Hibernate, and it is the only information you can get, because issuing SQL should be the only thing Hibernate can do with the database.
It turns out the who_called_me approach works better for stored procedure calls, where the stack trace can point to exactly which line invoked a DML. In the case of Hibernate, it is possible that the code does not call a stored procedure but instead issues individual DMLs that get invoked based on certain conditions. As opposed to the other answer given by @simon, the ora_sql_txt function may only work in system event triggers, or I may be wrong, but either way it is not capable of capturing the SQL statement issued by Hibernate (tested: it does not work and returns a NULL value).
So at the end of the day, to find out what SQL Hibernate is using, DB trace files and Hibernate debug-level logs are the only way for now.
Can anybody let me know if there is any way to find out the cost of a stored procedure in Oracle? If there is no direct way, I would like to know of any substitutes.
The way I found the cost so far is to autotrace all the queries used in the stored procedure and then estimate the procedure's cost according to how frequently each query is executed.
In addition to that I would like suggestions to optimize my stored procedure especially the query given below.
Logic of the procedure:
Below is the dynamic SQL query used as a cursor in my stored procedure. The cursor is opened and fetched inside a loop; I fetch the data into a varray, count it, and then insert it into a table.
My objective is to find out the cost of the proc as well as optimize the sp.
SELECT DISTINCT acct_no
FROM raw
WHERE 1=1
AND code = ''' || code ||
''' AND qty < 0
AND acct_no
IN (SELECT acct_no FROM ' || table_name || ' WHERE counter =
(SELECT MAX(counter) FROM ' || table_name || '))
One of the best tools for analyzing SQL and PL/SQL performance is the native SQL trace.
1. Enable tracing in your session:
SQL> alter session set SQL_TRACE=TRUE;
Session altered
2. Run your procedure.
3. Exit your session.
4. Navigate to your server's udump directory and find your trace file (usually the latest).
5. Run tkprof on that trace file.
This will produce a file containing a list of all statements with lots of information, including the number of times each was executed, its query plan and statistics. This is more detailed and precise than manually running the plan for each select.
If you want to optimize a procedure's performance, you would usually sort the trace file by execute time (with sort=EXEELA) or by fetch time and try to optimize the queries that do the most work.
You can also make the trace file log wait events by using the following command at step 1:
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
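Putting the steps together, a hedged sketch of a complete trace session (the tracefile_identifier tag is optional but makes the trace file easy to find; my_proc is a placeholder for the procedure under investigation):
ALTER SESSION SET tracefile_identifier = 'my_proc_trace';
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
BEGIN
  my_proc;  -- placeholder for the procedure being traced
END;
/
ALTER SESSION SET EVENTS '10046 trace name context off';
-- Then, on the server:
--   tkprof <tracefile>.trc my_proc_report.txt sort=exeela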
The way to find out the cost (in execution time) of a stored procedure is to employ a profiler. 11g introduced the Hierarchical Profiler, which is highly neat. Find out more.
Prior to 11g there was only the DBMS_PROFILER, which is good enough, especially if your stored procedure doesn't use objects in other schemas. Find out more.
Trace is good for identifying poorly performing SQL. Profilers are good for identifying the cost of the PL/SQL elements of a stored proc. If your proc has some expensive computation elements which don't read or write to tables then that won't show up in SQL trace.
Likewise, if you have a well-tuned SQL statement but use it badly, a profiler run is likely to be more help than a trace. An example of what I mean is repeatedly executing the same SELECT statement inside a cursor loop: I know that's not quite what you're doing, but it's close enough.
Apparently the hierarchical profiler DBMS_HPROF is installed by default in 11g but a DBA has to grant some privileges to developers who want to use it. Find out more.
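Once those grants are in place, a hedged sketch of a hierarchical profiler run (the directory object PROF_DIR, the file name, and my_proc are placeholders a DBA would need to set up):
BEGIN
  dbms_hprof.start_profiling(location => 'PROF_DIR', filename => 'my_proc_hprof.trc');
  my_proc;  -- placeholder for the procedure being profiled
  dbms_hprof.stop_profiling;
END;
/
-- The raw output file can then be analyzed with the plshprof command-line tool
-- or with DBMS_HPROF.ANALYZE.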
To install the DBMS_PROFILER in 10g (or earlier) a DBA has to run this script:
$ORACLE_HOME/rdbms/admin/proftab.sql
Be sure to get the reporting infrastructure as well:
$ORACLE_HOME/plsql/demo/profsum.sql
(The name or location of this script may vary in earlier versions).
The easy way is to execute the procedure and then query v$sql.
If you want a little tip to make your life easier (not just for packages), add a marker comment to the query inside the procedure, something like
select /* BIG DADDY */ * from dual;
and then query v$sql as follows
select * from v$sql where sql_text like '%BIG DADDY%';
The best way is definitely the way @Vincent Malgrat suggested.
Good luck.
Here's a piece of Oracle code I'm trying to adapt. I've abbreviated all the details:
declare
begin
loop
--do stuff to populate a global temporary table. I'll call it 'TempTable'
end loop;
end;
/
Select * from TempTable
Right now, this query runs fine provided I run it in two steps. First I run the program at the top, then I run the select * to get the results.
Is it possible to combine the two pieces so that I can populate the global temp table and retrieve the results all in one step?
Thanks in advance!
Well, for me it depends on what you consider a step. You are running a PL/SQL block and a SQL command. I would rather put those into a file and run them in one command (if that can be called a single step for you)...
Something like
file.sql
begin
loop
--do stuff to populate a global temporary table. I'll call it 'TempTable'
end loop;
end;
/
Select *
from TempTable
/
And run it as:
prompt> sqlplus /@db @file.sql
If you give us more details like how you populate the GTT, perhaps we might find a way to do it in a single step.
Yes, but it's not trivial.
create global temporary table my_gtt
( ... )
on commit preserve rows;

create or replace type my_gtt_rowtype as object
( [columns definition] )
/

create or replace type my_gtt_tabtype as table of my_gtt_rowtype
/

create or replace function pipe_rows_from_gtt
  return my_gtt_tabtype
  pipelined
is
  pragma autonomous_transaction;
  type rc_type is ref cursor;
  my_rc         rc_type;
  my_output_rec my_gtt_rowtype := my_gtt_rowtype([nulls for each attribute]);
begin
  delete from my_gtt;
  insert into my_gtt ...
  commit;
  open my_rc for select * from my_gtt;
  loop
    fetch my_rc into my_output_rec.attribute1, my_output_rec.attribute2, etc;
    exit when my_rc%notfound;
    pipe row (my_output_rec);
  end loop;
  close my_rc;
  return;
end;
/
I don't know if the autonomous transaction pragma is required, but I suspect it is; otherwise it'll throw errors about functions performing DML.
We use code like this so that reporting engines which can't perform procedural logic can build the global temporary tables they use (and reuse) in various subreports.
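For completeness, a hedged usage sketch: the reporting tool then reads the GTT by selecting from the pipelined function in plain SQL.
SELECT *
FROM   TABLE(pipe_rows_from_gtt);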
In Oracle, an extra table to store intermediate results is very seldom needed. It might help to make things easier to understand, but when you are able to write SQL to fill the intermediate table, you can certainly query the rows in a single step without wasting time filling a GTT. If you are using PL/SQL to populate the GTT, see if this can be rewritten as pure SQL. That will almost certainly give you a performance benefit.
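As a hedged illustration of that suggestion (the table and filter here are placeholders), the populate-then-select pattern often collapses into a single query with a WITH clause:
WITH temp_rows AS (
  SELECT src.*             -- whatever logic used to fill TempTable
  FROM   source_table src
  WHERE  src.some_flag = 'Y'
)
SELECT *
FROM   temp_rows;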
I have a messy stored procedure which uses dynamic sql.
I can debug it at runtime by adding print @sql, where @sql is the string containing the dynamic SQL, right before I call execute (@sql).
Now, the multi-page stored procedure also creates dynamic tables and uses them in a query. I want to print those tables to the console right before I do an execute, so that I know exactly what the query is trying to do.
However, SQL Server 2008 does not like that. When I try
print #temp_table;
and try to compile the stored procedure, I get this error:
The name "#temp_table" is not permitted in this context. Valid expressions are constants, constant expressions, and (in some contexts) variables. Column names are not permitted.
Please help.
EDIT:
I am a noob when it comes to SQL. However, the following statement: select * from #tbl; does not print anything to the console when it is run non-interactively; the print statement works though.
The following statement has incorrect syntax: print select * from #tbl;. Is there a way for me to redirect the output of select to a file, if stdout is not an option?
Thanks.
When we use dynamic SQL, we start by having a debug input parameter in the stored procedure (make it the last one and give it a default value of 0, meaning "not in debug mode", so that it won't break existing code calling the proc).
Now when you run it in debug mode, you either print instead of execute, or you print and execute but always roll back at the end. If you need to see the data at various stages, the second approach is best. Then, before you roll back, you put the data you want to see into a table variable (this is important: it can't be a temp table). After the rollback, select from the table variable (which did not go out of scope with the rollback) and run your print statements to see the queries that were run.
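A hedged sketch of the rollback-plus-table-variable idea, with illustrative names (dbo.demo_target stands in for whatever the dynamic SQL really touches):
DECLARE @snapshot TABLE (id int, val varchar(50));

CREATE TABLE dbo.demo_target (id int, val varchar(50));   -- stand-in for the real table

BEGIN TRANSACTION;

    -- the "real" work, normally done by the dynamic SQL
    INSERT INTO dbo.demo_target VALUES (1, 'changed during the run');

    -- capture the intermediate state; table variables are not undone by ROLLBACK
    INSERT INTO @snapshot (id, val)
    SELECT id, val FROM dbo.demo_target;

ROLLBACK TRANSACTION;

SELECT * FROM @snapshot;          -- the captured state survives the rollback
SELECT * FROM dbo.demo_target;    -- empty again: the real change was rolled back

DROP TABLE dbo.demo_target;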
Debugging this way, the only way you'll get output is:
select * from #temp_table;
Alternatively, look into the debugging features built into SQL Server Management Studio. For example, this web page may help you
SQL Server Performance.com
You can print variables, but not tables. You can, however, SELECT from the #table.
Now, if the table is created, filled up, and modified in a single statement that is executed, then you can view the state of the table as it was before being modified, but the data will have changed since.
Of course, as soon as the dynamic SQL finishes, the #table is no longer available, so you're stuck!
To counter that, you can insert into a ##table (note the double hash marks) in your dynamic SQL along with the #table, and then query that ##table at the end of execution of the dynamic SQL.
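A hedged sketch of that ##table trick (names are illustrative):
DECLARE @sql nvarchar(max) = N'
    SELECT 1 AS id, ''demo'' AS val INTO #work;       -- local temp table, gone when the dynamic batch ends
    SELECT * INTO ##debug_snapshot FROM #work;        -- global temp table survives it
';
EXEC (@sql);

-- Back in the outer scope, #work is gone but the snapshot remains:
SELECT * FROM ##debug_snapshot;
DROP TABLE ##debug_snapshot;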
For as much as I hate cursors, give this a try:
SET NOCOUNT ON

CREATE TABLE #TempTable1
    (ColumnInt      int
    ,ColumnVarchar  varchar(50)
    ,ColumnDatetime datetime
    )

INSERT INTO #TempTable1 VALUES (1,'A',GETDATE())
INSERT INTO #TempTable1 VALUES (12345,'abcdefghijklmnop','1/1/2010')
INSERT INTO #TempTable1 VALUES (null,null,null)
INSERT INTO #TempTable1 VALUES (445454,null,getdate())

SET NOCOUNT OFF

DECLARE @F_ColumnInt      int
       ,@F_ColumnVarchar  varchar(50)
       ,@F_ColumnDatetime datetime

DECLARE CursorTempTable1 CURSOR FOR
    SELECT ColumnInt, ColumnVarchar, ColumnDatetime
    FROM #TempTable1
    ORDER BY ColumnInt
    FOR READ ONLY

--populate and allocate resources to the cursor
OPEN CursorTempTable1

PRINT '#TempTable1 contents:'
PRINT ' '+REPLICATE('-',20)
    +' '+REPLICATE('-',50)
    +' '+REPLICATE('-',23)

--process each row
WHILE 1=1
BEGIN
    FETCH NEXT FROM CursorTempTable1
    INTO @F_ColumnInt, @F_ColumnVarchar, @F_ColumnDatetime

    --finished fetching all rows?
    IF @@FETCH_STATUS <> 0
    BEGIN --YES, all done fetching
        --exit the loop
        BREAK
    END --IF finished fetching

    PRINT ' '+RIGHT( REPLICATE(' ',20) + COALESCE(CONVERT(varchar(20),@F_ColumnInt),'null') ,20)
        +' '+LEFT( COALESCE(@F_ColumnVarchar,'null') + REPLICATE(' ',50) ,50)
        +' '+LEFT( COALESCE(CONVERT(char(23),@F_ColumnDatetime,121),'null') + REPLICATE(' ',23) ,23)
END --WHILE

--close and free the cursor's resources
CLOSE CursorTempTable1
DEALLOCATE CursorTempTable1
OUTPUT:
#TempTable1 contents:
-------------------- -------------------------------------------------- -----------------------
null null null
1 A 2010-03-18 13:28:24.260
12345 abcdefghijklmnop 2010-01-01 00:00:00.000
445454 null 2010-03-18 13:28:24.260
If I knew that your temp table had a PK, I'd give a cursor-free loop example.