How can I easily analyze an Oracle package's execution for performance issues?

I have a pl/sql package in an 11g R2 DB that has a fair number of related procedures & functions. I execute a top level function and it does a whole lot of stuff. It is currently processing < 10 rows per second. I would like to improve the performance, but I am not sure which of the routines to focus on.
This seems like a good job for Oracle's PL/SQL hierarchical profiler (DBMS_HPROF). Being the lazy person that I am, I was hoping that either SQLDeveloper v3.2.20.09 or Enterprise Manager would be able to do what I want with a point & click interface, but I cannot find this.
Is there an "easy" way to analyze the actual PL/SQL code in a procedure/package?
I have optimized the queries in the package using the "Top Activity" section of Enterprise Manager, looking at all of the queries that were taking a long time. Now all I have is a single "Execute PL/SQL" showing up, and I need to break that down into at least the procedures & functions that are called, and what percentage of the time they are taking.

The PL/SQL Hierarchical Profiler, documented here, is actually very easy to run, assuming that you have the necessary privileges.
Basically you just need to run a PL/SQL block like this:
begin
  -- write raw profiler data to test.trc in the PLSHPROF_DIR directory object
  dbms_hprof.start_profiling('PLSHPROF_DIR', 'test.trc');
  your_top_level_procedure;
  dbms_hprof.stop_profiling;
end;
The plshprof utility will generate HTML reports from the raw profiler output file (test.trc in the example above).
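For completeness, a hedged sketch of the one-time setup the profiler needs: a directory object it can write to and execute rights on DBMS_HPROF (the path and the grantee below are assumptions to adjust for your environment):

-- hedged sketch: run once as a privileged user; path and user name are assumptions
create or replace directory PLSHPROF_DIR as '/u01/app/oracle/hprof';
grant read, write on directory PLSHPROF_DIR to app_owner;
grant execute on dbms_hprof to app_owner;

From the OS you can then run something like plshprof -output test_report test.trc, which produces a set of linked HTML pages (starting at test_report.html) broken down by subprogram, with call counts and elapsed times.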

Related

Running a stored procedure in multi threaded way in oracle

I have a job which picks records from a cursor and, for each one, calls a stored procedure that processes it.
The stored procedure has multiple queries to process the records. In all, the procedure takes about 0.3 seconds to process a single record, but since the cursor contains more than 100k records the job takes hours to complete.
The queries in the stored procedure are all optimized.
I was thinking of making the procedure run in a multi-threaded way, as in Java and other programming languages.
Can it be done in Oracle? Or is there any other way I can reduce the run time of my job?
I agree with the comments regarding processing cursors in a loop. As Tom Kyte has often said, "row at a time [processing] is slow at a time"; Oracle performs best with set-based operations, and row-at-a-time operations usually have scalability issues (i.e. they are very susceptible to poor performance when things change on the DB, such as CPU capacity, workload, the number of records that need processing, the size of the underlying tables, ...).
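To illustrate the set-based point, here is a hedged sketch (the staging_rows and account_balances tables are made up for illustration) contrasting the row-at-a-time pattern with a single set-based statement:

-- row-at-a-time: one UPDATE round trip per record ("slow at a time")
begin
  for r in (select id, amount from staging_rows) loop
    update account_balances
       set balance = balance + r.amount
     where account_id = r.id;
  end loop;
end;

-- set-based equivalent: one statement processes the whole set
merge into account_balances b
using staging_rows s
on (b.account_id = s.id)
when matched then
  update set b.balance = b.balance + s.amount;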
You probably already know that Oracle has had a Java VM built into the DB engine since 8i, so you might be able to have Java code wrapped as PL/SQL, but this is not for the faint of heart [not saying that you are, just sayin'].
Before going to the trouble of re-writing your application, I would recommend the following tuning approach, as it may yield some actionable tunings [assumes diagnostics and tuning pack licenses; it won't remove the scalability issues but may lessen their impact]:
In Oracle 11g and above:
Find the top-level SQL id recorded in gv$active_session_history and dba_hist_active_sess_history for the call to the PL/SQL procedure (see the query sketch after this list).
Examine the wait events for the sql_id's under that top_level_sql_id (they tell you what the SQL is waiting on).
Run the tuning advisor on those sql_id's and check for any tuning recommendations. Sometimes, when SQL is already sub-second, getting it from hundredths of a second to thousandths of a second can have a big impact when it is called many times.
Run the ADDM report for the period when the procedure is running. Often you will find that heavy PL/SQL processes require an increase in PGA. Further, ADDM may advise other relevant actions (e.g. increase the SGA, session cached cursors, DB writer processes or log buffer, run the segment tuning advisor, ...).
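As a starting point for the first two steps, a hedged sketch of an ASH query (the one-hour window is arbitrary; once you know your top_level_sql_id, filter on it):

-- hedged sketch: sample counts per top-level SQL, child SQL and wait event
select top_level_sql_id, sql_id, nvl(event, 'ON CPU') as event, count(*) as samples
from   gv$active_session_history
where  sample_time > sysdate - 1/24
group  by top_level_sql_id, sql_id, nvl(event, 'ON CPU')
order  by samples desc;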

ORACLE - Which is better to generate a large resultset of records, View, SP, or Function

I have recently been working with an Oracle database to generate some reports. What I need is to get result sets of specific records (SELECT statements only), sometimes quite large, to be used for generating reports in Excel files.
At first the reports were queried through views, but some of them are slow (they have some complex subqueries). I was asked to improve the performance and also to fix some field mapping. I also want to tidy things up, because when I query against a view I must specifically call the right column names. I want to keep the data work in the database, with the web app just passing parameters and calling the right result set.
I'm new to Oracle, so which is better for this kind of task: a stored procedure or a function? Or under what conditions would a view be better?
Makes no difference whether you compile your SQL in a view, SP or function. It is the SQL itself that matters.
As long as you are able to meet your requirements with views, they are a good option. If you intend to break your queries up into multiple ones to achieve better performance, then you should go for stored procedures. If you decide to go for stored procedures, it is advisable to bundle them together in a package. If your problem is performance, there may not be a silver-bullet solution; you will have to work on your queries and your design.
If the problem is performance due to complex SELECT queries, you can consider tuning the queries. Often you will find queries written 15-20 years ago which do not use functionality and techniques introduced in more recent Oracle versions (even if the organization spent the big bucks to buy those versions - making it a waste of money). Honestly, that may be too big a task for you if you are new to Oracle; also, some slow queries may have been written by people just like you, many years ago - before they had a chance to learn a lot about Oracle and gain experience with it.
Another thing: if the reports don't need the absolutely current state of the underlying tables (for example, if "what was in the tables at the end of the business day yesterday" is acceptable), you can create a materialized view. The query itself will not run any faster than in a regular view, but it can run overnight (say), or every six hours, or whatever, so that the reporting steps that follow will not have to wait for the queries to complete. This is one of the main uses of materialized views.
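A hedged sketch of that idea (the materialized view name, the query and the 02:00 schedule are made up for illustration):

-- hedged sketch: precompute the heavy report query overnight
create materialized view report_base_mv
  build immediate
  refresh complete
  start with trunc(sysdate) + 1 + 2/24   -- first refresh tomorrow at 02:00
  next trunc(sysdate) + 1 + 2/24         -- then roughly every night at 02:00
as
  select o.customer_id, trunc(o.order_date) as order_day, sum(o.amount) as total_amount
  from   orders o                        -- "orders" stands in for your slow SELECT's sources
  group  by o.customer_id, trunc(o.order_date);

The report then selects from report_base_mv, so generating the Excel file no longer waits on the expensive query.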
Good luck!

How can I utilize Oracle bind variables with Delphi's SimpleDataSet?

I have an Oracle 9 database from which my Delphi 2006 application reads data into a TSimpleDataSet using a SQL statement like this one (in reality it is more complex, of course):
select * from myschema.mytable where ID in (1, 2, 4)
My application starts up and executes this query quite often during the course of the day, each time with different values in the in clause.
My DBAs have notified me that this is creating excessive load on the database server, as the query is re-parsed on every run. They suggested using bind variables instead of building the SQL statement on the client.
I am familiar with using parameterized queries in Delphi, but from the article linked to above I get the feeling that this is not exactly what bind variables are. Also, I would need these prepared statements to work across different runs of the application.
Is there a way to prepare a statement containing an in clause once in the database and then have it executed with different parameters passed in from a TSimpleDataSet so it won't need to be reparsed every time my application is run?
My answer is not directly related to Delphi, but this problem in general. Your problem is that of the variable-sized in-list. Tom Kyte of Oracle has some recommendations which you can use. Essentially, you are creating too many unique queries, causing the database to do a bunch of hard-parsing. This will spike the CPU consumption (and DBA blood pressures) unnecessarily.
By making your query static, it can get by with a soft-parse or perhaps no parse at all! The DB can then cache the execution plan, the DBAs can deal with a more "stable" SQL, and overall performance should be improved.
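One of the approaches Kyte describes is to bind the whole list as one string and split it inside the query, so the SQL text never changes. A hedged sketch (str2tbl is a hypothetical splitter function returning a SQL collection; Kyte shows several variants of it on asktom.oracle.com):

-- hedged sketch: one static statement, one bind variable, any number of IDs
select t.*
from   myschema.mytable t
where  t.id in (select to_number(column_value)
                from   table(str2tbl(:id_list)));   -- :id_list bound to '1,2,4'

Because the statement text is identical on every execution, Oracle can reuse the cached cursor instead of hard-parsing a new statement for each combination of IDs.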

Oracle PL/SQL stored procedure compiler vs PostgreSQL PGSQL stored procedure compiler

I noticed that Oracle takes a while to compile a stored procedure but it runs much faster than its PostgreSQL PGSQL counterpart.
With PostgreSQL, the same procedure (i.e. it's all in SQL-92 format with functions in the select and where clauses) takes less time to compile but longer to run.
Is there a metrics web site where I can find side by side Oracle vs PostgreSQL performance metrics on stored procedures, SQL parsing, views and ref_cursors?
Is the PostgreSQL PGSQL compiler lacking in optimization routines? What are the main differences between how Oracle handles stored procedures and how PostgreSQL handles them?
I'm thinking of writing a PGSQL function library that will allow PL/SQL code to be compiled and run in PGSQL (i.e. DECODE, NVL, GREATEST, TO_CHAR, TO_NUMBER, all PL/SQL functions), but I don't know if this is feasible.
This is a hard question to answer fully because it can run pretty deep, but I'll add my 2 cents towards a high-level answer. First off, I really like PostgreSQL and I really like Oracle. However, PL/SQL is a much deeper language/environment than PL/pgSQL, or really any other database engine's procedure language that I have ever run into. Oracle has used an optimizing compiler for PL/SQL since at least 10g, which most likely contributes to why it compiles more slowly in your use case. PL/SQL also has native compilation: you can compile the PL/SQL code down to machine code with a simple compiler directive. This is good for computation-intensive logic, not for SQL logic. My point is that Oracle has spent a lot of resources on making PL/SQL a real treat from both a functionality and a performance standpoint, and those are only two of many examples. What it sums up to is that PL/SQL is light years ahead of PL/pgSQL, and as nice as PL/pgSQL is, I don't imagine it catching up to Oracle any time soon.
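For reference, a hedged sketch of the native compilation directive mentioned above (11g syntax; the procedure name is hypothetical):

-- hedged sketch: compile one unit to native machine code (Oracle 11g)
alter session set plsql_code_type = 'NATIVE';
alter procedure app_owner.heavy_calc_proc compile;

-- confirm how the unit was compiled
select name, type, plsql_code_type
from   all_plsql_object_settings
where  name = 'HEAVY_CALC_PROC';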
I doubt you will find a side by side comparison, though I think this would be really nice. The effort to do so wouldn't probably be worth most people's time.
Also I wouldn't re-write what is already out there.
http://www.pgsql.cz/index.php/Oracle_functionality_(en)
There is no official benchmark for stored procedures like TPC for SQL (see tpc.org). I'm also not aware of any database application with specific PL/SQL and PL/pgSQL implementations which could be used as a benchmark.
Both languages are compiled and optimized into intermediate code and then run by an interpreter. PL/SQL can be compiled to machine code, which doesn't improve overall performance as much as one might think, because the interpreter is quite efficient and typical applications spend most of their time in the SQL engine and not in the procedural code (see AskTom article).
When procedural code calls SQL it happens just like in any other program, using statements and bind parameters for input and output. Oracle is able to keep these SQLs "prepared" which means that the cursors are ready to be used again without an additional SQL "soft parse" (usually a SQL "hard parse" happens only when the database runs a SQL for the first time since it was started).
When functions are used in select or where clauses, the database has to switch back and forth between the SQL and procedural engines. This can consume more processing time than the code itself.
A major difference between the two compilers is that Oracle maintains a dependency tree, which causes PL/SQL to automatically recompile when underlying objects are changed. Compilation errors are detected without actually running the code, which is not the case with Postgres (see Documentation).
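A hedged illustration of that difference: after an underlying table changes, Oracle marks dependent PL/SQL units invalid, and recompiling them surfaces any errors without the code ever being run.

-- hedged sketch: find units invalidated by a dependency change, then recompile one
select object_name, object_type, status
from   user_objects
where  status = 'INVALID';

alter procedure my_dependent_proc compile;    -- hypothetical unit name
show errors procedure my_dependent_proc       -- SQL*Plus command: lists compilation errors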
Is there a metrics web site where I can find side by side Oracle vs PostgreSQL performance metrics on stored procedures, SQL parsing, views and ref_cursors?
Publicly benchmarking Oracle performance is probably a breach of your licensing terms. If you're a corporate user, have legal check it out before you do anything like that.
I'm thinking of writing a PGSQL function library that will allow PL/SQL code to be compiled and run in PGSQL (i.e. DECODE, NVL, GREATEST, TO_CHAR, TO_NUMBER, all PL/SQL functions), but I don't know if this is feasible.
Do check the manual and make sure you need all of these, since some are already implemented as built-in functions. Also, I seem to recall an add-on pack of functions to improve Oracle compatibility.
Lastly, don't forget PostgreSQL offers choices on how to write functions. Simple ones can be written in pure SQL (and automatically inlined in some cases) and for heavy lifting, I prefer to use Perl (you might like Python or Java).
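To make that last point concrete, a hedged sketch of a pure-SQL function in PostgreSQL (the function is made up; in simple cases like this the planner can inline the function body into the calling query):

-- hedged sketch: a simple SQL-language function in PostgreSQL
create or replace function safe_div(n numeric, d numeric)
returns numeric
language sql
as $$
  select case when d = 0 then null else n / d end;
$$;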

How to performance test nested Sybase stored procedures?

I am looking for any tool which will allow the performance testing/tuning of Sybase nested stored procedures. There are many tools around and of course Sybase's own for performance tuning and testing SQL but none of these can handle nested stored procedures (i.e. a stored proc calling another stored proc). Does anyone have/know of such a tool?
I don't know anything that does this but I'd love to see a tool that does. What I tend to do in this situation is to try to establish which of the nested stored procedures is consuming the most resources or taking the longest and then performance tuning that procedure in isolation.
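When no dedicated tool is available, a hedged low-tech option in ASE is to wrap the top-level call in the statistics settings and read elapsed time and I/O per statement from the output (sp_A is a placeholder name):

-- hedged sketch: per-statement time and I/O for the outer call in Sybase ASE
set statistics io on
set statistics time on
go
exec sp_A
go
set statistics io off
set statistics time off
go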
I am not sure which Sybase DB you are using at the moment, but have you tried the Profiler in the Sybase Central tool? Right-click on the DB connection and then select Profile (or Profiler?).
I have used it in the past for single stored procedures, but I do not recall whether it works all the way down the calling chain from one SP to another. At the least, it should tell you how long each sub-SP called from your initial SP took, and then you can home in on the procedures needing the most time.
I hope that helps.
Cheers,
Kevin
Late to the game, but in Sybase you have the option of using "SET FMTONLY" to get around "SET NOEXEC" turning off the evaluation of the nested procedure.
For example, suppose sp_B is defined, and sp_A is defined and calls sp_B. Then the following will show the execution plans for both sp_A and sp_B:
SET SHOWPLAN ON
GO
SET FMTONLY ON
GO
sp_A
GO
See the Sybase writeup here... this worked in ASE 12.5 as well as ASE 15.
Using set showplan with noexec
