How to performance test nested Sybase stored procedures? - performance

I am looking for any tool which will allow performance testing/tuning of nested Sybase stored procedures. There are many tools around, including Sybase's own, for performance tuning and testing SQL, but none of these can handle nested stored procedures (i.e. a stored proc calling another stored proc). Does anyone have, or know of, such a tool?

I don't know of anything that does this, but I'd love to see a tool that does. What I tend to do in this situation is try to establish which of the nested stored procedures is consuming the most resources or taking the longest, and then tune that procedure in isolation.

I am not sure which Sybase DB you are using at the moment, but have you tried the Profiler in the Sybase Central tool? Right-click on the DB connection and then select Profile (or possibly Profiler, I don't recall the exact label).
I have used it in the past for single stored procedures, but I do not recall whether it works all the way down the calling chain from one SP to another. At the least it should tell you how long each sub-SP called from your initial SP took, and then you can home in on the procedures needing the most time.
I hope that helps.
Cheers,
Kevin

Late to the game, but in Sybase you have the option of using "SET FMTONLY" to get around "SET NOEXEC" turning off the evaluation of the nested procedure.
For example, suppose:
sp_B is defined
sp_A is defined and calls sp_B
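A minimal pair, purely for illustration (the bodies here are made up):
create procedure sp_B as
    select count(*) from sysobjects   -- stand-in body
GO
create procedure sp_A as
    exec sp_B                         -- the nested call
GO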
Then, the following will show the execution plans for both sp_A and sp_B
SET SHOWPLAN ON
GO
SET FMTONLY ON
GO
sp_A
GO
See the Sybase write-up here (this worked in ASE 12.5 as well as ASE 15):
Using set showplan with noexec

Related

Using Stored Procedures Vs Embedded Queries in Tibco BW

Today we got a new direction from management to use only stored procedures instead of SQL queries in Tibco BW. I'm new to Tibco and have worked on only a couple of projects. Can someone help me understand what added advantage using stored procedures will bring in Tibco? Also, every process might use 10 different queries, so if we add that many stored procedures, create indexes, and maintain them, is it worth it on the whole for 50+ processes? I'm having a hard time presenting advantages vs. disadvantages.
People are used to queries, and this is why they are often skeptical when it comes to stored procedures. Use your database's capabilities to their fullest; do not depend only on your back-end programming language. Database programming is very powerful, and stored procedures for complicated logic are usually faster than your typical query. Make sure you plan carefully what the stored procedure needs to do, and also the different scenarios within it. Once you get used to them, you will not want to go without them.
The secret is to maintain current and well-written documentation of your stored procedures.

Oracle Bind Query is very slow

I have an Oracle bind query that is extremely slow (about 2 minutes) when it executes in my C# program but runs very quickly in SQL Developer. It has two parameters that hit the table's index:
select t.Field1, t.Field2
from theTable t
where t.key1=:key1
and t.key2=:key2
Also, if I remove the bind variables and create dynamic SQL, it runs just like it does in SQL Developer.
Any suggestions?
BTW, I'm using ODP.NET.
If you are replacing the bind variables with static values in SQL Developer, then you're not really running the same test. Make sure you use the bind variables, and if it's also slow then you're just getting bitten by a bad cached execution plan. Updating the stats on that table should resolve it.
However, if you are actually using bind variables in SQL Developer, then keep reading. The TL;DR version is that the settings ODP.NET runs under can sometimes push the optimizer toward a slightly more pessimistic approach. Start with updating the stats, but have your DBA capture the execution plans under both scenarios and compare them to confirm.
I'm reposting my answer from here: https://stackoverflow.com/a/14712992/852208
I considered flagging yours as a duplicate, but your title is a little more concise, since it identifies that the query runs fast in SQL Developer. I'll welcome advice on handling this in another manner.
Adding something like the following to your config will send ODP.NET tracing info to a log file (unmanaged ODP.NET settings; the file path and trace level are just examples):
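<configuration>
  <oracle.dataaccess.client>
    <settings>
      <!-- example path; the trace file will be written here -->
      <add name="TraceFileName" value="c:\temp\odpnet.trc"/>
      <!-- TraceLevel is a bitmask; 7 enables the common categories -->
      <add name="TraceLevel" value="7"/>
    </settings>
  </oracle.dataaccess.client>
</configuration>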
This will probably only be helpful if you can find a large gap in time. Chances are rows are actually coming in, just at a slower pace.
Try adding "enlist=false" to your connection string. I don't consider this a solution, since it effectively disables distributed transactions, but it should help you isolate the issue. You can get a little more information from an Oracle forums post:
From an ODP perspective, all we can really point out is that the behavior occurs when OCI_ATR_EXTERNAL_NAME and OCI_ATR_INTERNAL_NAME are set on the underlying OCI connection (which is what happens when distrib tx support is enabled).
I'd guess what you're not seeing is that the execution plan is actually different between the ODP.NET call and the SQL Developer call (meaning the actual performance hit is occurring on the server). Have your DBA trace the connection and obtain execution plans from both the ODP.NET call and the call straight from SQL Developer (or with the enlist=false parameter).
If you confirm different execution plans, or if you want to take a preemptive shot in the dark, update the statistics on the related tables. In my case this corrected the issue, indicating that execution plan generation doesn't really follow different rules for the different types of connections, but that the cost analysis is just slightly more pessimistic when a distributed transaction might be involved. Query hints to force an execution plan are also an option, but only as a last resort.
Finally, it could be a network issue. If your ODP.NET install is using a fresh Oracle home (which I would expect unless you did some post-install configuring), then the tnsnames.ora could be different. Host names in tnsnames.ora might not be fully qualified, creating extra delays while resolving the server. I'd only expect the first attempt (and not subsequent attempts) to be slow in this case, so I don't think it's the issue, but I thought it should be mentioned.
Are the parameters bound to the correct data type in C#? Are the columns key1 and key2 numbers, but the parameters :key1 and :key2 are strings? If so, the query may return the correct results but will require implicit conversion. That implicit conversion is like using a function to_char(key1), which prevents an index from being used.
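A quick way to see whether a conversion is sneaking in (a sketch reusing the table and column names from the question; run it somewhere you can see plans, e.g. SQL*Plus or SQL Developer):
EXPLAIN PLAN FOR
  select t.Field1, t.Field2 from theTable t
  where t.key1 = :key1 and t.key2 = :key2;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);  -- expect an index range scan
-- what an implicit conversion effectively does: wrapping the column side
-- in a function means the index on the keys can no longer be used
EXPLAIN PLAN FOR
  select t.Field1, t.Field2 from theTable t
  where TO_CHAR(t.key1) = :key1 and TO_CHAR(t.key2) = :key2;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);  -- likely a full table scan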
Please also check the number of rows returned by the query. If the number is big, then possibly C# is fetching all rows while the other tool fetches only the first batch. Fetching all rows may require many more disk reads in that case, which is slower. To check this, try running the following in SQL Developer:
SELECT COUNT(*) FROM (
select t.Field1, t.Field2
from theTable t
where t.key1=:key1
and t.key2=:key2
)
The COUNT(*) wrapper forces all matching rows, and hence the maximum number of database blocks, to be read, so its runtime approximates a full fetch.
A nice tool in such cases is the tkprof utility, which shows the SQL execution plan; the plans may differ between the cases above (although they should not).
It is also possible that you have accidentally connected to different databases. In such cases it is useful to compare the results of the queries.
Since you are reporting "bind is slow", I assume you have checked the SQL without binds and it was fast. In 99% of cases, using binds makes things better. Please check whether the query with constants runs fast. If yes, then the problem may be an implicit conversion of the key1 or key2 column (e.g. t.key1 is a number and :key1 is a string).

How can I easily analyze an Oracle package's execution for performance issues?

I have a PL/SQL package in an 11g R2 DB that has a fair number of related procedures & functions. I execute a top-level function and it does a whole lot of stuff. It is currently processing < 10 rows per second. I would like to improve the performance, but I am not sure which of the routines to focus on.
This seems like a good job for Oracle's PL/SQL hierarchical profiler (DBMS_HPROF). Being the lazy person that I am, I was hoping that either SQL Developer v3.2.20.09 or Enterprise Manager would be able to do what I want with a point & click interface. I cannot find this.
Is there an "easy" way to analyze the actual PL/SQL code in a procedure/package?
I have optimized the queries in the package using the "Top Activity" section of Enterprise Manager, looking at all of the queries that were taking a long time. Now all I have is a single "Execute PL/SQL" showing up, and I need to break that down into at least the procedures & functions that are called, and what percentage of the time they are taking.
The PL/SQL Hierarchical Profiler, documented here, is actually very easy to run, assuming that you have the necessary privileges.
Basically you just need to run a PL/SQL block like this:
begin
  dbms_hprof.start_profiling('PLSHPROF_DIR', 'test.trc');
  your_top_level_procedure;
  dbms_hprof.stop_profiling;
end;
/
The plshprof utility will generate HTML reports from the raw profiler output file (test.trc in the example above).
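If you prefer to stay inside the database, the same trace can be loaded into the profiler tables and queried directly. A sketch, assuming the DBMSHP_* tables have been created (via $ORACLE_HOME/rdbms/admin/dbmshptab.sql) and the directory and file names from the example above:
declare
  l_runid number;
begin
  -- parse the raw trace and populate the DBMSHP_* tables
  l_runid := dbms_hprof.analyze('PLSHPROF_DIR', 'test.trc');
  dbms_output.put_line('runid = ' || l_runid);
end;
/
-- rank subprograms by their own (non-descendant) elapsed time
select function, function_elapsed_time, calls
  from dbmshp_function_info
 where runid = &runid
 order by function_elapsed_time desc;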

How can I utilize Oracle bind variables with Delphi's SimpleDataSet?

I have an Oracle 9 database from which my Delphi 2006 application reads data into a TSimpleDataSet using a SQL statement like this one (in reality it is more complex, of course):
select * from myschema.mytable where ID in (1, 2, 4)
My application starts up and executes this query quite often during the course of the day, each time with different values in the in clause.
My DBAs have notified me that this is creating excessive load on the database server, as the query is re-parsed on every run. They suggested using bind variables instead of building the SQL statement on the client.
I am familiar with using parameterized queries in Delphi, but from the article linked to above I get the feeling that is not exactly what bind variables are. Also, I would need these prepared statements to work across different runs of the application.
Is there a way to prepare a statement containing an in clause once in the database and then have it executed with different parameters passed in from a TSimpleDataSet so it won't need to be reparsed every time my application is run?
My answer is not directly related to Delphi, but this problem in general. Your problem is that of the variable-sized in-list. Tom Kyte of Oracle has some recommendations which you can use. Essentially, you are creating too many unique queries, causing the database to do a bunch of hard-parsing. This will spike the CPU consumption (and DBA blood pressures) unnecessarily.
By making your query static, it can get by with a soft-parse or perhaps no parse at all! The DB can then cache the execution plan, the DBAs can deal with a more "stable" SQL, and overall performance should be improved.
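A hedged sketch of the usual fix: bind the whole ID list as one string, so the statement text never changes, and split it inside the query. (The :idlist name is made up; REGEXP_SUBSTR needs 10g or later, so on an Oracle 9 database you would use a small PL/SQL splitter such as Tom Kyte's str2tbl to the same effect.)
select *
  from myschema.mytable
 where ID in (
       select to_number(regexp_substr(:idlist, '[^,]+', 1, level))
         from dual
       connect by regexp_substr(:idlist, '[^,]+', 1, level) is not null
     );
-- from Delphi, :idlist is a single string parameter, e.g. '1,2,4'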

Oracle PL/SQL stored procedure compiler vs PostgreSQL PGSQL stored procedure compiler

I noticed that Oracle takes a while to compile a stored procedure but it runs much faster than its PostgreSQL PGSQL counterpart.
With PostgreSQL, the same procedure (i.e. it's all in SQL-92 format with functions in the select and where clauses) takes less time to compile but longer to run.
Is there a metrics web site where I can find side by side Oracle vs PostgreSQL performance metrics on stored procedures, SQL parsing, views and ref_cursors?
Is the PostgreSQL PGSQL compiler lacking in optimization routines? What are the main differences between how Oracle handles stored procedures and how PostgreSQL handles them?
I'm thinking of writing a PGSQL function library that will allow PL/SQL code to be compiled and run in PGSQL (i.e. DECODE, NVL, GREATEST, TO_CHAR, TO_NUMBER, all PL/SQL functions), but I don't know if this is feasible.
This is a hard question to answer fully because it can run pretty deep, but I'll add my 2 cents towards a high-level answer. First off, I really like PostgreSQL and I really like Oracle. However, PL/SQL is a much deeper language/environment than PL/pgSQL, or really any other database engine's procedure language I have ever run into, for that matter. Oracle since at least 10g uses an optimizing compiler for PL/SQL, which most likely contributes to why it compiles more slowly in your use cases. PL/SQL also has native compilation: you can compile the PL/SQL code down to machine code with a simple compiler directive. This is good for computation-intensive logic, not for SQL logic. My point of all this is that Oracle has spent a lot of resources making PL/SQL a real treat from both a functionality and a performance standpoint, and I have only touched on two of many examples. It sums up to PL/SQL being light years ahead of PL/pgSQL, and as nice as PL/pgSQL is, I don't imagine it catching up to Oracle any time soon.
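For reference, switching a unit to native compilation is just a compiler setting (a sketch; my_proc is a placeholder name, and PLSQL_CODE_TYPE is the 10g/11g parameter):
-- compile one unit down to machine code instead of the interpreted form
alter procedure my_proc compile plsql_code_type = native;
-- or make it the default for everything compiled in this session
alter session set plsql_code_type = 'NATIVE';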
I doubt you will find a side by side comparison, though I think it would be really nice. The effort to do one probably wouldn't be worth most people's time.
Also I wouldn't re-write what is already out there.
http://www.pgsql.cz/index.php/Oracle_functionality_(en)
There is no official benchmark for stored procedures like TPC for SQL (see tpc.org). I'm also not aware of any database application with both PL/SQL and pgSQL implementations that could be used as a benchmark.
Both languages are compiled and optimized into intermediate code and then run by an interpreter. PL/SQL can be compiled to machine code, which doesn't improve overall performance as much as one might think, because the interpreter is quite efficient and typical applications spend most of their time in the SQL engine, not in the procedural code (see the AskTom article).
When procedural code calls SQL it happens just like in any other program, using statements and bind parameters for input and output. Oracle is able to keep these SQLs "prepared" which means that the cursors are ready to be used again without an additional SQL "soft parse" (usually a SQL "hard parse" happens only when the database runs a SQL for the first time since it was started).
When functions are used in select or where clauses, the database has to switch back and forth between the SQL and procedural engines. This can consume more processing time than the code itself.
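To see the cost, compare these two (hypothetical names; tax_band and employees are made up): every row the first WHERE clause tests forces a jump from the SQL engine into the PL/SQL engine and back, while the second keeps everything in SQL.
create or replace function tax_band(p_salary number) return number is
begin
  return case when p_salary > 50000 then 2 else 1 end;  -- trivial body
end;
/
-- one SQL-to-PL/SQL context switch per candidate row
select count(*) from employees where tax_band(salary) = 2;
-- the same logic inlined as plain SQL avoids the switches entirely
select count(*) from employees where (case when salary > 50000 then 2 else 1 end) = 2;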
A major difference between the two compilers is that Oracle maintains a dependency tree, which causes PL/SQL to automatically recompile when underlying objects are changed. Compilation errors are detected without actually running the code, which is not the case with Postgres (see the documentation).
Is there a metrics web site where I can find side by side Oracle vs PostgreSQL performance metrics on stored procedures, SQL parsing, views and ref_cursors?
Publicly benchmarking Oracle performance is probably a breach of your licensing terms. If you're a corporate user, make sure legal checks it out before you do anything like that.
I'm thinking of writing a PGSQL function library that will allow PL/SQL code to be compiled and run in PGSQL (i.e. DECODE, NVL, GREATEST, TO_CHAR, TO_NUMBER, all PL/SQL functions), but I don't know if this is feasible.
Do check the manual and make sure you need all of these, since some are already implemented as built-in functions. Also, I seem to recall an add-on pack of functions to improve Oracle compatibility.
Lastly, don't forget PostgreSQL offers choices on how to write functions. Simple ones can be written in pure SQL (and automatically inlined in some cases) and for heavy lifting, I prefer to use Perl (you might like Python or Java).
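As a sketch of that first option (written against stock PostgreSQL; the NVL name just mirrors the Oracle function, and COALESCE does the real work), a pure-SQL function like this is simple enough for the planner to inline into calling queries:
create or replace function nvl(anyelement, anyelement)
returns anyelement
language sql immutable
as $$ select coalesce($1, $2) $$;

select nvl(comment_text, 'n/a') from some_table;  -- some_table is made up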
