Background:
We have a database environment where views call views which call other views... The logic has become complex, and changes to the underlying views can now have a significant impact on the top-level view called by the client application.
While we document all the logic and figure out how to unwind everything, development continues and performance continues to degrade.
Currently I manually run an explain plan on a client query and dig into tuning it. This is a slow and tedious process, and changes may not be examined for ages.
Problem:
I want to generate a report that, for each SQL ID, lists changes in actual time, the discrepancy between estimated and actual rows, changes in buffers, and changes in reads, all compared against the average computed over the last month.
I would generally run the following script manually and examine it based just on that day's response.
ALTER SESSION SET statistics_level=all;
set linesize 256;
set pagesize 0;
set serveroutput off;
-- run the client query here, then display its actual plan statistics:
SELECT *
FROM   TABLE(DBMS_XPLAN.display_cursor(NULL, NULL, 'ALLSTATS LAST'));
What I am trying to do is automate that explain plan query and insert the statistics into a table. From there I can run a regression report to detect changes in performance, which can then alert the developers.
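For what it's worth, here is a rough sketch of the kind of capture I had in mind; the history table and its name are my own invention, and V$SQLSTATS is just one possible source (its counters are cumulative, so the regression report would work off deltas):

-- hypothetical history table, modeled on a subset of V$SQLSTATS columns
CREATE TABLE sql_stat_history AS
SELECT SYSDATE AS snap_time,
       sql_id, plan_hash_value, executions,
       elapsed_time, buffer_gets, disk_reads, rows_processed
FROM   v$sqlstats
WHERE  1 = 0;

-- run on a schedule (e.g. a daily DBMS_SCHEDULER job), then compare each
-- snapshot's deltas against the trailing 30-day average per sql_id
INSERT INTO sql_stat_history
SELECT SYSDATE, sql_id, plan_hash_value, executions,
       elapsed_time, buffer_gets, disk_reads, rows_processed
FROM   v$sqlstats;
COMMIT;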
I was thinking something like this would be common enough without having to resort to OEM. I can't find anything, though, so I wonder: is there a more common approach to this?
Oracle provides functionality for this with the Automatic Workload Repository (AWR): http://docs.oracle.com/cd/E11882_01/server.112/e16638/autostat.htm
It's an extra license on top of Enterprise Edition though, I believe. It ought to be usable in non-production environments without additional cost, but check with your Oracle sales rep.
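For example, assuming you do have that license, a 30-day per-SQL history can be pulled from the AWR views along these lines (the column choices are just a starting point):

SELECT s.sql_id,
       TRUNC(CAST(sn.begin_interval_time AS DATE)) AS snap_day,
       SUM(s.executions_delta)   AS executions,
       SUM(s.elapsed_time_delta) AS elapsed_us,
       SUM(s.buffer_gets_delta)  AS buffer_gets,
       SUM(s.disk_reads_delta)   AS disk_reads
FROM   dba_hist_sqlstat s
JOIN   dba_hist_snapshot sn
       ON  sn.snap_id = s.snap_id
       AND sn.dbid = s.dbid
       AND sn.instance_number = s.instance_number
WHERE  sn.begin_interval_time > SYSTIMESTAMP - INTERVAL '30' DAY
GROUP  BY s.sql_id, TRUNC(CAST(sn.begin_interval_time AS DATE))
ORDER  BY s.sql_id, snap_day;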
It sounds like you are on the road to re-inventing STATSPACK. Oracle still includes this in the database but no longer documents it, presumably because it's free, unlike AWR and ASH. You can still find the documentation in the 9i manual.
Active Session History (ASH) is what you are looking for
select * from v$active_session_history where sql_id = :yoursqlid
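For example, a rough sketch of how ASH can show where that statement spends its time (one sample is roughly one second of active time; note that querying ASH also requires the Diagnostics Pack license):

select sql_plan_hash_value,
       nvl(event, 'ON CPU') as activity,
       count(*)             as ash_samples
from   v$active_session_history
where  sql_id = :yoursqlid
group  by sql_plan_hash_value, nvl(event, 'ON CPU')
order  by ash_samples desc;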
Related
We were told by the DBA that our application causes trouble on the servers.
There are queries that start like the following:
SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring
optimizer_features_enable(default) no_parallel result_cache(snapshot=3600)
OPT_ESTIMATE(#"innerQuery", TABLE, "THIS_#21", SCALE_ROWS=0.0007347778778)
*/ SUM(C1) FROM ...
and they crash the server; we receive ORA-12537.
We are using NHibernate, but I am fairly sure those queries are not generated by our application. The queries have no meaning in our business logic; they are just some random joins. We don't have SQL trace rights, but in the logs the DBA gives us, those queries are executed under our module name.
I googled and found out that DS_SVC is a comment marking service queries that Oracle 12c uses for dynamic sampling.
Our queries are not exactly complex: a couple of left joins with a ROWNUM limit of 1000.
So the question is: can I say those DS_SVC queries are a problem on the DBA's side? If so, where can I find some docs to prove it?
Looks like a 12c bug. See if changing this helps; you can ask Oracle Support as well.
ALTER SESSION SET "_fix_control"='7452863:0';
https://www.pythian.com/blog/performance-problems-with-dynamic-statistics-in-oracle-12c/
The DYNAMIC_SAMPLING hint is used to let the CBO collect cardinality estimates at run time. It looks like the algorithm has changed in 12c and dynamic sampling is now triggered in a broader set of use cases. This behavior can be disabled at the statement, session or system level using the fix control for bug 7452863. For example: ALTER SESSION SET "_fix_control"='7452863:0';
Those queries are generated by the optimizer itself. The feature is called "dynamic sampling". Until 11g this was used by default only when there were no statistics on the tables.
Since 12c, dynamic sampling can also be triggered by another new feature, "adaptive execution plans", for example in situations where histograms are missing on columns.
Generally this is quite complex DBA stuff to deal with. There are various ways to tune adaptive execution plans or to disable them partially or completely.
The best you can do is contact Oracle Support.
We added the /*+ dynamic_sampling(0) */ hint to our queries. It helped; the exception is gone.
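For anyone hitting the same thing, the hint just goes into the main statement. A sketch with invented table names that mirrors the shape of our queries:

SELECT /*+ dynamic_sampling(0) */ o.id, o.status, c.name
FROM   orders o
LEFT JOIN customers c ON c.id = o.customer_id
WHERE  ROWNUM <= 1000;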
I have an Oracle bind query that is extremely slow (about 2 minutes) when it executes in my C# program but runs very quickly in SQL Developer. It has two parameters that hit the table's index:
select t.Field1, t.Field2
from theTable t
where t.key1=:key1
and t.key2=:key2
Also, if I remove the bind variables and create dynamic SQL, it runs just like it does in SQL Developer.
Any suggestions?
BTW, I'm using ODP.NET.
If you are replacing the bind variables with static values in SQL Developer, then you're not really running the same test. Make sure you use the bind variables, and if it's also slow then you're just getting bitten by a bad cached execution plan. Updating the stats on that table should resolve it.
However, if you are actually using bind variables in SQL Developer, then keep reading. The TL;DR version is that the settings ODP.NET runs under sometimes cause a slightly more pessimistic approach from the optimizer. Start by updating the stats, but have your DBA capture the execution plan under both scenarios and compare them to confirm.
I'm reposting my answer from here: https://stackoverflow.com/a/14712992/852208
I considered flagging yours as a duplicate, but your title is a little more concise since it identifies that the query runs fast in SQL Developer. I'd welcome advice on handling it another way.
Adding the following to your config will send ODP.NET tracing info to a log file:
This will probably only be helpful if you can find a large gap in time. Chances are rows are actually coming in, just at a slower pace.
Try adding "enlist=false" to your connection string. I don't consider this a solution since it effecitively disables distributed transactions but it should help you isolate the issue. You can get a little bit more information from an oracle forumns post:
From an ODP perspective, all we can really point out is that the
behavior occurs when OCI_ATR_EXTERNAL_NAME and OCI_ATR_INTERNAL_NAME
are set on the underlying OCI connection (which is what happens when
distrib tx support is enabled).
I'd guess what you're not seeing is that the execution plan is actually different between the ODP.NET call and the SQL Developer call (meaning the performance hit really is occurring on the server). Have your DBA trace the connection and obtain execution plans from both the ODP.NET call and the call made straight from SQL Developer (or with the enlist=false parameter).
If you confirm different execution plans, or if you want to take a preemptive shot in the dark, update the statistics on the related tables. In my case this corrected the issue, indicating that execution plan generation doesn't really follow different rules for the different types of connections, but that the cost analysis is just slightly more pessimistic when a distributed transaction might be involved. Query hints to force an execution plan are also an option, but only as a last resort.
Finally, it could be a network issue. If your ODP.NET install is using a fresh Oracle home (which I would expect, unless you did some post-install configuring) then the tnsnames.ora could be different. Host names in tnsnames.ora might not be fully qualified, creating more delays resolving the server. I'd only expect the first attempt (and not subsequent attempts) to be slow in this case, so I don't think it's the issue, but I thought it should be mentioned.
Are the parameters bound to the correct data types in C#? For example, if the columns key1 and key2 are VARCHAR2 but the parameters :key1 and :key2 are bound as numbers (or as Unicode strings against non-Unicode columns), the query may return the correct results but will require an implicit conversion. That implicit conversion effectively wraps the column in a function, like to_number(key1), which prevents the index from being used.
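One way to check this (a sketch; key1/key2 come from the question, and &sql_id is whatever SQL_ID the slow statement gets) is to look at the Predicate Information section of the actual plan and watch for conversion functions wrapped around the indexed columns:

-- run the slow query from the C# app, find its SQL_ID in v$sql, then:
SELECT * FROM TABLE(DBMS_XPLAN.display_cursor('&sql_id', NULL, 'TYPICAL'));
-- a predicate like TO_NUMBER("T"."KEY1")=:KEY1 or SYS_OP_C2C("T"."KEY1")=:KEY1
-- means an implicit conversion is being applied to the column, which
-- stops the index on KEY1 from being used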
Please also check the number of rows returned by the query. If the number is big, then possibly C# is fetching all rows while the other tool fetches only the first batch. Fetching all rows may require many more disk reads, which is slower. To check this, try running the following in SQL Developer:
SELECT COUNT(*) FROM (
select t.Field1, t.Field2
from theTable t
where t.key1=:key1
and t.key2=:key2
)
The above query forces all rows to be read, so it touches the maximum number of database blocks.
A nice tool in such cases is the tkprof utility, which shows the actual SQL execution plan; the plans may differ between the cases above (although they should not).
It is also possible that you have accidentally connected to two different databases. In such a case it is worth comparing the results of the queries.
Since you are saying "bind is slow", I assume you have checked the SQL without binds and it was fast. In 99% of cases, using binds makes things better. Please check whether the query with constants runs fast. If yes, then the problem may be an implicit conversion on the key1 or key2 column (e.g. a datatype mismatch between t.key1 and :key1).
We had a performance issue in our production environment.
We identified that Oracle was executing queries using an index which is not the correct one.
The queries have all the columns of the primary key (and nothing else) in their WHERE clause.
After rebuilding the index and gathering statistics, Oracle started using PK_INDEX, and the execution plan showed an INDEX UNIQUE SCAN.
It worked fine for a while and then Oracle started using the wrong index again. The index it uses now comprises 2 columns, of which only 1 appears in the WHERE clause of the query. The execution plan now shows an INDEX RANGE SCAN and the system is very slow.
Please let me know how we could get to the root of this issue.
Try gathering stats again. If you then get the expected execution plan, it means that the changes made to the table since the last stats gathering led Oracle to think the less favorable execution plan was better.
So your question here is really "How can I maintain plan stability?"
You have several options:
Use hints in your query to indicate the exact access path (a sketch follows below).
Use stored outlines.
I personally don't like these two approaches, because if your data changes in the future in such a way that the execution plan should change, you'll get lousy performance.
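For illustration, a minimal sketch of the hint approach (option 1); PK_INDEX is the index name from the question, while the table and column names are placeholders:

SELECT /*+ INDEX(t PK_INDEX) */ t.*
FROM   the_table t
WHERE  t.pk_col1 = :b1    -- all primary key columns in the WHERE clause,
AND    t.pk_col2 = :b2;   -- so an INDEX UNIQUE SCAN is expected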
So the third option (and my personal favorite) is to enable periodic statistics gathering. Oracle knows how to spot the changes and incrementally update the relevant stats.
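As a concrete example of option 3, a sketch of gathering statistics for the affected table on a schedule (the owner and table names are placeholders; in recent Oracle versions the built-in automatic statistics collection job may already cover this):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'APP_OWNER',   -- placeholder schema
    tabname => 'THE_TABLE',   -- placeholder table
    cascade => TRUE           -- also refresh the index statistics
  );
END;
/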
I have a stored procedure that returns about 50,000 records in 10 seconds, using at most 2 cores, in SSMS. The SSRS report using the stored procedure was taking 20 minutes and would max out the processor on an 8-core server for the entire time. The report was relatively simple (i.e. no graphs or calculations). The report itself did not appear to be the issue, as I wrote the 50K rows to a temp table and the report could display that data in a few seconds. I tried many different ideas for testing, altering the stored procedure each time but keeping the original code in a separate window to revert back to. After one ALTER of the stored procedure, going back to the original code, the report and server utilization started running fast, comparable to the performance of the stored procedure alone. Everything is fine for now, but I would like to get to the bottom of what caused this in case it happens again. Any ideas?
I'd start with a SQL Profiler trace of both the stored procedure when you execute it normally, and then the same SP when it's called by SSRS. Make sure you include the execution plans involved, so you can see if it's making some bad decisions (though that seems unlikely - the SQL Server should execute an optimal - or at least consistent - plan regardless of the query's source).
We used to have cases where Business Objects would execute stored procs dozens of times for no apparent reason, and it occasionally led to horrible performance, though I've never seen that same behavior with SSRS. It may be somewhere to start, though. You'll also see the execution begin/end times - that will make it clear whether it's the database layer that's hanging up, or whether SQL Server hands back the data in 10 seconds and then it's the SSRS service that's choking somewhere.
The primary solution to speeding up SSRS reports is to cache the reports. If one does this (either by preloading the cache at 7:30 am, for instance, or by caching the reports on first hit), one will find massive gains in load speed.
You may also find that monthly restarts of the SSRS application domain resolve your issue.
Please note that I do this daily and professionally and am not simply waxing poetic about SSRS.
Caching in SSRS
http://msdn.microsoft.com/en-us/library/ms155927.aspx
Pre-loading the Cache
http://msdn.microsoft.com/en-us/library/ms155876.aspx
If you do not like the initial reports taking long and your data is static (i.e. a daily general ledger or the like, where the data changes little over the day), you may increase the cache lifespan.
Finally, you may also opt for business managers to instead receive these reports via email subscriptions, which will send them a point-in-time Excel report that they may find easier and more systematic.
You can also use parameters in SSRS to allow for easy filtering by the user and faster queries. In the query builder, type IN(@SSN) under the Filter column for the field you wish to parameterize; you will then find the parameter created in the Parameters folder just above Data Sources in the upper left of your BIDS GUI.
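For example (a sketch with invented table and column names; only the @SSN parameter comes from the description above), the dataset query behind such a filter ends up looking like this:

SELECT p.SSN, p.FirstName, p.LastName
FROM   dbo.Person p
WHERE  p.SSN IN (@SSN)   -- @SSN becomes a report parameter in SSRS

If the parameter is marked as multi-valued in the report, SSRS expands the list for the IN clause, so users can pick several values at once.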
(If you do not see the data source section in SSRS, hit CTRL+ALT+D.)
See a nearly identical question here: Performance Issues with SSRS
I am stress testing a database table.
I am looking for any software that can connect to my database and show me some metrics, like the number of rows in a table, time for inserts, inserts per unit of time, table fragmentation (logical/physical), etc.
It would be great if the reporting tool could do the following:
1] Reports in real time, or at least at some interval, so that I do not have to wait for the test to finish to get a first look at the data
2] Lets me do stuff with the data later, like getting the 99.99th percentile, averages, etc.
3] Is mostly freely available :)
Does anyone have any suggestions for something I can use with my Oracle table? Any pointers would be great.
I can actually write scripts to log stuff like select count(*) etc., but then I would have to spend a lot of time parsing and massaging the data for reporting rather than working on the tests.
I think something intelligent might already be out there?
Thanks
Edit:
I am looking at a piece of design for a new architecture. The tests are "comparison" tests for different designs, so as long as I run them on the same hardware, same schema, etc., they are comparable to some granularity. I want to monitor index fragmentation, response times, etc. If you think there are other things that can change, please let me know. I am trying to roll the table back to a particular state (basically truncate) for each new iteration of the test.
First, Oracle has built-in functionality for telling you the number of rows in a table (either use count(*) or search 'gather statistics oracle' for another option).
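For example, a quick sketch of both options (MY_TEST_TABLE is a placeholder name):

-- exact current count
SELECT COUNT(*) FROM my_test_table;

-- or gather statistics and read the optimizer's stored row count
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'MY_TEST_TABLE');
SELECT num_rows, blocks, last_analyzed
FROM   user_tables
WHERE  table_name = 'MY_TEST_TABLE';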
But "stress testing a table" sounds to me like you're going down the wrong path. Most of the metrics you're mentioning ("time for inserts , inserts/time, table fragmentation[logical/physical] etc") are highly dependent on many factors:
what OS Oracle's running on
how the OS is tuned (i.e. other services running)
how the specific Oracle instance is configured
what underlying storage architecture Oracle's using (and how tablespaces are configured)
what other queries are being executed in the database at the exact same time as your test
But NONE of them would be related to the table design itself.
Now, if you're wondering if your normalized (or de-normalized) table schema is hurting your application, that's another matter. As is performance being degraded by improper/unneeded/missing indexes, triggers, or a host of other problems.
But if you really want an app that will give you real-time monitoring, check out Quest Software's Spotlight on Oracle, though it's definitely not free.
Just to add to the other comments, I believe what you really want is to stress test the queries you're running and not the table. The table is just a bunch of data blocks on a disk and the query is what will make the difference in performance as far as development is concerned. That will tell you if you need different indexes or need to redesign the query.
On the other hand, if you're looking at it as a DBA or system administrator, you're probably more interested in OS level statistics especially disk latency, memory paging, and CPU utilization.
All of this is available in Enterprise Manager, which is my primary tuning tool for both development and DBA work. If you don't have that, read up on using sql_trace to profile your queries, and on your OS-specific documentation for how to get those stats.
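If you go the sql_trace route, a minimal sketch for tracing the current session looks like this (the tkprof step is run from an OS shell on the database server against the generated trace file):

-- in the test session
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE);
-- ... run the test workload ...
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;

-- then, on the server, format the trace file:
-- tkprof <tracefile>.trc test_report.txt sys=no sort=exeela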