I'm struggling with performance in Oracle. The situation: subsystem B has a dblink to master DB A. On system B, a query over the dblink completes in 15 seconds, and the plan uses appropriate indexes.
But when the same query is used to fill a table inside a stored procedure, Oracle switches to another plan with full scans. Whatever I try (hints), I can't get rid of these full scans. That's horrible.
What can I do?
The Oracle query optimizer tries up to 2000 different plan possibilities and normally chooses the best one. But if you think it chose the wrong plan, you may suspect the following cases:
1- The histograms belonging to the queried tables are stale.
2- Your indexes cannot be used because of how the query is written.
3- You can use index hints to force the indexes to be used (a sketch follows this list).
4- You can use SQL Tuning Advisor or run TKPROF for performance analysis and determine what's wrong or what caused the bad performance. Also check network and disk I/O values, etc.
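For points 1 and 3, a minimal sketch of refreshing statistics (including histograms) and forcing an index; the owner, table, and index names are placeholders, not taken from the question:

BEGIN
  -- Refresh optimizer statistics and histograms on the queried table
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'APP_OWNER',
    tabname    => 'ORDERS',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO',  -- let Oracle decide where histograms help
    cascade    => TRUE);                        -- also gather stats on the indexes
END;
/

-- If the optimizer still refuses the index, force it with a hint
SELECT /*+ INDEX(o ORDERS_IDX1) */ o.order_id, o.status
FROM orders o
WHERE o.customer_id = :cust_id;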
If you share your query we can give you more information.
It looks like we are not talking about the same query under two different conditions.
The first case is a simple select over the dblink, and the second case is an "insert as select" over the dblink (see the sketch below).
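To make the difference concrete, the two statements would look roughly like this (table and dblink names are made up). In the second form, the DRIVING_SITE hint is a common way to push the join and filtering work to the remote database instead of pulling whole tables across the link; whether it helps depends on the data volumes:

-- Case 1: simple select over the dblink
SELECT t.col1, t.col2
FROM big_table@dblink_to_a t
WHERE t.some_key = :k;

-- Case 2: insert as select over the dblink
INSERT INTO local_table
SELECT /*+ DRIVING_SITE(t) */ t.col1, t.col2
FROM big_table@dblink_to_a t
WHERE t.some_key = :k;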
Can you please share the two queries and their execution plans here, as you may have them handy? If it's not possible to paste the queries due to security limitations, please paste the execution plans.
-Abhi
After many tries, I managed to create a new plan with Enterprise Manager. Now it's running perfectly.
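For anyone doing the same without Enterprise Manager, roughly the same effect is available through DBMS_SPM; this is a sketch assuming the good plan is still in the cursor cache, and the SQL_ID is a placeholder:

DECLARE
  n PLS_INTEGER;
BEGIN
  -- 'abc123xyz' is a placeholder; look up the real SQL_ID in V$SQL
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'abc123xyz');
  DBMS_OUTPUT.PUT_LINE(n || ' plan(s) loaded as accepted baselines');
END;
/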
I have a really strange problem.
I'm using Oracle 11g.
There is a query executed by the Business Objects tool, for which the optimizer generates different plans for different users.
When my customer runs the BO report it's really slow, but when I run it, it's fast.
Since there is a good plan (it takes seconds), I tried to force the optimizer to use that plan.
The problem is that it doesn't work.
I tried with baselines and SQL tuning sets, but the query uses bind variables with different values each time, so it doesn't really help when the query changes.
Is there a way to disable a plan for all SQL executions?
This is one bad plan, but it can show up for a lot of queries because of the bind variables.
Also, I found information on the net about the optimizer_secure_view_merging parameter, which could cause such a problem. But a few users besides the owner get the good plan. Could that still be the cause?
source:
https://oracledb.wordpress.com/2007/04/10/execution-plans-differents-with-different-users/
Any other ideas on what to do would be welcome.
I wouldn't call this problem really strange.
There are a lot of possible causes for different users getting different behaviour from the same query.
One trivial cause is querying a non-qualified table:
select * from TAB
This query will access different tables for different users (see the sketch below).
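A minimal illustration with hypothetical schema names; the same statement text resolves to a different object for each user:

-- USER_A has its own table, so TAB resolves to USER_A.TAB
CREATE TABLE user_a.tab (id NUMBER);

-- USER_B has a private synonym, so TAB resolves to APP_OWNER.BIG_TAB
CREATE SYNONYM user_b.tab FOR app_owner.big_tab;

-- The identical query therefore reads different tables for each user
SELECT * FROM tab;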
The next possibility is different optimizer initialization parameters, which could mean the optimizer for one user may use features that are prohibited for another user.
As the simplest way of troubleshooting, I'd recommend performing an Oracle 10053 trace for both queries (see below).
The trace file contains the complete list of the parameters used, and a simple diff could provide a useful hint.
If the parameters are not the cause, you'll see in the trace the details of why different access paths were taken in the execution plan. (A good introduction to understanding the 10053 trace is the paper by Wolfgang Breitling linked above.)
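For reference, enabling the 10053 trace in the session before running the statement looks like this; the tracefile identifier is just a label of your choosing:

ALTER SESSION SET tracefile_identifier = 'user_a_10053';
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';
-- run the problematic statement here
ALTER SESSION SET EVENTS '10053 trace name context off';
-- the trace file lands in the directory shown by V$DIAG_INFO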
We were told by the DBA that our application causes trouble on the servers.
There are queries that start like the following:
SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring
optimizer_features_enable(default) no_parallel result_cache(snapshot=3600)
OPT_ESTIMATE(@"innerQuery", TABLE, "THIS_#21", SCALE_ROWS=0.0007347778778)
*/ SUM(C1) FROM ...
and they crash the server; we receive ORA-12537.
We are using NHibernate, but I am fairly sure those queries are not generated by our application. The queries have no meaning in our business logic; they are some random joins. We don't have SQL trace rights, but in the logs the DBA gives us, those queries are executed under our module name.
I googled and found out that DS_SVC is a comment marking service queries that Oracle 12c issues for dynamic sampling.
Our queries are not exactly complex: a couple of left joins with a ROWNUM limit of 1000.
So the question is: can I say those DS_SVC queries are a problem on the DBA's side? If so, where can I get some docs to prove it?
Looks like a 12c bug. See if changing this helps. Can ask Oracle support as well.
ALTER SESSION SET "_fix_control"='7452863:0';
https://www.pythian.com/blog/performance-problems-with-dynamic-statistics-in-oracle-12c/
The DYNAMIC_SAMPLING hint is used to let the CBO collect cardinality estimates during run time. It looks like the algorithm has been changed in 12c and dynamic sampling is triggered in a broader set of use cases. This behavior can be disabled at statement, session or system level using the fix control for bug 7452863. For example: ALTER SESSION SET "_fix_control"='7452863:0';
Those queries are generated by the optimizer itself. The feature is called "dynamic sampling". Until 11g this was used by default only when there were no stats on the tables.
Since 12c, dynamic sampling can also be triggered by another new feature, "adaptive execution plans", for example in situations where histograms are missing on columns.
Generally this is quite complex DBA stuff to deal with. There are various ways to fix "adaptive execution plans" or to disable them partially or completely, as sketched below.
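As a sketch of the usual knobs: which parameters exist depends on the exact 12c release (12.1 has one umbrella parameter; 12.2 split it in two):

-- 12.1
ALTER SYSTEM SET optimizer_adaptive_features = FALSE;

-- 12.2 and later
ALTER SYSTEM SET optimizer_adaptive_plans = FALSE;
ALTER SYSTEM SET optimizer_adaptive_statistics = FALSE;

-- or only the dynamic-sampling fix control, per session
ALTER SESSION SET "_fix_control" = '7452863:0';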
The best you can do is to contact Oracle support.
We have added the /*+ dynamic_sampling(0) */ hint to our queries. It helped; the exception is gone.
I have an Oracle bind query that is extremely slow (about 2 minutes) when it executes from my C# program, but runs very quickly in SQL Developer. It has two parameters that hit the table's index:
select t.Field1, t.Field2
from theTable t
where t.key1=:key1
and t.key2=:key2
Also, if I remove the bind variables and create dynamic sql, it runs just like it does in SQL Developer.
Any suggestion?
BTW, I'm using ODP.
If you are replacing the bind variables with static values in SQL Developer, then you're not really running the same test. Make sure you use the bind variables, and if it's also slow then you're just getting bitten by a bad cached execution plan. Updating the stats on that table should resolve it.
However, if you are actually using bind variables in SQL Developer, then keep reading. The TL;DR version is that the settings ODP.NET runs under sometimes cause a slightly more pessimistic approach. Start by updating the stats, but have your DBA capture the execution plan under both scenarios and compare them to confirm.
I'm reposting my answer from here: https://stackoverflow.com/a/14712992/852208
I considered flagging yours as a duplicate, but your title is a little more concise since it identifies that the query runs fast in SQL Developer. I'll welcome advice on handling this another way.
Adding the following to your config will send ODP.NET tracing info to a log file:
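A sketch of one possible form, assuming the unmanaged ODP.NET provider; the file path is a placeholder, and the meaning of the level values is described in the ODP.NET documentation:

<!-- app.config / web.config -->
<oracle.dataaccess.client>
  <settings>
    <add name="TraceFileName" value="c:\temp\odpnet.trc" />
    <add name="TraceLevel" value="1" />
  </settings>
</oracle.dataaccess.client>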
This will probably only be helpful if you can find a large gap in time. Chances are rows are actually coming in, just at a slower pace.
Try adding "enlist=false" to your connection string (see the example after the quote below). I don't consider this a solution, since it effectively disables distributed transactions, but it should help you isolate the issue. You can get a little more information from an Oracle forums post:
From an ODP perspective, all we can really point out is that the
behavior occurs when OCI_ATR_EXTERNAL_NAME and OCI_ATR_INTERNAL_NAME
are set on the underlying OCI connection (which is what happens when
distrib tx support is enabled).
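For reference, a minimal sketch of where enlist=false goes; the data source and user are placeholders:

Data Source=MYDB;User Id=app_user;Password=...;Enlist=false;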
I'd guess what you're not seeing is that the execution plan is actually different between the ODP.NET call and the SQL Developer call (meaning the performance hit is actually occurring on the server). Have your DBA trace the connection and obtain execution plans from both the ODP.NET call and the call straight from SQL Developer (or with the enlist=false parameter).
If you confirm different execution plans, or if you want to take a preemptive shot in the dark, update the statistics on the related tables. In my case this corrected the issue, indicating that execution plan generation doesn't really follow different rules for the different types of connections, but that the cost analysis is just slightly more pessimistic when a distributed transaction might be involved. Query hints to force an execution plan are also an option, but only as a last resort.
Finally, it could be a network issue. If your ODP.NET install is using a fresh Oracle home (which I would expect unless you did some post-install configuring), then the tnsnames.ora could be different. Host names in tnsnames.ora might not be fully qualified, creating delays resolving the server. I'd only expect the first attempt (and not subsequent attempts) to be slow in this case, so I don't think it's the issue, but I thought it should be mentioned.
Are the parameters bound with the correct data types in C#? If a bind datatype doesn't match its column's datatype, the query may still return correct results, but Oracle has to add an implicit conversion. When that conversion lands on the column side (for example TO_NUMBER(key1) when key1 is a VARCHAR2 column compared against a numeric bind), it works like wrapping the column in a function, which prevents the index from being used. A small demo follows.
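A small demo of the effect, with made-up table and index names:

-- key1 is stored as VARCHAR2 and indexed
CREATE TABLE the_table (key1 VARCHAR2(10), key2 VARCHAR2(10), field1 NUMBER, field2 NUMBER);
CREATE INDEX the_table_ix ON the_table (key1, key2);

-- Numeric bind: Oracle rewrites the predicate as TO_NUMBER(key1) = :key1,
-- so the index on key1 cannot be used and you get a full scan
SELECT field1, field2 FROM the_table WHERE key1 = 123;

-- String bind: the types match, so an index range scan is possible
SELECT field1, field2 FROM the_table WHERE key1 = '123';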
Please also check the number of rows returned by the query. If the number is big, then possibly C# is fetching all rows while the other tool fetches only the first batch. Fetching all rows may require many more disk reads in that case, which is slower. To check this, try running the following in SQL Developer:
SELECT COUNT(*) FROM (
select t.Field1, t.Field2
from theTable t
where t.key1=:key1
and t.key2=:key2
)
The above query has to read every row, so it touches the maximum number of database blocks.
A nice tool in such cases is the TKPROF utility, which shows the actual SQL execution plan; it may differ between the cases above (though it should not). A sketch of producing a trace follows.
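A minimal sketch of tracing a session and formatting the trace with TKPROF; the tracefile identifier and file names are placeholders:

ALTER SESSION SET tracefile_identifier = 'bindtest';
ALTER SESSION SET sql_trace = TRUE;
-- run the query here
ALTER SESSION SET sql_trace = FALSE;

-- then format the raw trace on the database server (OS shell):
-- tkprof ORCL_ora_12345_bindtest.trc out.txt sys=no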
It is also possible that you have accidentally connected to different databases. In such cases it is nice to compare results of queries.
Since you are saying "bind is slow", I assume you have checked the SQL without binds and it was fast. In 99% of cases, using binds makes things better. Please check whether the query with constants runs fast. If yes, then the problem may be an implicit conversion on the key1 or key2 column (e.g. the type of t.key1 does not match the type of :key1).
We had a performance issue in our production environment.
We identified that Oracle was executing queries using an index which is not the correct one.
The queries have in their WHERE CLAUSE all the columns of the Primary Key (and nothing else).
After rebuilding the index and gathering statistics, Oracle started using the PK_INDEX, and the execution plan indicated an INDEX UNIQUE SCAN.
It worked fine for a while, and then Oracle started using the wrong index again. The index it uses now comprises 2 columns, of which only 1 appears in the WHERE CLAUSE of the query. Now the execution plan indicates an INDEX RANGE SCAN and the system is very slow.
Please let me know how we could get to the root of this issue.
Try gathering stats again. If you then get the expected execution plan, it means that the changes made to the table since the last stats gathering made Oracle think the less favorable execution plan was better.
So your question here is really "How can I maintain plan stability?"
You have several options:
1- Use hints in your query to indicate the exact access path.
2- Use stored outlines.
I personally don't like these two approaches, because if your data changes in the future in such a way that the execution plan should change, you'll get lousy performance.
So the third option (and my personal favorite) is:
3- Enable periodic statistics gathering. Oracle knows how to spot the changes and incrementally update the relevant stats (see the sketch below).
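A sketch of the third option with DBMS_STATS, plus a scheduler job for the periodic part; the schema, table, and job names are placeholders:

BEGIN
  -- one-off refresh of the problem table's statistics
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP', tabname => 'T', cascade => TRUE);
END;
/

BEGIN
  -- nightly job doing the same at 02:00
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'GATHER_T_STATS',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_STATS.GATHER_TABLE_STATS(ownname => ''APP'', tabname => ''T'', cascade => TRUE); END;',
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE);
END;
/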
I have a database (SQL Server 2005). There are now about 100,000 records in a table called users. When I query it using LINQ to SQL, it gets slower and slower. What can I do to improve the speed?
Analysing your query and adding some indexes to your table may help.
To get a more specific answer, post more specific information (table structure, the indexes you have, the SQL code L2S generates, ...).
You could (in order of preference):
1- Save your query as a stored procedure.
2- Add indexes to your users table for the columns you are querying/sorting on (see the sketch after this list).
3- Analyze your query (if it is complicated) and see if there's a less resource-intensive way of doing it. There are graphical query analyzers to help you.
4- As a last resort, don't use LINQ to SQL but ADO.NET Entity Framework instead; it's significantly faster. But you'll only see performance improvements for crazy stuff, and only if you've already done all of the above.
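For point 2, a minimal T-SQL sketch; the column names are placeholders for whatever your query filters and sorts on:

-- index on the filter column, with the selected columns included
CREATE NONCLUSTERED INDEX IX_users_lastname
ON dbo.users (lastname)
INCLUDE (firstname, created_at);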
Use stored procedures and then use LINQ to SQL to get the desired rows; this will improve performance.
The best tools at your disposal for analyzing your database access and seeing what needs to be optimized are:
SQL Server Profiler
Graphical Execution Plans
The first one will allow you to see the exact queries being sent to your database from your application, which is especially useful if it turns out that your application is chattier than you think. The second one will allow you to take those queries and see exactly what the SQL server is doing with them.
In the graphical execution plan, look for steps which use a lot of CPU and paths which transfer a lot of records. Those are what you'll want to optimize. It's possible that you're doing a table scan somewhere, which is slow, or maybe joining on many more records than you need somewhere, which is slow, etc.