Query takes too much time to process - Oracle

I just need an approach for this question. Our application connects to an Oracle database. It runs fine in production, but occasionally some queries take too long to process (like 5-7 seconds). Sometimes they are UPDATE queries and sometimes SELECT queries.
I just want to know how to approach such issues. Would any tools (like nmon) help? Thanks.

90% or more of poorly performing queries come down to the execution plan, and probably 90% of those are related to statistics. The place to start is a SQL Monitor report, which shows you the execution plan together with runtime statistics and identifies where the time is being spent. Only when you understand the problem can you come up with the correct solution.
If you are not familiar with SQL Monitor, check out http://www.oracle.com/technetwork/database/manageability/sqlmonitor-084401.html
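If you have never pulled one of these reports, here is a minimal sketch of doing it from SQL*Plus. The sql_id is a placeholder you would look up first, and both queries assume Tuning Pack licensing:

-- Find recently monitored statements and their sql_ids
select sql_id, status, elapsed_time
from v$sql_monitor
order by last_refresh_time desc;

-- Generate the SQL Monitor report for the statement of interest ('abcd1234xyz' is a placeholder sql_id)
select dbms_sqltune.report_sql_monitor(
         sql_id       => 'abcd1234xyz',
         type         => 'TEXT',
         report_level => 'ALL') as report
from dual;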

Related

Running gather table statistics multiple times inside a package causes Performance Issues

We have a big package that ALWAYS runs into performance issues; we get an average of 6-10 tickets raised for it per month. Sometimes the program runs successfully in minutes, sometimes it runs for days only to fail with an unexplained error.
I started to look deeper into this and found a number of possible causes of the performance issues, such as numerous un-tuned SQL statements and bad coding practices.
One thing that struck me today is that the code calls Gather Table Statistics multiple times, in multiple places, before doing some big operation (such as a huge SELECT statement and a lot of DML statements).
This program is run on a daily, weekly and monthly basis, depending on the organization's practices.
Unfortunately, I am unable to replicate the performance issue to learn more, but I am guessing that running Gather Table Statistics on multiple tables, multiple times, can cause major performance issues in the program. I am unable to find any resources to back this idea up. Can someone confirm?
Yes, I can confirm that; I have seen code that spends 80% of its runtime gathering stats. Given your constraints, I'd try the following, in this order:
I'd have a look at the DELETE statements to check if they can be replaced by TRUNCATE TABLE.
Gather stats once the tables are filled, lock their stats and comment out any other gather_table_stats calls. The assumption is that the data will not differ widely enough from day to day or week to week to cause different query plans.
If that doesn't work, have a look at DBA_TAB_MODIFICATIONS to at least check whether the tables have changed enough since the last stats gathering (a rough sketch of the gather-and-lock approach and this check follows below).
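As mentioned, here is a rough sketch of gathering and locking stats once and then checking DBA_TAB_MODIFICATIONS. The owner and table names are placeholders, and it assumes the usual DBMS_STATS privileges:

-- Gather stats once, after the staging tables are loaded, then lock them
begin
  dbms_stats.gather_table_stats(ownname => 'APP_OWNER', tabname => 'BIG_STAGING_TABLE');
  dbms_stats.lock_table_stats(ownname => 'APP_OWNER', tabname => 'BIG_STAGING_TABLE');
end;
/

-- Later, check how much the table has actually changed since the last gather
-- (flushing monitoring info makes the view current first)
exec dbms_stats.flush_database_monitoring_info;
select table_owner, table_name, inserts, updates, deletes, timestamp
from dba_tab_modifications
where table_owner = 'APP_OWNER'
and table_name = 'BIG_STAGING_TABLE';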

How to test tuned Oracle SQL, and how to clear the system/hardware buffer?

I want to know the right way to test SQL before and after tuning.
The problem is that once I have executed the original SQL, the tuned SQL returns results too quickly (presumably because of caching) for a fair comparison.
I found this question:
How to clear all cached items in Oracle
I flushed the data buffer cache and the shared pool, but it still didn't help.
I think this answer from that question is related to what I want to know more about:
Keep in mind that the operating system and hardware also do caching which can skew your results.
The Oracle version is 11g and the server runs HP-UX 11.31.
If the server were Linux, I could have tried clearing the OS cache using '/proc/sys/vm/drop_caches' (I'm not sure it would work).
I've been searching for quite a long time on this problem. Has anyone else run into this kind of issue?
Thanks.
If your query is such that the results are being cached in the file system, which your description suggests, then the query is probably not a "heavy-hitter" overall. But if you were testing in isolation, with little other activity on the database, performance could well suffer when the SQL runs in a busier production environment.
There are several things you can do to determine which version of two queries is better. In fact, entire books have been written on just this topic. But to summarize:
Before you begin, ensure statistics on the tables and indexes are up to date.
See how often the SQL will be executed in the grand scheme of things. If it runs once or twice a day, and takes 2 seconds to run, don't bother trying to tune.
Do an explain plan on both and look at the estimated costs and number of steps.
Turn on tracing for both optimizer steps and execution statistics, and compare (a sketch of these last two steps follows this list).
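As a sketch of those two steps; the SELECT below is only a placeholder for your original and tuned statements, and the resulting trace file would then be formatted with tkprof:

-- Compare the estimated plans and costs of the two versions
explain plan for
  select /* placeholder: original or tuned SQL */ * from dual;
select * from table(dbms_xplan.display);

-- Trace execution statistics for each version (level 8 includes wait events)
alter session set tracefile_identifier = 'tuning_test';
alter session set events '10046 trace name context forever, level 8';
-- ... run the query being tested ...
alter session set events '10046 trace name context off';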

How can I find which queries are taking the most CPU time in Oracle?

We have an application with a huge amount of data: approximately 100 tables, most of them with roughly 8-10 million rows. Suddenly we are facing performance issues, and it was discovered that CPU usage is too high on the Oracle server.
Since requests to the Oracle server come from different applications, is there a way to find out, on the Oracle server, which queries take the longest or consume the most CPU?
I would appreciate any responses or pointers on how to find this out.
Assuming you are licensed to use AWR and your DBA knows how to generate an AWR report, ask the DBA to generate one for the period when CPU usage was too high. An AWR report has a number of sections, one of which lists SQL statements ordered by CPU usage; that will show you which statements used the most CPU over the period in question.
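If you want to see roughly what that involves, here is a sketch of generating a text AWR report straight from SQL. The DBID, instance number and snapshot IDs are placeholders you would look up first; the usual route is the awrrpt.sql script under $ORACLE_HOME/rdbms/admin, and AWR requires Diagnostics Pack licensing:

-- Find the snapshots that bracket the period of high CPU
select snap_id, begin_interval_time, end_interval_time
from dba_hist_snapshot
order by snap_id;

-- Generate the report between two snapshots (dbid, instance, begin snap, end snap are placeholders)
select output
from table(dbms_workload_repository.awr_report_text(1234567890, 1, 100, 101));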
Sounds like you need to generate some SQL trace files and analyse the results using tkprof.
There's a good Ask Tom question here that should show you how to go about this: http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:969160000346108326
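In case it helps, a minimal sketch of tracing one session and formatting the result. The SID and serial# are placeholders taken from V$SESSION, and the trace file name and location vary by platform and version:

-- Enable extended SQL trace for the session of interest
begin
  dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456, waits => true, binds => false);
end;
/

-- ... let the slow workload run, then turn tracing off ...
begin
  dbms_monitor.session_trace_disable(session_id => 123, serial_num => 456);
end;
/

The resulting trace file is then formatted from the command line with something like: tkprof orcl_ora_12345.trc report.txt sys=no sort=exeela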
Another quick option is to query gv$sql directly:
select elapsed_time/1000000 seconds, gv$sql.*
from gv$sql
order by elapsed_time desc;
It does not provide as much information as tools such as AWR, and queries age out of the cursor cache over time, but it is very quick and easy to run and requires fewer privileges.

Why does the same query take different amounts of time to run? [closed]

I have a problem that has been going on for months. I automate reports at my job; we use Oracle. I write a procedure, time it, and it runs in a few minutes. I then set it up for monthly runs.
And then every month, some report runs for hours. These are the same queries that ran in a few minutes for months before, and all of a sudden they take hours to run.
I end up rewriting my procedures every now and then, which to me defeats the purpose of automating. No one here can help me.
What am I doing wrong? How can I ensure that my queries will always take the same amount of time to run?
I did some research, and it says that in a correctly set up database with correct statistics you don't even have to use hints; everything should consistently run in about the same time.
Is this true? Or does everyone have this problem and everyone just rewrites their procedures whenever they run?
Sorry for 100 questions, I'm really frustrated about this.
My main question is: why does the same query take a drastically different amount of time (from minutes to hours) to run on different days?
There are three broad reasons that queries take longer at different times. Either you are getting different performance because the system is under a different sort of load, you are getting different performance because of data volume changes, or you are getting different performance because you are getting different query plans.
Different Data Volume
When you generate your initial timings, are you using data volumes that are similar to the volumes that your query will encounter when it is actually run? If you test a query on the first of the month and that query is getting all the data for the current month and performing a bunch of aggregations, you would expect that the query would get slower and slower over the course of the month because it had to process more and more data. Or you may have a query that runs quickly outside of month-end processing because various staging tables that it depends on only get populated at month end. If you are generating your initial timings in a test database, you'll very likely get different performance because test databases frequently have a small subset of the actual production data.
Different System Load
If I take a query and run it during the middle of the day against my data warehouse, there is a good chance that the data warehouse is mostly idle and therefore has lots of resources to give me to process the query. If I'm the only user, my query may run very quickly. If I try to run exactly the same query during the middle of the nightly load process, on the other hand, my query will be competing for resources with a number of other processes. Even if my query has to do exactly the same amount of work, it can easily take many times more clock time to run. If you are writing reports that will run at month end and they're all getting kicked off at roughly the same time, it's entirely possible that they're all competing with each other for the limited system resources available and that your system simply isn't sized for the load it needs to process.
Different system load can also encompass things like differences in what data is cached at any point in time. If I'm testing a particular query in prod and I run it a few times in a row, it is very likely that most of the data I'm interested in will be cached by Oracle, by the operating system, by the SAN, etc. That can make a dramatic difference in performance if every read is coming from one of the caches rather than requiring a disk read. If you run the same query later, after other work has flushed out most of the blocks your query is interested in, you may end up doing a ton of physical reads rather than being able to use the nicely warmed-up cache. There's not generally much you can do about this sort of thing; you may be able to cache more data or arrange for processes that need similar data to be run at similar times so that the cache is more efficient, but that is generally expensive and hard to do.
Different Query Plans
Over time, your query plan may also change because statistics have changed (or not changed depending on the statistic in question). Normally, that indicates that Oracle has found a more efficient plan or that your data volumes have changed and Oracle expects a different plan would be more efficient with the new data volume. If, however, you are giving Oracle bad statistics (if, for example, you have tables that get much larger during month-end processing but you gather statistics when the tables are almost empty), you may induce Oracle to choose a very bad query plan. Depending on the version of Oracle, there are various ways to force Oracle to use the same query plan. If you can drill down and figure out what the problem with statistics is, Oracle probably provides a way to give the optimizer better statistics.
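As one concrete example of pinning a plan: on 11g and later, a SQL plan baseline can capture the plan that is currently performing well. This is only a sketch; the sql_id is a placeholder you would take from V$SQL, and it needs the appropriate SQL plan management privileges:

declare
  l_plans pls_integer;
begin
  -- load the currently cached (good) plan for the statement as an accepted baseline
  l_plans := dbms_spm.load_plans_from_cursor_cache(sql_id => 'abcd1234xyz');
end;
/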
If you take a look at AWR/ASH data (if you have the appropriate licenses) or Statspack data (if your DBA has installed that), you should be able to figure out which camp your problems originate in. Are you getting different query plans for different executions? (You may need to capture a query plan from your initial benchmarks and compare it to the current plan, or increase your AWR retention to keep query plans for a few months, in order to see this.) Are you doing the same number of buffer gets over time but getting vastly different amounts of I/O waits? Do you see a lot of contention for resources from other sessions? If so, that probably indicates that the issue is different load at different times.
One possibility is that your execution plan is cached so it takes a short amount of time to rerun the query, but when the plan is no longer cached (like after the DB is restarted) it might take significantly longer.
I had a similar issue with Oracle a long while ago where a very complex query for a report ran against a very large amount of data, and it would take hours to complete the first time it was run after the DB was restarted, but after that it finished in a few minutes.
This is not an answer; it's a reply to Justin Cave. I couldn't format it in any readable way in the comments.
Different Data Volume
When ….. data.
Yes, I'm using the same archive tables that I then use for months to come. Of course the data changes, but it's a pretty consistent rise: for example, if a table has 10M rows this month, it might gain 100K rows the next, 200K the next, 100K the next, and so on. There are no drastic jumps as far as I know. And I'd understand if today the query took 2 minutes and next month it took 5, but not 3 hours. However, thank you for the idea; I will start counting rows in tables from month to month as well.
A question, though: how do people code to account for this? Let's say someone works with tables that get large amounts of data at random times; is there a way to write the query to ensure the run times are at least in the ballpark? Or do people just put up with the fact that in any given month their reports might run 10-20 hours?
Different System Load
If I take a …. to process.
No, I run my queries on different days and times, but I have logs of the days and the times, so I will see if I can find a pattern.
Different system load …hard to do.
So are you saying that the fast times I may be getting at the time of the report design might be fast because of the things I ran on my computer previously?
Also, does the cache get stored on my computer, or on the database under my login, or where?
Different Query Plans
Over time, your query plan … different load at different times.
Thank you for your explanations, you’ve given me enough to start digging.

Performance optimization for SQL Server: decrease stored procedures execution time or unload the server?

We have a web service which provides search over hotels. There is a problem with performance: a single request to the service takes around 5000 ms. Almost all of the time is spent in the database executing stored procedures. During the request our server (MSSQL 2008) consumes ~90% of the processor time. When 2 requests are made in parallel, the average time grows to around 7000 ms. As the number of requests increases, the average response time increases as well. We have 20-30 requests per minute.
Which kind of optimization is best in this case, keeping in mind that the goal is to provide a stable response time for the service:
1) Try to decrease the stored procedures execution time
2) Try to find the way how to unload the server
It is interesting to hear from people who deal with booking sites.
It is interesting to hear from people who deal with booking sites. Thanks!
This has nothing to do with booking sites. You have poorly written stored procedures, possibly no indexes, and your queries are probably not SARGable, so it has to scan the table every time. Are your statistics up to date?
Run some procs from SSMS and look at the execution plans.
It's also a good idea to run Profiler. How are your page life expectancy and buffer cache hit ratio? Take a look at sys.dm_os_performance_counters to get those numbers.
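For reference, a quick way to read those two counters (a T-SQL sketch; the buffer cache hit ratio has to be divided by its base counter to get a percentage):

-- Page life expectancy and buffer cache hit ratio from the performance counter DMV
SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy', 'Buffer cache hit ratio', 'Buffer cache hit ratio base');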
I think the first thing you have to do is to quantify what's going on on the server.
Use SQL Server Profiler to get an accurate picture of the activity on the server.
Identify which procedures / SQL statements take up the most resources
Identify high priority SQL operations consuming a lot of resources / taking time
Prioritize
Fix
Now, when I say "Fix", I mean that you should execute the procedure / statement manually in SSMS, making sure you have "Show Execution Plan" turned on.
Review the execution plan for parts that consume the most resources and then figure out how to correct that. You may need to create a new index, rewrite the SQL to be more efficient by using hints, etc.
You provide no detail to solve your problem. In general, to increase the performance of a stored procedure I look at:
1) replace any cursors or loops with set-based operations
2) make sure all queries are using an index and have an efficient execution plan (check this with SET SHOWPLAN_ALL ON)
3) make sure there is no locking or blocking slowing it down (see the query given here; a rough blocking check is also sketched after this answer)
Without more info on the specifics, it is hard to make any suggestions.
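The linked blocking query isn't reproduced here, but as a rough sketch, the standard DMVs can show whether anything is blocked while the procedure runs:

-- Sessions currently blocked, who is blocking them, and what they are running
SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;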
Almost all of the time is spent in the database executing stored procedures.
How many procedures is the app calling? What do they do? Are transactions involved? Are the procedures recompiling on each call? Do you have any indexes? Are statistics up to date? Etc., etc. You need to give a lot more info, or any help here is a complete guess.
