I was trying to analyze the AWR report generated for a particular process with a duration of one hour. I am trying to find out which query is taking the most time while the process runs.
When I went through the report, I could see SQL ordered by Gets, SQL ordered by CPU Time, SQL ordered by Executions, SQL ordered by Parse Calls,
SQL ordered by Sharable Memory, SQL ordered by Elapsed Time, etc.
I can see the SQL text in the SQL ordered by Elapsed Time table.
My question: is this the right way to identify the expensive query? Please advise in this regard.
Elapsed Time (s) SQL Text
19,477.05 select abc.....
7,644.04 select def...
SQL ordered by Elapsed Time includes the SQL statements that took significant execution time during processing. We have to look at Executions and Elapsed time per Exec (s), etc., along with Elapsed time when analyzing.
For example, a query with few Executions and a high Elapsed time per Exec (s) could be a candidate for troubleshooting or optimization.
The best reference I found so far: http://www.dbas-oracle.com/2013/05/10-steps-to-analyze-awr-report-in-oracle.html
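If you have the Diagnostics Pack license, you can also query the AWR history views directly instead of reading the report. A minimal sketch (the snap_id range below is hypothetical; pick the snapshots that bracket your one-hour run):
-- Elapsed time columns in DBA_HIST_SQLSTAT are in microseconds.
SELECT s.sql_id,
       SUM(s.executions_delta) AS executions,
       ROUND(SUM(s.elapsed_time_delta) / 1e6, 2) AS elapsed_s,
       ROUND(SUM(s.elapsed_time_delta) / 1e6 / NULLIF(SUM(s.executions_delta), 0), 2) AS elapsed_per_exec_s
FROM dba_hist_sqlstat s
WHERE s.snap_id BETWEEN 100 AND 101
GROUP BY s.sql_id
ORDER BY elapsed_s DESC;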
AWR shows overall database health, so I don't think it is the right tool to trace a single process.
You should use other tools like sql_trace (with tkprof) or dbms_profiler. They concentrate on your own process.
If you are using sql_trace, you need to connect to the server (or ask the DBA team) to analyse the trace file.
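For example, a minimal sketch of tracing your own session and then formatting the trace with tkprof (the trace file identifier and file names are hypothetical):
-- Enable tracing for the current session, including waits and bind values.
ALTER SESSION SET tracefile_identifier = 'my_process';
EXEC DBMS_MONITOR.session_trace_enable(waits => TRUE, binds => TRUE);
-- ... run the process here ...
EXEC DBMS_MONITOR.session_trace_disable;
-- Then, on the database server:
-- tkprof orcl_ora_12345_my_process.trc my_process.txt sort=exeela,fchela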
In SQL ordered by Elapsed Time you should always check the queries that have a low number of Executions and a high Elapsed time; these are usually the problematic queries. Since Elapsed time covers the whole task for a given query, a high value combined with few Executions means that, for some reason, the query is not performing up to expectations.
There are a few other parameters we need to check to narrow down the issue.
A buffer get is less expensive than a physical read, because for a physical read the database has to work harder (and do more) to get the data: roughly the time it would have taken if the block were already in the buffer cache plus the time actually taken to read it from the physical block.
If you suspect that excessive parsing is hurting your database’s performance:
check “time model statistics” section (hard parse elapsed time, parse time elapsed etc.)
see if there are any signs of library cache contention in the top-5 events
see if CPU is an issue.
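For the first check, you can also pull the parse-related statistics straight from the time model view; a sketch (values are reported in microseconds):
SELECT stat_name, ROUND(value / 1e6, 2) AS seconds
FROM v$sys_time_model
WHERE stat_name IN ('parse time elapsed', 'hard parse elapsed time', 'failed parse elapsed time');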
Establishing a new database connection is also expensive (and even more expensive in case of audit or triggers).
“Logon storms” are known to create very serious performance problems.
If you suspect that high number of logons is degrading your performance, check “connection management elapsed time” in “Time model statistics”.
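A quick way to check that from SQL, as a sketch (the underlying statistic in v$sys_time_model is named 'connection management call elapsed time'):
SELECT stat_name, ROUND(value / 1e6, 2) AS seconds
FROM v$sys_time_model
WHERE stat_name = 'connection management call elapsed time';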
A low Soft Parse % indicates bind variable and versioning issues. With 99.25% soft parsing, about 0.75% (100 minus the soft parse percentage) of parses are hard parses. A low hard parse rate is good for us.
If Latch Hit % is <99%, you may have a latch problem. Tune latches to reduce cache contention.
Library hit % is great when it is near 100%. If this was under 95% we would investigate the size of the shared pool.
If this ratio is low, then we may need to check the following (see the sketch after this list for the first two parameters):
• Increase the SHARED_POOL_SIZE init parameter.
• CURSOR_SHARING may need to be set to FORCE.
• SHARED_POOL_RESERVED_SIZE may be too small.
• Inefficient sharing of SQL, PLSQL or JAVA code.
• Insufficient use of bind variables
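A minimal sketch of adjusting the first two parameters (the sizes are hypothetical; test outside production first, and note that CURSOR_SHARING=FORCE has side effects of its own):
-- Hypothetical values; a SHARED_POOL_SIZE change in the SPFILE takes effect after restart.
ALTER SYSTEM SET shared_pool_size = 512M SCOPE = SPFILE;
ALTER SYSTEM SET cursor_sharing = FORCE SCOPE = BOTH;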
We have a batch process that runs every day. This week, a job that usually does not take more than 18 minutes of execution time (real time, as you can see) is now taking more than 45 minutes to finish.
The FULLSTIMER option is already active, but we don't know why only the real time increased.
Old documentation mentions FULLSTIMER statistics that could help identify the problem, but they do not appear in the batch log. (Those statistics are Page Faults, Context Switches, Block Operations and so on, as you can see.)
It might be an I/O issue. Does anyone know how we can identify if it is really an I/O problem or if it could be some other issue (network, for example)?
To be more specific, this is one of the queries whose run time has increased dramatically. As you can see, it reads from a database (SQL Server, VAULT schema) and from WORK, and writes to the WORK directory.
The number of observations is almost the same.
We asked the customer about any change in network traffic, and they said it is still the same.
Thanks in advance.
For a process to complete, much more needs to be done than the actual calculations on the CPU.
Your data has to be read and your results have to be written.
You might have to wait for other processes to finish first, and if your process includes multiple steps, writing to and reading from disk each time, you will have to wait for the CPU each time too.
In our situation, if real time is much larger than CPU time, we usually see a lot of traffic to our Network File System (NFS).
As a programmer, you might notice that storing intermediate results in WORK is more efficient than storing them in remote libraries.
You might save a lot of time by creating intermediate results as views instead of tables, IF you only use them once. That is not only possible in SQL, but also in data steps like this:
data MY_RESULT / view=MY_RESULT;
  set MY_DATA;
  where transaction_date between '1jan2022'd and '30jun2022'd;
run;
There are Time Model Statistics in the AWR report. Is the parse time included in DB CPU time, or is it reported separately?
I've found that my database has a large parse time problem, and I would like to estimate the benefit that could be achieved by reducing the parse time.
Thanks!
Time Model Statistics
DB Time represents total time in user calls
DB CPU represents CPU time of foreground processes
Total CPU Time represents foreground and background processes
Statistics including the word "background" measure background process time, therefore do not contribute to the DB time statistic
Ordered by % of DB time in descending order, followed by Statistic Name
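To put a number on the potential benefit, you can compare the parse statistics to DB time in the time model view. As far as I understand, the CPU portion of parsing is also counted inside DB CPU while the wait portion is not, so these statistics overlap rather than add up. A sketch (values are in microseconds):
-- Compare 'parse time elapsed' to 'DB time' to estimate the share spent parsing.
SELECT stat_name, ROUND(value / 1e6, 2) AS seconds
FROM v$sys_time_model
WHERE stat_name IN ('DB time', 'DB CPU', 'parse time elapsed', 'hard parse elapsed time')
ORDER BY value DESC;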
We have identified that DB CPU is taking more time.
Look at SQL ordered by CPU Time; it shows which queries are taking the most time, along with their execution counts. Pick a query, run it through the SQL Tuning Advisor, and check its recommendations.
I hope this answers your question.
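A minimal sketch of running the SQL Tuning Advisor for one of those statements (requires the Tuning Pack; the sql_id and task name below are hypothetical):
DECLARE
  l_task VARCHAR2(64);
BEGIN
  -- Create and run a tuning task for a SQL_ID taken from "SQL ordered by CPU Time".
  l_task := DBMS_SQLTUNE.create_tuning_task(sql_id => 'abcd1234wxyz9', task_name => 'cpu_top_sql_task');
  DBMS_SQLTUNE.execute_tuning_task(task_name => l_task);
END;
/
-- Read the recommendations produced by the task.
SELECT DBMS_SQLTUNE.report_tuning_task('cpu_top_sql_task') FROM dual;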
I wanted to know if there is a way to measure the performance of a function in parts.
I know that you can measure the total time it takes to complete the function, but is there a way to measure the individual queries within a function?
I'm asking because I cannot find the bottleneck in my function's performance.
Most of the time when you see a major difference between the estimated and the actual execution plans, it is because your statistics have never been updated. SQL Server therefore has no idea which tables have little data, which ones are huge, and so on, and is more likely to generate bogus plans (both estimated and actual), or to miscalculate estimated plan costs. The actual plan is based on real, accurate costs of the plan, but when the plan is very far from an optimal one, this accuracy is of very little value for determining bottlenecks.
To correct this, issue the UPDATE STATISTICS statement or execute the sp_updatestats procedure.
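A minimal sketch, with a hypothetical table name:
-- Refresh statistics for one table with a full scan, or for the whole database.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
EXEC sp_updatestats;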
Seeing 100% actuals for your function might well be an effect of an empty or almost empty database, regardless of whether your statistics are up to date.
When optimizing for performance, make sure that your database is populated quasi-realistically with lots of data (put twice as many records into each table as you expect in production, but do maintain the expected rough proportions). There is not much point in looking for a performance bottleneck using an empty or an entirely, disproportionately overblown database; query plans will be different, and even if the plan happens to be the same one, the bottleneck may be elsewhere than in production.
I have a big query over four tables and I want to optimize it.
The weird part is that when I get the execution plan without statistics, it says something like 1.2M. However, if I gather statistics for one of the tables involved in the query, the cost drops to 4k. But if I gather statistics on the other tables as well, the cost grows to 50k, so I am not sure what's happening.
Can anyone explain a reason why giving more statistics actually increases query cost?
The Cost Based Optimiser uses as much information as you can give it in order to calculate the cost of a plan. If you update (i.e. change) the statistics it uses, then obviously that will change the calculated cost of the plan.
It's not actually the gathering of stats that causes the cost to grow - it's how those stats have changed (whether up or down) that causes the calculated cost to change.
In the absence of statistics, Oracle may use heuristics, guesswork or a quick sample of the data (depending on the settings in your instance).
Generally, the better (more accurate or representative) the statistics, the more accurate the cost calculation.
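To give the optimizer representative information for all four tables, you would typically gather statistics on each of them; a sketch with hypothetical schema and table names:
BEGIN
  -- Repeat for each table involved in the query.
  DBMS_STATS.gather_table_stats(
    ownname          => 'APP_OWNER',
    tabname          => 'ORDERS',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);  -- also gather index statistics
END;
/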
The cost-based optimizer has its challenges. There are rounding errors that can have quite an impact on the decisions it makes. This is one of the reasons that SQL Plan Stability, introduced in 11g, is so nice. Forget about 10g if you can, or prepare for long debugging sessions.
At first use, a plan is generated based on the current statistics and executed. If the SQL is repeated, the SQL and the plan are stored in a baseline. In the maintenance window, the most expensive plans are re-evaluated and in many cases a better plan can be provided. This is possible because at runtime the optimizer is limited in the time it is given to search for a plan; in the maintenance window, a lot more time can be spent finding the best plan.
In 11g bind peeking is also improved, and a single SQL statement can now have multiple plans, based on the values of the bind variables.
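If you want to capture the current plan into a baseline yourself, a sketch along these lines (the sql_id is hypothetical):
DECLARE
  l_plans PLS_INTEGER;
BEGIN
  -- Load the plan currently in the cursor cache into a SQL plan baseline.
  l_plans := DBMS_SPM.load_plans_from_cursor_cache(sql_id => 'abcd1234wxyz9');
  DBMS_OUTPUT.put_line('Plans loaded: ' || l_plans);
END;
/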
The query cost is based on many factors, where IO is a very important factor.
How are your tables filled, and where are the high water marks located? A table that is constantly filled and emptied can have its high water mark far away...
There are lots of bugs in the optimizer and lots of options, controlled by hidden parameters. You could try to use them to tweak the behaviour. Upgrading to 11g might be a lot smarter, as it solves lots of performance problems for many applications.
We have a web service which provides search over hotels. There is a performance problem: a single request to the service takes around 5000 ms. Almost all of that time is spent in the database executing stored procedures. During a request our server (MSSQL 2008) consumes ~90% of the processor time. When 2 requests are made in parallel, the average time grows to around 7000 ms. As the number of requests increases, the average response time increases as well. We have 20-30 requests per minute.
Which kind of optimization is best in this case, keeping in mind that the goal is to provide a stable response time for the service:
1) Try to decrease the stored procedures execution time
2) Try to find the way how to unload the server
It is interesting to hear from people who deal with booking sites.
This has nothing to do with booking sites. You have poorly written stored procedures, possibly no indexes, your queries are probably not SARGable, and the engine has to scan the table every time. Are your statistics up to date?
Run some procs from SSMS and look at the execution plans.
It is also a good idea to run Profiler. How are your page life expectancy and buffer cache hit ratio? Query sys.dm_os_performance_counters to get the Buffer cache hit ratio and Page life expectancy counters.
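A sketch of pulling those two counters from the DMV:
SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy', 'Buffer cache hit ratio', 'Buffer cache hit ratio base');
-- Note: the hit ratio is the cntr_value of 'Buffer cache hit ratio' divided by 'Buffer cache hit ratio base'.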
I think the first thing you have to do is to quantify what's going on on the server.
Use SQL Server Profiler to get an accurate picture of the activity on the server.
Identify which procedures / SQL statements take up the most resources
Identify high priority SQL operations consuming a lot of resources / taking time
Prioritize
Fix
Now, when I say "Fix", I mean that you should execute the procedure / statement manually in SSMS - Make sure you have "Show Execution Plan" turned ON.
Review the execution plan for parts that consume the most resources and then figure out how to correct that. You may need to create a new index, rewrite the SQL to be more efficient by using hints, etc.
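For example, if the plan shows an expensive scan on a search table, a new index might be the fix; a sketch with hypothetical table and column names:
-- Covering index for a hotel-search filter; adjust the key and included columns to match the actual plan.
CREATE NONCLUSTERED INDEX IX_Hotels_CityId_Stars
ON dbo.Hotels (CityId, Stars)
INCLUDE (Name, Price);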
You provide no detail, so it is hard to solve your specific problem. In general, to increase the performance of a stored procedure I look at:
1) replace any cursors or loops with set-based operations
2) make sure all queries are using an index and an efficient execution plan (check this with SET SHOWPLAN_ALL ON; see the sketch below)
3) make sure there is no locking or blocking slowing it down (see the query given here)
Without more info on the specifics, it is hard to make any suggestions.
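For point 2, a sketch of checking the plan without actually executing the procedure (the procedure name and parameters are hypothetical):
-- Return the estimated plan as rows instead of running the statements.
SET SHOWPLAN_ALL ON;
GO
EXEC dbo.usp_SearchHotels @CityId = 42, @CheckIn = '2024-06-01';
GO
SET SHOWPLAN_ALL OFF;
GO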
Almost all of the time is spent in the database executing stored procedures.
How many procedures is the app calling? What do they do? Are transactions involved? Are the procedures recompiling on each call? Do you have any indexes? Are statistics up to date? Etc., etc. You need to give a lot more info, or any help here is a complete guess.