Oracle: interpreting Toad "Session Browser" data - performance

It is composed of the following information:
IO
Waits
Current Statement (explain plan)
Open Cursor
Access
Locks
RBS Usage
Long Ops
Statistics
I'm studying Waits and the Current Statement explain plan. Locks and Long Ops are pretty intuitive, but which are the most important factors I should consider when monitoring the execution of a query?
This is a query example:

At the query level, you are generally interested in the event that has the highest Time Waited. However, sometimes you have a query that runs quickly 99% of the time and badly 1% of the time. In that case the explain plan may give a clue as to why that might be.
At the session level, it depends on WHY you are monitoring the sessions. You might be interested in the ones related to long-running transactions (potentially blocking other sessions), ones that are blocked, or ones using more than a 'fair' share of CPU resources...
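As a rough illustration of what the Waits tab is summarising, the same numbers can be pulled straight from the data dictionary; v$session_event and its columns are standard, the SID value is a placeholder:
-- Wait events for one session, worst offenders first (roughly what Toad's Waits tab shows);
-- replace 123 with the SID of the session you are watching
select event,
       total_waits,
       time_waited,          -- hundredths of a second
       time_waited_micro
from   v$session_event
where  sid = 123
order  by time_waited desc;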

Related

Why is real time much higher than "user" and "system" CPU time combined?

We have a batch process that executes every day. This week, a job that usually does not take more than 18 minutes of execution time (real time, as you can see) is now taking more than 45 minutes to finish.
The FULLSTIMER option is already active, but we don't know why only the real time increased.
In old documentation there are FULLSTIMER stats that could help identify the problem, but they do not appear in the batch log. (The stats are the ones listed below: Page Faults, Context Switches, Block Operations and so on, as you can see.)
It might be an I/O issue. Does anyone know how we can identify if it is really an I/O problem or if it could be some other issue (network, for example)?
To be more specific, this is one of the queries whose run time has increased dramatically. As you can see, it is reading from a database (SQL Server, VAULT schema) and from WORK, and writing to the WORK directory.
The number of observations is almost the same:
We asked the customer about any change in network traffic, and they said it is still the same.
Thanks in advance.
For a process to complete, much more needs to be done than the actual calculations on the CPU.
Your data has to be read and your results have to be written.
You might have to wait for other processes to finish first, and if your process includes multiple steps, writing to and reading from disk each time, you will have to wait for the CPU each time too.
In our situation, if real time is much larger than CPU time, we usually see a lot of traffic to our Network File System (NFS).
As a programmer, you might notice that storing intermediate results in WORK is more efficient than on remote libraries.
You might save a lot of time by creating intermediate results as views instead of tables, IF you only use them once. That is not only possible in SQL, but also in data steps like this:
data MY_RESULT / view=MY_RESULT;
set MY_DATA;
where transaction_date between '1jan2022'd and '30jun2022'd;
run;

How to test tuned Oracle SQL and how to clear the system/hardware buffer?

I want to know the right way to test SQL statements before and after tuning.
But once I have executed the original SQL, the tuned SQL returns its results too fast, because everything is already cached.
I found the question below:
How to clear all cached items in Oracle
I flushed the data buffer cache and the shared pool, but it still didn't work.
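For reference, the standard statements for flushing those two caches in 11g look like this (run as a suitably privileged user):
-- discard cached data blocks and cached statements/plans
alter system flush buffer_cache;
alter system flush shared_pool;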
I guess this answer from that question is related to what I want to know more about:
Keep in mind that the operating system and hardware also do caching which can skew your results.
The Oracle version is 11g and the server is HP-UX 11.31.
If the server were Linux, I could have tried clearing the buffer using '/proc/sys/vm/drop_caches' (though I'm not sure it would work).
I've been searching for a solution to this problem for quite a long time. Has anyone run into this kind of problem?
thanks
If your query is such that the results are being cached in the file system, which your description would suggest, then the query is probably not a "heavy-hitter" overall. But if you were testing in isolation, with not much activity on the database, performance could still suffer when the SQL is run in a production environment.
There are several things you can do to determine which version of two queries is better. In fact, entire books have been written on just this topic. But to summarize:
Before you begin, ensure statistics on the tables and indexes are up to date.
See how often the SQL will be executed in the grand scheme of things. If it runs once or twice a day, and takes 2 seconds to run, don't bother trying to tune.
Do an explain plan on both and look at the estimated costs and number of steps.
Turn on tracing for both optimizer steps and execution statistics, and compare.
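A minimal sketch of the explain plan and tracing steps above; dbms_xplan and autotrace are standard, while the table and predicate are made up for illustration:
-- estimated plan for one candidate version of the statement
explain plan for
  select * from orders where customer_id = 42;   -- hypothetical query
select * from table(dbms_xplan.display);
-- actual execution statistics for the same statement (SQL*Plus)
set autotrace traceonly statistics
select * from orders where customer_id = 42;
set autotrace off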

How do I correctly performance test SELECT queries with Oracle?

I would like to test two queries to find out their performance, as opposed to just looking at the execution plan. I have seen Tom Kyte do this all the time on his website as a way to gather evidence for his theories.
I believe there are many pitfalls in performance testing. For example, when I run a query in SQL Developer for the first time, that query might take a fair amount of time. Running that exact same query again returns instantaneously. There must be some sort of caching on the server or client going on, and I understand this is important; however, I am only interested in non-cached performance.
What are the guidelines for performance testing? And how do I write a performance test which repeats the query? Do I just write an anonymous block and loop? How do I get timing information, averages, medians, standard deviations?
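For what it is worth, a minimal version of the "anonymous block and loop" idea could look like this; dbms_utility.get_time (hundredths of a second) and dbms_output are standard, while the query itself is a placeholder:
set serveroutput on
declare
  l_start   pls_integer;
  l_elapsed pls_integer;
  l_dummy   pls_integer;
begin
  for i in 1 .. 10 loop
    l_start := dbms_utility.get_time;              -- hundredths of a second
    select count(*) into l_dummy
    from   orders                                  -- hypothetical table
    where  customer_id = 42;
    l_elapsed := dbms_utility.get_time - l_start;
    dbms_output.put_line('run ' || i || ': ' || l_elapsed || ' cs');
  end loop;
end;
/
Averages, medians and standard deviations can then be computed from the printed run times.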
Oracle (and other databases) cache queries, which is where you see the behavior you describe. A "hard" parse means there's no query plan for the query, which leaves Oracle to figure out the query plan based on indexes and statistics. A "soft" parse is what happens when you run the identical query afterwards, and receive an instantaneous result, because the query plan exists & Oracle re-uses it. See the Ask Tom question about it for more details.
Be aware of the EXPLAIN output:
With the cost-based optimizer, execution plans can and do change as the underlying costs change. EXPLAIN PLAN output shows how Oracle runs the SQL statement when the statement was explained. This can differ from the plan during actual execution for a SQL statement, because of differences in the execution environment and explain plan environment.
Focusing on the non-cached performance gives a worst-case scenario, but given that caching will occur - non-cached benchmarks aren't realistic in everyday use.
To build off OMG Ponies' answer, tuning based on timing is something that's possible, but not realistic. You'd have to start either with a fully-cached buffer cache in every case, or a fully-empty buffer cache, and neither of those is going to be representative of reality - especially if there's no competing load.
When I'm tuning, it's generally against a live system with activity, and I focus on tuning logical I/Os, either through using the extended SQL trace (dbms_monitor.session_trace_enable / dbms_monitor.session_trace_disable) and the tkprof utility, or using SQL*Plus and set autotrace traceonly - which does all the work of the query, but throws the output away, because I'm usually not interested in watching a jillion rows scroll by.
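For completeness, a rough sketch of that extended trace route; the SID, serial# and trace file name below are placeholders:
-- from a privileged session: trace the target session, including waits and binds
exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 4567, waits => true, binds => true);
-- ... run the workload in the traced session ...
exec dbms_monitor.session_trace_disable(session_id => 123, serial_num => 4567);
-- then, on the database server:
--   tkprof orcl_ora_12345.trc tuned.prf sys=no sort=exeela,fchela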
The exact mechanism usually involves bound SQL, using something like the following:
variable my_bind1 number
variable my_bind2 varchar2(30)
begin
  :my_bind1 := 42;
  :my_bind2 := 'some meaningful string';
end;
/
set timing on
set autotrace traceonly
[godawful query with binds]
set autotrace off
Within the results, I'm looking for the plan I'd expect, a comparative value for sorts - assuming any exist - and most importantly, the number of consistent I/Os. That's how many blocks Oracle had to read in consistent mode to satisfy the query. I can't find the original source of the quote, but I think it's Cary Millsap of Method R.
"Tune your logical I/Os, and your physical I/Os will follow."
In performance tuning, if the only piece of data you look at is wall-clock time, you will only be getting a small part of the whole picture. You need to at least look at the execution plan, as well as IO stats, in order to work out how best to tune the query.
Also, you need to eliminate other causes of performance issues - e.g. if there is a general performance issue across many queries, it might not be the fault of just one of them - it might be an architecture problem, or significant concurrent activity on the database, or even an underlying hardware issue.
I've had similar issues to what you describe before; e.g. a certain type of query which should be very fast was taking 30 seconds to run the first time, then would settle down to a second or two. As soon as I looked at the execution plan, however, it was obvious that it was using a full table scan, because it couldn't use the unique index that had been created. The first time the query ran, most of the data was loaded into the cache (in fact, there were two levels of cache involved - the database buffer cache, as well as a storage-level cache over the disks), so subsequent full table scans were extremely fast.
What is "correctly"?
Since 11g there are a few extra complications to take into account. The optimizer's bind peeking has become a lot smarter (adaptive cursor sharing), and SQL plan stability (SQL plan baselines) has a BIG influence. These two features make the database self-tuning, but they can also have unexpected effects during performance tests, for example because not all variations of the plans are known and accepted at the beginning of the tests.
This might be why a second test run, the day after the first one, suddenly runs much quicker without any apparent changes.
Since 11g, performance testing is less important compared to writing logically correct code. For example, a Cartesian product followed by filtering out one distinct value can be functionally correct, but in most cases it is wrong code because it fetches more data than is logically needed.
If the queries fetch the data that is really needed and sit in the correct control structure, let the database processes tune the code during the maintenance windows. In many cases the differences between the test environment and production are such that a comparison cannot safely be made.
Don't get me wrong, testing is important, but mostly for the logic; compared to performance testing before 11g, there are extra steps to be taken.
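If plan baselines are involved, a quick check of what has been captured and accepted can explain a sudden day-two speedup; dba_sql_plan_baselines is the standard view:
-- which plans exist for your statements, and which ones the optimizer may actually use
select sql_handle, plan_name, enabled, accepted, fixed, created
from   dba_sql_plan_baselines
order  by created desc;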
For nice reading see Oracle® Database 2 Day + Performance Tuning Guide 11g Release 2 (11.2)

Performance optimization for SQL Server: decrease stored procedure execution time or offload the server?

We have a web service which provides search over hotels. There is a problem with performance: a single request to the service takes around 5000 ms. Almost all of the time is spent in the database executing stored procedures. During the request, our server (MSSQL 2008) consumes ~90% of the processor time. When 2 requests are made in parallel, the average time grows to around 7000 ms. As the number of requests increases, the average response time increases as well. We have 20-30 requests per minute.
Which kind of optimization is best in this case, keeping in mind that the goal is to provide a stable response time for the service:
1) Try to decrease the stored procedures' execution time
2) Try to find a way to offload the server
It is interesting to hear from people who deal with booking sites. Thanks!
This has nothing to do with booking sites: you have poorly written stored procedures, possibly no indexes, and your queries are probably not SARGable, so SQL Server has to scan the table every time. Are your statistics up to date?
Run some procs from SSMS and look at the execution plans.
It is also a good idea to run Profiler. How are your page life expectancy and buffer cache hit ratio? Take a look at "Use sys.dm_os_performance_counters to get your Buffer cache hit ratio and Page life expectancy counters" to get those numbers.
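A short sketch of reading those two counters from the DMV named above (the view and counter names are standard SQL Server):
-- Page life expectancy and the raw buffer cache hit ratio counters
select object_name, counter_name, cntr_value
from   sys.dm_os_performance_counters
where  counter_name in ('Page life expectancy',
                        'Buffer cache hit ratio',
                        'Buffer cache hit ratio base')
  and  object_name like '%Buffer Manager%';
-- hit ratio (%) = 100.0 * [Buffer cache hit ratio] / [Buffer cache hit ratio base]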
I think the first thing you have to do is to quantify what's going on on the server.
Use SQL Server Profiler to get an accurate picture of the activity on the server.
Identify which procedures / SQL statements take up the most resources
Identify high priority SQL operations consuming a lot of resources / taking time
Prioritize
Fix
Now, when I say "Fix", I mean that you should execute the procedure / statement manually in SSMS - Make sure you have "Show Execution Plan" turned ON.
Review the execution plan for parts that consume the most resources and then figure out how to correct that. You may need to create a new index, rewrite the SQL to be more efficient by using hints, etc.
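While doing that manual run in SSMS, it can also help to switch on I/O and timing statistics; a sketch, with the procedure name and parameter purely hypothetical:
set statistics io on;
set statistics time on;
exec dbo.SearchHotels @CityId = 42;   -- hypothetical procedure and parameter
set statistics io off;
set statistics time off;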
You provide no detail to help solve your problem. In general, to increase the performance of a stored procedure, I look at the following:
1) Replace any cursors or loops with set-based operations.
2) Make sure all queries use an index and have an efficient execution plan (check this with SET SHOWPLAN_ALL ON; see the sketch below).
3) Make sure there is no locking or blocking slowing it down (see the query given here).
Without more info on the specifics, it is hard to make any suggestions.
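For point 2 in the list above, a minimal sketch of SHOWPLAN_ALL (it must be the only statement in its batch, and it returns the estimated plan as rows instead of executing the query; the query itself is hypothetical):
set showplan_all on;
go
select * from dbo.Hotels where CityId = 42;   -- the rows returned describe the plan
go
set showplan_all off;
go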
Almost all of the time is spent in the database executing stored procedures.
How many procedures is the app calling? What do they do? Are transactions involved? Are the procedures recompiling on each call? Do you have any indexes? Are statistics up to date? Etc., etc. You need to give a lot more info, or any help here is a complete guess.

How can Oracle User Profiles be put to practical use?

Oracle 10g has Profiles that allow various resources to be limited. Here are some references for clarity - orafaq.com, Oracle Documentation. I am particularly interested in limiting CPU_PER_CALL, LOGICAL_READS_PER_CALL, and COMPOSITE_LIMIT with the goal of preventing a poorly formed statement from destroying performance for every other session.
My problem is that I don't know what values to use for these parameters that will allow your typical long running resource intensive operations while preventing the truly bad ones. I realize that the values will differ based on the hardware, tolerance levels, and queries involved, which is why I am more interested in a method to follow to determine what values are best.
There are a variety of approaches, depending on the situation. The simplest possible approach that has any hope of working is to ask how long the longest-running realistic operation would run (that's obviously system-dependent, and depends on whether this is a system you're building or something existing) and to back into a CPU_PER_CALL based on that time limit and the degree of parallelism. Assuming single-threaded operation, if you can reasonably say that if a query hasn't returned in 30 minutes you want to kill it, you can set CPU_PER_CALL to allocate 30 minutes' worth of CPU (obviously most queries aren't going to use 100% constantly, so that 30-minute limit gives you some amount of breathing room).
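As a worked example of backing into a number: CPU_PER_CALL is expressed in hundredths of a second of CPU, so a 30-minute budget is 30 * 60 * 100 = 180,000. The profile and user names below are hypothetical, and RESOURCE_LIMIT must be TRUE before the limits are enforced:
-- 30 minutes of CPU per call, plus an illustrative logical-read ceiling
create profile heavy_query_cap limit
  cpu_per_call           180000
  logical_reads_per_call 5000000;
alter user reporting_app profile heavy_query_cap;   -- hypothetical user
-- profile resource limits are only enforced when this is TRUE
alter system set resource_limit = true;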
If this is an existing system, you (or your DBA) can go through AWR/Statspack reports for a reasonable number of days (some systems will need to make sure to look at reports from month-, quarter-, or year-end, where additional processing may be done) and find the real statements that use the most CPU and I/O. You can then set your profile limits appropriately (i.e. the maximum CPU recorded for a statement in the past month + 30% breathing room).
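A sketch of that mining exercise against AWR (DBA_HIST_SQLSTAT is the standard view; note that querying AWR assumes a Diagnostics Pack licence):
-- top 20 statements by CPU across the retained snapshots
select *
from (
  select   sql_id,
           sum(cpu_time_delta)    as cpu_time,
           sum(buffer_gets_delta) as logical_reads,
           sum(executions_delta)  as executions
  from     dba_hist_sqlstat
  group by sql_id
  order by sum(cpu_time_delta) desc
)
where rownum <= 20;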
Of course, for any limit you pick, someone has to monitor the system to make sure that the limits keep pace. If queries get more and more expensive over time because of increases in data volume, for example, that max + 30% limit might be insufficient in 6 months. You don't want to find that out when the nightly processing aborts, so someone has to keep on top of that.
If you are using the enterprise edition, you may be better served looking at Resource Manager rather than profiles. While profiles allow you to kill runaway sessions, Resource Manager allows you to change session priority based on a variety of factors. Rather than killing a query that has used more than 30 minutes of CPU, it may be better to make it lower priority so that it doesn't interfere with other sessions without killing it, in case it is just running long.
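A rough sketch of that Resource Manager approach: instead of killing a call after roughly 30 minutes of CPU, switch the session to a low-priority group. The plan, group names and percentages are hypothetical; the DBMS_RESOURCE_MANAGER calls themselves are standard:
begin
  dbms_resource_manager.create_pending_area;
  dbms_resource_manager.create_consumer_group('LONG_RUNNERS', 'demoted heavy sessions');
  dbms_resource_manager.create_plan('DAYTIME_PLAN', 'demote long calls instead of killing them');
  dbms_resource_manager.create_plan_directive(
    plan             => 'DAYTIME_PLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'everything else',
    switch_group     => 'LONG_RUNNERS',
    switch_time      => 1800);                     -- roughly 30 minutes before the switch
  dbms_resource_manager.create_plan_directive(
    plan             => 'DAYTIME_PLAN',
    group_or_subplan => 'LONG_RUNNERS',
    comment          => 'low priority for demoted sessions',
    mgmt_p2          => 10);                       -- small CPU share at level 2
  dbms_resource_manager.validate_pending_area;
  dbms_resource_manager.submit_pending_area;
end;
/
-- the plan still has to be activated, e.g. alter system set resource_manager_plan = 'DAYTIME_PLAN';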
