In the Client Statistics window in SQL Server Management Studio, I get the total
execution time.
However, this time is often much less than the time the query actually took.
So what is the additional time spent on?
For example, here I got ~5.6 seconds of total execution time, but my query took 13 seconds to finish.
The total execution time is the time until the result is available for display. But then, depending on the result set size and the way you display the data, the time until everything has been rendered is usually much higher.
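A quick way to see this yourself is to measure the server-side time separately from the client-side rendering. A minimal sketch (dbo.MyLargeTable is just a placeholder for your own query):

```sql
-- Server-side CPU and elapsed time are printed to the Messages tab,
-- independent of how long SSMS spends rendering the grid.
SET STATISTICS TIME ON;
SELECT * FROM dbo.MyLargeTable;   -- placeholder for your actual query
SET STATISTICS TIME OFF;
```

You can also enable "Discard results after execution" in the SSMS query options to see how long the query takes when no time at all is spent rendering the result set.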
I am checking the performance of an API that runs on two systems. The API has been migrated to a new system, so I am comparing its performance on the new system against the old one.
Statistics as shown below:
New System:
Threads: 25
Ramp-up: ~25
Avg: 8 sec
Median: 7.8 sec
95th percentile: 8.8 sec
Throughput: 0.39
Old System:
Threads: 25
Ramp-up: ~25
Avg: 10 sec
Median: 10 sec
95th percentile: 10 sec
Throughput: 0.74
Here we can observe that the new system took less time for 25 threads than the old system, yet the old system shows the higher throughput, even though it took more time per request.
I am confused about the throughput: which system is more efficient?
The system that takes less time per request should have the higher throughput, but here the one with the lower response times has the lower throughput, which I find hard to reconcile. Can anyone help me here?
As per JMeter Glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time)
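For example (the request counts here are purely illustrative, just to show how the formula behaves): if each test issued 100 requests, a total test duration of 256 seconds gives 100 / 256 ≈ 0.39 requests/second, while a duration of 135 seconds gives 100 / 135 ≈ 0.74 requests/second, even though individual response times may have been longer in the second test.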
So double check the total test duration for both test executions; my expectation is that the "new system" test took longer.
As for the reason, I cannot say anything meaningful without seeing the full .jtl results files for both executions. I can only guess that there was one very long request in the "new system" test, or that you have a Timer with random think time somewhere in your test plan.
I'm getting a WORKLOAD REPOSITORY COMPARE PERIOD REPORT that says:
Load Profile
1st per sec
DB time: 1.3
I'm confused: DB time should be in time units, shouldn't it?
Below are the context and history of what I researched about AWR reports and how I eventually came to the answer I posted.
I have an AWR report that says:
Elapsed Time (min) DB time (min)
60 80
Here, for example, https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:229943800346471782 explains how DB time can exceed elapsed time. And DB time is time, measured in time units (min = minutes?), so far so good.
Then Load Profile says:
1st per sec
DB time: 1.3
If DB time is 80 minutes within 60 minutes of elapsed time, then per second it should mathematically be 80/60/60. Where did that division by 60 to get a per-second value go?
EDIT: my guess, now that the question has been posted, is that this metric is in seconds, although the units are not mentioned in the AWR report and I could not find anything on the web searching for "awr db time in sec". Please provide a link where this is confirmed for sure (if it is so).
EDIT 2: the WORKLOAD REPOSITORY report says "DB Time(s): per sec" in its Load Profile section, whereas the WORKLOAD REPOSITORY COMPARE PERIOD REPORT just says "DB time per sec". So with about 99% assurance I can guess the compare report uses the same units, but it is still not a 100% certain fact. I actually get the reports via an automated system, so I cannot be sure they were not mangled along the way...
P.S. By the way, I tried to format the output nicely and wanted to insert tabs, but could not find out how; e.g. here, Tab space in Markdown, it says that something similar is not possible in Markdown. Please add a comment if it can be done on Stack Overflow.
My guess is that, because of the amount of information that has to fit on one line of the compare-period AWR, the developers of the report decided to drop the "(s):" that is present in the same place in the ordinary (non-compare) AWR.
I've looked at a WORKLOAD REPOSITORY report: it says "DB Time(s): per sec 1.4" in the Load Profile section, whereas the WORKLOAD REPOSITORY COMPARE PERIOD REPORT just says "DB time per sec" and states "2nd 1.4". So with about 99% assurance I can guess the compare report uses the same units, i.e. seconds for the per-second metric. Not 100% sure, but what are the things we are 100% sure of anyway?
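As a sanity check with the numbers from the question, assuming the Load Profile figure really is in seconds: 80 minutes of DB time is 4,800 seconds, and spread over a 3,600-second (60-minute) interval that is 4800 / 3600 ≈ 1.3 seconds of DB time per second of elapsed time, which matches the 1.3 shown in the Load Profile.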
There is a Time Model Statistics section in the AWR report. Is the parse time included in the DB CPU time, or is it separate?
I've found that my database has a large parse time problem, and I would like to estimate the benefit that could be achieved by reducing the parse time.
thanks!
Time Model Statistics
DB Time represents total time in user calls
DB CPU represents CPU time of foreground processes
Total CPU Time represents foreground and background processes
Statistics including the word "background" measure background process time, therefore do not contribute to the DB time statistic
Ordered by % of DB time in descending order, followed by Statistic Name
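To see how parse time relates to DB CPU and DB time outside of the report, a query along these lines can help (a minimal sketch; V$SYS_TIME_MODEL holds cumulative values in microseconds since instance startup, and time model statistics overlap, so they do not simply add up to DB time):

```sql
-- Compare parse-related time with DB CPU and DB time.
-- Values are cumulative since instance startup; compare deltas between
-- two runs of this query for a meaningful picture.
SELECT stat_name,
       ROUND(value / 1000000, 1) AS seconds
FROM   v$sys_time_model
WHERE  stat_name IN ('DB time',
                     'DB CPU',
                     'parse time elapsed',
                     'hard parse elapsed time')
ORDER  BY value DESC;
```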
We have identified that DB CPU is taking most of the time.
Look at the "SQL ordered by CPU Time" section; it shows which queries are taking time, along with their execution time. Select the query, run it through the SQL Tuning Advisor, and review its recommendations.
I hope this answers your question.
The definition of CONNECT_TIME for a profile, as per the Oracle documentation:
CONNECT_TIME
Specify the total elapsed time limit for a session,
expressed in minutes.
I guess what they mean by connect time is the overall execution time of the procedure.
Is there a way to limit the connect time for the execution of the queries in the procedure? For example, if the procedure executes 3 queries and any one of them exceeds the specified time limit, the session should be aborted or killed.
I guess what they mean by connect time is the overall execution time of the procedure.
No, it is the maximum lifetime of the session, whether it is executing anything or not.
Is there a way to limit the ... execution of the query in the procedure?
No, for two reasons.
First, all the profile limits are per call, not per statement within a call. In your example, the three statements would all share the same limits; their total combined usage (of CPU or whatever) would not be able to exceed the limit.
Second, none of the profile options let you specify time per call. You can specify I/O per call and CPU per call -- usually that's what people care about. If a query is not consuming any CPU or I/O resources -- for example, if it is blocked waiting for a lock to clear -- what do you care how long it takes?
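If the goal is to stop runaway statements, the per-call resource limits are usually what you want instead of CONNECT_TIME. A minimal sketch (the profile and user names are made up, and the limit values are arbitrary):

```sql
-- There is no per-statement time limit, but CPU and logical I/O can be
-- capped per call, which catches most runaway queries.
CREATE PROFILE app_limits LIMIT
  CONNECT_TIME           60         -- total session lifetime, in minutes
  CPU_PER_CALL           300000     -- hundredths of a second of CPU per call
  LOGICAL_READS_PER_CALL 1000000;   -- database blocks read per call

ALTER USER app_user PROFILE app_limits;

-- Note: the RESOURCE_LIMIT initialization parameter must be TRUE for the
-- kernel resource limits to be enforced.
```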
I was trying to analyze an AWR report generated for a particular process with a duration of one hour. I am trying to find out which query takes the most time while the process is running.
Going through the report, I can see SQL ordered by Gets, SQL ordered by CPU Time, SQL ordered by Executions, SQL ordered by Parse Calls, SQL ordered by Sharable Memory, SQL ordered by Elapsed Time, etc.
I can see the SQL text in the SQL ordered by Elapsed Time section.
My question: is this the right way to identify the expensive query? Please advise in this regard.
Elapsed Time (s) SQL Text
19,477.05 select abc.....
7,644.04 select def...
SQL ordered by Elapsed Time lists the SQL statements that took significant execution time during the snapshot period. We have to look at Executions, Elapsed Time per Exec (s), etc., along with the elapsed time, to analyze it.
For example, a query with few Executions and a high Elapsed Time per Exec (s) could be a candidate for troubleshooting or optimization.
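Outside the AWR report, a rough equivalent of "SQL ordered by Elapsed Time" can be pulled from V$SQL (a sketch only; the figures are cumulative since each cursor was loaded, so they are not limited to your one-hour window):

```sql
-- Top statements by total elapsed time, with elapsed time per execution.
SELECT *
FROM  (SELECT sql_id,
              executions,
              ROUND(elapsed_time / 1000000, 1)                         AS elapsed_s,
              ROUND(elapsed_time / NULLIF(executions, 0) / 1000000, 3) AS elapsed_per_exec_s,
              SUBSTR(sql_text, 1, 60)                                  AS sql_text
       FROM   v$sql
       ORDER  BY elapsed_time DESC)
WHERE ROWNUM <= 10;
```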
The best reference I found so far: http://www.dbas-oracle.com/2013/05/10-steps-to-analyze-awr-report-in-oracle.html
AWR is used to see overall database health, so I think it is not the right tool to trace a single process.
You should use other tools like sql_trace (with tkprof) or dbms_profiler; they concentrate on your own process.
If you are using sql_trace, you need access to the server (or have to ask the DBA team) to analyse the trace file.
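A minimal sketch of tracing a single session with DBMS_MONITOR and formatting the result with tkprof (the SID, serial# and file names below are hypothetical):

```sql
-- Enable SQL trace (with wait events) for the target session.
BEGIN
  DBMS_MONITOR.session_trace_enable(session_id => 123,
                                    serial_num => 4567,
                                    waits      => TRUE,
                                    binds      => FALSE);
END;
/

-- ... let the process run, then switch tracing off again.
BEGIN
  DBMS_MONITOR.session_trace_disable(session_id => 123, serial_num => 4567);
END;
/

-- On the database server, format the raw trace file, e.g.:
--   tkprof orcl_ora_12345.trc process_trace.txt sort=exeela,fchela
```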
In SQL ordered by Elapsed Time you always need to check the queries that have a low number of executions and a high elapsed time; these are typically the problematic queries. Since the elapsed time is the total time spent on a given query, a high value combined with few executions means that, for some reason, the query is not performing up to expectations.
There are some parameters we need to check in order to find the issue.
A buffer get is less expensive than a physical read, because for a physical read the database has to work harder (and do more) to get the data: essentially the time it would have taken if the block had been available in the buffer cache, plus the time actually taken to fetch it from the physical block.
If you suspect that excessive parsing is hurting your database’s performance:
• Check the “Time Model Statistics” section (hard parse elapsed time, parse time elapsed, etc.).
• See if there are any signs of library cache contention in the top 5 events.
• See if CPU is an issue.
Establishing a new database connection is also expensive (and even more expensive when auditing or triggers are involved).
“Logon storms” are known to create very serious performance problems.
If you suspect that a high number of logons is degrading your performance, check “connection management elapsed time” in the “Time Model Statistics”.
A low soft parse percentage indicates bind variable and versioning issues. A soft parse percentage of 99.25% means that about 0.75% (100 minus the soft parse percentage) of parses are hard parses. A low hard parse rate is good for us.
If Latch Hit % is <99%, you may have a latch problem. Tune latches to reduce cache contention.
Library Hit % is great when it is near 100%. If it were under 95%, we would investigate the size of the shared pool.
If this ratio is low, then we may need to:
• Increase the SHARED_POOL_SIZE init parameter.
• Set CURSOR_SHARING to FORCE.
• Increase SHARED_POOL_RESERVED_SIZE, which may be too small.
• Address inefficient sharing of SQL, PL/SQL or Java code.
• Address insufficient use of bind variables.
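For reference, the library cache hit ratio can also be checked directly, outside AWR, with a query like this (a sketch; V$LIBRARYCACHE figures are cumulative since instance startup):

```sql
-- Near 100% is good; values under ~95% suggest looking at the shared pool
-- size and bind variable usage.
SELECT ROUND(SUM(pinhits) / SUM(pins) * 100, 2) AS library_hit_pct
FROM   v$librarycache;
```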