Oracle AWR reports: is parse time included in DB CPU?

There is a Time Model Statistics section in the AWR report. Is parse time included in DB CPU time, or is it reported separately?
I've found that my database has a significant parse time problem, and I would like to estimate the benefit that could be achieved by reducing the parse time.
Thanks!

Time Model Statistics
DB Time represents total time in user calls
DB CPU represents CPU time of foreground processes
Total CPU Time represents foreground and background processes
Statistics including the word "background" measure background process time, and therefore do not contribute to the DB time statistic
Ordered by % of DB time in descending order, followed by Statistic Name
To answer the question directly: the Time Model statistics overlap rather than add up. "Parse time elapsed" is a component of DB time, and any CPU consumed while parsing is counted inside DB CPU, not reported separately from it.
We have identified that DB CPU is taking the most time. Look at the "SQL ordered by CPU Time" section of the report; it shows which queries consume the most CPU, along with their execution times. Pick a query, run it through the SQL Tuning Advisor, and review its recommendations.
I hope this answers your question.
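To size the potential benefit, you can compare the parse-related Time Model statistics against total DB time. A minimal sketch, using hypothetical values in place of the ones in your report's Time Model Statistics section (replace them with your own numbers):

```python
# Rough estimate of how much DB time could be reclaimed by reducing parsing.
# The values below are hypothetical; copy the real ones from the
# "Time Model Statistics" section of your AWR report (all in seconds).
db_time            = 12_500.0  # "DB time"
db_cpu             = 6_800.0   # "DB CPU"
parse_time_elapsed = 2_400.0   # "parse time elapsed"
hard_parse_elapsed = 1_900.0   # "hard parse elapsed time"

parse_pct = 100.0 * parse_time_elapsed / db_time
hard_pct  = 100.0 * hard_parse_elapsed / db_time
print(f"parse time is {parse_pct:.1f}% of DB time")
print(f"hard parsing alone is {hard_pct:.1f}% of DB time")

# If bind variables / cursor sharing eliminated, say, 90% of hard parsing,
# the ceiling on DB time saved would be roughly:
savings = 0.90 * hard_parse_elapsed
print(f"estimated ceiling on savings: {savings:.0f}s "
      f"({100.0 * savings / db_time:.1f}% of DB time)")
```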

Related

Round Robin Memory Scheduling with CPU & Memory Visualisations

For a Round Robin implementation, I have 5 processes with their arrival & duration times and the memory needed to be processed, as shown below.
[Image: the 5 processes accessing the CPU]
The total memory of the system is 512K and the time quantum used is 3. Based on Round Robin theory, I created the following Gantt chart.
[Image: Gantt chart creation]
I have to show, in the following table, the state of the memory and the CPU up to time t=10: the processes in the CPU queue (which I did on the chart), and which parts of memory are occupied by processes and which are free, using i) a system with variable-size partitions without compaction and ii) one with compaction.
[Image: table of results to be created]
I suppose that I have to advance the memory usage of each process according to the time quantum of 3. For example, for process P1 the duration equals the quantum, so the whole 85K of it will be used. If that assumption is correct, is this how the system runs without compaction? And how do I proceed with the next steps when compaction is used?
Thank you in advance
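Since the actual process table isn't reproduced here, the following is a minimal Round Robin simulation sketch with quantum 3. Only P1's duration (3) and memory (85K) come from your description; the other four processes are made up, so substitute your own values. It prints which process holds the CPU at each tick up to t=10:

```python
from collections import deque

QUANTUM = 3
T_MAX = 10  # simulate up to t = 10

# (name, arrival, duration, memory_k) -- hypothetical except P1;
# replace with the five processes from your assignment.
procs = [("P1", 0, 3, 85), ("P2", 1, 5, 120), ("P3", 2, 2, 60),
         ("P4", 3, 4, 90), ("P5", 5, 3, 70)]

remaining = {name: dur for name, _, dur, _ in procs}
arrivals = sorted(procs, key=lambda p: p[1])
ready = deque()
timeline = []   # which process runs at each time unit
t, i = 0, 0

while t < T_MAX:
    # admit any processes that have arrived by time t
    while i < len(arrivals) and arrivals[i][1] <= t:
        ready.append(arrivals[i][0])
        i += 1
    if not ready:
        timeline.append("idle")
        t += 1
        continue
    name = ready.popleft()
    # run for one quantum (or until the process finishes / t hits T_MAX)
    slice_len = min(QUANTUM, remaining[name], T_MAX - t)
    for _ in range(slice_len):
        timeline.append(name)
        t += 1
        # processes arriving during the slice join the queue
        while i < len(arrivals) and arrivals[i][1] <= t:
            ready.append(arrivals[i][0])
            i += 1
    remaining[name] -= slice_len
    if remaining[name] > 0:
        ready.append(name)  # preempted: back to the end of the queue

print(" | ".join(f"t={k}:{p}" for k, p in enumerate(timeline)))
```

As for compaction: without it, the hole freed by a terminating process stays where it is and a new arrival must fit into some existing hole; with compaction, the remaining allocations are slid toward the low end of memory after each termination, so all free space forms one contiguous block.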

Algorithms for establishing baselines from time series data

In my app I collect a lot of metrics: hardware/native system metrics (such as CPU load, available memory, swap memory, network IO in terms of packets and bytes sent/received, etc.), JVM metrics (garbage collections, heap size, thread utilization, etc.), as well as app-level metrics (instrumentation that only has meaning to my app, e.g. # of orders per minute, etc.).
Throughout the week, month, year I see trends/patterns in these metrics. For instance when cron jobs all kick off at midnight I see CPU and disk thrashing as reports are being generated, etc.
I'm looking for a way to assess/evaluate metrics as healthy/normal vs unhealthy/abnormal but that takes these patterns into consideration. For instance, if CPU spikes around (+/- 5 minutes) midnight each night, that should be considered "normal" and not set off alerts. But if CPU pins during a "low tide" in the day, say between 11:00 AM and noon, that should definitely cause some red flags to trigger.
I have the ability to store my metrics in a time-series database, if that helps kickstart this analytical process at all, but I don't have the foggiest clue as to what algorithms, methods, and strategies I could leverage to establish these cyclical "baselines" that act as a function of time. Obviously, such a system would need to be pre-seeded or even trained with historical data that was mapped to normal/abnormal values (which is why I'm leaning towards a time-series DB as the underlying store), but this is new territory for me and I don't even know what to begin Googling to get back meaningful/relevant/educated solution candidates in the search results. Any ideas?
You could label each metric sample (CPU load, available memory, swap memory, network IO) with the day and time, marking it as good or bad.
Build a data set for a given time frame with the metric values and their good/bad labels. Train a model on 70% of the data, including the good/bad answers.
Then test the trained model on the other 30% of the data, withholding the answers, to see whether it predicts the expected results (good, bad). You could use a classification algorithm.
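A minimal sketch of that approach, assuming scikit-learn and an entirely synthetic data set; in practice X would come from your time-series store and the good/bad labels from historical incident data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic samples: [hour_of_day, day_of_week, cpu_load, free_mem_pct].
# Real features would come from your time-series database.
n = 2_000
hours = rng.integers(0, 24, n)
days = rng.integers(0, 7, n)
cpu = rng.uniform(0, 100, n)
mem = rng.uniform(0, 100, n)
X = np.column_stack([hours, days, cpu, mem])

# Label rule for the demo: high CPU is "bad" (1) unless it happens
# around the midnight cron window, which is considered normal.
y = ((cpu > 85) & ~((hours == 0) | (hours == 23))).astype(int)

# 70/30 split: train on the labelled 70%, evaluate on the held-out 30%.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")
```

Because the hour of day and day of week are features, the model can learn that a midnight spike is normal while the same spike at 11:30 AM is not, which is exactly the cyclical-baseline behaviour you describe.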

Oracle Profile definition CONNECT_TIME

The definition of CONNECT_TIME for a profile, as per the Oracle documentation:
CONNECT_TIME
Specify the total elapsed time limit for a session,
expressed in minutes.
I guess what they mean by connect time is the overall execution time of the procedure.
Is there a way to limit the connect time for the execution of each query in the procedure? For example, if a procedure executes 3 queries and any query exceeds the specified time limit, the session should be aborted or killed.
I guess what they mean by connect time is the overall execution time of the procedure.
No, it is the maximum lifetime of the session, whether it is executing anything or not.
Is there a way to limit the ... execution of the query in the procedure?
No, for two reasons.
First, all the profile limits are per call, not per statement within a call. In your example, the three statements would all share the same limits: their total combined usage (of CPU or whatever) could not exceed the limit.
Second, none of the profile options let you specify time per call. You can specify I/O per call and CPU per call -- usually that's what people care about. If a query is not consuming any CPU or I/O resources -- for example, if it is blocked waiting for a lock to clear -- what do you care how long it takes?
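For reference, the per-call limits the answer mentions are set on the profile itself. A minimal sketch using the python-oracledb driver; the profile and user names are made up, and the kernel limits are only enforced when the RESOURCE_LIMIT initialization parameter is TRUE:

```python
import oracledb  # pip install oracledb

# Connection details are placeholders; use an account with
# CREATE PROFILE / ALTER USER privileges.
conn = oracledb.connect(user="admin", password="secret",
                        dsn="dbhost/orclpdb1")
cur = conn.cursor()

# CONNECT_TIME caps the session's total lifetime (minutes).
# CPU_PER_CALL / LOGICAL_READS_PER_CALL cap resources per call --
# per the answer above, there is no elapsed-time-per-call limit.
cur.execute("""
    CREATE PROFILE limited_prof LIMIT
        CONNECT_TIME           60       -- minutes of session lifetime
        CPU_PER_CALL           3000     -- hundredths of a second of CPU
        LOGICAL_READS_PER_CALL 100000   -- blocks read per call
""")
cur.execute("ALTER USER app_user PROFILE limited_prof")
```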

SSMS Client Statistics: Total Execution time vs. Real Execution Time?

In the Client Statistics window in SQL Server Management Studio, I get the total execution time.
However, this time is often much less than the time the query actually took.
So what is the additional time spent on?
For example, here I got ~5.6 seconds of total execution time, but my query took 13 seconds to finish.
The total execution time is the time until the result is available for display. But then, depending on the result set size and the way you display the data, the time until everything has been rendered is usually much higher.
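One way to see the difference yourself is to time statement execution separately from fetching the full result set. A sketch using pyodbc against SQL Server; the connection string, table, and query are placeholders:

```python
import time
import pyodbc  # pip install pyodbc

# Placeholder connection string and query; adjust for your server.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=myserver;"
    "DATABASE=mydb;Trusted_Connection=yes;")
cur = conn.cursor()

t0 = time.perf_counter()
cur.execute("SELECT * FROM dbo.big_table")   # time until result available
t1 = time.perf_counter()
rows = cur.fetchall()                        # time to transfer everything
t2 = time.perf_counter()

print(f"execute (result available): {t1 - t0:.2f}s")
print(f"fetch all {len(rows)} rows:  {t2 - t1:.2f}s")
```

The first number corresponds roughly to what Client Statistics reports; the gap you see on the wall clock is dominated by transferring the rows and rendering them in the results grid.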

SQL ordered by Elapsed Time in AWR report

I was trying to analyze the AWR report generated for a particular process with a duration of one hour. I am trying to find out which query takes the most time while the process is running.
Going through the report, I can see SQL ordered by Gets, SQL ordered by CPU Time, SQL ordered by Executions, SQL ordered by Parse Calls, SQL ordered by Sharable Memory, SQL ordered by Elapsed Time, etc.
I can see the SQL text in the SQL ordered by Elapsed Time table.
My question: is this the right way to identify the expensive query? Please advise.
Elapsed Time (s) SQL Text
19,477.05 select abc.....
7,644.04 select def...
SQL Ordered by Elapsed Time lists the SQL statements that took significant execution time during the reporting period. You have to look at Executions and Elapsed time per Exec (s), etc., along with the total Elapsed time, to analyze it.
For example, a query with few Executions and a high Elapsed time per Exec (s) could be a candidate for troubleshooting or optimization.
The best reference I found so far: http://www.dbas-oracle.com/2013/05/10-steps-to-analyze-awr-report-in-oracle.html
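A small sketch of that triage step, computing elapsed time per execution from hypothetical AWR rows (the sql_ids and figures are made up) and ranking by it:

```python
# Hypothetical rows copied from "SQL ordered by Elapsed Time":
# (sql_id, total_elapsed_s, executions)
rows = [
    ("abc123", 19_477.05, 12),
    ("def456", 7_644.04, 98_000),
    ("ghi789", 5_120.00, 4),
]

# Few executions with a high elapsed-per-exec is the classic tuning candidate.
for sql_id, elapsed, execs in sorted(
        rows, key=lambda r: r[1] / max(r[2], 1), reverse=True):
    print(f"{sql_id}: {elapsed / max(execs, 1):,.1f}s per exec "
          f"({execs} execs, {elapsed:,.0f}s total)")
```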
AWR is used to see overall database health, so I think it is not the right tool to trace a single process.
You should use tools like sql_trace (with tkprof) or dbms_profiler instead; they concentrate on your own process.
If you use sql_trace, you need access to the server (or to ask the DBA team) to analyze the trace file.
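For example, a session-level trace can be switched on with DBMS_MONITOR and the resulting file processed with tkprof. A sketch with python-oracledb; the SID/serial# values are placeholders you would first look up in V$SESSION:

```python
import oracledb  # pip install oracledb

conn = oracledb.connect(user="admin", password="secret",
                        dsn="dbhost/orclpdb1")
cur = conn.cursor()

# Placeholder session identifiers; look them up in V$SESSION first.
sid, serial = 123, 4567

# Trace the target session, including wait events and bind values.
cur.callproc("dbms_monitor.session_trace_enable",
             [sid, serial, True, True])

# ... let the process run, then stop tracing:
cur.callproc("dbms_monitor.session_trace_disable", [sid, serial])

# The trace file lands in the database's diag trace directory; format it
# on the server with, e.g.:  tkprof <tracefile>.trc report.txt sort=exeela
```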
In SQL Ordered by Elapsed Time, always check the queries with a low number of Executions and a high Elapsed time. Such a query is usually the problematic one: since Elapsed time is the total time the query spent on its task, a high value combined with few Executions means the query is, for some reason, not performing up to expectations.
There are a few more figures to check while tracking down the issue.
A buffer get is less expensive than a physical read, because for a physical read the database has to work harder (and longer) to get the data: essentially the time it would have taken had the block been in the buffer cache, plus the time actually taken to read the physical block.
If you suspect that excessive parsing is hurting your database’s performance:
check the “time model statistics” section (hard parse elapsed time, parse time elapsed, etc.)
see if there are any signs of library cache contention in the top 5 events
see if CPU is an issue.
Establishing a new database connection is also expensive (and even more expensive in case of audit or triggers).
“Logon storms” are known to create very serious performance problems.
If you suspect that a high number of logons is degrading your performance, check “connection management call elapsed time” in “Time model statistics”.
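These figures come from V$SYS_TIME_MODEL, the same data AWR snapshots. A quick sketch pulling them with python-oracledb; connection details are placeholders:

```python
import oracledb  # pip install oracledb

conn = oracledb.connect(user="admin", password="secret",
                        dsn="dbhost/orclpdb1")
cur = conn.cursor()

# Values are cumulative microseconds since instance startup;
# AWR reports the delta between two snapshots.
cur.execute("""
    SELECT stat_name, value / 1e6 AS seconds
    FROM   v$sys_time_model
    WHERE  stat_name IN ('DB time', 'DB CPU',
                         'parse time elapsed',
                         'hard parse elapsed time',
                         'connection management call elapsed time')
""")
for stat_name, seconds in cur:
    print(f"{stat_name:45s} {seconds:12.1f}s")
```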
A low soft parse ratio indicates bind variable and cursor versioning issues. A soft parse ratio of 99.25% means that about 0.75% (100 minus the soft parse ratio) of parses are hard parses. A low hard parse rate is good for us.
If Latch Hit % is <99%, you may have a latch problem. Tune latches to reduce cache contention.
Library hit % is great when it is near 100%. If it is under 95%, we would investigate the size of the shared pool.
If this ratio is low, possible causes and remedies include:
• The SHARED_POOL_SIZE init parameter may need to be increased.
• CURSOR_SHARING may need to be set to FORCE.
• SHARED_POOL_RESERVED_SIZE may be too small.
• Inefficient sharing of SQL, PL/SQL, or Java code.
• Insufficient use of bind variables.
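These ratios can also be computed directly from V$SYSSTAT and V$LIBRARYCACHE rather than read off the report. A sketch, again with python-oracledb and placeholder connection details:

```python
import oracledb  # pip install oracledb

conn = oracledb.connect(user="admin", password="secret",
                        dsn="dbhost/orclpdb1")
cur = conn.cursor()

# Soft parse % = share of parses that avoided a hard parse.
cur.execute("""
    SELECT MAX(CASE WHEN name = 'parse count (total)' THEN value END),
           MAX(CASE WHEN name = 'parse count (hard)'  THEN value END)
    FROM   v$sysstat
    WHERE  name IN ('parse count (total)', 'parse count (hard)')
""")
total, hard = cur.fetchone()
print(f"soft parse %: {100.0 * (total - hard) / total:.2f}")

# Library cache hit ratio: pin hits versus pins.
cur.execute("SELECT 100 * SUM(pinhits) / SUM(pins) FROM v$librarycache")
print(f"library hit %: {cur.fetchone()[0]:.2f}")
```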
