How to take an AWR report from Dynatrace? - jmeter

I am new to performance analysis of databases.
How do I pull an AWR report from Dynatrace?
What parameters do we monitor in an AWR report?

You don't. You pull it from Oracle.

Dynatrace enables monitoring of your entire infrastructure, including your hosts, processes, and network; the AWR report itself comes from Oracle. You can schedule an AWR report to be sent via an email trigger.
2. What parameters do we monitor in an AWR report?
You have to check a combination of AWR report sections; a single section will not give you a clear picture.
Depending on the issue, you have to look at different sections.
The main sections in an AWR report include:
Report Summary: an overall summary of the instance during the snapshot period, containing the key aggregate information.
Cache Sizes (end): size of each SGA region after AMM has changed them. This information can be compared to the original init.ora parameters at the end of the AWR report.
Load Profile: shows important rates expressed in units of per second and transactions per second.
Instance Efficiency Percentages: With a target of 100%, these are high-level ratios for activity in the SGA.
Shared Pool Statistics: A good summary of changes to the shared pool during the snapshot period.
Top 5 Timed Events: It shows the top wait events and can quickly show the overall database bottleneck.
Wait Events Statistics Section: shows a breakdown of the main wait events in the database including foreground and background database wait events as well as time model, operating system, service, and wait classes statistics.
Wait Events: This AWR report section provides more detailed wait event information for foreground user processes which includes Top 5 wait events and many other wait events that occurred during the snapshot interval.
Background Wait Events: This section is relevant to the background process wait events.
Time Model Statistics: Time model statistics report how database-processing time is spent. This section contains detailed timing information on particular components participating in database processing.
Operating System Statistics: The stress on the Oracle server is important, and this section shows the main external resources including I/O, CPU, memory, and network usage.
Service Statistics: The service statistics section gives information about how particular services configured in the database are operating.
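Since the report itself comes from Oracle rather than Dynatrace, here is a minimal sketch of pulling one manually with the DBMS_WORKLOAD_REPOSITORY package. The snapshot IDs, dbid, and instance number below are placeholders you would look up first; on the server you can also just run the interactive script @?/rdbms/admin/awrrpt.sql.

-- List recent snapshots to pick a begin/end pair.
SELECT snap_id, begin_interval_time, end_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id DESC;

-- Optionally force a snapshot now, e.g. to bracket a test run (SQL*Plus syntax).
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- Generate the HTML report between two snapshots.
SELECT output
FROM   TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_HTML(
         l_dbid     => 1234567890,  -- placeholder: SELECT dbid FROM v$database
         l_inst_num => 1,           -- placeholder: SELECT instance_number FROM v$instance
         l_bid      => 100,         -- placeholder begin snapshot id
         l_eid      => 101));       -- placeholder end snapshot id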

Related

Experiments Feature stuck on collecting data

I am trying to split traffic from a given flow into different versions to measure statistical performance over time using the Experiment feature. However, it always shows the state "Collecting Data".
Here are the steps to reproduce the issue:
1. Create an Experiment on a flow and select different versions
2. Select Auto rollout and select the Steps option
3. Add steps for a gradual increase of traffic and a minimum duration
4. Save and start the Experiment
5. Send queries to the chatbot, triggering the configured flow for the Experiment
The experiment should show some results in the Status tab and compare the performance of multiple flow versions. However, it does not produce any results: the status always shows "Collecting Data" and Auto Rollout shows "Not Started".
The only prerequisite for the Experiments feature to work is to enable the Interaction logs which are already enabled on my virtual agent.
About 2.5K sessions (~4K interactions) were created in the last 48 hours. Are there any minimum requirements for it to generate results, such as a minimum number of sessions?

Oracle auto-gather statistics & stale statistics

Starting in Oracle 11g, GATHER_STATS_JOB is no longer valid and has been replaced by "auto optimizer stats collection".
This job supposedly runs during the "maintenance windows" and gathers statistics for objects which have changed 10% or more, or have stale stats. If this is true, then why do I still get objects when I run a query checking stale_stats='YES'?
Maybe I'm not understanding how the job executes...
There are two broad possibilities:
1. Oracle updates stale_stats to "YES" in dba_tab_statistics periodically throughout the day as tables undergo changes. It is entirely possible that a table had just under the threshold amount of changes when stats were gathered this morning and that stale_stats flipped to "YES" during the day when a few more changes were made.
2. Depending on how many objects had stale stats when the job ran, how much data those tables contained, how large your maintenance window is, and how powerful your server is, it is possible that the statistics job had to be aborted before it could re-gather all the stale statistics. If the job was aborted, that abort would be logged in the job history. If this happened because there happened to be a large number of changes one day (say you ran an annual purge process that deleted a large amount of data from almost every table in the database), the stale statistics would be updated over the course of several days' worth of statistics job runs until the job caught up.
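If you want to check which of these is happening, here is a quick sketch against the standard dictionary views (adjust the filters to your own schemas):

-- Which tables are currently flagged stale, and when were they last analyzed?
SELECT owner, table_name, stale_stats, last_analyzed
FROM   dba_tab_statistics
WHERE  stale_stats = 'YES'
ORDER  BY last_analyzed;

-- Did the stats autotask finish, or was it stopped when the window closed?
SELECT job_start_time, job_duration, job_status
FROM   dba_autotask_job_history
WHERE  client_name = 'auto optimizer stats collection'
ORDER  BY job_start_time DESC;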

What criteria does Oracle follow to auto-collect stats on user schema tables in 12c?

As per the Oracle documentation, it is said that statistics are collected for "all objects" in the database. But it does not specify anywhere how this applies to user-specific schemas.
1) What criteria does it follow for automatic collection of statistics on user-specific schemas?
2) Is there any detailed explanation in Metalink of how it is done?
I'd appreciate your valuable response on it.
Thanks,
Mir
The default statistics gathering process works for all schemas, including user schemas. Statistics collection is complicated, but it basically boils down to when statistics are gathered and what statistics are gathered:
WHEN: AutoTasks gather statistics during specified maintenance windows (usually 10 PM every day).
WHAT: The STALE_PERCENT preference determines when to gather statistics on a table or index. By default, statistics will be gathered if 10% of the rows have changed.
But there are lots of exceptions. Fixed object stats, dictionary object stats, and system statistics (about system performance) are only gathered manually. And tables can be locked so that their statistics are not altered.
You can read more details in the Optimizer Statistics section of the Database Concepts Guide, or the Optimizer Statistics part of the SQL Tuning Guide.
There are several ways to determine when statistics were last gathered. Per object, you can look for the LAST_ANALYZED date column in views like DBA_TABLES and DBA_INDEXES.
To see when the statistics auto tasks should run, there are lots of DBA_AUTOTASK_* views. Those views are difficult to understand, and there are many ways a task can be disabled. (I wish Oracle had just used DBMS_SCHEDULER.) To see when statistics tasks were actually run, see the DBA_OPTSTAT_* views.
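For example, a sketch using those views (the SCOTT and EMP names are just placeholders):

-- When were statistics last gathered for a schema's tables?
SELECT table_name, last_analyzed
FROM   dba_tables
WHERE  owner = 'SCOTT';

-- What staleness threshold applies to a given table (10 percent by default)?
SELECT DBMS_STATS.GET_PREFS('STALE_PERCENT', 'SCOTT', 'EMP') AS stale_percent
FROM   dual;

-- Is the stats autotask enabled at all?
SELECT client_name, status
FROM   dba_autotask_client
WHERE  client_name = 'auto optimizer stats collection';

-- History of statistics gathering operations.
SELECT operation, start_time, end_time, status
FROM   dba_optstat_operations
ORDER  BY start_time DESC;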
It's a huge topic, and this answer is only a high level overview.

Oracle db table data loading is too slow in DBeaver

I'm using DBeaver to connect to an Oracle database. The database connection and table properties view functions work fine without any delay. But fetching table data is too slow (sometimes around 50 seconds).
Any settings to speed up fetching table data in DBeaver?
Changing the following setting in your Oracle DB connection will make fetching table data faster than when it is not set.
Right-click on your DB connection --> Edit Connection --> Oracle properties --> tick 'Use RULE hint for system catalog queries'
(this is not set by default)
UPDATE
In the newer version (21.0.0) of DBeaver, many more performance options appear here. Turning them on significantly improved the performance for me.
I've never used DBeaver, but I often see applications which use too small an "array fetch size"**, which often causes fetch issues.
** Array fetch size note:
As per the Oracle documentation, the Fetch Buffer Size is an application-side memory setting that affects the number of rows returned by a single fetch. Generally you balance the number of rows returned in a single fetch (a.k.a. the array fetch size) against the total number of rows that need to be fetched.
A low array fetch size relative to the number of rows that need to be returned will manifest as delays from the increased network and client-side processing needed for each fetch (i.e. the high cost of each network round trip [SQL*Net protocol]).
If this is the case, you will likely see very high waits on "SQL*Net message from client" [in gv$session or elsewhere].
SQL*Net message from client
This wait event is posted by the session when it is waiting for a message from the client to arrive. Generally, this means that the session is just sitting idle; however, in a client/server environment it could also mean that either the client process is running slow or there are network latency delays. Database performance is not degraded by high wait times for this wait event.
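To make the round-trip cost concrete: fetching 50,000 rows with an array fetch size of 100 takes 500 round trips, while a fetch size of 1,000 takes only 50. If you have access to the gv$ views, here is a sketch of how a DBA might check whether this wait dominates a session:

-- Time each session has spent waiting on client round trips.
-- For a session that is actively fetching (not idle), high values here
-- suggest the client is using too small an array fetch size.
SELECT inst_id, sid, event, total_waits,
       time_waited_micro / 1e6 AS seconds_waited
FROM   gv$session_event
WHERE  event = 'SQL*Net message from client'
ORDER  BY time_waited_micro DESC;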

How can I see aggregates over the traces in New Relic

We use New Relic to gather performance information from our production environment and we have added some custom instrumentation. In the Web Transactions screens, we can see which transactions use most time and we can even drill down into the specific traces of the slowest transactions. This is all working fine. However, the slowest transactions are not always representative for the operation as a whole. They are often edge cases (cache expired, warming requests after an update, etc...).
I would be interested to see the very same data that we can see in the Trace Details in a more aggregate way. Preferably also in the hierarchical way that is used in Trace Details (although this will not always be possible, as multiple instances may have different traces). Is the Breakdown Table on the overview page for one Web Transaction type actually what I am looking for? I am not sure. What does that show exactly?
The Breakdown Table in New Relic's Web Transactions tab is designed to give you an aggregate of performance data along with historical comparisons. This may not provide the specific level of detail you're looking for.
New Relic has a new feature available for the Python and Java agents called X-Ray Sessions. After you start an x-ray session, New Relic will collect up to 100 transaction traces and a thread profile for your transaction. Collection automatically stops at 100 traces or 24 hours, whichever comes first. The results are displayed in a hierarchical waterfall chart like transaction traces, but the data is aggregated. Here is an overview:
https://newrelic.com/docs/transactions-dashboards/xray-sessions
While I can't say if or when this feature will be rolled out to the other language agents, I suggest keeping an eye on the following for updates:
https://newrelic.com/docs/features/new-noteworthy
