Oracle database reads monitor or counter

I'm not very familiar with Oracle DB. Is there a way to see all of the record reads on all tables across the DB? A tool or utility of some sort, maybe, or something built into Oracle SQL Developer?
I'm doing this to see if there are any inefficient queries in the application, or to see who's the biggest "hog" of DB reads across the application.

There are a lot of different possibilities here depending on how detailed (and pretty) you want to get, your licensing, what infrastructure you have in place, etc.
The simplest, lowest-common-denominator option from within SQL Developer is to open the Reports pane (View | Reports), go to All Reports | Data Dictionary Reports | Database Administration | Top SQL and select one of the reports there, probably Top SQL by Buffer Gets or Top SQL by CPU. Those open a report showing the top SQL by logical I/O or CPU that is currently in the plan cache.
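If you'd rather run it yourself, those reports boil down to a query against v$sql, roughly like this (a minimal sketch; the column selection and the top-10 cutoff are my own choices, and you need SELECT privilege on V$SQL):

-- Top statements currently cached, ordered by logical I/O (buffer gets).
SELECT *
  FROM (SELECT sql_id,
               buffer_gets,
               executions,
               ROUND(buffer_gets / NULLIF(executions, 0)) AS gets_per_exec,
               SUBSTR(sql_text, 1, 100)                   AS sql_snippet
          FROM v$sql
         ORDER BY buffer_gets DESC)
 WHERE ROWNUM <= 10;

Sorting by gets_per_exec instead of raw buffer_gets highlights individually inefficient statements rather than merely frequently executed ones.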

Related

How can we do data analysis for DB replication project

We are facing an issue in our project: data verification.
The project is about replication of data from Sybase to Oracle DBs.
The table structure for Table A is the same across Sybase and Oracle: the same columns and the same primary key combination across all the databases.
E.g. if Sybase has Table A with columns a, b and c, a table with the same name and the same columns will be available in the different databases.
We are done with the replication part, but we have faced some silent failures, i.e. data discrepancies, and we are wondering if there is any tool already available for this.
Any information on this would be helpful. Thanks.
Sybase (now SAP) has a couple of products that can be used for data comparisons and reconciliation:
rs_subcmp - an older, 32-bit tool that comes with the Sybase Replication Server product and can be used to compare data between source and target; SQL reconciliation scripts can be generated from the differences and then applied to the target to bring it in sync with the source. If your tables are more than 1 GB in size you can still use rs_subcmp, but you'll need to create multiple comparison jobs (via WHERE clauses) to work on different subsets of your tables. [I don't recall if rs_subcmp can be used for heterogeneous replication setups, eg, ASE-Oracle.]
Data Assurance (DA) - the newer, 64-bit product, also from Sybase, which can also compare data and (re)sync the target(s) from the source (either via SQL reconciliation scripts or directly). DA is capable of handling comparisons between a handful of different RDBMS products (eg, ASE-Oracle). I'm currently working on a project where one of the requirements is to validate (and reconcile where needed) 200+ TB of data being migrated from Oracle to HANA, and I'm using DA for the validation/reconciliation portion of the project.
As @TenG has hinted at with his answer, there's a good bit of effort involved in comparing data and generating code to reconcile the differences. Rolling your own code is doable but will entail a lot of work. If you've got the money, you'll likely find 3rd-party tools can get most or all of the work done for you.
If you used a 3rd party product to replicate your data from Sybase to Oracle, you may want to see if the same vendor has a comparison/validation/reconciliation tool you could use.
I've worked on a few migration projects and a key part has always been data reconciliation.
I can only talk about the approaches we took, based on the tools available to us, the need to minimise downtime, and the constraints of available space.
In all cases I took to writing scripts that worked on two levels: a summary view and a "deep dive". We couldn't find any readily available tools that did what we wanted in a timely enough manner. In fact, even the migration tools we found (Data Pump, SQL*Loader, GoldenGate, etc.) had limitations, and we hand-coded scripts to handle the bits that we found to be lacking or too slow in the standard tools.
The summary view varied from project to project. It was part functional (do the accounting figures for transactions match?) for the users to verify, and part technical. For smaller tables we could just write simple reports and the diff was straightforward.
For larger tables we wrote technical reports that looked at bands of data (e.g. grouping the PK into 1000s), collected all the column data, and produced a checksum, generating a report for each table like:
PK ID Range Start   Checksum
-----------------   -----------
100000              22773377829
200000              38938938282
...
Corresponding table pairs from each database were then "diff"ed against each other to highlight discrepancies. Any differences that were found could then be looked at in more detail.
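On the Oracle side, a banded checksum query might look something like this (a hypothetical sketch: the table MYTABLE, the numeric key PK_ID, the data columns COL1..COL3 and the use of ORA_HASH as the checksum function are all my own stand-ins):

-- One checksum row per band of 1000 PK values; diff this output against
-- the corresponding report from the source database.
SELECT TRUNC(pk_id / 1000) * 1000                        AS band_start,
       SUM(ORA_HASH(col1 || '|' || col2 || '|' || col3)) AS band_checksum
  FROM mytable
 GROUP BY TRUNC(pk_id / 1000)
 ORDER BY band_start;

The '|' separator guards against different column values accidentally concatenating into the same string.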
The scripts were written in such a way as to allow them to run in parallel, each looking at a discrete band. The band ranges were tunable as well, to get the best throughput. This obviously sped things up.
The scripts were shell scripts firing off sqlplus reports, with similar scripts for the source database.
On one project there wasn't enough disk space to do these reports, so I wrote a Java program that queried the two databases side by side, using blocking queues to fetch and compare row sets. Working in memory meant this was super fast.
For the "deep dive" we looked at the details for key tables, or for tables that reports a checksum difference.
For the user reports, the users would specify what they wanted to see, and we wrote the reports accordingly.
On the last project, the only discrepancies found were caused by character set conversion issues (people names with accents weren't handled correctly).
On projects where the overall dataset was smaller, we extracted the data to XML files and wrote a Java tool to process the pairs of files and report differences.
The SAP/Sybase rs_subcmp tool is pretty powerful and also pretty hard to use. For details see:
https://help.sap.com/viewer/075940003f1549159206fcc89d020515/16.0.3.3/en-US/feb58db1bd1c1014b134ef4efef25563.html?q=rs_subcmp
You have to pass it key field information, but once you do that, it can retry/restart the compare streams after transient differences. Pretty fancy.
rs_subcmp expects to work on a Sybase data source, so to compare against Oracle you'd probably have to set up one of those Sybase-to-Oracle gateway products ($$$$$).
Could you install the Oracle ODBC drivers and configure them to allow Sybase clients to access Oracle? I'm guessing not (but that's outside the range of my experience).
Note the "-h" option for rs_subcmp. The docs just say it runs a "fast comparison", but what it's actually doing is running queries using the hashbytes() function. Something like:
select keyfield1, keyfield2, hashbytes("Md5", datacol1, datacol2, datacol3)
from mytable
So this sort of query might be good for the "summary view" type of comparison discussed above, if the Oracle STANDARD_HASH() function output matches up with the Sybase hashbytes() output (again, outside my experience).
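For reference, the Oracle-side counterpart might look like the following (a sketch only: whether the MD5 bytes come out identical to Sybase's hashbytes() for the same inputs is exactly the open question above, and both sides would need to agree on how the columns are concatenated):

-- STANDARD_HASH is available from Oracle 12c onwards.
select keyfield1, keyfield2,
       standard_hash(datacol1 || datacol2 || datacol3, 'MD5') as row_hash
from mytable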
Note: as of ASE 16, there was a bug in the hash() and hashbytes() functions when running the Md5 hash option against large varbinary columns, where they could use up all procedure cache, potentially crashing the server (CR 811073).

Dynamically list the contents of a table in a database that continuously updates

It's kind of a real-world problem and I believe the solution exists, but I couldn't find one.
We have a database called Transactions that contains tables such as Positions, Securities, Bogies, Accounts, Commodities and so on, being updated continuously, every second, whenever a new transaction happens. For the time being, we have replicated the master database Transactions to a new database named TRN, on which we do all the querying and updating.
We want a sort of monitoring system (like the htop process viewer in Linux) for the database that dynamically lists updated rows in its tables at any time.
TL;DR: Is there any way to get a continuously updating list of rows in any table in the database?
Currently we are working with Sybase and Oracle DBMSs on the Linux (Ubuntu) platform, but we would like to receive generic answers that cover most platforms and DBMSs (including MySQL), and any tools, utilities or scripts that can do this, so that it can help us easily migrate to other platforms and/or DBMSs in the future.
To list updated rows, you conceptually need one of two things:
The updating statement's effect on the table.
A previous version of the table to compare with.
How you get them and in what form is completely up to you.
The 1st option allows you to list updates with statement granularity while the 2nd is more suitable for time-based granularity.
Some options off the top of my head:
Write to a temporary table
Add a field with transaction id/timestamp
Make clones of the table regularly
AFAICS, Oracle doesn't have built-in facilities to get the affected rows, only their count.
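As an illustration of the second of those options (a transaction id/timestamp field), a minimal Oracle sketch might look like this; the table name POSITIONS is borrowed from the question, while the column and trigger names are made up:

-- Add a last-modified column and keep it current via a trigger.
ALTER TABLE positions ADD (last_modified TIMESTAMP);

CREATE OR REPLACE TRIGGER positions_touch
BEFORE INSERT OR UPDATE ON positions
FOR EACH ROW
BEGIN
  :NEW.last_modified := SYSTIMESTAMP;
END;
/

-- A poller can then list rows changed in, say, the last minute:
SELECT *
  FROM positions
 WHERE last_modified > SYSTIMESTAMP - INTERVAL '1' MINUTE;

Bear in mind the trigger adds a small cost to every insert and update, which matters on a table updated every second.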
Not a lot of details in the question so not sure how much of this will be of use ...
'Sybase' is mentioned but nothing is said about which Sybase RDBMS product (ASE? SQLAnywhere? IQ? Advantage?)
by 'replicated master database transaction' I'm assuming this means the primary database is being replicated (as opposed to the database called 'master' in a Sybase ASE instance)
no mention is made of what products/tools are being used to 'replicate' the transactions to the 'new database' named 'TRN'
So, assuming part of your environment includes Sybase(SAP) ASE ...
MDA tables can be used to capture counters of DML operations (eg, insert/update/delete) over a given time period (see the sketch after this list)
MDA tables can capture some SQL text, though the volume/quality could be in doubt if a) MDA is not configured properly and/or b) the DML operations are wrapped up in prepared statements, stored procs and triggers
auditing could be enabled to capture some commands but again, volume/quality could be in doubt based on how the DML commands are executed
also keep in mind that there's a performance hit for using MDA tables and/or auditing, with the level of performance degradation based on individual config settings and the volume of DML activity
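For example, pulling per-table DML counters from the MDA tables might look like this (a sketch, assuming MDA tables are enabled and the login has mon_role; the counters are cumulative since they were last cleared):

-- Sybase ASE: per-table row-change counters; column 6 is total DML.
SELECT DBName, ObjectName,
       RowsInserted, RowsUpdated, RowsDeleted,
       RowsInserted + RowsUpdated + RowsDeleted
  FROM master..monOpenObjectActivity
 ORDER BY 6 DESC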
Assuming you're using the Sybase(SAP) Replication Server product, those replicated transactions sent through repserver likely have all the info you need to know which tables/rows are being affected; so you have a couple options:
route a copy of the transactions to another database where you can capture the transactions in whatever format you need [you'll need to design the database and/or any customized repserver function strings]
consider using the Sybase(SAP) Real Time Data Streaming product (yeah, additional li$ence is required) which is specifically designed for scenarios like yours, ie, pull transactions off the repserver queues and format for use in downstream systems (eg, tibco/mqs, custom apps)
I'm not aware of any 'generic' products that work, out of the box, as per your (limited) requirements. You're likely looking at some different solutions and/or customized code to cover your particular situation.

Oracle Apex Interactive Report bad performance while loading

I have an interactive report in one of my APEX applications. The SQL query used in the IR runs fine when executed in SQL Developer.
But at times in the application it gets stuck and takes much longer than usual to load the IR (usually it takes less than 5 seconds to load, but at times more than 50 seconds).
What might be the possible reasons for it loading slowly?
The query is well tuned and the IR has default settings with no modifications. I have also checked the stats on the tables and they are fresh.
The SQL query used in the IR fetches 10k records.
If you go into Component View and then click the Interactive Report under Regions, there is a setting near the bottom, under the Performance heading, called Maximum Rows To Process. Limiting the number of rows to display also sped things up for me.
Sorry, but I can't write comments. Is there any database view in your query?
I had a similar situation where a query against a database view with 6 million records took around 3 minutes to complete in an Oracle APEX IR, and 10-15 seconds in SQL Developer. So after some research I tried putting the SQL from the view directly into the IR, and the result was almost the same as in SQL Developer.
You can also remove pagination from the IR, or change it from "x to y of z" to just "x to y".
I hope this can help you.
Query response time in SQL Developer versus a web browser cannot be compared directly. Some of the reasons for the sluggishness could be related to server setup, server load, current user traffic, page load processes, page and region rendering, the number of regions, components and plugins, the navigation menu query, the report query, the number of columns and rows being displayed, row content length, APEX items (especially LOVs with SQL queries), etc.
From your question, it looks like the performance issue is not consistent, so I think it may be related to server setup or traffic. Check whether you see any difference in load time after bouncing the server, if that's an option. Try to isolate the problem: if the issue is specific to the interactive report, build a classic report and compare times.
Another thing that has helped me in the past is to compare and verify compute times using the APEX Debugger.
Also look at the Network and Timeline tabs in the Chrome debugger. Some other things to try:
Implement indexes on your tables
Verify with your DBA whether you have database locks (see the query below)
Verify the amount of logging in the database
Switch to classic reports.
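For the lock check mentioned above, a quick way to spot blocking sessions in Oracle is something like this (a sketch; your DBA may well have a preferred variant):

-- Sessions currently blocked by another session.
SELECT sid, serial#, blocking_session, seconds_in_wait, event
  FROM v$session
 WHERE blocking_session IS NOT NULL;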
Regards

Oracle user_constraints, user_tables etc. views for production

Is it OK to use those views in production? I mean, are queries against the data dictionary intended to be called frequently, or are they designed just for very rare usage with tools like SQL Navigator, SQL Developer, etc.?
It depends on your definition of "frequently", the size of those objects in your database, and why you need to query them.
In general, it's fine to query data dictionary tables on a regular basis in production; tons of database monitoring tools, for example, will regularly query a bunch of data dictionary tables to gather performance data. At the same time, though, you can easily configure most of these tools to put a tremendous load on your database by gathering too much data too frequently, so that your performance monitoring tool becomes the source of performance problems. Normally, you can just dial back the amount of data being captured and the frequency at which it is captured to get 99% of the monitoring benefit without creating a bunch of issues.
I'm not sure why any tool would frequently need to query user_tables: since tables aren't created or destroyed at runtime in a properly designed system, there aren't many reasons why you'd really need to query that particular view all that frequently.
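As a concrete example of the occasional (rather than frequent) dictionary query the question asks about, this is the kind of thing those views are well suited for (an illustrative query, not from any particular tool):

-- List your constraints and the tables they belong to.
SELECT table_name, constraint_name, constraint_type
  FROM user_constraints
 ORDER BY table_name;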

How to monitor web application DB query execution plans?

Is there a way in TOAD or some other tool to monitor queries being executed by your web app?
I'd like to examine the explain/execution plans for the web app queries.
I'm debugging why the webapp queries are slower than when run from sqlplus.
Generally you can track and analyse from three points.
Firstly SQL, mostly through the v$sql view.
Secondly through session (starting with v$session).
Finally through time (measuring, normally at either a system or session level, for a period of time).
If a particular SQL statement, such as SELECT * FROM table WHERE type = :val, is executed then the database will make a quick hash of it and see if there is a matching statement in the cache. The statement not only has to match on the text, but on certain environmental settings too (such as Parsing user, Optimizer Goal, bind variable types, NLS settings...).
If there is no matching statement, then the database will feed it to the optimizer to come up with a query plan. If there is a match, then the plan already determined for that statement will be used.
So I would suggest your first step is to take a SQL statement which has been executed both by the web app and from sqlplus and see if it is using the same plan. You should be able to look in v$sql for the statement of interest and see how many occurrences it has.
If you have multiple occurrences, especially with different MODULE/ACTION/SERVICE values, then you can look at the plans to see if they differ (DBMS_XPLAN.DISPLAY_CURSOR). If you have only one occurrence then the SQL is being shared and you need to take a different approach to isolating the web-app executions from the sqlplus executions.
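For example (a sketch; the LIKE filter is just a placeholder for your actual statement text, and the sql_id passed to DBMS_XPLAN is whichever one the first query returns):

-- Find cached copies of the statement and see whether their plans differ.
SELECT sql_id, child_number, plan_hash_value, module, action, executions
  FROM v$sql
 WHERE sql_text LIKE 'SELECT * FROM table WHERE type = :val%';

-- Display the plan for a cursor of interest:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL));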
One way to isolate them would be to trace the execution of the SQL through both a web-app session and a sqlplus session (DBMS_MONITOR), then run tkprof or similar on the trace files and look for differences.
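Enabling the trace for a given session might look like this (a sketch; the SID and serial# values are placeholders you'd look up in v$session first):

-- Trace one session, including wait events and bind values.
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 45678, waits => TRUE, binds => TRUE);

-- ... run the workload from the web app ...

EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 45678);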
I can't help you with doing it through TOAD, but you can't go wrong in getting an understanding of the underlying tools and techniques.
Yes, there is a way to monitor a web app's calls to DB queries in TOAD for Oracle.
START -> All Programs -> Quest Software -> TOAD for Oracle -> Tools -> SQL Monitor
With this tool you select the process to monitor (TOAD itself, or the web/dev process; I don't remember the exact name in the debug list), whether it is "running" or, in this case, "debug". The tool then shows which stored procedures or functions the app is calling.
