I need to do a task but I have no idea how to do it.
Here is the problem:
I have about 1,000 tables in an Oracle database and many processes.
Each process does one or more SELECT on one or many tables.
Because it's almost impossible to dig through the source code to find out which process runs which SELECT on which tables, I would like to have some kind of trigger on SELECT on every table.
The idea is that I will launch the processes one by one, so I can see which tables each one queries.
I know that there is no trigger on SELECT, but is there anything else?
I need to do this in one shot, just to recover the necessary information; it will not run every day.
You could activate auditing. You can audit all SELECT statements with:
AUDIT SELECT TABLE;
You can specify BY SESSION so that only one record will be written to the audit trail per table accessed per session.
Your AUDIT_TRAIL parameter must be set to either DB or OS. If it is set to DB, the audit trail will be written to the SYS.AUD$ table.
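Once a process has run, a hedged sketch of pulling the results back out (this assumes AUDIT_TRAIL=DB and that you can query the DBA_AUDIT_TRAIL view; APP_USER is a hypothetical username):

-- list the tables a given user's sessions selected from
SELECT username, owner, obj_name, timestamp
  FROM dba_audit_trail
 WHERE username = 'APP_USER'
 ORDER BY timestamp;

-- when you have what you need, switch the auditing back off
NOAUDIT SELECT TABLE;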
Assuming that you can map a "process" in your terminology to a particular Oracle session, you could trace the Oracle session. That would show you all the SQL statements executed by that session.
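For example, a minimal sketch of tracing another session with DBMS_MONITOR (the SID and serial# are placeholders; look them up in V$SESSION, and note this needs DBA-level privileges):

-- find the target session first, e.g.:
-- SELECT sid, serial#, username, program FROM v$session;

BEGIN
  -- start tracing the target session
  DBMS_MONITOR.session_trace_enable(session_id => 123,
                                    serial_num => 4567,
                                    waits      => TRUE,
                                    binds      => FALSE);
END;
/

-- ... let the process run, then stop tracing:
BEGIN
  DBMS_MONITOR.session_trace_disable(session_id => 123, serial_num => 4567);
END;
/

The trace file lands in the database server's trace directory; tkprof will format it into something readable.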
You could also potentially do a SQL*Net trace from whatever the client machine is (note that the "client machine" in a three-tier environment is the application server). A SQL*Net trace tends not to be nearly as easy to work with, however.
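If you do go that route, client-side tracing is switched on in the client's sqlnet.ora, along these lines (the directory and file names are just examples):

TRACE_LEVEL_CLIENT = SUPPORT
TRACE_DIRECTORY_CLIENT = /tmp/sqlnet_trace
TRACE_FILE_CLIENT = client_trace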
I'm looking for a simple way to communicate between two databases; there currently exists a database link between both databases.
I want to process a job on database 1 for a batch of records (there is a batch code for each batch of records). Once the process has finished on database 1 and all the batches of records have been processed, I want database 2 to see that database 1 has processed a number of batches (batch codes), either by querying an Oracle table or an Oracle Advanced Queue that sits on either database 1 or database 2.
Database 2 will process the batches of records that are on database 1, through a view over the database link, using each batch code, and update the status of that batch to complete.
I want to be able to update the Oracle Advanced Queue or database table with each batch's number, progress status ('S' = started, 'C' = completed), and status date.
Table name: batch_records
Table columns: batch_no, status, status_date
Questions:
Can this be done by a simple database table rather than a complex Oracle Advanced Queue?
Can a table be updated over a database link?
Are there any examples of this?
To answer your questions first:
yes, I believe so
yes, it can. But, if there are many rows involved, it can be pretty slow
probably
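On the second point, an update over a database link looks just like a local one; a small sketch (db1_link and batch_records are hypothetical names, reused in the sketches below):

UPDATE batch_records@db1_link
   SET status = 'C',
       status_date = SYSDATE
 WHERE batch_no = 42;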
A database link is the way to communicate between two databases. If those jobs run on database 1 (DB1), I'd suggest you keep the work there, in DB1. Doing things over a database link invites problems of various kinds: it can be slow, and you can't do everything over a link (LOBs, for example). One option is to schedule a job (using DBMS_SCHEDULER, or DBMS_JOB, which is quite OK for simple things). Let the procedure maintain the job status in some table (that would be the "simple table" from your first question) in DB1, which DB2 will then read.
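A minimal sketch of the DB1 side, with hypothetical names throughout (batch_records matches the columns you listed; process_batches and the schedule are assumptions, not a definitive design):

CREATE TABLE batch_records (
  batch_no    NUMBER       PRIMARY KEY,
  status      VARCHAR2(1)  CHECK (status IN ('S', 'C')),
  status_date DATE
);

CREATE OR REPLACE PROCEDURE process_batches AS
BEGIN
  FOR r IN (SELECT batch_no FROM batch_records WHERE status = 'S') LOOP
    -- ... process this batch of records here ...
    UPDATE batch_records
       SET status = 'C', status_date = SYSDATE
     WHERE batch_no = r.batch_no;
  END LOOP;
  COMMIT;
END;
/

BEGIN
  -- run it nightly at 01:00 via DBMS_SCHEDULER (the interval is just an example)
  DBMS_SCHEDULER.create_job(
    job_name        => 'PROCESS_BATCHES_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'PROCESS_BATCHES',
    repeat_interval => 'FREQ=DAILY;BYHOUR=1',
    enabled         => TRUE);
END;
/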
How? Read it directly, or create a materialized view which is refreshed on a schedule (e.g. every morning at 07:00), on demand (not that good an idea), or on commit (once the DB1 procedure does the job and commits its changes, the materialized view is refreshed).
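To make that concrete, a hedged sketch of the two simpler variants (db1_link is the hypothetical link name from above):

-- direct read from DB2 over the link
SELECT batch_no, status, status_date
  FROM batch_records@db1_link
 WHERE status = 'C';

-- or a materialized view on DB2, refreshed every morning at 07:00
CREATE MATERIALIZED VIEW batch_status_mv
  REFRESH COMPLETE
  START WITH SYSDATE
  NEXT TRUNC(SYSDATE) + 1 + 7/24
AS
SELECT batch_no, status, status_date
  FROM batch_records@db1_link;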
If there aren't that many rows involved, I'd probably read the DB1 status table directly, and think of other options later (if necessary).
I’ve been tasked with doing some housekeeping on an Oracle schema I have access to. In a nutshell, I’d like to drop any tables that have not been ‘used’ in the last 3 months (tables that haven’t been queried or had data manipulated in the last 3 months). I have read/write access to the schema but I’m not a DBA; I run relatively basic DML/DDL queries in Oracle.
I’m trying to figure out if there’s a way for me to identify old/redundant tables; here’s what I’ve tried (mostly unsuccessfully)
USER_TABLES was my first port of call, but the LAST_ANALYZED date in this view doesn’t seem to be the last-modified/last-queried date I’m looking for
Googling has brought the DBA_HIST views to my attention; I’ve tried querying some of these (e.g. DBA_HIST_SYSSTAT) but I’m confronted with (ORA-00942: table or view does not exist)
I’ve also tried querying V$SESSION_WAIT, V$ACTIVE_SESSION_HISTORY and V$SEGMENT_STATISTICS, but I get the same ORA-00942 error
I’d be grateful for any advice about whether the options above actually offer the sort of information I need about tables, and if so what I can do to work around the errors I’m getting. Alternatively, are there any other options that I could explore?
Probably the easiest thing to do, to be 100% sure, is to enable auditing on the Oracle tables that you're interested in (possibly all of them). Once enabled, Oracle has an audit trail view (dba_audit_trail) that you can query to find whether the table(s) have been accessed. You can enable auditing by issuing: AUDIT <operation> ON <schema>.<table> BY SESSION;
I chose "by session" so that you only get a single record per session, no matter how many times the session performs the operation (to minimize the records in the audit table).
Example:
audit select on bob.inventory by session;
Then you can query the dba_audit_trail after some time passes to see if any records show up for that table.
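For instance, something along these lines (reusing the bob.inventory example; adjust the filters to taste):

select username, os_username, userhost, action_name, timestamp
  from dba_audit_trail
 where owner = 'BOB'
   and obj_name = 'INVENTORY'
 order by timestamp;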
You can disable auditing by issuing the "noaudit" command.
Hope that helps.
-Jim
I'm new to Oracle, and I saw that Oracle triggers can fire some action after an update or insert is done on an Oracle table.
Is it possible to trigger a SAS program after every update or insert on an Oracle table?
There are a few different ways to do this, but a problem like this is an example of the saying "Just because you can, doesn't mean you should".
So sure, your trigger can be fired on update or insert, and it can call a stored procedure in a package, which can use a host command to call an operating system command, which can call SAS.
Here are some questions:
do you really want to install SAS on the same machine as your Oracle database?
do you really want every transaction that inserts or updates to have to wait until the host command completes? What if SAS is down? Do you want the transaction to complete or.....?
do you really want the account that runs the database to have privileges to start up or send information to other executables? Think security risks.
if an insert affects only one record, the action is clear. What if an update affects a thousand records? What message do you want to send to SAS? One thousand update statements? One update statement?
There are better ways to do this but a complete answer needs more details from you as to the end goal and business logic involved. Some ways I have used include:
a trigger inserts data into an Oracle Advanced Queue (see the sketch after this list). At predetermined intervals, take the changes off the queue and write them to a flat file. Write a file watcher to look for the files and send the info to SAS.
write a Java program to take the changes and ship them
use APEX web services to expose the changes as a series of JSON or REST packets
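A rough sketch of the first option, to make it concrete. Every name here (change_payload_t, change_qt, change_q, my_table and its id column) is hypothetical, and this is an outline rather than a definitive implementation:

-- one-time setup: payload type, queue table, and queue
CREATE OR REPLACE TYPE change_payload_t AS OBJECT (
  table_name  VARCHAR2(30),
  pk_value    NUMBER,
  change_type VARCHAR2(1)   -- 'I' = insert, 'U' = update
);
/

BEGIN
  DBMS_AQADM.create_queue_table(
    queue_table        => 'change_qt',
    queue_payload_type => 'change_payload_t');
  DBMS_AQADM.create_queue(
    queue_name  => 'change_q',
    queue_table => 'change_qt');
  DBMS_AQADM.start_queue(queue_name => 'change_q');
END;
/

-- trigger that enqueues one message per changed row
CREATE OR REPLACE TRIGGER my_table_aiu
AFTER INSERT OR UPDATE ON my_table
FOR EACH ROW
DECLARE
  l_opts  DBMS_AQ.enqueue_options_t;
  l_props DBMS_AQ.message_properties_t;
  l_msgid RAW(16);
BEGIN
  DBMS_AQ.enqueue(
    queue_name         => 'change_q',
    enqueue_options    => l_opts,
    message_properties => l_props,
    payload            => change_payload_t(
                            'MY_TABLE',
                            :new.id,
                            CASE WHEN inserting THEN 'I' ELSE 'U' END),
    msgid              => l_msgid);
END;
/

A separate scheduled job would then dequeue the messages and write the flat file, keeping the transaction itself free of any dependency on SAS.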
Recently, a very strange scenario has been reported from one of our sites.
Based on our data, we found that some kind of delete must have happened to produce that scenario.
In our application code, there is no delete for that table itself. So we checked the gv$sqlarea view (we use RAC) for any DELETE SQL against this table. We found nothing.
Then we tried to do the same kind of delete through PL/SQL Developer. We were able to track all deletes through gv$sqlarea or gv$session. But when we use the query below, then lock, edit, and commit in PL/SQL Developer, there is no trace:
select t.*, t.rowid
from <table> t
Something we were able to find is that sys.mon_mods$ has the count of deletes. But it is not stored for long, so we can't trace it back by timestamp.
Can anyone help me track this down?
Oracle Version: 11.1.0.7.0
Type : RAC (5 instances)
gv$sqlarea just shows the SQL statements that are in the shared pool. If a statement is only executed once then, depending on how large the shared pool is and how many distinct SQL statements are executed, it might not stay in the shared pool very long. I certainly wouldn't expect a one-time statement to still be in the shared pool of a reasonably active system after a couple of hours.
Assuming that you didn't enable auditing and that you don't have triggers that record deletes, is the system in ARCHIVELOG mode? Do you have the archived logs from the point in time where the row was deleted? If so, you could potentially use LogMiner to look through the archived logs to find the statement in question.
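A hedged sketch of the LogMiner approach (the log file name and table name are placeholders; you need EXECUTE on DBMS_LOGMNR and the archived logs covering the window in question):

BEGIN
  -- register the first archived log; repeat with DBMS_LOGMNR.addfile for more
  DBMS_LOGMNR.add_logfile(logfilename => '/arch/log_1234.arc',
                          options     => DBMS_LOGMNR.new);
  -- use the online catalog as the dictionary
  DBMS_LOGMNR.start_logmnr(options => DBMS_LOGMNR.dict_from_online_catalog);
END;
/

SELECT timestamp, seg_owner, seg_name, operation, sql_redo
  FROM v$logmnr_contents
 WHERE operation = 'DELETE'
   AND seg_name = 'YOUR_TABLE';   -- hypothetical table name

BEGIN
  DBMS_LOGMNR.end_logmnr;
END;
/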
I'm using (have to use) a badly designed Oracle 10 DB, for which I don't have admin rights (although I can create tables, triggers, etc. in my schema).
Now I have run into a problem: several users/programs connect to this DB. I must find out who updates a certain row, when, and, if possible, with what kind of statement. Is this possible?
Thanks in advance!
It would be easier to do this if you had admin rights to enable auditing. Without the power of auditing, you are left with triggers to handle the logging of inserts/updates/deletes. In your case, since you are interested only in updates, you can put a trigger on the table that fires after the update and logs to another table what was changed, by whom, from where, to what, and on what day.
I would create a journal table for the table you are working with. It will show you the operation type and the Oracle user...as well as a bunch of other data if you need it.
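A minimal sketch of such a journal, with hypothetical names throughout (target_table, its some_column, and the journal layout are assumptions to adapt to your table):

CREATE TABLE target_table_jn (
  changed_by VARCHAR2(128),
  os_user    VARCHAR2(128),
  machine    VARCHAR2(128),
  changed_on DATE,
  old_value  VARCHAR2(4000),
  new_value  VARCHAR2(4000)
);

CREATE OR REPLACE TRIGGER target_table_au
AFTER UPDATE ON target_table
FOR EACH ROW
BEGIN
  -- record who changed the row, from where, when, and what changed
  INSERT INTO target_table_jn
  VALUES (SYS_CONTEXT('USERENV', 'SESSION_USER'),
          SYS_CONTEXT('USERENV', 'OS_USER'),
          SYS_CONTEXT('USERENV', 'HOST'),
          SYSDATE,
          :old.some_column,
          :new.some_column);
END;
/

SYS_CONTEXT('USERENV', ...) gives you the database user, OS user, and client machine without needing any extra privileges, which fits the no-admin-rights constraint here.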