Oracle Advanced Queues versus a Small Oracle Database Table

I'm looking for a simple way to communicate between two databases; a database link currently exists between them.
I want to process a job on database 1 for batches of records (there is a batch code for each batch of records). Once the process has finished on database 1 and all the batches of records have been processed, I want database 2 to see that database 1 has processed a number of batches (batch codes), either by querying an Oracle table or an Oracle Advanced Queue which sits on either database 1 or database 2.
Database 2 will process the batches of records that are on database 1 through a database-linked view, using each batch code, and update the status of that batch to complete.
I want to be able to update the Oracle Advanced Queue or database table with the batch number, progress status ('S' started, 'C' completed) and status date.
Table name: batch_records
Table columns: batch_no, status, status_date
Questions:
Can this be done by a simple database table rather than a complex Oracle Advanced Queue?
Can a table be updated over a database link?
Are there any examples of this?

To answer your questions first:
Yes, I believe so.
Yes, it can. But if there are many rows involved, it can be pretty slow.
Probably.
A database link is the way to communicate between two databases. If those jobs run on database 1 (DB1), I'd suggest you keep the status information there, in DB1. Doing work over a database link invites problems of different kinds: it might be slow, and you can't do everything over a database link (handle LOBs, for example). One option is to schedule a job (using DBMS_SCHEDULER, or DBMS_JOB, which is quite OK for simple things) and let the procedure maintain its job status in some table (that would be the "simple table" from your first question) in DB1, which DB2 will then read.
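As a minimal sketch of that approach (the procedure name process_batches and the schedule are made up for the example; the table matches the columns you listed):

-- Status table in DB1
CREATE TABLE batch_records (
  batch_no    NUMBER       PRIMARY KEY,
  status      VARCHAR2(1)  CHECK (status IN ('S', 'C')),  -- 'S' started, 'C' completed
  status_date DATE
);

-- Procedure in DB1: mark each batch started, process it, mark it completed
CREATE OR REPLACE PROCEDURE process_batches AS
BEGIN
  FOR r IN (SELECT batch_no FROM batch_records WHERE status IS NULL) LOOP
    UPDATE batch_records SET status = 'S', status_date = SYSDATE
     WHERE batch_no = r.batch_no;
    -- ... actual processing of the batch goes here ...
    UPDATE batch_records SET status = 'C', status_date = SYSDATE
     WHERE batch_no = r.batch_no;
    COMMIT;
  END LOOP;
END;
/

-- Run it on a schedule in DB1 (here: daily at 01:00)
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'PROCESS_BATCHES_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'PROCESS_BATCHES',
    repeat_interval => 'FREQ=DAILY;BYHOUR=1',
    enabled         => TRUE);
END;
/

DB2 can then simply read the completed batches over the link, e.g. SELECT batch_no FROM batch_records@db1_link WHERE status = 'C' (the link name db1_link is an assumption).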
How? Do it directly, or create a materialized view which is refreshed on a schedule (e.g. every morning at 07:00), on demand (not that good an idea), or on commit (once the DB1 procedure does its job and commits the changes, the materialized view will be refreshed).
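A sketch of the materialized view variant on DB2, assuming a scheduled complete refresh and a link named db1_link (both just example choices):

-- On DB2: local snapshot of the DB1 status table, refreshed every morning at 07:00
CREATE MATERIALIZED VIEW batch_records_mv
  REFRESH COMPLETE
  START WITH TRUNC(SYSDATE) + 1 + 7/24
  NEXT  TRUNC(SYSDATE) + 1 + 7/24
AS
  SELECT batch_no, status, status_date
    FROM batch_records@db1_link;

-- Or refresh it on demand from DB2 ('C' = complete refresh)
BEGIN
  DBMS_MVIEW.REFRESH('BATCH_RECORDS_MV', 'C');
END;
/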
If there aren't that many rows involved, I'd probably read the DB1 status table directly, and think of other options later (if necessary).

Related

How to avoid concurrency error for multiple files loading into a single table in ODI 12c

I have a scenario where 3 files of 5 million lines each, containing 3 weeks of data, are bulk mapped to a staging table. They run in parallel. If the data transfer for a file fails with a concurrency error, what is the best way to load the data of the 3 files effectively into the staging table?
(I was asked this in an interview.)
In ODI 12c, Parallel Target Table Load can be achieved by selecting the Use Unique Temporary Object Names checkbox in the Physical tab of a mapping. The work tables for each concurrent session will have a unique name.
I discuss it in more detail in the Parallel Target Table Load section of this blog on ODI 12c new features.

The proper way to record DML and DDL changes to specified tables, schemas or entire oracle database

I am looking for a way to record DML and DDL changes made to specified Oracle schemas or tables dynamically, meaning that the monitored schemas and tables can be changed at application run time.
In short, I want to build an Oracle database probe, not to synchronize databases.
Updated
For example, I set a monitor on a table test in database db. I want to retrieve all changes made to test, such as dropping/adding/modifying a column or inserting/updating/deleting records, and I need to analyze those changes and send them to a blockchain (e.g. "table test added a column field1"); that's why I want to get all executed SQL for the monitored tables.
I have read the Oracle docs about Data Guard and Streams.
The Data Guard doc says:
SQL Apply (logical standby databases only)
Reconstitutes SQL statements from the redo received from the primary database and executes the SQL statements against the logical standby database.
Logical standby databases can be opened in read/write mode, but the target tables being maintained by the logical standby database are opened in read-only mode for reporting purposes (providing the database guard was set appropriately). SQL Apply enables you to use the logical standby database for reporting activities, even while SQL statements are being applied.
The Streams doc says:
Oracle Streams provides two ways to capture database changes implicitly: capture processes and synchronous captures. A capture process can capture DML changes made to tables, schemas, or an entire database, and DDL changes. A synchronous capture can capture DML changes made to tables. Rules determine which changes are captured by a capture process or synchronous capture.
Before this, I had already tried to get the SQL changes by analyzing the redo log with Oracle LogMiner, and eventually got it working.
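(For context, that LogMiner approach boils down to something like the following sketch; the log file path and the schema/table filter are placeholders:)

-- Register a redo/archive log, start mining, read the reconstituted SQL
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    logfilename => '/u01/app/oracle/arch/arch_0001.arc',   -- placeholder path
    options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

SELECT scn, operation, table_name, sql_redo
  FROM v$logmnr_contents
 WHERE seg_owner = 'SOME_SCHEMA'      -- placeholder filter
   AND table_name = 'TEST';

BEGIN
  DBMS_LOGMNR.END_LOGMNR;
END;
/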
Oracle Streams seems to be the most appropriate way of achieving my purpose, but its setup steps are too complicated and manual. In fact, there is an open-source tool for MySQL published by Alibaba named canal: canal presents itself as a slave, so MySQL dumps the binlog and pushes it to the canal service, and canal then reconstitutes the original SQL from the binlog.
I think an Oracle standby database is like a MySQL slave, so the probe could be implemented in a similar way. So I want to use the Data Guard approach, but I don't want to analyze the redo log myself, since that needs privileges to shut down the database and enable some features, and in production I only have a read-only user. I want to use a logical standby database, but the problem is that I don't see how to get the reconstituted SQL statements described above.
So, can any pros make some suggestions?
Anyway thanks a lot.

Dynamic Audit Trigger

I want to keep logs of all tables in one single log table. If any DML operation runs against any table inside the DB, it should be logged in that one table.
But there should be a dynamic trigger which does not hard-code the column names for every table.
Is there any solution for this?
Regards,
Somdutt Harihar
"Is there any solution for this"
No. This is not how databases work. Strongly enforced data structures are what they do, and that applies to audit tables just as much as transaction tables.
The reason is quite clear: the time you save not writing audit code specific to each transactional table is the time you will spend writing a query to retrieve the audit records. The difference is, when you're trying to get the audit records out you will have your boss standing over your shoulder demanding to know when you can tell them what happened to the payroll records last month. Or asking how long it will take you to produce that report for the regulators, are you trying to make the company look like a bunch of clowns? You get the picture. This is not where you want to be.
Also, the performance of a single table to store all the changes to all the tables in the database? That is going to be so slow, you have no idea.
The point is, we can generate the auditing code. It is easy to write some SQL which interrogates the data dictionary and produces DDL for the target tables and triggers to populate those tables.
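As a hedged illustration of that generation idea (the _JN suffix and metadata columns are just one convention), a query over the data dictionary can emit the journal-table DDL for each transactional table:

-- Emit a CREATE TABLE for a matching journal table per transactional table:
-- same columns plus who/when/what metadata. Review the output before running it.
SELECT 'CREATE TABLE ' || table_name || '_JN AS '
    || 'SELECT CAST(NULL AS VARCHAR2(30)) jn_user, '
    ||        'CAST(NULL AS DATE) jn_date, '
    ||        'CAST(NULL AS VARCHAR2(6)) jn_operation, t.* '
    || 'FROM ' || table_name || ' t WHERE 1 = 0' AS ddl
  FROM user_tables
 WHERE table_name NOT LIKE '%\_JN' ESCAPE '\';

The triggers that populate those journal tables can be generated the same way from USER_TAB_COLUMNS.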
In fact it gets even easier in 11.2.0.4 and later because we can use FLASHBACK DATA ARCHIVE (formerly Oracle Total Recall) to build and maintain such journalling functionality automatically, and query it automatically with the as of syntax. Find out more.
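A minimal Flashback Data Archive sketch (the tablespace, retention and the payroll table are just example values):

-- One-off setup: create the archive and attach the tables you care about
CREATE FLASHBACK ARCHIVE audit_fda
  TABLESPACE users
  RETENTION 1 YEAR;

ALTER TABLE payroll FLASHBACK ARCHIVE audit_fda;

-- Querying history then needs no custom audit code at all
SELECT * FROM payroll AS OF TIMESTAMP SYSTIMESTAMP - INTERVAL '1' DAY;

SELECT * FROM payroll
  VERSIONS BETWEEN TIMESTAMP SYSTIMESTAMP - INTERVAL '7' DAY AND SYSTIMESTAMP;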
Okay, so technically there is a solution. You could have a trigger on each table which executes some dynamic PL/SQL to interrogate the data dictionary and assembles a piece of JSON which you stuff into your single table. The single table could be partitioned by day range and sub-partitioned by table name (assuming you have licensed the Partitioning option) to mitigate the performance of querying it.
But that is extremely complex. Running dynamic PL/SQL for every DML statement will have a bad effect on performance, which the users will notice. And this still doesn't solve the fundamental problem of retrieving the audit trail when you need it.
To audit DML actions on any table, just enable such auditing using the following code:
audit insert table, update table, delete table;
All actions on tables will then be logged and can be seen in the SYS.DBA_AUDIT_OBJECT view.
Audit will only log timestamp, user, host and other params, not exact copies of new or old rows.

Is it possible in oracle to trigger a SAS program after an insert or update on oracle table?

I'm new to Oracle and I saw that Oracle triggers can fire some action after an update or insert is done on an Oracle table.
Is it possible to trigger a SAS program after every update or insert on an Oracle table?
There are a few different ways to do this, but a problem like this is an example of the saying "Just because you can, doesn't mean you should".
So sure, your trigger can be fired on update or insert and that can call a stored procedure in a package which can use the oracle host command to call an operating system command which can call SAS.
Here are some questions:
do you really want to install SAS on the same machine as your Oracle database?
do you really want every transaction that inserts or updates to have to wait until the host command completes? What if SAS is down? Do you want the transaction to complete or.....?
do you really want the account that runs the database to have privileges to start up or send information to other executables? Think security risks.
if an insert affects one record, the action is clear. What if an update affects a thousand records? What message do you want to send to SAS? One thousand update statements? One update statement?
There are better ways to do this but a complete answer needs more details from you as to the end goal and business logic involved. Some ways I have used include:
trigger inserts data into an Oracle advanced queue (a sketch of this option follows the list). At predetermined intervals, take the changes off the queue and write them to a flat file. Write a file watcher to look for the files and send the info to SAS.
write a Java program to take the changes and ship them
use the APEX web service and expose the changes as a series of JSON or REST packets.
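A hedged sketch of the first option; the queue names, payload type and the emp table/trigger are all invented for the example:

-- One-off setup: a payload type, a queue table and a queue
CREATE TYPE change_msg_t AS OBJECT (
  table_name VARCHAR2(30),
  pk_value   NUMBER,
  changed_on DATE
);
/

BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(
    queue_table        => 'change_qt',
    queue_payload_type => 'CHANGE_MSG_T');
  DBMS_AQADM.CREATE_QUEUE(queue_name => 'change_q', queue_table => 'change_qt');
  DBMS_AQADM.START_QUEUE(queue_name => 'change_q');
END;
/

-- Row trigger enqueues a small message instead of calling SAS directly
CREATE OR REPLACE TRIGGER emp_changes_trg
AFTER INSERT OR UPDATE ON emp
FOR EACH ROW
DECLARE
  l_enq_opts DBMS_AQ.ENQUEUE_OPTIONS_T;
  l_msg_prop DBMS_AQ.MESSAGE_PROPERTIES_T;
  l_msgid    RAW(16);
BEGIN
  DBMS_AQ.ENQUEUE(
    queue_name         => 'change_q',
    enqueue_options    => l_enq_opts,
    message_properties => l_msg_prop,
    payload            => change_msg_t('EMP', :NEW.empno, SYSDATE),
    msgid              => l_msgid);
END;
/

A separate scheduled consumer then dequeues at its own pace and writes the flat file that the SAS-side file watcher picks up; the transaction only pays for a fast enqueue.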

Oracle trace all SELECTS

I need to do a task but I have no idea how to do it.
Here is the problem:
I have about 1000 tables on an Oracle database and many processes.
Each process does one or more SELECT on one or many tables.
Because it's almost impossible to look in the source code to find which process does which SELECT on which tables, I would like to have some kind of trigger on SELECT on every table.
The idea is that I will launch the processes one by one to be able to see which tables they query.
I know that there is no trigger on SELECT, but is there anything else?
I need to do this as a one-shot, just to recover the necessary info; it will not run every day.
You could activate auditing. You can audit all SELECT with:
AUDIT SELECT TABLE;
You can specify BY SESSION so that only one record will be written to the audit trail per table accessed per session.
Your AUDIT_TRAIL parameter must be set to either DB or OS. If it is set to DB, the audit trail will be written to the SYS.AUD$ table.
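For example (the owner filter is just an illustration):

-- Audit one record per table per session, then run the process
AUDIT SELECT TABLE BY SESSION;

-- Afterwards, see which tables each session touched
SELECT username, obj_name, ses_actions, timestamp
  FROM dba_audit_object
 WHERE owner = 'APP'
 ORDER BY timestamp;

-- Clean up when done
NOAUDIT SELECT TABLE;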
Assuming that you can map a "process" in your terminology to a particular Oracle session, you could trace the Oracle session. That would show you all the SQL statements executed by that session.
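For instance, once you know the SID and serial# of the session running the process (the values below are placeholders), DBMS_MONITOR can switch tracing on and off:

-- Find the session in V$SESSION first, then enable tracing for it
BEGIN
  DBMS_MONITOR.SESSION_TRACE_ENABLE(
    session_id => 123,     -- placeholder SID
    serial_num => 4567,    -- placeholder serial#
    waits      => TRUE,
    binds      => FALSE);
END;
/

-- ... let the process run ...

BEGIN
  DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 4567);
END;
/

The resulting trace file in the diagnostic trace directory can then be summarised with tkprof to list every statement the session ran.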
You could also potentially do a SQL*Net trace from whatever the client machine is (note that the "client machine" in a three-tier environment is the application server). A SQL*Net trace tends not to be nearly as easy to work with, however.
