Configured Oracle GoldenGate DML replication, both source and target are Oracle databases

I have configured Oracle DML replication using Oracle GoldenGate successfully, but is there any query to check whether the source and target are in sync, and how can I verify it?

No replication tool has the functionality to check if the databases are in sync. The idea of asynchronous replication is that it is never fully in sync - the target always lags behind the source database. Only fully synchronous disk replication gives a fully in-sync copy of the data.
You might want to check whether the "not recently changed" data is the same using a compare-every-row technology. Oracle has a product called Veridata which can do that job.
You might also want to check that the replication stream is working (i.e. it is not stopped). But this does not verify that the data is in sync: someone might modify the target data directly and you would not detect it. Heartbeat technology just checks that the replication stream is not broken. OGG 12.2 has special built-in commands for that.
Please check:
the ADD HEARTBEATTABLE command in GGSCI
the ENABLE_HEARTBEAT_TABLE parameter for the processes
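To illustrate, here is a minimal sketch of how those 12.2 heartbeat pieces fit together (the exact setup depends on your environment, so treat this as an outline rather than a runbook):

-- In GGSCI (OGG 12.2+), after DBLOGIN as the GoldenGate administrator:
--   ADD HEARTBEATTABLE
--   INFO HEARTBEATTABLE
-- and add ENABLE_HEARTBEAT_TABLE to the Extract/Replicat parameter files.

-- ADD HEARTBEATTABLE also creates lag views in the GoldenGate administrator
-- schema on the target; querying them shows end-to-end lag per replication path:
SELECT * FROM gg_lag;            -- current lag per path
SELECT * FROM gg_lag_history;    -- historical lag, useful for trending

Remember that this only tells you the stream is flowing and how far behind it is, not that every row matches.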

Related

Is it possible to apply archivelogs from standby DB to Production DB?

I lost two hard disks yesterday. One contained an Oracle datafile and the other contained part of the archive logs generated in the past 2 days (say, arch_5.dbf and arch_6.dbf are lost, out of the set arch_1 to arch_10).
I have switched over to my standby site as part of business continuity plan.
Now I have to recover the missing datafile, and the recovery requires the two missing archive log files.
Is it possible to apply the same set of archive logs from the standby to production, in order to recover the datafile?
Kindly advise.
~SK~
It might prove a bit easier to use RMAN incremental backups to refresh a standby database.
You could use the archives that are transported to the standby site, but they won't help with the recovery of the lost datafiles unless the datafiles' creation is recorded in those archives.
Using the incremental backup option is easier.
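As a rough sketch of that roll-forward technique (the SCN and paths below are placeholders, and the exact steps depend on your Data Guard configuration, so check the "Using RMAN Incremental Backups to Roll Forward a Physical Standby Database" documentation):

-- Step 1 (on the out-of-date database): note the SCN to roll forward from.
SELECT current_scn FROM v$database;

-- Step 2 (on the up-to-date database, in RMAN): take an incremental backup
-- starting at that SCN; path and SCN are placeholders.
--   BACKUP INCREMENTAL FROM SCN 1234567 DATABASE FORMAT '/backup/fwd_%U';

-- Step 3 (back on the out-of-date database, in RMAN): catalog the pieces and apply them.
--   CATALOG START WITH '/backup/fwd';
--   RECOVER DATABASE NOREDO;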

Manually logging database events in a DataStage job

I have a parallel job that writes to an Oracle table. I want to manually write warnings to DataStage's log if certain events occur. For example, if a certain value for a certain column is inserted, I want to track this information in the log. Can this be achieved somehow?
To write custom messages into the log for a particular job's data stream, you can use a combination of a Copy stage, a Transformer, and a Peek stage. The Peek stage is the one that writes to the log. I like to set the Peek stage to run in sequential mode, so that your messages are kept together in single log entries instead of spread across nodes.
Also, you can peek the rejects of the Oracle stage, and perhaps combine this with the above option (using a Funnel stage and a standard column schema).
Lastly, if you'd actually like to query the logs themselves and write them out somewhere else or use them in a job (among all the other data kept about jobs in the repository), you can directly query the DSODB schema in the XMETA database, i.e. the DataStage repository (DB2 by default).
You would need the DataStage Operations Console up and running for that (I'm not sure which version of DataStage you're running). If DataStage is running on a single tier and using the default DB2 database, you can simply catalog the DSODB database so that it's available as a connection in the DB2 Connector. Otherwise you'd need to install a DB2 client on the DataStage engine tier and catalog the database there.
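As a purely illustrative sketch of such a query (the DSODB table and column names vary by Information Server version, so treat every name below as an assumption and check the Operations Console data model for your release):

-- Hypothetical sketch only: table and column names are assumptions, not a documented API.
SELECT *
FROM   DSODB.JOBRUNLOG          -- run-level log messages in the Operations Console schema
WHERE  LOGTYPE = 'WARN'         -- filter value is an assumption; adjust to your schema
ORDER  BY LOGTIMESTAMP DESC
FETCH FIRST 50 ROWS ONLY;       -- DB2 syntax, since XMETA is DB2 by default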
All the best!
Twitter: #InforgeAcademy
DataStage tips and Tricks: https://www.inforgeacademy.com/blog/

Oracle -> Postgresql Log-Based replication

(To make things clear: I don't write code myself.)
I am looking for a solution that would allow replicating data between a master Oracle 11g DB and a new PostgreSQL DB. These are two different applications, but they need to exchange data in real time. There are some trigger-based approaches, but there is quite a big concern that they could affect the master DB's performance, which we can't accept.
I have also come across some log-based solutions, like HVR, but the cost is way too high for 500MB of data to be replicated.
Has anyone of you had a similar issue and found a way to deal with it?
Any tips and help will be really appreciated, as I am quite short on time.
Oracle archive logs have a different format than Postgres write-ahead logs. Despite the general conceptual similarity between Oracle Streams, SQL Server log shipping, Postgres streaming replication, etc., transaction logs <> redo logs <> xlogs, and you can't use one vendor's logs to roll forward another vendor's engine.
Moreover, you can't even roll logs across different versions of the same vendor's database because of differences in the binary format.
Something like logical replication can be achieved with Postgres logical decoding, Oracle GoldenGate, heterogeneous database replication, or AWS DMS. But none of the above gives you literal "log-based replication" between different database vendors.
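For the PostgreSQL side of things, a minimal logical-decoding sketch looks roughly like the following (the slot name is illustrative; note this decodes changes coming out of Postgres, so for the Oracle-to-Postgres direction you still need one of the CDC tools mentioned here to read Oracle redo):

-- Requires wal_level = logical (and a free replication slot) in postgresql.conf.
-- Create a logical replication slot with the built-in test_decoding output plugin:
SELECT * FROM pg_create_logical_replication_slot('audit_slot', 'test_decoding');

-- Peek at the decoded change stream (INSERT/UPDATE/DELETE as text) without consuming it:
SELECT * FROM pg_logical_slot_peek_changes('audit_slot', NULL, NULL);

-- Drop the slot when finished so WAL is not retained indefinitely:
SELECT pg_drop_replication_slot('audit_slot');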
You can use a product that specializes in change-data-capture-based data integration. Striim, GoldenGate, and Attunity allow you to do CDC from Oracle. Striim also allows you to do CDC from PostgreSQL and write to Oracle as well.
https://striim.com
https://attunity.com

How to check that an H2 database is not corrupted?

The H2 database is not very stable (but very fast, which is very good for development), especially during the development process. I hope that the number of corruptions is due to the immediate shutdown of the server (during debugging).
How can I ensure that an H2 database is not corrupted, in order to guarantee that a backup is good?
Probably the best way to check if everything is OK is to create a SQL script from the database, using the SCRIPT statement. If that works, then the data is fully readable. The index data might still be corrupt, but indexes can be re-created.
Another option is to always backup the data in the form of a SQL script. This will make a separate check unnecessary; but backup is a bit slower and can't be done online (while updates are happening).
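For reference, the SCRIPT/RUNSCRIPT approach in plain SQL looks roughly like this (file names are placeholders):

-- Consistency check / textual backup: dump the whole database to an SQL script.
-- If this completes without errors, all table data is readable.
SCRIPT TO 'backup.sql';

-- Optionally compress the dump:
SCRIPT TO 'backup.zip' COMPRESSION ZIP;

-- Restore into a fresh database file to get a clean, rebuilt copy (indexes included):
RUNSCRIPT FROM 'backup.sql';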
By the way: if a database file gets corrupt, it's due to misconfiguration or wrong usage (H2 supports disabling the transaction log), due to hardware failure, or due to a bug in the database engine itself.

Locking entire database while running a delayed job

My delayed job exports a slightly edited version of most of the tables in the app's database, and while it runs it is critical that none of the current data is being edited.
Is it possible to lock the entire database while running this delayed job?
More Information:
The database to be exported is PostgreSQL, Heroku Postgres to be more specific.
The flow is something like (all of the below should be done automatically by the code):
the site is put in maintenance mode,
the database is frozen and then exported, then
when the export is complete, the site is re-activated.
Given there is not a lot of information with your question, I am going to answer you as best I can.
1) What is the database type and model? Is it a standalone DB like MS Access or Informix SE?
2) If it is not a standalone engine, does this database support replication? I used to work a lot with MS SQL Server, and replication had implications while the database was live and being edited, namely whether edited data was replicated. In this case, consult the docs. Is it an option to use replication to preserve the current database?
3) What kind of task is this? It sounds like maintenance. Our Informix SE databases lock when being imported or exported. On the production server, it is my job to make sure no local server applications are trying to access the locked DB, and that our external payments web site cannot interfere while the db is locked.
4) If this is a production site that is not in maintenance mode, then I suggest you probably do not want to lock an entire database.
I am sorry for not answering your question directly, but more information is needed, such as whether you are asking if this can be done from the Ruby DB interface on some particular database.
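That said, since the question does mention Heroku Postgres: if the export truly must see frozen data, a Postgres-level sketch (table names are placeholders) is to run the export inside one transaction and lock the tables explicitly. Note that a plain pg_dump already produces a transactionally consistent snapshot without blocking writers, so explicit locking is often unnecessary.

-- A Postgres-level sketch; table names are placeholders.
BEGIN;
-- Blocks all other access to these tables, including reads, until the transaction ends:
LOCK TABLE orders, customers IN ACCESS EXCLUSIVE MODE;

-- Run the export here, e.g. stream a table out as CSV:
COPY orders TO STDOUT WITH (FORMAT csv, HEADER true);

COMMIT;  -- locks are released when the transaction ends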
