Debezium Oracle connector CDC usage on top of Kafka, and backup / Data Guard (Oracle)

We are trying to use the Oracle Debezium connector (1.9b) on top of Kafka.
We have tried two settings for snapshot.mode: schema_only and initial.
We use "log.mining.strategy":"online_catalog" (which should be the default).
We are using a PDB/CDB Oracle instance on Oracle 19c.
My understanding is that:
The connector creates a session to the PDB.
It adds a shared lock to ensure the structure will not change for a short duration.
The DDL structure is retrieved from the PDB.
It creates a session to the CDB.
It retrieves the last LSN event from the CDB.
If snapshot.mode == initial, it uses a JDBC query to retrieve the whole data set from the PDB.
It does NOT seem to release the session (or rather the process) it opened to the PDB.
It continues to mine new events from the CDB.
... and it seems to work for a couple of minutes.
After a couple of minutes, the number of processes increases drastically.
The Oracle database freezes due to an excess of processes (which you can follow using v$process).
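A quick way to watch that growth while it happens (a sketch, nothing Debezium-specific; V$RESOURCE_LIMIT and V$PROCESS are standard dynamic views):
select resource_name, current_utilization, max_utilization, limit_value
from v$resource_limit
where resource_name in ('processes', 'sessions');
select count(*) from v$process;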
We got a lot of error messages, such as:
A. Failed to resolve Oracle database
B. IO Error: Got minus one from a read call
C. ORA-04025: maximum allowed library object lock allocated
D. None of log files contains offset SCN: xxxxxxx
The message in point D says it tried to use an offset that was part of "an old" archived log.
Every 30 minutes (or sooner, if we have more activity), the log switches from one file to another.
And a backup runs every 30 minutes, which reads the archived logs, backs them up and then deletes them.
It seems to me that Debezium tried to reach a past archived log that had already been deleted by the backup process.
The practice of deleting previously archived logs once they are backed up seems "correct" to me, isn't it?
Why does Debezium try to go through the archived logs? With snapshot.mode == schema_only it should only capture new events, so why does it need the archived ones?
How can we manage this?
I hope that if this point is resolved in my use case, Debezium will stop "looping", creating new processes, and ultimately will stop blocking the Oracle DB.
If you have any clues or opinions, don't hesitate to share them. Many thanks.
We tried both a shared lock and none.
We tried to limit the number of tables in scope.
I cannot ask for the backup to be stopped: in production it's not a good idea, and in test the backup seems to be there mainly to clean up the archived logs and avoid running out of storage.
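For reference, one way to check whether the offset SCN from error D is still covered by an archived log that the backup has not yet deleted (a sketch; &offset_scn is a SQL*Plus substitution placeholder for the SCN printed in the error):
select name, sequence#, first_change#, next_change#, status, deleted
from v$archived_log
where &offset_scn between first_change# and next_change#
order by sequence#;
If the rows covering that SCN show DELETED = YES (or no rows come back at all), there is no redo left for the connector to mine from that point, which matches error D.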

Related

IMPDP uses more disk space than expected

Background:
I've been tasked with importing a large amount of data from a production database to a test database (Oracle 12c release 2 running on RHEL) and I'm using Data Pump.
The first time I imported the tables, the tables were created and the data was imported as planned, but, due to an issue in my Data Pump parameter file, the constraints were not imported.
My subsequent attempts did not go as well, however. Data Pump began to freeze partway through, and the STATUS command showed that no bytes were being processed.
My Solution Attempts:
I tried using TABLE_EXISTS_ACTION=REPLACE and dropping the tables directly after an attempt. I also dropped the master tables of any data pump jobs I was unable to terminate from the utility.
Still, it seemed to hang earlier and earlier in the process as I repeatedly tried to import the tables. df -h returned 100% disk usage every time it hung.
The dump file itself is on a separate drive so it's no longer taking up room. I've been trying to clear out space but it keeps filling up when I run a job and I can't tell where. Oracle flashbacks are disabled and I made sure to purge the oracle recycle bin.
tl;dr:
Running impdp jobs seems to use up disk space beyond the imported tables and the job master tables. Where is this space being used, and what can I do to clear it for a successful import?
I figured out the problem:
The database was in archivelog mode in order to set up streams and recovery manager backups. As a result, impdp was causing a flood of archived database changes.
In order to clear out the old archives, I ran the following in RMAN for every database in noarchivelog mode on the server.
connect target /
run {
allocate channel c1 type disk;
delete force noprompt archivelog until time 'SYSDATE-30';
release channel c1;
}
This cleared up 60 gigabytes. I also added the parameter transform=disable_archive_logging:Y to my impdp parameter file. This disables redo logging for the objects being imported, which suppresses the flood of archive generation during Data Pump imports.
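For completeness, a minimal parameter-file sketch with that transform (the directory, dump file and schema names below are placeholders, not values from the original post):
DIRECTORY=dp_dir
DUMPFILE=prod_export.dmp
SCHEMAS=app_schema
TABLE_EXISTS_ACTION=REPLACE
TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y
Note that this transform is silently ignored if the database is in FORCE LOGGING mode, which is common on Data Guard primaries.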

Database stopped when running 500 queries per second

I built a chat application in which the chat page is reloaded every second through AJAX,
and I used the DB2 Express-C database for storing messages.
One day, 500 users used this app at the same time, and the database stopped working.
Does running 500 queries at a time in one second have any effect on the database?
Please tell me how to run queries every second without affecting the database's functionality.
The red mark on the DB2 icon means that the instance stopped working. The issue is probably related to a memory problem or something else.
You have to check the db2diag.log file and look for messages. It is highly probable that you have information from the time the instance stopped. The first failure data capture (FFDC) feature collects all of that information in the diag directory when a crash occurs.
In order to fix the problem, you just need to restart DB2. You could create a task that checks whether the instance is up and, if not, tries to restart it. However, this is the wrong way to keep DB2 up.
You should look at what happened at the time DB2 crashed. Probably the memory needed for the 500 agents was too high, and DB2 could not reserve any more memory.
Are you running other processes on the same DB2 server? Perhaps one of them corrupted the DB2 memory.
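A minimal sequence for that, as a sketch (the diagnostic log path shown is the common default for the instance owner on Linux and may differ on your system):
db2 get dbm cfg | grep -i diagnostic        # shows the DIAGPATH setting
tail -n 200 ~/sqllib/db2dump/db2diag.log    # inspect the most recent entries
db2start                                    # restart the instance once the cause is understood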

How does Oracle manage Redo logs?

Can anybody give me an idea about redo logs? An example would be most appreciated.
As Oracle changes data in a datafile, it writes out information to the redo log. In the event of a database failure, you can use this information to get the database back to the point it was before the database failure.
In a disaster recovery scenario, you could restore your last full database backup, and then apply the redo logs taken since that last backup to get the database recovered. Without those redo logs, you could only recover to the last full backup, and changes made since then would be lost.
In Oracle, you can also run in "no archive log mode", which basically means, "redo logs can be overwritten without being saved". This is generally only acceptable for a development database where you don't care about losing data since the last backup. You wouldn't typically run in this mode in a production environment, as it could be disastrous.
Here's a reference link with more info, and also an example of how you can find out the amount of generated redo.
http://www.adp-gmbh.ch/ora/concepts/redo_log.html
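For a quick look, a rough way to see how much redo the instance has generated since startup (a sketch; 'redo size' is the statistic name in V$SYSSTAT, reported in bytes):
select name, value
from v$sysstat
where name = 'redo size';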
A definitive answer from the documentation: http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/onlineredo.htm#sthref850
To expand on @dcp's answer: technically, @dcp is referring to archived redo logs. These are optional, and as stated are only produced when running the database in archivelog mode. Every Oracle database has at least two mandatory online redo log files. These track all changes to the database. They are essential for recovery if the database crashes unexpectedly, whereas archived logs are not. Oracle uses the online redo log files to transparently bring the database back to the most recently committed state in the event of a system crash. Archived logs are used during recovery from a backup: the backup is restored, then archived logs are applied to the backup to bring the database back to its current state or to some prior point in time.
The online logs are written to in a circular fashion: as one fills up, the next one is "switched" to. If archivelog mode is set, these older logs are written to the archive log destination(s). If not, they are overwritten as needed, once the changes they track have been written to the datafiles.
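You can watch that rotation directly (a sketch; CURRENT marks the group being written to, and ARCHIVED shows whether a filled group has been copied to the archive destination yet):
select group#, sequence#, status, archived
from v$log
order by group#;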
This overview of backup and recovery at Oracle's site is pretty good to give one an idea of how the whole thing is put together.

Rolling forward the archivelog and online redo logs to the restored database

I'm currently using Oracle Database 11g on Red Hat Enterprise Linux 5.0.
I take an incremental level 0 backup once a week and an incremental level 1 backup every day.
I can restore this backup on my new Linux server without any problems because I have all the archive logs generated after the level 1 backup.
However, if the online redo log is not yet full (I mean that I still have some redo information in the online log), how can I use this online log to roll the restored database forward on the new Linux server?
I don't want to lose the valuable information that is not yet archived.
Best regards,
Sarith
Restore your backed up files.
Copy your current online redo log files (from the "damaged" production instance) to the new server.
RECOVER DATABASE;
This scenario assumes you have total continuity with archived logs and online logs. In doing the recovery, Oracle will apply necessary archived redo, then move to the online redo logs to recover to the point of failure. Important! Don't restore online redo logs from the backup you have! Use the current online logs from your crashed instance.
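As a rough SQL*Plus sketch of those steps, assuming the datafiles and the current controlfile have already been copied into place at the OS level:
startup mount;
recover database;
alter database open;
Because this is a complete recovery with the current controlfile, the database can be opened without RESETLOGS.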
Finally, don't believe anything you read without practicing it for yourself!
Yes, you can use the unarchived logs. If you are applying the archive logs via "recover database using backup controlfile", just supply the online redo log name instead of the suggested archive log name when the recovery process gets to that point (i.e. "runs out" of archive logs).
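In SQL*Plus that looks roughly like this (a sketch; the redo log path is a placeholder for your own file):
recover database using backup controlfile until cancel;
-- at the prompt, when the suggested archived log no longer exists,
-- enter the full path of the current online redo log, e.g. /u01/oradata/ORCL/redo01.log
alter database open resetlogs;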
So you mean you duplicate the database to another server using RMAN?
Online redo logs are only used for disaster recovery. For instance: you lose a datafile, restore the datafile from your latest backup, and apply the archive logs and finally the online redo logs. This brings the restored datafile to the same SCN (System Change Number) as the controlfile (and the other datafiles). Disaster recovery complete.
When you use your backups to duplicate the database on another server, you can only roll forward using your archived logs. That is an incomplete recovery by definition (it creates a new controlfile and new redo logs).
Do a
SQL> alter system switch logfile;
before the backup?
But no matter what, the restore will be behind the source database if it stays open. I don't know your business case exactly, but Data Guard might be an option for you.
Rob

Oracle recovery problem

I executed this command and the following errors occurred:
recover database until time '2009-01-09 12:26';
ORA-00283: recovery session canceled due to errors
ORA-19907: recovery time or SCN does not belong to recovered incarnation;
How can I solve this problem?
There's a suggested action here
I'm not an Oracle user, so that's as far as I can go. Just Googling the code found that though.
You didn't say if you're trying this recovery on the original "in-place" database or on a restored copy somewhere else. Some of your options depend on which situation applies.
One quick thing to do to give yourself some direction is to startup and mount the database and then issue a "recover database until cancel" (optionally specifying "using backup controlfile" if you restored a controlfile as part of the restore). Then examine the details (time and change# ) of the first archive log that Oracle asks you to apply. Compare that to the information in the V$DATABASE view - remember that you can query the V$ views in the mounted state.
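A concrete starting point, as a sketch (run with the database mounted; add "using backup controlfile" to the recover command if you restored a controlfile):
startup mount;
select resetlogs_change#, resetlogs_time from v$database;
recover database until cancel;
-- note the time and change# of the first archived log Oracle asks for,
-- and compare them with the values returned by the query above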
