Oracle recovery problem

I executed the following command and got these errors:
recover database until time '2009-01-09 12:26';
ORA-00283: recovery session canceled due to errors
ORA-19907: recovery time or SCN does not belong to recovered incarnation
How can I solve this problem?

There's a suggested action here.
I'm not an Oracle user, so that's as far as I can go. Just Googling the error code turned that up, though.
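For reference, ORA-19907 means the UNTIL time/SCN falls outside the incarnation being recovered, and the documented action boils down to recovering within the right incarnation. A minimal RMAN sketch (the incarnation key 2 is illustrative; pick the one whose RESETLOGS time covers your target time):
RMAN> LIST INCARNATION OF DATABASE;
# with the database mounted, point recovery at the matching incarnation
RMAN> RESET DATABASE TO INCARNATION 2;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE UNTIL TIME "TO_DATE('2009-01-09 12:26:00','YYYY-MM-DD HH24:MI:SS')";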

You didn't say if you're trying this recovery on the original "in-place" database or on a restored copy somewhere else. Some of your options depend on which situation applies.
One quick thing to do to give yourself some direction is to start up and mount the database, then issue a "recover database until cancel" (optionally specifying "using backup controlfile" if you restored a controlfile as part of the restore). Then examine the details (time and change#) of the first archive log that Oracle asks you to apply. Compare that to the information in the V$DATABASE view - remember that you can query the V$ views in the mounted state.
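A sketch of that check (standard V$ views; including V$DATABASE_INCARNATION here is my addition, since that is where the incarnation history lives):
SQL> STARTUP MOUNT;
SQL> RECOVER DATABASE UNTIL CANCEL USING BACKUP CONTROLFILE;
-- note the time and change# of the first suggested archive log, then type CANCEL
SQL> SELECT resetlogs_change#, resetlogs_time FROM v$database;
SQL> SELECT incarnation#, resetlogs_change#, resetlogs_time, status FROM v$database_incarnation;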

Related

Debezium Oracle Connectors CDC usage on top of Kafka and Backup / DataGuard

We are trying to use the Oracle Debezium connector (1.9b) on top of Kafka.
We tried two settings for snapshot.mode: schema_only and initial.
We use "log.mining.strategy":"online_catalog" (should be the default).
We are using a PDB/CDB Oracle instance on Oracle 19c.
My understanding is that:
1. The connector creates a session to the PDB.
2. It adds a shared lock to ensure the structure will not change for a short duration.
3. The DDL structure is retrieved from the PDB.
4. It creates a session to the CDB.
5. It retrieves the last LSN event from the CDB.
6. If snapshot.mode == initial, it uses a "JDBC query to retrieve the whole data" from the PDB.
7. It does NOT seem to release the session (or rather, process) opened against the PDB.
8. It continues to mine new events from the CDB.
9. ... and it seems to work for a couple of minutes.
After a couple of minutes, the number of processes increases drastically.
The Oracle database freezes, due to an excess of processes (which you can follow using v$process).
We got a lot of error messages, like:
A. Failed to resolve Oracle database
B. IO Error: Got minus one from a read call
C. ORA-04025: maximum allowed library object lock allocated
D. None of log files contains offset SCN: xxxxxxx
The message in point D said it tried to use an offset that was part of "an old" archived log.
Every 30 min (or sooner, if we have more activity), the log switches from one file to another.
A backup runs every 30 minutes, which reads the archived logs, backs them up, and then deletes them.
It seems to me that Debezium tried to reach a past archived log that had already been deleted by the backup process.
The process of "deleting previous archived logs" seems "correct" to me, doesn't it?
Why does Debezium try to go through the archived logs? When snapshot.mode == schema_only it should only catch new events, so why use the archived ones?
How can we manage this?
I hope that if this point is resolved in my use case, Debezium will stop "looping", creating new processes, and ultimately will stop blocking the Oracle DB.
If you have any clues or opinions, don't hesitate to share them. Many thanks.
We tried both locking modes, shared and none.
We tried limiting the number of tables in scope.
I cannot ask for the backup to be stopped: in production that's not a good idea, and in test the backup seems to be there only to clean up the archived logs and avoid ending up with completely full storage.
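For anyone reproducing this, a quick way to watch the runaway-process symptom described above (standard V$ views; the grouping is just one way to slice it):
SQL> SELECT COUNT(*) FROM v$process;
SQL> SELECT username, program, COUNT(*) FROM v$session GROUP BY username, program ORDER BY COUNT(*) DESC;
Comparing the first count against the PROCESSES initialization parameter shows how close the instance is to the limit that causes the freeze.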

Oracle online recovery does not change archived db

I created a DB with DBCA, choosing ASM, fast recovery area, and archivelog mode.
Now I want to back it up, perform some tests that would change the content of the DB, and, if the need arises, recover it from the backup.
I know of the export/import utilities, but I need to use RMAN in case I need to move the DB.
I followed these tutorials, with some caveats, and most of the commands succeeded:
https://www.thegeekstuff.com/2013/08/oracle-rman-backup/
https://www.thegeekstuff.com/2014/11/oracle-rman-restore/
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
SQL> SHUTDOWN
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM "+DG1/<DB_NAME>/CONTROLFILE/CURRENT.<3_DIGIT_NUMBER>.<10_DIGIT_NUMBER>";
(before one of these commands I mounted the DB with SQL> STARTUP MOUNT, because exclusive access was needed)
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;
*the last one didn't run successfully; the output said it can only be run after an incomplete recovery
The change I made to check the backup and recovery was dropping a table on one occasion and inserting a record on another.
The problem is that, after checking, the DB hadn't returned to its previous state.
"only if badly recovered can be run" is not a known error message.
Nothing changed, because you didn't put a limitation on the 'recover' step. So it recovered right back through all the online redo - right back to where it was an instant before you shut it down to do the restore/recover. You need to look at the SET UNTIL command in the RMAN manuals. SET UNTIL a point in time or SCN prior to the activity that you expected to be gone after the restore/recover.
This is exactly as expected, and exactly what you would do in case of a disaster recovery where you do not want any data loss. In your case you do not want complete recovery, but a Point In Time (PIT) recovery.
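A minimal sketch of such a point-in-time restore/recover (the timestamp is illustrative; pick one just before the changes you want to undo):
RMAN> RUN {
  SET UNTIL TIME "TO_DATE('2014-11-01 10:00:00','YYYY-MM-DD HH24:MI:SS')";
  RESTORE DATABASE;
  RECOVER DATABASE;
}
RMAN> ALTER DATABASE OPEN RESETLOGS;
After an incomplete recovery like this, the OPEN RESETLOGS that previously refused to run is exactly what is required.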

Oracle Database Copy Failed Using SQL Developer

A few days ago, while trying to perform a database copy from a remote server to a local server, I got some warnings, and one of them was like this:
"Error occured executing DDL for TABLE:MASTER_DATA".
I then clicked Yes, but the result of the database copy was unexpected; only a few tables had been copied.
When I tried to view the DDL from the SQL section/tab of one of the tables, I got this kind of information:
-- Unable to render TABLE DDL for object COMPANY_DB_PROD.MASTER_DATA with DBMS_METADATA attempting internal generator.
I also got the message below, and I believe it showed up because there's something wrong with the DDL in my database, so the tables won't be created:
ORA-00942: table or view does not exist
I've never encountered this problem before, and I have performed this database copy every day for the last two years.
For the record, before this problem occurred I removed old .arch files manually, not via RMAN (I have never used any RMAN commands). I also removed old .xml log files, because these two types of files had filled up my remote server's storage.
How can I trace and fix this kind of problem? Is there any corruption in my Oracle database?
Thanks in advance.
It turned out the problem was caused by a datafile that had reached its maximum size. I resolved it by following the answer in this discussion: ORA-01652: unable to extend temp segment by 128 in tablespace SYSTEM: How to extend?
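For reference, a minimal sketch of that kind of fix (the datafile path is illustrative; query DBA_DATA_FILES for the real one):
SQL> SELECT file_name, bytes/1024/1024 AS mb, autoextensible FROM dba_data_files WHERE tablespace_name = 'SYSTEM';
SQL> ALTER DATABASE DATAFILE '/u01/app/oracle/oradata/ORCL/system01.dbf' RESIZE 2G;
-- or let it grow on demand instead:
SQL> ALTER DATABASE DATAFILE '/u01/app/oracle/oradata/ORCL/system01.dbf' AUTOEXTEND ON NEXT 128M MAXSIZE UNLIMITED;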
Anyway, thank you everyone for the help.

How does Oracle manage Redo logs?

Can anybody give me an idea about redo logs? An example would be most appreciated.
As Oracle changes data in a datafile, it writes out information to the redo log. In the event of a database failure, you can use this information to get the database back to the point it was before the database failure.
In a disaster recovery scenario, you could restore your last full database backup, and then apply the redo logs taken since that last backup to get the database recovered. Without those redo logs, you could only recover to the last full backup, and changes made since then would be lost.
In Oracle, you can also run in "no archive log mode", which basically means, "redo logs can be overwritten without being saved". This is generally only acceptable for a development database where you don't care about losing data since the last backup. You wouldn't typically run in this mode in a production environment, as it could be disastrous.
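As a quick illustration, this is how you check which mode a database runs in and enable archiving (a sketch; the change requires a restart into the MOUNT state):
SQL> SELECT log_mode FROM v$database;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;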
Here's a reference link with more info, and also an example of how you can find out the amount of generated redo.
http://www.adp-gmbh.ch/ora/concepts/redo_log.html
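In case that link goes stale, the classic query for how much redo your own session has generated looks roughly like this (standard V$ views):
SQL> SELECT a.name, b.value FROM v$statname a JOIN v$mystat b ON a.statistic# = b.statistic# WHERE a.name = 'redo size';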
A definitive answer from the documentation: http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/onlineredo.htm#sthref850
To expand on #dcp's answer: technically, #dcp is referring to archived redo logs. These are optional and, as stated, are only produced when running the database in archivelog mode. Every Oracle database has at least two mandatory online redo log files. These track all changes to the database. They are essential for recovery if the database crashes unexpectedly, whereas archived logs are not. Oracle uses the online redo log files to transparently bring the database back to the most recently committed state in the event of a system crash. Archived logs are used during recovery from a backup - the backup is restored, then archived logs are applied to the backup to bring the database back to its current state or some prior point in time.
The online logs are written to in circular fashion - as one fills, the next one is "switched" to. If archive log mode is set, the older logs are written to the archive log destination(s). If not, they are overwritten as needed, once the changes they track are written to the datafiles.
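You can watch that circular reuse directly (a sketch; STATUS cycles through CURRENT, ACTIVE, and INACTIVE as the groups are reused):
SQL> SELECT group#, sequence#, archived, status FROM v$log ORDER BY sequence#;
SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> SELECT group#, sequence#, archived, status FROM v$log ORDER BY sequence#;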
This overview of backup and recovery at Oracle's site is pretty good to give one an idea of how the whole thing is put together.

Rolling forward the archivelog and online redo logs to the restored database

I'm currently using Oracle Database 11g on Red Hat Enterprise Linux 5.0.
I take an incremental level 0 backup once a week and an incremental level 1 every day.
I can restore this backup on my new Linux server without any problems, because I have all the archive logs generated after the level 1 backup.
However, if the online redo log is not yet filled (I mean that I have some redo info in the online log), how can I use this online log to roll my restored database forward on the new Linux server?
I don't want to lose the valuable information that is not yet archived.
Best regards,
Sarith
Restore your backed up files.
Copy your current online redo log files (from the "damaged" production instance) to the new server.
RECOVER DATABASE;
This scenario assumes you have total continuity with archived logs and online logs. In doing the recovery, Oracle will apply the necessary archived redo, then move to the online redo logs to recover to the point of failure. Important: don't restore the online redo logs from the backup you have! Use the current online logs from your crashed instance.
Finally, don't believe anything you read without practicing it for yourself!
Yes, you can use the unarchived logs - if you are applying the archive logs via "recover database using backup controlfile", just supply the online redo log name instead of the suggested archive log name when the recovery process gets to that point (i.e. "runs out" of archive logs).
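A sketch of what that exchange looks like in SQL*Plus (the SCN and file names are illustrative):
SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
ORA-00279: change 1234567 generated at ... needed for thread 1
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/u01/oradata/ORCL/redo02.log
Log applied.
Media recovery complete.
SQL> ALTER DATABASE OPEN RESETLOGS;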
So you mean you duplicate the database to another server using RMAN?
Online redo logs are only used for disaster recovery. For instance: you lose a datafile, restore the datafile from your latest backup, and apply the archivelogs and finally the online redo logs. This makes the restored datafile have the same SCN (System Change Number) as the controlfile (and the other datafiles). Disaster recovery complete.
When you use your backups to duplicate the database on another server, you can only roll forward using your archived logs. It does an incomplete recovery by definition (it creates a new controlfile and redo logs).
Do a
SQL> ALTER SYSTEM SWITCH LOGFILE;
before the backup?
But no matter what, the restored copy will be behind the source database if the source stays open. I don't know your business case exactly, but Data Guard might be an option for you.
Rob
