I have written a script for Oracle backup and restore using RMAN.
Note: I took a backup of the database plus the archive logs.
Then I executed some SQL statements in Oracle but did not commit the transaction, so it may be sitting somewhere in the redo logs; I am not sure about that.
In that situation I took a backup of the database plus archive logs and performed a restore.
The uncommitted data was not present.
I am confused about this scenario. Is this behaviour correct, or is it losing my data, or did I miss something?
This is perfectly fine. Your transaction is in fact in the redo, but since you didn't commit it, the recovery process rolled it back after reapplying it, because it couldn't find a commit record at the end of the redo stream. This is by design. The opposite would be a problem: if you had committed a transaction, then no matter what happened to the server (power loss, crash), you should be able to see it after restoring the database and applying all of the redo/archived logs.
The reason is that once you commit, all of the information needed to re-execute your transaction has already been stored on disk (in the redo log files). There are other types of commit (COMMIT WRITE NOWAIT, for example) that bypass this behaviour and should be avoided.
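As a rough illustration (the table name below is made up):

-- default commit: the call returns only once the redo for the transaction
-- is safely on disk, so it survives a crash followed by recovery
INSERT INTO test_tab VALUES (1);
COMMIT;

-- asynchronous variant: returns before the redo is persisted, so a crash
-- in that small window can lose a "committed" transaction - avoid it
INSERT INTO test_tab VALUES (2);
COMMIT WRITE NOWAIT;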
Hope this helps.
Related
This question is almost exactly like
oracle rman simple backup
but there isn't an acceptable answer there, and this question is about 11g. So I'll ask:
I'd like to do some table initialization DDL tests on an Oracle schema, and I'd like to revert the database to the pre-test state between runs. I'm executing the following in RMAN:
shutdown immediate;
startup mount
backup database;
sql 'alter database open';
As far as I can see this works fine; LIST BACKUP shows the backups.
Then I made some modifications (added some users, added some tables, added some data) and tried to restore the backup:
shutdown immediate;
startup mount
restore database;
recover database;
sql 'alter database open resetlogs';
Expected result: the database should be restored to the exact state it was in when the initial backup was taken.
Actual result: all the new tables and users I created in my test DDL continue to exist. I verified this by closing connections, restarting sessions, and then even selecting from the tables! The tables still exist even after the restore!
What is the deal with this? In MSSQL and Postgres, a backup means you save the state of the db, and restoring it means you go back to the state it was in when the backup was taken. But RMAN for Oracle 11g 'claims' the restore was successful, while the evidence clearly shows otherwise.
How can I get Oracle to save the state of the database exactly as it is, then make changes, and have a restore bring the database back to exactly how it was when I backed it up?
Is this possible in Oracle?
Yes, it is possible; you have several options:
Create a cold backup of the database (datafiles, controlfiles, online redo logs) and restore those files when necessary.
Perform a so-called "point-in-time recovery" (assuming your DB is in ARCHIVELOG mode): take a DB backup with RMAN and note the time, SCN, or archive log sequence; later you can restore the DB and recover until the previously noted time/SCN/log sequence.
Use "Flashback Database", which Oracle designed for exactly this purpose and which I recommend in your case (browse the Oracle docs to see how it works); see the sketch after this list.
Oracle always tries to restore/recover your database up to the last committed transaction if possible, which is why you get the result you described above. If you want to restore up to a specific time/SCN/sequence, you just have to tell Oracle to do so :)
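For example, a guaranteed restore point plus Flashback Database gives exactly the "snapshot and revert" workflow you describe. This is only a sketch: it assumes the flash recovery area is configured, and the restore point name is arbitrary.

-- before the test run (SQL*Plus, as SYSDBA)
CREATE RESTORE POINT before_ddl_test GUARANTEE FLASHBACK DATABASE;
-- ... run the DDL tests ...
-- then revert to the snapshot
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO RESTORE POINT before_ddl_test;
ALTER DATABASE OPEN RESETLOGS;
-- drop the restore point once you no longer need it
DROP RESTORE POINT before_ddl_test;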
H2 Database is not very stable (but very fast, which is very good for DEV), especially during the development process. I hope that the number of corruptions is due to the immediate shutdown of the server (during debugging).
How can I ensure that an H2 database is not corrupted,
so that I can guarantee that a backup is good?
Probably the best way to check if everything is OK is to create a SQL script from the database, using the SCRIPT statement. If that works, then the data is fully readable. The index data might still be corrupt, but indexes can be re-created.
Another option is to always backup the data in the form of a SQL script. This will make a separate check unnecessary; but backup is a bit slower and can't be done online (while updates are happening).
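For instance (the file name is just a placeholder), from the H2 console or over JDBC:

-- dump schema and data to a SQL script; if this completes, the data is fully readable
SCRIPT TO 'backup.sql';
-- rebuild a fresh database from that script later
RUNSCRIPT FROM 'backup.sql';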
By the way: if a database file gets corrupt, it's due to misconfiguration or wrong usage (H2 supports disabling the transaction log), due to hardware failure, or due to a bug in the database engine itself.
My app recovers automatically from failures. I test this as follows:
Start app
In the middle of processing, kill the application server host (shutdown -r -f)
On host reboot, application server restarts (as a windows service)
Application restarts
Application tries to process, but is blocked by incomplete 2-phase commit transaction in Oracle DB from previous session.
Somewhere between 10 and 30 minutes later the DB resolves the prior txn and processing continues OK.
I need it to continue processing faster than this. My DBA advises that I should prefix my statement with
ALTER SESSION ADVISE COMMIT;
But he can't give me guarantees or details about the potential for data loss doing this.
Luckily the statement in question is simply updating a datetime value to SYSDATE every second or so, so if there was some data corruption it would last < 1 second before it was overwritten.
But, to my question. What exactly does the statement above do? How does Oracle resolve data synchronisation issues when it is used?
Can you clarify the role of the 'local' and 'remote' databases in your scenario?
Generally a multi-db transaction does the following:
1. Starts the transaction
2. Makes a change on one database
3. Makes a change on the other database
4. Gets the other database to 'promise to commit'
5. Commits locally
6. Gets the remote db to commit
In-doubt transactions happen if step 4 has completed and then something fails. The general practice is to get the remote database back up and confirm whether it committed. If so, step 5 goes ahead. If the remote component of the transaction can't be committed, the local component is rolled back.
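If you ever have to resolve such an in-doubt transaction by hand, a DBA can do something like the following (the transaction ID is just an example value you would read from DBA_2PC_PENDING):

-- list pending distributed transactions, their state and any advice recorded
SELECT local_tran_id, state, advice FROM dba_2pc_pending;
-- force the outcome once you know what the remote site did
COMMIT FORCE '1.23.456';    -- or: ROLLBACK FORCE '1.23.456'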
Your description seems to refer to an app server failure, which is a different kettle of fish. In your case, I think the scenario is as follows:
App server takes a connection and starts a transaction
App server dies without committing
App server restarts and makes a new database connection
App server starts a new transaction on the new connection
New transaction get 'stuck' waiting for a lock held by the old connection/transaction
After 20 minutes, the dead connection is terminated and the transaction rolled back
New transaction then continues
In which case the solution is to kill off the old connection more quickly, either with a shorter timeout (e.g. SQLNET.EXPIRE_TIME in the sqlnet.ora on the server) or with a manual ALTER SYSTEM KILL SESSION.
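As a sketch (the username and session identifiers are examples you would look up yourself):

# sqlnet.ora on the database server: probe client connections every 2 minutes
SQLNET.EXPIRE_TIME = 2

-- or kill the stale session manually from SQL*Plus
SELECT sid, serial#, status FROM v$session WHERE username = 'APP_USER';
ALTER SYSTEM KILL SESSION '123,456';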
Can anybody give me an idea about redo logs? An example would be most appreciated.
As Oracle changes data in a datafile, it writes out information to the redo log. In the event of a database failure, you can use this information to get the database back to the point it was before the database failure.
In a disaster recovery scenario, you could restore your last full database backup, and then apply the redo logs taken since that last backup to get the database recovered. Without those redo logs, you could only recover to the last full backup, and changes made since then would be lost.
In Oracle, you can also run in NOARCHIVELOG mode, which basically means "redo logs can be overwritten without being archived". This is generally only acceptable for a development database where you don't care about losing data since the last backup. You wouldn't typically run in this mode in a production environment, as it could be disastrous.
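To check which mode a database is in, and to switch it to ARCHIVELOG mode (a sketch; it needs a short outage and SYSDBA privileges):

-- check the current mode
SELECT log_mode FROM v$database;
-- switch to ARCHIVELOG mode
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;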
Here's a reference link with more info, and also an example of how you can find out the amount of generated redo.
http://www.adp-gmbh.ch/ora/concepts/redo_log.html
A definitive answer from the documentation: http://download.oracle.com/docs/cd/B19306_01/server.102/b14231/onlineredo.htm#sthref850
To expand on #dcp's answer: technically, #dcp is referring to archived redo logs. These are optional, and as stated are only produced when running the database in archivelog mode. Every Oracle database has at least two mandatory online redo log files. These track all changes to the database. They are essential for recovery if the database crashes unexpectedly, whereas archived logs are not. Oracle uses the online redo log files to transparently bring the database back to the most recently committed state in the event of a system crash. Archived logs are used during recovery from a backup - the backup is restored, then archived logs are applied to the backup to bring the database back to its current state or some prior point in time.
The online logs are written to in a circular fashion - as one fills up, Oracle "switches" to the next one. If archive log mode is set, these older logs are written to the archive log destination(s). If not, they are overwritten as needed, once the changes they track have been written to the datafiles.
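You can watch this happening by querying the online log groups; the CURRENT group is the one being written to:

-- list the online redo log groups, their sequence numbers and status
SELECT group#, sequence#, status, archived FROM v$log;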
This overview of backup and recovery at Oracle's site is pretty good to give one an idea of how the whole thing is put together.
I'm currently using Oracle db11g on Red Hat Enterprise Linux 5.0.
I make an incremental level 0 backup once a week and an incremental level 1 backup every day.
I can restore this backup on my new Linux server without any problems, because I have all the archive logs generated after the level 1 backup.
However, if the online redo log is not yet filled (I mean that I have some redo information in the online log), how can I use this online log to roll my restored database forward on the new Linux server?
I don't want to lose the valuable information that is not yet archived.
Best regards,
Sarith
Restore your backed up files.
Copy your current online redo log files (from the "damaged" production instance) to the new server.
RECOVER DATABASE;
This scenario assumes you have total continuity with archived logs and online logs. In doing the recovery, Oracle will apply necessary archived redo, then move to the online redo logs to recover to the point of failure. Important! Don't restore online redo logs from the backup you have! Use the current online logs from your crashed instance.
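Put together in RMAN, the sequence would look roughly like this (complete recovery, so no RESETLOGS is needed):

RMAN> startup mount;
RMAN> restore database;
RMAN> recover database;            # applies archived logs, then the current online redo logs
RMAN> sql 'alter database open';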
Finally, don't believe anything you read without practicing it for yourself!
Yes, you can use the unarchived logs. If you are applying the archive logs via RECOVER DATABASE USING BACKUP CONTROLFILE, just supply the online redo log file name instead of the suggested archived log name when the recovery process reaches that point (i.e. when it "runs out" of archive logs).
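In SQL*Plus that looks something like this (the redo log path is only an example):

SQL> recover database using backup controlfile;
  -- when prompted "Specify log: ...", enter the full path of the current
  -- online redo log instead of the suggested archived log, for example:
  /u01/app/oracle/oradata/ORCL/redo01.log
SQL> alter database open resetlogs;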
So you mean you duplicate the database to another server using RMAN?
Online redo logs are only used for disaster recovery. For instance: you lose a datafile, restore the datafile from your latest backup, and apply the archive logs and finally the online redo logs. This makes the restored datafile have the same SCN (system change number) as the controlfile (and the other datafiles). Disaster recovery complete.
When you use your backups to duplicate the database on another server, you can only roll forward using your archived logs. That is an incomplete recovery by definition (it creates a new controlfile and new redo logs).
Do a
SQL> alter system switch logfile;
before the backup?
But no matter what, a restore will lag behind the source database if it stays open. I don't know your business case exactly, but Data Guard might be an option for you.
Rob