I know this seems like kind of a strange request, but does RMAN keep any history of changes to its configuration parameters? It seems one of the team members has been changing the configuration parameters on a number of instances, and of course no one is claiming responsibility or communicating to the team why they are changing them.
Is there any history of the changes? Or, if we wanted to put some before-update triggers in place, can that be done?
If you save the RMAN log for every backup, you have a record: before each backup, RMAN writes all of its current settings (identical to the output of SHOW ALL) to that log, followed by the backup results.
Just open the daily RMAN backup logs and compare them; at the very least you will get the date when (some) RMAN settings changed.
RMAN only keeps a notion of which settings are still original (marked with # default in the SHOW ALL output), but it doesn't mention when a setting was changed!
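If you'd rather query the database than diff logs: a minimal sketch, assuming you have SQL*Plus access to the target instance. The V$RMAN_CONFIGURATION view lists the settings that have been changed from their defaults, though it carries no change timestamp either.
SQL> REM Lists only settings changed from their defaults; no change date is recorded
SQL> SELECT conf#, name, value FROM v$rman_configuration ORDER BY conf#;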
Related: How to check that a Vertica backup (or database) is in a valid state so that I can restore the database from it without problems?
P.S.
Several days ago I had a very negative experience with Vertica backups: vbr created backups from a broken database. When I tried to restore the database from such a "backup", the vbr utility restored it, but I couldn't start the database (Vertica ran its recovery process and finished with an error). It seems vbr doesn't check the state of the database before backing it up.
As of 7.2.2, vbr can do integrity checks on backups. From the documentation:
A quick check gathers all backup metadata from the backup location specified in the configuration file and compares that metadata to the backup manifest. A quick check does not verify the objects themselves. Instead, this task outputs an exceptions list of any discrepancies between objects in the backup location and objects listed in the backup manifest.
[...] A full check verifies all objects listed in the backup manifest against file system metadata. A full check includes the same steps as a quick check.
The command is:
vbr -t [quick-check | full-check] -c configfile.ini --report-file=path/filename
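For example, a quick check with a hypothetical configuration file and report path would look like this (both file names are placeholders):
vbr -t quick-check -c backup_snapshot.ini --report-file=/tmp/backup_quick_check.txt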
You can also look at Talena software (my company) if you are interested in granular backup and recovery of your Vertica database, as well as other databases such as Cassandra, Hadoop, etc.
I've just started a job and have identified an issue: the database isn't currently being backed up properly, so to speak. We are doing one backup every 6 hours using the Oracle native backup utility, but we were also sold a process by a company who stated that they could, in essence, perform "warm" backups of our database simply by taking file-system copies of the database files; when we needed to restore, we'd simply shut down Oracle, copy back the files that had been copied, restart Oracle, and the world would be whole again.

The challenge is that we have not gotten this to work just yet. I need to spend some more time reviewing the messages Oracle is giving, but my primary question is: is it possible to take copies of Oracle files while Oracle is still running and use those files at a later date to restore the database? I know that it works if the database is shut down and copies are then made, but this is the first time I've heard that a file-system copy can be made while the database is running. Any guidance would be greatly appreciated. Here is the error that we are getting:
ORA-00314: log 3 of thread 1, expected sequence# 1939 doesn't match 1944
ORA-00312: online log 3 thread 1: 'E:\ORACLE\ORADATA\ITMS\REDO03.LOG'
Yes, it is possible, but you have to put all the tablespaces into backup mode first and take them out afterwards (e.g. ALTER TABLESPACE x BEGIN BACKUP and ALTER TABLESPACE x END BACKUP; you'll need to check the syntax and make sure it's appropriate for your situation!). Oversimplifying hugely, this tells Oracle not to write to any of the data files, so they're all kept in a consistent state.
The two main problems you get otherwise are that individual files are updated while you're copying them, so a single file can be corrupted; and, more visibly, that different files end up with different internal timestamps and sequences, so Oracle won't allow them to be used.
If you're using a process you've bought in, then it should already be taking care of all that, though. It sounds like the backup is OK and it's the restore that you haven't got working.
I haven't been involved in a restore from a hot backup for some time, so someone else will need to give the detail on the actual error. My reading of it is that you've tried to open with the restored data files but the later live redo logs. When restoring, I think you either have to RECOVER the database using the redo logs generated since the backup was taken; or, if you're trying to restore to that point in time, you can open the database with the RESETLOGS directive and lose all the changes from the redo logs that came later. But really, take more informed advice than this...
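A minimal sketch of those two paths at the SQL*Plus prompt, assuming the restored data files are already in place; verify the exact steps against your version and situation before relying on this:
SQL> STARTUP MOUNT
SQL> REM Path 1: roll forward through all redo generated since the backup
SQL> RECOVER DATABASE;
SQL> ALTER DATABASE OPEN;
SQL> REM Path 2 (instead): stop at a chosen point and discard later redo
SQL> RECOVER DATABASE UNTIL CANCEL;
SQL> ALTER DATABASE OPEN RESETLOGS;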
As far as I know, there are two ways that you can "copy" datafiles from a running Oracle instance:

1. The datafiles are copied for a tablespace when the tablespace is in "BEGIN BACKUP" mode.
2. You are using a high-end journalling filesystem such as Veritas that can snapshot and track block changes on the filesystem while the copy is taking place.
It is possible, but you must be in ARCHIVELOG mode.
An example script for a manual backup would be:
Alter tablespace USERS begin backup;
host cp -p /u02/oradata/PROD/users01.dbf /u03/backup/PROD/
host cp -p /u02/oradata/PROD/users02.dbf /u03/backup/PROD/
Alter tablespace USERS end backup;
However, I would recommend just using RMAN. RMAN is QUITE ROBUST, included free, and will do hot backups as well as cold. It will clone to another instance, clone to a point in time, recover to a certain point in time, etc. Any manual backup procedure should be migrated to RMAN.
If you wanted to back up the entire database while it is open (I prefer to connect as the oracle OS user with the DBA group so you can avoid passwords in scripts, but YMMV):
$ ORAENV_ASK=NO
$ ORACLE_SID=PROD
$ . oraenv
$ rman target=/
Recovery Manager: Release 10.2.0.4.0 - Production on Thu Oct 28 14:23:29 2010
Copyright (c) 1982, 2007, Oracle. All rights reserved.
connected to target database: PROD (DBID=x)
RMAN> backup as compressed backupset database plus archivelog;
...
Backup Complete.
I've effectively done this with a non-mission-critical database running on Amazon EC2. My backup strategy is to periodically take a snapshot of the EBS volume. To restore a backup, I create a new EBS volume from the snapshot, start up the instance using it, then run RECOVER DATABASE.
This loses any transactions that were in-flight at the time when the snapshot was taken, of course.
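For what it's worth, the snapshot side can be scripted with the AWS CLI; a sketch with placeholder volume and snapshot IDs (the in-database recovery steps are as described above):
$ aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "oracle-db-backup"
$ aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a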
I would like to back up an Oracle 10g database as simply as possible.
It is in NOARCHIVELOG mode and I can shut it down for backups (it is only a development server).
After reading tons of documentation about RMAN, I tried this in RMAN:
shutdown immediate;
startup mount
backup database;
sql 'alter database open';
As far as I can see it works fine; LIST BACKUP shows the backups.
Then I made some modifications (dropping some tables, adding data) and tried to restore the backup:
shutdown immediate;
startup mount
restore database;
recover database;
sql 'alter database open';
It also seems to work fine, but I can't get back the previous state of the database. I don't understand why. I also don't understand why I need to use recover.
Thanks
Hubidubi
The "restore database;" command will read the backup from the backup media media so that your database files are exactly like they were when the last backup was taken. It does not restore control files.
The "recover database;" command will apply incremental backups (not applicable - your example only has a full backup) and apply archive logs (also not applicable, you're in "NOARCHIVELOG" mode.) It may also write to the control files - if it does, you can see why it's required.
After the restore/recover/open commands you issued in your question your database is as it was at the time of the backup. Any transactions committed after the backup are lost and can't be recovered because you're in "NOARCHIVELOG" mode. You need to be in "ARCHIVELOG" mode to do a complete "point in time" recovery.
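If you want that, a minimal sketch of switching the database into ARCHIVELOG mode (run as SYSDBA; check your archive destination settings for your environment first):
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;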
BTW, what files, if any, did you delete, rename, or move to really simulate a true media failure? I'll bet you didn't delete one of your control files. You need to practice that scenario.
I'm currently running Oracle Database 11g on Red Hat Enterprise Linux 5.0.
I take an incremental level 0 backup once a week and an incremental level 1 backup every day.
I can restore this backup on my new Linux server without any problems because I have all the archive logs generated after the level 1 backup.
However, if the online redo log is not yet filled (I mean that I have some redo information in the online log), how can I use this online log to roll the restored database forward on the new Linux server?
I don't want to lose the valuable information that is not yet archived.
Best regards,
Sarith
1. Restore your backed-up files.
2. Copy your current online redo log files (from the "damaged" production instance) to the new server.
3. RECOVER DATABASE;
This scenario assumes you have total continuity with archived logs and online logs. In doing the recovery, Oracle will apply necessary archived redo, then move to the online redo logs to recover to the point of failure. Important! Don't restore online redo logs from the backup you have! Use the current online logs from your crashed instance.
Finally, don't believe anything you read without practicing it for yourself!
Yes, you can use the unarchived logs. If you are applying the archive logs via "recover database using backup controlfile", just supply the redo log name instead of the suggested archive log name that the recovery process provides when it comes to that point (i.e. "runs out" of archive logs).
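A sketch of what that interaction might look like at the SQL*Plus prompt (the sequence numbers and file paths here are hypothetical):
SQL> RECOVER DATABASE USING BACKUP CONTROLFILE;
ORA-00279: change 1234567 generated at ... needed for thread 1
ORA-00289: suggestion : /u03/arch/1_1945_987654321.dbf
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
/u02/oradata/PROD/redo03.log
Log applied.
Media recovery complete.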
So you mean you duplicate the database to another server using RMAN?
Online redo logs are only used for disaster recovery. For instance: you lose a datafile, restore the datafile from your latest backup, and apply the archive logs and finally the online redo logs. This makes the restored datafile have the same SCN (System Change Number) as the controlfile (and other datafiles). Disaster recovery complete.
When you use your backups to duplicate the database on another server, you can only roll forward using your archived logs. That is an incomplete recovery by definition (it creates a new controlfile and redo logs).
Do a
SQL> Alter system switch logfile;
before backup?
But no matter what, the restored database will lag behind the source database if the source stays open. I don't know your business case exactly, but Data Guard might be an option for you.
Rob
How can I duplicate an Oracle instance? Does anyone have any idea how to do so?
Assuming you want the schema and data duplicated, use the exp and imp commands to export your database, then import it as another user using the FROMUSER and TOUSER parameters.
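A minimal sketch, assuming the legacy exp/imp utilities are on your path and that scott is the schema being copied (user names, passwords, and file names here are placeholders):
$ exp scott/tiger FILE=scott.dmp OWNER=scott
$ imp system/manager FILE=scott.dmp FROMUSER=scott TOUSER=scott_copy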
Well, presumably you have a backup (surely!), so just test your backup recovery on your test server.
To be slightly more serious, it depends on what version you are using. Newer versions of RMAN make it pretty easy, I believe; with older versions, you basically do it as a backup and recover.
How I've done it in the past is basically:

1. Copy the backup data files.
2. Create an init file.
3. Create a new controlfile with the command 'CREATE CONTROLFILE SET DATABASE "TEST" RESETLOGS ARCHIVELOG'.
4. Apply the archive logs and then open with RESETLOGS.
Here is an article that explains the process in a bit more detail.
A minor comment on your terminology - "instance" is actually the set of processes running on the database server host and you want to duplicate the "database".
As someone else mentioned, the best way is to start with an RMAN backup of the original database. However, since Oracle 9 RMAN has had the "DUPLICATE DATABASE" command, which takes care of a lot of housekeeping that used to be necessary if you just made a copy by restoring a production backup (e.g. resetting DBID, changing data and log file locations in the control file, setting database GLOBAL_NAME, etc.).
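A minimal sketch of that approach, assuming a target database PROD and an auxiliary instance TESTDB that has already been prepared (started NOMOUNT with its own init file); the names and connect identifiers are placeholders:
$ rman target sys@prod auxiliary sys@testdb
RMAN> DUPLICATE TARGET DATABASE TO testdb NOFILENAMECHECK;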
If you're not using RMAN, and the database is on the small side, you can script something that puts each tablespace in hot backup mode, copies the datafiles for that tablespace to a backup location, and then takes the tablespace out of hot backup mode. You now have a recoverable backup that can be moved to another host for archive log application. This definitely has a performance impact on the original database and should be your last resort.
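If you go that route, the datafiles belonging to each tablespace can be listed from the data dictionary; a minimal sketch:
SQL> REM Shows which datafiles to copy for each tablespace
SQL> SELECT tablespace_name, file_name FROM dba_data_files ORDER BY tablespace_name;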
Create a template based on your existing instance (the Database Configuration Assistant, DBCA, can do this), then use it to create other instances.