What is the best practice to manually back up Autonomous Database?

What best practice should one follow to create a backup or replication policy that outlives the 60-day retention for Autonomous Database (Shared)?

The current retention for automated and manual backups on Autonomous Database (Shared) is 60 days.
For archival purposes, or "backups" kept longer than 60 days, you can use Data Pump Export.
(For reference: I am a product manager on Oracle Autonomous Database.)
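As a sketch of that Data Pump approach (the connect string, schema name, and file names below are placeholders, not from the original answer):

```shell
# Hedged sketch: export one schema for long-term archival with Data Pump.
# The connect string, schema name, and file names are placeholders.
expdp admin@myadb_high \
  schemas=APP_SCHEMA \
  directory=DATA_PUMP_DIR \
  dumpfile=app_archive_%U.dmp \
  logfile=app_archive.log
```

On Autonomous Database the dump file lands in the database's DATA_PUMP_DIR; you would then move it to durable storage (for example, Object Storage via DBMS_CLOUD.PUT_OBJECT) so the archive outlives the 60-day window.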

Related

Is it possible to apply archivelogs from standby DB to Production DB?

I lost two hard disks yesterday. One contains an Oracle datafile, and the other contains part of the archive logs generated in the past two days (say arch_5.dbf and arch_6.dbf are lost, out of the set arch_1 to arch_10).
I have switched over to my standby site as part of business continuity plan.
Now, I have to recover the missed datafile. It requires the missed two archive log files for recovery.
Is it possible to apply the same set of archivelogs from the standby to production, in order to recover the datafile?
Kindly advise.
~SK~
It might prove a bit easier to use RMAN incremental backups to refresh a standby database.
You could use the archives that are transported to the standby site, but they won't help with recovery of the lost datafiles unless the datafiles' creation is logged in those archives.
Using the incremental backup option is easier.
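A hedged sketch of that incremental-backup approach, run in the RMAN client (the SCN and paths are placeholders; use the SCN your damaged datafile actually needs, visible in V$DATAFILE_HEADER):

```sql
-- RMAN sketch; SCN and paths are placeholders.
-- On the healthy database: take an incremental backup starting
-- from the SCN the damaged site needs.
BACKUP INCREMENTAL FROM SCN 1234567 DATABASE FORMAT '/backup/refresh_%U';

-- Ship the backup pieces to the damaged site, then, connected there:
CATALOG START WITH '/backup/refresh';
RECOVER DATABASE NOREDO;
```

NOREDO tells RMAN to recover purely from the incremental, which is what makes this work even though the intervening archive logs are lost.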

Database Migration from Oracle RAC to AWS Amazon Aurora

I am working on a task to make a data migration plan to migrate Oracle RAC to AWS Amazon Aurora.
The current in-house production database is a 10 TB, 8-node Oracle RAC cluster with a single-node standby in a DR site. The database has 2 main schemas comprising 500 tables, 300 packages and triggers, 20 partitioned tables, and 5,000 concurrent sessions (of which 100 are active at any given time), with an IOPS requirement of 50K read and 30K write. The development database is 1/10th of the production capacity.
I did some research and found that DMS (Data Migration Service) and SCT (Schema Conversion Tool) take care of the migration process. So do we need to work on any of the individual specifications mentioned in the task, or will DMS and SCT handle the whole migration?
The tools you mention (DMS and SCT) are powerful and useful, but will they take care of the whole migration process? Very unlikely, unless you have a very simple data model.
There will likely be some objects and code that cannot be converted automatically and will need manual input/development from you. Migrating a database is usually not a simple thing and even with tools like SCT and DMS you need to be prepared to plan, review and test.
SCT can produce an assessment report for you; I would start there. Your question is next to impossible to answer on a forum like this without in-depth knowledge of the system you are migrating.

How to roll back a database change in production after a few hours or days

We are using Oracle 12c in production. Let's say a release went to production on Sunday, and some hours or days later (e.g. Tuesday) we realized we need to roll back the changes we made. Assume there were DDL schema changes along with DML changes, which could be inserts, updates, and deletes.
What is the best practice to roll back the changes? We cannot restore the database from backup, because the backup is from Sunday and data has accumulated between Sunday and, say, Tuesday.
I just want to know the best practice for rolling back database changes in Oracle 12c.
When you are rolling out a release to production, the best technique for going back is FLASHBACK DATABASE.
You can read more here
https://docs.oracle.com/database/121/SQLRF/statements_9012.htm#SQLRF01801
The idea is to create a guaranteed restore point that you can flash back to with a single command:
create restore point my_save_point guarantee flashback database;
Then you make your changes and verify whatever you want to verify; if you need to roll back, you just run:
flashback database to restore point my_save_point;
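For completeness, flashing back is not a single statement issued against an open database: FLASHBACK DATABASE requires the instance to be mounted, and the database must be opened with RESETLOGS afterwards. A sketch of the full rollback sequence (SQL*Plus, connected as SYSDBA, using the restore point name from above):

```sql
-- Roll the database back to the guaranteed restore point.
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO RESTORE POINT my_save_point;
ALTER DATABASE OPEN RESETLOGS;

-- Once the release is confirmed good, drop the restore point so it
-- stops pinning flashback logs in the fast recovery area.
DROP RESTORE POINT my_save_point;
```

Note that flashing back rewinds the entire database, not just the release's schemas, so any data written by other applications since the restore point is lost as well.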

Which is the fastest way to replicate an Oracle database deployed in RDS?

For example: let's say I have two databases, DB1 and DB2. My requirement is to refresh data from DB1 to DB2 every night. DB1 is the live database and DB2 is for non-business users to do data analysis.
My questions:
1) Which tool should I use for my requirement? I need a solution that is fast, since the database copy has to be done every day.
2) Does AWS have any tool to automate the backup and restore?
There are loads of ways to do this, and the answer comes down to what storage you're using, whether the databases are on the same server, and ultimately the size of the database.
RMAN is more of a backup/recovery tool, but it is definitely a runner for cloning. If you're not sure what RMAN does, I wouldn't even start to implement it, as it's very tricky if you aren't very comfortable with Oracle databases.
My recommendation is to just use Oracle Data Pump: export the schemas you need to a dump file, ship it over, and import them into the other database, making sure to overwrite/drop the existing schemas.
Other than doing a differential clone at the SAN level, this is probably the quickest and definitely the easiest way to get it done.
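A hedged sketch of that nightly Data Pump refresh (connect strings, schema, and file names are placeholders; on RDS you cannot copy files to the host directly, so in practice you would move the dump via S3 integration, or skip the file entirely by importing over a database link with NETWORK_LINK):

```shell
# 1) Export the schema from the live database DB1.
expdp system@DB1 schemas=APP_SCHEMA directory=DATA_PUMP_DIR \
      dumpfile=app_nightly.dmp logfile=app_exp.log reuse_dumpfiles=yes

# 2) Move the dump file to DB2's DATA_PUMP_DIR, then import,
#    replacing existing tables.
impdp system@DB2 schemas=APP_SCHEMA directory=DATA_PUMP_DIR \
      dumpfile=app_nightly.dmp logfile=app_imp.log \
      table_exists_action=replace
```

The whole sequence is easy to wrap in a scheduled job, which answers the automation part of the question.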

Refreshing tablespace using RMAN incremental backup from one DB to Other

If I have two DBs with the same database structure, and every schema has its own separate tablespace, can I use RMAN to take tablespace-level backups and apply them to the other DB's tablespace?
Example: say I have a schema 'scott' which has been assigned the tablespace 'scott_ts' (on both databases). I take a backup of the scott_ts tablespace and restore it on the other DB; after that, to refresh this schema/tablespace, I apply daily incremental backups to it.
(Please note that I've done some research on other options like Data Pump, GoldenGate, Oracle Streams, etc. I just specifically want to know whether RMAN would help me in this case or not.)
Oracle Database 10g on Windows Server 2003.
RMAN is a backup and recovery tool; you can't use it for that purpose directly. In this context you can use it only as part of a "transportable tablespace" process. You could also try a logical standby database for that purpose, but it's a little bit overkill.
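For reference, a hedged sketch of the transportable-tablespace route mentioned above (names and paths are placeholders; the tablespace set must be self-contained, and the source and target platforms/character sets must be compatible):

```sql
-- On the source: make the tablespace read-only, then export its metadata.
ALTER TABLESPACE scott_ts READ ONLY;
-- From the OS shell:
--   expdp system directory=DATA_PUMP_DIR dumpfile=scott_ts.dmp \
--         transport_tablespaces=scott_ts
-- Copy scott_ts.dmp plus the scott_ts datafiles to the target, then:
--   impdp system directory=DATA_PUMP_DIR dumpfile=scott_ts.dmp \
--         transport_datafiles='/u01/oradata/scott_ts01.dbf'
ALTER TABLESPACE scott_ts READ WRITE;
```

Note this ships the whole tablespace on every refresh; RMAN incremental backups taken on one database cannot be applied to a second, independent database, which is the point of the answer.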
