Oracle database migration from 11g to 12c

I need to do a database migration from Oracle 11g to 12c, but I cannot do a straightforward export-and-import style migration because a lot of schema changes are going to happen at the same time. I already have the column mappings in a spreadsheet, with old columns and new columns and all the details such as data types, constraints, etc.
New columns have been added to many tables, and the default values that should populate them are also known.
So what would be the best approach for this migration?

There is more than one way to do this. Start by getting a DBA involved in the project; a good DBA can help minimize downtime and reduce risk.
To minimize production downtime, check whether building a logical standby database is feasible in your situation. In that case, make the target database a 12c one, which saves you the separate upgrade step. This target database stays in sync with the source database at all times, which makes it very valuable. Clone the target database and use that clone to test the migration steps. If the migration fails, you can easily re-create a fresh clone on which to correct the migration process.
Working this way could even enable bi-directional replication, i.e. replication from the migrated database back to the source database, which would make it possible to revert to the original database in the unlikely event that things don't work as expected after going live on the new database.
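Whichever replication strategy you choose, the schema transformation itself can be generated from your mapping spreadsheet as one INSERT ... SELECT per table over a database link. A minimal sketch, assuming a hypothetical ORDERS table, a CUST_NAME-to-CUSTOMER_NAME rename, and a new REGION column with a known default (all names are illustrative, not from the original post):

    -- run on the 12c target; OLD11G is a database link back to the source
    INSERT INTO orders (order_id, customer_name, order_total, region)
    SELECT o.order_id,
           o.cust_name,   -- renamed column, per the mapping spreadsheet
           o.order_total,
           'EMEA'         -- new column populated with its known default
    FROM   orders@old11g o;
    COMMIT;

Generating one such statement per table from the spreadsheet keeps the whole mapping in a single reviewable place.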

Related

Sybase to Oracle table Migration via Migration Wizard offline

How can I create a script of inserts for my Sybase-to-Oracle migration? The Migration Wizard only gives me the option to migrate procedures, triggers, and such; there is no option for just tables. When I try to migrate tables offline and move data, the datamove/ folder is empty. I also want to migrate only specific tables (the ones with long identifiers), because I was able to migrate the rest with Copy to Oracle.
I must also note that I do not want to upgrade to a new version of Oracle. I am currently on ~12.1, so I need to limit the identifiers.
How can I get the offline scripts for table inserts?
You (probably!) don't want INSERTs for offline migration scripts. If you're just running INSERTs, then the online method would probably suffice.
The point of the offline strategy is to take the data from your Sybase instance to flat, delimited text files (using BCP), which we can THEN load back into an Oracle Database using SQL*Loader or external tables, which will be EXPONENTIALLY faster than running INSERT scripts.
Take a look at this whitepaper where I go into offline Sybase migrations in detail.
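For a concrete picture of that flow, here is a minimal sketch: BCP the table out of Sybase as pipe-delimited text, then load it with SQL*Loader. Table, column, and connection names are hypothetical:

    # export from Sybase in character mode, pipe-delimited
    bcp mydb.dbo.customers out customers.dat -c -t '|' -U sa -P secret -S SYBSRV

    -- customers.ctl: SQL*Loader control file for the Oracle side
    LOAD DATA
    INFILE 'customers.dat'
    INTO TABLE customers
    FIELDS TERMINATED BY '|'
    (customer_id, customer_name, region)

    # load into Oracle; direct path is far faster than conventional INSERTs
    sqlldr userid=app/secret control=customers.ctl log=customers.log direct=true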
You could also consider DCO-based (DirectConnect for Oracle) Sybase-to-Oracle replication via Sybase Replication Server. That way, not only will you have all the data moved, but you will also be able to have DML updates propagated online, which makes a live switchover of your system possible.

Which is the fastest way to create a test database (with all data) from a production database which is quite big (400 GB)?

I am a Java person and not very familiar with the features Oracle offers, so please help me out.
The requirement: we are looking to create some kind of virtual (replica/mirror/view) database from the production database, purely for testing purposes. Once we are done executing all the automated test cases, we would delete that virtual database. Are there any such concepts in Oracle?
We are on Oracle 12c.
Many apps use the same DB (it's huge).
PS: We also use Docker for deployment, as well as AWS.
Use RMAN DUPLICATE to create the test database from production:
https://oracle-base.com/articles/11g/duplicate-database-using-rman-11gr2
You can duplicate from backups or duplicate from the active database.
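A minimal sketch of an active-database duplication, assuming PROD and TESTDB are configured net service names and the auxiliary (test) instance has already been started NOMOUNT (names are illustrative):

    rman TARGET sys@prod AUXILIARY sys@testdb

    -- copies the datafiles over the network straight from the live database
    DUPLICATE TARGET DATABASE TO testdb
      FROM ACTIVE DATABASE
      NOFILENAMECHECK;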
You can also ask your database administrator to export the tablespaces to a new test machine that has the same Oracle version installed. If there are only very few tables, you can instead spool the tables out and use SQL*Loader to load them into a test database (you will need to create the table structures in the test environment manually beforehand).
In both cases, you may want to scrub out sensitive information as per your requirements and standards.
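The tablespace route is typically done with transportable tablespaces. A minimal sketch, assuming a single self-contained application tablespace APP_DATA (name hypothetical):

    -- on production: make the tablespace read-only for a consistent copy
    ALTER TABLESPACE app_data READ ONLY;

    # export only the tablespace metadata
    expdp system directory=DATA_PUMP_DIR dumpfile=tts.dmp transport_tablespaces=APP_DATA

    # copy tts.dmp and the APP_DATA datafiles to the test host, then:
    impdp system directory=DATA_PUMP_DIR dumpfile=tts.dmp \
          transport_datafiles='/u01/oradata/testdb/app_data01.dbf'

    -- back on production: restore read-write mode
    ALTER TABLESPACE app_data READ WRITE;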

Which is the fastest way to replicate an Oracle database deployed in RDS?

For example, let's say I have two databases, DB1 and DB2. My requirement is to refresh data from DB1 to DB2 every night. DB1 is the live database and DB2 is for non-business users doing data analysis.
My questions:
1) Which tool should I use for this requirement? I need a solution that is fast, since the database copy has to be done every day.
2) Does AWS have any tool to automate the backup and restore of the data?
There are loads of ways to do this, and the answer comes down to what storage you're using, whether the databases are on the same server, and ultimately the size of the database.
RMAN is more of a backup/recovery tool, but it's definitely a runner for cloning. If you're not sure what RMAN does, I wouldn't even start to implement it, as it's very tricky if you aren't super comfortable with Oracle databases.
My recommendation is to just use Oracle Data Pump: export the schemas you need to a dump file, then ship it over and import it into the other database, making sure to overwrite/drop the existing schemas.
Other than doing a differential clone at SAN level, this is probably the quickest, and definitely the easiest, way to get it done.
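A minimal sketch of that nightly Data Pump refresh (schema and connection names hypothetical; note that on RDS the dump file lands in the instance's DATA_PUMP_DIR, and since you have no OS access it is usually moved between instances with DBMS_FILE_TRANSFER or the S3 integration):

    # export the application schema from the live database
    expdp admin@db1 schemas=APP_SCHEMA directory=DATA_PUMP_DIR \
          dumpfile=app_nightly.dmp logfile=exp_nightly.log reuse_dumpfiles=yes

    # import into the reporting database, replacing yesterday's copy
    impdp admin@db2 schemas=APP_SCHEMA directory=DATA_PUMP_DIR \
          dumpfile=app_nightly.dmp logfile=imp_nightly.log \
          table_exists_action=replace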

Oracle 11g to 12c migration gotchas?

I am embarking on an 11g to 12c Oracle DB migration. I will need to do it at least twice: once for testing, and a second time for production. My initial thought is to use expdp/impdp; I already export the full DB nightly using expdp.
My problem in the past when importing a full DB is that it can get squirrely regarding the system schemas/users. A full import tries to muck with the system schemas (SYS, SYSTEM, SYSMAN...). My new 12c DB is a pluggable DB, and obviously I want none of the settings or data from the system schemas, which could hose my new DB.
I do, however, want all of the non-system schemas and users, of which there are 5 or so real schemas and 30 or so "users."
I have been looking for blogs or documents that address this issue and can't find any. A pointer to documentation on how to avoid the problems described above would be great.
Also, if there are any other gotchas when doing the migration, a heads-up on those would be useful as well.
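One common way to sidestep the system-schema problem is a schema-mode import from the full export: name only the application schemas and Data Pump leaves SYS/SYSTEM alone. A minimal sketch (schema and service names hypothetical):

    # import only the application schemas from the nightly full export;
    # SYS, SYSTEM, SYSMAN etc. are never touched in schema mode
    impdp system@pdb1 directory=DATA_PUMP_DIR dumpfile=full_nightly.dmp \
          schemas=APP1,APP2,HR logfile=imp_app.log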

Oracle - side-by-side schema update technology...is there any?

Is there any technology out there that will allow you to do side-by-side updates of production schemas?
The goal is to have zero down time when applying updates to a schema in production.
Weblogic 10 has a similar feature for its Java EE apps, whereby you deploy the new version of the app and new connections go to the new app, while existing connections continue to the old app. When all the old connections complete/time out, the old app is retired and the new app continues on... zero downtime.
Is there something similar in Oracle?
Yes, there is the online redefinition package:
DBMS_REDEFINITION
But I doubt this will give you zero downtime; it doesn't account for every possible change to a schema. It lets you make some table changes. I think you need to define "zero" and how extensive the changes are that you want to make. Usually if you change the database, you have to change your client as well. If you changed your database, how would the client switch automatically from the old proc signature to the new proc signature, instantaneously?
Databases don't work like apps. Either there is an FK from tableA to tableB or there isn't... it can't be absent for current connections and exist only for new connections the way your application can. Databases just aren't the same.
That being said, there is a rumor that Oracle is working on package versioning... so you could connect to a specific version of a package, which would make such a migration simpler. But again... that would work for packages, and DBMS_REDEFINITION works for tables... but that's not the sum total of your database.
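For the table-change case, here is a minimal DBMS_REDEFINITION sketch that reshapes a table while it remains available for DML. Schema, table, and column names are hypothetical, and the table is assumed to have a primary key (the package's default requirement); a real run would also copy dependents (indexes, grants, triggers) with COPY_TABLE_DEPENDENTS before finishing:

    -- 1. check the table is a redefinition candidate
    EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('APP', 'ORDERS');

    -- 2. create an interim table with the desired new shape
    CREATE TABLE app.orders_interim (
      order_id NUMBER PRIMARY KEY,
      amount   NUMBER,
      status   VARCHAR2(10)
    );

    BEGIN
      -- 3. start: the column mapping renames/derives columns on the fly
      DBMS_REDEFINITION.START_REDEF_TABLE(
        uname       => 'APP',
        orig_table  => 'ORDERS',
        int_table   => 'ORDERS_INTERIM',
        col_mapping => 'order_id order_id, amount amount, ''NEW'' status');
      -- 4. catch up changes made during the copy, then swap the tables
      DBMS_REDEFINITION.SYNC_INTERIM_TABLE('APP', 'ORDERS', 'ORDERS_INTERIM');
      DBMS_REDEFINITION.FINISH_REDEF_TABLE('APP', 'ORDERS', 'ORDERS_INTERIM');
    END;
    /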
Oracle released 11gR2 today; it has edition-based redefinition: http://download.oracle.com/docs/cd/E11882_01/server.112/e10881/chapter1.htm#NEWFTCH1
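Edition-based redefinition lets each session pick which version of the PL/SQL code it sees, which is close to the WebLogic behavior described in the question. A minimal sketch (edition and user names hypothetical):

    -- one-time setup: allow the application schema to own editioned objects
    ALTER USER app ENABLE EDITIONS;

    -- create a new edition and install the changed code into it
    CREATE EDITION app_v2;
    ALTER SESSION SET EDITION = app_v2;
    -- (re)compile the changed packages here; sessions still on the base
    -- edition keep running the old code until they reconnect into app_v2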
It depends on what you mean by, or include in, "schema".
If you want to add or drop an index, that can be done "in-flight", although it requires a lock which may halt activity for a time. In the latest Oracle versions, it doesn't need to hold the lock for the entire time it takes to build the index, just for a moment to lock in the change. If you have short-duration transactions it shouldn't be noticeable.
In some cases that applies to tables as well (e.g. adding a nullable or defaulted column); see the sketch below.
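A minimal sketch of those in-flight DDL changes (table and index names hypothetical):

    -- build the index online: DML continues, with only a brief lock at the end
    CREATE INDEX app.orders_status_ix ON app.orders (status) ONLINE;

    -- adding a nullable (or, in recent versions, defaulted) column is a
    -- quick metadata change rather than a full-table rewrite
    ALTER TABLE app.orders ADD (notes VARCHAR2(200));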
If you use PL/SQL (especially packages), things can be a little more complicated. Enhancements were mooted for 11gR1 to enable in-flight application upgrades, but they got pushed out and are now expected in 11gR2 (probably out in the first half of next year).
In the meantime, a workaround is a multi-schema solution. Say your data sits in one schema ("yellow") and your current application code runs in a "blue" schema; you load your new application code into a "green" schema. You then switch your connections, one by one, from blue to green. Once your connections are all using "green", you can retire "blue" until your next upgrade (when "blue" hosts the new app and "green" is retired).
If you have a genuine 24/7 system, you'll probably always have to stage some upgrades. For example, add a new column as optional, upgrade the application to set it, then make it mandatory (possibly with a data-change script for pre-existing rows); a sketch of that staging follows.
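That staged column change might look like this (table and column names hypothetical):

    -- stage 1: add the column as optional; old and new app versions coexist
    ALTER TABLE app.orders ADD (region VARCHAR2(10));

    -- stage 2: once the application is upgraded to populate it,
    -- backfill the pre-existing rows
    UPDATE app.orders SET region = 'UNKNOWN' WHERE region IS NULL;
    COMMIT;

    -- stage 3: only now make it mandatory
    ALTER TABLE app.orders MODIFY (region NOT NULL);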
