My colleague is running Oracle Database (11g) on AIX, and they would like to move this DB to RHEL. I have already found a link; however, I would like to check whether someone has already done such a migration or can recommend any other good tools.
You have several options. As pointed out before, Oracle Data Pump is the easiest approach. It can lift you from any version from 10g upwards (or even move you back when you use the VERSION= parameter).
The caveats are the size of the database and your downtime requirements.
For larger databases, Transportable Tablespaces is the usual choice. It is more work, as you will have to rebuild meta information such as synonyms, views, PL/SQL, sequences, etc. - and in your case you'll have to CONVERT the tablespaces, since you are coming from a big-endian platform (AIX) and going to a little-endian one (Linux). DBMS_FILE_TRANSFER could assist you here, as it can transfer and convert at the same time, whereas RMAN needs a two-phase operation with staging space for it.
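For instance, a destination-side RMAN conversion could look roughly like this (the datafile path, platform string and output location are just examples, not from your environment):

    rman target / <<'EOF'
    # Convert a datafile copied over from the big-endian AIX source;
    # the platform name is the string listed in V$TRANSPORTABLE_PLATFORM
    CONVERT DATAFILE '/staging/users01.dbf'
      FROM PLATFORM 'AIX-Based Systems (64-bit)'
      FORMAT '/u01/app/oracle/oradata/ORCL/users01.dbf';
    EOF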
You can speed up transportable tablespaces with RMAN Incremental Backups to avoid most of the copy/convert time. And you can ease it with Full Transportable Export/Import (minimum source: 11.2.0.3 - minimum destination: 12.1.0.1) where Data Pump does the manual work of transportable tablespaces.
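On an 11.2.0.3 source, the Full Transportable Export would look roughly like this (directory object and file names are placeholders; VERSION=12 is required when the source is 11.2.0.3, matching the minimums above):

    expdp system/password FULL=Y TRANSPORTABLE=ALWAYS VERSION=12 \
      DIRECTORY=dp_dir DUMPFILE=full_tts.dmp LOGFILE=full_tts_exp.log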
And of course there are other techniques, such as Create-Table-As-Select or Insert-Append-Select via Database Links.
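As a rough sketch of those (the link name, credentials and table are made up):

    sqlplus app/secret <<'EOF'
    -- hypothetical link pointing back at the AIX source database
    CREATE DATABASE LINK aix_src CONNECT TO app IDENTIFIED BY secret USING 'AIXDB';
    -- copy a table in one shot ...
    CREATE TABLE orders AS SELECT * FROM orders@aix_src;
    -- ... or direct-path load into a precreated table
    INSERT /*+ APPEND */ INTO orders SELECT * FROM orders@aix_src;
    COMMIT;
    EOF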
Just see the big slide deck "Upgrade / Migrate / Consolidate to 12.2" for customer examples - and the "Migrate >230Tb in <24 hours" decks on my page: https://mikedietrichde.com/slides/
Cheers,
Mike
Is there some reason you can't just use Oracle Data Pump?
Create the database on RHEL and make sure you use a compatible character set.
https://docs.oracle.com/cd/B19306_01/server.102/b14215/dp_overview.htm
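A minimal sketch of that flow, assuming a directory object named dp_dir already exists on both sides (all names and credentials are placeholders):

    # on the AIX source
    expdp system/password FULL=Y DIRECTORY=dp_dir DUMPFILE=full.dmp LOGFILE=full_exp.log
    # ship full.dmp to the RHEL server, then:
    impdp system/password FULL=Y DIRECTORY=dp_dir DUMPFILE=full.dmp LOGFILE=full_imp.log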
I've got to update the data of an Oracle DB, but I'm not the owner and I don't have an Oracle license.
The goal is to explain to my interlocutor how to create a dump from his Oracle DB, then to find out how to restore this dump into a DB (a free version of Oracle or something else), update some data in some tables, and make another dump to send back to my interlocutor.
So the different questions I have are:
1- Is it possible to create a dump (maybe in SQL format) without any specific dependencies on Oracle?
2- Is there a way to restore this dump into a free lightweight Oracle, or another kind of DB like PostgreSQL?
3- Is Oracle able to handle any kind of dump and restore it into an Oracle DB, or are there any constraints to respect?
I am very new to Oracle and have no way to try out the dump/restore myself on my personal computer; that's why any help will be appreciated!
1- Is it possible to create a dump (maybe in SQL format) without any specific dependencies on Oracle?
Oracle offers the Data Pump utilities (export and import) for such purposes. You'd export a table (or schema) - the result is a ".dmp" file, readable by the import utility. You'd then move that file to your own server (see #2 for the rest).
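For example (a sketch - the directory path, schema and password are placeholders), the owner would create a directory object once and then run the export:

    sqlplus / as sysdba <<'EOF'
    CREATE DIRECTORY dp_dir AS '/u01/dumps';
    GRANT READ, WRITE ON DIRECTORY dp_dir TO hr;
    EOF
    expdp hr/secret SCHEMAS=hr DIRECTORY=dp_dir DUMPFILE=hr.dmp LOGFILE=hr_exp.log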
2- Is there a way to restore this dump into a free lightweight Oracle, or another kind of DB like PostgreSQL?
On your own server (which could be a laptop; no problem), you'd install the free Oracle Express Edition (XE) database. Currently, the latest version is 21c, but some older ones should still be available in the Oracle Technology Network Download section.
3- Is Oracle able to handle any kind of dump and restore it into an Oracle DB, or are there any constraints to respect?
XE database has its limits - from your point of view, the most important is that it can handle up to 12GB of user data. Therefore, if the .dmp file doesn't contain more data than that, you should be able to import it.
Another constraint is compatibility. Not all exports can be imported into all databases. There's a matrix showing which versions match; it is available on the My Oracle Support site (but you have to have access to it, which - as you said - you don't). Generally speaking, though, "close" Oracle database versions can interchange .dmp files. It would be best if the two database versions matched, of course.
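One knob that helps across versions (schema and file names are made up): exporting with the VERSION parameter produces a dump readable by an older release's import utility:

    # export from a newer database so that an older release can import the file
    expdp hr/secret SCHEMAS=hr VERSION=12.2 DIRECTORY=dp_dir DUMPFILE=hr_v122.dmp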
For example: let's say I have two databases, DB1 and DB2. Now my requirement is to refresh data from DB1 to DB2 every night. DB1 is the live database and DB2 is for non-business users doing data analysis.
My questions:
1) Which tool should I use for my requirement? I need a solution that is fast, since the database copy has to be done every day.
2) Does AWS have any tool to automate the backup and restore of the data?
There are a load of ways to do this, and the answer comes down to what storage you're using, whether the databases are on the same server, and ultimately the size of the database.
RMAN is more of a backup/recovery tool, but it's definitely a runner for cloning. If you're not sure what RMAN does, then I wouldn't even start to implement it, as it's very tricky if you aren't super comfortable with Oracle DBs.
My recommendation is to just use Oracle Data Pump: export the schemas you need to a dump file, then ship it over and import them into the other database, making sure to overwrite/drop the existing schemas (see the sketch after this answer).
Other than doing a differential clone at the SAN level, this is probably the quickest and definitely the easiest way to get it done.
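A minimal sketch of that nightly refresh (connect strings, schema name and paths are assumptions; dp_dir is assumed to point at /u01/dumps on both hosts, and TABLE_EXISTS_ACTION=REPLACE drops and recreates tables that already exist):

    # run from cron every night
    expdp system/password@DB1 SCHEMAS=app DIRECTORY=dp_dir DUMPFILE=app.dmp \
      REUSE_DUMPFILES=Y LOGFILE=app_exp.log
    scp /u01/dumps/app.dmp db2host:/u01/dumps/
    impdp system/password@DB2 SCHEMAS=app DIRECTORY=dp_dir DUMPFILE=app.dmp \
      TABLE_EXISTS_ACTION=REPLACE LOGFILE=app_imp.log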
I am embarking on an 11g to 12c Oracle DB migration. I will need to do it at least twice: once for testing, and a second time for production. My initial thought is to use expdp/impdp; I already export the DB "full" nightly using expdp.
My problem in the past when importing a full DB is that it can get squirrely regarding the system schemas/users. A full import tries to muck with system schemas (sys, system, sysman...). My new 12c DB is a pluggable DB, and obviously I want none of the settings or data from the system schemas, which might hose my new DB.
I do, however, want all of the non-system schemas and users, of which there are 5 or so real schemas and 30 or so "users."
I have been looking for some blogs or documents that address this issue, and can't find any. A pointer to documentation on how to avoid the problems described above would be great.
Also if there are any other gotchas when doing the migration, a heads up on that would be useful as well.
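For reference, what I'm imagining is something like driving the import in schema mode even though the dump was taken with FULL=Y, so sys/system/sysman are never touched (schema names here are made up):

    impdp system/password SCHEMAS=app1,app2,app3 DIRECTORY=dp_dir \
      DUMPFILE=full.dmp LOGFILE=schemas_imp.log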
We are developing a large data migration from Oracle DB (12c) to another system with SSIS. The developers are using a production copy database but the problem is that, due to the complexity of the data transformation, we have to do things in stages by preprocessing data into intermediate helper tables which are then used further downstream. The problem is that all developers are using the same database and screw each other up by running things simultaneously. Does Oracle DB offer anything in terms of developer sandboxing? We could build a mechanism to handle this (e.g. have dev ID in the helper tables, then query views that map to the dev), but I'd much rather use built-in functionality. Could I use Oracle Multitenant for this?
We ended up producing a master subset database of select schemas/tables through some fairly elaborate PL/SQL, then made several copies of this master schema so each dev has his/her own sandbox (as Alex suggested). We could have used Oracle Data Masking and Subsetting, but it's too expensive. Another option for creating the subset database would have been to use Jailer. I should note that we didn't need to mask any sensitive data.
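For what it's worth, the per-developer copies can be stamped out with Data Pump's REMAP_SCHEMA - a rough sketch, with made-up schema and developer names rather than our actual setup:

    expdp system/password SCHEMAS=subset_master DIRECTORY=dp_dir \
      DUMPFILE=subset.dmp REUSE_DUMPFILES=Y
    for dev in alice bob carol; do
      impdp system/password DIRECTORY=dp_dir DUMPFILE=subset.dmp \
        REMAP_SCHEMA=subset_master:dev_${dev} LOGFILE=imp_${dev}.log
    done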
Note: I would think this is a fairly common problem, so if new tools and solutions arise, please post them here as answers.
I am working for a company that has two Oracle databases; let's call them LIVE and TEST. An export is performed every night to take a snapshot of the database for each day. TEST is then dropped and recreated using existing table-creation scripts, with the import finally putting the exported data from LIVE into the new TEST environment.
My questions are:
Is this really the best way to do this?
What better way is there?
Any URLs demonstrating these approaches would be great.
Instead of the legacy exp/imp utilities, use Data Pump.
Check Oracle GoldenGate.
Check Oracle Streams.
If you are using Enterprise Edition, then you can look into transportable tablespaces as well, which have the advantage of exactly preserving the physical state of the data files, so performance testing is more realistic.
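A rough sketch of that flow for a single tablespace (the tablespace name and file paths are assumptions):

    sqlplus / as sysdba <<'EOF'
    -- verify the set is self-contained, then make it read-only for the copy
    EXEC DBMS_TTS.TRANSPORT_SET_CHECK('USERS', TRUE);
    SELECT * FROM TRANSPORT_SET_VIOLATIONS;
    ALTER TABLESPACE users READ ONLY;
    EOF
    expdp system/password DIRECTORY=dp_dir DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=users
    # copy tts.dmp and the USERS datafile(s) to the TEST host, then:
    impdp system/password DIRECTORY=dp_dir DUMPFILE=tts.dmp \
      TRANSPORT_DATAFILES='/u02/oradata/TEST/users01.dbf'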