I have a specific need for data pump and I am having a hard time searching for a solution.
Currently, I have an exp/imp program that exports tables (selectively, based on queries) from one database and imports that same data into another database. This program and the dump files reside on a common server that can access both the source and destination databases. This is a totally automated process. It works well, albeit slowly.
Due to various reasons, I must migrate this program to use data pump. The biggest change now is the location of the dmp files. I also have very limited access to the database servers themselves, but I can run data pump.
The process will be run from the same common server, but the exported files will now reside on the database server for the source database. No issue there. I can create dmp files using expdp.
My issue is how to get that same data into the destination database. When I run impdp, it is expecting a data_pump_dir in the destination area (not source area). Again, this is automated, and I don't have the luxury of being able to transfer dmp files using scp or ftp or anything like that.
What can I use to overcome this problem using datapump?
No reason you cannot configure an external directory on BOTH databases:
CREATE DIRECTORY mydumpdir AS '/whatever/the/path/is';
Then impdp and expdp will accept mydumpdir as the DIRECTORY argument.
Make sure you grant the Oracle schemas/users read/write on the directory, AND make sure the oracle process account has OS-level rights to read/write that location as well. The server running expdp should also have write access, as it might be trying to write reports to that location, or you might be using it to do file cleanup.
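For example, the grant on each database might look something like this (the directory and user names are just placeholders):

GRANT READ, WRITE ON DIRECTORY mydumpdir TO app_user;

At the OS level, the path itself also has to be readable/writable by the oracle software owner.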
My instructor gave me a username, a password, and a .dbf file, and told me to open it and try to retrieve the data with SQL*Plus and an Oracle database.
I tried to open the .dbf file with Excel, MySQL and MS SQL Server, but it gave me an error.
Speaking as a DBA: As Littlefoot stated, you can't just read a data file from an Oracle DB. At best they are proprietary binary file formats, assuming it isn't encrypted on top of that. Nor can you take a data file from one database instance and just plug it in to another database instance. You also can't import it to mySQL or any other database engine: as a stand-alone data file it can only be properly read by its original database installation (i.e. the specific database instance that created it).
Oracle has specific tools available to copy data and/or files from one database to another, but those would generally use the RMAN backup manager (used to make physical backups) or (more likely in your case) the Datapump "Transportable Tablespace" feature.
To restore it from an RMAN backup you would need a complete full backup of the entire source database instance: RMAN backup sets including all data files, redo logs (and perhaps archived logs), control files, parameter files, encryption keys, and possibly more.
To restore a transportable tablespace dump you would need your own running Oracle database instance, the correct parameters to run the impdp import utility, and the assistance/cooperation of the DBA.
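Just as an illustration of what that typically involves (the directory, dump file name and datafile path below are made up), a transportable tablespace import might look roughly like:

impdp system/password DIRECTORY=dp_dir DUMPFILE=tts_metadata.dmp TRANSPORT_DATAFILES='/u01/oradata/MYDB/users01.dbf' LOGFILE=tts_imp.log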
You need to confirm if the file you were given is such an export dump (though the .dbf file extension would suggest not), and how you are expected to access the data. You won't be able to just "open the file".
.DBF extension probably represents datafile; I don't think you can read it with any tool (at least, I don't know of any).
You should find an Oracle DBA who might try to help; in order to restore a database (which is contained in that file), they might need control file(s), redo log files and ... I can't name what other files (I'm not a DBA).
Then, if everything goes OK, the database might be started up so that you'd be able to connect to it using the credentials you were given.
We have an upcoming migration of our Oracle database to an Exadata server. I want to clarify some issues I have thought of:
Will there be any issues with the code - performance issues? Exadata has another type of optimizer; it doesn't use indexes and has a columnar optimizer, if I'm not mistaken.
Currently there are some import or export files generated on the database server (accessed via FileZilla). I understand that on Exadata the database server is inaccessible, and I suspect that either:
• we will have to move those files to another server - Oracle supports only FTP (for which the ports are closed at our client) -> how do we write / read from another server? (as far as I understand, they would like to put all the files on the WAS server)
• or we will need to import the files into tables using the Java application and process them from there (and the same with the exported files).
Can files that come automatically from other applications be written to the database server? Or do we have the same problems as with the manual part?
We have plenty of database jobs that run KSH scripts on the database server - is there a problem with them? I understand they should also be moved to the WAS server, but I do not know how Oracle will call them from there.
Will there be any problems with Jenkins deployments? Anything changed? Here we save the SQL/PLSQL sources in some XML files, from which the whole application is restored (packages, configuration tables, nomenclatures ...) (with the exception of the working data) (the XML files are read through a procedure from an oracle directory).
If you can think of any other issues concerning this migration, any problems you have encountered during or after the migration to Exadata, please share!
Thank you,
Step by step:
On Exadata you are going to have the same optimizer behaviour, with some improvements, because Exadata may improve full table scan performance thanks to smart scans. Indeed, Exadata is able to avoid retrieving data blocks during a full table scan because it knows in advance that they do not contain the needed data.
On Exadata you can expose DBFS file systems to external servers, which might be useful for external tables, imports/exports and so on.
You can write your files to the DBFS file systems you configure.
You could use your DBFS if you want the KSH scripts to be accessible from outside your Exadata.
Point your Oracle directory object to a directory in the DBFS file system where you put your XML files, and you are done.
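As a rough sketch (the mount point, directory name and user are only examples), assuming the DBFS is mounted on the compute nodes at /dbfs/fs1:

CREATE OR REPLACE DIRECTORY xml_dir AS '/dbfs/fs1/xml_files';
GRANT READ, WRITE ON DIRECTORY xml_dir TO app_user;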
The Oracle Data Pump export utility expects a DIRECTORY parameter (DBA_DIRECTORIES) which exists on the DB server. Is it possible to map this directory to the local machine, or is there any other way to export multiple tables to the local machine from an Oracle database?
If you are using Data Pump, there is no direct way to store a dump file on your local machine. That is the way Data Pump is designed.
However, there is one possible way to achieve what you want. The workaround has two steps:
Run expdp as usual, which creates a dump file on the server
Use the ocp tool to transfer the dump file from the database server to your local machine (and back, if you want to).
The ocp tool name stands for "Oracle Copy"; it was written exactly for the purpose of copying dump files back and forth from/to a database server. It is available here: https://github.com/maxsatula/ocp/releases/download/v0.1/ocp-0.1.tar.gz That is a source distribution, so once downloaded and unpacked, run ./configure && make
(Hopefully you do not have Windows on a client side, because I never tried to compile it there)
That is a simple command-line tool with a simple syntax. For example, this command will pull a file for you:
ocp <connection_string> DATA_PUMP_DIR:remote_file_name.dmp local_file_name.dmp
The tool uses a database connection and a minimum set of database privileges.
Update:
Finally I was able to adjust the source code and build the ocp tool for Windows 32-bit:
https://github.com/maxsatula/ocp/releases/download/v0.1/ocp-0.1-win32.zip
Compiled/tested with 32-bit Instant Client 11.2.0.4 available here: http://www.oracle.com/technetwork/topics/winsoft-085727.html
instantclient-basiclite-nt-11.2.0.4.0.zip (20,258,449 bytes)
I believe it will work with a full Oracle Client installation too (just watch the bits, it should be 32), however I did not check it myself.
Unfortunately, Windows build of ocp does not have a fancy progress meter during file transfer. That piece of code had too much *nix-specific stuff, so I had to cut it off.
Also, since it uses the popt and zlib libraries, which are compiled as part of the GnuWin project and available in 32-bit only, ocp for Windows is 32-bit only too. Hopefully not having a 64-bit version is not mission critical for you.
Update 2:
Warning! Make sure you always use a DEDICATED server connection when downloading files from the server, otherwise (for a SHARED server) the downloaded copy of the file will be corrupted with no error messages!
With a bit of a hack you can get data pump to do what you want, but you need to have a database on your local machine.
What you need to do is create a database link on your local machine to the remote machine.
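For example, on the local database (the link name, remote user and TNS alias below are placeholders):

CREATE DATABASE LINK remote_link CONNECT TO remote_user IDENTIFIED BY remote_password USING 'remote_tns_alias';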
Then, in the Data Pump options, log in to the local database as the db link owner and specify the 'network_link' option as the name of the database link you created. That way it should export from the remote database through the local database and create the file on your local instance. For example:
expdp <user>/<password> directory=<local_dir_object> network_link=<dblink name on local instance> dumpfile=.. logfile=.. tables=... (or schemas=...)
No, Data Pump sucks that way, but Oracle can get faster throughput using the same server the db sits on, so that's the tradeoff. There are other enhancements too, but I still think this is a big disadvantage of Data Pump. Use the old exp/imp or third-party tools for this purpose.
You should ask yourself: "Why do I want to keep data outside the database - the most secure place for my data, where backup, restore and recovery are in place?"
If you are going to move data from database A to database B, make sure both databases have access to a common file area where they can reach the dump files through their directory objects, and use Data Pump.
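For example (all names and paths below are just placeholders), with a mount visible to both database servers:

On both databases: CREATE DIRECTORY shared_dump_dir AS '/shared/dumpfiles';
Against database A: expdp user_a/password DIRECTORY=shared_dump_dir DUMPFILE=transfer.dmp SCHEMAS=user_a
Against database B: impdp user_b/password DIRECTORY=shared_dump_dir DUMPFILE=transfer.dmp REMAP_SCHEMA=user_a:user_b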
If you still want to export data to the client side you can use the good old tools exp and imp.
I need to automate a selective table / user object backup I currently am doing via PL / SQL Developer.
The way I currently do it is via Tools/Export Tables and Tools/Export User Objects: manually select the tables/objects, then set the options, choose the destination and export. I do this from a Windows laptop and the database is located on a SUSE Linux server; both are on the same LAN. The DB is running 24/7 and cannot be shut down. Also, my Oracle programming skills are currently very basic, as I only do maintenance on this solution. I would like to keep doing the backup process on the Windows laptop, but I would also consider a server-side script solution and then retrieving the .sql files from the server.
Thanks in advance
I wouldn't really call it a backup, but look at exp/imp and expdp/impdp (data pump) in the Utilities manual
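As a rough sketch (user, table names and directory are placeholders), a selective table export with the legacy exp tool and with expdp could look like:

exp scott/password FILE=tables.dmp LOG=exp.log TABLES=(EMP,DEPT)
expdp scott/password DIRECTORY=DATA_PUMP_DIR DUMPFILE=tables.dmp LOGFILE=expdp.log TABLES=EMP,DEPT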
As Gary implies, exp/imp really isn't a backup solution. If this database is important to you or others, figure out how to use RMAN, which is usually configured to run in a mode that doesn't require the database to be shut down. Although it executes on the database host and, for non-tape destinations, must write its files to a filesystem attached to the host, it can be launched remotely.
RMAN is aimed at restoring/recovering the entire database, so if what you're looking for is only the ability to recover isolated objects it may not be for you.
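Purely as a sketch (the connection details are placeholders, and this assumes the database runs in ARCHIVELOG mode), an online backup launched from a remote client could be as simple as:

rman target sys/password@proddb
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;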
We get a problem while trying to export an Oracle DB. OS: CentOS ~5.2, DB: Oracle 10g.
The exp command exports db files only in this location:
/home/oracle/OraHome_1/oradata/master/xxx.dbf
, but the tool can't export files in a different location (we learned about these files after getting a trace), like these:
'/disk1/dblog06.dbf',
'/home/disk2/system01.dbf',
Please advise me how to get a dump file, or how to back it up.
Thanks.
You appear to have misunderstood what exp does, and particularly what the file parameter is for. The file is the output dump file, normally given a .dmp extension. Export takes data out of the database instance, it does not work under the hood on the datafiles - you have to tell it which data you want (full, user, tables, or tablespaces) and where to put it, not where it comes from.
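For example (the user names and paths below are only illustrative), a correct invocation names the output dump file and the data to export, not the datafiles:

exp system/password FILE=/home/oracle/exports/full.dmp LOG=full.log FULL=Y
exp scott/password FILE=/home/oracle/exports/scott.dmp LOG=scott.log OWNER=scott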
If you really did try to exp file=/home/disk2/system01.dbf then what you actually asked it to do was trash your database; you're lucky that it did not overwrite the datafile and cause a catastrophic failure. Oracle seems to have saved you from yourself there, though possibly only thanks to having exclusive locks on the files at the time.
You need to read up on how it works and see if it actually does what you want - as APC notes, it's not a backup tool. Look at the Oracle documentation for your version, or somewhere like http://www.orafaq.com/wiki/Import_Export_FAQ, and also look at using Data Pump instead of exp.
I am not sure if that is the question, but the exp command will export database objects according to their logical schema (user name, table name). It does not matter which physical database file the data is coming from.
exp works through an Oracle instance, which needs to have mounted the datafiles.
Are these other files part of the Oracle database? Maybe another database? You need to find out which Oracle server uses them, and then run exp against that instance.
EXPORT is not a backup tool. It is meant for transferring data from one database to another, or perhaps from one schema to another.
If you want to recover your data in the event of a database crash or corruption then you need to use the appropriate tool. There are OS solutions to this, but Oracle comes with a sophisticated backup and recovery tool: RMAN. Find out more.