I have a remote server running a huge Oracle 11g database in a Docker container that I need to export/import to my local machine.
I have already tried two approaches:
• copying the database through SQL Developer, which failed with a timeout exception;
• saving the running container into an image and loading it on my machine, which also didn't help, as an initialization error occurred; the resulting image.tar was 10.5 GB.
Since Oracle is commonly used in production environments and is designed to cope with large amounts of data, I'm sure there must be a clear, off-the-shelf solution for exporting/importing a database from one host to another.
Could you give me any ideas, please?
I have two databases that have tables with identical schema. I want to compare the two tables. I learned that cross DB queries need a Database Link.
I use SQL developer and here are the properties of the connection that works
Connection Name: MyConn
Username: SomeUser
Password: SomePassword
Connection Type: Basic
Role: default
Host Name: 12.12.12.12
Port: 2521
SID: xe
I tried this command to create the database link:
CREATE DATABASE LINK MyDBLink
CONNECT TO SomeUser
IDENTIFIED BY "SomePassword"
USING '(DESCRIPTION=
(ADDRESS=(PROTOCOL=TCP)(HOST=12.12.12.12)(PORT=2521))
(CONNECT_DATA=(SID=xe)))';
The command creates a link, but when I try to test the link, the connection does not work; it times out after 60 seconds.
Am I missing something?
The database link runs between the two databases; it is not "outgoing" from your local SQL Developer client.
So even if you can reach the remote database from your local client, the database server where you created the link may not be able to reach the link's target (and in this case it evidently cannot).
The database link itself is created regardless of whether the remote database is reachable or the credentials are correct; it is only validated when you use it.
If you have the possibility, log on directly to the database server where you created the link and check the network connection from there to the server you want to reach. telnet can help here; see the check below.
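For instance, from the server that owns the link, a quick reachability check might look like this (host and port taken from the connection properties above):

# run on the database server where the link was created, not on your PC
telnet 12.12.12.12 2521

# if an Oracle client is installed there, tnsping tests the listener address
tnsping "(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=12.12.12.12)(PORT=2521))(CONNECT_DATA=(SID=xe)))"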
The best solution is to look at the network or operating system and open a path between the two servers, and then the database link should work. If that's not possible, you can use your desktop PC as a proxy.
The official way to route around network issues is to use Oracle Connection Manager. But in my experience that program is a bit hard to configure.
Another option is to create a database on your desktop and create two database links, one to each server. If this desktop database is only used for linking to the others, the free Express Edition should be good enough. If you go down this route, be careful of performance problems when three databases are involved. You'll probably want to compare hashes instead of the actual data, to avoid shipping every row over the network; a sketch follows.
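As a minimal sketch of that hash comparison, assuming the two links are called link_a and link_b and the table has columns id and payload (all names here are hypothetical):

-- run on the in-between database that owns both links
SELECT SUM(ORA_HASH(id || '|' || payload)) FROM some_table@link_a;
SELECT SUM(ORA_HASH(id || '|' || payload)) FROM some_table@link_b;
-- equal sums suggest equal content; if they differ, hash narrower
-- key ranges to locate the mismatching rows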
We have an upcoming migration of our Oracle database to an Exadata server. I want to clarify some issues I have thought of:
Will there be any issues with the code, i.e. performance issues? Exadata has another type of optimizer: it doesn't use indexes and has a columnar optimizer, if I'm not mistaken.
Currently some import/export files are generated on the database server (accessed via FileZilla). I understand that with Exadata the database server is inaccessible, and I suspect that either:
• we will have to move those files to another server - Oracle knows only FTP (whose ports are closed at our client) - so how do we write to / read from another server? (As far as I understand, they would like to put all the files on the WAS server.)
• or we will need to import the files into tables through the Java application and process them from there (and likewise for the exported files).
Can files that arrive automatically from other applications still be written to the database server? Or do we have the same problems as with the manual part?
We have plenty of database jobs that run KSH scripts on the database server - is there a problem with them? I understand they should also be moved to the WAS server, but I do not know how Oracle will call them from there.
Will there be any problems with Jenkins deployments? Does anything change? Here we save the SQL/PL/SQL sources in XML files, from which the whole application is restored (packages, configuration tables, nomenclatures, ...), with the exception of the working data; the XML files are read through a procedure from an Oracle directory.
If you can think of any other issues concerning this migration, any problems you have encountered during or after the migration to Exadata, please share!
Thank you,
Step by step:
On Exadata you are going to have the same optimizer behaviour, with some improvements, because Exadata can improve full table scan performance thanks to smart scans. Indeed, Exadata is able to avoid retrieving data blocks during a full table scan because it knows in advance that they do not contain the needed data.
On Exadata you can expose DBFS file systems to external servers, which might be useful for external tables, imports/exports and so on.
You can write your files to a DBFS file system that you configure.
You could use that DBFS too, if you want the KSH scripts to be accessible from outside the Exadata.
Point your Oracle directory at a directory in the DBFS file system where you put your XML files, and you are done; a sketch follows.
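A minimal sketch, assuming DBFS is mounted at /dbfs/stage and the directory object is called XML_DIR (both names are placeholders):

-- re-point the existing directory object at the DBFS mount
CREATE OR REPLACE DIRECTORY xml_dir AS '/dbfs/stage/xml';
GRANT READ, WRITE ON DIRECTORY xml_dir TO app_user;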
For MySQL, the MXJ connector makes it very easy to launch a managed MySQL instance.
I know that Oracle provides Oracle XE for quick setup, but I've only found an RPM distribution that needs to be installed. Is there a neatly packaged jar that I can just drop in the classpath and start up by calling a specific JDBC url, a la HSQLDB or MXJ?
I'm interested in having developers use this locally for running tests, as well as on our continuous integration server.
The short answer is No. Oracle is a big meaty chunk of database. Amongst other things, it generally expects itself to be run by its own special user rather than the client user.
For simplicity, your best bet is a separate DB server with each of your developers having their own username/password (and hence their own independent schema) in the database.
Although Oracle does not provide an embedded database, spinning up a local Docker container running Oracle XE might be an ideal way to accommodate Oracle-specific local integration tests. Since Docker containers are ephemeral by design, the database can also be torn down completely whenever desired, providing a clean sandbox.
The alexeiled/docker-oracle-xe-11g image on Docker Hub has particularly clear setup instructions and documentation: https://hub.docker.com/r/alexeiled/docker-oracle-xe-11g/
After spinning up the Docker container, be sure to:
First connect to the APEX web console and log in, as per the instructions.
Then open Oracle SQL Developer and select Reset Password... first. Otherwise the following error may be thrown: java.lang.ArithmeticException when attempting to get connection in Oracle 11.2.0.2.0 (64 bit).
As the documentation describes, the docker run command can also be designed to automatically run SQL scripts on the container's startup, which could also be very valuable in the CI/integration testing workflow.
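For illustration, such a run command might look like the following; the port mappings match the image's documented defaults, while the startup-script mount point is an assumption you should verify against the image's README:

# expose the APEX console (8080) and the listener (1521), and mount a
# local folder of .sql scripts to be executed on container startup
# (mount path below is an assumption; check the image's README)
docker run -d -p 8080:8080 -p 1521:1521 \
  -v /path/to/startup_scripts:/docker-entrypoint-initdb.d \
  alexeiled/docker-oracle-xe-11g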
Hope this helps!
I need to automate a selective table / user object backup that I am currently doing via PL/SQL Developer.
The way I currently do it is via Tools/Export Tables and Tools/Export User Objects: manually select the tables/objects, set the options, choose a destination and export. I do this from a Windows laptop; the database is on a SUSE Linux server, and both are on the same LAN. The DB runs 24/7 and cannot be shut down. Also, my Oracle programming skills are currently very basic, as I only maintain this solution. I would like to keep doing the backup from the Windows laptop, but I would also consider a server-side script solution and then retrieving the .sql files from the server.
Thanks in advance
I wouldn't really call it a backup, but look at exp/imp and expdp/impdp (Data Pump) in the Utilities manual.
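For a selective table export, a Data Pump invocation might look roughly like this (schema, table, and credential names are placeholders; DATA_PUMP_DIR must exist as a directory object on the server):

# server-side export of just the chosen tables
expdp some_user/some_password@orcl \
  tables=SOME_SCHEMA.TABLE_A,SOME_SCHEMA.TABLE_B \
  directory=DATA_PUMP_DIR dumpfile=selective.dmp logfile=selective.log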
As Gary implies, exp/imp really isn't a backup solution. If this database is important to you or others, figure out how to use RMAN, which is usually configured to run in a mode that doesn't require the database to be shut down. Although it executes on the database host, and for non-tape destinations must write its files to a filesystem attached to the host, it can be launched remotely.
RMAN is aimed at restoring/recovering the entire database, so if what you're looking for is only the ability to recover isolated objects it may not be for you.
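For reference, a minimal remotely launched RMAN session could look like this (connection details are placeholders, and a real setup needs a configured backup destination and retention policy):

# run from the laptop; RMAN connects to the remote instance and the
# backup files are written on the database host
rman target sys/SomePassword@remotedb <<EOF
BACKUP DATABASE PLUS ARCHIVELOG;
EOF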
I am working on a test server with Oracle 11g installed. I was wondering if there is any way I can replicate the database (environment + data) on my local Linux machine. I am running CentOS 5.3 on Windows XP with Sun VirtualBox. On Windows I use the SQL Developer client to connect to the 11g database.
There are a number of ways to move the data over:
Restore an RMAN backup on your test server
Export and import the data using exp/imp or expdp/impdp (Data Pump)
Export and import using a transportable tablespace
Use database links to duplicate the data using SQL (a sketch follows the list)
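For the last option, a minimal sketch (the link, TNS alias, and table names are placeholders):

-- on the local database: create a link to the test server and copy a table
CREATE DATABASE LINK src_link CONNECT TO some_user IDENTIFIED BY "SomePassword" USING 'SRCTNS';
CREATE TABLE local_copy AS SELECT * FROM some_table@src_link;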
You can use the Database Configuration Assistant to generate a template from your production database. This will give you all the parameters and tablespaces, among other things. You will need to tweak the configuration somewhat; for instance, the file paths may be wrong, and some parameters may need downsizing. You can then feed that template into DBCA to clone the database on your Linux machine.
To get the schemas and data you should use Data Pump (rather than the older Import/Export utilities). This can be run from the command line or from PL/SQL.
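As an illustration of the PL/SQL route, a schema-mode export through DBMS_DATAPUMP might be sketched as follows (the schema name and file names are placeholders):

DECLARE
  h NUMBER;
BEGIN
  -- open a schema-mode export job
  h := DBMS_DATAPUMP.open(operation => 'EXPORT', job_mode => 'SCHEMA');
  DBMS_DATAPUMP.add_file(h, filename => 'app_schema.dmp', directory => 'DATA_PUMP_DIR');
  DBMS_DATAPUMP.add_file(h, filename => 'app_schema.log', directory => 'DATA_PUMP_DIR',
                         filetype => DBMS_DATAPUMP.ku$_file_type_log_file);
  -- limit the job to a single schema
  DBMS_DATAPUMP.metadata_filter(h, name => 'SCHEMA_EXPR', value => 'IN (''APP_SCHEMA'')');
  DBMS_DATAPUMP.start_job(h);
  DBMS_DATAPUMP.detach(h);
END;
/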
Bear in mind that using production data in a development or test environment can cause you to run foul of data protection laws and other compliance issues. It depends on what your application does and what jurisdiction you operate under. But if your production system contains citizens' personal data you need to be very careful. There are products out there which will apply masking as part of a data import process (Oracle sells one) but they tend to be expensive. Rolling your own masking product can be tricky: if this applies to your situation be sure to get your compliance staff (legal team) involved early.
I would suggest you install Oracle XE, which is free to use, on your local machine, provided your development does not depend on database features beyond what XE offers. You can then use the methods given above to pump data into Oracle XE and compile your code against it, though for development I don't think you would need as much data as in production.