Does copy-pasting Apache Derby DB files onto another system make it work fine - derby

I have developed an application with a Derby DB and created the database on my system. I need to deliver the application along with the DB. I have deleted all the data from the tables, so only the table structures (with no data) remain. If I copy the DB files (log, seg0, tmp, db.lck, service.properties, all in a single folder) to another system, will it work fine?

Yes, it will work fine, although for a clean copy you should ensure that no application is accessing the database at the time you copy the database folder.
From the Derby docs: http://db.apache.org/derby/docs/10.10/getstart/cgsintro.html
The on-disk database format used by Derby is portable and platform-independent. You can move Derby databases from machine to machine without needing to modify the data. A Derby application can include a pre-built, populated database if it needs to, and that database will work in any Derby configuration.
For more information about packaging a database with your application, see: http://db.apache.org/derby/docs/10.10/devguide/cdevdeploy32171.html
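As a minimal sketch (not taken from the linked docs), this is how the shipped database folder could be opened with the embedded driver on the target system; the folder name myappdb and the catalog query are placeholders, and derby.jar is assumed to be on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class OpenShippedDb {
    public static void main(String[] args) throws Exception {
        // "myappdb" stands for the copied folder containing seg0, log,
        // service.properties, etc.; db.lck and tmp can safely be left behind.
        try (Connection conn = DriverManager.getConnection("jdbc:derby:myappdb");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT TABLENAME FROM SYS.SYSTABLES WHERE TABLETYPE = 'T'")) {
            while (rs.next()) {
                System.out.println("Found table: " + rs.getString(1));
            }
        }
    }
}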

Related

Oracle application - migration to Exadata server

We have an upcoming migration of our Oracle database to an Exadata server. I want to clarify some issues I have thought of:
Will there be any issues with the code, such as performance issues? Exadata has a different type of optimizer; it doesn't use indexes and has a columnar optimizer, if I'm not mistaken.
Currently there are some import or export files generated on the database server (accessed via FileZilla). I understand that on Exadata the database server is inaccessible, and I suspect that either:
• we will have to move those files to another server - Oracle knows only FTP (whose ports are closed at our client) -> how do we write to / read from another server? (As far as I understand, they would like to put all the files on the WAS server.)
• or we will need to import the files into the table using the Java application and process them from there (and the same with the exported files).
Can files that come automatically from other applications be written to the database server? Or do we have the same problems as with the manual part?
We have plenty of database jobs that run KSH scripts on the database server - is there a problem with them? I understand they should also be moved to the WAS server, but I do not know how Oracle will call them from there.
Will there be any problems with Jenkins deployments? Does anything change? Here we save the SQL/PLSQL sources in XML files, from which the whole application is restored (packages, configuration tables, nomenclatures ...), with the exception of the working data; the XML files are read through a procedure from an Oracle directory.
If you can think of any other issues concerning this migration, any problems you have encountered during or after the migration to Exadata, please share!
Thank you,
Step by step:
On Exadata you are going to have the same optimizer behaviour, with some improvements, because Exadata may improve full table scan performance thanks to smart scans. Indeed, Exadata is able to avoid retrieving data blocks during a full table scan because it knows in advance that they do not contain needed data.
On Exadata you can expose DBFS file systems to external servers, which can be useful for external tables, imports/exports and so on.
You can write your files to a DBFS that you configure.
You could use your DBFS if you want the KSH scripts to be accessed from outside your Exadata.
Let your Oracle directory point to a directory on the DBFS file system where you put your XML files, and you are done.
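A hedged sketch of that last step over JDBC (the connection string, the directory name xml_dir, the DBFS mount /dbfs/xmlfiles and the user names are all placeholders; the connecting user needs the CREATE ANY DIRECTORY privilege):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RepointXmlDirectory {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//exadata-scan:1521/MYPDB", "dba_user", "secret");
             Statement st = conn.createStatement()) {
            // Re-point the directory object at a location on the DBFS mount
            // where the XML files will be placed.
            st.execute("CREATE OR REPLACE DIRECTORY xml_dir AS '/dbfs/xmlfiles'");
            st.execute("GRANT READ, WRITE ON DIRECTORY xml_dir TO app_user");
        }
    }
}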

Oracle Data Pump Transfer Between Databases

I have a specific need for data pump and I am having a hard time searching for a solution.
Currently, I have an exp/imp program that exports tables (selectively, based on queries) from one database and imports that same data into another database. This program and the dump files reside on a common server that can access both the source and destination databases. This is a totally automated process. It works well, albeit slowly.
For various reasons, I must migrate this program to use Data Pump. The biggest change now is the location of the dmp files. I also have very limited access to the database servers themselves, but I can run Data Pump.
The process will be run from the same common server, but the exported files will now reside on the database server for the source database. No issue there. I can create dmp files using expdp.
My issue is how to get that same data into the destination database. When I run impdp, it is expecting a data_pump_dir in the destination area (not source area). Again, this is automated, and I don't have the luxury of being able to transfer dmp files using scp or ftp or anything like that.
What can I use to overcome this problem using Data Pump?
There is no reason you cannot configure an external directory on BOTH databases:
CREATE DIRECTORY mydumpdir AS '/whatever/the/path/is';
Then impdp and expdp will take the DIRECTORY argument as mydumpdir.
Make sure you configure permissions so that the Oracle schemas/users can read/write to the directory, AND the Oracle process account should also have OS-level rights to read/write to that location. The expdp server should also have write access, as it might be trying to write reports to the location, or you might be using it to do file cleanup.
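For illustration, the same setup could be scripted from the common server over JDBC against both databases (the connection strings, the grantee etl_user and the credentials are placeholders; the path matches the CREATE DIRECTORY above), after which expdp and impdp are simply pointed at the directory:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SetupDumpDirs {
    // Create the directory object and grant access on one database.
    static void setup(String jdbcUrl) throws Exception {
        try (Connection conn = DriverManager.getConnection(jdbcUrl, "system", "password");
             Statement st = conn.createStatement()) {
            st.execute("CREATE OR REPLACE DIRECTORY mydumpdir AS '/whatever/the/path/is'");
            st.execute("GRANT READ, WRITE ON DIRECTORY mydumpdir TO etl_user");
        }
    }

    public static void main(String[] args) throws Exception {
        setup("jdbc:oracle:thin:@//source-host:1521/SRCDB");
        setup("jdbc:oracle:thin:@//dest-host:1521/DSTDB");
        // Then, from the common server:
        //   expdp ... DIRECTORY=mydumpdir DUMPFILE=export.dmp
        //   impdp ... DIRECTORY=mydumpdir DUMPFILE=export.dmp
    }
}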

How to load an H2 database into memory?

I have written a set of unit tests using H2 in embedded mode. Whatever changes the tests make to the DB stay there.
I know that the recommended approach is to create a blank in-memory database and create the schema when opening the connection.
However, I am looking for an alternative approach. I would like to either:
Initialize an in-memory database from an embedded database file.
Or use the embedded DB in a way that all changes are discarded as soon as the connection is closed.
How can I achieve this?
What I do in cases like this is write an SQL script that creates the database and populates the tables. Then the application applies a database migration using Flyway DB.
Another possibility is to create the database and load the tables from CSV files. Yet another is to create the database with a different application and use the SCRIPT command to create a backup file; your main application would then have to run the RUNSCRIPT command to restore the database.
I use SQL scripts that create tables and other objects and/or populate them, and run these scripts at the beginning of the application.
One could also create a copy of the populated on-disk DB, package it into a ZIP/JAR archive, and open it read only, to be used to recreate and populate the in-memory DB.
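As a sketch of the SCRIPT/RUNSCRIPT route (file names, paths and credentials are placeholders), H2's INIT URL parameter replays a script each time the in-memory database is created, so all changes vanish when the last connection (or the JVM) goes away:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class H2InMemoryFromScript {
    public static void main(String[] args) throws Exception {
        // One-off step: dump the populated on-disk database to a script file.
        try (Connection disk = DriverManager.getConnection("jdbc:h2:./data/testdb", "sa", "");
             Statement st = disk.createStatement()) {
            st.execute("SCRIPT TO 'init.sql'");
        }

        // In the tests: replay the script into a throwaway in-memory database.
        String url = "jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1;INIT=RUNSCRIPT FROM 'init.sql'";
        try (Connection mem = DriverManager.getConnection(url, "sa", "")) {
            // ... run the unit tests against 'mem' ...
        }
    }
}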

H2 Database multiple connections

I have the following issue:
Two instances of an application on two different systems should share a small database.
The main problem is that both systems can only exchange data through a network folder.
I don't have the possibility to set up a database server somewhere.
Is it possible to place an H2 database on the network folder and let both instances connect to the database (also concurrently)?
I could connect to the DB with both instances using embedded mode if I disable file locking, right?
The instances perform either READ or INSERT operations on the DB. Do I risk data corruption using multiple concurrent embedded connections?
As the documentation says ( http://h2database.com/html/features.html#auto_mixed_mode ):
Multiple processes can access the same database without having to start the server manually. To do that, append ;AUTO_SERVER=TRUE to the database URL. You can use the same database URL independent of whether the database is already open or not. This feature doesn't work with in-memory databases.
// Application 1:
DriverManager.getConnection("jdbc:h2:/data/test;AUTO_SERVER=TRUE");
// Application 2:
DriverManager.getConnection("jdbc:h2:/data/test;AUTO_SERVER=TRUE");
From the H2 documentation:
It is also possible to open the database without file locking; in this
case it is up to the application to protect the database files.
Failing to do so will result in a corrupted database.
I think that if your application always uses the same configuration (a shared file database on a network folder), you need to create an application layer that manages concurrency.
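For completeness, here is what the no-file-locking variant from the quote could look like (the mounted network path Z:/shared/test is a placeholder). With FILE_LOCK=NO nothing prevents two embedded processes from writing at the same time, so this is only safe if your own application layer serializes access:

import java.sql.Connection;
import java.sql.DriverManager;

public class NoLockExample {
    public static void main(String[] args) throws Exception {
        // Embedded mode on the shared folder with file locking disabled;
        // the application must guarantee that only one process writes at a time,
        // otherwise the database file can be corrupted.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:h2:Z:/shared/test;FILE_LOCK=NO", "sa", "")) {
            // read / insert ...
        }
    }
}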

Installer package for program that uses JDBC to connect to MySQL

I have an installer wizard tool called 'Install Creator'. I want to include my MySQL database in the installer, or find another way so that the user, upon installation, can just use my database. The problem is that not everyone has MySQL installed on their computer, and even then, the user doesn't know the name of the database or my password. Somehow the database must be created automatically upon install, and for my purposes, some of the tables created as well. How can one do this? Thanks.
If you are just using MySQL as a local storage engine, as it seems you are, then you should consider using SQLite with JDBC instead of MySQL. MySQL is really intended to be used on a server, where information from multiple users is stored and where the database is accessed only indirectly through the programs that you create that run on the server. You could, in theory, package up MySQL and MySQL Connector/J, which lets JDBC talk to MySQL; however, MySQL is a pretty big beast, and I don't think it's nice to do that to your users (also, don't forget that they might already have MySQL installed, and if you were to install MySQL for the first time, you would effectively be forcing them to use your root password). Unlike MySQL, SQLite is intended to provide the structure of SQL for lightweight, local file storage.
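A minimal sketch of the SQLite route (the file name app.db and the table are placeholders; this assumes the Xerial sqlite-jdbc driver is on the classpath). The database file is created on first use, so there is nothing for the user to install or configure:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class LocalStore {
    public static void main(String[] args) throws Exception {
        // Creates app.db next to the application on first run; no server,
        // no root password, nothing for the user to set up.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:app.db");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)");
            st.execute("INSERT INTO notes (body) VALUES ('hello')");
        }
    }
}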
