How to load an H2 database into memory? - h2

I have written a set of unit tests using H2 in embedded mode. Whatever changes the tests make to the DB stay there.
I know that the recommended approach is to create a blank in-memory database and create the schema when opening the connection.
However, I am looking for an alternative approach. I would like to either:
initialize an in-memory database from an embedded database file,
or use the embedded DB in a way that all the changes are discarded as soon as the connection is closed.
How can I achieve this?

What I do in cases similar to this is write an SQL script that creates the database and populates the tables. The application then applies the script as a database migration using Flyway DB.
Other possibilities are to create the database and load the tables from CSV files, or to create the database with a different application and produce a backup file with the SCRIPT command; your main application would then run the RUNSCRIPT command to restore the database.
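A minimal sketch of that SCRIPT/RUNSCRIPT round trip in plain JDBC, assuming a populated embedded database at ~/source and a dump file named init.sql (both names are assumptions); H2's INIT URL parameter replays the dump into a fresh in-memory database at connect time, so all changes vanish once the last connection closes:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Dump schema and data of the populated embedded DB to a SQL file.
try (Connection src = DriverManager.getConnection("jdbc:h2:~/source");
     Statement st = src.createStatement()) {
    st.execute("SCRIPT TO 'init.sql'");
}
// Open a fresh in-memory DB and replay the dump on connect.
try (Connection mem = DriverManager.getConnection(
        "jdbc:h2:mem:test;INIT=RUNSCRIPT FROM 'init.sql'")) {
    // run the tests against 'mem' here
}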

I use SQL scripts that create tables and other objects and/or populate them, and run these scripts at application startup.
One could also create a copy of the populated on-disk DB, package it into a ZIP/JAR archive, and open it read-only, to be used to recreate and populate the in-memory DB.
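H2 can open a database packaged in an archive directly; a sketch, assuming the copied database files were zipped into ~/data.zip and the database inside is named test (both are assumptions). The zip: file system is read-only, so the packaged copy stays pristine:

import java.sql.Connection;
import java.sql.DriverManager;

// Open the packaged database read-only, straight from the archive.
Connection ro = DriverManager.getConnection("jdbc:h2:zip:~/data.zip!/test");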

Related

Oracle DB Export does not preserve order or dependencies

I'm trying to export an Oracle DB using Oracle SQL Developer; it has tables, sequences, views, packages, etc. with dependencies on each other.
When I use Tools -> Database Export and select all DDL options, unfortunately the exported SQL file does not preserve the order; that is, some DB objects should be created before others.
Is there a way to make the DB export utility preserve object dependencies/order? Or is there another tool you would use for this task?
Thank you
Normally expdp does a pretty good job. Problems arise when there are dependencies on objects/users that are not part of the dump. This is because the counterpart, impdp, does not add grants on objects that it did not create itself. I call that the 'not created by me' syndrome that impdp has.
If you have no external dependencies (external meaning schemas that are not part of the dump), expdp/impdp do a good job for you. You might not be able to use them if you cannot get access to the database server, since expdp writes its files on the database server.
If you happen to have access to a database server that is able to connect to the original database, you could pull the data over into your local database using a database link.
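A sketch of that pull, issued as plain Oracle SQL over JDBC; the connection details, link name, credentials, TNS alias, and table name are all assumptions:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hypothetical connection details for the local database.
try (Connection con = DriverManager.getConnection(
        "jdbc:oracle:thin:@localhost:1521/LOCALPDB", "localuser", "secret");
     Statement st = con.createStatement()) {
    // Point a link at the original database (alias from tnsnames.ora).
    st.execute("CREATE DATABASE LINK srclink CONNECT TO remoteuser IDENTIFIED BY secret USING 'SRCDB'");
    // Copy one table's structure and data across the link.
    st.execute("CREATE TABLE my_table AS SELECT * FROM my_table@srclink");
}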

Is there a need in Neo4j to have initial scripts, just like an RDBMS store needs initial CREATE (and other DDL) scripts, to insert, update, etc.?

I recently was working with Liquibase, which is capable of generating the initial DDL script for my JPA entities.
I am trying to do the same for my entities which have Neo4j as the store. Is there any library like Liquibase which I can use to get my work done? Can someone shed light on this?
Is there a need in Neo4j to have initial scripts, just like an RDBMS store needs initial CREATE (and other DDL) scripts, to insert, update, etc.?
I don't want to use the auto capability of Spring Boot.
There is no need in Neo4j to create or update the schema itself, as you do in SQL. The schema is built dynamically from the data you have in your database.
But if you're trying to manage migration of the data stored in your database, you can take a look at Liquigraph. It's able to manage Cypher queries within changesets.

Cloning a Oracle Database Schema

I have an Oracle 12c instance with a schema user 'wadmin'; this instance has tables, views, data, triggers, sequences, etc.
For quick spinning-up of Docker images, I need to clone the DB schema as fast as possible, so that I can create another user 'wadmin1', link it to a new Docker container, and start my testing.
Are there any CLI/tools for this? Does Oracle provide any options?
I do not know if this is exactly what you are looking for, but you can export your Oracle schema using the Oracle Data Pump tool. This involves storing the exported schema in an Oracle directory. While exporting the schema to a file you can transform the schema name, omit unnecessary tables or data, etc. The exported files can later be used to import the schema into a new database instance. You can find more information on Oracle Data Pump here: https://oracle-base.com/articles/10g/oracle-data-pump-10g#SchemaExpImp
Alternatively, you can keep the scripts that create the database in a Git repository and integrate your builds with a tool called Flyway (https://flywaydb.org/), which can be used to automate database schema creation (see the sketch below). This is also really convenient from a source control point of view: all changes to the schema go through pull requests.
In my team we use Oracle Data Pump when we want to recreate the database together with the data; Flyway is used as part of our continuous integration.
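A minimal sketch of triggering such a migration from code with Flyway's Java API; the connection details are assumptions, and by default Flyway looks for versioned SQL scripts on the classpath under db/migration:

import org.flywaydb.core.Flyway;

// Hypothetical connection details for the freshly spun-up instance.
Flyway flyway = Flyway.configure()
        .dataSource("jdbc:oracle:thin:@localhost:1521/ORCLPDB1", "wadmin1", "secret")
        .load();
flyway.migrate();  // applies pending migrations in version order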

Does copy-pasting Apache Derby DB files into another system make it work fine?

I have developed an application with Derby DB. I have created the DB on my system. I need to deliver the application along with the DB. I have deleted all the data from the tables; only the tables (structure, with empty data) remain. So if I copy the DB files (log, seg0, tmp, db.lck, service.properties, all in a single folder) to another system, will it work fine?
Yes, it will work fine, although for the cleanest flow you should ensure that no application is accessing the database at the time that you copy the database folder.
From the Derby docs: http://db.apache.org/derby/docs/10.10/getstart/cgsintro.html
The on-disk database format used by Derby is portable and platform-independent. You can move Derby databases from machine to machine without needing to modify the data. A Derby application can include a pre-built, populated database if it needs to, and that database will work in any Derby configuration.
For more information about packaging a database with your application, see: http://db.apache.org/derby/docs/10.10/devguide/cdevdeploy32171.html
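A sketch of that pre-built-database deployment, assuming the database folder was archived into dbs.jar with the database stored under products/sampleDB (both names are assumptions); Derby's jar subprotocol opens a packaged database read-only:

import java.sql.Connection;
import java.sql.DriverManager;

// Hypothetical archive path and database path inside the archive.
Connection con = DriverManager.getConnection(
        "jdbc:derby:jar:(dbs.jar)products/sampleDB");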

H2 Database multiple connections

I have the following issue:
Two instances of an application on two different systems should share a small database.
The main problem is that both systems can only exchange data through a network folder.
I don't have the possibility to set up a database server somewhere.
Is it possible to place an H2 database on the network folder and let both instances connect to it (also concurrently)?
I could connect with both instances to the DB using embedded mode if I disable the file locking, right?
The instances can perform either READ or INSERT operations on the DB. Do I risk data corruption using multiple concurrent embedded connections?
As the documentation says (http://h2database.com/html/features.html#auto_mixed_mode):
Multiple processes can access the same database without having to start the server manually. To do that, append ;AUTO_SERVER=TRUE to the database URL. You can use the same database URL independent of whether the database is already open or not. This feature doesn't work with in-memory databases.
// Application 1:
DriverManager.getConnection("jdbc:h2:/data/test;AUTO_SERVER=TRUE");
// Application 2:
DriverManager.getConnection("jdbc:h2:/data/test;AUTO_SERVER=TRUE");
From the H2 documentation:
It is also possible to open the database without file locking; in this
case it is up to the application to protect the database files.
Failing to do so will result in a corrupted database.
I think that if your application always uses the same configuration (a shared file database on a network folder), you need to create an application layer that manages concurrency.
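A minimal sketch of such a layer, assuming both instances can reach the same lock file on the share (the path is an assumption). Note that java.nio file locks are advisory and their behavior on network file systems varies, so treat this as an illustration rather than a guarantee:

import java.io.RandomAccessFile;
import java.nio.channels.FileLock;

// Hypothetical lock file sitting next to the database on the network folder.
try (RandomAccessFile raf = new RandomAccessFile("/mnt/share/db.lock", "rw");
     FileLock lock = raf.getChannel().lock()) {
    // Exclusive lock held: open the embedded DB, do the READ/INSERT, close it.
}   // lock released here; the other instance may now open the database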
