I am using an in-memory HSQLDB database with a JDBC driver.
Now I am looking for a way to persist this database so it can be reloaded after an application restart. I came up with the following options:
Export a .script file with the SQL command SCRIPT <path> (link)
Log all statements to a log file.
Option 2 works, but it seems kind of ugly in my eyes. The script export for option 1 works too, but I seem to be unable to get the .script file back into an in-memory database.
I am thankful for any advice.
The first option is correct.
After you export the database with the SCRIPT <path> statement, you can get it into an in-memory database.
You need to connect to the scripted database with a read-only file: URL.
For example, if you export the database to d:/dbfiles/mydb.script, you will get the mydb.script file in the named directory. To connect to this database, use the URL jdbc:hsqldb:file:d:/dbfiles/mydb;files_readonly=true.
There is absolutely no speed difference between the above method and a mem: database.
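To illustrate the round trip, here is a minimal JDBC sketch (the paths, table, and credentials are placeholders; it assumes the HSQLDB driver is on the classpath):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ScriptRoundTrip {
        public static void main(String[] args) throws Exception {
            // Work in a pure in-memory database.
            try (Connection mem = DriverManager.getConnection("jdbc:hsqldb:mem:mydb", "SA", "");
                 Statement st = mem.createStatement()) {
                st.execute("CREATE TABLE t (id INT PRIMARY KEY, name VARCHAR(50))");
                st.execute("INSERT INTO t VALUES (1, 'example')");
                // Export the whole database as an SQL script.
                st.execute("SCRIPT 'd:/dbfiles/mydb.script'");
            }

            // Later, e.g. after a restart: reopen the scripted database read-only.
            // HSQLDB replays mydb.script into memory and never writes the files back.
            try (Connection ro = DriverManager.getConnection(
                    "jdbc:hsqldb:file:d:/dbfiles/mydb;files_readonly=true", "SA", "")) {
                // ... query as usual ...
            }
        }
    }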
Related
My instructor gave me a username, a password, and a .dbf file, and told me to open it and try to retrieve the data with SQL*Plus and an Oracle database.
I tried to open the .dbf file with Excel, MySQL, and MS SQL Server, but each attempt gave me an error.
Speaking as a DBA: As Littlefoot stated, you can't just read a data file from an Oracle DB. At best they are proprietary binary file formats, assuming it isn't encrypted on top of that. Nor can you take a data file from one database instance and just plug it in to another database instance. You also can't import it to mySQL or any other database engine: as a stand-alone data file it can only be properly read by its original database installation (i.e. the specific database instance that created it).
Oracle has specific tools available to copy data and/or files from one database to another, but those would generally use the RMAN backup manager (used to make physical backups) or (more likely in your case) the Datapump "Transportable Tablespace" feature.
To restore it from an RMAN backup you would need a complete full backup of the entire source database instance: RMAN backup sets including all data files, redo logs (and perhaps archived logs), control files, parameter files, encryption keys, and possibly more.
To restore a transportable tablespace dump you would need your own running Oracle database instance, the correct parameters to run the impdp import utility, and the assistance/cooperation of the DBA.
You need to confirm if the file you were given is such an export dump (though the .dbf file extension would suggest not), and how you are expected to access the data. You won't be able to just "open the file".
The .DBF extension probably represents a datafile; I don't think you can read it with any tool (at least, I don't know of any).
You should find an Oracle DBA who might try to help; in order to restore a database (which is contained in that file), they might need the control file(s), redo log files and ... I can't name what other files (I'm not a DBA).
Then, if everything goes OK, the database might be started up so that you'd be able to connect to it using the credentials you were given.
I'm trying to export an Oracle DB with tables, sequences, views, packages, etc. that depend on each other, using Oracle SQL Developer.
When I use Tools -> Database Export and select all DDL options, the exported SQL file unfortunately does not preserve the dependency order, that is, the fact that some DB objects must be created before others.
Is there a way to make the DB export utility preserve object dependencies/order? Or is there another tool you use for this task?
Thank you
Normally expdp does a pretty good job. Problems arise when there are dependencies on objects/users that are not part of the dump. This is because the counterpart, impdp, does not add grants on objects that it did not create itself. I call that the 'not created by me' syndrome that impdp has.
If you have no external dependencies (external meaning to schemas that are not part of the dump), expdp/impdp do a good job for you. You might not be able to use them if you cannot get access to the database server, since expdp writes its files on the database server.
If you happen to have access to a database server that is able to connect to the original database, you could pull the data over into your local database using a database link.
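A rough sketch of that approach over JDBC (all names, the TNS alias, and credentials here are placeholders, and CREATE DATABASE LINK requires the corresponding privilege):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class PullOverDbLink {
        public static void main(String[] args) throws Exception {
            // Connect to the LOCAL database that can reach the original one.
            try (Connection local = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/LOCALPDB", "myuser", "mypass");
                 Statement st = local.createStatement()) {
                // One-time setup: a link pointing at the source database.
                st.execute("CREATE DATABASE LINK src_link "
                        + "CONNECT TO remote_user IDENTIFIED BY remote_pass "
                        + "USING 'SRC_TNS_ALIAS'");
                // Copy a table (structure and data) across the link.
                st.execute("CREATE TABLE mytable AS SELECT * FROM mytable@src_link");
            }
        }
    }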
I have written a set of unit tests using H2 in embedded mode. Whatever changes the tests make to the DB stay there.
I know that the recommended approach is to create a blank in-memory database and create the schema when opening the connection.
However, I am looking for an alternative approach. I would like to:
Initialize an in-memory database with an embedded database file.
Or use the embedded DB in a way that all changes are discarded as soon as the connection is closed.
How can I achieve this?
What I do in cases similar to this is to write the SQL script that creates the database and populates the tables. Then the application applies a database migration using Flyway DB.
Other possibilities are to create the database and load the tables from CSV files, or to create the database with a different application and produce a backup file with the SCRIPT command; your main application would then have to run the RUNSCRIPT command to restore the database, as sketched below.
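A minimal sketch of that SCRIPT/RUNSCRIPT round trip with H2 (file and database names are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class H2ScriptRestore {
        public static void main(String[] args) throws Exception {
            // Prepare once: dump an existing file-based database to a script.
            try (Connection src = DriverManager.getConnection("jdbc:h2:./proddata", "sa", "");
                 Statement st = src.createStatement()) {
                st.execute("SCRIPT TO 'backup.sql'");
            }

            // Each test run: restore the script into a fresh in-memory database.
            // Everything is discarded when the last connection to it closes.
            try (Connection mem = DriverManager.getConnection("jdbc:h2:mem:testdb", "sa", "");
                 Statement st = mem.createStatement()) {
                st.execute("RUNSCRIPT FROM 'backup.sql'");
                // ... run the tests ...
            }
        }
    }

H2 can also run such a script automatically via the INIT clause in the JDBC URL, e.g. jdbc:h2:mem:testdb;INIT=RUNSCRIPT FROM 'backup.sql'.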
I use SQL scripts that create tables and other objects and/or populate them, and run these scripts at the beginning of the application.
One could also create a copy of the populated on-disk DB, package it into a ZIP/JAR archive, and open it read-only, to be used to recreate and populate the in-memory DB (see the sketch below).
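H2 can open a database packaged that way directly; a hedged sketch (archive and database names are placeholders, and the archive must contain the database files under that name):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class ZippedReadOnlyDb {
        public static void main(String[] args) throws Exception {
            // Opens the database packed inside data.zip, read-only.
            // Tests can then copy rows from here into a fresh mem: database.
            try (Connection ro = DriverManager.getConnection(
                    "jdbc:h2:zip:~/data.zip!/testdb", "sa", "")) {
                // ... read reference data ...
            }
        }
    }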
Is it possible to get the structure of a Derby database saved in the form of an SQL script that I can run to recreate the database along with the data in it?
The dblook tool (http://db.apache.org/derby/docs/10.8/tools/ctoolsdblook.html) will get the structure of the database and export it as a SQL script.
But it doesn't extract the data.
You could perhaps use the backup and restore utilities, but the format of a Derby backup is not an SQL script.
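For reference, dblook can also be invoked programmatically; a small sketch (database name and output file are placeholders, and it assumes derby.jar and derbytools.jar are on the classpath):

    public class DumpDdl {
        public static void main(String[] args) throws Exception {
            // Writes the DDL (tables, indexes, views, ...) of the source
            // database to myDB.sql; the data itself is NOT included.
            org.apache.derby.tools.dblook.main(new String[] {
                    "-d", "jdbc:derby:myDB",
                    "-o", "myDB.sql"
            });
        }
    }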
The Apache 'ddlutils' tool can extract and move the data, I believe. See: http://db.apache.org/derby/integrate/db_ddlutils.html
and
http://db.apache.org/ddlutils/
We hit a problem while trying to export an Oracle DB. OS: CentOS ~5.2, DB: Oracle 10g.
The exp command exports DB files only in this location:
/home/oracle/OraHome_1/oradata/master/xxx.dbf
but the tool can't export files in a different location (we learned about these files from a trace), like these:
'/disk1/dblog06.dbf',
'/home/disk2/system01.dbf',
Please advise me how to get a dump file, or how to back it up.
Thanks.
You appear to have misunderstood what exp does, and particularly what the file parameter is for. The file is the output dump file, normally given a .dmp extension. Export takes data out of the database instance, it does not work under the hood on the datafiles - you have to tell it which data you want (full, user, tables, or tablespaces) and where to put it, not where it comes from.
If you really did try to exp file=/home/disk2/system01.dbf then what you actually asked it to do was trash your database; you're lucky that it did not overwrite the datafile and cause a catastrophic failure. Oracle seems to have saved you from yourself there, though possibly only thanks to having exclusive locks on the files at the time.
You need to read up on how it works and see if it actually does what you want - as APC notes, it's not a backup tool. Look at the Oracle documentation for your version, or somewhere like http://www.orafaq.com/wiki/Import_Export_FAQ, and also look at using Data Pump instead of exp.
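For contrast, a correct invocation names the output dump file and the logical scope to export; a hedged sketch wrapping the call in Java (credentials, connect string, and paths are placeholders, and exp must be on the PATH of a configured Oracle client):

    import java.io.IOException;

    public class RunExport {
        public static void main(String[] args) throws IOException, InterruptedException {
            // file= names the OUTPUT dump; owner= picks the schema to export.
            // exp never takes datafile paths as input.
            Process p = new ProcessBuilder(
                    "exp", "scott/tiger@orcl",
                    "file=/home/oracle/scott.dmp",
                    "owner=scott",
                    "log=/home/oracle/scott_exp.log")
                    .inheritIO()
                    .start();
            System.exit(p.waitFor());
        }
    }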
I am not sure if that is the question, but the exp command will export database objects according to their logical schema (user name, table name). It does not matter which physical database file the data is coming from.
exp works through an Oracle instance, which needs to have mounted the datafiles.
Are these other files part of the Oracle database? Maybe another database? You need to find out which Oracle server uses them, and then run exp against that instance.
EXPORT is not a backup tool. It is meant for transferring data from one database to another, or perhaps from one schema to another.
If you want to recover your data in the event of a database crash or corruption then you need to use the appropriate tool. There are OS solutions to this, but Oracle comes with a sophisticated backup and recovery tool: RMAN. Find out more.