Verify H2 backup is not corrupted

I am developing an H2 (TCP) based application, and I created a feature that takes a backup file when a button is clicked.
I used the
BACKUP TO '<FILENAME>'
command to take online backups. I had tested it and it works, but in very rare instances the backup is corrupted.
Is there any API in H2 to check whether the backup file is corrupted? I am thinking I would load that backup zip file, then run count queries on all tables and display the results on the screen.
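For reference, the backup is triggered over JDBC roughly like this (the server URL, credentials and backup file name are placeholders for whatever the application actually uses):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class H2OnlineBackup {
        public static void main(String[] args) throws Exception {
            // Placeholder URL/credentials for an H2 TCP server; adjust to your setup.
            String url = "jdbc:h2:tcp://localhost:9092/~/appdb";
            try (Connection conn = DriverManager.getConnection(url, "sa", "");
                 Statement stmt = conn.createStatement()) {
                // BACKUP TO writes a zip of the database files while the database stays online.
                stmt.execute("BACKUP TO 'backup/appdb-backup.zip'");
            }
        }
    }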

As Thomas (the creator of H2) has stated before, the fastest way to verify a backup is to get a connection to it. If you want to know whether the data itself is corrupted, doing a backup and restore is another encouraged approach. Currently there isn't any dedicated API or tool for this.
From Thomas:
There is a small risk that the database file exists but is not fully initialized. If that is the case,
then some of the tables don't exist. The standard way to verify that all tables exist is to use
DatabaseMetaData.getTables.
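In code, the check described above might look like the following sketch. It assumes the backup zip has already been extracted (for example to ./restore) and that the database file inside it is named appdb; the URL, credentials and names are placeholders:

    import java.sql.*;

    public class VerifyBackup {
        public static void main(String[] args) throws Exception {
            // Assumes the backup zip has already been extracted so that ./restore/appdb.mv.db exists.
            // IFEXISTS=TRUE prevents H2 from silently creating a new, empty database.
            String url = "jdbc:h2:./restore/appdb;IFEXISTS=TRUE";
            try (Connection conn = DriverManager.getConnection(url, "sa", "")) {
                DatabaseMetaData meta = conn.getMetaData();
                try (ResultSet tables = meta.getTables(null, null, "%", new String[] {"TABLE"})) {
                    while (tables.next()) {
                        String schema = tables.getString("TABLE_SCHEM");
                        String name = tables.getString("TABLE_NAME");
                        try (Statement stmt = conn.createStatement();
                             ResultSet rs = stmt.executeQuery(
                                     "SELECT COUNT(*) FROM \"" + schema + "\".\"" + name + "\"")) {
                            rs.next();
                            System.out.println(schema + "." + name + ": " + rs.getLong(1) + " rows");
                        }
                    }
                }
            }
        }
    }

If the connection opens and every count query succeeds, the table data in the backup is at least readable.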

Related

Open dbf file from oracle database

My instructor gave me a username, a password and a .dbf file, and told me to open it and try to retrieve the data with SQL*Plus and an Oracle database.
I tried to open the .dbf file from Excel, MySQL and MS SQL Server, but it gave me an error.
Speaking as a DBA: As Littlefoot stated, you can't just read a data file from an Oracle DB. At best they are proprietary binary file formats, assuming it isn't encrypted on top of that. Nor can you take a data file from one database instance and just plug it in to another database instance. You also can't import it to mySQL or any other database engine: as a stand-alone data file it can only be properly read by its original database installation (i.e. the specific database instance that created it).
Oracle has specific tools available to copy data and/or files from one database to another, but those would generally use the RMAN backup manager (used to make physical backups) or (more likely in your case) the Datapump "Transportable Tablespace" feature.
To restore it from an RMAN backup you would need a complete full backup of the entire source database instance: RMAN backup sets including all data files, redo logs (and perhaps archived logs), control files, parameter files, encryption keys, and possibly more.
To restore a transportable tablespace dump you would need your own running Oracle database instance, the correct parameters to run the impdp import utility, and the assistance/cooperation of the DBA.
You need to confirm if the file you were given is such an export dump (though the .dbf file extension would suggest not), and how you are expected to access the data. You won't be able to just "open the file".
The .DBF extension probably represents a datafile; I don't think you can read it with any tool (at least, I don't know of any).
You should find an Oracle DBA who might try to help; in order to restore a database (which is contained in that file), they might need control file(s), redo log files and ... I can't name what other files (I'm not a DBA).
Then, if everything goes OK, the database might be started up so that you'd be able to connect to it using the credentials you were given.

Oracle application - migration to Exadata server

We have an upcoming migration of our Oracle database to an Exadata server. I want to clarify some issues I have thought of:
Will there be any issues with the code, i.e. performance issues? Exadata has a different type of optimizer; it doesn't use indexes and has a columnar optimizer, if I'm not mistaken.
Currently there are some import or export files generated on the database server (accessed via FileZilla). I understand that on Exadata the database server is inaccessible, and I suspect that either:
• we will have to move those files to another server - Oracle only knows FTP (whose ports are closed at our client) -> how do we write to / read from another server? (as far as I understand, they would like to put all the files on the WAS server)
• or we will need to import the files into tables using the Java application and process them from there (and the same with the exported files).
Can files that come automatically from other applications be written to the database server? Or do we have the same problems as with the manual part?
We have plenty of database jobs that run KSH scripts on the database server - is there a problem with them? I understand they should also be moved to the WAS server, but I do not know how Oracle will call them from there.
Will there be any problems with Jenkins deployments? Anything changed? Here we save the SQL/PLSQL sources in some XML files, from which the whole application is restored (packages, configuration tables, nomenclatures ...) (with the exception of the working data) (the XML files are read through a procedure from an oracle directory).
If you can think of any other issues concerning this migration, any problems you have encountered during or after the migration to Exadata, please share!
Thank you,
Step by step:
On Exadata you are going to have the same optimizer behaviour, with some improvements, because Exadata may improve full table scan performance thanks to smart scans. Indeed, Exadata is able to avoid retrieving data blocks during a full table scan because it knows in advance that they do not contain the needed data.
On Exadata you can export DBFS file systems to external servers; these might be useful for external tables, imports/exports and so on.
You can write your files to a DBFS file system that you configure.
You could use your DBFS if you want the KSH scripts to be accessible from outside your Exadata.
Point your Oracle directory to a directory in the DBFS file system where you put your XML files, and you are done.
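As a rough sketch of the last point, re-pointing an existing Oracle directory object at a DBFS path is a single DDL statement; the directory name, DBFS path and connection details below are made up for illustration (the user needs the CREATE ANY DIRECTORY privilege):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class RepointOracleDirectory {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details for the Exadata database service.
            String url = "jdbc:oracle:thin:@//exadata-scan:1521/MYSERVICE";
            try (Connection conn = DriverManager.getConnection(url, "app_admin", "secret");
                 Statement stmt = conn.createStatement()) {
                // Re-point the directory object to a location on the DBFS file system
                // where the XML files will be placed.
                stmt.execute("CREATE OR REPLACE DIRECTORY XML_DIR AS '/dbfs/staging/xml'");
            }
        }
    }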

How to check that an H2 database is not corrupted?

The H2 database is not very stable (but very fast, which is very good for DEV), especially during the development process. I hope that the number of corruptions is due to the immediate shutdown of the server (during debugging).
How can I ensure that an H2 database is not corrupted, in order to guarantee that a backup is good?
Probably the best way to check if everything is OK is to create a SQL script from the database, using the SCRIPT statement. If that works, then the data is fully readable. The index data might still be corrupt, but indexes can be re-created.
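A minimal sketch of that check over JDBC, with a placeholder server URL and output file name:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class ScriptCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder URL; point it at the database (or restored backup) to be checked.
            String url = "jdbc:h2:tcp://localhost:9092/~/appdb";
            try (Connection conn = DriverManager.getConnection(url, "sa", "");
                 Statement stmt = conn.createStatement()) {
                // SCRIPT TO reads every table and writes a SQL script; if it completes
                // without an exception, all table data is readable.
                stmt.execute("SCRIPT TO 'check/appdb.sql'");
            }
        }
    }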
Another option is to always backup the data in the form of a SQL script. This will make a separate check unnecessary; but backup is a bit slower and can't be done online (while updates are happening).
By the way: if a database file gets corrupt, it's due to misconfiguration or wrong usage (H2 supports disabling the transaction log), due to hardware failure, or due to a bug in the database engine itself.

how to use exp command to export Oracle DB with files in different disk location

We get a problem while trying to export an Oracle DB. OS: CentOS ~5.2, DB: Oracle 10g.
The exp command exports DB files only in this location:
/home/oracle/OraHome_1/oradata/master/xxx.dbf
, but the tool can't export files in a different location (we found out about these files from a trace), like these:
'/disk1/dblog06.dbf',
'/home/disk2/system01.dbf',
Please advise me how to get a dump file, or how to back it up.
Thanks.
You appear to have misunderstood what exp does, and particularly what the file parameter is for. The file is the output dump file, normally given a .dmp extension. Export takes data out of the database instance, it does not work under the hood on the datafiles - you have to tell it which data you want (full, user, tables, or tablespaces) and where to put it, not where it comes from.
If you really did try to exp file=/home/disk2/system01.dbf then what you actually asked it to do was trash your database; you're lucky that it did not overwrite the datafile and cause a catastrophic failure. Oracle seems to have saved you from yourself there, though possibly only thanks to having exclusive locks on the files at the time.
You need to read up on how it works and see if it actually does what you want - as APC notes, it's not a backup tool. Look at the Oracle documentation for your version, or somewhere like http://www.orafaq.com/wiki/Import_Export_FAQ, and also look at using Data Pump instead of exp.
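For illustration only, here is roughly what a correct invocation looks like, wrapped in Java since exp is an external command-line tool; the connect string, schema and paths are placeholders. Note that FILE= names the dump file exp writes and OWNER= names the schema to export - you never point exp at a .dbf datafile:

    import java.io.IOException;

    public class RunExport {
        public static void main(String[] args) throws IOException, InterruptedException {
            // exp is normally run straight from the shell; this just shows the parameters.
            ProcessBuilder pb = new ProcessBuilder(
                    "exp", "scott/tiger@master",              // placeholder credentials / connect string
                    "OWNER=scott",                            // which schema's objects to export
                    "FILE=/home/oracle/exports/scott.dmp",    // output dump file, not a datafile
                    "LOG=/home/oracle/exports/scott.log");
            pb.inheritIO();
            int exitCode = pb.start().waitFor();
            System.out.println("exp finished with exit code " + exitCode);
        }
    }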
I am not sure if that is the question, but the exp command will export database objects according to their logical schema (user name, table name). It does not matter which physical database file the data is coming from.
exp works through an Oracle instance, which needs to have mounted the datafiles.
Are these other files part of the Oracle database? Maybe another database? You need to find out which Oracle server uses them, and then run exp against that instance.
EXPORT is not a backup tool. It is meant for transferring data from one database to another, or perhaps from one schema to another.
If you want to recover your data in the event of a database crash or corruption then you need to use the appropriate tool. There are OS solutions to this, but Oracle comes with a sophisticated backup and recovery tool: RMAN. Find out more.

How to duplicate an Oracle instance?

How can I duplicate an Oracle instance? Does anyone have any idea how to do so?
Assuming you want the schema and data duplicated, use the exp and imp commands to export your database, then import it as another user using the FROMUSER and TOUSER parameters.
Well, presumably you have a backup (surely!), so just test your backup recovery on your test server.
To be slightly more serious, it depends on what version you are using; newer versions of RMAN make it pretty easy, I believe, while with older versions you basically do it as a backup and recovery.
How I've done it in the past, is basically
copy backup data files
create init file
create a new controlfile with the command 'CREATE CONTROLFILE SET DATABASE "TEST" RESETLOGS ARCHIVELOG'
Apply archivelogs and then open with resetlogs
Here is an article which explains the process with a bit more detail
A minor comment on your terminology - "instance" is actually the set of processes running on the database server host and you want to duplicate the "database".
As someone else mentioned, the best way is to start with an RMAN backup of the original database. However, since Oracle 9 RMAN has had the "DUPLICATE DATABASE" command, which takes care of a lot of housekeeping that used to be necessary if you just made a copy by restoring a production backup (e.g. resetting DBID, changing data and log file locations in the control file, setting database GLOBAL_NAME, etc.).
If you're not using RMAN, and the database is on the small side, you can script something that puts each tablespace in hot backup mode, copies the datafiles for that tablespace to a backup location, and then takes the tablespace out of hot backup mode. You now have a recoverable backup that can be moved to another host for archive log application. This definitely has a performance impact on the original database and should be your last resort.
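A sketch of that hot-backup loop over JDBC; the connection details are placeholders, and the actual datafile copy (an OS-level step) is only indicated by a comment:

    import java.sql.*;
    import java.util.ArrayList;
    import java.util.List;

    public class HotBackupMode {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; the user needs DBA/ALTER TABLESPACE privileges.
            String url = "jdbc:oracle:thin:@//dbhost:1521/PROD";
            try (Connection conn = DriverManager.getConnection(url, "backup_admin", "secret")) {
                List<String> tablespaces = new ArrayList<>();
                try (Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery(
                             "SELECT tablespace_name FROM dba_tablespaces WHERE contents = 'PERMANENT'")) {
                    while (rs.next()) {
                        tablespaces.add(rs.getString(1));
                    }
                }
                for (String ts : tablespaces) {
                    try (Statement stmt = conn.createStatement()) {
                        stmt.execute("ALTER TABLESPACE " + ts + " BEGIN BACKUP");
                        // ... copy the datafiles belonging to this tablespace with OS tools here ...
                        stmt.execute("ALTER TABLESPACE " + ts + " END BACKUP");
                    }
                }
            }
        }
    }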
Create a template based on your existing instance. You can then create other instances.
