Upgrade problem with H2 database when upgrading from 1.4.192 to 1.4.200: Scale must not be bigger than precision

Years ago I wrote an app to capture data into H2 datafiles for easy transport and archival purposes. The application was written with H2 1.4.192.
Recently, I have been revisiting some load code relative to that application and I have found that there are some substantial gains to be had in some things I am doing in H2 1.4.200.
I would like to be able to load the data that I had previously saved into the other databases. But I have some tables that used a now-invalid precision/scale specification. Here is an example:
CREATE CACHED TABLE PUBLIC.MY_TABLE(MY_COLUMN DATETIME(23,3) SELECTIVITY 5)
H2 databases created with 1.4.192 that contain tables like this will not load on 1.4.200; they fail with the following error:
Scale($"23") must not be bigger than precision({1}); SQL statement:
CREATE CACHED TABLE PUBLIC.MY_TABLE(MY_COLUMN DATETIME(23,3) SELECTIVITY 5) [90051-200] 90051/90051 (Help)
My question is how can I go about correcting the invalid table schema? My application utilizes a connection to an H2 database and then loads the data it contains into another database. Ideally I'd like to have my application be able to detect this situation and repair it automatically so the app can simply utilize the older data files. But in H2 1.4.200 I get the error right up front upon connection.
Is there a secret/special mode that will allow me to connect 1.4.200 to the database to repair its schema? I hope???
Outside of that, it seems like my only option is to have separate classloaders for different versions of H2, with the remedial operations happening in one classloader and the load operations in another. Either that, or start another instance of the JVM to do the remedial operations.
I wanted to check for options before I did a bunch of work.
This problem is similar to this reported issue, but there were no specifics on how he performed his resolution.

This data type is not valid and was never supported by H2, but old versions of H2, due to a bug, somehow accepted it.
You need to export your database to a script with 1.4.192 Beta using
SCRIPT TO 'source.sql'
You need to use the original database file, because if you opened a file from 1.4.192 Beta with 1.4.200, it may have been corrupted by it; such an automatic upgrade is not supported.
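As a minimal sketch of that export step, run in a separate process whose classpath contains only the 1.4.192 jar (the database name "archive" and the default sa user are assumptions; use your real file name and credentials):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Run with only the H2 1.4.192 jar on the classpath, against the original database file.
public class ExportOldArchive {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:h2:./archive", "sa", "");
             Statement stmt = conn.createStatement()) {
            stmt.execute("SCRIPT TO 'source.sql'");  // dumps schema and data as SQL
        }
    }
}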
You need to replace DATETIME(23,3) with TIMESTAMP(3) (or whatever you need) using a text editor. If the exported SQL is too large for regular text editors, you can use a stream editor, such as sed:
sed 's/DATETIME(23,3)/TIMESTAMP(3)/g' source.sql > fixed.sql
Now you can create a new database with 1.4.200 and import the edited script into it:
RUNSCRIPT FROM 'fixed.sql'
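If you want the application itself to handle the repair rather than a text editor or sed, here is a rough sketch of the fix-and-import half, run with the 1.4.200 driver. The file names, credentials and the single DATETIME(23,3) replacement are assumptions, and the export step above still has to be done with the 1.4.192 driver (e.g. in a separate process, as in the earlier sketch):

import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Run with the H2 1.4.200 jar on the classpath, after the 1.4.192 export has produced source.sql.
public class RepairOldArchive {
    public static void main(String[] args) throws Exception {
        // Rewrite the invalid type in the exported script (file names are assumptions).
        String sql = Files.readString(Path.of("source.sql"));
        Files.writeString(Path.of("fixed.sql"), sql.replace("DATETIME(23,3)", "TIMESTAMP(3)"));

        // Create a fresh 1.4.200 database and import the corrected script into it.
        try (Connection conn = DriverManager.getConnection("jdbc:h2:./archive_fixed", "sa", "");
             Statement stmt = conn.createStatement()) {
            stmt.execute("RUNSCRIPT FROM 'fixed.sql'");
        }
    }
}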

Related

How do I integrate Liquibase within an existing CI/CD pipeline in a large organization?

We are working in a very big organization, with many databases (of many types), many schemas and many users.
Does LB have to work with some source control (for locking the files when many users in the organization work against the same DB, same schema, etc.)?
What is the best practice for working with LB in a very big organization with many concurrent users?
Can SQLcl generate changeLogs in SQL format, or just XML format?
Is there some integration with SQL Developer? I mean, suppose a user changes an object via SQL Developer, what happens then?
We get this type of question all the time; after folks get a handle on how to automate DB changes, the next step is typically to add it into an existing CI/CD workflow.
Yes, Liquibase works with any source control. Most users are using Git, but you can use Git, TFS, SVN, CVS... Once you are up and running with Liquibase, you just need to make sure that your scripts are in source control and you are good to go.
Besides 3rd-party source control tools, Liquibase has a tracking table called "DATABASECHANGELOG" that keeps track of the changes applied to your database when using Liquibase deployments.
Here is some more information about getting started and How Liquibase Works. https://www.liquibase.org/get_started/how-lb-works.html
Liquibase has one more table that it uses internally, called "DATABASECHANGELOGLOCK".
This table was designed to prevent multiple Liquibase users from running deployments concurrently, which could potentially leave the database in a bad state. Once a Liquibase deployment (the liquibase update command) is done, the "DATABASECHANGELOGLOCK" table will allow the next Liquibase user to deploy.
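If you want to see what Liquibase has recorded, the tracking table can be queried like any other table. A small sketch (the connection details are assumptions; the columns shown are the usual subset of DATABASECHANGELOG):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ChangeLogAudit {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; point this at the database Liquibase manages.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/ORCL", "app", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT ID, AUTHOR, FILENAME, DATEEXECUTED, ORDEREXECUTED " +
                 "FROM DATABASECHANGELOG ORDER BY ORDEREXECUTED")) {
            while (rs.next()) {
                System.out.printf("%d: %s by %s (%s) at %s%n",
                    rs.getInt("ORDEREXECUTED"), rs.getString("ID"),
                    rs.getString("AUTHOR"), rs.getString("FILENAME"),
                    rs.getTimestamp("DATEEXECUTED"));
            }
        }
    }
}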
You can use both SQL and XML formats (or even JSON and YAML formats).
When using SQL, you have a few options:
Best option is to use Formatted SQL changeLogs https://www.liquibase.org/documentation/sql_format.html
https://www.liquibase.org/get_started/quickstart_sql.html
You can use plain raw SQL files referenced from an XML changeLog
https://www.liquibase.org/documentation/changes/sql_file.html
When using XML, you can find all the available change types (also called changeSets) on the following page (listed on the left side of the page):
https://www.liquibase.org/documentation/changes/
XML changeLogs are more database-agnostic and can sometimes be reused across different database platforms when doing migrations. Also, many of the change types in XML can be rolled back automatically. This is possible with XML because Liquibase uses its own built-in functions to figure out inverse statements, for example turning "create table" into "drop table".
For each of those changeSets you can find out whether they are eligible for automatic rollback (at the bottom of the page). For example, the create table changeSet has Auto Rollback = yes.
https://www.liquibase.org/documentation/changes/create_table.html
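If your CI/CD job is Java-based, the update step can also be run programmatically instead of through the CLI. A minimal sketch using the pre-4.x Liquibase Java API (the changeLog path and connection details are assumptions):

import java.sql.Connection;
import java.sql.DriverManager;

import liquibase.Contexts;
import liquibase.LabelExpression;
import liquibase.Liquibase;
import liquibase.database.Database;
import liquibase.database.DatabaseFactory;
import liquibase.database.jvm.JdbcConnection;
import liquibase.resource.ClassLoaderResourceAccessor;

public class RunLiquibaseUpdate {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; the changeLog is loaded from the classpath.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "app", "secret")) {
            Database db = DatabaseFactory.getInstance()
                .findCorrectDatabaseImplementation(new JdbcConnection(conn));
            Liquibase liquibase = new Liquibase(
                "db/changelog-master.xml", new ClassLoaderResourceAccessor(), db);
            liquibase.update(new Contexts(), new LabelExpression());  // same effect as "liquibase update"
        }
    }
}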

Sybase to Oracle table Migration via Migration Wizard offline

How can I create a script of inserts for my Sybase to Oracle migration? The Migration Wizard only gives me the option to migrate procedures, triggers and such; there is no select for just tables. When I try to migrate tables offline and move data, the datamove/ folder is empty. I would also like to migrate only specific tables (the ones with long identifiers), because I was able to migrate the rest with Copy to Oracle.
I must also note that I do not want to upgrade to a new version of Oracle. We are currently on ~12.1, so I need to limit the identifiers.
How can I get the offline scripts for table inserts?
You (probably!) don't want INSERTs for offline migration scripts. If you're just running INSERTs, then the online method would probably suffice.
The point of the Offline strategy is to take the data from your Sybase instance to flat, delimited text files (using BCP), which we can THEN use to load back into an Oracle Database using SQLLDR or External Tables, which will be EXPONENTIALLY faster than using INSERT scripts.
Take a look at this whitepaper where I go into offline Sybase migrations in detail.
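As a rough illustration of that flow: once BCP has produced a delimited file, the Oracle side can expose it as an external table and do a direct-path insert into the real target. Everything below (directory path, file name, columns, credentials) is a made-up example under those assumptions, not output from the Migration Wizard:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ExternalTableLoad {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/ORCL", "app", "secret");
             Statement stmt = conn.createStatement()) {
            conn.setAutoCommit(false);

            // Directory object pointing at the folder holding the BCP export files.
            stmt.execute("CREATE OR REPLACE DIRECTORY BCP_DIR AS '/data/bcp_out'");

            // External table over a pipe-delimited BCP file (columns are illustrative).
            stmt.execute(
                "CREATE TABLE customers_ext (customer_id NUMBER, customer_name VARCHAR2(100)) " +
                "ORGANIZATION EXTERNAL (" +
                "  TYPE ORACLE_LOADER DEFAULT DIRECTORY BCP_DIR" +
                "  ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE FIELDS TERMINATED BY '|')" +
                "  LOCATION ('customers.bcp')" +
                ") REJECT LIMIT UNLIMITED");

            // Direct-path load into the real target table, far faster than row-by-row INSERTs.
            stmt.execute("INSERT /*+ APPEND */ INTO customers SELECT * FROM customers_ext");
            conn.commit();
        }
    }
}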
You can consider DCO-based Sybase-to-Oracle replication via the Sybase Rep Server. This way, not only will you have all data moved, but you will also be able to have DML updates propagated online, which will make your system switchable live.

Oracle application - migration to Exadata server

We have an upcoming migration of our Oracle database to an Exadata server. I want to clarify some issues I have thought of:
Will there be any issues with the code, performance issues? Exadata has a different type of optimizer; it doesn't use indexes and has a columnar optimizer, if I'm not mistaken.
Currently there are some import and export files generated on the database server (accessed via FileZilla). I understand that on Exadata the database server is inaccessible, and I suspect that either:
• we will have to move those files to another server - Oracle knows only FTP (which has ports closed at our client) -> how do we write / read from another server? (as far as I understand, they would like to put all the files on the WAS server)
• or we will need to import the files into the table using the java application and process them from there (and the same with the exported files).
Can files that come automatically from other applications be written to the database server? Or do we have the same problems as with the manual part?
We have plenty of database jobs that run KSH scripts on the database server - is there a problem with them? I understand they should also be moved to the WAS server, but I do not know how Oracle will call them from there.
Will there be any problems with Jenkins deployments? Anything changed? Here we save the SQL/PLSQL sources in some XML files, from which the whole application is restored (packages, configuration tables, nomenclatures ...) (with the exception of the working data) (the XML files are read through a procedure from an oracle directory).
If you can think of any other issues concerning this migration, any problems you have encountered during or after the migration to Exadata, please share!
Thank you,
Step by step:
On Exadata you are going to have the same optimizer behaviour, with some improvements, because Exadata may improve full table scan performance thanks to smart scans. Indeed, Exadata is able to avoid retrieving data blocks during full table scans because it knows in advance that they do not contain the needed data.
On Exadata you can create DBFS file systems and export them to external servers; these can be useful for external tables, imports/exports and so on.
You can write your files to the DBFS file system you configure.
You could use your DBFS here as well; if you want, the ksh scripts can be accessed from outside your Exadata.
Let your Oracle directory point to a directory in the DBFS file system where you put your XML files and you are done.
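For that last point, a small sketch of what it looks like from the Java side, assuming a DBFS file system already created and mounted at /dbfs/app (the path, directory name and connection details are assumptions):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PointDirectoryAtDbfs {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; run as a user allowed to create directory objects.
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//exadata-scan:1521/APPDB", "app", "secret");
             Statement stmt = conn.createStatement()) {
            // Re-point the directory object the deployment procedure already reads from.
            stmt.execute("CREATE OR REPLACE DIRECTORY DEPLOY_XML AS '/dbfs/app/deploy_xml'");
            // The PL/SQL procedure that loads the XML files keeps working unchanged;
            // only the underlying path now lives on DBFS, which can also be mounted elsewhere.
        }
    }
}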

Why so slow returning data from Oracle external tables?

We are an ETL shop and make heavy use of external tables. Typically these tables are queried to populate staging tables. I am surprised at the time it takes for queries to return data from the external tables.
Typically there is around a 15 second delay before any result is returned. This is true even in the cases when the data file contains no data and when the data file does not exist. The delay doesn't seem related to the number of rows in the file.
I am logging into the database server itself, on which the external table data files are located.
Is this expected behaviour?
File system operations (ls, vim) at least on smaller files happen with no delay.
All files on local disk.
Oracle 12.1.
Oracle Linux Server release 6.6
I would recommend reviewing the Oracle 12.2 release notes. There was a patch for the Big Data Appliance firmware (22911748) for Exadata and a fix made in 12.2.
It addresses a view that is specific to the access to external tables. It's possible that you are impacted by this view. The view name is LOADER_DIR_OBJS, which is used to query the directory that external tables point to.
Our customers are running into very similar issues, and Oracle recommended installing the 12.2 release which contains the patch.
So, we are currently testing the 12.2 release. Anytime an external table is read, it has to have access to the LOADER_DIR_OBJS system view. Typically, the poor performance comes from this view, which accesses the SYS.OBJ$ and SYS.X$DIR system objects, because the query plan is not optimal. Some people have found workarounds (see Oracle workaround Document ID 2034938.1 to see if it applies to you).

H2 database: not able to copy multiple tables simultaneously from Oracle to H2 using Java concurrency

I am trying to copy 30 tables from Oracle 11g database to H2 database during a process in my Java application. None of the tables are related.
To speed up the process I am creating 30 threads, one for each table, and trying to copy the tables simultaneously. I am able to start all the threads, but as soon as one thread starts to execute its query, all the other threads that hit the H2 db go from running state into monitor state.
Is it not possible with the H2 database to copy multiple tables simultaneously, or am I doing something wrong and need some special configuration when creating the connection?
Does anyone have a workaround for this problem?
Is it at least possible to read data from different tables simultaneously?
H2 is single-threaded by default. To use the multi-threaded mode, append ;MULTI_THREADED=1 to the database URL. Please note this feature is not fully tested; I suggest using at least H2 version 1.4.x (and the MVStore) when using this feature.
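A minimal sketch of what that can look like, with one worker per table and the multi-threaded flag on the H2 URL. The URLs, credentials and table names are assumptions, and the target tables are assumed to already exist in H2 with matching column order:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelTableCopy {
    // Hypothetical URLs; note ;MULTI_THREADED=1 on the H2 side.
    static final String ORACLE_URL = "jdbc:oracle:thin:@//dbhost:1521/ORCL";
    static final String H2_URL = "jdbc:h2:./archive;MULTI_THREADED=1";

    public static void main(String[] args) throws Exception {
        List<String> tables = List.of("TABLE_A", "TABLE_B", "TABLE_C");  // ... up to 30 tables
        ExecutorService pool = Executors.newFixedThreadPool(tables.size());
        for (String table : tables) {
            pool.submit(() -> copyTable(table));
        }
        pool.shutdown();
    }

    static void copyTable(String table) {
        try (Connection src = DriverManager.getConnection(ORACLE_URL, "user", "pass");
             Connection dst = DriverManager.getConnection(H2_URL, "sa", "");
             Statement read = src.createStatement();
             ResultSet rs = read.executeQuery("SELECT * FROM " + table)) {
            int cols = rs.getMetaData().getColumnCount();
            String placeholders = "?" + ",?".repeat(cols - 1);
            dst.setAutoCommit(false);
            try (PreparedStatement insert = dst.prepareStatement(
                     "INSERT INTO " + table + " VALUES (" + placeholders + ")")) {
                while (rs.next()) {
                    for (int i = 1; i <= cols; i++) {
                        insert.setObject(i, rs.getObject(i));
                    }
                    insert.addBatch();
                }
                insert.executeBatch();
                dst.commit();
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}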
