"Invalid column index" during a rolling release - oracle

I am dropping a table column in my Oracle database. My JRuby connection pool appears to be caching metadata and throws an exception upon select:
ERROR 12:40:00.408000 pid=1832 tid=bok56 :: Java::JavaSql::SQLException: Invalid column index:
SELECT "MY_TABLE".* FROM "MY_TABLE" WHERE "MY_TABLE"."ID" = :a1
Restarting each of my application's several JRuby instances causes the connections in the pools to grab the correct metadata for the table, but I want to do this rollout without downtime. (During a rolling restart there will be some servers with the old state and some with the new.)
Is there a way to force the connections in my pools to grab new metadata, either upon select or when signaled by the database that the metadata has changed?
Or is there another strategy for removing an unused column from an active table?
I am not averse to a multi-release (or multi-restart) solution.
I'd prefer not to use Oracle Edition-Based Redefinition™.
While I'm running JRuby on Trinidad, I'm hoping folks with Rails or Java applications will have had to solve this problem as well.
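To illustrate the kind of hook I'm after, here is a minimal sketch assuming a HikariCP-style pool (Trinidad's pool is different, so the names here are illustrative, not from my stack):

import com.zaxxer.hikari.HikariDataSource;

public class PoolRefresh {
    // Call this after the ALTER TABLE ... DROP COLUMN has been applied.
    public static void refresh(HikariDataSource ds) {
        // softEvictConnections() retires idle connections immediately and
        // active ones as they are returned to the pool, so new checkouts
        // get fresh connections with empty statement/metadata caches and
        // in-flight requests are not interrupted.
        ds.getHikariPoolMXBean().softEvictConnections();
    }
}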

Related

Debezium Oracle Connectors CDC usage on top of Kafka and Backup / DataGuard

We are trying to use the Oracle Debezium connector (1.9b) on top of Kafka.
We tried two values for snapshot.mode: schema_only and initial.
We use "log.mining.strategy":"online_catalog" (should be the default)
We are using a PDB/CDB Oracle instance on Oracle 19c.
My understanding is that:
1. The connector creates a session to the PDB.
2. It adds a shared lock to ensure the structure will not change, for a short duration.
3. The DDL structure is retrieved from the PDB.
4. It creates a session to the CDB.
5. It retrieves the last SCN offset from the CDB.
6. If snapshot.mode == initial, it uses a "JDBC query to retrieve the whole data" from the PDB.
7. It does NOT seem to release the initiated session (or rather process) to the PDB.
8. It continues to mine new events from the CDB.
9. ... and it seems to work for a couple of minutes.
After a couple of minutes, the number of processes increases drastically, and the Oracle database freezes due to an excess of processes (which you can follow using v$process).
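For reference, this is roughly how we watch the process count while the connector runs (a throwaway JDBC sketch; the URL and credentials are placeholders, not our real ones):

import java.sql.*;

public class ProcessCount {
    public static void main(String[] args) throws SQLException {
        try (Connection c = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", "monitor", "secret");
             Statement s = c.createStatement();
             ResultSet rs = s.executeQuery("SELECT COUNT(*) FROM v$process")) {
            rs.next();
            System.out.println("Oracle processes in use: " + rs.getInt(1));
        }
    }
}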
We had a lot of error messages, like:
A. Failed to resolve Oracle database
B. IO Error: Got minus one from a read call
C. ORA-04025: maximum allowed library object lock allocated
D. None of log files contains offset SCN: xxxxxxx
The message in point D says it tried to use an offset that was part of "an old" archived log.
Every 30 min (or sooner, if we have more activity), the log switches from one file to another.
And a backup occurs every 30 minutes, which reads the logs, backs them up, and then deletes them.
It seems to me that Debezium tried to reach a past archived log that had been deleted by the backup process.
The process of "deleting previous archived logs" seems correct to me, doesn't it?
Why does Debezium try to go through the archived logs? When snapshot.mode == schema_only it should only capture new events, so why use the archived ones?
How can we manage this?
I hope that if this point is resolved in my use case, Debezium will stop "looping" to create new processes and will ultimately stop blocking the Oracle DB.
If you have any clues or opinions, don't hesitate to share them. Many thanks!
We tried the snapshot locking modes shared and none.
We tried limiting the number of tables in scope.
I cannot ask for the backup to be stopped: in production that's not a good idea, and in test, the backup seems to be there only to clean up the archived logs and avoid ending up with completely full storage.

Upgrade problem with H2 database when upgrading from 1.4.192 to 1.4.200: Scale must not be bigger than precision

Years ago I wrote an app to capture data into H2 datafiles for easy transport and archival purposes. The application was written with H2 1.4.192.
Recently, I have been revisiting some load code related to that application, and I have found that there are substantial gains to be had in H2 1.4.200 for some of the things I am doing.
I would like to be able to load the data that I had previously saved into the other databases. But I had some tables that used a now-invalid precision/scale specification. Here is an example:
CREATE CACHED TABLE PUBLIC.MY_TABLE(MY_COLUMN DATETIME(23,3) SELECTIVITY 5)
H2 databases created with 1.4.192 that contain tables like this will not load in 1.4.200;
they fail with the following error:
Scale($"23") must not be bigger than precision({1}); SQL statement:
CREATE CACHED TABLE PUBLIC.MY_TABLE(MY_COLUMN DATETIME(23,3) SELECTIVITY 5) [90051-200] 90051/90051 (Help)
My question is how can I go about correcting the invalid table schema? My application utilizes a connection to an H2 database and then loads the data it contains into another database. Ideally I'd like to have my application be able to detect this situation and repair it automatically so the app can simply utilize the older data files. But in H2 1.4.200 I get the error right up front upon connection.
Is there a secret/special mode that will allow me to connect 1.4.200 to the database to repair its schema? I hope???
Outside of that, it seems like my only option is to have separate classloaders for different versions of H2, with the remedial operations happening in one classloader and the load operations in another. Either that or start another instance of the JVM to do the remedial operations.
I wanted to check for options before I did a bunch of work.
This problem is similar to this reported issue, but there were no specifics on how he performed his resolution.
This data type is not valid and was never supported by H2, but old H2, due to a bug, somehow accepted it.
You need to export your database to a script with 1.4.192 Beta using
SCRIPT TO 'source.sql'
You need to use the original database file, because if you opened a file from 1.4.192 Beta with 1.4.200, it may have been corrupted by it; such an automatic upgrade is not supported.
You need to replace DATETIME(23,3) with TIMESTAMP(3) or whatever you need, using a text editor. If the exported SQL is too large for regular text editors, you can use a stream editor, such as sed:
sed 's/DATETIME(23,3)/TIMESTAMP(3)/g' source.sql > fixed.sql
Now you can create a new database with 1.4.200 and import the edited script into it:
RUNSCRIPT FROM 'fixed.sql'
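If you want the application to do this itself, here is a minimal sketch of the three steps over JDBC. The caveat the question already notes still applies: the SCRIPT TO step must run with the 1.4.192 jar on the classpath (a separate JVM or classloader) and the RUNSCRIPT step with 1.4.200; paths and credentials below are placeholders.

import java.nio.charset.StandardCharsets;
import java.nio.file.*;
import java.sql.*;

public class H2Migrate {
    // Step 1: run in a JVM/classloader with h2-1.4.192.jar -- export the old file.
    static void exportOld() throws SQLException {
        try (Connection c = DriverManager.getConnection("jdbc:h2:/data/olddb", "sa", "");
             Statement s = c.createStatement()) {
            s.execute("SCRIPT TO 'source.sql'");
        }
    }

    // Step 2: the same replacement the sed command above performs.
    static void fixScript() throws Exception {
        String sql = new String(Files.readAllBytes(Paths.get("source.sql")), StandardCharsets.UTF_8);
        Files.write(Paths.get("fixed.sql"),
                sql.replace("DATETIME(23,3)", "TIMESTAMP(3)").getBytes(StandardCharsets.UTF_8));
    }

    // Step 3: run in a JVM/classloader with h2-1.4.200.jar -- import into a new file.
    static void importNew() throws SQLException {
        try (Connection c = DriverManager.getConnection("jdbc:h2:/data/newdb", "sa", "");
             Statement s = c.createStatement()) {
            s.execute("RUNSCRIPT FROM 'fixed.sql'");
        }
    }
}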

Manually logging database events in a DataStage job

I have a parallel job that writes to an Oracle table. I want to manually write warnings to DataStage's log if some event occurs. For example, if a certain value for a certain column is inserted, I want to track this information in the log. Could this be achieved somehow?
To write custom messages into the logs for a particular job's data stream, you can use a combination of a Copy stage, a Transformer, and a Peek stage. The Peek stage is the one that writes to the logs. I like to set the Peek stage to run in sequential mode, so that your messages are kept together in single log entries instead of spread across nodes.
Also, you can peek the rejects of the Oracle stage; maybe combine this with the above option (using a Funnel stage and a standard column schema).
Lastly, if you'd actually like to query the logs themselves and write them out somewhere else or use them in a job (amongst all the other data kept about jobs in the repository), you can directly query the DSODB schema in the XMETA database, i.e. the DataStage repository (by default DB2).
You would need to have the DataStage Operations Console up and running for that (not sure which version of DataStage you're running). If DataStage is running on a single tier and using the default DB2 database, you can simply catalog the DSODB database so that it's available as a connection in the DB2 connector. Otherwise you'd need to install a DB2 client on the DataStage engine tier and catalog the database there.
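As a rough illustration, a JDBC sketch of reading the repository directly (the table name DSODB.JOBRUNLOG and the connection details are assumptions; verify the exact schema against your Operations Console version):

import java.sql.*;

public class DsodbPeek {
    public static void main(String[] args) throws SQLException {
        // DSODB.JOBRUNLOG is assumed here to hold per-run log entries;
        // check your DSODB schema for the actual table/column names.
        try (Connection c = DriverManager.getConnection(
                "jdbc:db2://xmeta-host:50000/XMETA", "dsodb_reader", "secret");
             Statement s = c.createStatement();
             ResultSet rs = s.executeQuery(
                 "SELECT * FROM DSODB.JOBRUNLOG FETCH FIRST 10 ROWS ONLY")) {
            ResultSetMetaData md = rs.getMetaData();
            while (rs.next()) {
                StringBuilder row = new StringBuilder();
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    row.append(md.getColumnName(i)).append('=')
                       .append(rs.getString(i)).append(' ');
                }
                System.out.println(row);
            }
        }
    }
}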
All the best!

DB2 Exclusive Lock Not Released

On a heavily used DB2 table, accessed by distributed Java desktop applications via JDBC, I'm hitting the following scenario several times a day:
Client A wants to INSERT new records and gets an IX lock on the table, and X locks on each new row;
Other client(s) want to perform a SELECT and are granted an IS lock on the table, but the application gets stuck;
Client A continues to work, but the INSERT and UPDATE queries are not committed, the locks are not released, and it keeps collecting X locks on each row;
Client A exits and its work is not committed. The other clients finally get their SELECT result set.
It used to work well, and it still does most of the time, but the lock situations are getting more and more frequent.
Auto-commit is ON.
There are no exceptions thrown or errors detected in the logs.
DB2 9.5 / JDBC Driver 9.1 (JDBC 3 specification)
If the JDBC applications are not performing a COMMIT then the locks will persist until a rollback or commit. If an application quits with uncommitted inserts then a rollback will happen, in all recent versions of Db2. This is expected behaviour for Db2 on Linux/Unix/Windows.
If the JDBC application is failing to commit then it is broken or misconfigured, so you must get to the root cause of that if you seek a permanent solution.
If the other clients wish to ignore the inserted rows' X locks then they should choose the correct isolation level, and you can configure Db2 to skip uncommitted inserts. See the documentation for DB2_SKIPINSERTED at this link.
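On the client side that usually just means read committed, which is Cursor Stability in Db2 terms; DB2_SKIPINSERTED itself is a server-side registry variable (set with db2set DB2_SKIPINSERTED=ON and restart the instance). A minimal sketch, assuming conn is an open java.sql.Connection:

// Cursor Stability maps to read committed in JDBC terms; combined with
// DB2_SKIPINSERTED=ON on the server, readers skip uncommitted inserted
// rows instead of waiting on their X locks.
conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);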
It turns out that sometimes auto-commit, and I don't know why, turns off for a random single instance of the application.
The following validation seems to solve the problem (but not the root of it):
// If auto-commit was silently switched off, flush any pending work.
if (!conn.getAutoCommit()) {
    conn.commit();
}
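A slightly more defensive variant of the same check (my own extension, same caveat that it patches the symptom, not the root cause) also restores the expected mode, so the next use of the connection starts clean:

if (!conn.getAutoCommit()) {
    conn.commit();              // flush whatever was silently left pending
    conn.setAutoCommit(true);   // restore the mode the application expects
}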

Oracle - side-by-side schema update technology...is there any?

Is there any technology out there that will allow you to do side-by-side updates of production schemas?
The goal is to have zero downtime when applying updates to a schema in production.
WebLogic 10 has a similar feature for its Java EE apps, whereby you deploy the new version of the app and new connections go to the new app, while the existing connections continue in the old app. When all the old connections complete or time out, the old app is retired and the new app continues on... zero downtime.
Is there something similar in Oracle?
Yes. There is the online redefinition package:
DBMS_REDEFINITION
But I doubt this will give you zero downtime; it doesn't account for every possible change to a schema, it just lets you do some table changes. I think you need to define "zero" and how extensive the changes you want to make are. Usually if you change the database, you have to change your client as well. If you changed your database, how would the client switch automatically from the old proc signature to the new proc signature, instantaneously?
Databases don't work like apps. Either there is an FK from tableA to tableB or there isn't... it can't be absent for current connections and exist only for new ones, the way two versions of an application can coexist. Databases just aren't the same.
That being said, there is a rumor that Oracle is working on package versioning... so you could connect to a specific version of a package to make such a migration simpler. But again... that would work for packages, and DBMS_REDEFINITION would work for tables... but that's not the sum total of your database.
Oracle released 11gR2 today; it has edition-based redefinition: http://download.oracle.com/docs/cd/E11882_01/server.112/e10881/chapter1.htm#NEWFTCH1
It depends what you mean by, or include in, "schema".
If you want to add or drop an index, that can be done "in-flight", although it will require a lock which may halt activity for a time. In the latest Oracle versions, it doesn't need to hold the lock for the entire time it takes to build the index, just for a moment to lock in the change. If you have short-duration transactions it shouldn't be noticeable.
In some cases that applies to tables as well (e.g. adding a nullable or default column).
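For instance, an online index build only needs brief locks at the start and end of the build (illustrative only; the index and table names are made up, and stmt is an open java.sql.Statement):

// DML against my_table continues while the index is built.
stmt.execute("CREATE INDEX my_table_status_ix ON my_table (status) ONLINE");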
If you use PL/SQL (especially packages), things can be a little more complicated. Enhancements were mooted for 11gR1 to enable in-flight application upgrades, but they got pushed out and are now expected in 11gR2 (probably out in the first half of next year).
In the meantime, a workaround is a multi-schema solution. Say your data sits in one schema ("yellow") and your current application code runs in the "blue" schema; you load your new application code into the "green" schema. You switch your connections, one by one, from blue to green. Once your connections are all using "green", you can retire "blue" until your next upgrade (when "blue" becomes the new app schema and "green" is retired).
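A hedged sketch of the switch itself, assuming each connection reads the active code schema from configuration when it is opened (the property key is made up; the value must come from trusted config, since it is concatenated into DDL):

// Point this session's unqualified names at the active code schema
// ("BLUE" or "GREEN"); the data stays in its own schema throughout.
String active = System.getProperty("app.codeSchema", "GREEN"); // hypothetical key
try (Statement s = conn.createStatement()) {
    s.execute("ALTER SESSION SET CURRENT_SCHEMA = " + active);
}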
If you have a genuine 24/7 system, you'll probably always have to stage some upgrades. For example, add a new column as optional, upgrade the application to set it, then make it mandatory (possibly with some data change script for pre-existing rows).
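Sketched as staged DDL (table and column names invented for the example):

// Release 1: add the column as optional and backfill existing rows.
stmt.execute("ALTER TABLE orders ADD (region VARCHAR2(10))");
stmt.execute("UPDATE orders SET region = 'UNKNOWN' WHERE region IS NULL");
// ...deploy the application version that always populates it...
// Release 2: only now make the column mandatory.
stmt.execute("ALTER TABLE orders MODIFY (region NOT NULL)");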
