If the GlassFish server loses connectivity with the DB, all pooled connections die. I want to detect this and recover the connections.
When I set connection validation to "table", this works, but when I set it to "meta-data", it does not seem to work. Does anybody know why, or is this a known GlassFish bug?
This is likely not a bug in GlassFish, but in the JDBC driver, which caches metadata. This is also addressed in the GlassFish documentation:
table: Performing the query on a specified table. If this option is
selected, Table Name must also be set. Choosing this option may be
necessary if the JDBC driver caches calls to setAutoCommit() and
getMetaData().
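A hedged sketch of the corresponding asadmin configuration (the pool name MyPool and the table name DUAL are placeholders; adjust for your pool and database):

```shell
# Switch validation to "table" mode, which sidesteps drivers that cache getMetaData()
asadmin set resources.jdbc-connection-pool.MyPool.is-connection-validation-required=true
asadmin set resources.jdbc-connection-pool.MyPool.connection-validation-method=table
asadmin set resources.jdbc-connection-pool.MyPool.validation-table-name=DUAL

# Optionally recreate all connections in the pool once one validation fails,
# which helps recover after the whole database went away
asadmin set resources.jdbc-connection-pool.MyPool.fail-all-connections=true
```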
Related
We created an SQL table with KEY_TYPE and VALUE_TYPE classes, and gave those class details to the server as well by placing the JAR in the libs folder.
Then we insert rows with SQL INSERT statements, and we can see the rows in both SQL and the cache.
But when we call cache.get(key), it returns null for the Ignite thin client.
The same works fine without issue for an Ignite client node. It is strange that the same key is not available to thin clients.
We have tried the latest client and server versions as well; the result remains the same.
Is there any advice the Ignite experts can share on the above behaviour?
Seems related to: Ignite cache size returning the correct value, but trying to access the cache returns a null value.
It looks like you have different binary configurations for thin and thick clients/server nodes.
Try to adjust your thin client configuration with compactFooter=true and check if it resolves the issue.
clientConfig.setBinaryConfiguration(new BinaryConfiguration().setCompactFooter(true));
Defaults are different for backward compatibility and some historical reasons, but I hope this mismatch will be fixed in some future versions.
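A fuller configuration sketch of the thin-client side (the address and cache name are placeholders; this assumes the Ignite client libraries on the classpath and a running server, so it is shown for illustration rather than as a runnable test):

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.ClientConfiguration;

public class ThinClientCompactFooter {
    public static void main(String[] args) {
        // compactFooter must match the BinaryConfiguration used by the
        // server and thick-client nodes, otherwise keys hash differently.
        ClientConfiguration cfg = new ClientConfiguration()
                .setAddresses("127.0.0.1:10800")
                .setBinaryConfiguration(new BinaryConfiguration().setCompactFooter(true));

        try (IgniteClient client = Ignition.startClient(cfg)) {
            ClientCache<Object, Object> cache = client.cache("MY_CACHE");
            System.out.println(cache.get(1)); // previously returned null
        }
    }
}
```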
I have following environment:
An EAR application on WebSphere 9, container-managed transactions using an XA datasource for an Oracle 19c database (let's name it database "A").
The problem is that in some transactions database "A" calls database "B" via a database link (database "B" is also Oracle 19c).
The connection pool gets a "Too many database links in use" error because of two-phase commit. Let's say the maximum number of database links in use is 4: if I refresh the screen a fifth time, I get an SQL exception.
Raising the maximum-database-links parameter in the database properties only delays the problem.
I have no control (from the application perspective) over closing the database links.
At the moment we've set the datasource to non-XA and everything works fine, but at some point we'll need to manually handle transactions that include both the datasource and WebSphere MQ.
Anyone got any ideas or experience with this setup?
EDIT: I'm trying to get this working with JPA 2.0.
You have two options:
1. Make sure the database links get closed. If not you, then the developers of the application need to ensure this. If your maximum number of active database links is 4, then you can only have 4 active sessions/users in your application.
2. Increase the allowed number of database links.
This article describes the fixes/workarounds in greater detail.
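For reference, the limit in option 2 is Oracle's open_links initialization parameter (which defaults to 4, matching the behaviour described), and option 1 can be done per session once the transaction ends. Both statements below are sketches; MY_LINK and the value 10 are placeholders:

```sql
-- Option 2: raise the per-session link limit (takes effect after restart)
ALTER SYSTEM SET open_links = 10 SCOPE = SPFILE;

-- Option 1: release a link held by the current session
-- (the session must COMMIT or ROLLBACK first)
ALTER SESSION CLOSE DATABASE LINK MY_LINK;
```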
I have to deal with the following scenario for a Spring application with an Oracle database:
The Spring application uses the primary database, while the secondary database stores data for disaster recovery (replicated from the primary).
The replication step is already in place. At this moment I have to implement the failover:
When the primary database goes offline, the application should switch the connection to the secondary database.
The implementation should be programmatic. How can I achieve that without changing the code that currently exists? Is there any working solution (library)?
I am thinking about AbstractRoutingDataSource and pinging the databases (e.g. every 5 seconds), but I'm not sure about this solution.
So, to summarize the issue: I was unable to use Oracle RAC (Real Application Clusters). If the implementation should be programmatic, you can try the AbstractRoutingDataSource approach.
I implemented a timer that pings the current database every second (you can use a validation query and check whether you can read from the database; if not, we assume there is no connection and we can switch the datasource).
Thanks to that I was able to change the datasource at runtime when the current datasource went offline. What is more important, it was automatic.
On the other hand, there are disadvantages:
- For a short time users can see errors if the database has not been switched yet.
- Some parts of the application may stop working if they are not properly secured against the lack of a connection to the database.
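The ping-and-switch idea can be sketched with plain JDK types. The health check and the lookup-key strings below are hypothetical stand-ins: in a real Spring setup the key would be returned from AbstractRoutingDataSource#determineCurrentLookupKey, and the health check would run a validation query (e.g. "SELECT 1 FROM DUAL") against the primary.

```java
import java.util.function.Supplier;

// Minimal sketch of the failover decision driven by a scheduled ping.
public class FailoverRouter {
    private final Supplier<Boolean> primaryHealthy; // hypothetical ping, e.g. a validation query
    private volatile String current = "PRIMARY";

    public FailoverRouter(Supplier<Boolean> primaryHealthy) {
        this.primaryHealthy = primaryHealthy;
    }

    // Invoke from a scheduled task, e.g. every second.
    public void ping() {
        current = primaryHealthy.get() ? "PRIMARY" : "SECONDARY";
    }

    // In Spring, this value would be the routing key returned by
    // AbstractRoutingDataSource#determineCurrentLookupKey.
    public String currentTarget() {
        return current;
    }
}
```

The volatile field keeps the routing decision visible to request threads without locking, since only the timer thread writes it.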
Does anyone know why the NetBeans IDE's database result explorer disables CRUD operations and "Show SQL 'CRUD' Operation" when using a JDBC connection via the JTOpen 9.1 driver to a DB2 for i database with NetBeans 8.1?
JTOpen is an open-source JDBC driver for the DB2 for i database on IBM i, in addition to a bunch of Java classes for interacting with the IBM i system. http://jt400.sourceforge.net/
I tried a few JDBC connection properties, but no cigar...
I guess I'll have to keep browsing the IBM KB
http://www.ibm.com/support/knowledgecenter/ssw_ibm_i_71/rzahh/jdbcproperties.htm
and the JT400 source
https://github.com/devjunix/libjt400-java/blob/master/src/com/ibm/as400/access/JDProperties.java
Many DB2 for i systems are configured to not use commitment control or journaling. This is not what many toolkits expect to see. Try changing the connection string to tell Netbeans that you don't want commitment control.
Adding "extended metadata=true" to the connection properties fixed my issue.
https://godzillai5.wordpress.com/2016/08/21/jdbc-jt400-setting-to-get-crud-and-show-sql-features-added-in-netbeans-with-ibm-db2-for-i/
The IBM documentation here http://www.ibm.com/support/knowledgecenter/ssw_ibm_i_71/rzahh/jdbcproperties.htm
"extended metadata"
Specifies whether the driver requests extended metadata from the
server. Setting this property to true increases the accuracy of the
information returned from the following ResultSetMetaData methods:
getColumnLabel(int) isReadOnly(int) isSearchable(int) isWriteable(int)
Apparently read-only for the result set is incorrectly assumed to be true unless the extended metadata comes back with the actual value of isReadOnly(int). I'm guessing that it's assumed true because on the initial connection the connection property "Read Only" is true. It would be helpful to understand what setting on the system or on the Library/Schema is causing the connection to have that property.
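A small sketch of supplying that property programmatically with the JTOpen driver (the host MYSYSTEM is a placeholder, and the commented-out connection call needs jt400.jar and a live IBM i system):

```java
import java.util.Properties;

public class As400Props {
    // "extended metadata" makes the driver return real values from
    // ResultSetMetaData.isReadOnly() and friends, which NetBeans needs
    // before it enables its CRUD actions.
    public static Properties build() {
        Properties props = new Properties();
        props.setProperty("extended metadata", "true");
        return props;
    }

    // Usage (hypothetical host, requires jt400.jar on the classpath):
    // Connection con = DriverManager.getConnection("jdbc:as400://MYSYSTEM", As400Props.build());
}
```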
The most obvious reason for the original image showing only some read-only operations would seem to be the "access" attribute of the connection; i.e., if set to "read only", that would limit access to SELECT statements only. But with the new information showing the connection properties, it seems readOnly=false, so that "access" attribute should not be the origin of the issue.
I suspect that for any given TABLE the issue might be the lack of a PRIMARY KEY constraint; i.e., IIRC, some client database applications prevent update-capable mode for a particular TABLE if that TABLE is not known to have a PK.
Oracle WebLogic allows initializing database connections with SQL code (see http://docs.oracle.com/cd/E13222_01/wls/docs81/ConsoleHelp/jdbc_connection_pools.html#1127542). Is there any way to do this in WebSphere 8?
Please see the "Connection validation by SQL query" column of the "Application programming model" row in the deprecation table for the recommended alternative. IBM support cannot provide you with a time frame for removal, as there is no fixed time frame for deprecated features to be removed. At a minimum, the feature will not be removed for two releases after deprecation. In this particular case, the feature was deprecated because the JDBC spec now defines similar capability.
Yes, it is possible. See the WebSphere Application Server data source properties page.
You need to check "Validate new connections" and provide your SQL code in the "Validation by SQL string" field of the data source properties.