I am looking for an analog of the setDefaultTimeout method of Spring's AbstractPlatformTransactionManager in jOOQ/HikariCP connection pool.
I found various timeouts like loginTimeout, maxLifetime, and idleTimeout in HikariDataSource, but none of them seems to fit my purpose.
I looked at jOOQ's TransactionProvider too.
After some source code investigation I spotted the following code in HikariCP:
setNetworkTimeout(connection, validationTimeout);
try (Statement statement = connection.createStatement()) {
   if (isNetworkTimeoutSupported != TRUE) {
      setQueryTimeout(statement, (int) MILLISECONDS.toSeconds(Math.max(1000L, validationTimeout)));
   }
   statement.execute(config.getConnectionTestQuery());
}
Looking at this, I suppose the configuration I am after is validationTimeout. Is this correct?
The code you have found runs HikariCP's connection validation query (normally very quick), and that is where the 'validation timeout' is applied.
A transaction in your application will most probably take much longer than the validation timeout configured for HikariCP, so validationTimeout is not the setting you are after.
At present you can set a query timeout on org.jooq.Query, but not on org.jooq.Routine; see https://github.com/jOOQ/jOOQ/issues/3892
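For what it's worth, here is a minimal sketch of that per-query timeout, assuming jOOQ 3.x and an existing DSLContext named create; the SQL and the 5-second value are purely illustrative:
Query query = create.query("update book set title = ? where id = ?", "New title", 1);
query.queryTimeout(5);   // seconds, passed through to java.sql.Statement.setQueryTimeout
query.execute();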
If you are referencing AbstractPlatformTransactionManager, I am guessing that you wish to use transactions while expressing your queries with jOOQ on top of a HikariCP connection pool.
The best place to start may be jOOQ's transaction management documentation here:
http://www.jooq.org/doc/3.8/manual/sql-execution/transaction-management/
As you are coming from Spring, the Spring TX integration may be a good starting place.
HikariCP does not itself provide transaction timeout management; it focuses on managing the connections it has created. As such, the three values you have listed do very different things (a configuration sketch follows the list below):
loginTimeout - how long to wait for a connection to the database to be established (essentially the JDBC login)
maxLifetime - how long a connection may live in the pool before being closed and replaced
idleTimeout - how long an unused connection may sit idle in the pool before being retired
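For reference, a minimal sketch of how these knobs are set on HikariCP (values are in milliseconds and purely illustrative; note that HikariCP's own connectionTimeout, i.e. how long getConnection() blocks waiting for a free connection, is yet another setting):
HikariConfig config = new HikariConfig();
config.setJdbcUrl(jdbcUrl);
config.setConnectionTimeout(30_000);   // max wait for a connection from the pool
config.setValidationTimeout(5_000);    // max wait for the connection test query
config.setMaxLifetime(1_800_000);      // retire connections after 30 minutes
config.setIdleTimeout(600_000);        // retire idle connections after 10 minutes
HikariDataSource dataSource = new HikariDataSource(config);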
Related
I have a Spring application whose home page fires multiple AJAX calls, which in turn fetch data from the DB and return. The DB has been configured with connection pooling, with minPoolSize 50 and maxPoolSize 100.
Now when I open the home page, around 7 connections are established with the DB, which is expected, as around 7 AJAX calls are made and I assume each creates its own connection. When I refresh the page, I see 7 more new connections being established (I see 14 physical connections in total from DB2 monitoring), which seems unexpected: I assume JdbcTemplate closes the connection after each query from the first access, so a refresh should reuse those connections?
So the question is: are ResultSets also closed by JdbcTemplate along with the connection, or do I need to close the ResultSet explicitly so that the connection can be closed automatically? Could an open ResultSet be the reason a connection is not getting closed? Attaching the code for the connection pooling configuration:
<dataSource jdbcDriverRef="db2-driver" jndiName="jdbc/dashDB-Development" transactional="true" type="javax.sql.DataSource">
    <properties.db2.jcc databaseName="BLUDB" id="db2-dashDB-Development-props" password="********" portNumber="*****" serverName="*********" sslConnection="false" user="*****"/>
    <connectionManager id="db2-DashDB-Development-conMgr" maxPoolSize="100" minPoolSize="50" numConnectionsPerThreadLocal="2"/>
</dataSource>
My initial theory was that connections would only be reused once minPoolSize was reached, and until then new physical connections would always be created. However, I see this behavior even after reaching that limit: I refreshed my page 10 times and I see 70 physical connections. So my only remaining suspicion is that connections are somehow not getting closed and Spring sees them as busy. Could this be because ResultSets are not closed, or is there some other reason? Is there a way to tell JdbcTemplate not to wait for a ResultSet to close beyond a time limit?
Thanks
Manoj
If you look at the source code of the org.springframework.jdbc.core.JdbcTemplate.query method, you will see calls to
JdbcUtils.closeResultSet(rs);
in the finally blocks - so yes, JdbcTemplate does call rs.close().
The template also closes the connection, or returns it to the pool.
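In practice that means that as long as you go through JdbcTemplate with a callback such as a RowMapper, your code never touches the ResultSet or the Connection lifecycle at all. A minimal sketch, assuming an injected jdbcTemplate and Java 8; the table and column names are made up:
List<String> names = jdbcTemplate.query(
        "select name from customer where city = ?",
        new Object[] { "London" },
        (rs, rowNum) -> rs.getString("name"));  // JdbcTemplate iterates the ResultSet and closes it afterwards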
Hikari: 2.4.7
PostgreSQL JDBC driver: 9.4-1201-jdbc41
I'm trying to understand what must be done to a java.sql.Connection object for it to be available again in the connection pool.
I've just introduced connection pooling to a multi-threaded application that was previously standing up / tearing down a connection with each SQL statement.
What I have noticed, after introducing Hikari, is that as soon as I hit maximumPoolSize, every attempt thereafter to HikariDataSource.getConnection fails due to connectionTimeout. So it seems like I'm not "releasing" the connection somehow.
The typical use of the Connection object is:
// omits exception handling, parameter substitution, result evaluation
PreparedStatement preparedStatement = hikariDataSource.getConnection().prepareStatement(sql);
preparedStatement.executeQuery();
preparedStatement.close();
Is there anything else that is expected to be done on this connection to get it eligible for reuse in the connection pool?
Autocommit is on. Connection.close(), unless it does something special when the connection is provided by Hikari, seemed like exactly the thing I wanted to avoid.
I don't know Hikari specifically, but with any connection pool, every connection you take out must be returned to the pool when you are done with it.
Typically this is done by calling Connection.close(): the pool hands out a wrapper (proxy) connection whose close() doesn't physically close the connection, but only returns it to the pool.
So your code should look like this:
Connection con = hikariDataSource.getConnection();
PreparedStatement preparedStatement = con.prepareStatement(sql);
preparedStatement.executeQuery();
preparedStatement.close();
con.close(); // this returns the connection to the pool
Of course, the two close() calls should be made in a finally block; a try-with-resources sketch follows below.
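For completeness, the same pattern with try-with-resources (Java 7+), which guarantees the statement is closed and the connection is returned to the pool even if an exception is thrown:
try (Connection con = hikariDataSource.getConnection();
     PreparedStatement preparedStatement = con.prepareStatement(sql)) {
    try (ResultSet rs = preparedStatement.executeQuery()) {
        // process the results here
    }
}  // both close() calls happen automatically; close() on the pooled connection returns it to the pool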
I am pretty sure that somebody else already asked this question, but I still couldn't find a satisfactory answer to it.
So, here is my scenario: I want to use the Oracle's JDBC driver implicit statement caching (documented here: http://docs.oracle.com/cd/B28359_01/java.111/b31224/stmtcach.htm#i1072607)
I need to use the connections from a 3rd party JDBC pool provider (to be more specific, Tomcat JDBC) and I have no choice there.
The problem is that the way to enable the implicit caching is a two-step process (according to the documentation):
1. Call setImplicitCachingEnabled(true) on the connection, or call OracleDataSource.getConnection with the ImplicitCachingEnabled property set to true. You set ImplicitCachingEnabled by calling OracleDataSource.setImplicitCachingEnabled(true).
2. In addition to calling one of these methods, you also need to call OracleConnection.setStatementCacheSize on the physical connection. The argument you supply is the maximum number of statements in the cache. An argument of 0 specifies no caching.
I can live with step 1 (somehow I can configure my pool to use OracleDataSource as the primary connection factory, and on that I can call OracleDataSource.setImplicitCachingEnabled(true)).
But for step 2, I already need the connection to be present in order to call setStatementCacheSize.
My question is if there is any possibility to specify at the data source level a default value for the statementCacheSize so that I can get from the OracleDataSource connections that are already enabled for implicit caching.
PS: some related questions I found here:
Oracle jdbc driver: implicit statement cache or setPoolable(true)?
Update (possible solution):
Eventually I did this:
Created a native connection pool using oracle.jdbc.pool.OracleDataSource.
Created a tomcat JDBC connection pool using org.apache.tomcat.jdbc.pool.DataSource that uses the native one (see the property dataSource).
Enabled, via AOP, a pointcut so that after 'execution(public java.sql.Connection oracle.jdbc.pool.OracleDataSource.getConnection())' I pick up the returned connection and apply the settings I wanted.
The solution works great; I am just unhappy that I had to write some boilerplate for it (I was expecting a straightforward property).
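For anyone wanting to reproduce this, here is a rough sketch of what that aspect could look like (Spring AOP / AspectJ annotation style). The class name, its bean wiring and the cache size of 50 are all hypothetical, and the OracleDataSource has to be a Spring-managed bean for the pointcut to apply:
import java.sql.Connection;
import java.sql.SQLException;
import oracle.jdbc.OracleConnection;
import org.aspectj.lang.annotation.AfterReturning;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class ImplicitCacheAspect {

    @AfterReturning(
        pointcut = "execution(public java.sql.Connection oracle.jdbc.pool.OracleDataSource.getConnection())",
        returning = "connection")
    public void enableImplicitCaching(Connection connection) throws SQLException {
        OracleConnection oracleConnection = (OracleConnection) connection;
        oracleConnection.setImplicitCachingEnabled(true);
        oracleConnection.setStatementCacheSize(50);  // illustrative cache size
    }
}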
The white paper Oracle JDBC Memory Management says that
The 11.2 drivers also add a new property to enable the Implicit Statement Cache.
oracle.jdbc.implicitStatementCacheSize
The value of the property is an integer string, e.g. “100”. It is the initial size of the statement cache. Setting the property to a positive value enables the Implicit Statement Cache. The default is “0”. The property can be set as a System property via -D or as a connection property via getConnection.
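So, with an 11.2+ driver, the property can be supplied in either of the two ways the white paper mentions; a small sketch (the value 100 is illustrative, and the url variable is assumed to exist):
// 1) As a JVM system property:
//      -Doracle.jdbc.implicitStatementCacheSize=100
// 2) As a connection property handed to the driver or data source:
Properties props = new Properties();
props.setProperty("oracle.jdbc.implicitStatementCacheSize", "100");
Connection conn = DriverManager.getConnection(url, props);
// or, for a data source, something like OracleDataSource.setConnectionProperties(props)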
You can only change the statement cache size through the OracleConnection.setStatementCacheSize method.
Instead of modifying your application to call OracleConnection.setStatementCacheSize on every connection, you can create a JDBC interceptor, for example:
@Override
public void reset(ConnectionPool pool, PooledConnection connection) {
    if (connection == null) {
        return;
    }
    Connection original = connection.getConnection();
    if (!(original instanceof OracleConnection)) {
        return;
    }
    try {
        // Enable implicit caching and size the cache once per physical connection
        if (!((OracleConnection) original).getImplicitCachingEnabled() && implicitCachingEnabled) {
            ((OracleConnection) original).setImplicitCachingEnabled(implicitCachingEnabled);
            log.info("Activated statement cache");
            ((OracleConnection) original).setStatementCacheSize(statementCacheSize);
            log.info("Statement cache size set to " + statementCacheSize);
        }
    } catch (SQLException e) {
        log.error(e.getMessage(), e);
    }
}
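To plug such an interceptor into Tomcat JDBC you register it on the pool via the jdbcInterceptors property. A sketch, assuming the interceptor class above is packaged as com.example.OracleCacheInterceptor and reads implicitCachingEnabled and statementCacheSize from its interceptor properties (the class and property names are hypothetical):
PoolProperties poolProperties = new PoolProperties();
poolProperties.setDataSource(oracleDataSource);  // the underlying oracle.jdbc.pool.OracleDataSource
poolProperties.setJdbcInterceptors(
        "com.example.OracleCacheInterceptor(implicitCachingEnabled=true;statementCacheSize=100)");
DataSource pooledDataSource = new DataSource(poolProperties);  // org.apache.tomcat.jdbc.pool.DataSource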
I have the following code in Java:
if (!conn.isClosed())
{
    conn.close();
}
Instead of working, I am awarded with:
java.sql.SQLException: Connection already closed
My connection object is a Snaq.db.CacheConnection
I checked the JavaDocs for isClosed, and they state that:
This method generally cannot be called to determine whether a connection to a database is valid or invalid. A typical client can determine that a connection is invalid by catching any exceptions that might be thrown when an operation is attempted.
So my questions are:
1) What good is JDBC's isClosed() anyway? Since when do we use Exceptions in Java to check for validity?
2) What is the correct pattern for closing a database connection? Should I just call close() and swallow any exception?
3) Any idea why SnaqDB would be closing the connection? (My backend is Postgres 8.3.)
I'll answer your questions with corresponding numbers:
I agree with you; it seems strange that isClosed reports the closed state only on a best-effort basis, and that your code still has to be prepared to catch an exception when closing the connection. I think the reason is that the connection may be closed at any time by the database, so any status returned by a method like isClosed is intrinsically stale information: the state may change between checking isClosed and calling close on the Connection.
Calling close has no effect on your data or on previous queries. JDBC operations execute with synchronous results, so all useful work has either succeeded or failed by the time isClosed is called. (This is true with both autoCommit and explicit transaction boundaries.) If your application is a single user accessing a local database, then showing the error to the user might help them diagnose problems. In other environments, logging the exception and swallowing it is probably the best course of action. Either way, swallowing the exception is safe, as it has no bearing on the state of the database (see the sketch after this answer).
Looking at the source for SnaqDB's CacheConnection, the isClosed method delegates to the underlying connection. So the problem does not lie there, but with the contract defined for isClosed() and with Connection.close() throwing an exception.
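As a small illustration of point 2, a "close quietly" helper along these lines (the method name is my own) simply logs or ignores the exception, since by that point it cannot affect your data:
public static void closeQuietly(Connection conn) {
    if (conn == null) {
        return;
    }
    try {
        conn.close();
    } catch (SQLException e) {
        // already closed or broken; nothing useful to do beyond logging it
    }
}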
We have JBoss and Oracle on separate servers. The connections seem to be getting dropped, and this is causing issues with JBoss. How can I have JBoss reconnect to Oracle if the connection is bad, while we figure out why the connections are being dropped in the first place?
Whilst you can use the old "select 1 from dual" trick, the downside with this is that it issues an extra query each and every time you borrow a connection from the pool. For high volumes, this is wasteful.
JBoss provides a special connection validator which should be used for Oracle:
<valid-connection-checker-class-name>
org.jboss.resource.adapter.jdbc.vendor.OracleValidConnectionChecker
</valid-connection-checker-class-name>
This makes use of the proprietary ping() method on the Oracle JDBC Connection class, and uses the driver's underlying networking code to determine if the connection is still alive.
However, it's still wasteful to run this each and every time a connection is borrowed, so you may want to use the facility where a background thread checks the connections in the pool, and silently discards the dead ones. This is much more efficient, but means that if the connections do go dead, any attempt to use them before the background thread runs its check will fail.
See the wiki docs for how to configure the background checking (look for background-validation-millis).
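Put together, the relevant part of a JBoss 4/5-style *-ds.xml might look roughly like this (element names and the one-minute interval differ between versions, so treat it as a sketch):
<valid-connection-checker-class-name>
    org.jboss.resource.adapter.jdbc.vendor.OracleValidConnectionChecker
</valid-connection-checker-class-name>
<background-validation>true</background-validation>
<background-validation-millis>60000</background-validation-millis>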
There is usually a configuration option on the pool to enable a validation query to be executed on borrow. If the validation query executes successfully, the pool will return that connection. If the query does not execute successfully, the pool will create a new connection.
The JBoss Wiki documents the various attributes of the pool.
<check-valid-connection-sql>select 1 from dual</check-valid-connection-sql>
Seems like it should do the trick.
Not enough rep for a comment, so it's in the form of an answer. The 'select 1 from dual' approach and skaffman's org.jboss.resource.adapter.jdbc.vendor.OracleValidConnectionChecker are equivalent, although the connection checker does provide a level of abstraction. We had to decompile the Oracle JDBC drivers for a troubleshooting exercise, and Oracle's internal implementation of the ping is to perform a 'select 'x' from dual'. Natch.
JBoss provides two ways to validate a connection:
- ping based, and
- query based
You can use whichever suits your requirements. The validation is scheduled on a separate thread, at the interval defined in the datasource configuration file:
<background-validation>true</background-validation> <background-validation-minutes>1</background-validation-minutes>
Sometimes, if you don't have the right Oracle driver in JBoss, you may get a ClassCastException or a related error, and connections may start dropping out of the connection pool. In that case you can try creating your own connection validator by implementing the org.jboss.resource.adapter.jdbc.ValidConnectionChecker interface. This interface provides only a single method, isValidConnection(), and expects null in return for a valid connection.
Example:
import java.io.Serializable;
import java.lang.reflect.Method;
import java.sql.Connection;
import java.sql.SQLException;
import org.jboss.logging.Logger;
import org.jboss.resource.adapter.jdbc.ValidConnectionChecker;

public class OracleValidConnectionChecker implements ValidConnectionChecker, Serializable {

    private static final Logger log = Logger.getLogger(OracleValidConnectionChecker.class);

    // pingDatabase(int) is resolved reflectively to avoid a compile-time Oracle dependency
    private Method ping;

    // The timeout in milliseconds (apparently the timeout is ignored by some driver versions)
    private static final Object[] params = new Object[] { new Integer(5000) };

    public SQLException isValidConnection(Connection c) {
        try {
            if (ping == null) {
                ping = c.getClass().getMethod("pingDatabase", new Class[] { Integer.TYPE });
            }
            Integer status = (Integer) ping.invoke(c, params);
            if (status.intValue() < 0) {
                return new SQLException("pingDatabase failed status=" + status);
            }
        }
        catch (Exception e) {
            log.warn("Unexpected error in pingDatabase", e);
        }
        // returning null signals a valid connection
        return null;
    }
}
A little update to @skaffman's answer: in JBoss 7 you have to use the "class-name" attribute when setting the valid connection checker, and the package is different as well:
<valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker" />
We recently had some sporadic request-handling failures caused by orphaned Oracle DBMS_LOCK session locks that were retained indefinitely in the client-side connection pool.
So here is a solution that forces session expiry after 30 minutes but doesn't affect the application's operation:
<check-valid-connection-sql>select case when 30/60/24 > sysdate-LOGON_TIME then 1 else 1/0 end
from V$SESSION where AUDSID = userenv('SESSIONID')</check-valid-connection-sql>
This may slow down the process of obtaining connections from the pool somewhat, so make sure to test it under load.