Using the Oracle JDBC driver's implicit statement caching feature

I am pretty sure that somebody else already asked this question, but I still couldn't find a satisfactory answer to it.
So, here is my scenario: I want to use the Oracle JDBC driver's implicit statement caching (documented here: http://docs.oracle.com/cd/B28359_01/java.111/b31224/stmtcach.htm#i1072607).
I need to use the connections from a 3rd party JDBC pool provider (to be more specific, Tomcat JDBC) and I have no choice there.
The problem is that enabling implicit caching is a two-step process (according to the documentation):
1. Call setImplicitCachingEnabled(true) on the connection, or call
   OracleDataSource.getConnection with the ImplicitCachingEnabled property set
   to true. You set ImplicitCachingEnabled by calling
   OracleDataSource.setImplicitCachingEnabled(true).
2. In addition to calling one of these methods, you also need to call
   OracleConnection.setStatementCacheSize on the physical connection. The
   argument you supply is the maximum number of statements in the cache. An
   argument of 0 specifies no caching.
I can live with step 1 (I can configure my pool to use an OracleDataSource as the underlying connection factory and call OracleDataSource.setImplicitCachingEnabled(true) on it).
But for step 2, I already need the connection to be present in order to call setStatementCacheSize.
My question is whether there is any way to specify, at the data source level, a default value for statementCacheSize, so that the connections I get from the OracleDataSource already have implicit caching enabled.
PS: some related questions I found here:
Oracle jdbc driver: implicit statement cache or setPoolable(true)?
Update (possible solution):
Eventually I did this:
Created a native connection pool using oracle.jdbc.pool.OracleDataSource.
Created a Tomcat JDBC connection pool using org.apache.tomcat.jdbc.pool.DataSource that uses the native one (see the property dataSource).
Enabled an AOP pointcut so that after the execution of 'execution(public java.sql.Connection oracle.jdbc.pool.OracleDataSource.getConnection())' I pick up the returned connection and apply the settings I wanted (see the sketch below).
The solution works great; I am just unhappy that I had to write some boilerplate to do it (I was expecting a straightforward property).
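For reference, here is a minimal sketch of that AOP advice, assuming Spring AOP with AspectJ annotations and that the OracleDataSource is a Spring-managed bean; the class name and cache size are illustrative:

import java.sql.Connection;

import oracle.jdbc.OracleConnection;
import org.aspectj.lang.annotation.AfterReturning;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class ImplicitCachingAspect {

    private static final int STATEMENT_CACHE_SIZE = 50; // illustrative value

    // Runs after OracleDataSource.getConnection() hands out a physical connection.
    @AfterReturning(
            pointcut = "execution(public java.sql.Connection oracle.jdbc.pool.OracleDataSource.getConnection())",
            returning = "connection")
    public void enableImplicitCaching(Connection connection) throws Exception {
        if (connection instanceof OracleConnection) {
            OracleConnection oc = (OracleConnection) connection;
            oc.setImplicitCachingEnabled(true);
            oc.setStatementCacheSize(STATEMENT_CACHE_SIZE);
        }
    }
}

Note that proxy-based Spring AOP only advises Spring-managed beans, so this only works because the OracleDataSource is itself a bean (the one wrapped by the Tomcat pool).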

The white paper Oracle JDBC Memory Management says that:
The 11.2 drivers also add a new property to enable the Implicit Statement Cache:
oracle.jdbc.implicitStatementCacheSize
The value of the property is an integer string, e.g. “100”. It is the initial
size of the statement cache. Setting the property to a positive value enables
the Implicit Statement Cache. The default is “0”. The property can be set as a
System property via -D or as a connection property via getConnection.
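If you are on an 11.2 or later driver, that property is a one-line alternative to the AOP approach above. A minimal sketch of passing it as a connection property (the URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class ImplicitCacheViaProperty {
    public static void main(String[] args) throws Exception {
        // Alternatively, pass -Doracle.jdbc.implicitStatementCacheSize=100 on the JVM command line.
        Properties props = new Properties();
        props.setProperty("user", "scott");        // placeholder credentials
        props.setProperty("password", "tiger");
        props.setProperty("oracle.jdbc.implicitStatementCacheSize", "100");
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", props); // placeholder URL
        // Statements prepared on this connection are now implicitly cached.
        conn.close();
    }
}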

You can only change the statement cache size through the OracleConnection.setStatementCacheSize method.
Instead of modifying your application to call OracleConnection.setStatementCacheSize on every connection, you can create a Tomcat JDBC interceptor (a subclass of org.apache.tomcat.jdbc.pool.JdbcInterceptor) and override its reset method:
@Override
public void reset(ConnectionPool pool, PooledConnection connection) {
    if (connection == null) {
        return;
    }
    Connection original = connection.getConnection();
    if (!(original instanceof OracleConnection)) {
        return;
    }
    try {
        // implicitCachingEnabled and statementCacheSize are fields of the interceptor
        if (!((OracleConnection) original).getImplicitCachingEnabled() && implicitCachingEnabled) {
            ((OracleConnection) original).setImplicitCachingEnabled(implicitCachingEnabled);
            log.info("Activated statement cache");
            ((OracleConnection) original).setStatementCacheSize(statementCacheSize);
            log.info("Statement cache size set to " + statementCacheSize);
        }
    } catch (SQLException e) {
        log.error(e.getMessage(), e);
    }
}
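To wire the interceptor into Tomcat JDBC, register it by class name on the pool. A minimal sketch; the interceptor class name, property values, URL, and credentials are illustrative:

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class PoolSetup {
    public static DataSource createDataSource() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:oracle:thin:@//dbhost:1521/ORCL");   // placeholder URL
        p.setDriverClassName("oracle.jdbc.OracleDriver");
        p.setUsername("scott");
        p.setPassword("tiger");
        // Interceptor properties are passed in parentheses and can be read inside
        // the interceptor; OracleStatementCacheInterceptor is the class sketched above.
        p.setJdbcInterceptors(
                "com.example.OracleStatementCacheInterceptor(statementCacheSize=50)");
        return new DataSource(p);
    }
}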

Related

SingleConnectionDataSource now closing on new instance

I have a small command-line utility. My code is simple: I create a SingleConnectionDataSource, pass it along until it is needed, and then do
ds.getConnection()
Until now this was working: I would get a connection and use it. But some months back it stopped working and threw an exception:
Failed to obtain JDBC Connection; nested exception is java.sql.SQLException: Connection was closed in SingleConnectionDataSource. Check that user code checks shouldClose() before closing Connections, or set 'suppressClose' to 'true'
Now, when I create the data source, I add
((SingleConnectionDataSource)db).setSuppressClose(true);
and now it works fine (as the exception message suggested).
My question is: why did it stop working, or rather, how was it working before? Why would the connection be closed even on first use? As per the Javadoc it is supposed to be an
Implementation of SmartDataSource that wraps a single JDBC Connection
which is not closed after use.
So I should be the one closing it, at the end of the process.
So technically I have already solved the problem; my question is why it appeared at all and when this behaviour started.
Edit -- It behaves like this on SQL Server only, not on Oracle.
Edit 2 -- Sorry, for Oracle the code takes a different path, so it works there:
JdbcTemplate template = new JdbcTemplate(dataSource);
So either use setSuppressClose(true) or use JdbcTemplate.
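A minimal sketch of the working setup, assuming Spring JDBC; the URL and credentials are placeholders:

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.SingleConnectionDataSource;

public class CommandLineUtil {
    public static void main(String[] args) throws Exception {
        SingleConnectionDataSource ds = new SingleConnectionDataSource();
        ds.setUrl("jdbc:sqlserver://dbhost:1433;databaseName=mydb"); // placeholder URL
        ds.setUsername("user");
        ds.setPassword("password");
        ds.setSuppressClose(true);   // hand out close-suppressing connection proxies

        JdbcTemplate template = new JdbcTemplate(ds);
        // ... run queries with the template or with ds.getConnection() ...

        ds.destroy();                // close the single physical connection at the end
    }
}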
We would need to know your database and application server to answer definitively, but my guess is that one or the other was closing the connection after a timeout. Why are you trying to manage the connection yourself to begin with, though? Many application servers provide a connection pool.
This is a partial answer to my own question: why would it close the connection before first use?
In SingleConnectionDataSource, getConnection() calls initConnection():
/**
 * Initialize the underlying Connection via the DriverManager.
 */
public void initConnection() throws SQLException {
    if (getUrl() == null) {
        throw new IllegalStateException("'url' property is required for lazily initializing a Connection");
    }
    synchronized (this.connectionMonitor) {
        closeConnection();
        this.target = getConnectionFromDriver(getUsername(), getPassword());
        prepareConnection(this.target);
        if (logger.isDebugEnabled()) {
            logger.debug("Established shared JDBC Connection: " + this.target);
        }
        this.connection = (isSuppressClose() ? getCloseSuppressingConnectionProxy(this.target) : this.target);
    }
}
So initConnection() starts by closing any existing connection before establishing a new one, which makes it even more intriguing why it used to work. The class has had the same initConnection() method since its inception (as far as I can see on GitHub).

How to set transaction timeout in HikariCP

I am looking for an analog of the setDefaultTimeout method of Spring's AbstractPlatformTransactionManager in jOOQ or the HikariCP connection pool.
I found various timeouts like loginTimeout, maxLifetime, and idleTimeout in HikariDataSource, but none of them seems to fit my purpose.
I looked at jOOQ's TransactionProvider too.
After some source code investigation I spotted the following code in HikariCP:
setNetworkTimeout(connection, validationTimeout);
try (Statement statement = connection.createStatement()) {
    if (isNetworkTimeoutSupported != TRUE) {
        setQueryTimeout(statement,
                (int) MILLISECONDS.toSeconds(Math.max(1000L, validationTimeout)));
    }
    statement.execute(config.getConnectionTestQuery());
}
Looking at this, I suppose the configuration I am after is validationTimeout. Is this correct?
The code you found runs the connection validation query (normally very quick); that is where the validation timeout is applied.
Most probably, a transaction in your app will take much longer than the validation timeout specified for HikariCP.
At present you can set a query timeout for org.jooq.Query, but not for org.jooq.Routine; see https://github.com/jOOQ/jOOQ/issues/3892. A sketch of the Query timeout follows.
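A minimal sketch of that per-query timeout, assuming a DSLContext backed by the Hikari pool; the SQL is illustrative:

import javax.sql.DataSource;

import org.jooq.DSLContext;
import org.jooq.Record;
import org.jooq.Result;
import org.jooq.ResultQuery;
import org.jooq.SQLDialect;
import org.jooq.impl.DSL;

public class QueryTimeoutExample {
    public static void run(DataSource hikariDataSource) {
        DSLContext ctx = DSL.using(hikariDataSource, SQLDialect.ORACLE);

        ResultQuery<Record> query = ctx.resultQuery("select * from my_table"); // illustrative SQL
        query.queryTimeout(30);   // seconds; maps to java.sql.Statement.setQueryTimeout
        Result<Record> result = query.fetch();
        System.out.println(result.size() + " rows fetched");
    }
}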
If you are referencing AbstractPlatformTransactionManager, I am guessing that you wish to use transactions while expressing your queries with jOOQ on top of a HikariCP connection pool.
The best place to start may be jOOQ's transaction documentation:
http://www.jooq.org/doc/3.8/manual/sql-execution/transaction-management/
As you are coming from Spring, the Spring TX integration may be a good starting place.
HikariCP does not itself provide transaction timeout management, as it focuses on managing the connections it has created. The three values you listed do very different things (a configuration sketch follows this list):
loginTimeout - how long HikariCP will wait for a connection to be formed to the database (basically a JDBC connection)
maxLifetime - how long a connection will live in the pool before being closed
idleTimeout - how long an unused connection lives in the pool
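For reference, these pool-level timeouts are set on HikariConfig. A minimal sketch with illustrative values (all in milliseconds; URL and credentials are placeholders):

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolTimeouts {
    public static HikariDataSource createPool() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:oracle:thin:@//dbhost:1521/ORCL"); // placeholder URL
        config.setUsername("user");
        config.setPassword("password");
        config.setConnectionTimeout(30_000);  // wait up to 30 s for a connection from the pool
        config.setValidationTimeout(5_000);   // bound on the validation check discussed above
        config.setIdleTimeout(600_000);       // close connections idle for 10 minutes
        config.setMaxLifetime(1_800_000);     // retire connections after 30 minutes
        return new HikariDataSource(config);
    }
}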

Does BoneCP (or any other pool) close connection's statements when connection is returned to pool?

Does BoneCP (or any other pool) close a connection's statements when the connection is returned to the pool? As I understand it, the pool does not call the actual connection's close method, so there is no automatic statement closing. So, does it close statements in some other way, or do I need to close them manually?
The JDBC spec is very unclear on what should happen on a normal connection close, so irrespective of the pool you use, you should always make sure to close your statements manually (see the try-with-resources sketch after this answer). Consider what would happen to your application if you switch to a different pool in the future that does not do what you expect it to do for you.
As regards BoneCP, the answer is no: it will not close your statements for you, though it can be configured to close your connections if you forget. This is for performance reasons, since some JDBC drivers will internally close any still-active statements when you close the connection.
However, BoneCP will close any cached statements if you have statement caching enabled.
EDIT: As of v0.8.0, support has been added to close off unclosed statements (and optionally print a stack trace of the location where the statement was opened).
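A minimal sketch of closing statements manually with try-with-resources, which works the same regardless of the pool; the data source and SQL are illustrative:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class ManualStatementClosing {
    public static void readRow(DataSource dataSource, long id) throws SQLException {
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement(
                     "select name from my_table where id = ?")) {  // illustrative SQL
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("name"));
                }
            }
        } // statement and result set are closed here; the connection is returned to the pool
    }
}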
With BoneCP (0.8.0-RC3) there are two possible behaviours:
it closes non-cached statements, given some configuration;
it never closes cached statements, no matter how you configure it, even if you invoke statement.close() explicitly.
There is a StatementCache class that caches PreparedStatements and CallableStatements. It is disabled by default; you need to call BoneCPConfig.setStatementsCacheSize() with a parameter > 0 to enable it (see the configuration sketch at the end of this answer). After enabling the cache:
1. BoneCP's Statement.close() bypasses the underlying statement close if the statement is cached:
public void close() throws SQLException {
    this.connectionHandle.untrackStatement(this);
    this.logicallyClosed.set(true);
    if (this.logStatementsEnabled){
        this.logParams.clear();
        this.batchSQL = new StringBuilder();
    }
    if (this.cache == null || !this.inCache){ // no cache = throw it away right now
        this.internalStatement.close();
    }
}
2. BoneCP's Connection.close() simply clears the cache, via clearStatementCaches().
The good news is that the MySQL JDBC driver, Connector/J, will close all open statements when you close the connection, via closeAllOpenStatements().
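For completeness, enabling BoneCP's statement cache looks roughly like this; the URL, credentials, and cache size are illustrative:

import com.jolbox.bonecp.BoneCPConfig;
import com.jolbox.bonecp.BoneCPDataSource;

public class BoneCpSetup {
    public static BoneCPDataSource createDataSource() {
        BoneCPConfig config = new BoneCPConfig();
        config.setJdbcUrl("jdbc:mysql://dbhost:3306/mydb"); // placeholder URL
        config.setUsername("user");
        config.setPassword("password");
        config.setStatementsCacheSize(50);  // > 0 enables the prepared/callable statement cache
        return new BoneCPDataSource(config);
    }
}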

Behaviour of Callable and Prepared Statements in an app server

CallableStatements and PreparedStatements are precompiled. Are they compiled with respect to a connection? I mean, let's assume there are some 100 connection objects residing in the connection pool of an app server, and there is a class that uses CallableStatements and PreparedStatements. Let's say the method that uses them is:
public void invokePreparedAndCallableStatements(){
    // Fetches connection from pool
    Connection con = getConnectionFromPool();
    CallableStatement cs = con.prepareCall(.....);
    cs.register...(...);
    cs.execute();
    ...
    ...
    PreparedStatement st = con.prepareStatement(...);
    st.setXXX(..);
    st.executeUpdate();
    ...
}
Now, when the method is called for the first time, a connection is fetched from the pool and the request is processed; the CallableStatement and PreparedStatement are compiled. When the method is called another 99 times, and each time a different connection is fetched from the pool, will the statements be compiled again for each connection?
What is the most efficient way to use statements in this context? I can't make them (con.prepareCall() or con.prepareStatement()) static because the connection isn't static.
The code is actually compiled and stored in the shared pool of the database. Any number of connections using that same code will benefit from the cache. The compiled code is kept as long as the memory limits allow.
The statements will be precompiled. Pooling will be based on your specified parameters.
Note: If you are using JDBC 3.0, you can also pool your PreparedStatements. Reference: What's new in JDBC 3.0
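As an illustration of how this plays out in client code: with driver-side statement caching enabled (for example, the Oracle implicit cache from the first question), re-preparing the same SQL text on a given connection can be served from that connection's cache, while the database's shared pool caches the execution plan across connections. The SQL and helper method below are illustrative:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class StatementReuse {
    public static void lookupNames(DataSource pool) throws SQLException {
        String sql = "select name from employees where id = ?"; // illustrative SQL
        for (long id = 1; id <= 100; id++) {
            // Each iteration may get a different pooled connection; the driver-side
            // statement cache is per physical connection, while the database's
            // cursor/plan cache is shared across all connections.
            try (Connection con = pool.getConnection();
                 PreparedStatement st = con.prepareStatement(sql)) {
                st.setLong(1, id);
                try (ResultSet rs = st.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("name"));
                    }
                }
            }
        }
    }
}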

Is there any way to have the JBoss connection pool reconnect to Oracle when connections go bad?

We have our JBoss and Oracle on separate servers. The connections seem to be dropped, and this is causing issues with JBoss. How can I have JBoss reconnect to Oracle when a connection is bad, while we figure out why the connections are being dropped in the first place?
Whilst you can use the old "select 1 from dual" trick, the downside with this is that it issues an extra query each and every time you borrow a connection from the pool. For high volumes, this is wasteful.
JBoss provides a special connection validator which should be used for Oracle:
<valid-connection-checker-class-name>
org.jboss.resource.adapter.jdbc.vendor.OracleValidConnectionChecker
</valid-connection-checker-class-name>
This makes use of the proprietary ping() method on the Oracle JDBC Connection class, and uses the driver's underlying networking code to determine if the connection is still alive.
However, it's still wasteful to run this each and every time a connection is borrowed, so you may want to use the facility where a background thread checks the connections in the pool, and silently discards the dead ones. This is much more efficient, but means that if the connections do go dead, any attempt to use them before the background thread runs its check will fail.
See the wiki docs for how to configure the background checking (look for background-validation-millis).
There is usually a configuration option on the pool to enable a validation query to be executed on borrow. If the validation query executes successfully, the pool will return that connection. If the query does not execute successfully, the pool will create a new connection.
The JBoss Wiki documents the various attributes of the pool.
<check-valid-connection-sql>select 1 from dual</check-valid-connection-sql>
Seems like it should do the trick.
Not enough rep for a comment, so it's in the form of an answer. The 'select 1 from dual' and skaffman's org.jboss.resource.adapter.jdbc.vendor.OracleValidConnectionChecker approaches are equivalent, although the connection checker does provide a level of abstraction. We had to decompile the Oracle JDBC drivers for a troubleshooting exercise, and Oracle's internal implementation of the ping is to perform a 'select 'x' from dual'. Natch.
JBoss provides two ways to validate a connection:
- ping based
- query based
You can use whichever fits your requirement. The check is scheduled by a separate thread, with the interval defined in the datasource configuration file.
<background-validation>true</background-validation>
<background-validation-minutes>1</background-validation-minutes>
Sometimes, if you do not have the right Oracle driver in JBoss, you may get a ClassCastException or a related error, and connections may start dropping out of the connection pool. You can try creating your own connection validator class by implementing the org.jboss.resource.adapter.jdbc.ValidConnectionChecker interface. This interface provides a single method, isValidConnection(), which is expected to return null for a valid connection.
For example (log is assumed to be a logger field on the class):
public class OracleValidConnectionChecker implements ValidConnectionChecker, Serializable {

    // The timeout passed to pingDatabase (apparently the timeout is ignored?)
    private static Object[] params = new Object[] { new Integer(5000) };

    public SQLException isValidConnection(Connection c) {
        try {
            // Resolve the proprietary pingDatabase(int) method reflectively, so there
            // is no compile-time dependency on the Oracle driver
            Method ping = c.getClass().getMethod("pingDatabase", new Class[] { Integer.TYPE });
            ping.setAccessible(true);
            Integer status = (Integer) ping.invoke(c, params);
            if (status.intValue() < 0) {
                return new SQLException("pingDatabase failed status=" + status);
            }
        }
        catch (Exception e) {
            log.warn("Unexpected error in pingDatabase", e);
        }
        // a null return value signals a valid connection
        return null;
    }
}
A little update to @skaffman's answer: in JBoss 7 you have to use the "class-name" attribute when setting the valid connection checker, and the package is also different:
<valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker" />
We've recently had some intermittent request-handling failures caused by orphaned Oracle DBMS_LOCK session locks that were retained indefinitely by connections held in the client-side connection pool.
So here is a solution that forces session expiry after 30 minutes but doesn't affect the application's operation:
<check-valid-connection-sql>select case when 30/60/24 > sysdate-LOGON_TIME then 1 else 1/0 end
from V$SESSION where AUDSID = userenv('SESSIONID')</check-valid-connection-sql>
This may slow down the process of obtaining connections from the pool somewhat, so make sure to test it under load.
