C3P0 Spring Hibernate: Pool maxed out. How to debug?

I have a Spring Hibernate application on Tomcat.
The connection pool is C3P0.
I am rapidly running into a thread pool maxed out warning from C3P0, after which all requests to the webapp hang.
My working assumption is that somewhere in the code I have missed an @Transactional annotation, and I want to debug my code.
Question: Can I access the connection pool via code so I can debug when a connection is released and when it is not released?
UPDATE: Current c3p0 config:
<!-- Hibernate -->
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource"
destroy-method="close">
<property name="driverClass" value="org.postgresql.Driver" />
<property name="jdbcUrl" value="jdbc:postgresql://localhost/ikoda01?useUnicode=true&characterEncoding=utf8" />
<property name="user" value="xxxuser" />
<property name="password" value="xxxxx" />
<property name="acquireIncrement" value="2" />
<property name="minPoolSize" value="3" />
<property name="maxPoolSize" value="50" />
<property name="maxIdleTime" value="600" />
</bean>

Can I access the connection pool via code so I can debug when a connection is released and when it is not released?
Yes
You don't even have to write code; that's built in. If you configure c3p0 to debug Connection leaks, it will print the stack traces of the code paths that checked out the leaked Connections to your logs.
Update:
<property name="unreturnedConnectionTimeout" value="30" />
<property name="debugUnreturnedConnectionStackTraces" value="true" />
The value you should use for unreturnedConnectionTimeout depends on your application. It should be longer than the longest expected legitimate use of a Connection, but the shorter it is, the more quickly you'll get log messages about the leak and the more smoothly your app will work around it. For most web-ish applications, the 30 secs shown above is conservative: clients aren't expected to wait around 30 secs for a response, so a timeout safely indicates a Connection leak.
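If you also want to watch the pool from code while reproducing the problem, ComboPooledDataSource exposes counters for checked-out and idle connections. Below is a minimal sketch under that assumption; the PoolMonitor class and the idea of logging before and after a suspect request are illustrative, not part of the original question.

import java.sql.SQLException;
import com.mchange.v2.c3p0.ComboPooledDataSource;

// Illustrative helper: a connection leak shows up as a "busy" count that
// keeps growing and never returns to its idle baseline.
public class PoolMonitor {

    private final ComboPooledDataSource pool;

    public PoolMonitor(ComboPooledDataSource pool) {
        this.pool = pool;
    }

    public void logPoolState() throws SQLException {
        int busy = pool.getNumBusyConnectionsDefaultUser();   // checked out right now
        int idle = pool.getNumIdleConnectionsDefaultUser();   // available in the pool
        int total = pool.getNumConnectionsDefaultUser();      // busy + idle
        System.out.printf("c3p0 pool: busy=%d idle=%d total=%d%n", busy, idle, total);
    }
}

Wiring this against the dataSource bean above and calling logPoolState() before and after a suspect request makes a leaked checkout easy to spot, though the unreturnedConnectionTimeout stack traces will usually point at the offending code directly.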

Related

HikariCP Lazy with Spring LazyConnectionDataSourceProxy

Can a HikariCP DataSource be started with a lazy configuration?
For that, I'm using Spring's LazyConnectionDataSourceProxy.
<bean id="hikariConfig" class="com.zaxxer.hikari.HikariConfig" lazy-init="true">
<property name="poolName" value="TargetHikariCP" />
<property name="dataSourceClassName" value="oracle.jdbc.pool.OracleDataSource" />
<property name="connectionInitSql" value="SELECT 1 FROM DUAL"/>
<property name="leakDetectionThreshold" value="300000"/>
<property name="minimumIdle" value="1"/>
<property name="maximumPoolSize" value="10"/>
<property name="autoCommit" value="false"/>
<property name="dataSourceProperties"> <props> ... </props> </property>
</bean>
<bean id="dataSourceLazy" class="com.zaxxer.hikari.HikariDataSource" destroy-method="close" lazy-init="true">
<constructor-arg ref="hikariConfig" />
</bean>
<bean id="dataSource"
class="org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy">
<property name="targetDataSource" ref="dataSourceLazy" />
</bean>
<bean id="txManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager" lazy-init="true">
<property name="dataSource" ref="dataSource" />
</bean>
Nevertheless, it's not working, as the DataSource is still initialized at project startup.
The same configuration, when using a org.springframework.jdbc.datasource.DriverManagerDataSource, works correctly.
In versions > 3 we can call setInitializationFailTimeout(-1).
According to the docs:
Any value greater than zero will be treated as a timeout for pool initialization. The calling thread will be blocked from continuing until a successful connection
to the database, or until the timeout is reached. If the timeout is reached, then
a PoolInitializationException will be thrown.
A value of zero will not prevent the pool from starting in the
case that a connection cannot be obtained. However, upon start the pool will
attempt to obtain a connection and validate that the connectionTestQuery
and connectionInitSql are valid. If those validations fail, an exception
will be thrown. If a connection cannot be obtained, the validation is skipped
and the pool will start and continue to try to obtain connections in the
background. This can mean that callers to DataSource#getConnection() may
encounter exceptions.
A value less than zero will bypass any connection attempt and validation during
startup, and therefore the pool will start immediately. The pool will continue to
try to obtain connections in the background. This can mean that callers to
DataSource#getConnection() may encounter exceptions.
HikariCP has a property, initializationFailFast, that controls whether the pool will "fail fast" if the pool cannot be seeded with initial connections successfully:
This property controls whether the pool will "fail fast" if the pool cannot be seeded with initial connections successfully. If you want your application to start even when the database is down/unavailable, set this property to false. Default: true
This property used to be documented on their site; as of version 2.6.2 it no longer is, but it seems it is still supported.
In my use case, the use of this property should be enough to solve my problem.
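For completeness, here is a sketch of the delayed-startup wiring in plain Java, assuming a HikariCP version that supports initializationFailTimeout; the JDBC URL and pool sizes are placeholders rather than values from the question.

import javax.sql.DataSource;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy;

public class LazyHikariFactory {

    public static DataSource buildLazyDataSource() {
        HikariConfig config = new HikariConfig();
        config.setPoolName("TargetHikariCP");
        config.setJdbcUrl("jdbc:oracle:thin:@//db-host:1521/SERVICE"); // placeholder URL
        config.setMinimumIdle(1);
        config.setMaximumPoolSize(10);
        config.setAutoCommit(false);
        // -1 bypasses the startup connection attempt and validation,
        // so building the pool does not touch the database.
        config.setInitializationFailTimeout(-1);

        HikariDataSource pool = new HikariDataSource(config);

        // The proxy defers fetching a physical connection until a
        // Statement is actually created, which pairs well with the above.
        return new LazyConnectionDataSourceProxy(pool);
    }
}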

Fuse distributed tx manager doesn't release DB sessions

We have an OracleXADataSource that is being wrapped by Apache Aries in Fuse Fabric (like in this article). If I keep sending a lot of requests to the server, it starts throwing the following error:
Caused by: java.sql.SQLException: Listener refused the connection with the following error:
ORA-12519, TNS:no appropriate service handler found
When I check the sessions in Oracle using the following query, it keeps showing an increased number under current utilization after every request.
select resource_name, current_utilization, max_utilization, limit_value
from v$resource_limit
where resource_name in ('sessions', 'processes', 'transactions');
RESOURCE_NAME   CURRENT_UTILIZATION   MAX_UTILIZATION   LIMIT_VALUE
processes       545                   768               800
sessions        553                   774               1222
transactions    0                     0                 UNLIMITED
Most of the recommendations for this issue say to increase the process and session limits in Oracle, but I'm afraid that would only solve the problem temporarily, until we reach a certain load.
I have found/tried the following so far:
Periodically, when the load increases (or after a certain amount of time), the session and process counts drop by a larger amount (100-200). (I guess Geronimo periodically releases the sessions.) At the same time as a batch of sessions is released, the active transactions column shows the same amount:
RESOURCE_NAME   CURRENT_UTILIZATION   MAX_UTILIZATION   LIMIT_VALUE
processes       355                   768               800
sessions        363                   774               1222
transactions    122                   122               UNLIMITED
If I shut down Fuse, the processes value goes back to its initial size immediately (so the issue is on the client side).
If I turn off distributed transaction support, everything is fine and the process count doesn't increase at all.
I tried adding pooling to the OracleXADataSource, but nothing changed (it's deprecated, but I assume it still works; we don't have the UCP jar, unfortunately, so I couldn't test with that):
<property name="connectionCachingEnabled" value="true"/>
<property name="connectionCacheProperties">
<props merge="default">
<prop key="InitialLimit">1</prop>
<prop key="MinLimit">1</prop>
<prop key="MaxLimit">1</prop>
</props>
</property>
I couldn't resolve this issue using Aries, unfortunately; I consider it a bug. However, I managed to make it work properly using Atomikos, which I strongly recommend. It is much more straightforward than Aries' built-in auto-proxy behaviour: you declare everything, so you know what actually happens.
<bean id="transactionManager" class="com.atomikos.icatch.jta.UserTransactionManager" init-method="init" destroy-method="close">
<property name="forceShutdown" value="false" />
</bean>
<bean id="userTransaction" class="com.atomikos.icatch.jta.UserTransactionImp">
<property name="transactionTimeout" value="300" />
</bean>
<bean id="jtaTransactionManager" class="org.springframework.transaction.jta.JtaTransactionManager">
<property name="transactionManager" ref="transactionManager" />
<property name="userTransaction" ref="userTransaction" />
</bean>
<bean id="dataSource" class="com.atomikos.jdbc.AtomikosDataSourceBean">
<property name="uniqueResourceName" value="oracledb" />
<property name="xaDataSource">
<bean class="oracle.jdbc.xa.client.OracleXADataSource">
<property name="URL" value="jdbc:oracle:thin:#${db.host}:${db.port}:${db.sid}"/>
<property name="user" value="${db.schema}" />
<property name="password" value="${db.password}" />
</bean>
</property>
</bean>
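With those beans in place, application code doesn't need anything XA-specific; assuming <tx:annotation-driven transaction-manager="jtaTransactionManager"/> is enabled, a plain @Transactional service will enlist the Oracle XA connection through Atomikos. The service below is a made-up illustration, not code from the original setup.

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical service showing how code participates in the
// Atomikos-managed JTA transaction declared above.
public class OrderService {

    private final JdbcTemplate jdbcTemplate;

    public OrderService(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    @Transactional
    public void recordOrder(long orderId) {
        // The connection comes from the AtomikosDataSourceBean, is enlisted
        // in the JTA transaction, and is released on commit or rollback.
        jdbcTemplate.update("INSERT INTO orders (id) VALUES (?)", orderId);
    }
}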

how to configure (spring) JMS connection Pool for WMQ

I am trying to configure a JMS connection pool in Spring/Camel for WebSphere MQ. I am seeing a ClassCastException when I try to use CachingConnectionFactory from Spring. I could not find a pool from WMQ; has anybody done connection pooling with WMQ? I didn't find any examples, though there are lots of examples for ActiveMQ.
Here is what I have so far, which produces the ClassCastException:
<bean id="inCachingConnectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory">
<property name="targetConnectionFactory" ref="inboundMqConnectionFactory1" />
<property name="sessionCacheSize" value="5" />
</bean>
<bean id="inboundWebsphereMq1" class="org.apache.camel.component.jms.JmsComponent">
<property name="connectionFactory" ref="inCachingConnectionFactory" />
<property name="destinationResolver" ref="jmsDestinationResolver" />
<property name="transacted" value="true" />
<property name="transactionManager" ref="txManager1" />
</bean>
<bean id="inboundMqConnectionFactory1" class="com.ibm.mq.jms.MQQueueConnectionFactory">
<property name="hostName" value="${isi.inbound.queue.host2}" />
<property name="port" value="${isi.inbound.queue.port}" />
<property name="queueManager" value="${isi.inbound.queue.queuemanager2}" />
<property name="channel" value="${isi.inbound.queue.channel2}" />
<property name="transportType" value="${isi.queue.transportType}" />
</bean>
The exception I see is:
trying to recover. Cause: com.sun.proxy.$Proxy37 cannot be cast to com.ibm.mq.jms.MQQueueSession
In general:
Do not use QueueConnectionFactory or TopicConnectionFactory, as ConnectionFactory (JMS 1.1) is a replacement for both.
Each ConnectionFactory from the v7 WMQ JMS client jars provides caching logic of its own, so in general you don't need CachingConnectionFactory.
Now try it this way:
<bean id="mqConnectionFactory" class="com.ibm.mq.jms.MQConnectionFactory"
p:queueManager="${QM_NAME}"
p:hostName="${QM_HOST_NAME}"
p:port="${QM_HOST_PORT}"
p:channel="${QM_CHANNEL}"
p:clientID="${QM_CLIENT_ID}">
<property name="transportType">
<util:constant static-field="com.ibm.msg.client.wmq.WMQConstants.WMQ_CM_CLIENT" />
</property>
</bean>
<bean id="userConnectionFactory" class="org.springframework.jms.connection.UserCredentialsConnectionFactoryAdapter"
p:targetConnectionFactory-ref="mqConnectionFactory"
p:username="${QM_USERNAME}"
p:password="${QM_PASSWORD}" />
<!-- this will work -->
<bean id="connectionFactory" class="org.springframework.jms.connection.CachingConnectionFactory"
p:targetConnectionFactory-ref="userConnectionFactory"
p:cacheConsumers="true"
p:reconnectOnException="true" />
Of course you can cache sessions instead of consumers if you want it that way. In my experience WMQ session caching is a measurable performance improvement, but only if you are limited by CPU power on the WMQ machine or by actual message throughput; both situations are rare in the majority of real-world applications. Caching consumers avoids excessive MQ OPEN calls, which are expensive operations on WMQ, so it helps too.
My rule of thumb is that the consumer + session caching benefit is about half the benefit of connection caching, and it is usually not worth pursuing in your everyday JEE stack unless you are hardware limited.
Since WMQ v7, asynchronous consumers are really, really fast, with practically no CPU overhead compared to the Spring MC, and they are the preferred way of consuming messages if you are hardware limited. Most days I still use Spring, as I prefer its easy-going nature.
Hope it helps.
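To tie this back to the Camel setup from the question, the JmsComponent would then point at the caching factory instead of the raw MQQueueConnectionFactory. A minimal Java sketch under that assumption (the parameters correspond to the connectionFactory and txManager1 beans mentioned above):

import org.apache.camel.component.jms.JmsComponent;
import org.springframework.jms.connection.CachingConnectionFactory;
import org.springframework.transaction.PlatformTransactionManager;

// Illustrative wiring only; in XML this is the same as pointing the
// JmsComponent's connectionFactory property at the caching factory.
public class CamelJmsWiring {

    public static JmsComponent inboundWebsphereMq(CachingConnectionFactory connectionFactory,
                                                  PlatformTransactionManager txManager) {
        JmsComponent jms = new JmsComponent();
        jms.setConnectionFactory(connectionFactory);
        jms.setTransacted(true);
        jms.setTransactionManager(txManager);
        return jms;
    }
}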

Spring BasicDataSource pool to perform WHEN_EXHAUSTED_GROW behaviour

When we deployed our application to the production server we saw strange behaviour: sometimes it stopped getting data from the database, even though the logic was correct and the local version worked perfectly. After remote debugging we found that GenericObjectPool was blocking the thread, and after some time spent searching we found a situation that fits our problem.
The pool simply got exhausted once it was filled with 8 connections (the default value), and by default its behaviour is to block the thread when exhausted.
Here is my datamodel-context.xml config for the BasicDataSource:
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName" value="com.mysql.jdbc.Driver"/>
<property name="url" value="url_to_db"/>
<property name="username" value="username"/>
<property name="password" value="12211"/>
<property name="defaultAutoCommit" value="false"/>
<property name="poolPreparedStatements" value="true"/>
<property name="initialSize" value="10"/>
<property name="maxIdle" value="5"/>
<property name="testOnBorrow" value="true"/>
<property name="logAbandoned" value="true"/>
</bean>
I found two solutions here: increase the max active number, or change the pool's behaviour.
The first one seems simple: set the maxActive property on BasicDataSource to some number (please correct me if I am mistaken). This is unwanted because we cannot know the exact number of possible simultaneous connections.
It was decided to try the second way: change the behaviour to WHEN_EXHAUSTED_GROW.
So is there any way to configure Spring's default pool to change its whenExhaustedAction behaviour? Or should I define my own connection pool for the BasicDataSource? If so, can you please provide examples?
I appreciate any comments or advice. Thanks.
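One note on the first option: as far as I know, commons-dbcp's BasicDataSource does not expose whenExhaustedAction directly; the knobs it offers are maxActive and maxWait. A sketch of that route is below; the numbers are placeholders, not recommendations.

import org.apache.commons.dbcp.BasicDataSource;

// Illustrative only: the DBCP 1.x properties that control what happens
// when every pooled connection is already checked out.
public class DataSourceFactory {

    public static BasicDataSource createDataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("com.mysql.jdbc.Driver");
        ds.setUrl("url_to_db"); // same placeholder as in the XML above
        ds.setUsername("username");
        ds.setPassword("12211");
        ds.setDefaultAutoCommit(false);
        ds.setMaxActive(50);    // placeholder upper bound on simultaneous connections
        ds.setMaxWait(10000);   // wait at most 10s for a free connection instead of blocking forever
        return ds;
    }
}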

Connections not closed Spring with tomcat 5.5

We have a J2EE application built on Spring Framework 2.0. The server is Tomcat 5.5 and the database is MySQL. We host the application on a VPS, and we have noticed that CPU usage increases as more users use the application. The CPU usage does not come down once the users stop using the application. Is it connections that are not closed properly, or is there some other issue?
Here is the servlet.xml configuration for the connections
<bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource">
<property name="driverClassName" value="com.mysql.jdbc.Driver"/>
<property name="url" value="jdbc:mysql://localhost/myDB"/>
<property name="username" value="xxxx"/>
<property name="password" value="xxxx"/>
<property name="validationQuery" value="SELECT 1"/>
<property name="testOnBorrow" value="true"/>
</bean>
We have also tried using
<bean id="myDataSource"
class="org.springframework.jdbc.datasource.DriverManagerDataSource">
<property name="driverClassName" value="com.mysql.jdbc.Driver"/>
<property name="url" value="jdbc:mysql://localhost/myDB"/>
<property name="username" value="xxxx"/>
<property name="password" value="xxxx"/>
</bean>
Both of them cause the same problem. Can anyone help us out quickly? We need to correct this issue as soon as possible. Thanks in advance.
It is unlikely that high CPU usage is caused by connection pool issues; it's probably a mistake within your application code. Have you monitored the database connections, and are they released and closed properly?
By the way, I'd suggest you switch to the native connection pool built into Tomcat. It can be obtained as a standard Java EE resource from the JNDI implementation in Tomcat.
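If you go the Tomcat route, you would declare a JDBC Resource in the webapp's context.xml and look it up over JNDI. A minimal sketch of the lookup side, assuming a resource named jdbc/myDB (the name is illustrative):

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

// Illustrative lookup of a Tomcat-managed pool; "jdbc/myDB" must match the
// Resource name declared in Tomcat's context.xml for this webapp.
public class TomcatPoolLookup {

    public static DataSource lookupDataSource() throws NamingException {
        InitialContext ctx = new InitialContext();
        return (DataSource) ctx.lookup("java:comp/env/jdbc/myDB");
    }
}

In Spring XML the same lookup is usually wired with a JndiObjectFactoryBean pointing at that JNDI name, so the rest of the configuration can keep injecting myDataSource unchanged.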
