Oracle JDBC connection cache: connection kept open for a long time and eventually fails to close

In our application we are facing an issue where certain Hibernate queries take much longer than usual (and sometimes never complete). When profiled, the connection objects related to these queries show as open but never closed.
Because of this behavior, the application eventually runs out of connections and goes into high CPU and heap utilization.
java.lang.Thread.State: TIMED_WAITING
at java.lang.Object.wait(Native Method)
- waiting on <3a685292> (a oracle.jdbc.pool.OracleImplicitConnectionCache)
at oracle.jdbc.pool.OracleImplicitConnectionCache.processConnectionWaitTimeout(OracleImplicitConnectionCache.java:2955)
at oracle.jdbc.pool.OracleImplicitConnectionCache.getConnection(OracleImplicitConnectionCache.java:374)
at oracle.jdbc.pool.OracleDataSource.getConnection(OracleDataSource.java:374)
at oracle.jdbc.pool.OracleDataSource.getConnection(OracleDataSource.java:178)
at oracle.jdbc.pool.OracleDataSource.getConnection(OracleDataSource.java:156)
at org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy$LazyConnectionInvocationHandler.getTargetConnection(LazyConnectionDataSourceProxy.java:403)
at org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy$LazyConnectionInvocationHandler.invoke(LazyConnectionDataSourceProxy.java:376)
at com.sun.proxy.$Proxy75.prepareStatement(Unknown Source)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$5.doPrepare(StatementPreparerImpl.java:161)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl$StatementPreparationTemplate.prepareStatement(StatementPreparerImpl.java:182)
at org.hibernate.engine.jdbc.internal.StatementPreparerImpl.prepareQueryStatement(StatementPreparerImpl.java:159)
at org.hibernate.loader.Loader.prepareQueryStatement(Loader.java:1854)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1831)
at org.hibernate.loader.Loader.executeQueryStatement(Loader.java:1811)
at org.hibernate.loader.Loader.doQuery(Loader.java:899)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:341)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:311)
at org.hibernate.loader.Loader.loadEntity(Loader.java:2111)
at org.hibernate.loader.entity.AbstractEntityLoader.load(AbstractEntityLoader.java:82)
at org.hibernate.loader.entity.AbstractEntityLoader.load(AbstractEntityLoader.java:72)
at org.hibernate.persister.entity.AbstractEntityPersister.load(AbstractEntityPersister.java:3917)
at org.hibernate.event.internal.DefaultLoadEventListener.loadFromDatasource(DefaultLoadEventListener.java:460)
at org.hibernate.event.internal.DefaultLoadEventListener.doLoad(DefaultLoadEventListener.java:429)
at org.hibernate.event.internal.DefaultLoadEventListener.load(DefaultLoadEventListener.java:206)
at org.hibernate.event.internal.DefaultLoadEventListener.proxyOrLoad(DefaultLoadEventListener.java:262)
at org.hibernate.event.internal.DefaultLoadEventListener.onLoad(DefaultLoadEventListener.java:150)
at org.hibernate.internal.SessionImpl.fireLoad(SessionImpl.java:1091)
at org.hibernate.internal.SessionImpl.access$2000(SessionImpl.java:174)
at org.hibernate.internal.SessionImpl$IdentifierLoadAccessImpl.load(SessionImpl.java:2473)
at org.hibernate.internal.SessionImpl.get(SessionImpl.java:991)
at org.hibernate.event.internal.DefaultMergeEventListener.entityIsDetached(DefaultMergeEventListener.java:271)
at org.hibernate.event.internal.DefaultMergeEventListener.onMerge(DefaultMergeEventListener.java:151)
at org.hibernate.event.internal.DefaultMergeEventListener.onMerge(DefaultMergeEventListener.java:76)
at org.hibernate.internal.SessionImpl.fireMerge(SessionImpl.java:913)
at org.hibernate.internal.SessionImpl.merge(SessionImpl.java:897)
at org.hibernate.internal.SessionImpl.merge(SessionImpl.java:901)
In such a scenario, could you please suggest what kind of timeout property is preferable for the connection cache?
As per the JDBC documentation, we came across the following properties (a configuration sketch follows the reference link below); please help:
InactivityTimeout
TimeToLiveTimeout
AbandonedConnectionTimeout
Ref: http://docs.oracle.com/cd/B14117_01/java.101/b10979/conncache.htm#CDEBCBJC
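For context, here is a minimal sketch of how these cache properties could be applied through the (now de-supported) Implicit Connection Cache, assuming an OracleDataSource configured in code. The property names come from the documentation linked above; the URL, credentials and timeout values are placeholders, not recommendations:
import java.sql.SQLException;
import java.util.Properties;
import oracle.jdbc.pool.OracleDataSource;

public class IccCacheConfig {
    public static OracleDataSource createCachedDataSource() throws SQLException {
        OracleDataSource ods = new OracleDataSource();
        ods.setURL("jdbc:oracle:thin:@//dbhost:1521/SERVICE"); // placeholder URL
        ods.setUser("app_user");                               // placeholder credentials
        ods.setPassword("app_password");
        ods.setConnectionCachingEnabled(true);                 // turn on the Implicit Connection Cache

        Properties cacheProps = new Properties();
        cacheProps.setProperty("MinLimit", "2");
        cacheProps.setProperty("MaxLimit", "20");
        // Close physical connections that sit idle in the cache longer than this (seconds)
        cacheProps.setProperty("InactivityTimeout", "300");
        // Limit how long a checked-out connection may stay borrowed before it is reclaimed (seconds)
        cacheProps.setProperty("TimeToLiveTimeout", "1800");
        // Reclaim borrowed connections that have shown no SQL activity for this long (seconds)
        cacheProps.setProperty("AbandonedConnectionTimeout", "300");
        // Give up after 30 seconds instead of waiting forever when the cache is exhausted
        cacheProps.setProperty("ConnectionWaitTimeout", "30");
        ods.setConnectionCacheProperties(cacheProps);
        return ods;
    }
}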

Please use Oracle Universal Connection Pool (UCP) for Java, the replacement for the Implicit Connection Cache (ICC), which has been de-supported in Oracle Database 12c. The documentation can be found on OTN.
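In case it helps, a minimal UCP sketch with the equivalent timeout settings; the URL, credentials, pool sizes and timeout values below are placeholders to adapt, not recommendations:
import java.sql.SQLException;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class UcpConfig {
    public static PoolDataSource createPool() throws SQLException {
        PoolDataSource pds = PoolDataSourceFactory.getPoolDataSource();
        pds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        pds.setURL("jdbc:oracle:thin:@//dbhost:1521/SERVICE");  // placeholder URL
        pds.setUser("app_user");                                 // placeholder credentials
        pds.setPassword("app_password");
        pds.setInitialPoolSize(2);
        pds.setMaxPoolSize(20);
        // UCP counterparts of the ICC timeouts above (all in seconds)
        pds.setInactiveConnectionTimeout(300);    // close connections idle in the pool too long
        pds.setTimeToLiveConnectionTimeout(1800); // reclaim connections borrowed for too long
        pds.setAbandonedConnectionTimeout(300);   // reclaim borrowed connections with no activity
        pds.setConnectionWaitTimeout(30);         // fail fast instead of blocking on an empty pool
        return pds;
    }
}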

Related

ConnectionPool: pool is empty - increase either maxPoolSize or borrowConnectionTimeout

I was facing this issue in my Spring Boot application, which connects to a DB and to MQ and uses the Atomikos transaction manager.
com.atomikos.jms.AtomikosJMSException|Connection pool exhausted - try increasing 'maxPoolSize' and/or 'borrowConnectionTimeout' on the AtomikosConnectionFactoryBean.
com.atomikos.datasource.pool.PoolExhaustedException: ConnectionPool: pool is empty - increase either maxPoolSize or borrowConnectionTimeout
at com.atomikos.datasource.pool.ConnectionPool.waitForAtLeastOneAvailableConnection(ConnectionPool.java:326)
at com.atomikos.datasource.pool.ConnectionPool.findOrWaitForAnAvailableConnection(ConnectionPool.java:144)
at com.atomikos.datasource.pool.ConnectionPool.borrowConnection(ConnectionPool.java:132)
at com.atomikos.datasource.pool.ConnectionPoolWithSynchronizedValidation.borrowConnection(ConnectionPoolWithSynchronizedValidation.java:23)
at com.atomikos.jms.AtomikosConnectionFactoryBean.createConnection(AtomikosConnectionFactoryBean.java:601)
at org.springframework.jms.support.JmsAccessor.createConnection(JmsAccessor.java:196)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.access$100(AbstractPollingMessageListenerContainer.java:77)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer$MessageListenerContainerResourceFactory.createConnection(AbstractPollingMessageListenerContainer.java:490)
at org.springframework.jms.connection.ConnectionFactoryUtils.doGetTransactionalSession(ConnectionFactoryUtils.java:325)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:281)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:245)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:1189)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.executeOngoingLoop(DefaultMessageListenerContainer.java:1179)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:1076)
at java.lang.Thread.run(Thread.java:748)
I tried printing the maxPoolSize and found that it was 1. I came across this page (https://www.atomikos.com/Documentation/ConfiguringJms) and found the line where they increased the maxPoolSize to 5. I just tried setting it to 2 and it worked.
AtomikosConnectionFactoryBean xaConnectionFactory = new AtomikosConnectionFactoryBean();
xaConnectionFactory.setXaConnectionFactory(ibmMQXAConnectionFactory);
xaConnectionFactory.setMaxPoolSize(2);
Can someone help me understand what the ideal pool size should be, and what it is for?
In order to process messages, Atomikos uses DB and JMS connections (in your case).
These connections are taken from pools of available connections. To get an idea of why connection pools are needed, follow this link as a starting point - Connection_pool
To put it simply: to process one message at a time, Atomikos needs one DB and one JMS connection/session. So if you plan to process 10 messages in parallel, each connection pool must be sized at least 10 (10 for the DB pool and 10 for the JMS pool respectively).
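A minimal sketch of sizing both Atomikos pools consistently with the expected concurrency; ibmMQXAConnectionFactory and oracleXaProps are assumed to be provided elsewhere, and CONCURRENT_MESSAGES is a placeholder for however many messages you plan to process in parallel:
import java.util.Properties;
import com.atomikos.jdbc.AtomikosDataSourceBean;
import com.atomikos.jms.AtomikosConnectionFactoryBean;

public class AtomikosPoolConfig {
    // Placeholder: how many messages should be processed in parallel
    private static final int CONCURRENT_MESSAGES = 10;

    public static AtomikosConnectionFactoryBean jmsPool(javax.jms.XAConnectionFactory ibmMQXAConnectionFactory) {
        AtomikosConnectionFactoryBean cf = new AtomikosConnectionFactoryBean();
        cf.setUniqueResourceName("mqXa");
        cf.setXaConnectionFactory(ibmMQXAConnectionFactory);
        cf.setMaxPoolSize(CONCURRENT_MESSAGES);   // one JMS session per in-flight message
        cf.setBorrowConnectionTimeout(30);        // seconds to wait before PoolExhaustedException
        return cf;
    }

    public static AtomikosDataSourceBean dbPool(Properties oracleXaProps) {
        AtomikosDataSourceBean ds = new AtomikosDataSourceBean();
        ds.setUniqueResourceName("oracleXa");
        ds.setXaDataSourceClassName("oracle.jdbc.xa.client.OracleXADataSource");
        ds.setXaProperties(oracleXaProps);        // URL, user, password, etc.
        ds.setMaxPoolSize(CONCURRENT_MESSAGES);   // match the JMS side
        ds.setBorrowConnectionTimeout(30);
        return ds;
    }
}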

Can MAX_UTILIZATION for PROCESSES being reached cause an "Unable to get managed connection" exception?

A JBoss 5.2 application server log was filled with thousands of the following exception:
Caused by: javax.resource.ResourceException: Unable to get managed connection for jdbc_TestDB
at org.jboss.resource.connectionmanager.BaseConnectionManager2.getManagedConnection(BaseConnectionManager2.java:441)
at org.jboss.resource.connectionmanager.TxConnectionManager.getManagedConnection(TxConnectionManager.java:424)
at org.jboss.resource.connectionmanager.BaseConnectionManager2.allocateConnection(BaseConnectionManager2.java:496)
at org.jboss.resource.connectionmanager.BaseConnectionManager2$ConnectionManagerProxy.allocateConnection(BaseConnectionManager2.java:941)
at org.jboss.resource.adapter.jdbc.WrapperDataSource.getConnection(WrapperDataSource.java:96)
... 9 more
Caused by: javax.resource.ResourceException: No ManagedConnections available within configured blocking timeout ( 30000 [ms] )
at org.jboss.resource.connectionmanager.InternalManagedConnectionPool.getConnection(InternalManagedConnectionPool.java:311)
at org.jboss.resource.connectionmanager.JBossManagedConnectionPool$BasePool.getConnection(JBossManagedConnectionPool.java:689)
at org.jboss.resource.connectionmanager.BaseConnectionManager2.getManagedConnection(BaseConnectionManager2.java:404)
... 13 more
I've stripped off the first part of the exception, which is basically our internal JDBC wrapper code which tries to get a DB connection from the pool.
Looking at the Oracle DB side I ran the query:
select resource_name, current_utilization, max_utilization, limit_value
from v$resource_limit
where resource_name in ('sessions', 'processes');
This produced the output:
RESOURCE_NAME CURRENT_UTILIZATION MAX_UTILIZATION LIMIT_VALUE
processes 1387 1500 1500
sessions 1434 1586 2272
Given that the PROCESSES limit of 1500 was reached, would this cause the JBoss exceptions we experienced? I've also been investigating the possibility of connection leaks, but haven't found any evidence of that so far.
What is the recommended course of action here? Is simply increasing the limit a valid solution?
Usually, when MAX_UTILIZATION reaches the PROCESSES limit, the listener will refuse new connections to the database. You can see the related errors in the alert log. To solve this on the database side, you should increase the PROCESSES parameter.
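A DBA would usually do this in SQL*Plus, but purely as a sketch of the statement involved, here it is run over JDBC from a privileged account; the value 2000, the URL and the credentials are placeholders, and the change only takes effect after the instance is restarted because it is written to the SPFILE:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RaiseProcessesLimit {
    public static void main(String[] args) throws Exception {
        // Requires ALTER SYSTEM privilege; URL and credentials are placeholders
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/SERVICE", "system", "change_me");
             Statement stmt = con.createStatement()) {
            // Written to the SPFILE only; restart the instance for it to take effect.
            // SESSIONS is normally derived from PROCESSES, so it usually follows along.
            stmt.execute("ALTER SYSTEM SET processes=2000 SCOPE=SPFILE");
        }
    }
}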
Hmm, strange. Is it possible that exception wrapping in JBoss hides the original error? You should get some SQLException whose text starts with ORA-. Maybe your JDBC wrapper does not handle errors properly.
The recommended actions are to:
check the configured size of the connection pool against the PROCESSES and SESSIONS Oracle startup parameters;
check Oracle's view v$session, especially the columns STATUS, LAST_CALL_ET, SQL_ID and PREV_SQL_ID;
translate sql_id (and prev_sql_id) into sql_text via v$sql - a query sketch follows this list.
If your application has a connection leak, sql_id and prev_sql_id might point you to the place in your source code where a connection was last used (i.e. where it was leaked).
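A rough sketch of pulling those columns together over JDBC, assuming the monitoring account can read V$SESSION and V$SQL; the URL and credentials are placeholders:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class SessionLeakReport {
    public static void main(String[] args) throws Exception {
        String sql =
            "SELECT s.sid, s.status, s.last_call_et, s.program, s.machine, q.sql_text " +
            "FROM v$session s LEFT JOIN v$sql q " +
            "  ON q.sql_id = NVL(s.sql_id, s.prev_sql_id) AND q.child_number = 0 " +
            "WHERE s.type = 'USER' " +
            "ORDER BY s.last_call_et DESC";
        try (Connection con = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/SERVICE", "monitor_user", "monitor_pwd"); // placeholders
             PreparedStatement ps = con.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // last_call_et = seconds since the session last did any work
                System.out.printf("sid=%d status=%s idle=%ds program=%s sql=%s%n",
                        rs.getInt("sid"), rs.getString("status"), rs.getLong("last_call_et"),
                        rs.getString("program"), rs.getString("sql_text"));
            }
        }
    }
}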

Hibernate - Getting exception : maximum number of processes (550) exceeded

I am using Hibernate 3 along with Spring. My Hibernate configuration is as follows:
hibernate.dialect=org.hibernate.dialect.Oracle8iDialect
hibernate.connection.release_mode=on_close
But after starting the application, even if only one user accesses it, I am getting this exception:
ORA-00020: maximum number of processes (550) exceeded
This is the stack trace:
Caused by: java.sql.SQLException: ORA-00020: maximum number of processes (550) exceeded
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:743)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:216)
at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:799)
at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1038)
at oracle.jdbc.driver.T4CPreparedStatement.executeMaybeDescribe(T4CPreparedStatement.java:839)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1133)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3285)
at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3329)
at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeQuery(NewProxyPreparedStatement.java:76)
at org.hibernate.jdbc.AbstractBatcher.getResultSet(AbstractBatcher.java:208)
at org.hibernate.loader.Loader.getResultSet(Loader.java:1953)
at org.hibernate.loader.Loader.doQuery(Loader.java:802)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:274)
at org.hibernate.loader.Loader.loadEntity(Loader.java:2037)
I have set the connection pool timeout to 5000. I have also tried to find the cause and learned that the release mode may affect the mechanism of closing DB resources, but I couldn't find an exact solution for that.
Please help..
Thanks in advance..
This is a database error, not an application error, so you need to go to the database to solve it. 550 processes is a lot more than it sounds, so either someone has gone insane or you have a lot of inactive processes running.
The best way to find out is to query the v$session view (or gv$session if you're using RAC) and look at the STATUS column.
Take careful note of where all these sessions are coming from; the OSUSER, TERMINAL and PROGRAM columns will probably be the most useful. It might almost be worth creating a temporary table with this information, as proof and a record afterwards. Then, after checking that you're not going to break anything, and with your DBAs if you have any, kill all the inactive sessions, simultaneously or one at a time.
That'll remove the error, but if it has occurred once it can occur again, so you need to solve it. Either:
You've got a lot of people using the database.
There is an application or program somewhere that is not closing its sessions after it has finished.
Someone is connecting in the middle of a loop.
Whichever reason it is, you need to track it down and correct it. I'd start with the program or terminal from v$session that has the highest number of sessions (a query sketch follows).
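A cautious sketch, assuming an account that can read V$SESSION: it only prints ALTER SYSTEM KILL SESSION commands for long-idle inactive sessions of one program, so a DBA can review them before running anything; the program name, idle threshold, URL and credentials are placeholders:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class InactiveSessionKillScript {
    public static void main(String[] args) throws Exception {
        String sql =
            "SELECT sid, serial#, osuser, terminal, program, last_call_et " +
            "FROM v$session " +
            "WHERE type = 'USER' AND status = 'INACTIVE' " +
            "  AND program = ? AND last_call_et > ? " +
            "ORDER BY last_call_et DESC";
        try (Connection con = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/SERVICE", "monitor_user", "monitor_pwd"); // placeholders
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, "JDBC Thin Client");  // placeholder program name from v$session
            ps.setInt(2, 3600);                   // idle for more than an hour
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // Print the statement only; let a DBA review and run it
                    System.out.printf("ALTER SYSTEM KILL SESSION '%d,%d'; -- %s@%s, idle %ds%n",
                            rs.getInt("sid"), rs.getInt("serial#"),
                            rs.getString("osuser"), rs.getString("terminal"),
                            rs.getLong("last_call_et"));
                }
            }
        }
    }
}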

Weblogic Stuck Thread on JDBC call

We frequently get a series of stuck threads on our WebLogic servers. I've analyzed this over a period of time.
What I'd like to understand is whether this stuck-thread stack indicates that the thread is still reading data from the open socket to the database, given that the queries are simple SELECT statements.
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at oracle.net.ns.Packet.receive(Packet.java:239)
at oracle.net.ns.DataPacket.receive(DataPacket.java:92)
We've run netstat and other commands; the sockets from the WebLogic app server to the database match the number of connections in the pool.
Any ideas what else we should be investigating here?
Stack trace of thread dump:
"[STUCK] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)'" daemon prio=10 tid=0x61a5b000 nid=0x25f runnable [0x6147b000..0x6147eeb0]
java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at oracle.net.ns.Packet.receive(Packet.java:239)
at oracle.net.ns.DataPacket.receive(DataPacket.java:92)
at oracle.net.ns.NetInputStream.getNextPacket(NetInputStream.java:172)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:117)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:92)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:77)
at oracle.jdbc.driver.T4CMAREngine.unmarshalUB1(T4CMAREngine.java:1023)
at oracle.jdbc.driver.T4CMAREngine.unmarshalSB1(T4CMAREngine.java:999)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:584)
at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:183)
at oracle.jdbc.driver.T4CStatement.fetch(T4CStatement.java:1000)
at oracle.jdbc.driver.OracleResultSetImpl.close_or_fetch_from_next(OracleResultSetImpl.java:314)
- locked <0x774546e0> (a oracle.jdbc.driver.T4CConnection)
at oracle.jdbc.driver.OracleResultSetImpl.next(OracleResultSetImpl.java:228)
- locked <0x774546e0> (a oracle.jdbc.driver.T4CConnection)
at weblogic.jdbc.wrapper.ResultSet_oracle_jdbc_driver_OracleResultSetImpl.next(Unknown Source)
The bit starting from weblogic.work.ExecuteThread.run to here has been omitted. We have 8 sets of thread dumps, and each shows the thread waiting on the same line, with the same object locked:
at oracle.jdbc.driver.OracleResultSetImpl.close_or_fetch_from_next(OracleResultSetImpl.java:314)
- locked <0x774546e0> (a oracle.jdbc.driver.T4CConnection)
At the time the stack was printed, the thread seems blocked waiting for more data from the server:
at oracle.jdbc.driver.OracleResultSetImpl.next(OracleResultSetImpl.java:228)
Maybe it is just the query taking longer than StuckThreadMaxTime, and WebLogic issues a warning (a sketch of bounding such reads with JDBC timeouts follows the list below).
If possible I would try to:
Find which query or queries are getting the threads stuck and check their execution time.
Use Wireshark to analyze the communication with the database.
Have a look at the driver source code (the JD decompiler comes to mind) to understand the stack trace.
If you use the WebLogic debug flag -Dweblogic.debug.DebugJDBCSQL you will be able to trace the SQL which is actually being executed.
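Not something the answer above prescribes, but if the threads really are sitting in socketRead0, one common mitigation is to bound the wait. A sketch assuming the Oracle thin driver; the oracle.jdbc.ReadTimeout value, the query timeout, the URL and the credentials are examples to adapt, not recommendations:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class BoundedJdbcRead {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "app_user");          // placeholder credentials
        props.setProperty("password", "app_password");
        // Socket read timeout in milliseconds: the driver gives up on a silent server
        // instead of sitting in socketRead0 forever.
        props.setProperty("oracle.jdbc.ReadTimeout", "300000");

        try (Connection con = DriverManager.getConnection(
                 "jdbc:oracle:thin:@//dbhost:1521/SERVICE", props);   // placeholder URL
             Statement stmt = con.createStatement()) {
            // Per-statement cap: long-running queries are cancelled after 60 seconds
            stmt.setQueryTimeout(60);
            try (ResultSet rs = stmt.executeQuery("SELECT 1 FROM dual")) {
                while (rs.next()) {
                    System.out.println(rs.getInt(1));
                }
            }
        }
    }
}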

Timeout error trying to lock table in h2

I get the following error under a certain scenario:
while a different thread is populating a lot of users via the bulk upload operation, I try to view the list of all users on a different web page. The list query throws the following timeout error. Is there a way to configure this timeout so that I can avoid the error?
Env: H2 (latest), Hibernate 3.3.x
Caused by: org.h2.jdbc.JdbcSQLException: Timeout trying to lock table "USER"; SQL statement:
[50200-144]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:327)
at org.h2.message.DbException.get(DbException.java:167)
at org.h2.message.DbException.get(DbException.java:144)
at org.h2.table.RegularTable.doLock(RegularTable.java:482)
at org.h2.table.RegularTable.lock(RegularTable.java:416)
at org.h2.table.TableFilter.lock(TableFilter.java:139)
at org.h2.command.dml.Select.queryWithoutCache(Select.java:571)
at org.h2.command.dml.Query.query(Query.java:257)
at org.h2.command.dml.Query.query(Query.java:227)
at org.h2.command.CommandContainer.query(CommandContainer.java:78)
at org.h2.command.Command.executeQuery(Command.java:132)
at org.h2.server.TcpServerThread.process(TcpServerThread.java:278)
at org.h2.server.TcpServerThread.run(TcpServerThread.java:137)
at java.lang.Thread.run(Thread.java:619)
at org.h2.engine.SessionRemote.done(SessionRemote.java:543)
at org.h2.command.CommandRemote.executeQuery(CommandRemote.java:152)
at org.h2.jdbc.JdbcPreparedStatement.executeQuery(JdbcPreparedStatement.java:96)
at org.jboss.resource.adapter.jdbc.WrappedPreparedStatement.executeQuery(WrappedPreparedStatement.java:342)
at org.hibernate.jdbc.AbstractBatcher.getResultSet(AbstractBatcher.java:208)
at org.hibernate.loader.Loader.getResultSet(Loader.java:1808)
at org.hibernate.loader.Loader.doQuery(Loader.java:697)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:259)
at org.hibernate.loader.Loader.doList(Loader.java:2228)
... 125 more
Yes, you can change the lock timeout. The default is relatively low: 1 second (1000 ms).
In many cases the problem is that another connection has locked the table, and using multi-version concurrency also solves the problem (append ;MVCC=true to the database URL).
EDIT: the MVCC=true parameter is no longer supported, because since H2 1.4.200 it is always enabled for the MVStore engine, which is the default engine.
I faced much the same problem, and using the parameter "MVCC=true" solved it. You can find more explanation of this parameter in the H2 documentation here: http://www.h2database.com/html/advanced.html#mvcc
I'd like to suggest that if you are getting this error, perhaps you should not be using a single transaction for your bulk database operation. Consider instead using a transaction for each individual update: does it make sense to think of an entire bulk import as one transaction? Probably not. If it does, then yes, MVCC=true or a bigger lock timeout is a reasonable solution.
However, I think in most cases you are seeing this error because you are performing a very long transaction without being aware of it. This was certainly the case for me; I simply took more care over how I was writing records (either using no transactions or using smaller transactions), and the lock timeout issue was resolved. A batching sketch follows.
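A minimal sketch of the smaller-transactions idea with a plain Hibernate session; the batch size of 50 is a placeholder, and the entities list stands in for whatever objects the bulk upload produces:
import java.util.List;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class BatchedBulkImport {
    private static final int BATCH_SIZE = 50; // placeholder batch size

    public static void importAll(SessionFactory sessionFactory, List<?> entities) {
        Session session = sessionFactory.openSession();
        try {
            Transaction tx = session.beginTransaction();
            for (int i = 0; i < entities.size(); i++) {
                session.save(entities.get(i));
                if ((i + 1) % BATCH_SIZE == 0) {
                    // Commit and start a new, short transaction so the table
                    // is never locked for the duration of the whole bulk upload
                    tx.commit();
                    session.clear();           // keep the first-level cache small
                    tx = session.beginTransaction();
                }
            }
            tx.commit();                        // commit the final partial batch
        } finally {
            session.close();
        }
    }
}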
For those having this issue with integration tests (i.e. the server is accessing the H2 DB and an integration test is accessing the DB before calling the server, to prepare the test), adding a 'commit' to the script executed before the test makes sure that the data is in the database before the server is called (without MVCC=true, which I find a bit 'weird' if it is not enabled by default).
I had MVCC=true in my connection string but was still getting the error above. I added ;DEFAULT_LOCK_TIMEOUT=10000;LOCK_MODE=0 and the problem was solved.
I got this issue with the Play Framework:
JPAQueryException occurred: Error while executing query from models.Page where name = ?: Timeout trying to lock table "PAGE"
It ended up being an infinite loop of sorts, because I had a
@Before
without an unless, which caused the function to call itself repeatedly. It should have been:
@Before(unless="getUser")
Working with DBUnit, H2 and Hibernate - same error, MVCC=true helped, but I would still get the error for any tests following deletion of data. What fixed these cases was wrapping the actual deletion code inside a transaction:
Transaction tx = session.beginTransaction();
// ... delete stuff
tx.commit();
From a 2020 user, see reference
Basically, the reference says:
Sets the lock timeout (in milliseconds) for the current session. The default value for this setting is 1000 (one second).
This command does not commit a transaction, and rollback does not affect it. This setting can be appended to the database URL: jdbc:h2:./test;LOCK_TIMEOUT=10000
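A small sketch of both forms, using a local H2 file database as a placeholder; the 10000 ms value is simply the example from the quote above:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class H2LockTimeoutExample {
    public static void main(String[] args) throws Exception {
        // Form 1: set the lock timeout for every session via the JDBC URL
        try (Connection con = DriverManager.getConnection(
                "jdbc:h2:./test;LOCK_TIMEOUT=10000", "sa", "")) {
            System.out.println("Connected with URL-level LOCK_TIMEOUT");
        }

        // Form 2: set it for the current session only, after connecting
        try (Connection con = DriverManager.getConnection("jdbc:h2:./test", "sa", "");
             Statement stmt = con.createStatement()) {
            stmt.execute("SET LOCK_TIMEOUT 10000"); // milliseconds, current session only
        }
    }
}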
