Universal Connection Pool Memory Management - oracle

I have been trying to upgrade my ojdbc code from ojdbc14-10.2.0.1.0 to ojdbc6-11.1.0.7.0. We had been using OracleConnectionCacheImpl for datasource connections and then moved to the Universal Connection Pool with OracleDataSource at its heart. Here is how we currently have it configured in Spring:
<bean id="myDatasource" class="oracle.ucp.jdbc.PoolDataSourceFactory" factory-method="getPoolDataSource">
<property name="URL" value="#JDBC_URL#"/>
<property name="user" value="#JDBC_USERNAME#"/>
<property name="password" value="#JDBC_PASSWORD#"/>
<property name="connectionFactoryClassName" value="oracle.jdbc.pool.OracleDataSource"/>
<property name="connectionPoolName" value="MFR_RTE_POOL"/>
<property name="minPoolSize" value="5"/>
<property name="maxPoolSize" value="100"/>
<property name="validateConnectionOnBorrow" value="true" />
<property name="connectionWaitTimeout" value="30"/>
<property name="connectionHarvestMaxCount" value="25"/>
<property name="connectionHarvestTriggerCount" value="5"/>
<property name="maxStatements" value="100"/>
</bean>
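For reference, the same pool can also be configured programmatically; here is a minimal sketch of the UCP setup matching the bean above, with the property-placeholder values replaced by hypothetical literals:

```java
import java.sql.SQLException;

import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class UcpConfig {
    public static PoolDataSource createPool() throws SQLException {
        PoolDataSource pds = PoolDataSourceFactory.getPoolDataSource();
        // Hypothetical connection details; substitute your own URL and credentials.
        pds.setURL("jdbc:oracle:thin:@//dbhost:1521/SERVICE");
        pds.setUser("scott");
        pds.setPassword("tiger");
        pds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        pds.setConnectionPoolName("MFR_RTE_POOL");
        pds.setMinPoolSize(5);
        pds.setMaxPoolSize(100);
        pds.setValidateConnectionOnBorrow(true);
        pds.setConnectionWaitTimeout(30);        // seconds to wait for a free connection
        pds.setConnectionHarvestMaxCount(25);
        pds.setConnectionHarvestTriggerCount(5);
        pds.setMaxStatements(100);               // statement cache size
        return pds;
    }
}
```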
It took a bit to get it running without closed-connection errors, but now I have a memory-management issue. I have run JConsole against an application that uses a thread pool: it uses ThreadPoolExecutors to create fee requests from the data passed in a file, and a file can have hundreds of thousands of fee requests. My issue is that long-term (old generation) heap memory is filling up and objects are never released. In the performance test I have set up, the old generation fills in about 20-25 minutes and is never freed. The application eventually hits the "GC overhead limit exceeded" error and grinds to a halt.
When I run the same test using the old OracleConnectionCacheImpl class, it just runs with no problem. Granted, the thread pool and all accompanying code were written against an older version of Spring (1.2.6) and the old ojdbc driver, but is there really that big a difference between how OracleConnectionCacheImpl works and Universal Connection Pooling? Am I looking at rewriting my domain model if I want to accommodate the latest versions of Oracle's JDBC driver code? I tried a plain OracleDataSource connection and it failed miserably with NullPointerExceptions after working on several files concurrently. I then went to UCP (at the suggestion of another post in this forum), which works fine in all but one application. At this point I'm trying to figure out whether I can further optimize the Spring config bean for my datasource, or whether I need to start thinking about upgrading the code base. As stated previously, this code runs very well against the old ojdbc class, but I have had issues every step of the way trying to implement UCP. I'm starting to wonder if it's even worth upgrading.

This problem had bugged me for months; I hope what I came up with helps someone else out there.
I did finally figure out a solution to my issue. Instead of using OracleDataSource as the connection factory:
<property name="connectionFactoryClassName" value="oracle.jdbc.pool.OracleDataSource"/>
I would suggest trying OracleConnectionPoolDataSource:
<property name="connectionFactoryClassName" value="oracle.jdbc.pool.OracleConnectionPoolDataSource"/>
OracleConnectionPoolDataSource extends OracleDataSource and seems to do better in applications where multiple connections need to be opened by multiple resources. In my case, the application processes multiple batch files: the same SQL runs over and over, but the application needs a new connection for each new file. Under these circumstances OracleDataSource often produced failed-connection errors of some sort (e.g. SQLException: closed connection, NullPointerException: connection closed, with or without UCP) and led to garbage-collection problems (the old generation would fill up and GC would ultimately fail, no matter how much memory I added to the JVM).
I found OracleDataSource to work well in applications that do not do a lot of batch processing. For instance, another application I have is also a file processor, but it only works on one file at a time; OracleDataSource works great in that circumstance. It also seems to work well for web applications: we installed OracleDataSource in a web app nine months ago and it has had no issues.
I'm sure there are ways to make OracleDataSource work as well as OracleConnectionPoolDataSource, but this is what worked for me.

Related

How to check DB connections and make reconnect in case of need?

I have an application that connects to the DB using scan address. There are two identical Oracle instances of DB under this scan address. If one instance is down for some reason or crashes the app should recover automatically and try to connect to the other working instance of the DB.
I use HikariCP, Hibernate, jdbc:thin client and Oracle DB.
Which of these should I force to check whether the connection to the DB is alive and, if it's not, to establish a new one?
The scan address above the two database instances works fine. When my app crashes and I refresh the page, it automatically creates a pool of connections on the other working DB, but I need this to happen automatically, without a manual refresh.
I use Hikari xml configuration:
<bean id="dataSource" class="com.zaxxer.hikari.HikariDataSource" destroy-method="close">
<property name="driverClassName" value="${jdbc.driverClassName}"/>
<property name="jdbcUrl" value="${jdbc.url}"/>
<property name="username" value="${jdbc.username}"/>
<property name="password" value="${jdbc.password}"/>
<property name="minimumIdle" value="${jdbc.minPoolSize:0}" />
<property name="maximumPoolSize" value="${jdbc.maxPoolSize:10}" />
<property name="idleTimeout" value="${jdbc.maxIdleTimeSeconds:10000}" />
</bean>
There are various things to consider:
HikariCP checks every connection before handing it to the application, unless fewer than 10 seconds have passed since it was last borrowed.
Oracle's JDBC drivers implement Application Continuity (AC): while a transaction is open, the driver remembers the inserts/updates sent to the database. When you lose the connection, the driver transparently reconnects and replays the SQL. This feature has some limitations, but in basic situations data is not lost even when the transaction has not yet been committed.
Oracle introduced a few handy features in the JDBC 4.x standards. One of them is Connection.isValid(). In Oracle's case it does the same thing as OCIPing: it exchanges a single round-trip packet between the application and the DB server. Executing a particular validation SQL on each connection borrow belongs to the past. Just execute your JDBC driver jar and it will tell you which JDBC version it supports:
$ java -jar ojdbc8.jar
Oracle 12.2.0.1.0 JDBC 4.2 compiled with javac 1.8.0_91 on Tue_Dec_13_06:08:31_PST_2016
#Default Connection Properties Resource
#Mon Jun 10 20:13:17 CEST 2019
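As a sketch of how Connection.isValid() might be used directly (the timeout value here is an arbitrary choice, and the helper name is hypothetical):

```java
import java.sql.Connection;
import java.sql.SQLException;

public final class ConnectionCheck {
    private ConnectionCheck() {}

    /** Returns true if the connection answers a lightweight driver-level ping
     *  within the given number of seconds; no validation SQL is needed. */
    public static boolean isAlive(Connection conn, int timeoutSeconds) {
        try {
            return conn != null && conn.isValid(timeoutSeconds);
        } catch (SQLException e) {
            // isValid only throws for a negative timeout; treat as dead.
            return false;
        }
    }
}
```

A pool such as HikariCP calls isValid() for you when no connectionTestQuery is configured, so application code rarely needs to do this itself.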
Aside from HikariCP you can also use Oracle's UCP. This connection-pool implementation can receive events from a RAC cluster, can load-balance connections between nodes, and can close connections to a particular DB node when requested. You can even have zero-downtime DB server patching when it is configured properly.
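If you go the UCP route, the RAC event handling mentioned above is enabled on the pool itself via Fast Connection Failover. A minimal sketch, assuming hypothetical SCAN and ONS endpoints:

```java
import java.sql.SQLException;

import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class UcpFailoverConfig {
    public static PoolDataSource createFailoverPool() throws SQLException {
        PoolDataSource pds = PoolDataSourceFactory.getPoolDataSource();
        pds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        // Hypothetical SCAN address in front of the two RAC instances.
        pds.setURL("jdbc:oracle:thin:@//scan-host:1521/MYSERVICE");
        pds.setUser("app");
        pds.setPassword("secret");
        // Subscribe to RAC up/down events so connections to a dead node are
        // purged immediately instead of being discovered on borrow.
        pds.setFastConnectionFailoverEnabled(true);
        pds.setONSConfiguration("nodes=rac-node1:6200,rac-node2:6200");
        pds.setValidateConnectionOnBorrow(true);
        return pds;
    }
}
```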

What is the difference : BasicDataSource and JOnASDataBaseManagerService

We have a web site using Spring.
I found in the code two ways of connecting to the Oracle database; both use a bean called phareDataSource:
1st method :
<bean id="phareDataSource"
class="org.apache.commons.dbcp.BasicDataSource"
destroy-method="close">
<property name="driverClassName"
value="oracle.jdbc.driver.OracleDriver" />
<property name="url"
value="${hibernate.connection.url}" />
<property name="username" value="${hibernate.connection.username}" />
<property name="password" value="${hibernate.connection.password}" />
</bean>
and 2nd method :
<bean id="phareDataSource"
class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiName">
<value>jdbc/PHARE</value>
</property>
</bean>
In Jonas Directory : jonas.properties
jonas.service.dbm.class org.ow2.jonas.dbm.internal.JOnASDataBaseManagerService
jonas.service.dbm.datasources PHARERH
PHARERH.properties :
datasource.name jdbc/PHARE
datasource.url jdbc\:oracle\:thin\:#blabla\:1521\:R59A001
datasource.classname oracle.jdbc.driver.OracleDriver
datasource.username bla
datasource.password bla
The first or the second method is picked when we are building (Maven active profile).
The first uses a simple configuration file; the second uses a configuration file located in the JOnAS conf directory.
Since we use Tomcat in our dev environment, we picked the first method.
But in our int and prod environments, where JOnAS is installed, shouldn't we use the second method?
Is it better in performance?
Both will use a JDBC connection pool to manage your connections so I would not expect there to be a major difference in performance.
Method 1 doesn't use any of the JOnAS features to create or manage the JDBC connections and pool. The same configuration will work outside of the application server, which is useful for testing without having to deploy to the application server.
Method 2 will use the JDBC connection pool configured and managed by JOnAS (the application server).
Typically, if you have made the decision to go with an application container (e.g. JOnAS, JBoss, WebSphere, etc.), it is usually a good idea to let the container manage the resources that you use. That's what they are designed to do, and you may require some of the advanced features for managing the configured resources. There is also the benefit that your application doesn't have to know which implementation/driver is being used (or which username/password), so you can deploy your EAR/WAR to different application servers without having to change your application configuration. These details are configured in the server instance instead.
If using Method 1 inside an application server, the server will have no control over the threads being created, because it knows nothing about them.
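Method 2's JndiObjectFactoryBean boils down to a plain JNDI lookup; a sketch of what the container-managed path looks like without Spring, reusing the JNDI name from the example above:

```java
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class ContainerDataSourceLookup {
    public static DataSource lookup() throws NamingException {
        InitialContext ctx = new InitialContext();
        // The container (JOnAS here) binds the pool it manages under this
        // name; the application never sees the driver class or credentials.
        return (DataSource) ctx.lookup("jdbc/PHARE");
    }
}
```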

DB connections not closing in OSGi

I have an OSGi bundle which needs to persist data in a database. As described in a previous stackoverflow question I have found that in order for transactions to work as expected I need to use an XADataSource to connect to the database. When I do so however I see that the connections to the database that are opened by my application are never closed, which of course results in the database not being able to accept any more connections after a while.
My setup is the following:
I have a bundle which creates the datasource and which only includes a blueprint.xml file with the following content
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
<bean id="dataSource" class="com.mysql.jdbc.jdbc2.optional.MysqlDataSource">
<property name="url" value="jdbc:mysql://localhost:3306/myschema"/>
<property name="user" value="user"/>
<property name="password" value="pass"/>
</bean>
<service interface="javax.sql.XADataSource" ref="dataSource">
<service-properties>
<entry key="osgi.jndi.service.name" value="jdbc/mysqlds"/>
</service-properties>
</service>
</blueprint>
Then in the bundle that persists my data I have a persistence.xml
<persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
<persistence-unit name="mypu" transaction-type="JTA">
<jta-data-source>osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=jdbc/mysqlds)
</jta-data-source>
</persistence-unit>
</persistence>
And I specify that my service methods should run in a transaction in my blueprint.xml
<bean id="MyServiceImpl"
class="com.test.impl.MyServiceImpl">
<jpa:context property="em" unitname="mypu" />
<tx:transaction method="*" value="Required" />
</bean>
<service id="MyService" ref="MyServiceImpl" interface="com.test.api.MyService" />
I deploy my bundles in Karaf, using Aries and OpenJPA for persistence; I have also deployed the Aries transaction wrapper bundle (org.apache.aries.transaction.wrappers) in order to enlist my XA resources with the transaction manager.
Any ideas what I am missing in my configuration?
Edit:
After some more searching I found this DBCP issue which suggests that the problem I'm having is a bug of DBCP with MySQL. However I'm at a loss on how to replace DBCP with some other Connection Pool implementation OpenJPA can work with. Any suggestions are more than welcome.
I used commons-dbcp to have a connection pool that also enlists XA Connections with the following configuration:
<bean id="myXAEnabledConnectionPoolDataSource" class="org.apache.commons.dbcp.managed.BasicManagedDataSource" destroy-method="close">
<property name="xaDataSourceInstance" ref="mysqlXADataSourceBean" />
<property name="transactionManager" ref="transactionManager" />
</bean>
You can get the transaction manager as a reference based on the interface javax.transaction.TransactionManager.
In this way commons-dbcp will handle the lifecycle of the connections properly. Please note that the destroy method is there so that the connection pool is closed when the blueprint container stops.
Edit:
1-2 years ago I had the same problem, but with PostgreSQL. I debugged aries.transaction.wrapper a lot at the time, but I cannot remember exactly why I left it out. I think the motivation was that commons-dbcp is a solution that had worked for me in previous projects, while I could not fix aries.transaction.wrapper even after analyzing its code extensively.
Please note that MysqlDataSource is not a connection pool: it always gives back a new connection when you need one. It is also not XA-enabled. MysqlXADataSource is XA-enabled, so you should probably instantiate an object of that class instead. However, an XADataSource is only responsible for giving you XAConnections, not for enlisting them. That is where a managed connection pool can help. A managed connection pool does the following:
Wraps all provided Connection objects with a custom managed connection class
In case close() is called on a connection, it is not closed if there is an ongoing transaction; it is only closed (returned to the pool) when the transaction commits or rolls back
In case a connection is requested from the pool and a connection was already handed out within the same transaction, that same connection will be returned
Sometimes JDBC drivers provide connection pools and even managed connection pools; however, it is better to use the JDBC driver only to get new connections and wrap it with a third-party library that has been tested in several projects and is known to work.
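The blueprint bean above can equally be wired in plain Java; a sketch, assuming the MySQL XA datasource and the transaction manager are obtained elsewhere (e.g. as OSGi service references):

```java
import javax.sql.XADataSource;
import javax.transaction.TransactionManager;

import org.apache.commons.dbcp.managed.BasicManagedDataSource;

public class ManagedPoolFactory {
    public static BasicManagedDataSource create(XADataSource xaDs,
                                                TransactionManager tm) {
        BasicManagedDataSource ds = new BasicManagedDataSource();
        ds.setXaDataSourceInstance(xaDs); // hands out managed connection wrappers
        ds.setTransactionManager(tm);     // enlists XAConnections in the transaction
        // Remember to call ds.close() on shutdown, mirroring
        // destroy-method="close" in the blueprint.
        return ds;
    }
}
```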

Why is org.hibernate.impl.SessionFactoryImpl not being garbage collected?

I have a website deployed to a Tomcat server which has been very rapidly using up its available heap space (old generation) and then crashing. When I took a heap dump I found that most (if not all) of the space is taken up by org.hibernate.impl.SessionFactoryImpl instances (802 to be precise, with a retained size of 541 MB), referenced from org.apache.catalina.loader.WebappClassLoader -> java.util.concurrent.ConcurrentHashMap$HashEntry.
About the server
The server is Tomcat 6 proxied by Apache 2 using ProxyPass.
About the software
I am using OpenCms to content-manage the webapp, and as such all the code is called through its template classes.
Hibernate is accessed using Spring's HibernateTemplate. The data source is held in Tomcat, accessed through org.springframework.jndi.JndiObjectFactoryBean, and injected into my datasource (org.apache.commons.dbcp.BasicDataSource); the session factory is configured as below.
<bean id="sessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="annotatedClasses">
<list><value>com.someobjects.SomeObject</value>
</list>
</property>
</bean>
Does any one know why org.hibernate.impl.SessionFactoryImpl is not being garbage collected?
I can provide any further information required... I really am at a loss with this one. Any help is much appreciated.

Validating Connection Before Handing over to WebApp in ConnectionPooling

I have connection pooling implemented in Spring using OracleDataSource. Currently we are facing an issue where connections become invalid after a period of time. (Maybe Oracle is dropping those idle connections after a while.) Here are my questions:
Can the Oracle database be configured to drop idle connections automatically after a specific period of time? Since we expect those connections to lie idle for a while, if any such configuration exists, that may be what is happening.
In our connection-pooling properties in Spring we didn't have the "ValidateConnection" property. I understand that it validates the connection before handing it over to the web application. But does that mean that if a connection passes the ValidateConnection test it will always connect to the database correctly? I ask this, as I read about the following problem here:
http://forum.springsource.org/showthread.php?t=69759
If ValidateConnection doesn't go the whole nine yards of ensuring that the connection is valid, is there any other option, like "testOnBorrow" in DBCP, which runs a test query to ensure the connection is active before handing it over to the webapp?
I'll be grateful if you could provide answers to one or more of the queries listed above.
Cheers
You don't say what application server you are using, or how you are configuring the datasource, so I can't give you specific advice.
Connection validation often sounds like a good idea, but you have to be careful with it. For example, we once used it in our JBoss app servers to validate pooled connections before handing them to the application. This Oracle-proprietary mechanism used the ping() method on the Oracle JDBC driver, which checks that the connection is still alive. It worked fine, but it turns out that ping() executes "SELECT 'x' FROM dual" on the server, which is a surprisingly expensive query when it's run dozens of times per second.
So the moral is, if you have a high-traffic server, be very careful with connection validation, it can actually bring your database server to its knees.
As for DBCP, it has the ability to validate connections as they're borrowed from the pool, as well as when they're returned to the pool, and you can tell it what SQL to send to the database to perform this validation. However, if you're not using DBCP for your connection pooling, that's not much use to you. C3P0 does something similar.
If you're using an app server's data source mechanism, then you have to find out if you can configure that to validate connections, and that's specific to your server.
One last thing: Spring isn't actually involved here. Spring just uses the DataSource that you give it, it's up to the DataSource implementation to perform connection validation.
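For DBCP specifically, the borrow/return validation mentioned above comes down to a few properties; a sketch, assuming the Oracle thin driver and hypothetical connection details, with the validation SQL kept deliberately cheap:

```java
import org.apache.commons.dbcp.BasicDataSource;

public class ValidatedPoolFactory {
    public static BasicDataSource create() {
        BasicDataSource ds = new BasicDataSource();
        // Hypothetical connection details; substitute your own.
        ds.setDriverClassName("oracle.jdbc.driver.OracleDriver");
        ds.setUrl("jdbc:oracle:thin:@//dbhost:1521/SERVICE");
        ds.setUsername("app");
        ds.setPassword("secret");
        // Run this query on each borrow and discard the connection if it
        // fails; return-side testing is off here to keep overhead low.
        ds.setValidationQuery("SELECT 1 FROM dual");
        ds.setTestOnBorrow(true);
        ds.setTestOnReturn(false);
        return ds;
    }
}
```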
Configuration of data source "was" as follows:
<bean id="datasource2"
class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
<property name="driverClassName">
<value>oracle.jdbc.driver.OracleDriver</value>
</property>
<property name="url">
<value>ORACLE URL</value>
</property>
<property name="username">
<value>user id</value>
</property>
<property name="password">
<value>user password</value>
</property>
<property name="initialSize" value="5"/>
<property name="maxActive" value="20"/>
</bean>
have changed it to:
<bean id="connectionPool1" class="oracle.jdbc.pool.OracleDataSource" destroy-method="close">
<property name="connectionCachingEnabled" value="true" />
<property name="URL">
<value>ORACLE URL</value>
</property>
<property name="user">
<value>user id</value>
</property>
<property name="password">
<value>user password</value>
</property>
<property name="connectionCacheProperties">
<value>
MinLimit:1
MaxLimit:5
InitialLimit:1
ConnectionWaitTimeout:120
InactivityTimeout:180
ValidateConnection:true
</value>
</property>
</bean>