I've spent the last few days trying to locate the cause of a new problem that arose during development a few days ago... and I haven't found it yet. I have found a workaround, though. But let's start with the problem itself.
We are using JBoss EAP 6.1.0.GA (AS 7.2.0.Final-redhat-8) as our application server for a fairly large enterprise project. The JPA layer is handled by Hibernate Core 4.2.0.Final-redhat-1 using oracle.jdbc.OracleDriver (version 11.2), connecting to Oracle 11.2.0.3.0.
A few weeks ago everything worked as expected and we had no database related problems. We were using the following datasource:
<datasource jta="true" jndi-name="java:/myDS" pool-name="myDS" enabled="true" use-java-context="true" use-ccm="true">
<connection-url>jdbc:oracle:thin:@192.168.0.93:1521:DEV</connection-url>
<driver>oracle</driver>
<transaction-isolation>TRANSACTION_READ_COMMITTED</transaction-isolation>
<pool>
<min-pool-size>1</min-pool-size>
<max-pool-size>20</max-pool-size>
<prefill>true</prefill>
<use-strict-min>false</use-strict-min>
<flush-strategy>FailingConnectionOnly</flush-strategy>
</pool>
<security>
<user-name>MY_DB</user-name>
<password>pass</password>
</security>
</datasource>
Most of the time we had 5-10 open connections with 1-3 in use (single development environment)... the pool held that level and worked just fine.
But after some unknown change to our code the pool stopped working... it didn't release its connections anymore and didn't even re-use them! It took only a few simple requests to fill the pool up to its maximum of 20 connections, and then JPA refused any new database queries.
We've spent several days trying to find the relevant changes in our code... without success!
Today I discovered a workaround. We changed persistence.xml a little bit:
<persistence-unit name="myPU">
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<jta-data-source>java:/myDS</jta-data-source>
<properties>
<property name="jboss.entity.manager.factory.jndi.name" value="java:/myDSMF" />
<property name="hibernate.dialect" value="org.hibernate.dialect.Oracle10gDialect" />
<property name="hibernate.transaction.manager_lookup_class" value="org.hibernate.transaction.JBossTransactionManagerLookup" />
<property name="hibernate.default_batch_fetch_size" value="1000" />
<property name="hibernate.jdbc.batch_size" value="0" />
<property name="hibernate.connection.release_mode" value="after_statement" />
<!-- <property name="hibernate.connection.release_mode" value="after_transaction" /> -->
<property name="hibernate.connection.SetBigStringTryClob" value="true" />
</properties>
</persistence-unit>
Changing hibernate.connection.release_mode from after_transaction to after_statement did the trick, although that setting had never been touched before. Now connections are released as expected and the pool is usable again.
I don't get why after_transaction doesn't work anymore... the changes are committed; we can see them in the database. And committing a transaction should end it - shouldn't it?
Although I've found that simple workaround, I'd really like to understand the underlying problem. I don't feel good about postponing that knowledge until we're in production. So any feedback is very much appreciated! Thanks!
You are using JTA, so after_transaction mode is never recommended for JTA transactions.
Here is the relevant documentation from the JBoss site:
after_transaction - says to use
ConnectionReleaseMode.AFTER_TRANSACTION. This setting should not be
used in JTA environments. Also note that with
ConnectionReleaseMode.AFTER_TRANSACTION, if a session is considered to
be in auto-commit mode connections will be released as if the release
mode were AFTER_STATEMENT.
So you should use either auto or after_statement explicitly to release connections aggressively.
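For example, in the persistence.xml above you could simply rely on Hibernate's default resolution instead of forcing a mode - a minimal sketch, assuming Hibernate's auto setting, which resolves to after_statement when a JTA transaction factory is in use:
<property name="hibernate.connection.release_mode" value="auto" />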
References
Connection Release Modes.
Related
I have an OSGi bundle which needs to persist data in a database. As described in a previous stackoverflow question I have found that in order for transactions to work as expected I need to use an XADataSource to connect to the database. When I do so however I see that the connections to the database that are opened by my application are never closed, which of course results in the database not being able to accept any more connections after a while.
My setup is the following:
I have a bundle which creates the datasource and which only includes a blueprint.xml file with the following content
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
<bean id="dataSource" class="com.mysql.jdbc.jdbc2.optional.MysqlDataSource">
<property name="url" value="jdbc:mysql://localhost:3306/myschema"/>
<property name="user" value="user"/>
<property name="password" value="pass"/>
</bean>
<service interface="javax.sql.XADataSource" ref="dataSource">
<service-properties>
<entry key="osgi.jndi.service.name" value="jdbc/mysqlds"/>
</service-properties>
</service>
</blueprint>
Then in the bundle that persists my data I have a persistence.xml
<persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
<persistence-unit name="mypu" transaction-type="JTA">
<jta-data-source>osgi:service/javax.sql.DataSource/(osgi.jndi.service.name=jdbc/mysqlds)
</jta-data-source>
</persistence-unit>
</persistence>
And I specify that my service methods should run in a transaction in my blueprint.xml
<bean id="MyServiceImpl"
class="com.test.impl.MyServiceImpl">
<jpa:context property="em" unitname="mypu" />
<tx:transaction method="*" value="Required" />
</bean>
<service id="MyService" ref="MyServiceImpl" interface="com.test.api.MyService" />
I deploy my bundles in Karaf, using Aries and OpenJPA for persistence, and I have also deployed the Aries transaction wrapper bundle (org.apache.aries.transaction.wrappers) in order to enlist my XA resources with the transaction manager.
Any ideas what I am missing in my configuration?
Edit:
After some more searching I found this DBCP issue, which suggests that the problem I'm having is a bug in DBCP with MySQL. However, I'm at a loss as to how to replace DBCP with some other connection pool implementation that OpenJPA can work with. Any suggestions are more than welcome.
I used commons-dbcp to get a connection pool that also enlists XA connections, with the following configuration:
<bean id="myXAEnabledConnectionPoolDataSource" class="org.apache.commons.dbcp.managed.BasicManagedDataSource" destroy-method="close">
<property name="xaDataSourceInstance" ref="mysqlXADataSourceBean" />
<property name="transactionManager" ref="transactionManager" />
</bean>
You can get the transaction manager as a reference based on the interface javax.transaction.TransactionManager.
This way commons-dbcp will handle the lifecycle of the connections properly. Please note that the destroy-method is there so that the connection pool is closed when the blueprint container stops.
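For completeness, the transaction manager reference itself can be declared in the same blueprint.xml - a minimal sketch, assuming the deployed transaction manager bundle publishes it as an OSGi service:
<!-- JTA transaction manager published as an OSGi service by the transaction manager bundle -->
<reference id="transactionManager" interface="javax.transaction.TransactionManager" />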
Edit:
1-2 years ago I had the same problem, but with PostgreSQL. I debugged aries.transaction.wrapper a lot at the time, but I cannot remember exactly why I left it out. I think the motivation was that commons-dbcp was a solution that had worked for me in previous projects, while I could not fix aries.transaction.wrapper even after analyzing its code at length.
Please note that MysqlDataSource is not a connection pool. It always gives back a new connection when you need one. It is also not XA enabled. MysqlXADataSource is XA enabled, so you should probably instantiate an object of that class instead. However, an XADataSource is only responsible for giving back XAConnections; it does not enlist them. That is where a managed connection pool can help. A managed connection pool does the following:
Wraps all provided Connection objects with a custom managed connection class
If close is called on a connection, it is not closed while there is an ongoing transaction. It is only closed (returned to the pool) once the transaction commits or rolls back
If a connection is requested from the pool and a connection has already been handed out in the same transaction, that same connection will be returned (that was a difficult sentence :))
Sometimes JDBC drivers provide connection pools and even managed connection pools; however, it is better to use the JDBC driver only to obtain new connections and to wrap it with a third-party library that has been tested in several projects and is known to work.
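Putting it together, a rough blueprint sketch might look like the following. This is not a verified configuration: the class names are from MySQL Connector/J 5.x and commons-dbcp 1.4, and the URL, credentials and JNDI name are simply the placeholders from the question.
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
<!-- XA-capable MySQL datasource; only used to obtain new XAConnections -->
<bean id="mysqlXADataSourceBean" class="com.mysql.jdbc.jdbc2.optional.MysqlXADataSource">
<property name="url" value="jdbc:mysql://localhost:3306/myschema"/>
<property name="user" value="user"/>
<property name="password" value="pass"/>
</bean>
<!-- JTA transaction manager published by the deployed transaction manager bundle -->
<reference id="transactionManager" interface="javax.transaction.TransactionManager"/>
<!-- managed pool that wraps connections and enlists them in the ongoing transaction -->
<bean id="pooledDataSource" class="org.apache.commons.dbcp.managed.BasicManagedDataSource" destroy-method="close">
<property name="xaDataSourceInstance" ref="mysqlXADataSourceBean"/>
<property name="transactionManager" ref="transactionManager"/>
</bean>
<!-- expose the pooled datasource under the JNDI name the persistence unit looks up -->
<service interface="javax.sql.DataSource" ref="pooledDataSource">
<service-properties>
<entry key="osgi.jndi.service.name" value="jdbc/mysqlds"/>
</service-properties>
</service>
</blueprint>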
Previously, my Java web projects used the ordinary Eclipse structure, and at container start-up (Tomcat, in this case) Hibernate generated the schema correctly.
Now I'm using the Maven structure. I've relocated the needed files and configured everything properly (I think, because everything works: Spring starts, and Hibernate connects to the database - as long as it was created beforehand and there is some data to fetch). I've tested all CRUD operations and they work.
The problem is that Hibernate refuses to generate the schema (DDL) as it did under the ordinary Eclipse structure.
Additional information:
My persistence.xml is almost empty (as always) because Spring's applicationContext.xml bootstraps it. I have not changed the file; it remains the same as before.
<!-- Location: src/main/resources/META-INF/persistence.xml -->
<persistence>
<persistence-unit name="jpa-persistence-unit" transaction-type="RESOURCE_LOCAL"/>
</persistence>
Part of the Spring configuration goes here (applicationContext.xml):
<!-- Location: src/main/webapp/WEB-INF/applicationContext.xml -->
<!-- ... -->
<bean id="jpaVendorAdapter" class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
<property name="database" value="[DATABASE-NAME]" />
<property name="showSql" value="true" />
<property name="generateDdl" value="true" /> <!-- THIS CONFIGURATION WORKED PREVIOUSLY, NOW WITH MAVEN, IT'S IGNORED -->
<property name="databasePlatform" value="[DIALECT]" />
</bean>
<!-- ... -->
I'm not using any Maven Hibernate plugin, because I just want the default behavior that occurred earlier.
Did Maven invalidate this "generateDdl" property!? Why!? What should I do!? I can't find any solution.
I found the solution.
Maven wasn't at fault at all.
Hibernate was not able to create my database because the "DIALECT" was wrong.
I remembered that I had changed the dialect from MySQL to MySQL-InnoDB. Hibernate was logging this problem, but I couldn't see it because the slf4j-simple dependency was not explicitly declared.
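For reference - and these values are assumptions, since the question only shows placeholders - a valid InnoDB dialect and the missing logging dependency would look something like this:
<!-- applicationContext.xml: MySQL5InnoDBDialect for MySQL 5.x, MySQLInnoDBDialect for older versions -->
<property name="databasePlatform" value="org.hibernate.dialect.MySQL5InnoDBDialect" />
<!-- pom.xml: make Hibernate's warnings visible (version number is only an example) -->
<dependency>
<groupId>org.slf4j</groupId>
<artifactId>slf4j-simple</artifactId>
<version>1.6.1</version>
</dependency>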
Thank you for your time, Shawn.
I have been trying to upgrade my ojdbc code from ojdbc14-10.2.0.1.0 to ojdbc6-11.1.0.7.0. We had been using OracleConnectionCacheImpl for datasource connections and then moved to the Universal Connection Pool with OracleDataSource at its heart. Here is how we currently have it configured in Spring:
<bean id="myDatasource" class="oracle.ucp.jdbc.PoolDataSourceFactory" factory-method="getPoolDataSource">
<property name="URL" value="#JDBC_URL#"/>
<property name="user" value="#JDBC_USERNAME#"/>
<property name="password" value="#JDBC_PASSWORD#"/>
<property name="connectionFactoryClassName" value="oracle.jdbc.pool.OracleDataSource"/>
<property name="connectionPoolName" value="MFR_RTE_POOL"/>
<property name="minPoolSize" value="5"/>
<property name="maxPoolSize" value="100"/>
<property name="validateConnectionOnBorrow" value="true" />
<property name="connectionWaitTimeout" value="30"/>
<property name="connectionHarvestMaxCount" value="25"/>
<property name="connectionHarvestTriggerCount" value="5"/>
<property name="maxStatements" value="100"/>
</bean>
It took a while to get it running without closed-connection errors, but now I have an issue with memory management. I have run jconsole against an application that uses a thread pool (ThreadPoolExecutors) to create fee requests based on data passed in from a file; a file can have hundreds of thousands of fee requests. My issue is that the old generation of the heap fills up and objects are never released. In the performance test I have set up, the old generation fills up in about 20-25 minutes and never frees up. The application eventually hits a "GC overhead limit exceeded" error and comes to a grinding halt.
When I run the same test using the old OracleConnectionCacheImpl class it runs with no problem. Granted, the thread pool and all accompanying code were written to run against older versions of Spring (1.2.6) and the old ojdbc driver, but is there really that big a difference in the way OracleConnectionCacheImpl works versus Universal Connection Pooling? Am I looking at rewriting my domain model if I want to accommodate the latest versions of Oracle's JDBC driver code? I have tried a plain OracleDataSource connection and it failed miserably with NullPointerExceptions after working on several files concurrently. I then went to UCP (at the suggestion of another post on this forum), which works fine in all but one application. At this point I'm trying to figure out whether I can further optimize the Spring config bean for my datasource or whether I need to start thinking about upgrading the code base. As stated previously, this code runs very well against the old ojdbc class, but I have had issues every step of the way trying to implement UCP. I'm starting to wonder if it's even worth upgrading.
This problem had bugged me for months; I hope what I came up with helps someone else out there.
I did finally figure out a solution to my issue. Instead of using OracleDataSource as the connection factory:
<property name="connectionFactoryClassName" value="oracle.jdbc.pool.OracleDataSource"/>
I would suggest trying OracleConnectionPoolDataSource:
<property name="connectionFactoryClassName" value="oracle.jdbc.pool.OracleConnectionPoolDataSource"/>
OracleConnectionPoolDataSource extends OracleDataSource and seems to do better in applications where multiple connections need to be opened by multiple resources. In my case I have an application that processes multiple batch files. The same SQL code is run over and over, but the application needs a new connection for each new file. Under these circumstances OracleDataSource often produced failed-connection errors of one sort or another (e.g. SQLException: closed connection, or NullPointerException: connection closed, with or without UCP) and led to garbage-collection issues (the old generation would fill up and cause GC to ultimately fail no matter how much memory I added to the JVM).
I found OracleDataSource to work well in applications that do not do a lot of batch processing. For instance, another application I use is a file-processing application, but it only works on one file at a time; OracleDataSource works great in that circumstance. It also seems to work well for web applications. We have a web app we installed OracleDataSource in 9 months ago, and it has had no issues.
I'm sure there are ways to make OracleDataSource work as well as OracleConnectionPoolDataSource, but this is what worked for me.
I am using Hibernate 3, c3p0 9.1.2 and Oracle 11g in my application. If I restart Oracle, the stale connections are not refreshed and I get the exception "java.sql.SQLRecoverableException: Closed Connection". Below is my hibernate.cfg.xml.
I am a beginner with the Hibernate API. Can you please suggest how to configure Hibernate to automatically refresh stale connections at a specified interval?
Here is my hibernate.cfg.xml
<session-factory>
<property name="hibernate.connection.driver_class">oracle.jdbc.driver.OracleDriver</property>
<property name="hibernate.connection.url">jdbc:oracle:thin:@localhost:1521:ems</property>
<property name="hibernate.connection.username">emsman</property>
<property name="hibernate.c3p0.idle_test_period">60</property> <!-- seconds -->
<property name="hibernate.c3p0.min_size">5</property>
<property name="hibernate.c3p0.max_size">20</property>
<property name="hibernate.c3p0.timeout">1800</property>
<property name="hibernate.c3p0.max_statements">50</property>
<property name="show_sql">false</property>
<property name="dialect">org.hibernate.dialect.OracleDialect</property>
<property name="c3p0.validate">true</property>
<mapping resource="<package-name>/GroupOpWorkflow.hbm.xml"/>
<mapping resource="<package-name>/GroupOperation.hbm.xml"/>
<mapping resource="<package-name>/GroupOpNode.hbm.xml"/>
<mapping resource="<package-name>/NodeStatusLog.hbm.xml"/>
</session-factory>
It's c3p0, your database connection pool, that you need to configure - not Hibernate. Try setting idleConnectionTestPeriod and an appropriate preferredTestQuery, e.g. select 1 from dual. The validate property has been deprecated and it's recommended that you not use it.
See http://community.jboss.org/wiki/HowToConfigureTheC3P0ConnectionPool for more information. You'll get the most control if you create a c3p0.properties file in WEB-INF/classes but you need to make sure not to override those properties in your hibernate.cfg.xml.
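A minimal c3p0.properties sketch with the two settings mentioned above might look like this (the values are only examples, not recommendations for your workload):
# WEB-INF/classes/c3p0.properties
# test idle connections every 5 minutes with a cheap Oracle-compatible query
c3p0.idleConnectionTestPeriod=300
c3p0.preferredTestQuery=SELECT 1 FROM DUAL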
After going through the document (http://community.jboss.org/wiki/HowToConfigureTheC3P0ConnectionPool) I found that c3p0 was not being used by Hibernate at all.
So I wrote a new c3p0 XML configuration file and set the following system properties:
C3P0_SYS_PROPS="-Dcom.mchange.v2.c3p0.cfg.xml=<FILE-PATH>/c3p0-config.xml -Dcom.mchange.v2.log.MLog=com.mchange.v2.log.FallbackMLog -Dcom.mchange.v2.log.FallbackMLog.DEFAULT_CUTOFF_LEVEL=WARNING"
So here is the final working configuration
hibernate.cfg.xml
<session-factory>
<property name="hibernate.connection.driver_class">oracle.jdbc.driver.OracleDriver</property>
<property name="hibernate.connection.url">jdbc:oracle:thin:#localhost:1521:ems</property>
<property name="hibernate.connection.username">emsman</property>
<property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<property name="hibernate.connection.autoReconnect">true</property>
<property name="show_sql">false</property>
<property name="dialect">org.hibernate.dialect.Oracle10gDialect</property>
<property name="hibernate.c3p0.idle_test_period">300</property> <!-- In seconds -->
<property name="hibernate.c3p0.min_size">5</property>
<property name="hibernate.c3p0.max_size">20</property>
<property name="hibernate.c3p0.timeout">1800</property>
<property name="hibernate.c3p0.max_statements">50</property>
.....
c3p0-config.xml
<c3p0-config>
<default-config>
<!-- Configuring Connection Testing -->
<!-- property name="automaticTestTable">TEST_EMS_HIBERNATE_CONN</property -->
<property name="checkoutTimeout">0</property>
<property name="testConnectionOnCheckout">true</property>
<property name="testConnectionOnCheckin">false</property>
<property name="preferredTestQuery">SELECT 1 from dual</property>
<!-- Configuring Recovery From Database Outages -->
<property name="acquireRetryAttempts">0</property>
<property name="acquireRetryDelay">1000</property>
<property name="breakAfterAcquireFailure">false</property>
<!-- Configuring to Debug and Workaround Broken Client Apps -->
<property name="unreturnedConnectionTimeout">1800</property>
<property name="debugUnreturnedConnectionStackTraces">true</property>
</default-config>
</c3p0-config>
You can try the following; it's simple advice from the author of c3p0, taken from here:
The best thing to do is usually to try Step 3, see if it helps
(however you measure performance), see if it hurts (is your
application troubled by broken Connections? does it recover from
database restarts well enough?), and then decide.
Step 3: If you'd like to improve performance by eliminating
Connection testing from clients' code path:
Set testConnectionOnCheckout to false
Set testConnectionOnCheckin to true
Set idleConnectionTestPeriod to 30, fire up your application and
observe. This is a pretty robust setting, all Connections will be tested
on check-in and every 30 seconds thereafter while in the pool. Your
application should experience broken or stale Connections only very
rarely, and the pool should recover from a database shutdown and
restart quickly. But there is some overhead associated with all that
Connection testing.
If database restarts will be rare so quick recovery is not an issue,
consider reducing the frequency of tests by idleConnectionTestPeriod
to, say, 300, and see whether clients are troubled by stale or broken
Connections. If not, stick with 300, or try an even bigger number.
Consider setting testConnectionOnCheckin back to false to avoid
unnecessary tests on checkin. Alternatively, if your application does
encounter bad Connections, consider reducing idleConnectionTestPeriod
and set testConnectionOnCheckin back to true. There are no correct or
incorrect values for these parameters: you are trading off overhead
for reliability in deciding how frequently to test. The exact numbers
are not so critical. It's usually easy to find configurations that
perform well. It's rarely worth spending time in pursuit of "optimal"
values here.
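As a sketch, Step 3 translated into the c3p0-config.xml format used above would look something like this (the numbers are the example values from the quote, not tuned recommendations):
<c3p0-config>
<default-config>
<property name="testConnectionOnCheckout">false</property>
<property name="testConnectionOnCheckin">true</property>
<property name="idleConnectionTestPeriod">30</property> <!-- seconds -->
<property name="preferredTestQuery">SELECT 1 FROM DUAL</property>
</default-config>
</c3p0-config>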
I'm facing a weird production problem. Environment is the following:
JBOSS 4.0.2
SQL Server 2005
Driver JTDS 1.2.5
From time to time the following scenario occurs.
A SQL command fails to execute with
java.sql.SQLException: I/O Error: Read timed out
(I can live with that, if it just happens twice a day or so)
But from that moment on the connection seems to be wasted without the pool recognizing it, as I continuously receive
java.sql.SQLException: Invalid state, the Connection object is closed.
from that moment on. The only thing that helps is restarting JBoss. This occurs despite the fact that I have
<check-valid-connection-sql>select getdate()</check-valid-connection-sql>
set up in my Datasource definition.
I was wondering if I could use a custom ValidConnectionChecker that either rebuilds the connection itself or explicitly throws an exception to fix this (a rough sketch of how such a checker could be wired is shown after the datasource definition below). Maybe someone has other suggestions.
Here is my complete DS definition.
<local-tx-datasource>
<jndi-name>MyDS</jndi-name>
<connection-url>jdbc:jtds:sqlserver://192.168.35.235:1433/MyDb;user=user1;password=pwd;appName=MyApp;loginTimeout=15;socketTimeout=120</connection-url>
<driver-class>net.sourceforge.jtds.jdbc.Driver</driver-class>
<user-name>user1</user-name>
<password>pwd</password>
<min-pool-size>10</min-pool-size>
<max-pool-size>25</max-pool-size>
<blocking-timeout-millis>60000</blocking-timeout-millis>
<idle-timeout-minutes>1</idle-timeout-minutes>
<check-valid-connection-sql>select getdate()</check-valid-connection-sql>
</local-tx-datasource>
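If I go the custom-checker route, I assume it would be wired in with something like this (just a sketch: the element names are from the JBoss 4 datasource DTD, and the checker/sorter classes are hypothetical placeholders I would have to implement):
<!-- inside <local-tx-datasource>, replacing check-valid-connection-sql -->
<valid-connection-checker-class-name>com.example.MyValidConnectionChecker</valid-connection-checker-class-name>
<!-- optionally flag fatal SQLExceptions (like "Connection object is closed") so the pool destroys the connection -->
<exception-sorter-class-name>com.example.MyExceptionSorter</exception-sorter-class-name>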
Any help appreciated.
Regards
Try changing your driver class line to
net.sourceforge.jtds.jdbcx.JtdsDataSource.
net.sourceforge.jtds.jdbc.Driver doesn't implement the javax.sql.ConnectionPoolDataSource interface.
source:
http://jtds.sourceforge.net/faq.html#features
The solution probably comes too late, but I was stuck with the jTDS driver here too. Hope this saves half an hour of your productive time.
The fix is to specify a validationQuery for the Apache Commons DBCP2 connection pool implementation.
For jTDS / SQL Server I specified the Spring configuration as follows:
<bean id="sqlServerDS" class="org.apache.commons.dbcp2.BasicDataSource" destroy-method="close" >
<property name="driverClassName" value="${jdbc.driverClassName}" />
<property name="url" value="${jdbc.url}" />
<property name="username" value="${jdbc.username}" />
<property name="password" value="${jdbc.password}" />
<property name="defaultReadOnly" value="true" />
<property name="validationQuery" value="select 1" />
</bean>
In case you are not using Spring, call the setValidationQuery method on BasicDataSource in your Java code:
BasicDataSource bds = new BasicDataSource();
bds.setValidationQuery("select 1");
Connection.isValid() isn't implemented in jTDS, which is why an explicit validationQuery is needed.
I found that even catching the exception and forcing a complete restart of the connection didn't work.