AbstractRoutingDataSource not working in Spring Batch - spring

I am using Spring Batch to import a large set of data from XML into an Oracle database.
My application is multi-tenant aware: each tenant can have a separate DB schema, or several tenants can share a single DB schema.
From the XML I get a unique tenant identifier, which I use as the tenant database resolution identifier to select the DB at runtime.
I am using Spring MVC and Hibernate.
My XML:
<users>
    <user>
        <userAccountDetail>
            <tenantId>acmebank</tenantId>
            <emailAddress>966019620abc5009254#6657170682.com</emailAddress>
        </userAccountDetail>
    </user>
</users>
My Spring Batch Configuration
<batch:job id="walletUsersImportJob">
    <batch:step id="importUsers" allow-start-if-complete="true">
        <batch:tasklet>
            <batch:chunk reader="xmlItemReader" writer="userDetailWriter"
                         processor="userDetailProcessor" commit-interval="2147483647">
            </batch:chunk>
        </batch:tasklet>
        <batch:listeners>
            <batch:listener>
                <bean class="com.masterpass.datamigration.batch.core.listener.ItemFailureHandler" />
            </batch:listener>
        </batch:listeners>
    </batch:step>
</batch:job>
DataSource Configuration
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="dataSource" ref="routingDataSource" />
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
<property name="showSql" value="true" />
</bean>
</property>
</bean>
<bean id="routingDataSource"
class="com.csam.wsc.enabling.tenant.jdbc.datasource.lookup.TenantRoutingDataSource" >
<property name="defaultTargetDataSource" ref="globalDataSource"/>
<property name="tenantMetadataLookupStrategy" ref="tenantMetadataLookupStrategy" />
</bean>
Here tenantMetadataLookupStrategy needs to look up the tenant identifier, i.e. the tenantId received as part of the XML, and I have a map like
tenantId = DataSource
acmebank = java:/ACMEBANK_DS
anybank = java:/ANYBANK_DS
which tells me which database to use for any DAO operation.
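For reference, a routing data source along these lines usually overrides determineCurrentLookupKey() against some per-thread tenant context. A minimal sketch follows; the TenantContext holder and class names are assumptions, not the actual TenantRoutingDataSource:
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

// Hypothetical per-thread holder for the tenantId taken from the XML item
class TenantContext {
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    static void setTenantId(String tenantId) { CURRENT.set(tenantId); }
    static String getTenantId() { return CURRENT.get(); }
    static void clear() { CURRENT.remove(); }
}

// Sketch only: the returned key is what the routing data source uses to pick
// the tenant's DataSource (here resolved via tenantMetadataLookupStrategy);
// a null key falls back to the defaultTargetDataSource.
class SketchTenantRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return TenantContext.getTenantId();
    }
}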
Configuration-wise, I hope I am fine.
Let me describe my problem.
When the Spring Batch application bootstraps, org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource.determineTargetDataSource() is called, which needs the tenantId to identify the target datasource.
This tenantId is in the XML and is not available until the ItemReader has run.
My Spring Batch configuration uses batch:tasklet, which wraps the reader, processor, and writer in a single transaction.
Because of this, I suspect that during bootstrap, when the transaction manager is initialized, the entityManagerFactory looks for its datasource, i.e. routingDataSource, which in turn needs the tenantId.
I think that if I remove the reader and processor from the transaction body, I will get a place to capture the tenantId and set it in a context somewhere, so that the right DB can be selected at runtime.
I also do not want the reader and processor to be transaction-aware, so it would be better if I could wrap only the writer in a transaction.
Please suggest.

Two things here:
To "remove the read and process from the transaction body" in Spring Batch is virtually impossible. You'd have to rewrite the TaskletStep (one of the most sensitive pieces of code in the framework) which I would not recommend.
I believe you are correct in the reason the routing data source isn't working. That being said, I'd actually recommend a slightly different approach. Instead of using the routing data source technique, why not use the ClassifierCompositeItemWriter? This moves the routing up a level (from the connection to the item writer) which allows each writer instance to obtain a connection for the life of the transaction.
You can read more about the ClassifierCompositeItemWriter in the javadoc here: http://docs.spring.io/spring-batch/trunk/apidocs/org/springframework/batch/item/support/ClassifierCompositeItemWriter.html
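A minimal sketch of that approach, assuming the item type exposes a getTenantId() accessor and that one writer per tenant (each wired to that tenant's DataSource) is configured elsewhere; all names here are illustrative:
import java.util.Map;

import org.springframework.batch.item.ItemWriter;
import org.springframework.batch.item.support.ClassifierCompositeItemWriter;

public class TenantWriterFactory {

    // Hypothetical item type: whatever the reader maps the <userAccountDetail> fragment to
    public interface UserAccountDetail {
        String getTenantId();
    }

    // tenantWriters maps a tenantId (e.g. "acmebank") to the writer wired
    // with that tenant's DataSource.
    public ClassifierCompositeItemWriter<UserAccountDetail> tenantAwareWriter(
            Map<String, ItemWriter<? super UserAccountDetail>> tenantWriters) {

        ClassifierCompositeItemWriter<UserAccountDetail> writer = new ClassifierCompositeItemWriter<>();
        // The classifier picks the delegate per item, so each delegate keeps
        // its own connection for the life of the chunk transaction.
        writer.setClassifier(item -> tenantWriters.get(item.getTenantId()));
        return writer;
    }
}
If the per-tenant delegates are themselves ItemStreams (for example file writers), remember to register them as streams on the step so they are opened and closed correctly.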

Related

Flyway Spring JPA2 integration - possible to keep schema validation?

Hi, I have a web application where I am trying to integrate JPA2 (Hibernate) + Spring + Flyway.
I added Flyway to my ApplicationContext like this:
<bean id="flyway" class="org.flywaydb.core.Flyway" init-method="migrate">
<property name="baselineOnMigrate" value="true" />
<property name="dataSource" ref="dataSource" />
</bean>
Theoretically this works fine and updates the schema with the scripts that I save under db/migration. So far so good.
The one problem left is that if I change something (e.g. add a String field to an entity), the application won't even get that far, because Hibernate's schema validator will throw something like: Caused by: org.hibernate.HibernateException: Missing column: showCaseField in demo.testEntity. This happens because I have set "hibernate.hbm2ddl.auto" to "validate".
Now I have read about Hibernate failing to recognize perfectly valid schemas in some (rare?) cases, and I MAY (or may not) reach a point someday where I disable this feature altogether. But as of now I actually like the extra validation and don't want to turn it off.
Is it possible to integrate Spring and Flyway while still keeping Hibernate's schema validation? I guess this could be a problem, because Flyway probably depends on a DataSource bean or something and therefore requires the application context to be completely initialized, which in turn Hibernate prevents because of the schema mismatch.
Any ideas?
Found the answer now. Basically, all you have to do is let your entityManagerFactory bean depend on your Flyway bean (there's an attribute for that). Now Flyway (and in turn your dataSource) is initialized first, and the Flyway scripts are executed before Hibernate's schema validation:
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"
depends-on="flyway"> ....
</bean>
<bean id="flyway" class="org.flywaydb.core.Flyway" init-method="migrate">
<property name="baselineOnMigrate" value="true"/>
<property name="dataSource" ref="dataSource"/>
</bean>
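If you are using Java config rather than XML, the same ordering can be expressed with @DependsOn. A minimal sketch, assuming the pre-5.x Flyway API used in the XML above and a dataSource bean defined elsewhere:
import javax.sql.DataSource;

import org.flywaydb.core.Flyway;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.DependsOn;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;

@Configuration
public class FlywayJpaConfig {

    // Same as init-method="migrate" with baselineOnMigrate=true in the XML above
    @Bean(initMethod = "migrate")
    public Flyway flyway(DataSource dataSource) {
        Flyway flyway = new Flyway();
        flyway.setDataSource(dataSource);
        flyway.setBaselineOnMigrate(true);
        return flyway;
    }

    // @DependsOn makes the migrations run before Hibernate's schema validation
    @Bean
    @DependsOn("flyway")
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource) {
        LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
        emf.setDataSource(dataSource);
        // ... persistence unit / vendor adapter settings as in your existing setup
        return emf;
    }
}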

Multi-tenant webapp using Spring MVC and Hibernate 4.2.0.Final

I have developed a small webapp using SpringMVC (3.1.3.RELEASE) and Hibernate 4.2.0.Final.
I'm trying to convert it into a multi-tenant application.
Similar topics have been covered in other threads, but I couldn't find a definitive solution to my problem.
What I am trying to achieve is to design a web app which is able to:
Read a datasource configuration at startup (an XML file containing multiple datasource definitions, which is placed outside the WAR file and is not the application context or Hibernate configuration file).
Create a session factory for each of them (considering that each datasource is a database with a different schema).
How can I set my session factory scope to session? (Or can I reuse the same session factory?)
Example:
Url for client a - URL: http://project.com/a/login.html
Url for client b - URL: http://project.com/b/login.html
If client "a" make request,read the datasource configuration file and Create a session factory using that XML file for the client "a".
This same process will be repeating if the client "b" will send a request.
What I am looking, how to implement datasource creation upon customer subscription without editing the Spring configuration file. It needs to be automated.
Here is my code ,that i have done so far.
Please anyone tell me,What modifications i need to be made?
Please give an answer with some example code..I am quite new in spring and hibernate world.
Spring.xml
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
destroy-method="close" p:driverClassName="${jdbc.driverClassName}"
p:url="${jdbc.databaseurl}"
p:username="${jdbc.username}" p:password="${jdbc.password}" />
<bean id="sessionFactory"
class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="configLocation">
<value>classpath:hibernate.cfg.xml</value>
</property>
<property name="hibernateProperties">
<props>
<prop key="hibernate.dialect">${jdbc.dialect}</prop>
<prop key="hibernate.show_sql">true</prop>
</props>
</property>
</bean>
<bean id="transactionManager"
class="org.springframework.orm.hibernate4.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory" />
JDBC.properties File
jdbc.driverClassName=com.mysql.jdbc.Driver
jdbc.dialect=org.hibernate.dialect.MySQLDialect
jdbc.databaseurl=jdbc:mysql://localhost:3306/Logistics
jdbc.username=root
jdbc.password=rot#pspl#12
hibernate.cfg.xml File
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
"-//Hibernate/Hibernate Configuration DTD//EN"
"http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
<session-factory>
<mapping class="pepper.logis.organizations.model.Organizaions" />
<mapping class="pepper.logis.assets.model.Assets" />
</session-factory>
</hibernate-configuration>
Thanks,
First create a table for Tenant with a tenant_id and associate it with all users. Now you can fetch this detail when the user logs in and set it in the session.
We are using AbstractRoutingDataSource to switch the DataSource for every request on Spring Boot. I think this is the hot-swappable targets/datasource approach mentioned by @bhantol above.
It solves our problems, but I don't think it is a sound solution. I guess JNDI could be a better option than AbstractRoutingDataSource.
Wondering what you ended up with.
Here are some ideas for you.
Option 1) Single application instance
It is somewhat ambitious to do this with what you are actually trying to achieve.
The gist is to simply deploy the exact same application with a different context root on the same JVM. You can still tune the JVM as a whole as you would for a truly multi-tenant application. But this comes at the expense of duplicated classes, contexts, local caching, startup time, etc.
As of today, Spring Framework 4.0 does not provide much multi-tenancy support (other than hot-swappable targets/datasources). I am looking for a good framework, but it may be a wash for me to move away from Spring at this time.
Option 2) Multiple deployments of the same application (more practical as of today)
Just deploy the same exact application to the same application server JVM instance, or even to different ones.
If you use the same instance, you may need to bootstrap your app to pick up a DataSource based on what the instance should serve; e.g. a client=a property would be enough to pick up an aDataSource or bDataSource. I myself ended up going with this approach.
If you have a different application server instance, you could just configure a different JNDI path and treat things generically. No need for a client="a" property, because you are free to define your datasource differently under the same name.
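For illustration, resolving a per-client DataSource from JNDI at bootstrap might look roughly like this (the JNDI naming convention and the client system property are assumptions):
import javax.sql.DataSource;

import org.springframework.jdbc.datasource.lookup.JndiDataSourceLookup;

public class ClientDataSourceBootstrap {

    // Resolves the DataSource for the client this instance serves,
    // e.g. -Dclient=a resolves java:comp/env/jdbc/aDataSource.
    public DataSource resolveClientDataSource() {
        String client = System.getProperty("client", "a"); // hypothetical property
        JndiDataSourceLookup lookup = new JndiDataSourceLookup();
        return lookup.getDataSource("java:comp/env/jdbc/" + client + "DataSource");
    }
}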

How do you use a scope Tomcat DataSource with Spring Batch?

I am currently using Spring Batch to import data from a SQL Server database. In order to make the datasource configurable, I needed to "step scope" the datasource bean. However, this concerns me: if the datasource bean, which does connection pooling, is step scoped, how can it manage connections in a pool, and is there even a benefit to using it?
My datasource is configured as follows:
<bean id="dataSourceMssql" class="org.apache.tomcat.jdbc.pool.DataSource" scope="step">
<property name="driverClassName" value="${batch.mssql.driver}" />
<property name="username" value="${batch.mssql.user}" />
<property name="password" value="${batch.mssql.password}" />
<property name="removeAbandoned" value="true" />
<property name="removeAbandonedTimeout" value="3610" />
<property name="url"
value="${batch.mssql.connect}#{jobParameters['dburl']}:#{jobParameters['port']}/#{jobParameters['databaseName']}" />
</bean>
Why is it step scoped? Because I needed to retrieve the jobParameters to configure the datasource.
What do I want to know?
Will connection pooling still occur? (Perhaps the bean's resources stay alive and are reclaimed.)
I appreciate the help.
The scope "step" is only usable on spring batch beans. Other beans (Spring) only know the scope : singleton, prototype, request or session.
Normal way to handle this is to set these parameters in a properties file read by your applicatioonContext.xml.
JobParameters are used to pass Job related parameters (path, filename, date, seqNo etc) since a job with the same JobParameters won't be able to run twice.
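For illustration, a minimal sketch of wiring the pool from a properties file instead of job parameters, using Java config; the file name and the batch.mssql.url key are assumptions (the original splits the URL between a property and job parameters):
import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import org.springframework.core.env.Environment;

@Configuration
@PropertySource("classpath:batch-mssql.properties") // hypothetical file name
public class MssqlDataSourceConfig {

    // Read the connection settings from the properties file rather than from
    // jobParameters, so the pool can stay a plain singleton instead of a
    // step-scoped bean.
    @Bean(destroyMethod = "close")
    public DataSource dataSourceMssql(Environment env) {
        org.apache.tomcat.jdbc.pool.DataSource ds = new org.apache.tomcat.jdbc.pool.DataSource();
        ds.setDriverClassName(env.getRequiredProperty("batch.mssql.driver"));
        ds.setUrl(env.getRequiredProperty("batch.mssql.url")); // assumed single URL property
        ds.setUsername(env.getRequiredProperty("batch.mssql.user"));
        ds.setPassword(env.getRequiredProperty("batch.mssql.password"));
        ds.setRemoveAbandoned(true);
        ds.setRemoveAbandonedTimeout(3610);
        return ds;
    }
}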
EDIT: Is your job multithreaded? The majority of jobs are single-threaded! If your job is in fact single-threaded, I would ask myself why pooling connections is needed at all.
Regards

Why is the AbstractRoutingDataSource.determineCurrentLookupKey() not called inside a @Transactional block?

I'm using Hibernate with Spring, relevant config:
<bean id="transactionManager"
class="org.springframework.orm.hibernate3.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory" />
</bean>
<bean id="openSessionInViewInterceptor" class="org.springframework.orm.hibernate3.support.OpenSessionInViewInterceptor">
<property name="sessionFactory"><ref bean="sessionFactory" /></property>
</bean>
<tx:annotation-driven />
<aop:aspectj-autoproxy />
Think about it...
1. Some code wants to obtain a Connection from a DataSource, probably in order to start a transaction and run some SQL query.
2. AbstractRoutingDataSource executes determineCurrentLookupKey() in order to find a suitable DataSource from the set of available ones.
3. The lookup key is used to obtain the current DataSource, and AbstractRoutingDataSource returns JDBC connections from that data source.
4. The Connection is returned from AbstractRoutingDataSource as if it came from a normal data source.
Now you are asking why determineCurrentLookupKey() is not running within a transaction? First, Spring would have to go to point 1 to fetch a database connection required to start the transaction. Then look at point 2. See the problem? Smells like infinite recursion to me.
Simply put, determineCurrentLookupKey() can't run within a transaction because the transaction needs a connection, and the purpose of that method is to determine which DataSource to use to obtain that connection. See also: chicken or the egg.
Similarly, the engineers couldn't use a computer to design the first computer.
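In practice this means the lookup key has to be established before any transactional code runs, for example in a servlet filter or MVC interceptor. A minimal sketch, assuming a hypothetical ThreadLocal key holder that your determineCurrentLookupKey() would read:
import java.io.IOException;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.web.filter.OncePerRequestFilter;

public class LookupKeyFilter extends OncePerRequestFilter {

    // Hypothetical holder read by determineCurrentLookupKey()
    public static final ThreadLocal<String> CURRENT_KEY = new ThreadLocal<>();

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
            FilterChain filterChain) throws ServletException, IOException {
        CURRENT_KEY.set(request.getParameter("tenant")); // hypothetical request parameter
        try {
            filterChain.doFilter(request, response); // @Transactional services run inside here
        } finally {
            CURRENT_KEY.remove(); // don't leak the key to the next pooled request
        }
    }
}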

spring transactional cpool. Which one do I use?

I originally set up Spring with XAPool, but it turns out that's a dead project and seems to have lots of problems.
I switched to c3p0, but now I learn that the @Transactional annotations don't actually create transactions when used with c3p0. If I do the following, it will insert the row into Foo even though an exception was thrown inside the method:
@Service
public class FooTst
{
    @PersistenceContext(unitName="accessControlDb") private EntityManager em;

    @Transactional
    public void insertFoo() {
        em.createNativeQuery("INSERT INTO Foo (id) VALUES (:id)")
            .setParameter("id", System.currentTimeMillis() % Integer.MAX_VALUE)
            .executeUpdate();
        throw new RuntimeException("Foo");
    }
}
This is strange, because if I comment out the @Transactional annotation it will actually fail and complain about having a transaction set to rollback only:
java.lang.IllegalStateException: Cannot get Transaction for setRollbackOnly
at org.objectweb.jotm.Current.setRollbackOnly(Current.java:568)
at org.hibernate.ejb.AbstractEntityManagerImpl.markAsRollback(AbstractEntityManagerImpl.java:421)
at org.hibernate.ejb.AbstractEntityManagerImpl.throwPersistenceException(AbstractEntityManagerImpl.java:576)
at org.hibernate.ejb.QueryImpl.executeUpdate(QueryImpl.java:48)
at com.ipass.rbac.svc.FooTst.insertFoo(FooTst.java:21)
at com.ipass.rbac.svc.SingleTst.testHasPriv(SingleTst.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.springframework.test.context.junit4.SpringTestMethod.invoke(SpringTestMethod.java:160)
at org.springframework.test.context.junit4.SpringMethodRoadie.runTestMethod(SpringMethodRoadie.java:233)
at org.springframework.test.context.junit4.SpringMethodRoadie$RunBeforesThenTestThenAfters.run(SpringMethodRoadie.java:333)
at org.springframework.test.context.junit4.SpringMethodRoadie.runWithRepetitions(SpringMethodRoadie.java:217)
at org.springframework.test.context.junit4.SpringMethodRoadie.runTest(SpringMethodRoadie.java:197)
at org.springframework.test.context.junit4.SpringMethodRoadie.run(SpringMethodRoadie.java:143)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.invokeTestMethod(SpringJUnit4ClassRunner.java:160)
at org.junit.internal.runners.JUnit4ClassRunner.runMethods(JUnit4ClassRunner.java:51)
at org.junit.internal.runners.JUnit4ClassRunner$1.run(JUnit4ClassRunner.java:44)
at org.junit.internal.runners.ClassRoadie.runUnprotected(ClassRoadie.java:27)
at org.junit.internal.runners.ClassRoadie.runProtected(ClassRoadie.java:37)
at org.junit.internal.runners.JUnit4ClassRunner.run(JUnit4ClassRunner.java:42)
at org.springframework.test.context.junit4.SpringJUnit4ClassRunner.run(SpringJUnit4ClassRunner.java:97)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:45)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196)
So clearly it notices the @Transactional annotation, but it doesn't actually turn autocommit off at the start of the method.
Here is how I have the transactional stuff set up in the applicationContext.xml. Is this correct? If not, what is it supposed to be?
<bean id="jotm" class="org.springframework.transaction.jta.JotmFactoryBean"/>
<bean id="txManager" class="org.springframework.transaction.jta.JtaTransactionManager">
<property name="transactionManager" ref="jotm"/>
<property name="userTransaction" ref="jotm"/>
<property name="allowCustomIsolationLevels" value="true"/>
</bean>
<tx:annotation-driven transaction-manager="txManager" proxy-target-class="false"/>
After a bunch of searching I found a connection pool called Bitronix, but their Spring setup page describes stuff about JMS, which doesn't even make any sense. What does JMS have to do with setting up a connection pool?
So I'm stuck. What am I actually supposed to do? I don't understand why the connection pool needs to support transactions. All connections support turning autocommit on and off, so I have no idea what the problem is here.
It took a lot of searching and experimentation, but I finally got things working. Here are my results:
Enhydra XAPool is a terrible connection pool. I won't enumerate the problems it caused because it doesn't matter: the latest version hasn't been updated since December 2006. It's a dead project.
I put c3p0 into my application context and got it working fairly easily. But for some reason it just doesn't seem to support rollback, even inside a single method. If I mark a method as @Transactional, then insert into a table and then throw a RuntimeException (one that's definitely not declared in the throws list of the method, because there is no throws list on the method), it will still keep the insert into that table. It will not roll back.
I was going to try Apache DBCP, but my searching turned up lots of complaints about it, so I didn't bother.
I tried Bitronix and had plenty of trouble getting it to work properly under Tomcat, but once I figured out the magic configuration it works beautifully. What follows is everything you need to do to set it up properly.
I dabbled briefly with the Atomikos connection pool. It looks like it should be good, but I got Bitronix working first, so I didn't try using it much.
The configuration below works in standalone unit tests and under Tomcat. That was the major problem I had. Most of the examples I found about how to set up Spring with Bitronix assume that I'm using JBoss or some other full container.
The first bit of configuration is the part that sets up the Bitronix transaction manager.
<!-- Bitronix transaction manager -->
<bean id="btmConfig" factory-method="getConfiguration" class="bitronix.tm.TransactionManagerServices">
<property name="disableJmx" value="true" />
</bean>
<bean id="btmManager" factory-method="getTransactionManager" class="bitronix.tm.TransactionManagerServices" depends-on="btmConfig" destroy-method="shutdown"/>
<bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager">
<property name="transactionManager" ref="btmManager" />
<property name="userTransaction" ref="btmManager" />
<property name="allowCustomIsolationLevels" value="true" />
</bean>
<tx:annotation-driven transaction-manager="transactionManager" />
The major difference between this and the examples I found is the "disableJmx" property. Bitronix throws exceptions at runtime if you don't use JMX but leave it enabled.
The next bit of configuration is the connection pool data source. Note that the connection pool class name is not the usual Oracle driver class "oracle.jdbc.driver.OracleDriver"; it's an XA data source. I don't know what the equivalent class would be for other databases.
<bean id="dataSource" class="bitronix.tm.resource.jdbc.PoolingDataSource" init-method="init" destroy-method="close">
<property name="uniqueName" value="dataSource-BTM" />
<property name="minPoolSize" value="1" />
<property name="maxPoolSize" value="4" />
<property name="testQuery" value="SELECT 1 FROM dual" />
<property name="driverProperties"><props>
<prop key="URL">${jdbc.url}</prop>
<prop key="user">${jdbc.username}</prop>
<prop key="password">${jdbc.password}</prop>
</props></property>
<property name="className" value="oracle.jdbc.xa.client.OracleXADataSource" />
<property name="allowLocalTransactions" value="true" />
</bean>
Note also that the uniqueName needs to be different than any other data sources you have configured.
The testQuery of course needs to be specific to the database you are using. The driver properties are specific to the data source class I'm using; OracleXADataSource, for some silly reason, has different setter names than OracleDriver for the same values.
allowLocalTransactions had to be set to true for me. I found recommendations online NOT to set it to true, but that seems to be impossible: it just won't work if it's set to false. I am not sufficiently knowledgeable about these things to know why that is.
Lastly we need to configure the entity manager factory.
<util:map id="jpa_property_map">
    <entry key="hibernate.transaction.manager_lookup_class" value="org.hibernate.transaction.BTMTransactionManagerLookup"/>
    <entry key="hibernate.current_session_context_class" value="jta"/>
</util:map>

<bean id="dataSource-emf" name="accessControlDb" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
    <property name="dataSource" ref="dataSource"/>
    <property name="persistenceXmlLocation" value="classpath*:META-INF/foo-persistence.xml" />
    <property name="jpaVendorAdapter">
        <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
            <property name="showSql" value="true"/>
            <property name="databasePlatform" value="org.hibernate.dialect.Oracle10gDialect"/>
        </bean>
    </property>
    <property name="jpaPropertyMap" ref="jpa_property_map"/>
    <property name="jpaDialect"><bean class="org.springframework.orm.jpa.vendor.HibernateJpaDialect"/></property>
</bean>
Note that the dataSource property refers to the id of the dataSource I declared. The persistenceXmlLocation refers to a persistence XML file that exists somewhere on the classpath. The classpath*: prefix indicates it may be in any jar; without the *, the file won't be found if it's inside a jar, for some reason.
I found util:map to be a handy way to put the jpaPropertyMap values in one place so that I don't need to repeat them when I use multiple entity manager factories in one application context.
Note that the util:map above won't work unless you include the proper settings in the outer beans element. Here is the header of the xml file that I use:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:p="http://www.springframework.org/schema/p"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:tx="http://www.springframework.org/schema/tx" xmlns:util="http://www.springframework.org/schema/util"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
http://www.springframework.org/schema/context
http://www.springframework.org/schema/context/spring-context-2.5.xsd
http://www.springframework.org/schema/tx
http://www.springframework.org/schema/tx/spring-tx-2.5.xsd http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util.xsd">
Lastly, in order for Bitronix (or apparently any connection pool which supports two-phase commit) to work with Oracle, you need to run the following grants as user SYS. (See http://publib.boulder.ibm.com/infocenter/wasinfo/v6r0/index.jsp?topic=/com.ibm.websphere.express.doc/info/exp/ae/rtrb_dsaccess2.html and http://docs.codehaus.org/display/BTM/FAQ and http://docs.codehaus.org/display/BTM/JdbcXaSupportEvaluation#JdbcXaSupportEvaluation-Oracle)
grant select on pending_trans$ to <someUsername>;
grant select on dba_2pc_pending to <someUsername>;
grant select on dba_pending_transactions to <someUsername>;
grant execute on dbms_system to <someUsername>;
Those grants need to be run for any user that a connection pool is set up for, regardless of whether you actually modify anything. Bitronix apparently looks for those tables when a connection is established.
A few other misc issues:
You can't query tables that are remote synonyms in Oracle from inside a Spring @Transactional block while using Bitronix (you'll get an ORA-24777). Use materialized views or a separate EntityManager that points directly at the other DB instead.
For some reason the btmConfig in the applicationContext.xml has problems setting config values. Instead, create a bitronix-default-config.properties file. The config values you can use are found at http://docs.codehaus.org/display/BTM/Configuration13 . Some other config info for that file is at http://docs.codehaus.org/display/BTM/JdbcConfiguration13 but I haven't used it.
Bitronix uses some local files to store transactional state. I don't know why, but I do know that if you have multiple webapps with local connection pools, you will have problems because they will both try to access the same files. To fix this, specify different values for bitronix.tm.journal.disk.logPart1Filename and bitronix.tm.journal.disk.logPart2Filename in bitronix-default-config.properties for each app.
Bitronix javadocs are at http://www.bitronix.be/uploads/api/index.html .
That's pretty much it. It's very fiddly to get it to work, but it's working now and I'm happy. I hope that all this helps others who are going through the same troubles I did to get this all to work.
When I do connection pooling, I tend to use the one provided by the app server I'm deploying on. It's just a JNDI name to Spring at that point.
Since I don't want to worry about an app server when I'm testing, I use a DriverManagerDataSource and its associated transaction manager for unit tests. I'm not as concerned about pooling or performance when testing. I do want the tests to run efficiently, but pooling isn't a deal breaker in that case.
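A minimal sketch of such a test-only setup (the driver and connection settings are placeholders):
import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.jdbc.datasource.DriverManagerDataSource;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class TestDataSourceConfig {

    // No pooling: DriverManagerDataSource opens a new connection per request,
    // which is fine for unit tests.
    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName("org.h2.Driver");          // placeholder driver
        ds.setUrl("jdbc:h2:mem:test;DB_CLOSE_DELAY=-1"); // placeholder in-memory URL
        ds.setUsername("sa");
        ds.setPassword("");
        return ds;
    }

    @Bean
    public PlatformTransactionManager transactionManager(DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }
}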
