Spring and Hibernate save not working

I am using Hibernate with Spring 3.0. I am trying to save a value into the database, but the console only shows a select query; no insert or update appears, and the save does not work.
I created a sessionFactory bean and injected it into the Impl:
<bean id="GetStartedDAOBean" class="com.sample.dao.impl.GetStartedDAOImpl" >
<property name="sessionfactory" ref="sessionFactory">
</property>
</bean>
<bean id="GetStartedActionBean" class="com.sample.action.GetStartedAction">
<property name="getStartedDAOImpl" ref="GetStartedDAOBean"></property>
<property name="industryDAOImpl" ref="IndustryDAOBean"></property>
<property name="stateDAOImpl" ref="stateDAOBean"></property>
</bean>
In the Impl I have:
private SessionFactory sessionfactory;
public void setSessionfactory(SessionFactory sessionfactory) {
this.sessionfactory = sessionfactory;
}
public void save(Customer customer)throws IllegalStateException,SystemException{
try {
sessionfactory.openSession().saveOrUpdate(customer);
}
catch(Exception e){
e.printStackTrace();
}
}
When I debug, the sessionFactory has a value, but nothing is saved and no insert query is shown. There is no error.
Can anyone help me?

You open your session (in memory) and save something onto it, but the session writes to the database only when you flush(). Do:
Session session = sessionfactory.openSession();
session.saveOrUpdate(customer);
session.flush();
Another way is to commit the transaction, and thus Hibernate will automatically call flush().
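In practice you should also close the session and preferably wrap the save in a transaction. A minimal sketch, assuming the same injected sessionfactory field as in the question and Hibernate's org.hibernate.Session/Transaction API:
Session session = sessionfactory.openSession();
Transaction tx = session.beginTransaction();
try {
    session.saveOrUpdate(customer);
    tx.commit();   // commit flushes the session, so the INSERT/UPDATE is actually issued
} catch (RuntimeException e) {
    tx.rollback();
    throw e;
} finally {
    session.close();   // sessions obtained from openSession() must be closed by the caller
}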

Try @Transactional on the method, and add the following to your XML:
<tx:annotation-driven/>

@Transactional goes on the method of the service class, and <tx:annotation-driven/> goes in the application-context.xml file.
Then, whenever anyone calls the service class's method, Spring starts the transaction and handles the commit and rollback.
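For illustration, a service wrapper along these lines (the class and method names here are made up) lets Spring open the transaction and flush/commit for you. Note that for the Spring-managed transaction to apply, the DAO should obtain its session with sessionFactory.getCurrentSession() rather than openSession():
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class CustomerService {

    @Autowired
    private GetStartedDAOImpl getStartedDAOImpl;

    // Spring starts a transaction before this method and commits (or rolls back) when it
    // returns, which flushes the Hibernate session and issues the INSERT/UPDATE.
    @Transactional
    public void register(Customer customer) {
        getStartedDAOImpl.save(customer);
    }
}
You also need a transaction manager bean wired to the same sessionFactory (e.g. org.springframework.orm.hibernate3.HibernateTransactionManager for Spring 3 + Hibernate 3) for <tx:annotation-driven/> to pick up.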

Related

Is it possible to use SpringData-JPA with a hibernate4.LocalSessionFactoryBean?

I am already using Hibernate 4 directly with a LocalSessionFactoryBean and a SessionFactory in my code.
I would now like to include Spring-Data-JPA in my code.
But Spring-Data needs an EntityManagerFactory to work, which can be configured through a LocalContainerEntityManagerFactoryBean. Can these Beans LocalSessionFactoryBean and LocalContainerEntityManagerFactoryBean coexist in one Spring project?
(Or can one be adapted by the other?)
What is the best practice?
Although they can coexist, it will be problematic, especially if you want them to participate in the same transaction. However, if you switch your logic around and configure a LocalContainerEntityManagerFactoryBean instead of a LocalSessionFactoryBean, you can use the HibernateJpaSessionFactoryBean to get access to the underlying SessionFactory.
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<!-- Your properties here -->
</bean>
<bean id="sessionFactory" class="org.springframework.orm.jpa.vendor.HibernateJpaSessionFactoryBean">
<property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>
Now you have both and can participate in the same transaction.
This solution is also documented in the Spring Data JPA reference guide in the FAQ section.
@Autowired
private EntityManager entitymanager;

public List<SpBooking> list() {
    System.out.println("******************************");
    @SuppressWarnings("unchecked")
    List<SpBooking> listUser = (List<SpBooking>) ((Session) entitymanager.getDelegate())
            .createCriteria(SpBooking.class)
            .list();
    for (SpBooking i : listUser)
        System.out.println("------------------" + i.getBookingId());
    return listUser;
}
And as of JPA 2.1, EntityManagerFactory.unwrap(java.lang.Class) provides a nice approach, documented here: https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/orm/jpa/vendor/HibernateJpaSessionFactoryBean.html
@Bean
public SessionFactory sessionFactory(@Qualifier("entityManagerFactory") EntityManagerFactory emf) {
    return emf.unwrap(SessionFactory.class);
}
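Similarly, if you only need the Session occasionally, EntityManager.unwrap(Class) gives it to you per call instead of the getDelegate() cast shown above. A small illustrative snippet (SpBooking is the entity from the snippet above; the method name is made up):
@PersistenceContext
private EntityManager entityManager;

@SuppressWarnings("unchecked")
public List<SpBooking> findBookings() {
    // JPA unwrap replaces the raw getDelegate() cast
    Session session = entityManager.unwrap(Session.class);
    return session.createCriteria(SpBooking.class).list();
}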

Spring Transactions not rolling back on Exception (Oracle JNDI datasource)

I am using annotation based transactions in a Spring MVC 3.1 project, and my transactions are not being rolled back when an exception is thrown.
Here is my Service code
@Service
public class ImportService {

    @Autowired
    ImportMapper importMapper;

    @Transactional(propagation=Propagation.REQUIRED, isolation=Isolation.READ_COMMITTED, rollbackFor=Throwable.class)
    public void processImport() throws ServiceException, DatabaseException {
        Import eventImport = new Import();
        createImport(eventImport);
        throw new ServiceException("");
    }

    @Transactional(propagation=Propagation.REQUIRED, isolation=Isolation.READ_COMMITTED, rollbackFor=Throwable.class)
    private void createImport(Import eventImport) throws DatabaseException {
        try {
            importMapper.createImport(eventImport);
        } catch (Exception e) {
            throw new DatabaseException(e);
        }
    }
}
So, hopefully, the createImport method should be rolled back after the exception is thrown. But unfortunately it's not.
I am defining my datasource in the server context.xml
<Resource name="datasource.import" auth="Container" type="javax.sql.DataSource"
maxActive="100" maxIdle="30" maxWait="10000"
username="user" password="password" driverClassName="oracle.jdbc.driver.OracleDriver"
url="jdbc:oracle:thin:#INFO" />
And I'm looking it up with JNDI:
<jee:jndi-lookup id="dataSource" jndi-name="datasource.import"/>
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource" />
</bean>
<tx:annotation-driven />
I'm using an Oracle database, and the JDBC spec says that auto commit is true by default. I thought that if I set it to false explicitly that would help, but I can't figure out how to do that.
Is there any way to get rollbacks working while looking up the Oracle datasource via JNDI?
Please be aware that by default Spring's transaction management rolls back transactions only for unchecked exceptions (RuntimeException). If you wish to also roll back on checked exceptions, you need to say so explicitly.
When using annotations as the attribute source, provide the rollbackFor attribute with the list of exception classes that should cause a transaction to be rolled back (quote from the JavaDoc):
/**
 * Defines zero (0) or more exception {@link Class classes}, which must be a
 * subclass of {@link Throwable}, indicating which exception types must cause
 * a transaction rollback.
 * <p>This is the preferred way to construct a rollback rule, matching the
 * exception class and subclasses.
 * <p>Similar to {@link org.springframework.transaction.interceptor.RollbackRuleAttribute#RollbackRuleAttribute(Class clazz)}
 */
Class<? extends Throwable>[] rollbackFor() default {};
Note also that when a @Transactional method is invoked by another method of the same class, the inner annotation is not applied, because the call never goes through the Spring proxy (and private methods are not proxied at all). You could either remove the inner @Transactional and rely on the outer transaction, or move createImport into a separate bean, as sketched below.
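A minimal sketch of that refactoring, reusing the mapper and exception types from the question: once createImport lives in its own bean, the call goes through the transactional proxy and its settings (including rollbackFor) actually apply.
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ImportPersistenceService {

    @Autowired
    ImportMapper importMapper;

    // public and called from another bean, so the proxy can intercept it
    @Transactional(propagation = Propagation.REQUIRED, rollbackFor = Throwable.class)
    public void createImport(Import eventImport) throws DatabaseException {
        try {
            importMapper.createImport(eventImport);
        } catch (Exception e) {
            throw new DatabaseException(e);
        }
    }
}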

Spring Data JPA repositories, transactions and eventing in Spring

The amount of grey hair has dramatically increased in the last couple of days while trying to resolve the following problem. I'm using Spring Data JPA repositories in custom event listeners that use the simple Spring 3.2 eventing mechanism. The problem I'm having is that if ListenerA creates an entity and calls assetRepository.save(entity) or assetRepository.saveAndFlush(entity), subsequent calls to retrieve this same entity from another listener fail. The cause seems to be that ListenerB cannot find the original entity in the database; it still seems to be in Hibernate's cache.
The trigger for ListenerB to look up the entity is an event fired as a result of a runnable task executing on a thread pool.
Here is my configuration:
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="persistenceUnitName" value="spring-jpa" />
<property name="jpaVendorAdapter">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
<property name="generateDdl" value="false" />
<property name="database" value="#{appProps.database}" />
</bean>
</property>
<property name="jpaProperties">
<props>
<prop key="hibernate.dialect">org.hibernate.dialect.Oracle10gDialect</prop>
<prop key="hibernate.hbm2ddl.auto">#{appProps['hibernate.hbm2ddl.auto']}</prop>
<prop key="hibernate.show_sql">#{appProps['hibernate.show_sql']}</prop>
<prop key="hibernate.format_sql">#{appProps['hibernate.format_sql']}</prop>
<prop key="hibernate.search.default.directory_provider">org.hibernate.search.store.impl.FSDirectoryProvider</prop>
<prop key="hibernate.search.default.indexBase">#{appProps.indexLocation}</prop>
<prop key="hibernate.search.lucene_version">#{appProps['hibernate.search.lucene_version']}</prop>
</props>
</property>
</bean>
<tx:annotation-driven transaction-manager="transactionManager" />
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory" />
<property name="jpaDialect">
<bean class="org.springframework.orm.jpa.vendor.HibernateJpaDialect" />
</property>
</bean>
I'm omitting the dataSource configuration which is an instance of ComboPooledDataSource that defines connection to Oracle database. As a side note, component scanning is used and the project is Spring MVC.
Now Java classes.
ListenerA
@Service
public class ListenerA implements ApplicationListener<FileUploadedEvent> {

    @Autowired
    private AssetRepository assetRepository;

    @Autowired
    private ExecutorService executor; // Triggers runnable task on a Job in Spring's TaskExecutor

    @Override
    @Transactional
    public void onApplicationEvent(FileUploadedEvent event) {
        Asset target = event.getTarget();
        Job job = new Job(target);
        assetRepository.save(job);
        executor.execute(job);
    }
}
ListenerB
@Service
public class ListenerB implements ApplicationListener<JobStartedEvent> {

    @Autowired
    private AssetRepository assetRepository;

    @Override
    @Transactional
    public void onApplicationEvent(JobStartedEvent event) {
        String id = event.getJobId();
        Job job = assetRepository.findOne(id); // at this point we can not find the job, returns null
        job.setStartTime(new DateTime());
        job.setStatus(Status.PROCESSING);
        assetRepository.save(job);
    }
}
JobStartedEvent is fired from a runnable task within TaskExecutor.
What am I doing wrong here? I have tried to use a custom event publisher that is transaction aware, but that doesn't seem to solve the problem. I have also tried to wire the appropriate service instead of the data repository and to remove the @Transactional annotations from the listeners, which also failed. Any reasonable suggestions on how to solve the problem would be welcome.
I have managed to resolve the problem thanks to the hint from @Kresimir Nesek. The solution was to replace the Spring Data repositories with appropriate services.
Here are modified classes.
Listener A
@Service
public class ListenerA implements ApplicationListener<FileUploadedEvent> {

    @Autowired
    private JobService service;

    @Autowired
    private ExecutorService executor; // Triggers runnable task on a Job in Spring's TaskExecutor

    @Override
    public void onApplicationEvent(FileUploadedEvent event) {
        Job job = service.initJobForExecution(event.getTarget());
        executor.execute(job);
    }
}
In the JobService, the method initJobForExecution(Asset target) had to be annotated with @Transactional(propagation=Propagation.REQUIRES_NEW) for everything to work.
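The relevant part of that JobService might look roughly like this (the method body is an assumption based on the listener above); REQUIRES_NEW makes the job row commit in its own transaction before the executor thread tries to read it:
@Service
public class JobService {

    @Autowired
    private AssetRepository assetRepository;

    // committed in its own transaction when this method returns,
    // so the worker thread can find the Job in the database
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public Job initJobForExecution(Asset target) {
        Job job = new Job(target);
        return assetRepository.save(job);
    }
}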
Listener B
@Service
public class ListenerB implements ApplicationListener<JobStartedEvent> {

    @Autowired
    private JobService service;

    @Override
    public void onApplicationEvent(JobStartedEvent event) {
        service.updateStatus(event.getJobId(), Status.PROCESSING);
    }
}
While this is slightly old, I ran into the same problem, but now with Spring 4.1.1.RELEASE, Spring Data JPA 1.7.0 and Hibernate 4.3.5.Final.
My scenario occurred during testing, with some tests failing. Our problems were caused by H2 in single-connection mode, asynchronous event broadcasting, and event transactionality.
Solutions:
1. The first problem was due to a transaction timeout and was solved by adding MVCC=true to the H2 URL string. See: https://stackoverflow.com/a/6357183/941187
2. Asynchronous events were causing issues during tests since they executed on different threads. In the event configuration, a task executor and thread pool were used. To fix this, we provided an overriding configuration bean that uses SyncTaskExecutor as the task executor, which makes all events occur synchronously.
3. The event transactionality was tricky. In our framework, events get broadcast from within a transaction (@Transactional). The event was then handled on another thread, outside of the transaction context. This introduced a race condition, since the handler often depended on the object from the transaction having been committed. We didn't notice the problem on our Windows development machines, but it became apparent when deployed to production on Linux. The solution uses TransactionSynchronizationManager.registerSynchronization() with an implementation of TransactionSynchronization.afterCommit() to broadcast the event after committing (a minimal sketch follows this list). See http://www.javacodegeeks.com/2013/04/synchronizing-transactions-with-asynchronous-events-in-spring.html for more info and examples.
4. Related to point 3, we had to add @Transactional(propagation = REQUIRES_NEW) to some of the service methods called from some of the event handlers.
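A minimal sketch of point 3 (the assetRepository and applicationEventPublisher fields, and the JobStartedEvent constructor/getters, are placeholders; TransactionSynchronizationAdapter comes from org.springframework.transaction.support): register a synchronization so the event is only published after the surrounding transaction has committed.
@Transactional
public void startJob(final Job job) {
    assetRepository.save(job);
    // publish only after the surrounding transaction has committed,
    // so handlers on other threads see the committed row
    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronizationAdapter() {
        @Override
        public void afterCommit() {
            applicationEventPublisher.publishEvent(new JobStartedEvent(job.getId()));
        }
    });
}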
Hopefully this helps some late comers.

Transactions not working with Spring 3.1 – H2 – JUnit 4 – Hibernate 3.2

I have the following test..
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {"/schedule-agents-config-context.xml"})
@TransactionConfiguration(transactionManager = "transactionManager", defaultRollback = true)
@Transactional
public class H2TransactionNotWorkingTest extends SubmitAgentIntegratedTestBase {

    private static final int FIVE_SUBMISSIONS = 5;

    @Autowired
    private ApplicationSubmissionInfoDao submissionDao;

    private FakeApplicationSubmissionInfoRepository fakeRepo;

    @Before
    public void setUp() {
        fakeRepo = fakeRepoThatNeverFails(submissionDao, null);
        submitApplication(FIVE_SUBMISSIONS, fakeRepo);
    }

    @Test
    @Rollback(true)
    public void shouldSaveSubmissionInfoWhenFailureInDatabase() {
        assertThat(fakeRepo.retrieveAll(), hasSize(FIVE_SUBMISSIONS));
    }

    @Test
    @Rollback(true)
    public void shouldSaveSubmissionInfoWhenFailureInXmlService() {
        assertThat(fakeRepo.retrieveAll().size(), equalTo(FIVE_SUBMISSIONS));
    }
}
...and the following config...
<jdbc:embedded-database id="dataSource" type="H2">
<jdbc:script location="classpath:/db/h2-schema.sql" />
</jdbc:embedded-database>
<tx:annotation-driven transaction-manager="transactionManager"/>
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource"/>
</bean>
<bean id="transactionalSessionFactory"
class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
<property name="dataSource" ref="dataSource"/>
<property name="hibernateProperties">
<props>
<prop key="hibernate.dialect">org.hibernate.dialect.H2Dialect</prop>
<prop key="hibernate.show_sql">false</prop>
<prop key="hibernate.cache.use_second_level_cache">false</prop>
<prop key="hibernate.cache.use_query_cache">false</prop>
</props>
</property>
<property name="namingStrategy">
<bean class="org.hibernate.cfg.ImprovedNamingStrategy"/>
</property>
<property name="configurationClass" value="org.hibernate.cfg.AnnotationConfiguration"/>
<property name="packagesToScan" value="au.com.mycomp.life.snapp"/>
</bean>
<bean id="regionDependentProperties" class="org.springframework.core.io.ClassPathResource">
<constructor-arg value="region-dependent-service-test.properties"/>
</bean>
I have also set auto commit to false in the sql script
SET AUTOCOMMIT FALSE;
There is no REQUIRES_NEW in the code.
Why is the rollback not working in the test?
Cheers
Prabin
I have faced the same problem, but I have finally solved it, although I don't use Hibernate (that shouldn't really matter).
The key to making it work was to extend the proper Spring unit test class, i.e. AbstractTransactionalJUnit4SpringContextTests. Note the "Transactional" in the class name. The skeleton of a working transactional unit test class thus looks like:
@ContextConfiguration(locations = {"classpath:/com/.../testContext.xml"})
public class Test extends AbstractTransactionalJUnit4SpringContextTests {

    @Test
    @Transactional
    public void test() {
    }
}
The associated XML context file has the following items contained:
<jdbc:embedded-database id="dataSource" type="H2" />
<tx:annotation-driven transaction-manager="transactionManager"/>
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource"></property>
</bean>
Using this setup, the modifications made by each test method are properly rolled back.
Regards, Ola
I'm experiencing similar problems; I'm also using TestNG + Spring test support and Hibernate. What happens is that Hibernate disables autocommit on the connection before the transaction begins, and it remembers the original autocommit setting:
org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction:
@Override
protected void doBegin() {
    try {
        if ( managedConnection != null ) {
            throw new TransactionException( "Already have an associated managed connection" );
        }
        managedConnection = transactionCoordinator().getJdbcCoordinator().getLogicalConnection().getConnection();
        wasInitiallyAutoCommit = managedConnection.getAutoCommit();
        LOG.debugv( "initial autocommit status: {0}", wasInitiallyAutoCommit );
        if ( wasInitiallyAutoCommit ) {
            LOG.debug( "disabling autocommit" );
            managedConnection.setAutoCommit( false );
        }
    }
    catch( SQLException e ) {
        throw new TransactionException( "JDBC begin transaction failed: ", e );
    }
    isDriver = transactionCoordinator().takeOwnership();
}
Later on, after rolling back the transaction, it will release the connection. In doing so, Hibernate will also restore the original autocommit setting on the connection (so that others who might be handed the same connection start with the original setting). However, setting autocommit during a transaction triggers an explicit commit; see the JavaDoc.
In the code below you can see this happening. The rollback is issued and finally the connection is released in releaseManagedConnection. Here the autocommit is re-set, which triggers a commit:
org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction:
@Override
protected void doRollback() throws TransactionException {
    try {
        managedConnection.rollback();
        LOG.debug( "rolled JDBC Connection" );
    }
    catch( SQLException e ) {
        throw new TransactionException( "unable to rollback against JDBC connection", e );
    }
    finally {
        releaseManagedConnection();
    }
}

private void releaseManagedConnection() {
    try {
        if ( wasInitiallyAutoCommit ) {
            LOG.debug( "re-enabling autocommit" );
            managedConnection.setAutoCommit( true );
        }
        managedConnection = null;
    }
    catch ( Exception e ) {
        LOG.debug( "Could not toggle autocommit", e );
    }
}
This should not be a problem normally, because afaik the transaction should have ended after the rollback. Moreover, even if a commit is issued after a rollback, it should not commit any changes if nothing happened between the rollback and the commit; from the javadoc on commit:
Makes all changes made since the previous commit/rollback permanent
and releases any database locks currently held by this Connection
object. This method should be used only when auto-commit mode has been
disabled.
In this case there were no changes between the rollback and the commit, since the commit (triggered indirectly by re-setting autocommit) happens only a few statements later.
A workaround seems to be to disable autocommit up front. This avoids restoring autocommit (since it was not enabled in the first place) and thus prevents the commit from happening. You can do this by manipulating the id of the embedded datasource bean: the id is not only used to identify the datasource, but also as the database name:
<jdbc:embedded-database id="dataSource;AUTOCOMMIT=OFF" type="H2"/>
This will create a database with the name "dataSource". The extra parameter is interpreted by H2. Spring will also create a bean with the name "dataSource;AUTOCOMMIT=OFF". If you depend on bean names for injection, you can create an alias to make it cleaner:
<alias name="dataSource;AUTOCOMMIT=OFF" alias="dataSource"/>
(there isn't a cleaner way to manipulate the embedded-database namespace config; I wish the Spring team had made this a bit more configurable)
Note: disabling the autocommit via the script (<jdbc:script location="...") might not work, since there is no guarantee that the same connection will be re-used for your test.
Note: this is not a real fix but merely a workaround. There is still something wrong that causes the data to be committed after a rollback occurred.
----EDIT----
After searching I found out the real problem. If you are using HibernateTransactionManager (as I was doing) and you use your database via the SessionFactory (Hibernate) and directly via the DataSource (plain JDBC), you should pass both the SessionFactory and the DataSource to the HibernateTransactionManager. From the Javadoc:
Note: To be able to register a DataSource's Connection for plain JDBC code, this instance needs to be aware of the DataSource (setDataSource(javax.sql.DataSource)). The given DataSource should obviously match the one used by the given SessionFactory.
So eventually I did this:
<bean id="transactionManager" class="org.springframework.orm.hibernate4.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory" />
<property name="dataSource" ref="dataSource" />
</bean>
And everything worked for me.
Note: the same goes for JpaTransactionManager! If you both use the EntityManager and perform raw JDBC access via the DataSource, you should supply the DataSource separately next to the EMF. Also don't forget to use DataSourceUtils to obtain the connection (or JdbcTemplate, which uses DataSourceUtils internally to obtain the connection).
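For example, a raw-JDBC helper might obtain and release its connection like this (the table and column names are made up), so that it joins the Spring-managed transaction instead of autocommitting on its own connection:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DataSourceUtils;

public void markProcessing(DataSource dataSource, String jobId) throws SQLException {
    // getConnection() returns the connection bound to the current transaction, if any
    Connection con = DataSourceUtils.getConnection(dataSource);
    try (PreparedStatement ps = con.prepareStatement(
            "update job set status = ? where id = ?")) {
        ps.setString(1, "PROCESSING");
        ps.setString(2, jobId);
        ps.executeUpdate();
    } finally {
        // releases (rather than closes) the connection when it is transaction-bound
        DataSourceUtils.releaseConnection(con, dataSource);
    }
}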
----EDIT----
Aight, while the above did solve my problem, it is not the real cause after all :)
In normal cases when using Spring's LocalSessionFactoryBean, setting the datasource will have no effect since it's done for you.
If the SessionFactory was configured with LocalDataSourceConnectionProvider, i.e. by Spring's LocalSessionFactoryBean with a specified "dataSource", the DataSource will be auto-detected: You can still explicitly specify the DataSource, but you don't need to in this case.
In my case the problem was that we created a caching factory bean that extended LocalSessionFactoryBean. We only use this during testing to avoid booting the SessionFactory multiple times. As told before, Spring test support does boot multiple application contexts if the resource key is different. This caching mechanism mitigates the overhead completely and ensures only 1 SF is loaded.
This means that the same SessionFactory is returned for different booted application contexts. Also, the datasource passed to the SF will be the datasource from the first context that booted the SF. This is all fine, but the DataSource itself is a new "object" for each new application context. This creates a discrepancy:
The transaction is started by the HibernateTransactionManager. The datasource used for transaction synchronization is obtained from the SessionFactory (so again: the cached SessionFactory, with the DataSource instance from the application context the SessionFactory was initially loaded from). When you use the DataSource directly in your test (or production code), you'll be using the instance belonging to the app context active at that point. That instance does not match the instance used for transaction synchronization (extracted from the SF). This results in problems, as the connection obtained will not properly participate in the transaction.
By explicitly setting the datasource on the transactionmanager this appeared to be solved since the post initialization will not obtain the datasource from the SF but use the injected one instead. The appropriate way for me was to adjust the caching mechanism and replace the datasource in the cached SF with the one from the current appcontext each time the SF was returned from cache.
Conclusion: you can ignore my post:) as long as you're using HibernateTransactionManager or JtaTransactionManager in combination with some kind of Spring support factory bean for the SF or EM you should be fine, even when mixing vanilla JDBC with Hibernate. In the latter case don't forget to obtain connections via DataSourceUtils (or use JDBCTemplate).
Try this:
remove the org.springframework.jdbc.datasource.DataSourceTransactionManager
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="dataSource"/>
</bean>
and replace it with the org.springframework.orm.jpa.JpaTransactionManager
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="dataSource" ref="dataSource"/>
</bean>
or inject an EntityManagerFactory instead:
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory" />
</bean>
You then need an EntityManagerFactory, like the following:
<bean id="entityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="jpaVendorAdapter">
<bean id="jpaAdapter"
class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
<property name="showSql" value="true" />
<property name="generateDdl" value="true" />
</bean>
</property>
</bean>
You haven't shown all the pieces to the puzzle. My guess at this point would be that your ApplicationSubmissionInfoDao is transactional and is committing on its own, though I'd think that would conflict with the test transactions if everything were configured properly. To get more of an answer, ask a more complete question. The best thing would be to post an SSCCE.
Thanks Ryan
The test code is something like this.
@Test
@Rollback(true)
public void shouldHave5ApplicationSubmissionInfo() {
    for (int i = 0; i < 5; i++) {
        hibernateTemplate.saveOrUpdate(new ApplicationSubmissionInfoBuilder()
                .with(NOT_PROCESSED)
                .build());
    }
    assertThat(repo.retrieveAll(), hasSize(5));
}

@Test
@Rollback(true)
public void shouldHave5ApplicationSubmissionInfoAgainButHas10() {
    for (int i = 0; i < 5; i++) {
        hibernateTemplate.saveOrUpdate(new ApplicationSubmissionInfoBuilder()
                .with(NOT_PROCESSED)
                .build());
    }
    assertThat(repo.retrieveAll(), hasSize(5));
}
I figured out that an embedded DB defined using jdbc:embedded-database doesn't have transaction support. When I used Commons DBCP to define the datasource and set default auto-commit to false, it worked.
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" scope="singleton">
<property name="driverClassName" value="org.h2.Driver" />
<property name="url" value="jdbc:h2:~/snappDb"/>
<property name="username" value="sa"/>
<property name="password" value=""/>
<property name="defaultAutoCommit" value="false" />
<property name="connectionInitSqls" value=""/>
</bean>
None of the above worked for me!
However, the stack I am using is [spring-test 4.2.6.RELEASE, spring-core 4.2.6.RELEASE, spring-data 1.10.1.RELEASE].
The thing is, using any unit test class annotated with @RunWith(SpringJUnit4ClassRunner.class) results in auto-rollback behaviour, by design of the Spring test library.
Check org.springframework.test.context.transaction.TransactionalTestExecutionListener >> isDefaultRollback.
To overcome this behavior, just annotate the unit test class with
@Rollback(value = false)
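For illustration (the class name is a placeholder, and the context file is reused from the question above), a test class that keeps its inserts would look roughly like this; with spring-test 4.2+ the class-level @Rollback(false) disables the default rollback:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = {"/schedule-agents-config-context.xml"})
@Transactional
@Rollback(value = false)
public class KeepDataAfterTest {

    @Test
    public void insertsAreCommittedWhenTheTestEnds() {
        // persist something here; it is committed instead of rolled back
    }
}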

Ability to switch Persistence Unit dynamically within the application (JPA)

My application's data access layer is built using Spring and EclipseLink, and I am currently trying to implement the following feature: the ability to switch the current/active persistence unit dynamically for a user. I tried various options and finally ended up doing the following.
In persistence.xml, declare multiple PUs. Create a class with as many EntityManagerFactory attributes as there are PUs defined. This acts as a factory and returns the appropriate EntityManager based on my logic:
public class MyEntityManagerFactory {

    @PersistenceUnit(unitName="PU_1")
    private EntityManagerFactory emf1;

    @PersistenceUnit(unitName="PU_2")
    private EntityManagerFactory emf2;

    public EntityManager getEntityManager(int releaseId) {
        // Logic goes here to return the appropriate EntityManager
    }
}
My spring-beans XML looks like this:
<!-- First persistence unit -->
<bean class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean" id="emFactory1">
<property name="persistenceUnitName" value="PU_1" />
</bean>
<bean class="org.springframework.orm.jpa.JpaTransactionManager" id="transactionManager1">
<property name="entityManagerFactory" ref="emFactory1"/>
</bean>
<tx:annotation-driven transaction-manager="transactionManager1"/>
The above section is repeated for the second PU (with names like emFactory2, transactionManager2 etc).
I am a JPA newbie and I know that this is not the best solution. I appreciate any assistance in implementing this requirement in a better/elegant way!
Thanks!
First of all, thanks to user332768 and bert. I tried using AbstractRoutingDataSource as mentioned in the link provided by bert, but got lost trying to hook up my JPA layer (EclipseLink). I reverted to my older approach with some modifications. The solution looks cleaner (IMHO) and is working fine (switching databases at runtime and also writing to multiple databases in the same transaction).
public class MyEntityManagerFactoryImpl implements MyEntityManagerFactory, ApplicationContextAware {
private HashMap<String, EntityManagerFactory> emFactoryMap;
public EntityManager getEntityManager(String releaseId) {
return SharedEntityManagerCreator.createSharedEntityManager(emFactoryMap.get(releaseId));
}
@Override
public void setApplicationContext(ApplicationContext applicationContext)
throws BeansException {
Map<String, LocalContainerEntityManagerFactoryBean> emMap = applicationContext.getBeansOfType(LocalContainerEntityManagerFactoryBean.class);
Set<String> keys = emMap.keySet();
EntityManagerFactory entityManagerFactory = null;
String releaseId = null;
emFactoryMap = new HashMap<String, EntityManagerFactory>();
for (String key:keys) {
releaseId = key.split("_")[1];
entityManagerFactory = emMap.get(key).getObject();
emFactoryMap.put(releaseId, entityManagerFactory);
}
}
}
I now inject my DAOs with a (singleton) instance of MyEntityManagerFactoryImpl. The DAO then simply calls getEntityManager with the required release and gets the correct EntityManager for that database. (Note that I am now using application-managed EntityManagers, and hence I have to close them explicitly in my DAO; a small sketch follows.)
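A small sketch of that DAO usage (the entity and id types are placeholders); as described above, the EntityManager obtained from the factory is closed in the DAO:
public Asset find(String releaseId, Long assetId) {
    EntityManager em = myEntityManagerFactory.getEntityManager(releaseId);
    try {
        return em.find(Asset.class, assetId);
    } finally {
        em.close();  // application-managed per the approach above, so the DAO closes it
    }
}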
I also moved to a JTA transaction manager (to manage transactions across multiple databases).
This is how my Spring XML looks now:
...
<bean class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean" id="em_Rel1">
<property name="persistenceUnitName" value="PU1" />
</bean>
<bean class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean" id="em_Rel2">
<property name="persistenceUnitName" value="PU2" />
</bean>
<bean class="org.springframework.transaction.jta.JtaTransactionManager" id="jtaTransactionManager">
</bean>
<tx:annotation-driven transaction-manager="jtaTransactionManager"/>
....
Cheers! (comments are welcome)
I am not sure if this is a clean method. Instead of declaring the EntityManagerFactory multiple times, we can use the Spring application context to get the EntityManagerFactory beans declared in the Spring application.xml:
hm = applicationContext.getBeansOfType(org.springframework.orm.jpa.LocalEntityManagerFactoryBean.class);
EntityManagerFactory emf = ((org.springframework.orm.jpa.LocalEntityManagerFactoryBean) hm.get("&emf1")).getNativeEntityManagerFactory();
EntityManagerFactory emf2 = ((org.springframework.orm.jpa.LocalEntityManagerFactoryBean) hm.get("&emf2")).getNativeEntityManagerFactory();
This is something I need to do in the future too; for this I have bookmarked Spring dynamic datasource routing:
http://blog.springsource.com/2007/01/23/dynamic-datasource-routing/
As far as I understand, this uses one PU, which gets assigned different DataSources. Perhaps it is helpful.
