We have developed a data persistence framework using MyBatis. The framework uses plain MyBatis APIs. (We were prohibited from using mybatis-spring; do not ask why.)
Now we have to integrate this persistence framework with another framework developed by other teams. That framework uses Spring transactions heavily for everything. Our persistence framework's DAOs will be used by this framework within its own API, which means Spring-managed transactions will be propagated to the MyBatis DAOs. Our MyBatis-based persistence framework is expected to participate in Spring-managed transactions without any issues.
There are two options for us to make this work:
(1) Change our persistence framework to use the mybatis-spring module: change the DAOs to use mappers injected directly by Spring via Spring's SqlSessionFactoryBean. I built a small example simulating both frameworks, and everything works without any issue. The problem with this approach is that it requires changing almost all the DAOs to use Spring-injected mappers and then extensively re-testing the framework. We simply do not have the time available given our delivery timeline.
(2) Use mybatis-spring, but only to define the SqlSessionFactory via Spring, set to the datasource and transaction manager used by the other framework. Something like:
<bean id="smpDataSource" class="oracle.jdbc.pool.OracleDataSource" destroy-method="close">
<property name="connectionCachingEnabled" value="true" />
<property name="URL"> <value>${db.thin.url}</value></property>
<property name="user"> <value>${db.user}</value></property>
<property name="password"><value>${db.password}</value>
</property>
</bean>
<bean id="dbTransactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="smpDataSource" />
</bean>
<bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
<property name="dataSource" ref="smpDataSource" />
<property name="typeAliasesPackage" value="spike.smp51.domain" />
<property name="mapperLocations" value="classpath*:spike/smp51/mappers/*.xml" </bean>
Then in application code the MyBatis DAOs get the SqlSessionFactory from Spring like this:
public static SqlSessionFactory getSqlSessionFactory() throws Exception {
    // ctx is the Spring ApplicationContext bootstrapped elsewhere
    DefaultSqlSessionFactory sessionFactory =
            (DefaultSqlSessionFactory) ctx.getBean("sqlSessionFactory");
    return sessionFactory;
}
All DAOs already use a SqlSessionFactory to open and close sessions, so we just replace the MyBatis-created SqlSessionFactory with the Spring-created one. That way we will have only a few lines of changes.
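For illustration, one of our DAOs looks roughly like this (UserDao, UserMapper.findUser, and User are hypothetical names, not our real classes); only where the SqlSessionFactory comes from changes under option 2:

import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;

public class UserDao {

    private final SqlSessionFactory sqlSessionFactory;

    public UserDao(SqlSessionFactory sqlSessionFactory) {
        this.sqlSessionFactory = sqlSessionFactory;
    }

    public User findUser(long id) {
        // Open and close the session per call, as all our DAOs do.
        SqlSession session = sqlSessionFactory.openSession();
        try {
            return session.selectOne("spike.smp51.mappers.UserMapper.findUser", id);
        } finally {
            session.close();
        }
    }
}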
This approach is outlined here
http://mybatis.github.io/spring/using-api.html
The MyBatis documentation warns about this approach, specifically that it will not participate in Spring transactions.
When I tried the second approach, our framework was able to participate in Spring transactions. This is strange: is the MyBatis documentation incorrect, then? I verified it extensively by creating various transaction boundaries using Spring transactions + AOP, and the MyBatis DAOs participate in the Spring-managed transactions every time. Since this second approach would save us 90% of the development time, we would really like to use it, but we are worried because MyBatis warns against it. Has anyone tried this approach? Any feedback is greatly appreciated.
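For reference, this is the shape of the check I ran (all names hypothetical): a Spring-managed service method marked @Transactional calls one of our plain-MyBatis DAOs and then throws, and I verified that the insert is rolled back:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class TransactionBoundaryCheck {

    private final UserDao userDao; // plain-MyBatis DAO from our framework (hypothetical name)

    public TransactionBoundaryCheck(UserDao userDao) {
        this.userDao = userDao;
    }

    // If the DAO really participates in the Spring transaction, the insert
    // below must be rolled back when the exception propagates.
    @Transactional
    public void createUserThenFail(User user) {
        userDao.insertUser(user);
        throw new RuntimeException("force rollback");
    }
}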
Did you get any feedback on that?
I'm wondering if the doc is talking about org.apache.ibatis.session.SqlSessionFactory from the core MyBatis API, while the SqlSessionFactory you're using comes from the mybatis-spring lib: org.mybatis.spring.SqlSessionFactoryBean.
We have a batch job running in Spring XD which reads from MongoDB using the standard MongoItemReader, which converts Mongo records to our domain model. Up to Spring XD version 1.1.3 this worked fine; however, in versions 1.2.0 and 1.2.1 the job fails with the following error (package names shortened):
java.lang.NoClassDefFoundError: c/s/r/b/b/domain/IndexId
    at c.s.r.b.b.domain.IndexId_Instantiator_hxmj4p.newInstance(Unknown Source) ~[na:na]
    at org.springframework.data.convert.BytecodeGeneratingEntityInstantiator$EntityInstantiatorAdapter.createInstance(BytecodeGeneratingEntityInstantiator.java:193) ~[spring-data-commons-1.10.0.RELEASE.jar:na]
    at org.springframework.data.convert.BytecodeGeneratingEntityInstantiator.createInstance(BytecodeGeneratingEntityInstantiator.java:76) ~[spring-data-commons-1.10.0.RELEASE.jar:na]
    at org.springframework.data.mongodb.core.convert.MappingMongoConverter.read(MappingMongoConverter.java:250) ~[spring-data-mongodb-1.7.0.RELEASE.jar:na]
    at org.springframework.data.mongodb.core.convert.MappingMongoConverter.read(MappingMongoConverter.java:231) ~[spring-data-mongodb-1.7.0.RELEASE.jar:na]
Looking into this, I found the threads "NoClassDefFoundError when making a query in spring-data-solr within a play framework application" and "NoClassDefFoundError after upgrading to 1.7.0.RELEASE", which suggest this is due to a change in spring-data-mongo 1.7.0 and the underlying spring-data-commons that switched the default entity instantiation technique to improve performance.
Based on the suggested fix in those threads, I've modified the mongo template in my job module's XML definition as follows, and this fixes the problem:
<bean id="mappingConverter" class="org.springframework.data.mongodb.core.convert.MappingMongoConverter">
<constructor-arg ref="dbRefResolver"/>
<constructor-arg ref="mongoMappingContext"/>
<property name="instantiators" ref="entityInstantiators" />
</bean>
<bean id="dbRefResolver" class="org.springframework.data.mongodb.core.convert.DefaultDbRefResolver">
<constructor-arg ref="mongoDbFactory"/>
</bean>
<bean id="mongoMappingContext" class="org.springframework.data.mongodb.core.mapping.MongoMappingContext"/>
<bean id="entityInstantiators" class="org.springframework.data.convert.EntityInstantiators">
<constructor-arg value="#{T(org.springframework.data.convert.ReflectionEntityInstantiator).INSTANCE}"/>
</bean>
<bean id="mongoTemplate" class="org.springframework.data.mongodb.core.MongoTemplate">
<constructor-arg name="mongoDbFactory" ref="mongoDbFactory" />
<constructor-arg name="mongoConverter" ref="mappingConverter" />
</bean>
However, this is verbose and obviously isn't an ideal fix. The problem doesn't show up in our job module integration test, so I have a hunch it's caused by a combination of the default entity instantiation change and the fact that when a module executes in Spring XD, the domain classes live in the module's class loader and are not visible to the Spring Data Mongo classes in XD's main class loader.
So should this be regarded as a bug in Spring XD or in Spring Data Mongo? One fix might be an improvement to Spring Data Mongo's mongo:mapping-converter XML configuration to allow forcing the use of the ReflectionEntityInstantiator, which would at least reduce the amount of XML needed above. Alternatively, maybe Spring XD should handle this scenario automatically?
I don't think there is anything we can do from the XD side since this is a custom job. We have to rely on the spring-data-mongodb functionality.
It looks like you're running into DATACMNS-710, which is fixed in Fowler SR1 (equivalent to Spring Data MongoDB 1.7.1). You might want to try the just-released Gosling release, too.
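If you manage the Spring Data MongoDB version yourself (assuming a Maven build; Spring XD module packaging may differ), the upgrade would be along these lines:

<dependency>
    <groupId>org.springframework.data</groupId>
    <artifactId>spring-data-mongodb</artifactId>
    <version>1.7.1.RELEASE</version>
</dependency>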
I have developed a small webapp using Spring MVC (3.1.3.RELEASE) and Hibernate 4.2.0.Final.
I'm trying to convert it to be a multi-tenant application.
Similar topics have been covered in other threads, but I couldn't find a definitive solution to my problem.
What I am trying to achieve is to design a web app which is able to:
Read a datasource configuration at startup (an XML file containing multiple datasource definitions, which is placed outside the WAR file and is neither the application context nor the Hibernate configuration file)
Create a session factory for each one of them (considering that each datasource is a database with a different schema).
How can I set my session factory's scope to session? (Or can I reuse the same session factory?)
Example:
URL for client a: http://project.com/a/login.html
URL for client b: http://project.com/b/login.html
When client "a" makes a request, read the datasource configuration file and create a session factory for client "a" using that XML file.
The same process is repeated when client "b" sends a request.
What I am looking for is how to implement datasource creation upon customer subscription without editing the Spring configuration file; it needs to be automated.
Here is the code I have so far. Can anyone tell me what modifications need to be made? Please give an answer with some example code; I am quite new to the Spring and Hibernate world.
Spring.xml
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
destroy-method="close" p:driverClassName="${jdbc.driverClassName}"
p:url="${jdbc.databaseurl}"
p:username="${jdbc.username}" p:password="${jdbc.password}" />
<bean id="sessionFactory"
class="org.springframework.orm.hibernate4.LocalSessionFactoryBean">
<property name="dataSource" ref="dataSource" />
<property name="configLocation">
<value>classpath:hibernate.cfg.xml</value>
</property>
<property name="hibernateProperties">
<props>
<prop key="hibernate.dialect">${jdbc.dialect}</prop>
<prop key="hibernate.show_sql">true</prop>
</props>
</property>
</bean>
<bean id="transactionManager"
class="org.springframework.orm.hibernate4.HibernateTransactionManager">
<property name="sessionFactory" ref="sessionFactory" />
JDBC.properties File
jdbc.driverClassName=com.mysql.jdbc.Driver
jdbc.dialect=org.hibernate.dialect.MySQLDialect
jdbc.databaseurl=jdbc:mysql://localhost:3306/Logistics
jdbc.username=root
jdbc.password=rot#pspl#12
hibernate.cfg.xml File
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE hibernate-configuration PUBLIC
"-//Hibernate/Hibernate Configuration DTD//EN"
"http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
<hibernate-configuration>
<session-factory>
<mapping class="pepper.logis.organizations.model.Organizaions" />
<mapping class="pepper.logis.assets.model.Assets" />
</session-factory>
</hibernate-configuration>
Thanks,
First create a Tenant table with a tenant_id and associate it with all users. Then you can fetch the tenant id when the user logs in and keep it in the session.
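A minimal sketch of that idea, assuming a Spring MVC controller and hypothetical User/UserService types that expose the tenant id stored with the user:

import javax.servlet.http.HttpSession;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RequestParam;

@Controller
public class LoginController {

    private final UserService userService; // hypothetical authentication service

    public LoginController(UserService userService) {
        this.userService = userService;
    }

    @RequestMapping(value = "/login", method = RequestMethod.POST)
    public String login(@RequestParam String username,
                        @RequestParam String password,
                        HttpSession session) {
        User user = userService.authenticate(username, password);
        // Keep the tenant id in the HTTP session for later datasource selection.
        session.setAttribute("tenantId", user.getTenantId());
        return "redirect:/home";
    }
}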
We are using AbstractRoutingDataSource to switch the DataSource for every request on Spring Boot. I think this is the hot-swappable targets/datasource approach mentioned by @bhantol above.
It solves our problem, but I don't think it is a sound solution. I suspect JNDI could be a better option than AbstractRoutingDataSource.
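For reference, a minimal sketch of the routing approach (TenantContext and the tenant keys are made-up names; the servlet filter that populates the context is not shown):

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

public class TenantRoutingDataSource extends AbstractRoutingDataSource {

    // Spring calls this on every connection checkout; the returned key
    // selects one of the targetDataSources configured on this bean.
    @Override
    protected Object determineCurrentLookupKey() {
        return TenantContext.getCurrentTenant();
    }
}

// Holds the tenant id for the current request, e.g. set by a servlet
// filter that parses it from the URL path (/a/..., /b/...).
class TenantContext {

    private static final ThreadLocal<String> CURRENT = new ThreadLocal<String>();

    static void setCurrentTenant(String tenantId) { CURRENT.set(tenantId); }

    static String getCurrentTenant() { return CURRENT.get(); }

    static void clear() { CURRENT.remove(); }
}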
Wondering what you ended up with.
Here are some ideas for you.
Option 1) Single Application Instance.
It is somewhat ambitious to do this given what you are actually trying to achieve.
The gist is to simply deploy the exact same application with a different context root on the same JVM. You can still tune the JVM as a whole, as you would with a truly multi-tenant application, but this comes at the expense of duplicated classes, contexts, local caching, start-up times, etc.
As of today, Spring Framework 4.0 does not provide much multi-tenancy support (other than hot-swappable targets/datasources). I am looking for a good framework, but it may be a wash to move away from Spring at this time for me.
Option 2) Multiple deployments of the same application (more practical as of today)
Just deploy the exact same application to the same application server JVM instance, or even to different ones.
If you use the same instance, you may need to bootstrap your app to pick up a DataSource based on which client the instance should serve, e.g. a client=a property would be enough to pick up an aDataSource or bDataSource. I myself ended up going with this approach.
If you have a different application server instance per client, you can just configure a different JNDI path and treat things generically. There is no need for a client=a property, because you are free to define each datasource differently under the same name.
We are using Spring to do JNDI look-up for our data access requirements. We make calls to multiple stored procedures (in Sybase). I wanted a way to control the auto-commit property of the connections while calling these stored procs, and I was able to achieve it with BasicDataSource (property name="defaultAutoCommit").
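For reference, the working pre-JNDI configuration looked something like this (the driver class and property placeholders are illustrative):

<bean id="procDataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy-method="close">
    <property name="driverClassName" value="com.sybase.jdbc4.jdbc.SybDriver" />
    <property name="url" value="${jdbc.url}" />
    <property name="username" value="${jdbc.username}" />
    <property name="password" value="${jdbc.password}" />
    <!-- every pooled connection is handed out with auto-commit switched off -->
    <property name="defaultAutoCommit" value="false" />
</bean>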
But I got stuck when we moved to doing the JNDI look-up (from WebLogic):
<jee:jndi-lookup jndi-name="WL_data_source" id="dataSource" />
<bean id="procedureHandlerBean" class="myclass">
<property name="procDataSource" ref="dataSource"/>
</bean>
After researching for a few days, it appears to me that I can use JtaTransactionManager. However, I am not sure whether I can use the transaction manager to configure auto-commit on a per-procedure-call basis (due to the different chained/unchained mode requirements of the different procs). Is there a clean way to achieve this? Thanks.
I have multiple datasources, and one database configured with JPA. I am using WebSphere 7. I want all these datasources to participate in global transactions. I am using the Spring configuration below, but the transactions are not working as a single global transaction: if one database fails, the other database still commits, which is not what is expected of a single global transaction. Can you please help me find where I am going wrong?
I have two datasources: one configured below (id="US_ICFS_DATASORCE") and another using JPA.
<jee:jndi-lookup id="entityManagerFactory" jndi-name="persistence/persistenceUnit"/>
<bean id="pabpp" class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor"/>
<bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager" />
<!-- Needed for @Transactional annotation -->
<tx:annotation-driven/>
<jee:jndi-lookup id="US_ICFS_DATASORCE"
jndi-name="jdbc/financing_tools_docgen_txtmgr"
cache="true"
resource-ref="true"
proxy-interface="javax.sql.DataSource" />
I have also added the following to web.xml:
<persistence-unit-ref>
<persistence-unit-ref-name>persistence/persistenceUnit</persistence-unit-ref-name>
<persistence-unit-name>persistenceUnit</persistence-unit-name>
</persistence-unit-ref>
<persistence-context-ref>
<persistence-context-ref-name>persistence/persistenceUnit</persistence-context-ref-name>
<persistence-unit-name>persistenceUnit</persistence-unit-name>
</persistence-context-ref>
Below is the code where I am using the transaction:
@Transactional
public TemplateMapping addTemplateMapping(User user, TemplateMapping templateMapping)
        throws TemplateMappingServiceException {
    ....
}
On WebSphere you should use this bean to hook into the WebSphere transaction manager:
<bean id="transactionManager"
class="org.springframework.transaction.jta.WebSphereUowTransactionManager"/>
See also this article
EDIT:
In order to use 2-phase commit (i.e. ensuring consistency across multiple resources), you will need to use XA data sources. See this article for details.
First of all, the data sources that participate in the global transaction must be of type javax.sql.XADataSource.
You also have to set the transaction type in your persistence unit to JTA (not RESOURCE_LOCAL).
And you need to inform your JPA implementation that you want to use global transactions.
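For example, the persistence unit would be declared along these lines (the JNDI name is a placeholder for your XA-capable datasource, and the JTA platform property shown is the Hibernate 4 variant; other JPA providers use different properties):

<persistence-unit name="persistenceUnit" transaction-type="JTA">
    <!-- placeholder JNDI name; must resolve to an XA-capable datasource -->
    <jta-data-source>jdbc/yourXaDataSource</jta-data-source>
    <properties>
        <!-- tells Hibernate to look up WebSphere's JTA transaction manager -->
        <property name="hibernate.transaction.jta.platform"
                  value="org.hibernate.service.jta.platform.internal.WebSphereExtendedJtaPlatform" />
    </properties>
</persistence-unit>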
I'm using the simple object/graph mapping in Spring Data Neo4j 2.0, where I perform persistence operations using the Spring Data repository framework. I'm working with the repositories rather than with the Neo4jTemplate. I inject the repositories into my Spring Web MVC controllers, and the controllers call the repos directly. (There is no intermediate service layer; my operations are generally CRUDs and finder queries.)
When I do read operations, there are no issues. But when I do write operations, I get "NotInTransactionException". My understanding is that read ops in Neo4j don't require transactions, but write ops do.
What's the best way to get transactions into the picture here, assuming I want to stick with the simple OGM? I want to use @Transactional, but putting that on the various repository interfaces doesn't work. If I introduce an intermediate service tier between the controllers and the repositories and annotate the service beans with @Transactional, it works, but I'm wondering whether there's a simpler way. Without Spring Data I'd typically have access to the DAO (repository) implementations, so I'd be able to annotate the concrete DAOs with @Transactional if I wanted to avoid a pass-through service tier. With Spring Data the repos are generated dynamically, so that doesn't appear to be an option.
First, note that having transactional DAOs is not generally a good practice. But if you don't have a service layer, then let it be on the DAOs.
Then, you can enable declarative transactions. Here's how I did it:
First, define an annotation called @GraphTransactional:
@Retention(RetentionPolicy.RUNTIME)
@Transactional("neo4jTransactionManager")
public @interface GraphTransactional {
}
Update: spring-data-neo4j has since added such an annotation, so you can reuse it instead of creating a new one: @Neo4jTransactional
Then, in applicationContext.xml, have the following (where neo4jdb is your EmbeddedGraphDatabase):
<bean id="neo4jTransactionManagerService"
class="org.neo4j.kernel.impl.transaction.SpringTransactionManager">
<constructor-arg ref="neo4jdb" />
</bean>
<bean id="neo4jUserTransactionService" class="org.neo4j.kernel.impl.transaction.UserTransactionImpl">
<constructor-arg ref="neo4jdb" />
</bean>
<bean id="neo4jTransactionManager"
class="org.springframework.transaction.jta.JtaTransactionManager">
<property name="transactionManager" ref="neo4jTransactionManagerService" />
<property name="userTransaction" ref="neo4jUserTransactionService" />
</bean>
<tx:annotation-driven transaction-manager="neo4jTransactionManager" />
Bear in mind that if you use another transaction manager as well, you will have to specify order="2" for this annotation-driven definition. Also note that you won't get two-phase commit if one method is declared to be both SQL- and Neo4j-transactional.
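For completeness, usage of the annotation then looks something like this (PersonService, PersonRepository, and Person are made-up names; the repository is an ordinary Spring Data Neo4j repository):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class PersonService {

    @Autowired
    private PersonRepository personRepository; // hypothetical Spring Data Neo4j repository

    // Write operations need a transaction in Neo4j; this binds the method
    // to the neo4jTransactionManager defined above.
    @GraphTransactional
    public Person rename(Long id, String newName) {
        Person person = personRepository.findOne(id);
        person.setName(newName);
        return personRepository.save(person);
    }
}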