Spring cache vs. cachemanager - spring

The following sample from the Spring manual confuses the heck out of me.
<bean id="cacheManager"
class="org.springframework.cache.ehcache.EhCacheCacheManager"
p:cache-manager-ref="ehcache"/>
<!-- Ehcache library setup -->
<bean id="ehcache"
class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean"
p:config-location="ehcache.xml"/>
The naming convention suggests that EhCacheManagerFactoryBean produces a cache manager, more precisely a net.sf.ehcache.CacheManager instance. Yet the bean is called ehcache, not ehCacheManager. The actual cacheManager bean, however, references this bean.
In prose you could say that one is the Ehcache cache manager while the other is the Spring cache manager (which is backed by the former).
It gets worse if you use the EhCacheFactoryBean:
<bean id="myCache"
      class="org.springframework.cache.ehcache.EhCacheFactoryBean">
    <property name="cacheManager">
        <ref local="ehcache" />
    </property>
</bean>
There's a property called cacheManager which references a bean called ehcache.
Did I misunderstand anything or is it really that confusing? Bad design or just bad naming in the example?

The Spring Framework recently introduced a caching abstraction, with org.springframework.cache.CacheManager as the central interface. This interface has a few built-in implementations, including:
ConcurrentMapCacheManager
EhCacheCacheManager
NoOpCacheManager
SimpleCacheManager
This design allows you to switch caching libraries without touching application code. As you can see, one of these built-in implementations is backed by EhCache. However, notice that EhCacheCacheManager is just a bridge between the Spring caching abstraction and the EhCache library, so it needs an existing net.sf.ehcache.CacheManager. You can either create an instance of this cache manager yourself or take advantage of the existing factory bean, namely EhCacheManagerFactoryBean.
I understand it's confusing because of the overlapping names, but it should be clear from the above which classes come from which library and why they are used.
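For reference, here is a rough sketch of the same two-layer wiring in Java config; it is not taken from the manual, but it shows which object backs which (the bean names mirror the XML above):
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.ehcache.EhCacheCacheManager;
import org.springframework.cache.ehcache.EhCacheManagerFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.ClassPathResource;

@Configuration
@EnableCaching
public class CachingConfig {

    // The EhCache library's own manager (net.sf.ehcache.CacheManager), built from ehcache.xml.
    @Bean
    public EhCacheManagerFactoryBean ehcache() {
        EhCacheManagerFactoryBean factory = new EhCacheManagerFactoryBean();
        factory.setConfigLocation(new ClassPathResource("ehcache.xml"));
        return factory;
    }

    // Spring's CacheManager, which is only a bridge backed by the EhCache one.
    @Bean
    public CacheManager cacheManager(net.sf.ehcache.CacheManager ehcache) {
        return new EhCacheCacheManager(ehcache);
    }
}
The name clash is exactly the two layers described above: ehcache() yields the native EhCache manager, while cacheManager() yields the Spring abstraction that wraps it.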

Related

Configure singleton CacheManager for multiple web applications with Spring Caching

I have multiple web applications deployed in Tomcat and a service jar shared in TOMCAT_HOME/lib/ext. All of the applications use Spring, and in the service jar I have beans annotated with Spring 3.1 caching annotations. I am using the Ehcache provider, and I want a single CacheManager used by all the web applications. If I define the Spring cache configuration at the web application level, caching works, but a separate cacheManager is created for every app/context. A 'shared' cache manager causes problems, because if one of those applications gets undeployed, the shared cacheManager is shut down. So I want a single CacheManager, configured in my service jar and used for all calls to methods on beans from the web apps. My current attempt is to define the following configuration in the service jar's applicationContext.xml:
<context:annotation-config/>
<context:component-scan base-package="com.mycompany.app" />
<context:component-scan base-package="com.mycompany.portal.service" />
<bean id="cacheManager" class="org.springframework.cache.ehcache.EhCacheCacheManager" p:cacheManager-ref="ehCacheManager"/>
<bean id="ehCacheManager" class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean" p:configLocation="ehcache.xml" ></bean>
<cache:annotation-driven cache-manager="cacheManager"/>
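The annotated beans in the shared service jar look roughly like this (class, method and cache names are just placeholders, not my real code):
package com.mycompany.portal.service;

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class QuoteService {

    // The result goes into the "quotes" cache declared in ehcache.xml;
    // a repeated call with the same key should skip the expensive lookup.
    @Cacheable("quotes")
    public String findQuote(String key) {
        return expensiveLookup(key);
    }

    private String expensiveLookup(String key) {
        return "quote for " + key; // stand-in for the real work
    }
}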
I have defined parent application context via beanRefContext.xml:
<bean id="service.parent.context" class="org.springframework.context.support.ClassPathXmlApplicationContext">
<constructor-arg>
<list>
<value>applicationContext.xml</value>
</list>
</constructor-arg>
</bean>
And I am using this context as the parent context for all of my web apps with the following context-param in each web app's web.xml:
<context-param>
    <param-name>parentContextKey</param-name>
    <param-value>service.parent.context</param-value>
</context-param>
The result is that this parent context is loaded, but caching doesn't work at all.
How can I solve this? Am I on the right track with defining the parent context in the service jar?
I don't think so. It looks like you are trying to have a single cache for multiple applications by "hacking" the root classloader.
If you need to share your cache across several applications, use a cache manager that supports that use case (i.e. that provides you a service you can reach from each application).

Spring conversion strategy in getBean for JAXB setterless collection

Question
How can I inject a bean containing a list generated by JAXB?
Detail
These lists have no setters.
You populate them through
getMyList().getList().add(stuff);
For standard Java Collections, you usually rely on spring-utils, but Spring does not support these JAXB lists.
Message: no matching editors or conversion strategy found
Context
WSDL-first - CXF server
mock responses are pulled from Spring Application context files
Hints
I'm reluctant to introduce a second JAXB runtime just for the sake of mock responses, especially considering this would involve generating a slew of new classes to model my domain objects (thereby duplicating the objects generated by wsdl2java).
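To illustrate, the generated wrapper classes look roughly like this (simplified; not the actual wsdl2java output):
import java.util.ArrayList;
import java.util.List;

public class MyOuterClass {

    protected MyList myList;

    public MyList getMyList() {
        return myList;
    }

    public void setMyList(MyList value) {
        this.myList = value;
    }

    public static class MyList {

        protected List<String> list;

        // JAXB only generates this getter; there is no setList(...) to use for injection.
        public List<String> getList() {
            if (list == null) {
                list = new ArrayList<String>();
            }
            return list;
        }
    }
}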
Try creating a Bean for the list:
<bean id="list" class="java.util.ArrayList">
<constructor-arg>
<list>
<ref bean="element1" />
<ref bean="element2" />
</list>
</constructor-arg>
</bean>
Then create a bean for the inner class using the A$B syntax:
<bean id="myList" class="myPackage.MyOuterClass$MyList">
    <property name="list" ref="list" />
</bean>
Finally, the outer class bean:
<bean id="myOuterClass" class="myPackage.MyOuterClass">
    <property name="myList" ref="myList" />
</bean>
My Solution:
I ended up using eclipselink MOXy.
The choice of MOXy was driven by the following characteristics:
MOXy allows root elements to be declared outside of the classes generated by CXF, so there is no need to interfere with those classes (see the sketch below).
MOXy, being a JAXB implementation, has no problem dealing with the way JAXB populates lists (without setters).
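As a rough sketch of how the first point plays out in code (file paths and names are placeholders, and this is only an illustration of the approach, not the actual project code), the mock responses are unmarshalled through a JAXBContext created with MOXy's external mapping file, so the wsdl2java output stays untouched:
import java.io.File;
import java.util.HashMap;
import java.util.Map;

import javax.xml.bind.JAXBContext;

import org.eclipse.persistence.jaxb.JAXBContextFactory;
import org.eclipse.persistence.jaxb.JAXBContextProperties;

public class MockResponseLoader {

    public Object loadMock(Class<?> responseType) throws Exception {
        Map<String, Object> properties = new HashMap<String, Object>();
        // The external eclipselink-oxm.xml declares the additional root elements,
        // so the CXF-generated classes are never modified.
        properties.put(JAXBContextProperties.OXM_METADATA_SOURCE, "mocks/eclipselink-oxm.xml");

        JAXBContext context = JAXBContextFactory.createContext(
                new Class<?>[] { responseType }, properties);

        // Unmarshal one canned response from the mock data file.
        return context.createUnmarshaller().unmarshal(new File("mocks/responses.xml"));
    }
}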
Remarks:
MOXy XPath support is still weak. I needed to access a specific XML element (a response) out of the total XML file (the list of possible mock responses) and hoped I could unmarshal only a portion of this XML file based on an XPath predicate, but this is not supported yet (in 2.5). Support is planned for 2.6.
I did not use any Spring JAXB front-end as a façade for MOXy, as my mock SEIs are already injected through Spring.
Using MOXy proved a pleasant experience and is quite easy to get started with.
I did not experience any collision between MOXy as a JAXB payload jar and the way CXF uses the JDK JAXB implementation for its SOAP layer.

Using Spring Transactions in MyBatis API

We have developed a data persistence framework using MyBatis. The framework uses plain MyBatis APIs. (We were prohibited from using mybatis-spring; please don't ask why.)
Now we have to integrate this persistence framework with another framework developed by other teams. This other framework heavily uses Spring transactions for everything. Our persistence framework's DAOs will be used by this framework within its own API, which means the Spring-managed transactions will be propagated to the MyBatis DAOs. It is expected that our MyBatis-based persistence framework should participate in Spring-managed transactions without any issues.
There are two options for us to make this work:
(1) Change our persistence framework to use the mybatis-spring module: change the DAOs to use mappers injected directly by Spring via Spring's SqlSessionFactoryBean. I built a small example simulating both frameworks and everything works without any issue. The problem with this approach is that it requires changing almost all the DAOs to use Spring-injected mappers and extensively re-testing the framework. We simply do not have the time available given the delivery timeline.
(2) Use mybatis-spring to define the SqlSessionFactory via Spring, setting the data source and transaction manager used by the other framework. Something like:
<bean id="smpDataSource" class="oracle.jdbc.pool.OracleDataSource" destroy-method="close">
<property name="connectionCachingEnabled" value="true" />
<property name="URL"> <value>${db.thin.url}</value></property>
<property name="user"> <value>${db.user}</value></property>
<property name="password"><value>${db.password}</value>
</property>
</bean>
<bean id="dbTransactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="smpDataSource" />
</bean>
<bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
<property name="dataSource" ref="smpDataSource" />
<property name="typeAliasesPackage" value="spike.smp51.domain" />
<property name="mapperLocations" value="classpath*:spike/smp51/mappers/*.xml" </bean>
Then in application code the MyBatis DAOs get the SqlSessionFactory from Spring like this:
public static SqlSessionFactory getSqlSessionFactory() throws Exception {
    DefaultSqlSessionFactory sessionFactory = (DefaultSqlSessionFactory) ctx.getBean("sqlSessionFactory");
    return sessionFactory;
}
All DAOs already use a SqlSessionFactory to open and close sessions. We would just replace the MyBatis-created SqlSessionFactory with the Spring-created one, so only a few lines need to change.
This approach is outlined here
http://mybatis.github.io/spring/using-api.html
The MyBatis documentation warns about this approach, specifically that it will not participate in Spring transactions.
When I tried the second approach, our framework was able to participate in Spring transactions. This is strange. Is the MyBatis documentation incorrect, then? I verified it extensively by creating various transaction boundaries using Spring transactions + AOP, and the MyBatis DAOs participated in Spring-managed transactions every time. Since this second approach would save us 90% of the development time, we would really like to use it, but we are worried because MyBatis warns against it. Has anyone tried this approach? Any feedback is greatly appreciated.
Did you ever get an answer on that?
I'm wondering whether the doc is talking about org.apache.ibatis.session.SqlSessionFactory from the plain MyBatis API, while the SqlSessionFactory you're using is produced by the mybatis-spring library's org.mybatis.spring.SqlSessionFactoryBean.
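For what it's worth, one low-impact way to remove the doubt entirely is to hand the DAOs a SqlSessionTemplate instead of having them open and close sessions themselves; SqlSessionTemplate is itself a SqlSession and is built to join the surrounding Spring-managed transaction. A rough sketch (the DAO and mapper names are made up):
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionTemplate;

public class AccountDao {

    private final SqlSession sqlSession;

    public AccountDao(SqlSessionFactory sqlSessionFactory) {
        // Thread-safe and transaction-aware; replaces the per-call
        // openSession()/close() pattern in the existing DAOs.
        this.sqlSession = new SqlSessionTemplate(sqlSessionFactory);
    }

    public Object findAccount(long id) {
        // Runs inside whatever Spring transaction is currently active.
        return sqlSession.selectOne("spike.smp51.mappers.AccountMapper.findById", id);
    }
}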

Why is autowiring required? What is the explanation of the concept of autowiring?

Why is autowiring required? What is the explanation for the concept of autowiring?
The @Autowired annotation in the Spring Framework.
Autowiring is not required, just convenient.
It means that if you have a property that requires an InterfaceA and a single bean has been declared in Spring that is of type InterfaceA, instead of using XML to manually "wire up" the relationship (setting a bean reference as a property of another), you can let Spring do the wiring for you.
This is a common question for beginners. Since the beans are injected using DI (setter injection, constructor injection), why do we need autowiring? Isn't autowiring doing the same thing?
The answer is that it saves you from writing more code.
If you are using an XML file, the autowire attribute saves you from writing the wiring code in the bean definition.
Please look at the code below.
Configuration code without Auto-wiring:
<bean id="employee" class="com.Employee">
<property name="name" value="Dexter"></property>
</bean>
<bean id="employeeService" class="com.EmployeeService">
<property name="employee" ref="employee"></property>
</bean>
Configuration code with Auto-wiring:
<bean id="employee" class="com.Employee">
<property name="name" value="Dexter"></property>
</bean>
<bean id="employeeService" class="com.EmployeeService" autowire="byName" />
Note that we did not have to write anything to refer to the employee property of EmployeeService, yet it was still injected. Autowiring makes the container search the bean configurations and wire up the collaborating beans, without the developer specifying the references explicitly.
If we use annotations, we don't have to write anything in the XML files at all, not even autowire="byName". Simply putting @Autowired on the bean's setter/field/constructor is sufficient.
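For example, an annotation-only version of the EmployeeService above could look roughly like this (a sketch, assuming component scanning is enabled):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class EmployeeService {

    // Spring finds the single Employee bean in the context and injects it;
    // no <property> element or autowire="byName" attribute is needed.
    @Autowired
    private Employee employee;

    public String describe() {
        return "Managing " + employee.getName();
    }
}

@Component
class Employee {

    private final String name = "Dexter";

    public String getName() {
        return name;
    }
}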

Correct way to get transactions using Spring Data Neo4j's simple object/graph mapping?

I'm using the simple object/graph mapping in Spring Data Neo4j 2.0, where I perform persistence operations using the Spring Data repository framework. I'm working with the repositories rather than with the Neo4jTemplate. I inject the repositories into my Spring Web MVC controllers, and the controllers call the repos directly. (No intermediate service layer; my operations are generally CRUDs and finder queries.)
When I do read operations, there are no issues. But when I do write operations, I get "NotInTransactionException". My understanding is that read ops in Neo4j don't require transactions, but write ops do.
What's the best way to get transactions into the picture here, assuming I want to stick with the simple OGM? I want to use @Transactional, but putting that on the various repository interfaces doesn't work. If I introduce an intermediate service tier between the controllers and the repositories and then annotate the service beans with @Transactional, it works, but I'm wondering whether there's a simpler way to do it. Without Spring Data, I'd typically have access to the DAO (repository) implementations, so I'd be able to annotate the concrete DAOs with @Transactional if I wanted to avoid a pass-through service tier. With Spring Data the repos are dynamically generated, so that doesn't appear to be an option.
First, note that having transactional DAOs is not generally a good practice. But if you don't have a service layer, then let it be on the DAOs.
Then, you can enable declarative transactions. Here's how I did it:
First, define an annotation called @GraphTransactional:
@Retention(RetentionPolicy.RUNTIME)
@Transactional("neo4jTransactionManager")
public @interface GraphTransactional {
}
Update: spring-data-neo4j has since added such an annotation, so you can reuse it instead of creating a new one: @Neo4jTransactional
Then, in applicationContext.xml, have the following (where neo4jdb is your EmbeddedGraphDatabase):
<bean id="neo4jTransactionManagerService"
class="org.neo4j.kernel.impl.transaction.SpringTransactionManager">
<constructor-arg ref="neo4jdb" />
</bean>
<bean id="neo4jUserTransactionService" class="org.neo4j.kernel.impl.transaction.UserTransactionImpl">
<constructor-arg ref="neo4jdb" />
</bean>
<bean id="neo4jTransactionManager"
class="org.springframework.transaction.jta.JtaTransactionManager">
<property name="transactionManager" ref="neo4jTransactionManagerService" />
<property name="userTransaction" ref="neo4jUserTransactionService" />
</bean>
<tx:annotation-driven transaction-manager="neo4jTransactionManager" />
Keep in mind that if you use another transaction manager as well, you'd have to specify order="2" for this annotation-driven definition, and also that you won't get two-phase commit if a single method is declared to be both SQL- and Neo4j-transactional.
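For completeness, usage would then look roughly like this (entity, repository and bean names are hypothetical); the write runs inside a Neo4j transaction instead of throwing NotInTransactionException:
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.neo4j.annotation.GraphId;
import org.springframework.data.neo4j.annotation.NodeEntity;
import org.springframework.data.neo4j.repository.GraphRepository;
import org.springframework.stereotype.Component;

@NodeEntity
class Person {
    @GraphId
    Long id;
    String name;
}

interface PersonRepository extends GraphRepository<Person> {
}

@Component
public class PersonWriter {

    @Autowired
    private PersonRepository personRepository;

    // save() is a write, so it needs the Neo4j transaction that
    // @GraphTransactional (or @Neo4jTransactional) wraps around it.
    @GraphTransactional
    public Person register(Person person) {
        return personRepository.save(person);
    }
}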
