More than one caching storage in a Spring Boot app

I am facing a strange issue. I have Hazelcast and Redis in my project, and suddenly all @Cacheable annotations are putting entries only into the Hazelcast cache, even when the particular cache name is configured via the Redis cache builder:
@Bean
fun redisCacheManagerBuilderCustomizer(): RedisCacheManagerBuilderCustomizer? {
    return RedisCacheManagerBuilderCustomizer { builder: RedisCacheManagerBuilder ->
        builder
            .withCacheConfiguration(
                MY_CACHE,
                RedisCacheConfiguration.defaultCacheConfig().entryTtl(Duration.ofDays(3))
            )
    }
}
Using cache:
@Cacheable(cacheNames = [CacheConfig.MY_CACHE])
@Cacheable(value = [CacheConfig.MY_CACHE])
Neither works; requests are forwarded to Hazelcast only. How can I solve this? By using a different cacheManager?

Typically, only one caching provider is used to cache data, such as in the service or data access tier of your Spring [Boot] application, using Spring's Cache Abstraction and its infrastructure components, such as the CacheManager and the caching annotations.
When multiple caching providers (e.g. Hazelcast and Redis) are on the classpath of your Spring Boot application, it may be necessary to declare which caching provider (e.g. Redis) you want to [solely] use for caching purposes. For this arrangement, Spring Boot lets you declare your intention with the spring.cache.type property, as explained in the reference documentation (see the first Tip). Valid values of this property are defined by the enumerated values of the CacheType enum.
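For example, in application.properties:

# pin Spring Boot's cache auto-configuration to a single provider
spring.cache.type=redis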
However, if you want to cache data using multiple caching providers at once, then you need to explicitly declare your intentions using this approach as well.
DISCLAIMER: It has been a while since I traced through Spring Boot auto-configuration where caching is concerned, specifically how it handles multiple caching providers on the application classpath when no specific caching provider has been declared (such as by explicitly setting the spring.cache.type property). However, and again, it may actually be your intention to use multiple caching providers in a single @Cacheable (or @CachePut) service or data access operation. If so, continue reading...
To do so, you typically use one of two approaches, which are loosely described in the core Spring Framework reference documentation.
One approach is to declare the cacheNames of the caches from each caching provider along with the CacheManager, like so:
@Service
class CustomerService {

    @Cacheable(cacheNames = { "cacheOne", "cacheTwo" }, cacheManager = "compositeCacheManager")
    public Customer findBy(String name) {
        // ...
    }
}
In this case, "cacheOne" would be the name of the Cache managed by caching provider one (e.g. Redis), and "cacheTwo" would be the name of the Cache managed by caching provider two (e.g. Hazelcast).
DISCLAIMER: You'd have to play around, but it might be possible to simply declare a single Cache name here (e.g. "Customers"), where the caches (or cache data structures in each caching provider) are named the same, and it would still work. I am not certain, but it seems logical this would work as well.
The key (no pun intended) to this example, however, is the declaration of the CacheManager via the cacheManager attribute of the @Cacheable annotation. As you know, the CacheManager is the Spring SPI infrastructure component used to find and manage Cache objects (caches from the caching providers) used for caching in your Spring managed beans (such as CustomerService).
I named this CacheManager "compositeCacheManager" deliberately. Spring's Cache Abstraction provides the CompositeCacheManager implementation, which, as the name suggests, composes multiple CacheManagers for use in a single cache operation.
Therefore, you could do the following in your Spring [Boot] application configuration:
@Configuration
class MyCachingConfiguration {

    @Bean
    RedisCacheManager cacheManager() {
        // ...
    }

    @Bean
    HazelcastCacheManager hazelcastCacheManager() {
        // ...
    }

    @Bean
    CompositeCacheManager compositeCacheManager(RedisCacheManager redis, HazelcastCacheManager hazelcast) {
        return new CompositeCacheManager(redis, hazelcast);
    }
}
NOTE: Notice the RedisCacheManager is the "default" CacheManager declaration and cache provider (implementation) used when no cache provider is explicitly declared in a caching operation, since the bean name is "cacheManager".
Alternatively, and perhaps more easily, you can choose to implement the CacheResolver interface instead. The Javadoc is rather self-explanatory. Be aware of the Thread-safety concerns.
In this case, you would simply declare a CacheResolver implementation in your configuration, like so:
@Configuration
class MyCachingConfiguration {

    @Bean
    CacheResolver customCacheResolver() {
        // return your custom CacheResolver implementation
    }
}
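For reference, a minimal sketch of what such an implementation might look like, routing each cache name to one of two providers (the routing rule, field names, and the "Customers" cache name here are hypothetical):

import java.util.ArrayList;
import java.util.Collection;

import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.cache.interceptor.CacheOperationInvocationContext;
import org.springframework.cache.interceptor.CacheResolver;

class CustomCacheResolver implements CacheResolver {

    private final CacheManager redisCacheManager;
    private final CacheManager hazelcastCacheManager;

    CustomCacheResolver(CacheManager redisCacheManager, CacheManager hazelcastCacheManager) {
        this.redisCacheManager = redisCacheManager;
        this.hazelcastCacheManager = hazelcastCacheManager;
    }

    @Override
    public Collection<? extends Cache> resolveCaches(CacheOperationInvocationContext<?> context) {
        Collection<Cache> caches = new ArrayList<>();
        for (String cacheName : context.getOperation().getCacheNames()) {
            // hypothetical rule: "Customers" is served by Redis, everything else by Hazelcast
            CacheManager cacheManager = "Customers".equals(cacheName)
                ? this.redisCacheManager
                : this.hazelcastCacheManager;
            caches.add(cacheManager.getCache(cacheName));
        }
        return caches;
    }
}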
Then in your application service components (beans), you would do:
@Service
class CustomerService {

    @Cacheable(cacheNames = "Customers", cacheResolver = "customCacheResolver")
    public Customer findBy(String name) {
        // ...
    }
}
DISCLAIMER: I have not tested either approach presented above, but I feel reasonably confident they should work as expected. Some slight modifications may be needed, but these should generally be the approaches to follow.
If you have any troubles, please post back in the comments and I will try to follow up.

Related

Is it possible to @CacheEvict keys that match a pattern?

Is there something along the lines of @CacheEvict(value = "FOO", key = "baz*") so that when the cache FOO contains keys baz_1 and baz_2 they get evicted?
Assuming that you have spring-boot-starter-cache as a dependency, Spring Boot auto-configures a CacheManager bean named cacheManager.
Also, assuming you have spring-boot-starter-data-redis as a dependency, RedisCacheManager is picked as the CacheManager implementation.
@CacheEvict (and the caching abstraction API) doesn't give you the option to evict by prefix, but using an AOP advice (or wherever else fits), you can take advantage of the underlying implementation:
RedisCache redisCache = (RedisCache) cacheManager.getCache("FOO");
RedisCacheWriter cacheWriter = (RedisCacheWriter) redisCache.getNativeCache();
cacheWriter.clean("FOO", "baz*".getBytes());
Didn't try it actually, but I think this should work.
Likewise, you can adapt this to other caching implementations.
The shortcoming of this approach is that you'll have to change your code if you change the cache implementation.

Factory design pattern and Spring

I am wondering what the current best practice is regarding the use of the factory pattern in the context of the Spring framework and its dependency injection. My question is whether the factory pattern is still relevant nowadays in light of Spring dependency injection. I did some searching and found some past discussion (Dependency Injection vs Factory Pattern), but there seem to be different views.
I have seen some real-life projects use a Map to hold all the beans, relying on autowiring to create them. When a bean is needed, it is fetched from the map by key.
public abstract class Service {
    // some methods
}

@Component
public class ServiceA extends Service {
    // implementation
}

@Component
public class ServiceB extends Service {
    // implementation
}

// injected wherever needed; keys are the bean names
@Autowired
Map<String, Service> services;
But I see some differences between the two approaches.
With the above approach, all beans are created at application startup and object creation is handled by the framework. It also implies there is only one bean of each type.
With the factory pattern, the factory class creates objects on request, and it can create a new object for each request.
I think a deeper question may be: when the Spring framework is used in a project, should we strive not to create any objects inside a class? That would mean the factory pattern (or any creational design pattern?) should not be used, since Spring is supposed to be the central handler of object dependencies.
The answer to this question can be really deep and broad, I'll try to provide some points that hopefully will help.
First off, Spring stores its beans (singletons) in the ApplicationContext. Essentially, this is the map you're talking about. In a nutshell, it allows getting a bean by name, type, etc.
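For instance (a hypothetical lookup, assuming the Service beans shown in the question are registered):

// fetch a bean by name and expected type
Service serviceA = context.getBean("serviceA", Service.class);

// or fetch every registered implementation, keyed by bean name
Map<String, Service> services = context.getBeansOfType(Service.class);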
The ApplicationContext, while being a really important concept, is not the whole of Spring; in fact, the Spring framework allows much more flexibility:
You say that using a map implies all the beans are created at application startup and that there is one instance of each bean.
Spring has a concept of lazy beans: a bean can be created only when it is required for the first time, so Spring supports "delayed" bean initialization.
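A minimal sketch (the class name is illustrative):

@Component
@Lazy // not instantiated at startup; created on first use
class ExpensiveService {

    ExpensiveService() {
        // heavy initialization is deferred until the bean is first required
    }
}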
Spring also allows more than one bean instance per type, so this map is more "advanced". For example, you can create more than one implementation of an interface and declare both as beans. As long as you provide enough information about which bean should be injected into the class that uses them (for example, with the help of qualifiers supported in Spring), you're good to go. In addition, the Spring IoC container can inject all registered implementations of an interface into a list:
interface Foo {}

@Component
class FooImpl1 implements Foo {}

@Component
class FooImpl2 implements Foo {}

class Client {

    @Autowired
    List<Foo> allFoos;
}
Now you say:
With the factory pattern, the factory class creates objects on request, and it can create a new object for each request.
Actually, Spring can create objects per request. Not all beans have to be singletons; in general, Spring has a concept of scopes for this purpose.
For example, the prototype scope means that Spring will create a new bean upon each usage. One particularly interesting case that Spring supports in a variety of ways is injecting a prototype bean into a singleton. Some solutions act exactly like a factory (read about the @Lookup annotation); others rely on an auto-generated proxy at runtime (like javax.inject.Provider). Prototype-scoped beans are not held in the application context, so here again Spring goes beyond a simple map abstraction.
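A minimal sketch of the @Lookup flavor (class names are illustrative):

@Component
@Scope("prototype")
class Task {
    // a new instance is created per lookup
}

@Component
abstract class TaskRunner {

    // Spring overrides this method at runtime to return a fresh prototype bean
    @Lookup
    abstract Task createTask();

    void runTask() {
        Task task = createTask(); // a brand-new Task on every call
        // ... use the task
    }
}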
The last feature you haven't mentioned is that sometimes, even for singletons, initialization can be a little more complicated than calling a constructor with parameters. Spring can address that with Java configuration:
@Configuration
public class MyConfig {

    @Bean
    public SomeComplicatedObject foo(@Value("...") String config, Bar bar) {
        // pretend this object comes from a third party: it only has a no-arg
        // constructor and you can't place Spring annotations on it
        SomeComplicatedObject obj = new SomeComplicatedObject();
        obj.setConfig(config);
        obj.setBar(bar);
        return obj;
    }
}
The method foo here initializes the SomeComplicatedObject and returns it; this can be used instead of a factory to integrate "legacy" code (Java configuration goes way beyond this, but that's out of scope for this question).
So, bottom line: Spring as an IoC container provides many different ways to deal with object creation; in particular, it can do everything the factory design pattern offers.
Now, I would also like to refer to your last sentence:
I think a deeper question may be: when the Spring framework is used in a project, should we strive not to create any objects inside a class? That would mean the factory pattern (or any creational design pattern?) should not be used, since Spring is supposed to be the central handler of object dependencies.
Indeed, you don't have to use the factory pattern when using Spring, since (as I hopefully have convinced you) it provides everything a factory can do and more.
I also agree that Spring is supposed to be the central handler of object dependencies (unless there are also parts of the application written in a different manner, so you have to support both :) )
I don't think we should avoid using "new" altogether; not everything should or can be a bean. But I do find (from my subjective experience, so this is arguable) that you use it much less, leaving the creation of most objects to Spring.
Should we avoid using any creational design pattern? I don't think so. Sometimes you can opt to implement the builder pattern, for example; it is also a creational pattern, but Spring doesn't provide a similar abstraction for it.
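For illustration, a plain builder that lives entirely outside the container (the Customer class is hypothetical):

class Customer {

    private final String name;
    private final String email;

    private Customer(Builder builder) {
        this.name = builder.name;
        this.email = builder.email;
    }

    static Builder builder() {
        return new Builder();
    }

    static class Builder {
        private String name;
        private String email;

        Builder name(String name) { this.name = name; return this; }
        Builder email(String email) { this.email = email; return this; }
        Customer build() { return new Customer(this); }
    }
}

// plain "new"-based construction; no Spring involvement needed
Customer customer = Customer.builder().name("Alice").email("alice@example.com").build();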
I think if your project uses the Spring framework, you should use it, although it depends on your project design. You may use creational patterns alongside Spring IoC, e.g. when you have abstraction layers that are not framework-dependent (framework-agnostic code):
interface ServiceFactory {
    Service create(String type);
}

@Component
class SpringServiceFactory implements ServiceFactory {

    @Autowired
    private ApplicationContext context;

    public Service create(String type) {
        return context.getBean(type, Service.class);
    }
}
I also use the factory pattern when I refactor legacy, non-unit-testable code that uses the Spring framework, in order to implement unit tests:
// legacy service, impossible to mock
class LegacyApiClient implements Closeable {...}

@Component
class LegacyApiClientFactory {
    LegacyApiClient create(String endpoint) {
        return new LegacyApiClient(endpoint);
    }
}

@Component
class OtherService {
    private final String endpoint;
    private final LegacyApiClientFactory factory;

    OtherService(@Value("${post.endpoint}") String endpoint,
                 LegacyApiClientFactory factory) {...}

    void doCall() {
        try (LegacyApiClient client = factory.create(endpoint)) {
            client.postSomething();
        }
    }
}

// a random unit test
LegacyApiClient client = mock(LegacyApiClient.class);
LegacyApiClientFactory factory = mock(LegacyApiClientFactory.class);
OtherService service = new OtherService("http://scxsc", factory);
when(factory.create(any())).thenReturn(client);
service.doCall();

Gemfire NoSuchBeanDefinitionException Autowiring Cache (Spring 5.0.2 / Gemfire v9.2.7)

We are migrating from Gemfire 8.2.7 to 9.2.1
As part of Gemfire startup, we leverage the SpringContextBootstrappingInitializer to initialize the Spring beans, which @Autowire the Cache.
The same code, when migrated to Gemfire 9.2.1 (along with the rest of the stack), fails on server startup with the error below.
Gemfire 8.2.7 --> Gemfire 9.2.1
Spring-data-Gemfire 1.8.4 --> 2.0.2
Spring-Boot 1.4.7 --> 2.0.0.M7
Spring --> 5.0.2
Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException:
No qualifying bean of type 'org.apache.geode.cache.Cache' available:
expected at least 1 bean which qualifies as autowire candidate.
Dependency annotations: {@org.springframework.beans.factory.annotation.Autowired(required=true)}
Any pointers / changes required for GemfireConfig? Below is our JavaConfig.
@Bean
public CacheFactoryBean gemfireCache() {
    return new CacheFactoryBean();
}
Looks like the ComponentScan is kicking in prior to the Configuration processor. Any idea on controlling this behavior? This was last tested to work in Spring Boot 1.4.6 (Spring 4.3.8) and gets resolved with a @DependsOn annotation, but I just wanted to understand if there are any fundamental changes to the ordering of bean initialization in newer Spring versions.
@Configuration
@EnableAutoConfiguration(exclude = { HibernateJpaAutoConfiguration.class, BatchAutoConfiguration.class })
@Import(value = { GemfireServerConfig.class, JpaConfiguration.class, JpaConfigurableProperties.class })
@ComponentScan(basePackages = "com.test.gemfire", excludeFilters = @ComponentScan.Filter(type = FilterType.ANNOTATION, classes = Configuration.class))
To begin, let me give you some tips since there are 3 issues with your problem statement above...
1) First, you have not made it clear why or how you are using the o.s.d.g.support.SpringContextBootstrappingInitializer (see the docs).
I can only assume it is because you are launching your GemFire servers with Gfsh, using the following command...
gfsh> start server --name=MyServer --cache-xml-file=/path/to/cache.xml ...
Where your cache.xml declares the SpringContextBootstrappingInitializer. After all, this was the original intent for using the SpringContextBootstrappingInitializer.
If this is the case, why not use the Gfsh start server command's --spring-xml-location option instead. For example:
gfsh> start server --name=MyServer --spring-xml-location=/by/default/a/classpath/to/applicationContext.xml --classpath=/path/to/spring-data-gemfire-2.0.2.RELEASE.jar:...
By doing so, you no longer need to provide cache.xml just to declare the SpringContextBootstrappingInitializer in order to bootstrap a Spring container inside the GemFire JVM process. You can simply use the --spring-xml-location option and put SDG on the server's classpath when starting the server.
2) Second, it is not apparent what type of application component/bean you are injecting a GemFire Cache reference into (e.g. a Region or another application component class, like a DAO, etc.). Providing a snippet of code showing how you injected the Cache reference, i.e. the injection point using the @Autowired annotation, would have been helpful. For example:
@Service
class MyService {

    @Autowired
    private Cache gemfireCache;

    ...
}
3) #2 would have been more apparent if you included the full stack trace rather than just the NoSuchBeanDefinitionException message.
Despite the issues with your problem statement, I can infer the following:
Clearly, you are using "classpath component scanning" (with the @ComponentScan annotation) and are auto-wiring "by type", which may actually be key; I will come back to this later below.
You are using Spring's @Autowired annotation on a bean class field (field injection) or property (setter injection), maybe even a constructor.
The type of this field/property (or constructor parameter) is definitely org.apache.geode.cache.Cache.
Moving on...
In general, Spring will follow dependency order first and foremost. That is, if A depends on B, then B must be created before and destroyed after A. Typically, Spring will and can honor this without incident.
Beyond "dependency order" bean creation and satisfying dependencies between beans (including with the #DependsOn annotation), the order of bean creation is pretty loosely defined.
There are several factors that can influence it, such as "registration order" (i.e. the order in which bean definitions are declared, which is particularly true for beans defined in XML), "import order" (when using the #Import annotation on #Configuration classes), Java reflection (includes #Bean definitions declared in #Configuration classes), etc. Configuration organization is definitely important and should not be taken lightly.
This is one reason why I am not a big proponent of "classpath component scanning". While it may be convenient, it is always better, IMO, to be more "explicit" in your configuration, and in the organization of your configuration, for a number of reasons beyond its non-apparent limitations. At the very least, you should limit the scope of the scan.
Ironically, you excluded/filtered the one thing that could actually help your organizational concerns... components of type @Configuration:
... excludeFilters = @ComponentScan.Filter(type = FilterType.ANNOTATION, classes = Configuration.class)
NOTE: Given the exclusion, are you certain you did not exclude the one @Configuration class containing your CacheFactoryBean definition? I suppose not, since you say this worked after including the @DependsOn annotation.
Clearly there is a dependency defined between some application component of yours (??) and a bean of type o.a.g.cache.Cache (using @Autowired), yet Spring is failing to resolve it.
My thinking is that Spring cannot resolve the Cache dependency because either 1) the GemFire cache bean has not been created yet and 2) Spring cannot find an appropriate bean definition of the desired type (i.e. o.a.g.cache.Cache) in your configuration that would resolve the dependency and force the GemFire cache to be created first, or 3) the GemFire cache bean has been created first but Spring is unable to resolve the type as o.a.g.cache.Cache.
I have encountered both scenarios before and it is not exactly clear to me when each scenario happens because I simply have not traced this through yet. I have simply corrected it and moved on. I have noticed that it is version related though.
There are several ways to solve this problem.
If the problem is the latter, 3), then simply declaring your dependency as type o.a.g.cache.GemFireCache should resolve the problem. For example:
@Repository
class MyDataAccessObject {

    @Autowired
    private GemFireCache gemfireCache;

    ...
}
The reason for this is because the o.s.d.g.CacheFactoryBean class's getObjectType() method returns a Class type generically extending o.a.g.cache.GemFireCache. This was by design since o.s.d.g.client.ClientCacheFactoryBean extends o.s.d.g.CacheFactoryBean, though I probably would not have done it that way if I had created these classes. However, it is consistent with the fact that the actual cache type in GemFire is o.a.g.internal.cache.GemFireCacheImpl which indirectly implements both the o.a.g.cache.Cache interface as well as the o.a.g.cache.client.ClientCache interface.
If your problem is the former (1) and 2), which is a bit trickier), then I would suggest a smarter organization of your configuration, separated by concern. For example, you can encapsulate your GemFire configuration with:
@Configuration
class GemFireConfiguration {
    // define GemFire components (e.g. CacheFactoryBean) here
}
Then, your application components, where some are dependent on GemFire components, can be defined with:
@Configuration
@Import(GemFireConfiguration.class)
class ApplicationConfiguration {
    // define application beans, including beans dependent on GemFire components
}
By importing the GemFireConfiguration you are ensuring the GemFire components/beans are created (instantiated, configured and initialized) first.
You can even employ more targeted, limited "classpath component scanning" at the ApplicationConfiguration class-level in cases where you have a large number of application components (services, DAO, etc).
Then, you can have your main, Spring Boot application class drive all this:
@Configuration
@Import(ApplicationConfiguration.class)
class MySpringBootApplication {

    public static void main(String[] args) {
        SpringApplication.run(MySpringBootApplication.class, args);
    }
}
The point is, you can be as granular as you choose. I like to encapsulate configuration by concern and clearly organize the configuration (using imports) to reflect the order in which I want my components created (constructed, configured and initialized).
Honestly, I basically organize my configuration in the order of dependencies. If my application ultimately depends on a data store and cannot function without it, then it makes sense to ensure the data store is initialized first; otherwise, what is the point of starting the application?
Finally, you can always rely on the @DependsOn annotation, as you have appropriately done, to ensure that Spring creates one component before the component that expects it.
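A minimal sketch, reusing the bean definitions from above (the DAO bean method is illustrative):

@Configuration
class ApplicationConfiguration {

    // forces the "gemfireCache" bean to be created before this DAO, even though
    // the dependency is not otherwise visible to Spring through injection
    @Bean
    @DependsOn("gemfireCache")
    MyDataAccessObject myDataAccessObject() {
        return new MyDataAccessObject();
    }
}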
Based on the fact that the @DependsOn annotation solved your problem, I would say this is an organizational problem that falls under the 1) / 2) category I outlined above.
I am going to dig into this a bit deeper and respond to my answer in comments with what I find.
Hope this helps!
-John

EhCache: @CacheEvict on Multiple Objects Using Annotations

I understand that when using Spring's (3.1) built-in CacheManager with the EhCache implementation, there are certain limitations in proxy mode (the default), as per this post:
Spring 3.1 @Cacheable - method still executed
Consider the scenario I have:
#CacheEvict(value = "tacos", key = "#tacoId", beforeInvocation = true)
removeTaco(String tacoId) {
// Code to remove taco
}
removeTacos(Set<String> tacoIds) {
for (String tacoId : tacoIds) {
removeTaco(tacoId);
}
}
In this repository method, calling removeTacos(tacoIds) will not actually evict anything from the cache, because of the limitation described above. My workaround is that, in a service layer above, if I want to delete multiple tacos, I loop through each taco ID and pass it into removeTaco(), never using removeTacos().
However, I'm wondering if there's another way to accomplish this.
1) Is there a SpEL expression that I could pass into the key that would tell EhCache to expire every ID in the Set?
e.g. @CacheEvict(value = "tacos", key = "#ids.?[*]") // I know this isn't valid, I just can't find the expression.
Or is there a way I can have removeTacos() call removeTaco() and actually expire the cached objects?
The @Caching annotation can be used to combine multiple annotations of the same type, such as @CacheEvict or @CachePut. This is the example from the Spring documentation:
@Caching(evict = { @CacheEvict("primary"), @CacheEvict(value = "secondary", key = "#p0") })
public Book importBooks(String deposit, Date date)
You can do one of two things:
@CacheEvict(value = "tacos", allEntries = true)
public void removeTacos(Set<String> tacoIds)
which is not so bad if tacos are read a lot more than they are removed
OR
public void removeTacos(Set<String> tacoIds) {
    for (String tacoId : tacoIds) {
        getTacoService().removeTaco(tacoId);
    }
}
By calling the service (proxy), you invoke the cache eviction.
AFAIK, @CacheEvict supports only removing a single entry (by key) or all entries in a given cache; there's no way to remove multiple entries at once. If you want to put, update, or remove multiple objects from the cache (using annotations) and you can switch to memcached, take a look at my project Simple Spring Memcached (SSM).
Self-invocations don't go through the proxy, so one solution is to switch to a mode other than proxy. Another solution (I'm not recommending it) is to keep a reference to the service within the service itself (as an autowired field) and use it to invoke removeTaco, as sketched below.
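A rough sketch of that (not recommended) self-reference approach, using the taco example; note it relies on Spring being able to inject the proxied bean into itself, which holds for singleton beans in recent Spring versions:

@Service
public class TacoService {

    // Spring injects the caching proxy of this very bean, so calls made
    // through "self" are advised, unlike plain this.removeTaco(...)
    @Autowired
    private TacoService self;

    @CacheEvict(value = "tacos", key = "#tacoId", beforeInvocation = true)
    public void removeTaco(String tacoId) {
        // code to remove taco
    }

    public void removeTacos(Set<String> tacoIds) {
        for (String tacoId : tacoIds) {
            self.removeTaco(tacoId); // goes through the proxy, so eviction fires
        }
    }
}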
Several months ago I had a similar issue in one of my projects. It didn't use Spring Cache but SSM, which also requires a proxy. To make it work, I moved the caching annotations from the service to the DAO (repository) layer, which solved the self-invocation problem.

How to manage transactions with JAX-RS, Spring and JPA

I'm using JAX-RS to provide an HTTP-based interface to manage a data model. The data model is stored in a database and interacted with via JPA.
This allows me to modify the interface to the data model to suit REST clients, and it mostly seems to work quite well. However, I'm not sure how to handle the scenario where a method provided by a JAX-RS resource requires a transaction, which affects the JPA get, update, commit-on-tx-end pattern: only the get operation is wrapped in a transaction, so the update is never committed. I can see the same problem occurring if a single REST operation requires multiple JPA operations.
As I'm using Spring's transaction support, the obvious thing to do is to apply @Transactional to these methods in the JAX-RS resources. However, for this to work, Spring needs to manage the lifecycle of the JAX-RS resources, and the usage examples I'm aware of have resources being created via `new` when needed, which makes me a little nervous anyway.
I can think of the following solutions:
Update my JPA methods to provide a transaction-managed version of everything I want to do from my REST interface atomically. This should work and keeps transactions out of the JAX-RS layer, but it prevents the get, update, commit-on-tx-end pattern and means I need to create a very granular JPA interface.
Inject Resource objects; but they are typically stateful holding at least the ID of the object being interacted with
Ditch the hierarchy of resources and inject big, stateless super resources at the root that manage the entire hierarchy from that root; not cohesive, big services
Have a hierarchy of injected, stateless, transaction-supporting helper objects that 'shadow' the actual resources; the resources are instantiated and hold the state but delegate method invocations to the helper objects
Anyone got any suggestions? It's quite possible I've missed some key point somewhere.
Update - to work around the lack of a transaction around the get, update, commit-on-tx-close flow, I can expose the EntityManager merge(object) method and call it manually. Not neat and doesn't solve the larger problem though.
Update 2 @skaffman
Code example:
In the JPA service layer (injected, annotations work):
public class MyEntityJPAService {
    ...
    @Transactional(readOnly = true) // do in transaction
    public MyEntity getMyEntity(final String id) {
        return em.find(MyEntity.class, id);
    }
}
In the JAX-RS resource (created by new, no transactions):
public class MyEntityResource {
    ...
    private MyEntityJPAService jpa;
    ...
    @Transactional // not injected, so not effective
    public void updateMyEntity(final String id, final MyEntityRepresentation rep) {
        MyEntity entity = jpa.getMyEntity(id);
        entity.setSomeField(rep.getSomeField());
        // no transaction commit, change not saved...
    }
}
I have a few suggestions
Introduce a layer between your JPA and JAX-RS layers. This layer would consist of Spring-managed @Transactional beans and would compose the various business-level operations from their component JPA calls. This is somewhat similar to your (1), but keeps the JPA layer simple (see the sketch after this list).
Replace JAX-RS with Spring MVC, which provides the same (or similar) functionality, including @PathVariable, @ResponseBody, etc.
Programmatically wrap your JAX-RS objects in transactional proxies using TransactionProxyFactoryBean. This would detect your @Transactional annotations and generate a proxy that honours them.
Use @Configurable and AspectJ load-time weaving to allow Spring to honour @Transactional even if you create the object using `new`. See 8.8.1 Using AspectJ to dependency inject domain objects with Spring.
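To make the first suggestion concrete, here is a minimal sketch of such an intermediate layer, reusing the names from the question (the service class name is hypothetical):

@Service
public class MyEntityManagementService {

    @Autowired
    private MyEntityJPAService jpa;

    @Transactional // the get and the update now share one transaction
    public void updateMyEntity(final String id, final MyEntityRepresentation rep) {
        MyEntity entity = jpa.getMyEntity(id);
        entity.setSomeField(rep.getSomeField());
        // the entity is managed, so the change is flushed and committed
        // when the transaction ends
    }
}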
