I am using the Spring cache mechanism with SimpleCacheManager/ConcurrentMapCache.
I am using a web service to clear the cache, and the following is the code:
for (String cacheName : cacheManager.getCacheNames()) {
    Cache cache = cacheManager.getCache(cacheName);
    if (cache != null) {
        cache.clear();
    }
}
When I call this code from a REST web service on a local VM, I can see it clearing the cache, and I can see the changes made in the database by the other service. However, in the production environment the web service returns a 200 status in the logs, but it still shows the old data.
On production we have 2 servers.
We have to restart our application to refresh the cache and get the latest data from the database.
I used to do this by creating a void method annotated with @CacheEvict(allEntries = true); this annotation is similar to @CacheRemoveAll from JSR-107.
Something like this:
// note: @CacheEvict needs at least one resolvable cache name,
// either here or via a class-level @CacheConfig
@CacheEvict(cacheNames = "yourCacheName", allEntries = true)
public void evictAll() {
    // Do nothing: the annotation performs the eviction
}
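If you expose it through a REST endpoint, it could look something like this (a minimal sketch; CacheController, CacheService, and the /cache/evict path are hypothetical names, not from the original post):
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CacheController {

    private final CacheService cacheService; // the bean declaring evictAll()

    public CacheController(CacheService cacheService) {
        this.cacheService = cacheService;
    }

    @PostMapping("/cache/evict")
    public ResponseEntity<Void> evictAll() {
        cacheService.evictAll(); // triggers @CacheEvict(allEntries = true)
        return ResponseEntity.noContent().build();
    }
}
Note that the annotated method must be called through the Spring proxy (from another bean, as here); a direct self-invocation bypasses the eviction.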
I know it's ugly, but it works for me.
My two cents: avoid using the default Spring cache manager in production; use a more sophisticated cache manager like Guava or EhCache instead. Also note that ConcurrentMapCache lives inside each JVM, so with 2 production servers each instance keeps its own copy of the cache, and a REST call routed by a load balancer only clears the cache on the one instance that handled it; that would explain why you still see stale data.
Cheers.
I'm currently having trouble with my Redis cache configuration.
I was previously using a Redis CRUD repository with RedisHash objects. Everything was working fine.
I need to use the @Cacheable annotation for things which aren't linked to my CRUD repository.
So I added a cache configuration with the @EnableCaching annotation:
@Bean
public RedisCacheManagerBuilderCustomizer redisCacheManagerBuilderCustomizer() {
    return (builder) -> builder
            .withCacheConfiguration("default",
                    RedisCacheConfiguration.defaultCacheConfig())
            .withCacheConfiguration("ttlCache",
                    RedisCacheConfiguration.defaultCacheConfig().entryTtl(Duration.ofSeconds(10)));
}
Everything about my cache configuration is OK.
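For illustration, a method cached with the ttlCache configuration above might look like this (QuoteService and fetchQuote are hypothetical names, not from the original post):
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class QuoteService {

    @Cacheable("ttlCache") // cached with the 10-second TTL configured above
    public String fetchQuote(String symbol) {
        // stand-in for an expensive computation not backed by the CRUD repository
        return "quote-for-" + symbol;
    }
}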
But now there is some trouble storing entities in Redis.
After searching, it seems that my RedisHash object has to implement Serializable. OK, I did it.
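For reference, the change amounts to something like this (a hedged sketch; Person and its fields are made-up names, not from the original post):
import java.io.Serializable;

import org.springframework.data.annotation.Id;
import org.springframework.data.redis.core.RedisHash;

@RedisHash("people")
public class Person implements Serializable {

    private static final long serialVersionUID = 1L;

    @Id
    private String id;
    private String name;

    // getters and setters omitted
}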
But now all my repository methods are doing something strange (essentially the gets return empty). When I look in Redis I see some new entries indicating that a cache is being used for my repository.
My question is: is there a way to disable the use of the cache for my repository?
Thanks in advance.
I am working on a project that has a requirement for Pivotal GemFire.
I am unable to find a proper tutorial about how to configure GemFire with Spring Boot.
I have created a partitioned Region and I want to configure Locators as well, but I need only the server-side configuration, as the client is handled by someone else.
I am totally new to Pivotal GemFire and really confused. I have tried creating a cache.xml, but then somehow a cache.out.xml gets created and there are many issues.
@Priyanka -
Best place to start is with the Guides on spring.io. Specifically, have a look at...
"Accessing Data with GemFire"
There is also...
"Cache Data with GemFire", and...
"Accessing GemFire Data with REST"
However, these guides focus mostly on "client-side" application concerns: "data access" (over REST), "caching", etc.
Still, you can use Spring Data GemFire (even in a Spring Boot application) to configure a GemFire Server. I have many examples of this. One in particular...
"Spring Boot GemFire Server Example"
This example demonstrates how to bootstrap a Spring Boot application as a GemFire Server (technically, a peer node in the cluster). Additionally, the GemFire properties are specified in Spring config and can use Spring's normal conventions (property placeholders, SpEL expressions) to configure these properties, like so...
https://github.com/jxblum/spring-boot-gemfire-server-example/blob/master/src/main/java/org/example/SpringBootGemFireServer.java#L59-L84
This particular configuration makes the GemFire Server a "GemFire Manager", possibly with an embedded "Locator" (indicated by the start-locator GemFire property, not to be confused with the "locators" GemFire property, which allows our node to join an existing cluster), as well as a GemFire CacheServer to serve GemFire cache clients (with a ClientCache).
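In outline, that configuration looks something like the following (a condensed, illustrative sketch in the style of the linked example, not a verbatim copy; property values and the Cache import vary by GemFire version):
import java.util.Properties;

import com.gemstone.gemfire.cache.Cache; // org.apache.geode.cache.Cache in newer versions

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.data.gemfire.CacheFactoryBean;
import org.springframework.data.gemfire.server.CacheServerFactoryBean;

@SpringBootApplication
public class SpringBootGemFireServer {

    public static void main(String[] args) {
        SpringApplication.run(SpringBootGemFireServer.class, args);
    }

    @Bean
    Properties gemfireProperties(@Value("${gemfire.log-level:config}") String logLevel) {
        Properties gemfireProperties = new Properties();
        gemfireProperties.setProperty("name", "SpringBootGemFireServer");
        gemfireProperties.setProperty("log-level", logLevel);
        // make this node a JMX Manager so Gfsh can connect to it
        gemfireProperties.setProperty("jmx-manager", "true");
        gemfireProperties.setProperty("jmx-manager-start", "true");
        // start an embedded Locator (not the "locators" property used to join an existing cluster)
        gemfireProperties.setProperty("start-locator", "localhost[10334]");
        return gemfireProperties;
    }

    @Bean
    CacheFactoryBean gemfireCache(Properties gemfireProperties) {
        CacheFactoryBean gemfireCache = new CacheFactoryBean();
        gemfireCache.setProperties(gemfireProperties);
        return gemfireCache;
    }

    @Bean
    CacheServerFactoryBean gemfireCacheServer(Cache gemfireCache) {
        CacheServerFactoryBean cacheServer = new CacheServerFactoryBean();
        cacheServer.setCache(gemfireCache);
        cacheServer.setPort(40404); // default GemFire CacheServer port
        return cacheServer;
    }
}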
This example creates a "Factorials" Region, with a CacheLoader (definition here) to populate the "Factorials" Region on cache misses.
Since this example starts an embedded GemFire Manager in the Spring Boot GemFire Server application process, you can even connect to it using Gfsh, like so...
gfsh> connect --jmx-manager=localhost[1099]
Then you can run gets on the "Factorials" Region to see it compute factorials of the numeric keys you give it.
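For example (the key value is just an illustration):
gfsh> get --region=/Factorials --key=5 --key-class=java.lang.Long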
To see more advanced configuration, have a look at my other repos, in particular the Contacts Application RI (here).
Hope this helps!
-John
Well, I had the same problem. Let me share with you what worked for me; in this case I'm using Spring Boot and Pivotal GemFire as a cache client.
Install and run GemFire.
Read the 15-minute quick start guide.
Create a locator (let's call it locator1), a server (server1), and a region (region1).
Go to the folder where you started the "Gee Fish" (gfsh), then go to the locator's folder and open the log file; in that file you can find the port your locator is using.
Now let's look at the Spring Boot side:
In your application class with the main method, add the @EnableGemfireCaching annotation.
In the method (wherever it is) you want to cache, add the @Cacheable("region1") annotation.
Now let's create a configuration class for the caching:
// this is my working class
@Configuration
public class CacheConfiguration {

    @Bean
    ClientCacheFactoryBean gemfireCacheClient() {
        return new ClientCacheFactoryBean();
    }

    @Bean(name = GemfireConstants.DEFAULT_GEMFIRE_POOL_NAME)
    PoolFactoryBean gemfirePool() {
        PoolFactoryBean gemfirePool = new PoolFactoryBean();
        // use the locator port you found in step 4
        gemfirePool.addLocators(Collections.singletonList(
                new ConnectionEndpoint("localhost", HERE_GOES_THE_PORT_NUMBER_FROM_STEP_4)));
        gemfirePool.setName(GemfireConstants.DEFAULT_GEMFIRE_POOL_NAME);
        gemfirePool.setKeepAlive(false);
        gemfirePool.setPingInterval(TimeUnit.SECONDS.toMillis(5));
        gemfirePool.setRetryAttempts(1);
        gemfirePool.setSubscriptionEnabled(true);
        gemfirePool.setThreadLocalConnections(false);
        return gemfirePool;
    }

    @Bean
    ClientRegionFactoryBean<Long, Long> getRegion(ClientCache gemfireCache, Pool gemfirePool) {
        ClientRegionFactoryBean<Long, Long> region = new ClientRegionFactoryBean<>();
        region.setName("region1");
        region.setLookupEnabled(true);
        region.setCache(gemfireCache);
        region.setPool(gemfirePool);
        region.setShortcut(ClientRegionShortcut.PROXY);
        return region;
    }
}
That's all! Also, do not forget to make the class being cached serializable (implements Serializable), i.e. the class your cached method returns.
We have a Spring application where a Redis cache has been implemented along with a MySQL database. We are using the Redis cache to store temporary values for server-side validations instead of hitting the database every time, since hitting the database on every call reduces system performance.
Now let me explain my problem. While hitting the Spring Boot actuator endpoints, if my Redis cache server suddenly stops, we would like to know how to get a notification that the Redis cache server is down. So we need a solution / example Java application that gets such a notification, using a Redis cache listener context or anything like that.
Redis doesn't work that way. In fact, no remote service will notify your application that it's down. Usually it's the other way round: if the service you're consuming is accessed with a more or less sophisticated client, you might take advantage of the client's features.
Asynchronous clients that run I/O or monitoring threads can help here. More specifically, it depends on the client you're using with Spring Boot and Redis. Jedis is a plain client that reacts on a request basis. Lettuce allows you to register a RedisConnectionStateListener that is called on specific connection events, such as connected/disconnected:
RedisClient redisClient = …;

redisClient.addListener(new RedisConnectionStateListener() {

    @Override
    public void onRedisConnected(RedisChannelHandler<?, ?> redisChannelHandler) {
        // connection established
    }

    @Override
    public void onRedisDisconnected(RedisChannelHandler<?, ?> redisChannelHandler) {
        // connection lost: trigger your notification here
    }

    @Override
    public void onRedisExceptionCaught(RedisChannelHandler<?, ?> redisChannelHandler, Throwable throwable) {
        // an exception occurred on the connection
    }
});
When using Spring Data Redis, retrieving the RedisClient from LettuceConnectionFactory might be a bit tricky as it is a private field. Hence it requires reflection.
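A hedged sketch of that reflection, looking the field up by type rather than by name (the field name is an implementation detail that may change between versions):
import java.lang.reflect.Field;
import java.util.Arrays;

import com.lambdaworks.redis.RedisClient; // io.lettuce.core.RedisClient in newer Lettuce versions
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

public final class RedisClientAccessor {

    private RedisClientAccessor() {
    }

    public static RedisClient getRedisClient(LettuceConnectionFactory factory) throws IllegalAccessException {
        // find the first declared field whose type is RedisClient
        Field field = Arrays.stream(LettuceConnectionFactory.class.getDeclaredFields())
                .filter(f -> RedisClient.class.isAssignableFrom(f.getType()))
                .findFirst()
                .orElseThrow(() -> new IllegalStateException("No RedisClient field found"));
        field.setAccessible(true);
        return (RedisClient) field.get(factory);
    }
}
You can then call addListener(...) on the returned client as shown above.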
I have already successfully implemented SSO using Spring Session and Redis on a localhost development domain.
But when I deploy to a server using two subdomains:
login.example.com
apps.example.com
they always create a new session id on each subdomain.
I have already tried to configure this using Context in the Tomcat configuration:
<Context sessionCookieDomain=".example.com" sessionCookiePath="/">
But no luck.
Spring Session moves session management to the application level, so it is no surprise that trying to configure the container (in your case Tomcat) has no effect. Currently there is a TODO in the spring-session code to allow setting the domain, but it is not implemented.
Maybe it is best to open an issue to allow setting the domain, or comment/vote on https://github.com/spring-projects/spring-session/issues/112.
Meanwhile a workaround would be to go with your own implementation of MultiHttpSessionStrategy based on CookieHttpSessionStrategy.
Finally I succeeded in setting the domain at the application level.
You're right; I hope in the future they implement the feature to set the domain.
For now I created a CustomCookieHttpSessionStrategy as my own implementation:
private Cookie createSessionCookie(HttpServletRequest request,
        Map<String, String> sessionIds) {
    ...
    sessionCookie.setDomain(".example.com");
    // TODO set domain?
    ...
}
And then register the bean as an HttpSessionStrategy.
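A minimal sketch of that registration (assuming CustomCookieHttpSessionStrategy is the class described above):
@Bean
public HttpSessionStrategy httpSessionStrategy() {
    // Spring Session will use this instead of the default CookieHttpSessionStrategy
    return new CustomCookieHttpSessionStrategy();
}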
The Spring Cloud Config client helps to change properties at run time. Below are 2 ways to do that:
Update the GIT repository and hit /refresh on the client application to get the latest values.
Update the client directly by posting the update to /env and then hitting /refresh.
The problem with both approaches is that there could be multiple instances of the client application running in Cloud Foundry, and the above REST calls will reach only one of the instances, leaving the other instances in an inconsistent state.
E.g. a POST to /env could hit instance 1 and leave instance 2 with old data.
One solution I could think of is to continuously hit these endpoints "n" times in a loop just to make sure all instances get updated, but that is a crude solution. Does anybody have a better solution for this?
Note: We are deploying our application in a private PCF environment.
The canonical solution for that problem is the Spring Cloud Bus. If your apps are bound to a RabbitMQ service and they have the bus on the classpath, there will be additional endpoints /bus/env and /bus/refresh that broadcast the messages to all instances. See the docs for more details.
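For example, posting to any single instance then propagates the refresh to all instances over the broker (the host name is just a placeholder):
curl -X POST http://any-one-instance/bus/refresh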
Spring Cloud Config Server Not Refreshing
See the org.springframework.cloud.bootstrap.config.RefreshEndpoint code here:
public synchronized String[] refresh() {
    Map<String, Object> before = extract(context.getEnvironment()
            .getPropertySources());
    addConfigFilesToEnvironment();
    Set<String> keys = changes(before,
            extract(context.getEnvironment().getPropertySources())).keySet();
    scope.refreshAll();
    if (keys.isEmpty()) {
        return new String[0];
    }
    context.publishEvent(new EnvironmentChangeEvent(keys));
    return keys.toArray(new String[keys.size()]);
}
That means the /refresh endpoint pulls from git first, then refreshes the refresh scope and publishes an EnvironmentChangeEvent, so we can customize the code ourselves, as sketched below.
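A hedged sketch of such a customization, assuming a Spring Cloud version that exposes ContextRefresher (which performs the same steps as the endpoint above); CustomRefreshController and the /custom-refresh path are hypothetical names:
import java.util.Set;

import org.springframework.cloud.context.refresh.ContextRefresher;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CustomRefreshController {

    private final ContextRefresher contextRefresher;

    public CustomRefreshController(ContextRefresher contextRefresher) {
        this.contextRefresher = contextRefresher;
    }

    @PostMapping("/custom-refresh")
    public Set<String> refresh() {
        // pulls the latest config and publishes an EnvironmentChangeEvent
        Set<String> changedKeys = contextRefresher.refresh();
        // ...add custom logic here, e.g. logging the changed keys...
        return changedKeys;
    }
}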