How to minimize interaction with Redis when using it as a Spring Session cache? - spring-boot

We are using Spring Cloud Gateway for OAuth2 authentication, after which it stores the user's session information in Redis with the default settings set by @EnableRedisWebSession and
@Bean
fun redisConnectionFactory(): LettuceConnectionFactory {
    return LettuceConnectionFactory("redis-cache", 6379)
}

@Bean
fun authorizedClientRepository(): ServerOAuth2AuthorizedClientRepository {
    return WebSessionServerOAuth2AuthorizedClientRepository()
}
application.yml cache settings:
spring:
  session:
    store-type: redis
    redis:
      save-mode: on_set_attribute
      flush-mode: on_save
It works fine, though I can see it makes requests to Redis on every user request, as if it doesn't have an in-memory cache at all. Is there any option to change this behaviour (i.e. only make a network request to Redis when the current user's session is not found in a local in-memory cache)? Maybe I can reimplement some classes, or is there no way to do it short of rewriting all of the cache logic? Sorry for the rather broad question, but I didn't find any information on this topic in the documentation. Alternatively, could you point me to the classes in the Spring Session source code where this logic is implemented, so I can figure out what my options are?
I'm using spring-cloud-starter-gateway 2.2.5.RELEASE, spring-session-core 2.3.1.RELEASE, spring-boot-starter-data-redis-reactive and spring-session-data-redis.

From reading the documentation, I don't believe it is possible out of the box, as using a local cache could result in inconsistent state among all of the SCG instances connecting to that Redis instance.
You would need to define your own implementation of a SessionRepository that tries a local Caffeine cache first and, if the session is not found there, falls back to Redis. As a starting point, you could duplicate or try extending RedisSessionRepository.
The only thing you'd then need to be careful of is, if you have multiple SCG instances running, how the other instances react when one instance updates Redis while they still hold a locally cached copy.
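For illustration only, here is a rough sketch of that idea against the reactive ReactiveSessionRepository contract used by the gateway, with Caffeine as the local cache. The class name is made up, the TTL and maximum size are arbitrary, and it deliberately ignores the cross-instance invalidation problem mentioned above:

import java.time.Duration;
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import org.springframework.session.ReactiveSessionRepository;
import org.springframework.session.Session;
import reactor.core.publisher.Mono;

public class LocalCachingSessionRepository<S extends Session> implements ReactiveSessionRepository<S> {

    private final ReactiveSessionRepository<S> delegate; // e.g. the Redis-backed repository
    private final Cache<String, S> localCache = Caffeine.newBuilder()
            .expireAfterWrite(Duration.ofSeconds(30)) // short TTL to bound staleness
            .maximumSize(10_000)
            .build();

    public LocalCachingSessionRepository(ReactiveSessionRepository<S> delegate) {
        this.delegate = delegate;
    }

    @Override
    public Mono<S> createSession() {
        return delegate.createSession();
    }

    @Override
    public Mono<Void> save(S session) {
        // write-through: update Redis, then refresh the local copy
        return delegate.save(session).doOnSuccess(v -> localCache.put(session.getId(), session));
    }

    @Override
    public Mono<S> findById(String id) {
        S cached = localCache.getIfPresent(id);
        if (cached != null && !cached.isExpired()) {
            return Mono.just(cached); // hit: no network round trip to Redis
        }
        return delegate.findById(id).doOnNext(session -> localCache.put(id, session));
    }

    @Override
    public Mono<Void> deleteById(String id) {
        localCache.invalidate(id);
        return delegate.deleteById(id);
    }
}

A short expire-after-write window keeps staleness bounded, but an attribute written by another gateway instance can still be missed for up to that window.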

Related

Implement Write-Behind Cache using Hazelcast

I am doing a PoC on "write-behind cache" using Hazelcast.
Let's say I have two services/microservices:
"HZServer" (running on ports 9091, 9092, 9093). I have included the below dependencies in this service:
'com.hazelcast:hazelcast-all:4.0.3'
'org.springframework.boot:spring-boot-starter-data-jpa'
I have implemented a MapStore in this service and connected to PostgreSQL using a CrudRepository. Only HZServer will be communicating with the database.
I have configured this as a Hazelcast server. Also, if my understanding is correct, Hazelcast is running as an embedded server here.
I have defined a MapConfig named "Country" whose MapStoreConfig points at the MapStore implementation, CountryMapStore.
"MyClient" (running on ports 8081, 8082, 8083.... ). I have included the below dependencies in this service:
'com.hazelcast:hazelcast-all:4.0.3' (I could have used just hazelcast-client).
I have configured it as a Hazelcast client using "Hazelcast-client.yaml". I also have some RestControllers defined in the MyClient service. So, the MyClient service will communicate only with HZServer (the cache), not the DB. I am fetching the "Country" map from the Hazelcast instance in the following manner:
IMap<String, Country> iMap = hazelcastInstance.getMap("Country");
Fetching and putting key-value pairs in the following manner:
Country country = iMap.get(code); // Fetching
iMap.put(code, country); // Inserting or Updating
Could you tell me whether this is the only way of achieving a write-behind cache in Hazelcast?
Very detailed context, this is great!
True "Write-behind" means the interactions between Hazelcast server and the database are asynchronous. Thus, it depends on the exact configuration of the MapStore.
Note that in that case, you may lose data. Again, this depends on your specific implementation (e.g. you may retry until the transaction has been acknowledged).
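As a point of reference, write-behind is switched on by giving the map's MapStoreConfig a write delay greater than zero. A minimal server-side sketch, reusing the CountryMapStore from the question (the delay and batch size values here are arbitrary):

import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;
import com.hazelcast.config.MapStoreConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class HzServerStarter {

    public static HazelcastInstance start() {
        MapStoreConfig storeConfig = new MapStoreConfig()
                .setEnabled(true)
                .setImplementation(new CountryMapStore()) // the MapStore from the question
                .setWriteDelaySeconds(5)                  // > 0 makes the store write-behind (asynchronous)
                .setWriteBatchSize(100);                  // flush updates to PostgreSQL in batches

        MapConfig mapConfig = new MapConfig("Country").setMapStoreConfig(storeConfig);

        Config config = new Config();
        config.addMapConfig(mapConfig);
        return Hazelcast.newHazelcastInstance(config);
    }
}

With writeDelaySeconds left at 0, the same MapStore would behave as write-through instead.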

How to use Apache Ignite as a layer between Spring Boot app and MongoDB?

I have a Spring Boot application that uses MongoDB. My plan is to store data in a distributed caching system before it gets inserted into Mongo. If the database fails, the caching layer will keep a queue of pending writes and send them to the DB once it is back up. So, the plan is to put the caching layer between the application and Mongo.
Can you suggest some ideas on how to implement this using Apache Ignite?
Take a look at the write-behind cache store mode. It retries writes to the underlying database if an insert fails. Let me know how it works for you.
You can also implement a custom CacheStore for an Ignite cache and enable write-through for it. If the connection is lost, you'll be able to collect entries in a buffer while retrying to re-establish the connection.
See more: https://apacheignite.readme.io/docs/3rd-party-store
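A minimal sketch of that second suggestion, assuming a hypothetical MongoCacheStore class (your own CacheStore implementation that talks to MongoDB) and arbitrary flush settings:

import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class IgniteWriteBehindExample {

    public static void main(String[] args) {
        CacheConfiguration<String, String> cfg = new CacheConfiguration<>("documents");
        cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MongoCacheStore.class)); // hypothetical CacheStore backed by MongoDB
        cfg.setWriteThrough(true);               // required for the store to be invoked on writes
        cfg.setWriteBehindEnabled(true);         // buffer writes and flush them asynchronously
        cfg.setWriteBehindFlushFrequency(5_000); // flush at most every 5 seconds
        cfg.setWriteBehindFlushSize(1_024);      // or as soon as the buffer reaches this size

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<String, String> cache = ignite.getOrCreateCache(cfg);
            cache.put("42", "hello"); // lands in the cache immediately, in Mongo asynchronously
        }
    }
}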

Database caching with Spring Cloud (or maintaining consistent lists between microservices in general)

I'm quite new to Spring Cloud and microservices in general, and this is a concept I'm struggling to understand.
Let's say I have microservice X which connects to a mongo database, and I've enabled Spring caching using the @EnableCaching annotation. I have set things up so that whenever I save/persist an object to mongo, it also gets added to my cache (@CachePut), and similarly when I remove an object from mongo it gets removed from the cache (@CacheEvict).
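For context, the setup is roughly of this shape (class and cache names here are simplified placeholders, not my real code):

import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.CachePut;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class WidgetService {

    private final WidgetRepository repository; // a Spring Data MongoDB repository (placeholder)

    public WidgetService(WidgetRepository repository) {
        this.repository = repository;
    }

    @CachePut(cacheNames = "widgets", key = "#widget.id")
    public Widget save(Widget widget) {
        return repository.save(widget); // persists to Mongo, and the result is put into the cache
    }

    @Cacheable(cacheNames = "widgets", key = "#id")
    public Widget find(String id) {
        return repository.findById(id).orElse(null);
    }

    @CacheEvict(cacheNames = "widgets", key = "#id")
    public void delete(String id) {
        repository.deleteById(id); // removes from Mongo and evicts from the cache
    }
}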
That all works fine if I have a single instance of microservice X, but what happens if I stand up 50 instances of microservice X? Do they all share the same cache, and if so, how does that work? If they all have their own individual caches, what happens if objectA gets added to one instance's cache and the database, and is then removed from the database by another instance? objectA will still be in the first instance's cache even though it has been removed from mongo.
Hopefully someone can clear this up.

communication between spring instances behind a load balancer

I have a few instances of Spring apps running behind a load balancer. I am using EHCache as a caching system on each of these instances.
Let's say I receive a request that is refreshing a part of the cache on one instance. I need a way to tell the other instances to refresh their cache (or to replicate it).
I'm more interested in a Spring-based solution than in plain cache replication, because there are other scenarios similar to this one that require the same approach.
How can I achieve this?
There is no simple Spring solution for this; it depends on the requirements. You can use any kind of pub/sub, like a JMS topic, to notify your nodes. The problem with this approach is that you cannot guarantee consistency: the other nodes can still read old data for a while. In my current project we use Redis. We configured it as a cache with Spring Data Redis, and there's no need to notify the other nodes since the cache is shared. In non-cache scenarios we also use Redis as a pub/sub service.
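A minimal sketch of that pub/sub approach with Spring Data Redis (the channel name and the payload handling are just examples, not an established convention):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.data.redis.listener.ChannelTopic;
import org.springframework.data.redis.listener.RedisMessageListenerContainer;

@Configuration
public class CacheSyncConfig {

    private static final ChannelTopic TOPIC = new ChannelTopic("cache-refresh"); // example channel name

    @Bean
    public RedisMessageListenerContainer cacheSyncListener(RedisConnectionFactory connectionFactory) {
        RedisMessageListenerContainer container = new RedisMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        // Whenever any instance publishes a message, refresh the local cache region named in the payload.
        container.addMessageListener(
                (message, pattern) -> refreshLocalCache(new String(message.getBody())),
                TOPIC);
        return container;
    }

    // Called by the instance that changed the data so that the other instances get notified.
    public void publishRefresh(StringRedisTemplate redisTemplate, String cacheName) {
        redisTemplate.convertAndSend(TOPIC.getTopic(), cacheName);
    }

    private void refreshLocalCache(String cacheName) {
        // look up the local EHCache CacheManager and clear/reload the given cache (omitted here)
    }
}

Keep in mind the caveat above: between the update and the moment every node has processed the message, stale reads are still possible.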

Using Spring Cloud Connector for Heroku in order to connect to multiple RedisLabs databases

I have a requirement for multiple RedisLabs databases for my application, as described on their home page:
multiple dedicated databases in a plan
We enable multiple DBs in a single plan, each running in a dedicated process and in a non-blocking manner.
I rely on Spring Cloud Connectors in order to connect to Heroku (or Foreman locally), and it seems the RedisServiceInfoCreator class allows for a single RedisLabs URL, i.e. REDISCLOUD_URL.
Here is how I have configured my first redis connection factory:
@Configuration
@Profile({Profiles.CLOUD, Profiles.DEFAULT})
public class RedisCloudConfiguration extends AbstractCloudConfig {

    @Bean
    public RedisConnectionFactory redisConnectionFactory() {
        PoolConfig poolConfig = ...
        return connectionFactory().redisConnectionFactory("REDISCLOUD", new PooledServiceConnectorConfig(poolConfig));
    }
    ...
How I am supposed to configure a second connection factory if I intend to use several redis labs databases?
Redis Cloud will set an env var for you only for the first resource in each add-on that you create.
If you create multiple resources in an add-on, you should either set an env var yourself, or use the new endpoint directly in your code.
In short, the answer is yes: RedisConnectionFactory should be using Jedis in order to connect to your Redis DB. It uses a Jedis pool that can only work with a single Redis endpoint; in this regard there is no difference between RedisLabs and a basic Redis.
You should create several connection pools to work with several Redis DBs/endpoints.
Just to expand on that: if you are using multiple DBs to scale, there is no need with RedisLabs, as they support clustering behind a single endpoint. You can simply create a single DB with as much memory as needed; RedisLabs will create a cluster for you and scale your Redis automatically.
If your app does require logical separation, then creating multiple DBs is the right way to go.
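If you do end up wiring the second database yourself rather than through the connector, a bare-bones sketch with a second Jedis-based connection factory could look like this (the property names and the SecondRedisConfiguration class are just examples, not something Spring Cloud Connectors provides):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;

@Configuration
public class SecondRedisConfiguration {

    @Bean
    public RedisConnectionFactory secondRedisConnectionFactory(
            @Value("${second.redis.host}") String host,
            @Value("${second.redis.port}") int port,
            @Value("${second.redis.password}") String password) {
        // Endpoint of the second Redis Cloud database, exposed through env vars/properties you set yourself.
        JedisConnectionFactory factory = new JedisConnectionFactory();
        factory.setHostName(host);
        factory.setPort(port);
        factory.setPassword(password);
        return factory;
    }
}

Any RedisTemplate that should talk to the second database is then built from this factory instead of the primary one.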
