Implement Write-Behind Cache using Hazelcast

I am doing a PoC on "write-behind cache" using Hazelcast.
Let's say I have two services/microservices:
"HZServer" (running on ports 9091, 9092, 9093). I have included the below dependencies in this service:
'com.hazelcast:hazelcast-all: 4.0.3'
'org.springframework.boot:spring-boot-starter-data-jpa'
I have implemented MapStore in this service and connected to PostgreSQL using CRUDRepository. only HZServer will be communicating with the database.
I have configured this as a Hazelcast server. Also, if my understanding is correct, Hazelcast is running as an embedded server here.
Defined a MapConfig named "Country" with its MapStoreConfig implementation 'CountryMapStore'.
"MyClient" (running on ports 8081, 8082, 8083.... ). I have included the below dependencies in this service:
'com.hazelcast:hazelcast-all: 4.0.3' (I could have used just hazelcast-client).
I have configured it as a Hazelcast client using "Hazelcast-client.yaml". I also have some RestControllers defined in MyClient service. So, MyClient service will be communicating with the HZServer (Cache) only, and not the DB. I am fetching the "Country" map from the HZInstance in the below manner:
IMap<String, Country> iMap = hazelcastInstance.getMap("Country");
I fetch and put the key-value pairs in the below manner:
Country country = iMap.get(code); // Fetching
iMap.put(code, country); // Inserting or Updating
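For reference, a minimal hazelcast-client.yaml under the Hazelcast 4 schema looks roughly like this (the cluster name and member addresses are assumptions based on the ports above):

hazelcast-client:
  cluster-name: dev
  network:
    cluster-members:
      - 127.0.0.1:9091
      - 127.0.0.1:9092
      - 127.0.0.1:9093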
Please suggest: is this the only way of achieving a write-behind cache in Hazelcast?
Please find the architecture diagram below:

Very detailed context, this is great!
True "Write-behind" means the interactions between Hazelcast server and the database are asynchronous. Thus, it depends on the exact configuration of the MapStore.
Note that in that case, you may lose data. Again, this depends on your specific implementation (e.g. you may retry until the transaction has been acknowledged).
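To the "only way" question: the client-side put/get usage is fine as-is; what makes the map write-behind is the server-side map store configuration. Below is a minimal sketch in Java config, assuming your CountryMapStore implements MapStore<String, Country> (the package name here is invented); the same settings can equally live in hazelcast.yaml:

import com.hazelcast.config.Config;
import com.hazelcast.config.MapStoreConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HazelcastServerConfig {

    @Bean
    public Config hazelcastConfig() {
        // An enabled map store with a write delay > 0 means write-behind;
        // a delay of 0 would make every put synchronous (write-through).
        MapStoreConfig mapStoreConfig = new MapStoreConfig()
                .setEnabled(true)
                .setClassName("com.example.CountryMapStore") // assumed package
                .setWriteDelaySeconds(5)  // flush queued writes every 5 s
                .setWriteBatchSize(100);  // optional: batch the DB writes

        Config config = new Config();
        config.getMapConfig("Country").setMapStoreConfig(mapStoreConfig);
        return config;
    }
}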

Related

How to minimize interaction with Redis when using it as a Spring Session cache?

We are using Spring Cloud Gateway for OAuth2 authentication, after which it stores user session information in Redis with the default settings set by @EnableRedisWebSession and
@Bean
fun redisConnectionFactory(): LettuceConnectionFactory {
    return LettuceConnectionFactory("redis-cache", 6379)
}

@Bean
fun authorizedClientRepository(): ServerOAuth2AuthorizedClientRepository {
    return WebSessionServerOAuth2AuthorizedClientRepository()
}
application.yml cache settings:
spring:
  session:
    store-type: redis
    redis:
      save-mode: on_set_attribute
      flush-mode: on_save
It works fine, though I can see it makes requests to Redis on every user request, as if it doesn't have an in-memory cache at all. Is there any option to change this behaviour (i.e. make requests over the network to Redis only if the current user session is not found in a local in-memory cache)? Maybe I can reimplement some classes, or is there no way to do it except rewriting all the cache logic? Sorry for the quite broad question, but I didn't find any information on this topic in the documentation. Or maybe you could point me at the classes in the Spring Session source code where this logic is implemented, so I could figure out what my options are.
I'm using spring-cloud-starter-gateway 2.2.5.RELEASE, spring-session-core 2.3.1.RELEASE, spring-boot-starter-data-redis-reactive and spring-session-data-redis.
From reading the documentation, I don't believe this is possible out of the box, as using a local cache could leave the state inconsistent amongst all the SCG instances connecting to that Redis instance.
You would need to define your own implementation of a SessionRepository that tries a local Caffeine cache first and, if the session is not found, falls back to Redis. As a starting point, you could duplicate or try extending RedisSessionRepository; a sketch of the caching wrapper follows below.
The thing you'd then need to be careful of, if you have multiple SCG instances running, is how the other instances handle one instance updating Redis while they still hold a locally cached copy.
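A minimal sketch of such a repository, assuming the blocking SessionRepository API and Caffeine on the classpath (the class name, TTL and size are illustrative; a reactive gateway would want the ReactiveSessionRepository equivalent with Mono return types):

import java.time.Duration;
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import org.springframework.session.Session;
import org.springframework.session.SessionRepository;

public class LocalCachingSessionRepository<S extends Session>
        implements SessionRepository<S> {

    private final SessionRepository<S> delegate; // e.g. RedisSessionRepository

    // A short TTL bounds how long a stale local copy can outlive
    // an update made by another gateway instance.
    private final Cache<String, S> localCache = Caffeine.newBuilder()
            .expireAfterWrite(Duration.ofSeconds(30))
            .maximumSize(10_000)
            .build();

    public LocalCachingSessionRepository(SessionRepository<S> delegate) {
        this.delegate = delegate;
    }

    @Override
    public S createSession() {
        return delegate.createSession();
    }

    @Override
    public void save(S session) {
        delegate.save(session);               // write through to Redis
        localCache.put(session.getId(), session);
    }

    @Override
    public S findById(String id) {
        S cached = localCache.getIfPresent(id);
        if (cached != null) {
            return cached;                    // local hit: no network call
        }
        S loaded = delegate.findById(id);     // miss: go to Redis
        if (loaded != null) {
            localCache.put(id, loaded);
        }
        return loaded;
    }

    @Override
    public void deleteById(String id) {
        localCache.invalidate(id);
        delegate.deleteById(id);
    }
}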

Communication between Spring instances behind a load balancer

I have a few instances of Spring apps running behind a load balancer. I am using EHCache as a caching system on each of these instances.
Let's say I receive a request that is refreshing a part of the cache on one instance. I need a way to tell the other instances to refresh their cache (or to replicate it).
I'm more interested in a solution based on Spring and not just cache replication, because there are other scenarios similar to this one that require the same solution.
How can I achieve this?
There is no simple Spring solution for this; it depends on the requirements. You can use any kind of pub/sub, such as a JMS topic, to notify your nodes. The problem with this approach is that you cannot guarantee consistency: the other nodes can still read the old data for a while. In my current project we use Redis. We configured it as a cache with Spring Data Redis, and there's no need to notify the other nodes since the cache is shared. In non-cache scenarios we also use Redis as a pub/sub service.
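For the notification approach, here is a minimal sketch using Redis pub/sub via Spring Data Redis (the channel name, the cache name, and the use of Spring's CacheManager abstraction over EHCache are all illustrative assumptions):

import java.nio.charset.StandardCharsets;
import org.springframework.cache.Cache;
import org.springframework.cache.CacheManager;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.listener.ChannelTopic;
import org.springframework.data.redis.listener.RedisMessageListenerContainer;

@Configuration
public class CacheInvalidationConfig {

    private static final String CHANNEL = "cache-invalidation"; // assumed name

    // Each instance subscribes and evicts its local cache on notification.
    @Bean
    public RedisMessageListenerContainer invalidationListener(
            RedisConnectionFactory connectionFactory, CacheManager cacheManager) {
        RedisMessageListenerContainer container = new RedisMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.addMessageListener((message, pattern) -> {
            String cacheName = new String(message.getBody(), StandardCharsets.UTF_8);
            Cache cache = cacheManager.getCache(cacheName);
            if (cache != null) {
                cache.clear(); // drop the local copy; it is reloaded lazily
            }
        }, new ChannelTopic(CHANNEL));
        return container;
    }
}

The instance that refreshed its cache then publishes the cache name, e.g. stringRedisTemplate.convertAndSend("cache-invalidation", "countries"). As noted above, this is only eventually consistent: a node may serve stale data until the message arrives.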

Using Spring Cloud Connector for Heroku in order to connect to multiple RedisLabs databases

I have a requirement for multiple RedisLabs databases for my application as described in their home page:
multiple dedicated databases in a plan
We enable multiple DBs in a single plan, each running in a dedicated process and in a non-blocking manner.
I rely on Spring Cloud Connectors in order to connect to Heroku (or Foreman locally), and it seems the RedisServiceInfoCreator class allows for a single RedisLabs URL, i.e. REDISCLOUD_URL.
Here is how I have configured my first redis connection factory:
@Configuration
@Profile({Profiles.CLOUD, Profiles.DEFAULT})
public class RedisCloudConfiguration extends AbstractCloudConfig {

    @Bean
    public RedisConnectionFactory redisConnectionFactory() {
        PoolConfig poolConfig = ...
        return connectionFactory().redisConnectionFactory("REDISCLOUD",
                new PooledServiceConnectorConfig(poolConfig));
    }
    ...
How am I supposed to configure a second connection factory if I intend to use several RedisLabs databases?
Redis Cloud will set an env var for you only for the first resource in each add-on that you create.
If you create multiple resources in an add-on, you should either set an env var yourself or use the new endpoint directly in your code.
In short, the answer is yes: RedisConnectionFactory should be using Jedis in order to connect to your Redis DB. It uses a Jedis pool that can only work with a single Redis endpoint, and in this regard there is no difference between RedisLabs and a basic Redis.
You should create several connection pools to work with several Redis DBs/endpoints.
Just to extend on that: if you are using multiple DBs to scale, there is no need with RedisLabs, as they support clustering behind a single endpoint. You can simply create a single DB with as much memory as needed; RedisLabs will create a cluster for you and scale your Redis automatically.
If your app does require logical separation, then creating multiple DBs is the right way to go.
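For the "set an env var yourself" route, a second connection factory can be wired manually, bypassing the connector. A minimal sketch, assuming you exported the second endpoint yourself as REDISCLOUD_URL_2 (an invented variable name) in redis:// URL form:

import java.net.URI;
import org.springframework.context.annotation.Bean;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;

@Bean
public RedisConnectionFactory secondRedisConnectionFactory() {
    // e.g. redis://rediscloud:password@host:port
    URI uri = URI.create(System.getenv("REDISCLOUD_URL_2"));

    JedisConnectionFactory factory = new JedisConnectionFactory();
    factory.setHostName(uri.getHost());
    factory.setPort(uri.getPort());
    if (uri.getUserInfo() != null) {
        // Redis Cloud URLs carry credentials as user:password
        factory.setPassword(uri.getUserInfo().split(":", 2)[1]);
    }
    factory.afterPropertiesSet();
    return factory;
}

Each such factory owns its own Jedis pool, which matches the one-pool-per-endpoint constraint described above.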

How to safely and efficiently connect to a MongoDB replicaset instance with the C# Driver

I am using MongoDB with the C# driver and am wondering what is the most efficient yet safe way to create connections to the database.
Thread Safety
According to the MongoDB C# driver documentation, the MongoClient, MongoServer, MongoDatabase, MongoCollection and MongoGridFS classes are thread safe. Does this mean I can have a singleton instance of MongoClient or MongoDatabase?
The documentation also states that a connection pool is used for MongoClient, so the management of connections to MongoDB is abstracted from the MongoClient class anyway.
Example Scenario
Let's say I have three MongoDB instances in my replica set, so I create MongoClient and MongoDatabase objects based upon the three server addresses for these instances. Can I create a static singleton for the database and client objects and use them across multiple requests simultaneously? What if one of the instances dies? If I cache the Mongo objects, how can I make sure this scenario is dealt with safely?
In my project I'm using a singleton MongoClient only, then get MongoServer and the other objects from the MongoClient.
This is because of what you said: the connection pool is in the MongoClient, and I definitely don't want more than one connection pool. Here's what the documentation says:
When you are connecting to a replica set you will still use only one instance of MongoClient, which represents the replica set as a whole. The driver automatically finds all the members of the replica set and identifies the current primary.
Actually, MongoClient was added to the C# driver in 1.7 to represent the whole replica set and to handle failover and load balancing, because MongoServer doesn't have the ability to do that. Thus you shouldn't cache MongoServer, because once a server goes offline you have no way to know it.
EDIT: Just had a look at the source code; I may have made a mistake. The MongoClient doesn't handle the connection pool, the MongoServer does (at least up to driver 1.7; I haven't looked at the latest driver source yet). This makes sense because MongoServer represents a real Mongo instance, and one connection pool stores connections only to that server.

Reuse jax-ws client proxies for different addresses

I have a bunch of web services servers (around 200) running on the same machine which expose the same service on different ports.
I have a client which performs tasks that include calling the service on different servers.
Something like:
while (true) {
    task = readTask();
    runHelloService(task.serverAddress);
}
I was wondering what is the best way to generate the HelloService client proxy.
Can I generate one and replace the target address before each call?
Should I generate a client proxy per server (which means 200 client proxies) and use the relevant one?
I will probably want to run the above loop concurrently on several threads.
Currently I have only one proxy, which is generated by Spring and CXF with the jaxws:client declaration.
This is an interesting use case. I believe that changing the endpoint whilst sharing the proxy amongst multiple threads will not work: there is a one-to-one relationship between a client proxy and a conduit definition, and changes to a conduit are explicitly not thread safe.
I recommend eschewing Spring configuration altogether for the client proxies and instead constructing the 200 proxies programmatically, one per server, as sketched below.
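A minimal sketch of that approach, assuming a HelloService SEI generated from your WSDL (the interface and class names here are illustrative). Each address gets its own proxy, built once and reused, so no conduit is mutated after creation:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.cxf.jaxws.JaxWsProxyFactoryBean;

public class HelloClientPool {

    // One proxy per server address, created lazily and cached.
    private final Map<String, HelloService> proxies = new ConcurrentHashMap<>();

    public HelloService clientFor(String serverAddress) {
        return proxies.computeIfAbsent(serverAddress, address -> {
            JaxWsProxyFactoryBean factory = new JaxWsProxyFactoryBean();
            factory.setServiceClass(HelloService.class);
            factory.setAddress(address);
            return (HelloService) factory.create();
        });
    }
}

The loop from the question then becomes runHelloService(pool.clientFor(task.serverAddress)), and the pool can be shared by all worker threads because each proxy's endpoint address never changes after creation.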
See also Custom CXF Transport - Simplified Client Workflow.
