How to listen to keyspace events using Spring Data Redis with a GCP managed cluster?

I am using secondary indexes with Redis thanks to Spring Data Redis @Indexed annotations. My entry has a TTL.
This has the side effect of keeping the indexes alive after the expiration of the main entry. This is expected, and Spring can listen to keyspace expiry events to remove those indexes once the main entry expires.
However, when I enable keyspace expiry event listening in Spring, I face the following error at startup:
ERR unknown command 'CONFIG'
This is how I configured the listener:
@EnableRedisRepositories(enableKeyspaceEvents = EnableKeyspaceEvents.ON_STARTUP)
What can I do to make this work?

This problem is linked to the fact that the Redis cluster is managed, and as such remote clients can't call CONFIG on it. When the Spring keyspace event listener is enabled, it tries to configure Redis to emit keyspace expiry events by setting the notify-keyspace-events config key to "Ex".
The workaround is twofold:
Configure your Memorystore instance on GCP, adding the notify-keyspace-events key with "Ex" as its value.
Use @EnableRedisRepositories(enableKeyspaceEvents = EnableKeyspaceEvents.ON_STARTUP, keyspaceNotificationsConfigParameter = "") in your client configuration, as sketched below. The explicitly empty String prevents Spring from trying to override the remote configuration.
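Put together, a minimal Kotlin configuration class could look like this (the class name is illustrative):

import org.springframework.context.annotation.Configuration
import org.springframework.data.redis.core.RedisKeyValueAdapter.EnableKeyspaceEvents
import org.springframework.data.redis.repository.configuration.EnableRedisRepositories

@Configuration
@EnableRedisRepositories(
    enableKeyspaceEvents = EnableKeyspaceEvents.ON_STARTUP,
    // Empty string: don't issue CONFIG SET notify-keyspace-events against the managed instance
    keyspaceNotificationsConfigParameter = ""
)
class RedisConfig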

Related

Spring Boot Redis cache TTL property change on the fly

Is it possible to change the TTL property of the Redis cache at runtime if the same property has been changed in the app's config server? Is there a way to automate refreshing the Redis instance properties at runtime when the config server changes?
If you want to get the latest property from the Config Server, the recommended approach is the client polling method, which is described at
https://learn.microsoft.com/en-us/azure/spring-apps/how-to-config-server#config-server-refresh
Regarding pushing the change to the Redis instances, you may need to write some code to send out the event.
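If you go that route, one common pattern is to put the cache manager in @RefreshScope so that a config refresh rebuilds it with the new TTL. A minimal sketch in Kotlin, assuming Spring Cloud is on the classpath and a hypothetical cache.ttl-seconds property:

import java.time.Duration
import org.springframework.beans.factory.annotation.Value
import org.springframework.cloud.context.config.annotation.RefreshScope
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.data.redis.cache.RedisCacheConfiguration
import org.springframework.data.redis.cache.RedisCacheManager
import org.springframework.data.redis.connection.RedisConnectionFactory

@Configuration
class CacheConfig {
    // Rebuilt on a refresh event, picking up the latest TTL from the config server
    @Bean
    @RefreshScope
    fun cacheManager(
        factory: RedisConnectionFactory,
        @Value("\${cache.ttl-seconds:600}") ttlSeconds: Long // hypothetical property name
    ): RedisCacheManager =
        RedisCacheManager.builder(factory)
            .cacheDefaults(
                RedisCacheConfiguration.defaultCacheConfig()
                    .entryTtl(Duration.ofSeconds(ttlSeconds))
            )
            .build()
}

Note that the TTL is applied per entry at write time, so entries written before the refresh keep their old expiry.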

Adding prefix to RabbitMQ queues and exchanges in Spring Cloud dataflow backend

I would like to utilize my own RabbitMQ instance as the middleware broker for Spring Cloud Data Flow.
The problem is that we have a prefix and suffix policy on exchange and queue creation that must be followed.
Is it possible to force Spring Cloud Data Flow to add this prefix and suffix?
Example:
RABBITMQ_QUEUE_PREFIX="TEAM1"
RABBITMQ_QUEUE_SUFFIX="IN"
RABBITMQ_EXCHANGE_PREFIX="TEAM1"
RABBITMQ_EXCHANGE_SUFFIX="OUT"
To result in queues and exchanges:
TEAM1.queuename.IN
TEAM1.exchangename.OUT
You can configure the prefix at the application level (e.g. in application.properties) or set it as a deployer property in SCDF; see the sketch below and https://docs.spring.io/spring-cloud-stream-binder-rabbit/docs/current/reference/html/spring-cloud-stream-binder-rabbit.html#_rabbitmq_consumer_properties
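Concretely, the RabbitMQ binder exposes a prefix property per binding, and a consumer queue is named prefix + destination + "." + group, so the consumer group can serve as the queue suffix; the linked docs show no dedicated suffix property for exchanges. A sketch, where the binding name input and the app name my-sink are assumptions:

# Inside the app's application.properties
spring.cloud.stream.rabbit.bindings.input.consumer.prefix=TEAM1.
spring.cloud.stream.bindings.input.group=IN

# Or per app as SCDF deployment properties
app.my-sink.spring.cloud.stream.rabbit.bindings.input.consumer.prefix=TEAM1.
app.my-sink.spring.cloud.stream.bindings.input.group=IN

With these values the consumer queue comes out as TEAM1.queuename.IN.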

How to minimize interaction with Redis when using it as a Spring Session cache?

We are using Spring Cloud Gateway for OAuth2 authentication, after which it stores users' session information in Redis with the default settings set by @EnableRedisWebSession and
@Bean
fun redisConnectionFactory(): LettuceConnectionFactory {
    return LettuceConnectionFactory("redis-cache", 6379)
}

@Bean
fun authorizedClientRepository(): ServerOAuth2AuthorizedClientRepository {
    return WebSessionServerOAuth2AuthorizedClientRepository()
}
application.yml cache settings:
spring:
  session:
    store-type: redis
    redis:
      save-mode: on_set_attribute
      flush-mode: on_save
It works fine, though I can see it makes requests to Redis on every user request, as if there were no in-memory cache at all. Is there any option to change this behaviour (i.e. only make network requests to Redis if the current user session is not found in a local in-memory cache)? Maybe I can reimplement some classes, or is there no way to do it short of rewriting all the cache logic? Sorry for the quite broad question, but I didn't find any information on this topic in the documentation. Or maybe you could point me at the classes in the Spring Session source code where this logic is implemented, so I could figure out what my options are.
I'm using spring-cloud-starter-gateway 2.2.5.RELEASE, spring-session-core 2.3.1.RELEASE, spring-boot-starter-data-redis-reactive and spring-session-data-redis.
From reading the documentation, I don't believe this is possible out of the box, since a local cache could result in inconsistent state amongst all the SCG instances connecting to that Redis instance.
You would need to define your own implementation of a SessionRepository that tries a local Caffeine cache first and, if the session is not found, falls back to Redis. As a starting point, you could duplicate or try extending RedisSessionRepository.
The only thing you'd need to be careful of is handling multiple running SCG instances: if another instance updates Redis, how do the other instances deal with it when they already hold a locally cached copy?
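A minimal sketch of that idea as a reactive wrapper around the existing repository (the Caffeine dependency, cache sizing, and the delegate wiring are assumptions, and it deliberately ignores the cross-instance staleness problem mentioned above):

import com.github.benmanes.caffeine.cache.Caffeine
import java.time.Duration
import org.springframework.session.ReactiveSessionRepository
import org.springframework.session.Session
import reactor.core.publisher.Mono

// Tries a local Caffeine cache first, only hitting Redis on a miss.
class LocalCachingSessionRepository<S : Session>(
    private val delegate: ReactiveSessionRepository<S>
) : ReactiveSessionRepository<S> {

    private val cache = Caffeine.newBuilder()
        .maximumSize(10_000)
        .expireAfterWrite(Duration.ofMinutes(5)) // keep short to limit staleness
        .build<String, S>()

    override fun createSession(): Mono<S> = delegate.createSession()

    override fun save(session: S): Mono<Void> =
        delegate.save(session).doOnSuccess { cache.put(session.id, session) }

    override fun findById(id: String): Mono<S> =
        Mono.justOrEmpty(cache.getIfPresent(id))
            .switchIfEmpty(delegate.findById(id).doOnNext { cache.put(id, it) })

    override fun deleteById(id: String): Mono<Void> =
        delegate.deleteById(id).doOnSuccess { cache.invalidate(id) }
}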

Autoscaling up in Mongo with Spring Boot

I am setting up an application connecting to MongoDB with high availability.
I have studied the documentation and set up the replica set successfully through
spring.data.mongodb.uri=mongodb://user:secret@mongo1.example.com:12345,mongo2.example.com:23456/test
As the application property file is fixed, the application has to be restarted if I change spring.data.mongodb.uri.
What if I add a new replica member in Mongo, do I need to restart my application with the updated property?
Or is it enough to keep the old configuration, and the Mongo driver will automatically connect to the new replica member for me?
If you are loading properties from the file, you need to restart the application once the property is updated.
Otherwise, you need to use a central property-management tool like Consul, which reloads the property values in the application when they change (@RefreshScope).
In your case, once the property is changed you need to disconnect and reconnect to MongoDB in code.
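One way to express that reconnect is to put the client bean in @RefreshScope, so a refresh event closes and recreates it from the updated URI. A sketch in Kotlin, assuming Spring Cloud and the sync Mongo driver are on the classpath:

import com.mongodb.client.MongoClient
import com.mongodb.client.MongoClients
import org.springframework.beans.factory.annotation.Value
import org.springframework.cloud.context.config.annotation.RefreshScope
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration

@Configuration
class MongoConfig {
    // Destroyed and recreated on a refresh event, reconnecting with the new URI
    @Bean(destroyMethod = "close")
    @RefreshScope
    fun mongoClient(@Value("\${spring.data.mongodb.uri}") uri: String): MongoClient =
        MongoClients.create(uri)
}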

How does the Spring Boot loggers actuator behave in a clustered environment?

I have a query related to the Spring Boot Actuator. Through the Actuator I can change the log level dynamically.
How does this work in a clustered environment?
If I make a REST (POST) call to change the log level, which node will it be applied to?
Or will it be applied to all the nodes?
If it gets applied to all the nodes in the cluster, how can I restrict it to only a particular node?
You should use an external configuration server (Spring Cloud Config) and Spring Cloud Bus to propagate configuration changes to all the servers in your cluster.
Place your log configuration on the configuration server; on each change, a message will be sent via a message broker (like RabbitMQ) to all the servers listening to the config.
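To answer the per-node part directly: a plain actuator call only changes the level on the instance that receives it, which is also how you would target a single node. Assuming the loggers endpoint is exposed, a request like this (logger name illustrative) sent to one instance directly, bypassing the load balancer, affects only that node:

POST /actuator/loggers/com.example.service
Content-Type: application/json

{"configuredLevel": "DEBUG"}

The config-server-plus-bus route above is what propagates a change to every node.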
