I am setting up an application that connects to MongoDB with high availability. I have studied the documentation and set up the replica set connection successfully through
spring.data.mongodb.uri=mongodb://user:secret@mongo1.example.com:12345,mongo2.example.com:23456/test
Since the application property file is static, the application has to be restarted whenever I change spring.data.mongodb.uri.
If I add a new replica set member in MongoDB, do I need to update the property and restart my application?
Or is the old configuration good enough, i.e. will the MongoDB driver automatically discover and connect to the new replica member for me?
If you are loading properties from a file, you need to restart the application once the property is updated.
Otherwise, you can use a centralized configuration service such as Consul, which reloads the property values in the application when they change (see @RefreshScope).
In your case, once the property is changed you need to disconnect and reconnect to MongoDB in code.
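A minimal sketch of that approach, assuming Spring Cloud (Consul or Config Server) is on the classpath so that @RefreshScope beans are rebuilt on a refresh event. The MongoClient bean below is destroyed and re-created with the latest spring.data.mongodb.uri whenever a refresh is triggered (e.g. POST /actuator/refresh):

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MongoConfig {

    @Bean
    @RefreshScope // bean is destroyed and rebuilt when the property changes
    public MongoClient mongoClient(@Value("${spring.data.mongodb.uri}") String uri) {
        // The old client (and its connections) is closed by the container;
        // the new client connects using the updated seed list from the URI.
        return MongoClients.create(uri);
    }
}
```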
Related
Cosmos DB has a multi-master setup across multiple regions. The Cosmos connection parameters are present in application.properties.
But my observation is that when we use the CosmosRepository interface and call getById, the request goes to a random region rather than the preferred region, which adds latency.
How do I correctly set up the preferred location/region in application.properties?
The "preferredLocation" property does not seem to work.
I have used JHipster to develop quite a few applications, but today I ran into a little problem: how to configure a transaction-routing DataSource in JHipster. I am using Kubegres to set up a PostgreSQL server with multiple read replicas and one write replica. I have configured a non-JHipster project to use Spring's transaction-routing DataSource and succeeded, but when I try the same configuration in JHipster for another project, it runs without errors yet does not route read-only transactions to the read replicas. Has anyone had the same problem and managed to get it to work?
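For context, a minimal sketch of the routing pattern in question, assuming two DataSource beans for the primary and the replica are already defined; a common pitfall (and a plausible cause of the JHipster behavior, though that is an assumption) is that without LazyConnectionDataSourceProxy the connection is acquired before the read-only flag is set, so routing silently falls back to the default:

```java
import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy;
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;
import org.springframework.transaction.support.TransactionSynchronizationManager;

public class ReadWriteRoutingDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        // Route @Transactional(readOnly = true) to the replica, everything else to the primary.
        return TransactionSynchronizationManager.isCurrentTransactionReadOnly()
                ? "read" : "write";
    }

    public static DataSource build(DataSource write, DataSource read) {
        ReadWriteRoutingDataSource routing = new ReadWriteRoutingDataSource();
        Map<Object, Object> targets = new HashMap<>();
        targets.put("write", write);
        targets.put("read", read);
        routing.setTargetDataSources(targets);
        routing.setDefaultTargetDataSource(write);
        routing.afterPropertiesSet();
        // Without this proxy, the connection is fetched before the transaction
        // starts, and the read-only flag is never seen by the lookup above.
        return new LazyConnectionDataSourceProxy(routing);
    }
}
```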
Is it possible to change the TTL property of a Redis cache at runtime if the same property has been changed in the app config server? Is there a way to automate refreshing the Redis instance's properties at runtime when the config server changes?
If you want to get the latest property from Config Server, it is recommended to use the client polling method, which is described at
https://learn.microsoft.com/en-us/azure/spring-apps/how-to-config-server#config-server-refresh
Regarding reloading the Redis configuration, you may need to write some code to handle the refresh event.
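A minimal sketch of one way to do that, assuming Spring Cloud refresh support is in place: mark the CacheManager bean @RefreshScope so it is rebuilt with the new TTL when a refresh event arrives. The property name cache.redis.ttl-seconds is illustrative, not a standard key:

```java
import java.time.Duration;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;

@Configuration
public class RedisCacheConfig {

    @Bean
    @RefreshScope // rebuilt on refresh, picking up the changed TTL
    public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory,
            @Value("${cache.redis.ttl-seconds:600}") long ttlSeconds) {
        RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofSeconds(ttlSeconds));
        return RedisCacheManager.builder(connectionFactory).cacheDefaults(config).build();
    }
}
```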
I am using Spring Cloud Config Server to refresh my application properties at runtime on a scheduled basis in a production environment. My schedule runs biweekly without any issues.
My application runs on Kubernetes across multiple pods. Pods can crash or restart at any moment. When a pod crashes or restarts, it fetches the latest property file from the Config Server repository at application startup rather than waiting for the next scheduled refresh cycle.
This leads to inconsistencies in configuration and application behavior across the pods.
What I am looking for is a strategy to avoid the property refresh at app startup, so that the Spring Cloud Config client only refreshes on the scheduled refresh cycle.
Any suggestions to solve the above would be greatly appreciated.
You want to keep using the old properties even when your application restarts, so you need to keep the old property values somewhere. You cannot do that in the application itself, because the property values come from the Config Server; it is better to set a refresh rate on the Config Server as well, controlling how frequently it pulls configuration from Git (or whatever your backend source is).
If you set the Config Server's refresh rate to two weeks, it will keep serving the old property values, and no matter how often your application restarts, it will get the old properties from the Config Server.
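A minimal sketch of the corresponding Config Server configuration, assuming a Git backend; refresh-rate is in seconds, and 1209600 s = 14 days, so the server keeps serving the cached configuration between biweekly fetches (the repository URL is illustrative):

```properties
spring.cloud.config.server.git.uri=https://example.com/config-repo.git
spring.cloud.config.server.git.refresh-rate=1209600
```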
GemFire cluster suddenly goes down because of ClusterConfigurationNotAvailableException: Unable to retrieve cluster configuration from the locator
We have a 2-locator, 2-server GemFire cluster. We bootstrap the GemFire cache server using cache.xml and Spring Data GemFire XML via the Spring Boot initializer.
We have a client Spring Boot service that connects to the cluster.
The GemFire cluster suddenly goes down at random due to ClusterConfigurationNotAvailableException: Unable to retrieve cluster configuration from the locator. What could be the reason for it? After a restart it works fine for a day or two without issues, and then the problem comes back. It impacts our high availability. Please help us fix this.
org.apache.geode.GemFireConfigException: cluster configuration service not available
at org.apache.geode.internal.cache.GemFireCacheImpl.requestSharedConfiguration(GemFireCacheImpl.java:1025)
at org.apache.geode.internal.cache.GemFireCacheImpl.initialize(GemFireCacheImpl.java:1149)
at org.apache.geode.internal.cache.GemFireCacheImpl.basicCreate(GemFireCacheImpl.java:758)
at org.apache.geode.internal.cache.GemFireCacheImpl.create(GemFireCacheImpl.java:735)
at org.apache.geode.distributed.internal.InternalDistributedSystem.reconnect(InternalDistributedSystem.java:2748)
at org.apache.geode.distributed.internal.InternalDistributedSystem.tryReconnect(InternalDistributedSystem.java:2518)
at org.apache.geode.distributed.internal.InternalDistributedSystem.disconnect(InternalDistributedSystem.java:993)
at org.apache.geode.distributed.internal.DistributionManager$MyListener.membershipFailure(DistributionManager.java:4354)
at org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManager.uncleanShutdown(GMSMembershipManager.java:1556)
at org.apache.geode.distributed.internal.membership.gms.mgr.GMSMembershipManager.lambda$forceDisconnect$0(GMSMembershipManager.java:2593)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.geode.internal.config.ClusterConfigurationNotAvailableException: Unable to retrieve cluster configuration from the locator.
at org.apache.geode.internal.cache.ClusterConfigurationLoader.requestConfigurationFromLocators(ClusterConfigurationLoader.java:259)
at org.apache.geode.internal.cache.GemFireCacheImpl.requestSharedConfiguration(GemFireCacheImpl.java:988)
... 10 more
The expected behavior is high availability of the GemFire cluster.
By default, whenever a GemFire server starts up (or automatically reconnects to the cluster after an unexpected shutdown), it tries to recover the cluster configuration from a locator; if it fails to do so, the member simply shuts itself down, which is what is happening according to the attached stack trace (note the occurrence of org.apache.geode.distributed.internal.InternalDistributedSystem.tryReconnect). I would focus the analysis on why the member was disconnected in the first place; the subsequent failure to reconnect is just a consequence, not the root cause of the issue.
Either way, if you are just using individual XML files to configure your members and don't want to use the Cluster Configuration Service at all, you can start your locators with the property --enable-cluster-configuration=false (the default is true) and your servers with --use-cluster-configuration=false (the default is also true); this will prevent the servers from trying to start up using the cluster configuration from the locators.
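A minimal sketch of the corresponding gfsh start commands; the member names, port, and locator address are illustrative:

```sh
gfsh start locator --name=locator1 --port=10334 --enable-cluster-configuration=false
gfsh start server --name=server1 --locators=localhost[10334] --use-cluster-configuration=false
```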
Hope this helps. Cheers.