Let's say I have started to use spring-cloud-config-server and have it working (using a Git repository in the background). Now I will deploy that config server on a cluster (a Mesos cluster, AWS, etc.).
For reliability I would like to start two instances of the same config service within the cluster. By using a service registry, all other services can then connect to either config server and get their configuration.
But here is the question: how is the synchronisation between those config servers handled? For example, if I change the configuration in the Git repository, there is some window of time during which the two instances will not deliver the exact same information.
Does a solution exist for that? Some kind of Raft consensus protocol/setup? Or is the only solution not to use spring-cloud-config-server and to use etcd (or some other tool) instead?
Update:
It might be an option to enable a force-update option for the Git repositories. This makes sure the most recent state is always served, with the drawback of performance.
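A sketch of what such a force-update setting looks like on the open-source Config Server (the repository URI is a placeholder; spring.cloud.config.server.git.force-pull is a real property):

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/example/config-repo  # placeholder repository
          force-pull: true  # force a fresh pull so the local clone always matches the remote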
In Spring Cloud Services v3.1.2 and later, you can use the periodic parameter when configuring a Config Server service instance to cause the mirror service to automatically refresh a Git repository mirror periodically.
pcf link
Related
I am using Spring Cloud Config Server to refresh my application properties at runtime on a scheduled basis in a production environment. My schedule runs biweekly without any issues.
My application runs on Kubernetes across multiple pods. Pods tend to crash or restart at any moment. When a pod crashes or restarts, it fetches the latest property file from the Config Server and repository at application startup, rather than waiting for the next scheduled refresh cycle.
This leads to inconsistencies across the pods' configuration and application behavior.
What I am looking for is a strategy to avoid the property refresh at app startup, so that the Spring Cloud Config client only refreshes based on the refresh cycle.
Any suggestions to solve the above would be greatly appreciated.
You want to keep using the old properties even after your application restarts, so you need to keep the old property values somewhere. You cannot do that in the application itself, because the property values come from the config server. It is therefore better to also set a refresh rate on the config server, controlling how frequently it pulls configuration from Git (or whatever your source is).
If you set the config server's refresh rate to two weeks, it will keep serving the old property values in between; no matter how often your application restarts, it will get the old properties from the config server.
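Assuming the Git backend, a minimal sketch of such a server-side refresh rate (refreshRate is a real Spring Cloud Config Server property, given in seconds; the URI is a placeholder):

spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/example/config-repo  # placeholder repository
          refreshRate: 1209600  # 14 days in seconds; cached config is served in between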
I am creating a Spring Boot microservice project with IntelliJ IDEA.
Currently I have developed three separate Spring Boot REST services: a customer service, a vehicle service and a Spring Cloud Config server. The Spring Cloud Config server points to a GitHub repository.
The issue is that sometimes the above projects take more than 10 minutes to run, and sometimes they don't run at all and give the error message "failed to check application readystate intellij attached provider for the vm is not found". I have no idea why this happens.
There are two possible causes:
1. IntelliJ IDEA and the Spring application are running in different JVMs.
There is a bug for IntelliJ IDEA regarding that:
https://youtrack.jetbrains.com/issue/IDEA-210665
Here is a short summary:
By default, IntelliJ IDEA uses a local JMX connector for retrieving the Spring Boot actuator endpoints' data. However, it may be impossible to get the local JMX connector address via the attach API if the Spring Boot application and IntelliJ IDEA are run by different JVMs. In this case, add the following lines to the VM options of your Spring Boot run configuration:
-Dcom.sun.management.jmxremote.port={some_port}
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
As mentioned in the official Oracle documentation, this configuration is insecure. Any remote user who knows (or guesses) your port number and host name will be able to monitor and control your Java applications and platform.
2. Prolonged time to retrieve local hostname
You can check that time using inetTester. Normally it should take only a few milliseconds to complete. If it takes a long time, you can add the hostname returned by inetTester to the /etc/hosts file like this:
127.0.0.1 localhost winsky
::1 localhost winsky
I have two Docker instances that I launch with docker-compose.
One holds a Cassandra instance.
One holds a Spring Boot application that tries to connect to that Cassandra instance.
However, the Spring Boot application always fails, because it is trying to connect to a Cassandra instance that is not yet ready to accept connections.
I have tried:
Using restart:always in Docker-compose
This still doesn't always work, because Cassandra might be up 'enough' to no longer crash the Spring Boot application, but not up 'enough' to have successfully created the table/column family. On top of that, this is a very hacky solution.
Using healthcheck
It seems like healthcheck in compose doesn't have restart capabilities
Using a bash script as entrypoint
In the hope that I could use netstat, ping, ... whatever, to determine the readiness state of Cassandra.
Right now the only thing that really works is using that same bash script to sleep the process for x seconds and then start the jar. This is even more hacky...
Does anyone have an idea on how to solve this?
Thanks!
Does the Spring Boot service defined in the docker-compose.yml use depends_on for the cassandra service? Note that depends_on by itself only controls start order; combined with a healthcheck and the service_healthy condition, it can make the service start only once Cassandra is actually ready.
https://docs.docker.com/compose/compose-file/#depends_on
Take a look at this GitHub repository to find a healthcheck for the cassandra service (a compose sketch combining both follows below).
https://github.com/docker-library/healthcheck
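A minimal compose sketch of that combination, assuming compose file format 2.1 (which supports the service_healthy condition) and that cqlsh is available inside the Cassandra image; the application image name is a placeholder:

version: "2.1"
services:
  cassandra:
    image: cassandra:3.11
    healthcheck:
      # consider Cassandra healthy once it answers a trivial CQL query
      test: ["CMD-SHELL", "cqlsh -e 'describe cluster'"]
      interval: 15s
      timeout: 10s
      retries: 10
  app:
    image: example/spring-boot-app  # placeholder image
    depends_on:
      cassandra:
        condition: service_healthy  # wait for the healthcheck, not just container start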
CONCLUSION
After some discussion we found out that docker-compose does not provide functionality for waiting until services are up and healthy in the way Kubernetes and OpenShift do (see comments below). The recommendation is to use a wrapper script (docker-entrypoint.sh) that waits for the depended-on service to come up, but that requires binaries in the image that the actual service shouldn't need, such as the Cassandra client binary. Additionally, the service depending on Cassandra could never come up if Cassandra doesn't, which shouldn't happen.
A main point about microservices is that they have to be resilient to failures: they are not supposed to die, or to fail to come up, just because a service they depend on is currently unavailable or unexpectedly disappears. 'Unexpected' is actually the wrong word in this context, because you should always expect such issues in a distributed environment; even with docker-compose you will face issues like those discussed in this topic.
The following link points to a tutorial which helped to integrate Cassandra properly into a Spring Boot application. It shows how to obtain a Cassandra connection with retry behavior, so the service is resilient to a missing Cassandra database and no longer fails to start. Hope this helps others as well.
https://dzone.com/articles/containerising-a-spring-data-cassandra-application
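As a rough sketch of that retry idea (not the tutorial's exact code; it assumes the DataStax Java driver 3.x on the classpath, and the contact point and keyspace are supplied by the caller):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.exceptions.NoHostAvailableException;

public class CassandraSessionFactory {

    // Keep retrying until Cassandra accepts connections instead of
    // failing the whole application at startup.
    public static Session connect(String contactPoint, String keyspace) throws InterruptedException {
        while (true) {
            Cluster cluster = Cluster.builder().addContactPoint(contactPoint).build();
            try {
                // succeeds only once Cassandra is up and the keyspace exists
                return cluster.connect(keyspace);
            } catch (NoHostAvailableException e) {
                cluster.close();  // release resources before the next attempt
                System.out.println("Cassandra not ready yet, retrying in 5s: " + e.getMessage());
                Thread.sleep(5000);
            }
        }
    }
}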
I'm going to use Spring Config Service (SCS) for our microservices architecture.
Currently our cloud stack is on AWS.
SCS will run in Docker, thanks to a pipeline plus CloudFormation, and our config repository will be on a private GitHub repository with encrypted values:
Is there any best practice to refresh the repository that is "pulled" inside the Docker container?
How can I update it on all instances? (My service will be load balanced with HA.)
Please refer to the following POC:
https://github.com/pooja-varma/cloud-config-and-eureka-server
Maybe it helps you.
Config clients don't poll for changes; a refresh has to be triggered. The application listens for an EnvironmentChangeEvent, and any changed properties are then loaded again. If you need more control over when the refresh happens, and you want it to be atomic, I would recommend using @RefreshScope beans, which are lazy proxies initialized only when they are used. The Environment of your application is pulled each time, and the actuator refresh endpoint comes to the rescue as well.
Please refer to the documentation here.
http://cloud.spring.io/spring-cloud-static/docs/1.0.x/spring-cloud.html#_refresh_scope
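A minimal sketch of a refresh-scoped bean (the property name and class are made up for illustration):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RefreshScope  // rebuilt lazily after a refresh, picking up new property values
public class MessageController {

    @Value("${app.message:default}")  // hypothetical property served by the config server
    private String message;

    @GetMapping("/message")
    public String message() {
        return message;  // reflects the latest value once a refresh has been triggered
    }
}

Triggering POST /refresh (or /actuator/refresh on newer Spring Boot versions) then causes the bean to be re-bound on next use.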
In my project we have a requirement to run two instances of the Spring Cloud Config server, so that if one instance goes down, the other will take over the config server responsibilities.
Currently, you would need to put the config server behind a load balancer. It is stateless, so that wouldn't hurt. There is an open issue to configure multiple config server URLs in the client, so it could do failover there.
If you are running multiple instances of the config server, you can have them all register themselves in Eureka, and have all the other microservices look the config server up by its application name via Eureka. This way, Zuul (and Ribbon) will take care of the load balancing.
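That is the "discovery first" bootstrap mode. As a sketch, each client's bootstrap.yml could look like this (the service id must match the name the config server registers under, "configserver" being the default; the Eureka address is a placeholder):

spring:
  cloud:
    config:
      discovery:
        enabled: true             # look the config server up in the service registry
        service-id: configserver  # name the config server instances register with
eureka:
  client:
    serviceUrl:
      defaultZone: http://eureka-host:8761/eureka/  # placeholder Eureka address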
Edit:
I guess spencergibb is right. It's best to use a load balancer, e.g. an ELB, if you're going to deploy on AWS.
Consider configuring multiple spring.cloud.config.uri values for high availability.
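Newer Spring Cloud Config clients accept a comma-separated list of URLs in that property and fail over between them; as a sketch (the hostnames are placeholders):

spring:
  cloud:
    config:
      # the client tries each server in order until one responds
      uri: http://config-server-1:8888,http://config-server-2:8888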