We are exploring Archaius for our microservices. We want to set up a Configuration Server as a microservice that stores the configuration files of the other microservices.
We have other microservices (Spring Boot based), say
a) Producer
b) Consumer
deployed in different environments/VMs. We also deploy all these microservices as a cluster (i.e., we run multiple instances of Producer and Consumer) to support high availability.
Please let us know how dynamically changed values in the Configuration Server can be made available to the other microservices (the multiple Producer and Consumer instances).
Thanks
Related
I have a set of microservices built with Spring Boot REST. These microservices will be deployed in an autoscaled and load-balanced environment. One of these services is responsible for managing the system's configuration. When the other microservices start up, they obtain the configuration from this service. If and when the configuration is updated, I need to inform all currently running microservice instances to update their cached configuration.
I am considering using RabbitMQ with a fanout exchange. In this solution, each instance at startup will create its own queue and bind that queue to the exchange. When there is a configuration change, the configuration service will publish an update to all queues currently bound to that exchange.
However, as service instances are deleted, I cannot figure out how I would delete the queue specific to that instance. I googled but could not find a complete working example of a solution.
Any help or advice?
The idea and solution are correct. What you are missing is that those queues, created by your consumer services, can be declared with auto-delete=true: https://www.rabbitmq.com/queues.html. As long as your service is up, the queue is there as well. When you stop your service, its consumers are stopped and unsubscribed. The moment the last consumer is unsubscribed, the queue is deleted from the broker.
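A minimal Spring AMQP sketch of that pattern (the exchange name config.updates is an assumption; use whatever your configuration service publishes to). Spring's AnonymousQueue is declared exclusive, non-durable, and auto-delete with a random name, so the broker removes it when the instance shuts down:

```java
import org.springframework.amqp.core.AnonymousQueue;
import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.FanoutExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ConfigUpdateListenerConfig {

    // Exchange name is an assumption; match what your config service uses.
    @Bean
    public FanoutExchange configExchange() {
        return new FanoutExchange("config.updates");
    }

    // AnonymousQueue = exclusive + auto-delete + random name:
    // it is removed from the broker when this instance's connection closes.
    @Bean
    public Queue instanceQueue() {
        return new AnonymousQueue();
    }

    @Bean
    public Binding binding(FanoutExchange configExchange, Queue instanceQueue) {
        return BindingBuilder.bind(instanceQueue).to(configExchange);
    }
}
```

With this wiring there is nothing to clean up manually: a deleted service instance drops its connection, and the broker deletes the queue for you.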
On the other hand, I would suggest looking into the Spring Cloud Bus project, which is aimed at exactly this kind of task: https://spring.io/projects/spring-cloud-bus.
I am new to Kafka.
I have an existing microservice setup with Spring Boot, Ribbon, Eureka, and Zuul.
If I use Kafka as the messaging platform between microservice calls, does Kafka provide load balancing for the microservices so that I can get rid of Ribbon?
Please give me some suggestions.
Thanks
Kafka stores data in a distributed log and provides clients for building a streaming platform. It is not a load balancer, but data is partitioned among brokers, so load is distributed as part of its custom TCP protocol.
Ribbon is a client-side library for spreading load over other services. I haven't used it, but it does not offer an asynchronous, push-pull client model.
You could use them together: a Kafka consumer could make an HTTP/RPC call to a Ribbon-balanced service.
Is it possible to configure Kafka to work with two separate clusters in a single Spring Boot application?
Use case: I have two clusters, each with replicas + ZooKeeper:
Cluster #1 bootstrap-servers: server1.example.com,server2.example.com,server3.example.com
Cluster #2 bootstrap-servers: target-server1.example.com,target-server2.example.com,target-server3.example.com
I need to consume messages from Cluster #1, do some calculations based on that data, and produce the results to a Cluster #2 topic. Is there any way to configure Kafka in a single Spring application to handle this approach?
Yes, but if you want to consume and produce with both clusters, you have to manually configure the consumer and producer factory @Beans etc.
Boot can only auto-configure one of each from properties.
But if you are simply consuming from one cluster and producing to another, it can be done with properties.
Use
spring.kafka.producer.bootstrap-servers=...
...
spring.kafka.consumer.bootstrap-servers=...
These will override the common spring.kafka.bootstrap-servers property.
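Put together, a sketch of the properties for this question's use case (ports are assumptions):

```properties
# Consume from Cluster #1, produce results to Cluster #2
spring.kafka.consumer.bootstrap-servers=server1.example.com:9092,server2.example.com:9092,server3.example.com:9092
spring.kafka.producer.bootstrap-servers=target-server1.example.com:9092,target-server2.example.com:9092,target-server3.example.com:9092
```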
I have messages coming in from Kafka, so I am planning to write a listener and process them in "onMessage". I want to process each message and push it into Solr.
So my question is more architectural: I have worked on web apps all my career, so in the big data world, how do I deploy the Spring Kafka listener so that I can process thousands of messages a second?
How do I make my Spring code use multiple nodes to distribute the load?
I am planning to write a Spring Boot application to run in a Tomcat container.
If you use the same group id for all instances, different partitions will be assigned to different consumers (instances of your application).
So, be sure that you specify enough partitions in the topic you are going to consume from.
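A minimal listener sketch (topic name, group id, and concurrency value are assumptions): every instance declares the same groupId, so Kafka divides the topic's partitions among all running instances, and concurrency adds consumer threads within each instance:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@Component
public class MessageIndexer {

    // All instances share "solr-indexer", so partitions are split across
    // them; concurrency = 3 runs three consumer threads per instance.
    @KafkaListener(topics = "events", groupId = "solr-indexer", concurrency = "3")
    public void onMessage(String message) {
        // process the record and push it into Solr here
    }
}
```

Note that total parallelism is capped by the partition count: with, say, 12 partitions, at most 12 consumer threads across all instances will receive messages.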
I have a query related to Spring Boot Actuator. Through Actuator I can change the log level dynamically.
How does this work in a clustered environment?
If I make the REST (POST) call to change the log level, which node will it be applied to?
Or will it be applied to all the nodes?
If it gets applied to all the nodes in the cluster, how do I restrict it to only a particular node?
You should use an external configuration server (Spring Cloud Config) and use Spring Cloud Bus to propagate configuration changes to all the servers in your cluster.
Place your log configuration on the configuration server; on each change, a message will be sent through a message broker (like RabbitMQ) to all the servers listening to the config.
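As a sketch of the moving parts, each instance would include the Spring Cloud Bus starter and point at the shared broker; a hedged application.properties fragment (the broker host is an assumption, and the refresh endpoint is named busrefresh in recent Spring Cloud versions, bus-refresh in older ones):

```properties
# Each instance connects to the broker the bus runs over (host is an assumption)
spring.rabbitmq.host=rabbitmq.example.com
spring.rabbitmq.port=5672
# Expose the bus refresh endpoint, plus loggers for per-node changes
management.endpoints.web.exposure.include=busrefresh,loggers
```

A POST to /actuator/busrefresh on any one instance then propagates the refresh to every instance on the bus. Conversely, to restrict a log-level change to a single node, bypass the bus and POST to that node's /actuator/loggers endpoint directly.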