I have a Spring Boot application (microservice) running on two nodes and registered with a Eureka naming server. My requirement is as follows:
An Autosys job will trigger one complex calculation in the microservice, which will take about 45 minutes to complete. The result of this calculation will be saved to a GemFire cache and a database. I want these two nodes to act as master and slave, where only the master node will pick up and execute the complex-calculation request. If the master goes down, the slave will become master and take over responsibility for executing the complex calculation.
Another catch: while the complex calculation is running, if an ad-hoc request for the same calculation comes in, the latest request needs to be rejected with a message saying the calculation is already running.
I explored the possibility of using Apache ZooKeeper, but it doesn't seem to satisfy my requirement of serving the request using only the master node.
Is there any way of achieving this?
What about Kafka? It uses ZooKeeper under the covers: https://kafka.apache.org/
You are probably looking for leader election: When does Kafka Leader Election happen?
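Leader election on top of ZooKeeper is usually done via Apache Curator's recipes rather than the raw ZooKeeper API, and that does give you the "only the master serves the request" behaviour. A minimal sketch, assuming a Curator dependency; the connection string, latch path, and class names are placeholders, not from the question:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class CalculationCoordinator {

    private final LeaderLatch latch;
    private final AtomicBoolean running = new AtomicBoolean(false);

    public CalculationCoordinator() throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk1:2181,zk2:2181,zk3:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();
        latch = new LeaderLatch(client, "/calc-service/leader");
        latch.start(); // if the current leader dies, the other node acquires leadership
    }

    public String handleRequest() {
        if (!latch.hasLeadership()) {
            return "REJECTED: this node is not the master";
        }
        if (!running.compareAndSet(false, true)) {
            return "REJECTED: calculation is already running";
        }
        try {
            // ... run the 45-minute calculation here, saving to GemFire and the DB ...
            return "COMPLETED";
        } finally {
            running.set(false);
        }
    }
}
```

Note that the AtomicBoolean only guards a single JVM; if the ad-hoc request lands on the slave instead, the leadership check rejects it anyway.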
Resilience4j version: 1.7.1
Java version: 11
Micronaut version: 3.2.7
In production, we will have multiple instances of the same service running. Currently, the way we handle CircuitBreaker state on demand is that we opened up an endpoint to get the status of the circuit breaker (from CircuitBreakerRegistry.circuitBreaker("myInstanceA")) or to change its state (to disabled, open, closed, etc. using myCircuitBreaker.transitionToClosedState()) whenever we want. But this means only the instance of the service that receives the request will update the state of its CircuitBreaker or respond with its status (# of successful calls vs. failed calls, etc.), not the overall cluster.
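For reference, such an endpoint might look roughly like the sketch below (a Micronaut controller is assumed, matching the stack in question; the route and response format are made up):

```java
import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerRegistry;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.http.annotation.Post;

@Controller("/circuit-breaker")
public class CircuitBreakerAdminController {

    private final CircuitBreakerRegistry registry;

    public CircuitBreakerAdminController(CircuitBreakerRegistry registry) {
        this.registry = registry;
    }

    @Get("/{name}")
    public String status(String name) {
        // Reports state and call counts for THIS instance only.
        CircuitBreaker cb = registry.circuitBreaker(name);
        return cb.getState()
                + " successful=" + cb.getMetrics().getNumberOfSuccessfulCalls()
                + " failed=" + cb.getMetrics().getNumberOfFailedCalls();
    }

    @Post("/{name}/disable")
    public void disable(String name) {
        // Only affects the instance that happens to receive this request,
        // which is exactly the limitation described above.
        registry.circuitBreaker(name).transitionToDisabledState();
    }
}
```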
I believe the circuit breaker does need to work on a per-instance basis, counting failed calls over a sliding window to decide when to open the circuit and act accordingly. But what is the best way to change the state (let's say to disabled) of the circuit breaker on demand across the whole cluster? Since the load balancer in front of the cluster might not give us an option to route a request explicitly to a specific instance, it would be helpful to have a way to enable/disable the breaker across the cluster.
For now, the only option I can think of is updating the common properties of the cluster and restarting all instances of the service, but it would be nice to change the state on demand without restarting instances.
I am working on a microservice architecture. One of my services is exposed to a source system, which uses it to post data. This microservice publishes the data to Redis using Redis pub/sub, and the data is then consumed by a couple of other microservices.
Now, if one of those microservices is down and unable to process the data from Redis pub/sub, I have to retry with the published data when the microservice comes back up. The source cannot push the data again, and manual intervention is not possible, so I thought of three approaches:
1. Additionally use Redis itself for storing and retrieving the data (alongside the pub/sub).
2. Use a database to store the data before publishing. I have many source and target microservices that use Redis pub/sub, so with this approach I would have to insert every request into the DB first and then its response status. I would also have to use a shared database; this approach itself adds a couple more exception-handling cases and doesn't look very efficient to me.
3. Use Kafka in place of Redis pub/sub. Traffic is low, which is why I chose Redis pub/sub, and it is not feasible to change now.
In both of the first two approaches, I have to use a scheduler, and there is a window within which I have to retry, or subsequent requests will fail.
Is there any other way to handle the above cases?
For point 2:
- Store the data in the DB.
- Create a daemon process that will process the data from the table.
- This daemon process can be configured to fit your needs.
- The daemon process will poll the DB and publish any pending data, deleting each entry once it has been published (see the sketch below).
Not in a microservice architecture, but I have seen this approach work efficiently when communicating with 3rd-party services.
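A minimal sketch of such a daemon, assuming Spring's scheduling (with @EnableScheduling on the application), JdbcTemplate, and StringRedisTemplate; the table and column names are hypothetical:

```java
import java.util.List;
import java.util.Map;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class OutboxDaemon {

    private final JdbcTemplate jdbc;
    private final StringRedisTemplate redis;

    public OutboxDaemon(JdbcTemplate jdbc, StringRedisTemplate redis) {
        this.jdbc = jdbc;
        this.redis = redis;
    }

    @Scheduled(fixedDelay = 5000) // poll every 5 seconds
    public void publishPending() {
        List<Map<String, Object>> rows =
                jdbc.queryForList("SELECT id, channel, payload FROM outbox ORDER BY id");
        for (Map<String, Object> row : rows) {
            // Publish to the same Redis pub/sub channel the service normally uses.
            redis.convertAndSend((String) row.get("channel"), (String) row.get("payload"));
            // Delete only after a successful publish, so failures are retried next poll.
            jdbc.update("DELETE FROM outbox WHERE id = ?", row.get("id"));
        }
    }
}
```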
At the very outset, as you mentioned, we do indeed seem to have only three possibilities.
This is one of those situations where you want a handshake from the service both after pushing and after processing. To accomplish that, a middleware queuing system would be the right approach.
Although a bit more complex to set up, what you can do is use Kafka for streaming this. Configuring the producer and consumer groups properly can help you do the job smoothly.
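To illustrate why this solves the retry problem: Kafka retains messages on the broker, so a consumer that was down simply resumes from its last committed offset when it comes back. A hedged sketch of the consuming side; the topic, group id, and broker address are placeholders:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class TargetServiceConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "target-service");  // offsets are tracked per group
        props.put("enable.auto.commit", "false"); // commit only after processing
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("source-data"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // process(record.value());
                }
                consumer.commitSync(); // mark progress once the batch is handled
            }
        }
    }
}
```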
Using a DB for storage would be overkill, considering the situation where you say "this data is to be processed and to be persisted".
BUT, alternatively, storing the data in Redis and reading it in a cron/scheduled job would make your job much simpler. Once the job has run successfully, you can remove the data from the cache and thus save Redis memory.
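A hedged sketch of that idea, assuming Spring scheduling and StringRedisTemplate: the producer LPUSHes payloads onto a Redis list, and a scheduled job drains it. The key name "pending:events" is made up:

```java
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class PendingEventDrainer {

    private static final String KEY = "pending:events";
    private final StringRedisTemplate redis;

    public PendingEventDrainer(StringRedisTemplate redis) {
        this.redis = redis;
    }

    @Scheduled(fixedDelay = 10_000)
    public void drain() {
        String payload;
        // rightPop removes each entry as it is read, freeing Redis memory as we go;
        // for at-least-once semantics you would re-push the payload on failure.
        while ((payload = redis.opsForList().rightPop(KEY)) != null) {
            process(payload);
        }
    }

    private void process(String payload) {
        // Hand off to the consumer logic.
    }
}
```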
If you can comment further on the architecture and the implementation, I can go ahead and update my answer accordingly. :)
I am developing a series of microservices using Spring Boot and plan to deploy them on Kubernetes.
Some of the microservices are composed of an API which writes messages to a Kafka queue and a listener which listens to the queue and performs the relevant actions (e.g. writing to the DB, constructing messages for onward processing).
These services work fine locally, but I am planning to run multiple instances of each microservice on Kubernetes. I'm thinking of the following options:
1. Run multiple instances as is (i.e. each instance serves as both an API and a listener).
2. Introduce FRONTEND and BACKEND environment variables. If the FRONTEND variable is true, do not configure the listener process; if the BACKEND variable is true, configure the listener process.
This way I can scale how many frontend/backend services I need, and I also get the benefit of being able to shut down the backend services without losing requests.
Any pointers, best practice or any other options would be much appreciated.
You can do as you describe, with environment variables, or you may also be interested in building your app with different profiles/bean configurations and producing two different images.
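One way to realize the environment-variable split in Spring Boot is a conditional bean: the listener is only created when a property (mapped from an env var by relaxed binding) has the right value. A sketch; the property name app.role and the listener class are assumptions:

```java
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ListenerConfig {

    // Hypothetical placeholder for the real Kafka listener component.
    public static class QueueListener { }

    @Bean
    @ConditionalOnProperty(name = "app.role", havingValue = "backend")
    public QueueListener queueListener() {
        // Instantiated only on pods started with APP_ROLE=backend;
        // frontend pods never configure the listener.
        return new QueueListener();
    }
}
```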
In both cases, you should use two different Kubernetes Deployments so you can scale and configure them independently.
You may also be interested in a Leader Election pattern where you want only one active replica, though that only makes sense if a single replica must process the events from a queue. Depending on your availability requirements, this can also be solved by simply running a single replica.
Our Axon-backed service runs on several nodes. Our event processors are tracking processors (1 segment, thus active on one node). If I subscribe to a query on node A and the event that should trigger the update is handled on node B, node A will miss the update.
Is this by design, or should this work and am I misconfiguring the application?
In case of the former, what could we do to implement a likewise functionality in the most axon idiomatic manner?
(currently we poll the data source / projection directly for x seconds)
The QueryBus you are using is a SimpleQueryBus which stays within a single JVM, always.
If you need a distributed version of the QueryBus, you should turn towards using Axon Server as the centralized means to route queries between your nodes.
Note that although you could create this yourself, people have tried to do so (as shown in this Pull Request on the framework) and decided against it in favor of the optimizations made in Axon Server.
So, in short, I am assuming you are currently excluding the Axon Server connector.
Thus the framework gives you the SimpleQueryBus, which is indeed designed to not span several nodes.
And lastly, the quickest way to achieve distributed routing of queries is to use Axon Server.
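For reference, the subscription-query pattern in question looks like the sketch below (Axon 4 API; the query and projection types are hypothetical). With the SimpleQueryBus, the emit on node B only reaches subscriptions in the same JVM; routing that update over to node A is what Axon Server adds:

```java
import org.axonframework.messaging.responsetypes.ResponseTypes;
import org.axonframework.queryhandling.QueryGateway;
import org.axonframework.queryhandling.QueryUpdateEmitter;
import org.axonframework.queryhandling.SubscriptionQueryResult;

class SubscriptionQuerySketch {

    // Hypothetical query and projection types.
    static class FetchProjectionQuery {
        final String id;
        FetchProjectionQuery(String id) { this.id = id; }
    }
    static class ProjectionDto {
        final String id;
        ProjectionDto(String id) { this.id = id; }
    }

    // Node A: subscribe to the initial result plus a stream of updates.
    void subscribe(QueryGateway queryGateway, String id) {
        SubscriptionQueryResult<ProjectionDto, ProjectionDto> result =
                queryGateway.subscriptionQuery(new FetchProjectionQuery(id),
                        ResponseTypes.instanceOf(ProjectionDto.class),
                        ResponseTypes.instanceOf(ProjectionDto.class));
        result.updates().subscribe(update -> { /* push to the client */ });
    }

    // Node B: in the event handler that updates the projection. With the
    // SimpleQueryBus this only reaches subscribers in this JVM.
    void onProjectionUpdated(QueryUpdateEmitter emitter, ProjectionDto updated) {
        emitter.emit(FetchProjectionQuery.class,
                query -> query.id.equals(updated.id),
                updated);
    }
}
```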
Is it possible to use the same ZooKeeper instance for coordinating Apache Kafka and Apache Hadoop clusters? If yes, what would be the appropriate ZooKeeper configuration?
Thanks!
Yes. As far as my understanding goes, ideally there should be a single ZooKeeper cluster, on dedicated machines, managing the coordination between the different applications in a distributed system. I will try to share a few points here.
A ZooKeeper cluster consisting of several servers is typically called an ensemble; it basically tracks and shares the state of your applications. E.g., Kafka uses it to commit offset changes so that in case of failure it can identify where to start again. From the doc page:
Like the distributed processes it coordinates, ZooKeeper itself is intended to be replicated over a set of hosts (the ensemble). Whenever a change is made, it is not considered successful until it has been written to a quorum (at least half) of the servers in the ensemble.
Now imagine Kafka and Hadoop each have a dedicated cluster of 3 ZooKeeper servers. If a couple of nodes go down in either of the two clusters, that service fails (ZK works on simple majority voting, so a 3-node ensemble tolerates 1 node failure and keeps the service alive, but not 2). If instead a single cluster of 5 ZK servers manages both applications, the service stays available even with two nodes down. Not only does this offer better reliability, it also reduces hardware expenses: instead of managing 6 servers, you only have to take care of 5.
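To make this concrete, a sketch of what the shared ensemble's zoo.cfg might look like (identical on every server; hostnames are placeholders). Kafka would then point at the full ensemble in its server.properties, commonly with a chroot suffix such as /kafka on the zookeeper.connect string so its znodes stay separate from Hadoop's:

```properties
# zoo.cfg for a single shared 5-node ensemble (a sketch; values are typical defaults)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888
server.4=zk4.example.com:2888:3888
server.5=zk5.example.com:2888:3888
```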