I am creating a Camel project that polls a local directory and pushes the files to an FTP location. I want to apply a failover mechanism to this design: another Fuse instance should be ready to take over if the current node fails.
Without any failover, two Fuse instances would simply poll the files together. I want the second node to poll the files only when the first node fails.
Is this scenario possible if I use Fuse Fabric? I don't want another product choice; I already have this product. But I want to know whether I can achieve this using Fabric.
I am sure this is possible with web service endpoints; I am not sure about file-based endpoints.
Yes, there is a master component you can use so that only one node is active at a time. If that node fails, another node is elected as the master.
When you use Fuse Fabric, just put "master:someNameHere:" in front of any Camel endpoint in a route's from, where "someNameHere" is a logical name for the group.
For example:
from("master:foo:ftp://bla-bla")
    .to("someWhere");
I am developing a series of microservices using Spring Boot and plan to deploy them on Kubernetes.
Some of the microservices are composed of an API, which writes messages to a Kafka queue, and a listener, which listens to the queue and performs the relevant actions (e.g. write to the DB, construct messages for onward processing).
These services work fine locally but I am planning to run multiple instances of the microservice on Kubernetes. I'm thinking of the following options:
Run multiple instances as-is (i.e. each instance serves as both an API and a listener).
Introduce FRONTEND and BACKEND environment variables. If FRONTEND is true, do not configure the listener process; if BACKEND is true, configure the listener process.
This way I can scale however many frontend/backend services I need, and I also get the benefit of being able to shut down the backend services without losing requests.
Any pointers, best practice or any other options would be much appreciated.
You can do as you describe, with environment variables, or you may also be interested in building your app with different profiles/bean configurations and making two different images.
In both cases, you should use two different Kubernetes Deployments so you can scale and configure them independently.
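If you go the property/environment-variable route, a minimal Spring sketch could look like this; the property name app.backend.enabled, the topic, and the class are illustrative assumptions, not from the question:

import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

// Only created when the instance is configured as a "backend";
// "frontend" instances never register the listener bean.
@Component
@ConditionalOnProperty(name = "app.backend.enabled", havingValue = "true")
public class QueueListener {

    @KafkaListener(topics = "work-items")
    public void handle(String message) {
        // write to the DB, construct messages for onward processing, etc.
    }
}

Each Deployment then sets the property (or the equivalent APP_BACKEND_ENABLED environment variable, via Spring's relaxed binding) for its own role and can be scaled independently.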
You may also be interested in a Leader Election pattern, where you want only one active replica; that fits if it only makes sense for a single replica to process the events from the queue. Depending on your availability requirements, this could also be solved by simply running a single replica.
Our Axon-backed service runs on several nodes. Our event processors are tracking processors (one segment, thus active on only one node). If I subscribe to a query on node A and the event that should trigger the update is handled on node B, node A will miss the update.
Is this by design, or should this work and am I misconfiguring the application?
If it is the former, what could we do to implement equivalent functionality in the most Axon-idiomatic manner?
(currently we poll the data source / projection directly for x seconds)
The QueryBus you are using is a SimpleQueryBus, which always stays within a single JVM.
If you need a distributed version of the QueryBus, you should turn towards using Axon Server as the centralized means to route queries between your nodes.
Note that although you could create this yourself, people have tried to do so (as shown in this Pull Request on the framework) and decided against it in favor of the optimizations made in Axon Server.
So, in short, I am assuming you are currently excluding the Axon Server connector.
The framework then gives you the SimpleQueryBus, which is indeed designed not to span several nodes.
And lastly, the quickest way to achieve distributed routing of queries is to use Axon Server.
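For reference, a subscription query issued through the QueryGateway looks roughly like the snippet below; once Axon Server routes queries between your nodes, the same call receives updates no matter which node handled the triggering event. The query and projection types (FindOrderQuery, OrderView) are made-up placeholders:

import org.axonframework.messaging.responsetypes.ResponseTypes;
import org.axonframework.queryhandling.QueryGateway;
import org.axonframework.queryhandling.SubscriptionQueryResult;

public OrderView watchOrder(QueryGateway queryGateway, String orderId) {
    // Ask for the current projection state plus a stream of future updates.
    SubscriptionQueryResult<OrderView, OrderView> result =
            queryGateway.subscriptionQuery(
                    new FindOrderQuery(orderId),
                    ResponseTypes.instanceOf(OrderView.class),  // initial result
                    ResponseTypes.instanceOf(OrderView.class)); // updates

    // Updates arrive as a reactive stream, replacing the manual polling.
    result.updates().subscribe(update -> System.out.println("update: " + update));

    return result.initialResult().block();
}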
I have a Spring Boot application (microservice) running on two nodes and registered with a Eureka naming server. My requirement is as follows:
An Autosys job will trigger one complex calculation in the microservice, which takes about 45 minutes to complete. The result of this calculation is saved to a GemFire cache and a database. I want these two nodes to act as master and slave, where only the master node takes up and executes the complex-calculation request. If the master goes down, the slave becomes the master and takes over responsibility for executing the calculation.
Another catch: if an ad hoc request for the same calculation arrives while it is already running, the new request must be rejected with a message saying the calculation is already in progress.
I explored the possibility of using Apache ZooKeeper, but it doesn't seem to satisfy my requirement of serving the request only from the master node.
Is there any way of achieving this?
What about Kafka? It uses ZooKeeper under the covers: https://kafka.apache.org/
You are probably looking for leader election: When does Kafka Leader Election happen?
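For what it's worth, if you do give ZooKeeper another look, the Apache Curator client library ships a leader-election recipe that serves work from only one node at a time, which sounds like your master/slave requirement. A rough sketch, where the connection string, the ZNode path, and runComplexCalculation() are placeholder assumptions:

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderSelector;
import org.apache.curator.framework.recipes.leader.LeaderSelectorListenerAdapter;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class MasterOnlyCalculation {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk-host:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        LeaderSelector selector = new LeaderSelector(client, "/calc/leader",
                new LeaderSelectorListenerAdapter() {
                    @Override
                    public void takeLeadership(CuratorFramework c) throws Exception {
                        // Only the elected master reaches this point; leadership
                        // is held until this method returns (~45 minutes here).
                        runComplexCalculation();
                    }
                });
        selector.autoRequeue(); // rejoin the election after leadership is released
        selector.start();
        Thread.currentThread().join(); // keep this node in the election
    }

    private static void runComplexCalculation() {
        // the long-running job goes here
    }
}

Rejecting an ad hoc request while the calculation is running can then be a simple "in progress" flag checked on the current master.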
How do I use Consul to make sure only one service is performing a task?
I've followed the examples on http://www.consul.io/ but I am not 100% sure which way to go. Should I use KV? Should I use services? Or should I register a service as a health check and make it callable by the cluster at a given interval?
For example, imagine there are several data centers. Within every data center there are many services running. Every one of these services can send emails. These services have to check if there are any emails to be sent and, if there are, send them. However, I don't want the same email to be sent more than once.
How would I make sure all emails are sent and none is sent more than once?
I could do this using other technologies, but I am trying to implement this using Consul.
This is exactly the use case for Consul Distributed Locks.
For example, let's say you have three servers in different AWS availability zones for failover. Each one is launched with:
consul lock -verbose lock-name ./run_server.sh
The Consul agent will only run the ./run_server.sh command on whichever server acquires the lock first. If ./run_server.sh fails on the server holding the lock, the Consul agent releases the lock, and whichever other node acquires it first executes ./run_server.sh. This way you get failover with only one server running at a time. If you registered your Consul health checks properly, you'll be able to see that the server on the first node failed; you can repair it, restart the consul lock ... command on that node, and it will block until it can acquire the lock.
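Under the hood, consul lock is built on Consul's session and KV-acquire HTTP APIs, so you can also take the lock directly from application code. A rough Java sketch; the agent address, the KV key, and the crude JSON parsing are illustrative shortcuts (real code would also renew the session's TTL periodically):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConsulLeader {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();

        // 1. Create a session; Consul invalidates it if this node's health checks fail.
        HttpResponse<String> session = http.send(
                HttpRequest.newBuilder(URI.create("http://127.0.0.1:8500/v1/session/create"))
                        .PUT(HttpRequest.BodyPublishers.ofString("{\"TTL\": \"15s\"}"))
                        .build(),
                HttpResponse.BodyHandlers.ofString());
        String sessionId = session.body().replaceAll(".*\"ID\"\\s*:\\s*\"([^\"]+)\".*", "$1");

        // 2. Try to acquire the lock key; the body is "true" only for the winner.
        HttpResponse<String> acquire = http.send(
                HttpRequest.newBuilder(URI.create(
                        "http://127.0.0.1:8500/v1/kv/locks/email-sender?acquire=" + sessionId))
                        .PUT(HttpRequest.BodyPublishers.ofString("owner"))
                        .build(),
                HttpResponse.BodyHandlers.ofString());

        if (Boolean.parseBoolean(acquire.body().trim())) {
            // this node holds the lock: safe to send the emails
        }
    }
}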
Currently, distributed locking can only happen within a single Consul datacenter. But since it is up to you to decide which Consul servers make up a datacenter, you should be able to solve your issue. If you want locking across federated Consul datacenters, you'll have to wait for it, since it's a roadmap item.
First Point:
The question is how to use Consul to solve a specific problem. However, Consul cannot solve that specific problem because of intrinsic limitations in the nature of a gossip protocol.
When one datacenter cannot talk to another you cannot safely determine if the problem is the network or the affected datacenter.
The usual solution is to define what happens when one DC cannot talk to another. For example, if we have 3 datacenters (DC1, DC2, and DC3), we can decide that whenever one DC cannot talk to the other 2 DCs, it must stop updating the database.
If DC1 cannot talk to DC2 and DC3 then DC1 will stop updating the database, and the system will assume DC2 and DC3 are still online.
Let's imagine that DC2 and DC3 are still online and they can talk to each other, then we have quorum to continue running the system.
When DC1 comes online again it will play catch up with the database.
Where can Consul help here? It can communicate between DCs and check if they are online... but so can ICMP.
Take a look at the comments. Did this answer your question? Not really. But I don't think the question has an answer.
Second point: The question is "How to use Consul in leader election?" It would have been better to ask how Consul elects a new leader, or "Given the documentation on Consul.io, can you give me an example of how to determine the leader using Consul?"
If that is what you really want, then the question was already answered: How does a Consul agent know it is the leader of a cluster?
I am evaluating whether we can replace our message-queue middleware with ØMQ.
I have two sets of servers.
The servers in the first set don't talk to other servers in the same set; they only append requests to a specific message queue.
The servers in the second set don't talk to other servers in the same set either; they only take requests from a specific message queue and handle them.
It looks like a producer-consumer model.
And I think it can be replaced by ØMQ's Freelance pattern: http://zguide.zeromq.org/page:all#Brokerless-Reliability-Freelance-Pattern.
But the questions are:
How to support dynamic discovery for both server & clients?
There are probably a hundred ways you could implement that, and the right one depends greatly on your situation. If all the servers will always be on the same LAN, you could bootstrap using the broadcast address on the local network and ask all responders who they are. Quick and dirty.
I would personally implement a bootstrap service that everyone knows about. Every node can ask this always-available service who is 'online' for the type of server it's after.
Another option: you could also use pub-sub. This would require a central publisher. Newly connecting nodes would notify the publisher, which would notify all other nodes of the new join, possibly including the new node's ID, ip:port (if desired), etc. All nodes will still be able to communicate if the publisher crashes, since it's only used for global notifications, and a backup publisher could be used to make the system failsafe. Each node can also send heartbeats to the publisher, with the publisher notifying all other nodes when a node leaves or crashes.
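A minimal sketch of that central publisher using JeroMQ, the pure-Java ØMQ implementation (the ports and message format are placeholder assumptions):

import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class DiscoveryPublisher {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            // Nodes announce themselves (and send heartbeats) here...
            ZMQ.Socket pull = ctx.createSocket(SocketType.PULL);
            pull.bind("tcp://*:5550");

            // ...and every node subscribes here for join/leave notifications.
            ZMQ.Socket pub = ctx.createSocket(SocketType.PUB);
            pub.bind("tcp://*:5551");

            while (!Thread.currentThread().isInterrupted()) {
                // e.g. "JOIN server-A 10.0.0.5:6000" or "LEAVE server-A"
                String announcement = pull.recvStr();
                pub.send(announcement); // fan out to all subscribers
            }
        }
    }
}

Each node connects a SUB socket (subscribed to everything) to port 5551 and a PUSH socket to port 5550; a backup publisher could bind the same pair of sockets on alternate ports.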