Spring task scheduler: detecting multiple instances on multiple machines

I have made a Spring Task Scheduler service for sending e-mails under a particular condition. This service runs on multiple machines.
If the service on one machine sends the e-mail, I have to stop the services on the other machines from sending it again.
How can I detect that one machine's service has already executed its e-mail code, without using a persistent storage flag?

You basically have three options:
1. Use a shared data store of some form (e.g. a database) that all nodes connect to. That's what is usually done.
2. Make the nodes talk to each other, so a node can check the state of its peers before sending the e-mail. For this you could use JGroups.
3. Run your e-mail service on only one of the nodes.
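As a sketch of option 2: with JGroups, every node sees the same cluster view in the same order, so the nodes can agree that only the first member of the view sends the e-mail, with no persistent storage involved. The selection itself is pure list logic (class and method names here are illustrative, and the cluster wiring via `JChannel.connect(...)` is left out):

```java
import java.util.List;

class EmailLeaderCheck {
    // A node sends e-mail only if it is the first member of the cluster
    // view. JGroups delivers the same membership order to every node, so
    // all nodes agree on who the sender is.
    static boolean shouldSendEmail(List<String> members, String self) {
        return !members.isEmpty() && members.get(0).equals(self);
    }
}
```

In a real deployment, `members` would come from `channel.getView().getMembers()` and `self` from `channel.getAddress()`.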


How to broadcast/share a variable/property from a slave to the others?

I have a JMeter script that is executed in distributed mode with 4 nodes. One of them is the controller and does not make any requests; the other 3 are workers that do.
I can currently designate one of the workers as a master worker by setting a property in the user.properties file for that specific worker. This "master" worker performs some requests that have to be done only once, so these requests can't be done by the other workers.
Now I need to extract some values from the responses of these unique requests and send this information to the other workers.
Is it possible to do this?
How can data be sent from one worker to the other workers at run time?
You can use the HTTP Simple Table Server plugin and populate it with data from the "master" worker using the ADD command; once the pre-requisites are set up, all other workers, including the master, can access the generated data via the READ command.
The HTTP Simple Table Server can be installed using the JMeter Plugins Manager.
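The ADD and READ commands are plain HTTP GET requests against the table server (which listens on port 9191 by default). A minimal sketch of building those request URLs; the host, port, and dataset file name are assumptions to adapt to your setup:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

class Sts {
    // Builds the ADD request URL: appends one line to the named dataset.
    static String addUrl(String host, int port, String file, String line) {
        return "http://" + host + ":" + port + "/sts/ADD?FILENAME=" + file
             + "&ADD_MODE=LAST&LINE="
             + URLEncoder.encode(line, StandardCharsets.UTF_8);
    }

    // Builds the READ request URL: reads the first line, keeping it in
    // the dataset so every worker can read the same value.
    static String readUrl(String host, int port, String file) {
        return "http://" + host + ":" + port
             + "/sts/READ?READ_MODE=FIRST&KEEP=TRUE&FILENAME=" + file;
    }
}
```

The master worker would call the ADD URL after extracting the values; the other workers then call the READ URL at run time.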
No, it's not possible out of the box.
The communication between the controller and the servers is very limited:
The controller sends start/stop/shutdown commands to the servers.
The servers send sample results to the controller.
That's it.
To communicate between workers you'll need a third-party component such as a Redis DB or similar.

Use Hazelcast Executor Service to be executed on clients

In all the documentation and all the Google search results I have seen, the Hazelcast executor service is described as executing tasks on members.
I wonder if it is also possible to have things executed on Hazelcast clients?
The distributed executor service is intended to run processing where the data is hosted: on the servers. It is a similar idea to a stored procedure: run the processing where the data lives and save on data transfer.
In general, you can't run a Java Runnable or Callable on the clients, as the clients may not be Java.
Also, the clients don't host any data, so they would potentially have to fetch whatever data they need from the servers.
If you want something to run on all or some connected clients, you could implement this yourself using the publish/subscribe mechanism: a payload could be sent to an ITopic with the necessary execution parameters, and listening clients can act on the message.
You can also create a Near Cache on the client side and use the JDK's ExecutorService running in your local JVM.
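A sketch of the publish/subscribe idea: each client registers handlers and runs whatever task a topic message names. The `"task:argument"` payload format, the handler names, and the topic name are assumptions; in a real client the `onMessage` logic would sit inside a listener registered via `client.getTopic("exec-requests").addMessageListener(...)`:

```java
import java.util.Map;
import java.util.function.Function;

class ClientExecutor {
    private final Map<String, Function<String, String>> handlers;

    ClientExecutor(Map<String, Function<String, String>> handlers) {
        this.handlers = handlers;
    }

    // Parses a "taskName:argument" payload and dispatches it to the
    // matching handler; returns null for tasks this client doesn't know.
    String onMessage(String payload) {
        int i = payload.indexOf(':');
        String task = i < 0 ? payload : payload.substring(0, i);
        String arg  = i < 0 ? ""      : payload.substring(i + 1);
        Function<String, String> h = handlers.get(task);
        return h == null ? null : h.apply(arg);
    }
}
```

A member (or another client) would then publish the payload to the topic, and every subscribed client decides for itself whether it can handle the named task.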

Spring Boot applications high availability

We have a microservice developed using Spring Boot. A couple of the functionalities it implements are:
1) A scheduler that, at a specified time, triggers a file download using WebHDFS, processes it, and, once the data is processed, sends an e-mail to users with a data-processing summary.
2) Reading messages from Kafka and, once the data is read, sending an e-mail to users.
We now plan to make this application highly available, in either an active-active or active-passive set-up. The problem we face is that if both instances of the application are running, both will try to download the file / read the data from Kafka, process it, and send e-mails. How can this be avoided? That is, how can we ensure that only one instance triggers the download and processes it?
Please let me know if there is a known solution for this kind of scenario, as it seems common to most projects. Is a master-slave/leader-election approach a correct solution?
Thanks
Let the service download the file, extract the information, and publish it via Kafka.
Check beforehand whether the information was already processed by querying Kafka or a local DB.
You could also publish a DataProcessed event that triggers the EmailService, which sends the corresponding e-mail.
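The "check beforehand whether the information was already processed" step boils down to an idempotency guard keyed by file name or message id. A minimal in-memory sketch (names illustrative); in the clustered case the set of processed ids would live in the shared store, e.g. the DB or a compacted Kafka topic, rather than in local memory:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class ProcessedGuard {
    private final Set<String> processed = ConcurrentHashMap.newKeySet();

    // Returns true only for the first caller with a given id, so the
    // download/e-mail step runs at most once per id even if several
    // instances race on the same work item.
    boolean firstToProcess(String id) {
        return processed.add(id);
    }
}
```

Each instance would call `firstToProcess(...)` before downloading or e-mailing and skip the work when it returns false.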

Parse Server with independent workers

Imagine we want to check, two weeks after a user's registration, whether she has been active, and otherwise notify her.
To achieve this we currently use the following setup (this runs on Heroku):
The Parse Server puts a task into the Redis queue. The worker fetches tasks from that queue and then performs checks on the user's activity. For this it needs to access the Parse Server to fetch that information, which puts additional load on our API.
I imagine the following scenario to be better:
I wonder: is it possible to achieve this scenario using Parse Server? (The worker dynos don't have an HTTP interface to run a Parse Server...)

ØMQ N-to-M message queue

I am considering whether we can replace our message-queue middleware with ØMQ.
I have two sets of servers.
The servers in the first set don't talk to other servers in the same set; they only append requests to a specific message queue.
The servers in the second set don't talk to other servers in the same set either; they only receive requests from a specific message queue and handle them.
It looks like a producer-consumer model.
I think it can be replaced by ØMQ's Freelance pattern: http://zguide.zeromq.org/page:all#Brokerless-Reliability-Freelance-Pattern
But the question is:
How to support dynamic discovery for both server & clients?
There are probably a hundred ways you could implement that, and it greatly depends on your situation. If all the servers will always be on the same LAN, you could bootstrap using the broadcast address on the local network and ask all responders who they are. Quick and dirty.
I would personally implement a bootstrap service that everyone knows about. They can all ask this always-available service who is 'online' for the type of server they're after.
Another option: you could also use pub-sub. This would require a central publisher. Newly connecting nodes would notify the publisher, which would notify all other nodes of the new join, possibly including the new node's ID, ip:port (if desired), etc. All nodes can still communicate if the publisher crashes, since it is only used for global notifications, and a backup publisher could be used to make the system failsafe. Each node can also send heartbeats to the publisher, with the publisher notifying all other nodes when a node leaves or crashes.
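The bootstrap-service idea reduces to a registry that everyone can query. A sketch of its core state (names illustrative); in the ØMQ version this map would sit behind a REP socket at a well-known address, with heartbeats expiring stale entries:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class BootstrapRegistry {
    // node id -> "ip:port" endpoint, in join order.
    private final Map<String, String> online = new LinkedHashMap<>();

    // A node announces itself on startup (or on each heartbeat).
    void join(String id, String endpoint) { online.put(id, endpoint); }

    // Called when a node leaves cleanly or misses its heartbeats.
    void leave(String id) { online.remove(id); }

    // Answer to "who is online?" queries from producers and consumers.
    List<String> endpoints() { return new ArrayList<>(online.values()); }
}
```

Producers and consumers would query this registry on startup to discover which peers to connect to, instead of hard-coding endpoints.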
