How to broadcast/share a variable/property from a slave to the others? - jmeter

I have a JMeter script that is executed in distributed mode with 4 nodes. One of them is the controller and does not send any requests; the other 3, as workers, do the requests.
I can currently designate one of the workers as a master worker by setting a property in the user.properties file of that specific worker. This "master" worker performs some requests that have to be done only once, so these requests can't be done by the other workers.
Now I need to extract some values from the responses of these unique requests and send this information to the other workers.
Is it possible to do this?
How can data be sent from one worker to the other workers at run time?

You can use the HTTP Simple Table Server plugin and populate it with data from the "master" worker using the ADD command. Once you have done the setup of the pre-requisites, all the other workers (including the master) can access the generated data via the READ command.
The HTTP Simple Table Server can be installed using the JMeter Plugins Manager.
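As a sketch, assuming the Simple Table Server runs on its default port 9191 on localhost and a hypothetical dataset file token.csv, the "master" worker would hit an ADD URL and the other workers a READ URL (these are just plain HTTP GET requests, so in JMeter they are ordinary HTTP Request samplers):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class StsUrls {
    // Base URL of the Simple Table Server; 9191 is the plugin's default port.
    static final String STS = "http://localhost:9191/sts";

    // URL the "master" worker calls to append an extracted value to token.csv.
    static String addUrl(String file, String line) {
        return STS + "/ADD?FILENAME=" + enc(file)
                + "&LINE=" + enc(line)
                + "&ADD_MODE=LAST";            // append at the end of the dataset
    }

    // URL any worker calls to read the first value without consuming it.
    static String readUrl(String file) {
        return STS + "/READ?FILENAME=" + enc(file)
                + "&READ_MODE=FIRST"
                + "&KEEP=TRUE";                // leave the line in place for the other workers
    }

    static String enc(String s) {
        return URLEncoder.encode(s, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(addUrl("token.csv", "abc 123"));
        System.out.println(readUrl("token.csv"));
    }
}
```

KEEP=TRUE is what makes this a broadcast rather than a queue: every worker reads the same value instead of each READ consuming a line.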

No, it's not possible.
The communication between the controller and the servers is very limited:
The controller sends start / stop / shutdown commands to the servers.
The servers send sample results to the controller.
That's it.
To communicate you'll need to use third-party tools like a Redis DB or similar means.
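The pattern behind the Redis suggestion is small: the "master" worker writes the extracted value under an agreed-on key, and every other worker reads that key. A minimal sketch of the idea, with an in-memory map standing in for the real Redis client (in a real distributed test, publish/fetch would be SET/GET calls against the Redis server, since a plain map is not visible across JVMs):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SharedStore {
    // Stand-in for an external store such as Redis; replace the map
    // operations with SET/GET against the Redis server in a real setup.
    private final Map<String, String> store = new ConcurrentHashMap<>();

    // Called once by the "master" worker after extracting the value.
    public void publish(String key, String value) {
        store.put(key, value);
    }

    // Called by the other workers; returns null until the master has published.
    public String fetch(String key) {
        return store.get(key);
    }

    public static void main(String[] args) {
        SharedStore redis = new SharedStore();
        redis.publish("auth.token", "abc123");         // master worker
        System.out.println(redis.fetch("auth.token")); // any other worker
    }
}
```

The key name ("auth.token") is an illustrative convention; the only requirement is that all workers agree on it in advance.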

Related

Spring batch Remote partitioning | can Master step complete without completion of slave step

I have implemented the Spring Batch remote partitioning approach. I don't want my master step to wait for the slave steps' acknowledgment; I want my master step to complete as soon as it has partitioned the data.
Is there any configuration for this in Spring Batch?
If the manager should not wait for workers, what you are describing is not a manager/worker configuration anymore. In a manager/worker setup, the manager divides the work among workers and waits for them to finish (in Spring Batch, you can configure the manager to wait in two different ways: poll the job repository for worker statuses, or gather replies from workers up to a given timeout).
I don't see the rationale behind this "fire-and-forget" approach (who would monitor the status of the workers and drive the process accordingly?), but remote partitioning is definitely not suitable to implement this pattern (at least in my opinion). If you really want to (ab)use remote partitioning for that, you can register a custom PartitionHandler that does not wait for workers (i.e. remove this section).
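Outside of Spring Batch, the contrast the answer draws can be sketched with a plain ExecutorService: a manager/worker step blocks on its workers' results, while a "fire-and-forget" step submits the partitions and returns immediately, with nobody left to aggregate results or monitor status. All names below are illustrative, not Spring Batch API:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.stream.IntStream;

public class PartitionSketch {
    static final ExecutorService workers = Executors.newFixedThreadPool(3);

    // Manager/worker style: divide the work, then wait for every partition.
    static int managedSum(List<int[]> partitions) throws Exception {
        List<Future<Integer>> results = workers.invokeAll(
                partitions.stream()
                        .map(p -> (Callable<Integer>) () -> IntStream.of(p).sum())
                        .toList());
        int total = 0;
        for (Future<Integer> f : results) total += f.get(); // blocks until workers finish
        return total;
    }

    // "Fire-and-forget": hand the partitions off and return at once;
    // nothing aggregates the results or tracks worker status afterwards.
    static void fireAndForget(List<int[]> partitions) {
        for (int[] p : partitions)
            workers.submit(() -> IntStream.of(p).sum());
    }

    public static void main(String[] args) throws Exception {
        List<int[]> parts = List.of(new int[]{1, 2}, new int[]{3, 4}, new int[]{5});
        System.out.println(managedSum(parts)); // waits, then prints the total: 15
        fireAndForget(parts);                  // returns immediately
        workers.shutdown();
    }
}
```

The second method is exactly what the answer warns about: once it returns, the caller has no handle on whether the workers succeeded.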

Use Hazelcast Executor Service to be executed on clients

In all the documentation and all the Google search results I saw, the Hazelcast executor service can only be executed on "members".
I wonder if it is possible to also have things executed on Hazelcast clients?
The distributed executor service is intended to run processing where the data is hosted: on the servers. This is a similar idea to a stored procedure: run the processing where the data lives, saving data transfer.
In general, you can't run a Java Runnable or Callable on the clients, as the clients may not be Java.
Also, the clients don't host any data, so they would potentially have to fetch the data they need from the servers.
If you want something to run on all or some connected clients, you could implement this yourself using the publish/subscribe mechanism. A payload could be sent to an ITopic with the necessary execution parameters, and clients listening can act on the message.
You can also create a Near Cache on the client side and use the JDK's ExecutorService that runs in your local JVM app.
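The publish/subscribe suggestion can be sketched with an in-JVM stand-in for Hazelcast's ITopic (the real ITopic has the same publish / addMessageListener shape; here the listeners are plain consumers so the sketch stays self-contained, whereas in a real cluster each listener would live in a different client JVM):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

public class MiniTopic<E> {
    // Stand-in for an ITopic: registered listeners receive every published payload.
    private final List<Consumer<E>> listeners = new CopyOnWriteArrayList<>();

    public void addMessageListener(Consumer<E> listener) {
        listeners.add(listener);
    }

    public void publish(E payload) {
        for (Consumer<E> l : listeners) l.accept(payload); // fan out to every subscribed client
    }

    public static void main(String[] args) {
        MiniTopic<String> topic = new MiniTopic<>();
        // Each connected client registers a listener and acts on the message.
        topic.addMessageListener(cmd -> System.out.println("client A runs: " + cmd));
        topic.addMessageListener(cmd -> System.out.println("client B runs: " + cmd));
        topic.publish("refresh-cache"); // a member sends the execution parameters
    }
}
```

The payload ("refresh-cache") stands in for whatever execution parameters the clients need; each client decides for itself how to act on the message.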

Parse Server with independent workers

Imagine we want to check, two weeks after a user's registration, whether she has been active, and otherwise we want to notify her.
To achieve this we currently use the following setup (this runs on Heroku):
The Parse Server puts a task into the Redis queue. The worker fetches tasks from that queue and then performs checks on the activity of the user. For this it needs to access the Parse Server to fetch that information, which puts additional load on our API.
I imagine the following scenario to be better:
I wonder: is it possible to achieve this scenario using Parse Server? (The worker dynos don't have an HTTP interface to run a Parse Server...)

Spring Batch : Remote Chunking & Partitioning without using jms

I am new to Spring Batch. I want to run Spring Batch jobs using the remote chunking & partitioning techniques on multiple servers without using JMS.
I want to use HTTP Invoker or RMI rather than JMS.
But all the examples of remote chunking & partitioning use JMS.
I can't find examples that use HTTP Invoker or RMI.
I wonder if it is possible.
English is not my mother tongue, so please excuse any errors on my part.
You can use any form of communication you want for remote partitioning. However, remote chunking does require persistent communication which is why JMS is typically used.
The reason you see JMS for remote partitioning is that it's easier to configure a clustered environment with JMS than with HTTP. The reason for that is that everyone (the master and all the slaves) only needs to know where the queue is. Using HTTP as a communication mechanism requires the master and slaves to know a lot more: the master needs to know how to evenly distribute the partitions over all the slaves and where to send the requests for each slave, and all the slaves need to know where the master is. JMS's centralized distribution model also allows you to dynamically add new slaves during processing, where HTTP would require you to have some way to register a new slave with the master.
The reason persistent communication is required for remote chunking is that the actual items are sent over the wire, and there is nothing in the model to prevent an item from being processed twice (remote partitioning just sends descriptions of the data across, and the job repository prevents the data from being processed twice).
You can read more about the difference between the two in my answer here: Difference between spring batch remote chunking and remote partitioning
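The extra bookkeeping the answer says HTTP forces on the master, namely knowing every slave and spreading the partitions evenly across them, can be sketched as a simple round-robin assignment (the slave addresses are illustrative):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RoundRobin {
    // Assign partition indexes evenly across the known slaves, round-robin style.
    // With JMS none of this is needed: master and slaves just share one queue.
    static Map<String, List<Integer>> distribute(List<String> slaves, int partitions) {
        Map<String, List<Integer>> plan = new LinkedHashMap<>();
        for (String s : slaves) plan.put(s, new ArrayList<>());
        for (int p = 0; p < partitions; p++)
            plan.get(slaves.get(p % slaves.size())).add(p);
        return plan;
    }

    public static void main(String[] args) {
        System.out.println(distribute(List.of("http://slave1", "http://slave2"), 5));
        // → {http://slave1=[0, 2, 4], http://slave2=[1, 3]}
    }
}
```

Note that this plan is static: adding a slave mid-run means redistributing, which is exactly the dynamic-registration problem the answer attributes to HTTP.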

Spring task scheduler Multiple Instances on multiple machines detection

I have made a Spring Task Scheduler service for sending e-mails when a particular condition is met. This service runs on multiple machines.
If one machine's service sends the e-mail, then I have to stop the other services from sending the e-mail.
How can I detect, without using a persistent storage flag, that one machine's service has executed its e-mail code?
You have basically 3 options:
Use a shared data store of some form (e.g. a database) that all nodes connect to. That's what we do usually.
Make the nodes "talk to each other" so a particular node can check the state of its peers before sending the e-mail. For this you could use JGroups.
Have your email service run on only one of the nodes.
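The first option boils down to an atomic "claim" on the shared store: whichever node claims the e-mail key first sends it, and the rest skip. A sketch of that idea, with a ConcurrentHashMap standing in for the shared database (in production this would be an INSERT with a unique key, or SETNX in Redis; the key format is illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;

public class EmailClaim {
    // Stand-in for the shared data store all nodes connect to; putIfAbsent
    // models an atomic insert that only one node can win.
    private final ConcurrentHashMap<String, String> claims = new ConcurrentHashMap<>();

    // Returns true only for the first node that claims this e-mail.
    public boolean tryClaim(String emailKey, String nodeId) {
        return claims.putIfAbsent(emailKey, nodeId) == null;
    }

    public static void main(String[] args) {
        EmailClaim store = new EmailClaim();
        System.out.println(store.tryClaim("welcome:user42", "node-1")); // first node wins and sends
        System.out.println(store.tryClaim("welcome:user42", "node-2")); // later nodes skip
    }
}
```

The atomicity of the claim is the important part; a separate read-then-write would let two nodes both decide to send.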
