How can I change the heap size of all slave machines while using NON-GUI mode in JMeter distributed test?
Eg: I want to trigger this from the master machine.
C:\jmeter\bin\jmeter.bat -n -t C:\test.jmx -Jusers=10000 -R192.168.0.19,192.168.0.29......
Is there some parameter that I can pass here so that the heap size of all the slave machines will be changed from the master machine?
Heap size is something you set on JVM startup; once it's defined it cannot be changed at runtime.
JMeter is launched inside the JVM, so the JVM is initialized first and only then loads the JMeter classes.
So if you need to control JMeter startup arguments dynamically, then depending on how you prepare the slaves and on your technology stack you might want to go for something like Chef, Puppet, Ansible, Kubernetes, Docker Swarm, Terraform, etc.
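What you can do is set the heap on each slave before starting the server process. A minimal sketch, assuming a recent JMeter version whose startup scripts honour a pre-set HEAP environment variable (older versions hard-code HEAP, in which case you edit that line in jmeter.bat / jmeter directly); the heap values below are just examples:
rem on each Windows slave, before starting the server process
set HEAP=-Xms1g -Xmx4g -XX:MaxMetaspaceSize=256m
C:\jmeter\bin\jmeter-server.bat
A configuration management tool from the list above can push this change to all slaves in one go.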
Related
We are running a test from distributed master/slave machines. We need to add additional load generators during test execution whenever the memory consumption of the slaves rises above a certain threshold.
Main question: can we add more IPs (slave machines / load generators) to the JMeter command during execution?
I don't think you can add additional slave machines once the test has started. Alternatively, you can start a separate new test against the extra load generators.
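A sketch of that workaround, assuming the extra generator is already running jmeter-server (the IP, user count and result file name below are placeholders):
C:\jmeter\bin\jmeter.bat -n -t C:\test.jmx -Jusers=2000 -R192.168.0.39 -l results-extra.csv
Since the two runs are independent, you would have to merge or compare the result files yourself afterwards.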
I have a JMeter distributed system with 1 master and 4 slaves.
The test is configured to run for 60 minutes.
Somehow a random slave suddenly finishes the test and the load is distributed between the other 3.
All the slaves are configured the same way.
The instances are AWS EC2 instances on the same subnet.
Is there any explanation for this behaviour?
It might be the case that you configured JMeter yourself to stop threads when an error occurs:
if you have ticked the corresponding settings under the Thread Group, the threads (virtual users) may be stopped, or the whole test may be stopped, on error
If an unexpected error occurs there should be a corresponding entry in the jmeter.log file; make sure to run the JMeter slave process with the log file location passed via the -j command-line argument, like:
./jmeter -s -j jmeter-slave.log .....
It might be the case that your JMeter instance runs out of memory and the whole JVM gets terminated, so make sure to properly tune it for high loads.
Check the operating system log of your Amazon instance.
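For example, on a Linux EC2 instance you can check whether the kernel OOM killer terminated the Java process (the log file locations vary by distribution, so treat the paths as examples):
dmesg | grep -i "out of memory"
sudo grep -i "killed process" /var/log/syslog /var/log/messages 2>/dev/null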
There could be multiple reasons for it:
Possibly load balancing was not happening properly and more requests were being driven towards one instance. That can cause the VM to crash.
Or the AWS instance crashed because its disk space got full.
I suggest you check the disk usage of the crashed VM.
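A quick way to confirm the disk-space theory on a Linux instance (the result-file path is just an example of where space often goes):
df -h
du -sh ~/apache-jmeter/bin/*.jtl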
About JMeter distributed configuration for load testing (not in the cloud):
I can set up X JMeter masters on different machines and execute them with shared files using shared folder(s).
The benefits are:
Each master is oblivious to the others and can be shut down and started when needed with dynamic/different properties.
Each master has its own logs and results that can be explored separately.
I don't need a network connection between the JMeter masters' machines.
What are the benefits of using a master-slave configuration in such a case? It seems like unnecessary overhead when focusing on a load test.
The benefits are:
centralization of results on one node (the master): you can follow results in the Summarizer from the master node, the CSV/XML results file is generated there, and you can generate the web report at the end of the test from it (see the command sketch after this answer)
centralization of the JMX test plan on one node (the master)
synchronization of the test from the master, i.e. the master starts/stops the test on all slaves
Besides the drawbacks you describe, there are:
- network configuration complexity
- the need to deploy CSV files on each node (although there are options with plugins: Redis, Simple Table Server)
- network traffic between the nodes and the master
It was created at a time when deployment automation was not available through things like Vagrant, Ansible, the cloud...
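As mentioned above, a sketch of generating the HTML dashboard from the centralized results on the master at the end of a distributed run (the IPs, file and folder names are examples):
./jmeter -n -t test.jmx -R 192.168.0.19,192.168.0.29 -l results.csv -e -o report-dashboard
The -e flag generates the report dashboard after the load test and -o points to an empty output folder for it.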
I was hoping to get some help/suggestions regarding my JMeter master/slave test setup.
Here is my scenario:
I need to do load testing using a JMeter master-slave setup. I am planning to launch the master and slave nodes on AWS (Windows boxes, because of a dependency of one of the tools I launch via JMeter). I want to launch this master-slave setup in AWS on demand, where I can specify how many slave nodes I want. I looked around a lot of blogs about using JMeter with AWS, and everywhere they assume the nodes will be launched manually and need further configuration for the master and slave nodes to talk to each other.

For tests with 5 or 10 slave nodes this is fine, but for my tests I want to launch 50 instances (again, the tool I use with JMeter has a limitation that forces me to use each JMeter slave node as 1 user, instead of using 1 slave node to act as multiple users), and manually updating each of the slave nodes would be very cumbersome. So I was wondering if anybody else has run into this issue and has any suggestions. In the meantime I am looking into other solutions that would let me use the same slave node to mimic multiple users, which would reduce the number of slave nodes I need to launch.
Regards,
Vikas
Have you seen the JMeter ec2 script? It seems to be what you're looking for.
If for any reason you don't want to use this particular script, be aware that Amazon has an API, so you should be able to automate instance creation with a script using the AWS Java SDK or the AWS CLI.
You can even automate instance creation from JMeter itself with either a JSR223 Sampler or an OS Process Sampler (this approach will require a separate JMeter script, of course).
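For instance, a rough AWS CLI sketch (the AMI ID, instance type, key pair, security group and count are placeholders you would replace with your own values):
aws ec2 run-instances --image-id ami-0123456789abcdef0 --count 50 --instance-type c5.large --key-name my-jmeter-key --security-group-ids sg-0123456789abcdef0
aws ec2 describe-instances --filters "Name=instance-state-name,Values=running" --query "Reservations[].Instances[].PrivateIpAddress" --output text
The second command collects the private IPs, which you would then pass to the master via -R.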
When I look at my logs, I see that my Oozie Java actions are actually running on multiple machines.
I assume that is because they're wrapped inside a map/reduce job? (Is this correct?)
Is there a way to have only a single instance of the Java action executing on the entire cluster?
The Java action runs inside an Oozie "launcher" job, with just one YARN "map" container.
The trick is that every YARN job requires an application master (AM) container for coordination.
So you end up with 2 containers, _0001 for the AM and _0002 for the Oozie action, probably on different machines.
To control the resource allocation for each one, you can set the following action properties to override your /etc/hadoop/conf/*-site.xml config and/or the hard-coded defaults (which are specific to each version and each distro, by the way); a sketch of where these go is shown after the list:
oozie.launcher.yarn.app.mapreduce.am.resource.mb
oozie.launcher.yarn.app.mapreduce.am.command-opts (to align the max heap size with the global memory max)
oozie.launcher.mapreduce.map.memory.mb
oozie.launcher.mapreduce.map.java.opts (...)
oozie.launcher.mapreduce.job.queuename (in case you've got multiples queues with different priorities)
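A minimal sketch of how such overrides are typically placed in the <configuration> block of the Java action in workflow.xml; the action name, main class and memory values are examples, not defaults:
<action name="my-java-action">
    <java>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <configuration>
            <!-- example sizes only: the launcher AM and the map container that runs the action -->
            <property>
                <name>oozie.launcher.yarn.app.mapreduce.am.resource.mb</name>
                <value>512</value>
            </property>
            <property>
                <name>oozie.launcher.mapreduce.map.memory.mb</name>
                <value>2048</value>
            </property>
            <property>
                <name>oozie.launcher.mapreduce.map.java.opts</name>
                <value>-Xmx1638m</value>
            </property>
        </configuration>
        <main-class>com.example.MyMainClass</main-class>
    </java>
    <ok to="end"/>
    <error to="fail"/>
</action>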
Well, actually, the explanation above is not entirely true... On a Hortonworks distro you end up with 2 containers, as expected.
But with a Cloudera distro, you typically end up with just one container, running both the AM and the action in the same Linux process.
And I have no idea how they do that. Maybe there's a generic YARN config somewhere, maybe it's a Cloudera-specific feature.