Is there a way to create a counter which is unique in each slave from Terraform when using JMeter distributed testing mode?
I got the load testing script from here: https://github.com/marcosborges/terraform-aws-loadtest-distribuited. But my requirement is to assign an offset to the counter on each slave, since we need more than 10M unique values.
The solution reference is somewhat related to this -> JMeter distributed testing create counter unique in each slave
But it would be a hassle to individually assign an offset to more than 50 slaves.
If the solution described there does not suit you, I don't think there is a way to do that in JMeter alone.
You'll have to use a 3rd-party tool, Redis for example:
https://redis.io/commands/incrby/
And use JSR223 + Groovy to handle the INCRBY call.
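A minimal sketch of such a JSR223 sampler, assuming the Jedis client jar has been dropped into JMeter's lib/ folder and a Redis instance is reachable (the host name and key below are hypothetical):

import redis.clients.jedis.Jedis

// The host below is hypothetical; point it at your own Redis instance
def jedis = new Jedis('redis.example.internal', 6379)
try {
    // INCRBY is atomic, so every slave gets globally unique values
    long counter = jedis.incrBy('loadtest:user-counter', 1L)
    vars.put('uniqueId', String.valueOf(counter))
} finally {
    jedis.close()
}

Because the increment happens in Redis, no per-slave offsets are needed, no matter how many slaves you add.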
JMeter tests are run in master-slave fashion with around 8 slave machines. However, with the remote batching mode set to MODE_STRIPPED_BATCH, I am not able to run tests for more than 64 hours. Throughput is around 450 requests per minute, and per slave machine this results in the creation of JTL files that are around 1.5 GB. All 8 slaves are going to send this to the master (1.5 GB x 8), and probably the I/O gets too much for the master to handle. The master machine's memory is 16 GB RAM and it has disk storage of around 250 GB.
I was wondering if the JMeter distributed architecture has any provision to make long-running soak tests possible without any unexplained stress on the master machine. Obviously I have the option to abandon the master-slave setup and go for 8 independent nodes; however, in that case I'll run into complications with respect to serving data CSV files (which I currently serve using the Simple Table Server plugin from the master machine) and also around aggregating result files.
Any suggestions please. It would be great to be able to run tests at least for around 4 days (96 hours or so).
I would suggest going for an independent JMeter workers + external data collector setup.
Actually, JMeter's right-out-of-the-box "distributed scaling" abilities are weak, way outdated & overall pretty ridiculous, as are its data collection/aggregation/processing abilities.
This situation actually puzzles me a lot - mind you, rivals are even worse, so there's literally NOTHING in the field (except for, perhaps, some SaaS solutions trying to monetize on this gap).
But it is what it is...
So that's about why-s, now to how-s.
If I were you, I would:
Containerize the JMeter worker
Equip each container with a watchdog to quickly restart the worker if things go south locally (or probably even on a schedule, to refresh it periodically). Whether it's an internal one or an external one like cloud services provide doesn't matter.
Set up a time-series database - I recommend InfluxDB; it's an excellent product & it's free in the basic version (which is going to be enough for your purposes).
Flow your test results/metrics into that DB - do not collect them locally! You can do it right from your tests with a pretty simple custom listener (the Influx line protocol is ridiculously simple & fast - see the sketch below), or you can have an external agent watching the result files as they flow. I just suggest you not use the so-called Backend Listener to do the job - it's garbage, it won't shape your data right, so you'd have to do additional ops to bring it to order.
If you shape your test result/metrics data properly, you get it already time-synced into a single set - and the further processing options are amazingly powerful!
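For illustration, here is a minimal sketch of such a custom listener as a JSR223 Listener body, assuming an InfluxDB 1.x instance with a database named jmeter (the host name is hypothetical); prev is the SampleResult that JMeter exposes to JSR223 Listeners:

// Build one Influx line-protocol point: measurement,tags fields timestamp(ns)
def label = prev.sampleLabel.replace(' ', '\\ ')
def line = "requests,label=${label} elapsed=${prev.time}i,success=${prev.successful} ${prev.timeStamp}000000"

// The URL below is hypothetical; point it at your own InfluxDB instance
def conn = new URL('http://influx.example.internal:8086/write?db=jmeter').openConnection()
conn.requestMethod = 'POST'
conn.doOutput = true
conn.outputStream.withWriter { it << line }
assert conn.responseCode == 204 // InfluxDB answers 204 No Content on success

In a real test you would batch the writes instead of opening a connection per sample, but the line protocol itself really is that simple.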
My expectation is that you're looking for the StrippedAsynch sampler sender mode.
As per the documentation:
Asynch
samples are temporarily stored in a local queue. A separate worker thread sends the samples. This allows the test thread to continue without waiting for the result to be sent back to the client. However, if samples are being created faster than they can be sent, the queue will eventually fill up, and the sampler thread will block until some samples can be drained from the queue. This mode is useful for smoothing out peaks in sample generation. The queue size can be adjusted by setting the JMeter property asynch.batch.queue.size (default 100) on the server node.
StrippedAsynch
remove responseData from successful samples, and use Async sender to send them.
So on slave node add the following line to user.properties file:
mode=StrippedAsynch
and define asynch.batch.queue.size (also on the slave nodes, per the documentation quoted above) to be high enough not to throttle JMeter's throughput, but low enough not to overwhelm the master. I would start with 1000.
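For example, again in user.properties on each slave (the 1000 is just my suggested starting point, not a tested recommendation):

asynch.batch.queue.size=1000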
Another option is using StrippedDiskStore, but you will have to manually collect the serialized results after test completion (and make sure that the slave processes do not shut down, because the results are deleted when the slave process finishes).
You could use JMeter PerfMon Plugin to monitor memory and network usage on master and slaves.
The default behavior of JMeter definitely seems to just duplicate your test plan across servers. So, if the test plan has 10 "threads", running it against X servers will yield 10x threads.
Is there any way to make this more intelligent? For example, maybe I only want one copy of some HTTP thread running even though I have 5 servers to distribute a more intense load.
Another example... I want to ensure that my sampler uses unique IDs for each thread, but my service requires that the usernames be pre-provisioned, so they can't be generated on the fly... I haven't been able to find a straightforward way to coordinate this (statelessly) across my distributed servers.
A "simple" implementation might be if JMeter had distributed testing aware variables built in so the client sent the server something like ServerID and ServerCount so that the test plan could use the numeric serverId as a prefix or mod by the server count. Alternatively, JMeter could have an option to shard thread_num so that if you say 10,000 threads and have 10 servers, it will run 1,000 threads on each server with thread_num never being duplicated across the distributed test for a given sampler (Example, skip thread_num if thread_num % serverCount != serverId).
Any thoughts on the best way to accomplish this?
One approach to having a distributed-test-aware variable is to start each jmeter-server with a different variable value:
bin/jmeter-server -Jvariable=valuehost1
And then in your test script just use:
${__P(variable)}
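Building on that, here is a hedged sketch of the sharding idea from the question: assuming you start each engine with hypothetical properties -JserverId=<n> and -JserverCount=<total>, a JSR223 element can derive a collision-free global ID:

// serverId/serverCount are hypothetical properties passed via -J on startup
int serverId    = Integer.parseInt(props.getProperty('serverId', '0'))
int serverCount = Integer.parseInt(props.getProperty('serverCount', '1'))
int threadNum   = ctx.threadNum   // 0-based thread number on this engine

// Interleave IDs so no two engines ever produce the same value
int globalId = threadNum * serverCount + serverId
vars.put('globalUserId', String.valueOf(globalId))

The same serverId/serverCount pair would also let you implement the "skip if thread_num % serverCount != serverId" filter from the question inside an If Controller.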
I have a simple scenario written in JMeter. Now I want to use SmartMeter instead of JMeter, but I don't know whether I have to create a new scenario/test or whether I can reuse the old one?
I am talking about http://www.smartmeter.io/
In the SmartMeter Editor environment you can run the test 1:1; it then runs virtually identically to JMeter 2.12 and does not use distributed mode. You can still watch the test in the Runner tab and generate the report after the test (if the listener "et#sm - Controller Summary Report" is included in the test).
For distributed mode we recommend using the SmartMeter thread group "et#sm - Distributed Lazy Stepping Thread Group", which creates users at the moment they become involved in the testing process; they are also automatically distributed across the generator servers with an exact deviation between them, while keeping the number of VUs from the settings.
You also need to add the listener component "et#sm - Controller Summary Report" to store the results and display information in the SmartMeter runner.
Further adjustments are optional, but I can recommend them:
Use the assertion "et#sm - Better Response Assertion", which works much more efficiently and faster; to retrieve values from the response, use the "et#sm - Boundary Points Extractor", etc.
I have a system that should be able to handle millions of users requests concurrently. In order to check how the system handles the load, I setup a cluster of JMeter servers (slaves), and one controller (client).
I have a database of all users (~10M), and I need each request sent to be from a different user.
I am wondering how I can implement such a thing in JMeter. Basically, I thought about dividing a range of users (let's say 100,000) to each slave, and then, within a given slave, having each request read a new user from the local 100,000 list and delete it. Thus, I will eventually send a request from every user.
The thing is, while this idea sounds logical in theory, I do not know exactly how to implement it in JMeter terms. Also, I am not sure how to read from the database in the test, although I could theoretically read it in advance into a text file and have each slave contain the text file with its 100,000-user portion.
I can setup a very large cluster of machines, so scale will not be the issue here. Just how to set it all up.
The best way to provide JMeter with a list of parameters is to use a CSV file:
http://jmeter.apache.org/usermanual/component_reference.html#CSV_Data_Set_Config
You can configure the CSV Data Set Config to make every thread use a different line in the CSV. Each engine will need its own unique CSV file, because the sharing mode does not work between engines in distributed testing (you can try to automate this part - see the sketch below; this can be interesting to do :) ).
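A rough Groovy sketch of that automation, assuming Unix-like engines reachable over scp (host names and paths below are hypothetical):

// Split one big users.csv into per-engine slices and push them out
def engines = ['engine1.example.internal', 'engine2.example.internal']
def lines   = new File('users.csv').readLines()
def chunk   = lines.size().intdiv(engines.size())

engines.eachWithIndex { host, i ->
    def to    = (i == engines.size() - 1) ? lines.size() : (i + 1) * chunk
    def slice = lines.subList(i * chunk, to)
    def out   = new File("users_${i}.csv")
    out.text  = slice.join('\n')
    // Same destination path on every engine, so the test plan needs no changes
    "scp ${out.name} ${host}:/opt/jmeter/users.csv".execute().waitFor()
}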
This is how your script should look:
1. Thread Group
1.1 HTTP sampler (login)
1.1.1 CSV dataset config
1.2 second http sampler
etc...
The login sampler will use the parameters loaded from the CSV file, so every 'login' will use a different line.
Distributed testing is pretty simple:
http://jmeter.apache.org/usermanual/remote-test.html
Keep in mind that running 100K concurrent users on a single JMeter load engine will be hard (JMeter consumes resources on the server, so you will need lots of CPU and memory), so you should also monitor the engines.
Also 1M users will cause a lot of data that the engines will send back to the console, so you might need to start a bunch of distributed tests in parallel, and at the end aggregate the results.
This can be implemented by doing the following steps:
1. Take the user credential dump and save it to a CSV file.
2. Split the CSV file and copy one part to each JMeter slave, in the same location on all the machines, e.g. "C:\Loadtest\".
3. From the controller, give the path of your CSV file in "CSV Data Set Config".
4. Run the test.
By doing the above steps, the JMeter controller will start the test with all the JMeter slave nodes pointed at the CSV file in the same location "C:\Loadtest\". But the trick here is that each machine will be using a different set of users.
Hope this will help.
What is the purpose of using a setUp Thread Group and a tearDown Thread Group in JMeter? Please explain with an example.
I know why we use thread groups, and I am also aware of the fact that setUp is for pre-test activities like creating users and monitoring, but I am not sure of a case where I could use it. The same goes for tearDown.
It sounds like you have pretty much figured it out already, but let me give you a few examples of when I've used it.
setup:
Get a large data set from a database into a jmeter variable for use during the test.
Get and log the version number of the system under test.
Run a JavaScript to set JMeter properties based on simpler input parameters/properties. Let's say you want to configure the selection of the target host with a simple true/false value, but in your test you need to expand it to different strings, and you don't want to have logic spread out all over your test plan.
teardown:
Never used it, but I guess it is mainly useful for cleaning up your system (e.g. deleting users that were created during the test)
Correct me if I'm wrong, but a setUp Thread Group cannot be used to store variables for use in the test threads (as far as I can see); any variables that I set in the setUp are never available. However, I found that if I use a Beanshell element to convert the variable in the setUp thread to a property like this:
${__setProperty(userToken, ${userToken})};
then on each test thread I either use the property directly, like:
${__property(userToken)}
or at the top of my thread I convert the property back into a variable, like:
vars.put("userToken","${__property(userToken)}")
However, this seems a bit long-winded, and it would be great if there was a way to set up variables in the setUp to be used on every thread.
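For what it's worth, the same property round-trip can be written in JSR223/Groovy instead of Beanshell functions; the variable name below is just the one from the example above:

// In a JSR223 Sampler inside the setUp Thread Group:
// promote the variable to a property, which is shared JVM-wide
props.put('userToken', vars.get('userToken'))

// In a JSR223 element inside a regular Thread Group:
// pull the property back down into a thread-local variable
vars.put('userToken', props.get('userToken'))

Note that properties are per-JVM, so in distributed mode each slave keeps its own copy.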
A setUp/tearDown Thread Group is a special type of Thread Group that can be utilized to perform pre-test/post-test actions. The behavior of these threads is exactly like a normal Thread Group element.
The difference is that these types of threads execute before/after the test has finished executing its regular Thread Groups.