Enabling additional tests in JMeter reduces number of samples

I have a JMeter Test Plan with many copies of almost exactly the same test. In each case there is a variable that is slightly different.
Here is the configuration:
There are two sets of user variables: a top-level User Defined Variables list that contains maximum_runs, and a User Defined Variables list in each Test Fragment containing add_users, which goes up by 10 for each test case. users is a static 10.
I set maximum_runs to 100 and disable all but one Test Fragment. This gives me 100 samples for that Fragment. I enable a second Test Fragment and I still get 100 samples per Fragment. But as soon as I enable the third Test Fragment the number of samples drops to 90; with the fourth it drops to 80. On the fifth it goes right back up to 100 and the cycle starts over again. I don't see anything wrong with my math, so I believe it is something about how JMeter evaluates jexl2, or maybe the variables are being changed depending on the number of Fragments running. I really need to run this with the same number of samples no matter how many Fragments are enabled. Note: I have "Run Thread Groups consecutively (i.e. run groups one at a time)" checked in the Test Plan.
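(For illustration only: a common way to wire variables like these into a Thread Group is an expression such as Loop Count = ${__jexl2(${maximum_runs} / ${users})}, which with the values above gives 10 loops x 10 threads = 100 samples. Whether this matches the actual expression in the plan is an assumption; the question does not show it.)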

I had a similar issue with one application: 1 out of 4 test components just would not go above 50 percent of the required users.
The problem was that the component was a memory eater, and when it hit the maximum heap it did not let the other threads in that component ramp up. But this is just a long shot.
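If heap exhaustion turns out to be the cause here too, the usual fix is to give JMeter more memory before starting the test. A minimal sketch (the heap sizes and file names are illustrative):

# set a bigger heap, then run the test in non-GUI mode
export HEAP="-Xms1g -Xmx4g -XX:MaxMetaspaceSize=256m"
./jmeter -n -t testplan.jmx -l results.jtl

The same HEAP line can instead be edited directly in bin/jmeter (or bin/jmeter.bat on Windows).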

Related

JMeter - 50% split and take relevant path

I am not an expert in JMeter but hope you can help.
How does one execute a test plan where, based on a CSV input of users, I want 50% of them to go down one flow (for each user: do stuff) and the other 50% to go down another flow of logic?
I'm not fussed whether the split is random across the users in the file (that would be good) or sequential, splitting at the 50% mark, but I do want to know how to do this kind of thing within a test plan.
I am trying to create an even (or almost even) distribution of load: some go down one path, some go down another. Simple.
Let me give you a walkthrough:
we have students.
students have different tests.
for half of the students I want each one to do ALL the tests
for the other half I want each student to do 1 test (based on some parsed JSON that gives us the list of tests from the web end)
Also, is it possible, if I have a jmx file that contains, say, 10 controllers (these would be tests), to pick one at random and then return to the parent? Maybe at that point a decision can be made whether to do the next test or not, and if so, cycle on to the next one.
Thoughts?
I am using JMeter 5.3 if it helps!
You can use Throughput Controller(s) to distribute the executions.
Set "Based on" to "Percent Executions" and set the value to 50.
Note: Set the Sharing mode in the CSV Data Set Config appropriately
You can use a Random Controller to pick a random child from the sub-controllers and samplers.
Use the modulo (%) operator on the studentID (or 'number' or 'rownumber' -- whatever is handy).
If the result is even, send a request for scenario A (first 50%); if odd, send a request for scenario B (second 50%).
// Example in pseudo-Java:
int id = myStudentIDNumber;        // the student's numeric ID (e.g. from the CSV line)
if (id % 2 == 0) {                 // remainder is 0, so even number
    fetch("mywebsite/api1");       // scenario A (first 50%)
} else {                           // remainder is 1, so odd number
    fetch("mywebsite/api2");       // scenario B (second 50%)
}
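To do the same split inside the test plan itself, a minimal sketch is a JSR223 PreProcessor (the Groovy engine accepts this Java-style syntax) that reads the ID provided by the CSV Data Set Config and stores the chosen flow in a variable; the variable name studentID and the flow values are assumptions, so adjust them to your CSV column:

// JSR223 PreProcessor sketch; "vars" is the built-in JMeterVariables binding.
// "studentID" is assumed to be the CSV column name -- adjust to yours.
int id = Integer.parseInt(vars.get("studentID"));
if (id % 2 == 0) {
    vars.put("flow", "all_tests");    // even IDs: run every test
} else {
    vars.put("flow", "single_test");  // odd IDs: run a single test
}

Two If Controllers can then branch on it, e.g. with the condition ${__jexl3("${flow}" == "all_tests")} for the first branch and the negation for the second.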
There are multiple ways of implementing your scenario; depending on your design, the options are:
Throughput Controller, which controls how often its children are executed (either as a percentage of total executions or in absolute values)
Switch Controller, which acts like a switch statement in programming languages - this one seems the most appropriate for your use case. Moreover, it can run a random child if you supply the __Random() function as its input (see the example after this list)
Weighted Switch Controller, which combines the above 2 approaches and provides an easy way of assigning a weight to each child
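As a concrete illustration of the Switch Controller option (the count of 10 children comes from the question; the layout and names are otherwise generic, and it is worth checking whether your JMeter version treats the upper bound of __Random as inclusive):

Switch Controller
  Switch Value: ${__Random(0,9)}    <- picks one of the 10 children (indexes 0..9) on each iteration
    Simple Controller - test 1
    Simple Controller - test 2
    ...
    Simple Controller - test 10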

VS Web Test Load, Distribution and Iterations

In this case, I have created several webtest scripts, and added them to a load test (distributed by expected use).
What I would like to do is send a user load (500 for example) where all users run at the same time, each user is given only a single script to run and complete, then the test is finished. One iteration for each user.
I am finding that iterations are not user-based but test-based, so only one user and one test complete when I select a Test Iterations value of 1 for 500 users.
Is there a user based iteration setting or some other way to accomplish my intended test?
Thanks.
The test settings you have used are not at all clear from your question. However, assuming you want to start 500 test cases at the same time and stop after they have completed, you can use the following.
In the properties of the scenario: set the user load to constant and to 500 users. Also set the maximum test iterations to 0 (meaning no maximum). I would also set the think time between iterations to much longer than you expect the test run to take; this setting may not be needed, but it avoids unexpected behaviours.
In the properties of the run settings there are two possibilities.
Either (1) set the test iterations to 500.
Or (2) set the run duration to long enough for all 500 tests to complete, but shorter than the think time between iterations in the scenario.

How can I execute several scenarios simultaneously without using several thread groups

I have created seven thread groups which execute different scenarios in one application. I'm trying to optimize my scripts in order to be more maintainable and easy to master when someone else uses them.
The thing that I cannot figure out is how I can combine those thread groups into one or two and still have the seven different execution paths and the ability to control them; by control I mean setting how many users execute scenario 1, how many execute scenario 2, and so on up to 7.
Currently the test plan looks like this
If you don't want several thread groups for some reason, the alternative options are:
Throughput Controller - with different global executions or execution percentages
Switch Controller - which provides random weighted values (in some cases Throughput Controller doesn't guarantee that samplers in scope will ever be executed)
See Running JMeter Samplers with Defined Percentage Probability guide for more information on configuration and implementation.
Well, I just figured out how to do that: I have added a Loop Controller with a Random Order Controller as its child, and I have put the seven Throughput Controllers as children of the Random Order Controller, so now everything looks fine (see the outline below).
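Roughly, the resulting structure looks like this (controller names and the per-scenario settings are illustrative):

Thread Group (total users for all scenarios)
  Loop Controller
    Random Order Controller
      Throughput Controller - scenario 1 (Percent Executions or Total Executions as needed)
        samplers for scenario 1
      Throughput Controller - scenario 2
        samplers for scenario 2
      ... up to Throughput Controller - scenario 7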

Generating graphs from distributed test seems to give results for one client/slave

I'm running a distributed test using 5 JMeter clients (slaves). Each client is set to run 50 users. At the conclusion of the test I generate a series of graphs from the resulting JTL, along with a SynthesisReport. The SynthesisReport details 250 samples for each request, as you'd expect; however, the TimeVsThreads and ThreadsStateOverTime graphs peak at 50 users, as if they were showing the results from just one of the clients.
I've confirmed that the jmeter.properties files for each client are the same, as I suspected it was possibly an issue with each client's results file configuration and saveservice settings.
I can't imagine this is by design. Has anyone experienced something similar, and if so, how was it solved?
As per documentation:
http://jmeter-plugins.org/wiki/ActiveThreadsOverTime/
Just name your thread groups using a unique id for each generator (the hostname, or a property you pass to each injector and use with the __P function if you have more than 1 injector per host) and it will work fine (example below).
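For example (the property name loadgen and the values are illustrative), start each load generator with its own id and include it in the Thread Group name:

# on each load generator
jmeter-server -Jloadgen=gen1

# Thread Group name in the test plan
Search Users ${__P(loadgen,local)}

Alternatively, ${__machineName()} in the Thread Group name gives a per-host id without passing any property.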
This is normal for JMeter distributed testing.
The reason this occurs is that each load generator separately starts User Threads 1-50, so when cmdrunner runs, it sees one response from each User Thread 1 (5 in total) but can't differentiate between them.
If you're using a custom reporter tool (that wraps cmdrunner), you can multiply your peak users by your load generators to display a more accurate number at the top of your Report. But as long as you're calling cmdrunner, you won't be able to see the actual number of users on your graphs.
This is normal behaviour for JMeter: 5 clients will each run 50 threads.
Open JMeter, run on all remote hosts, and check Active Threads: each will run 50 threads.

JMeter thread groups

I have entered 500 as the number of threads in the thread group and a ramp-up time of 120 seconds, but when the report is generated the virtual user count is only 15, and in the composite graph the active threads over time only rise to about 12. So I am a bit confused about the active thread counts, because the numbers (threads) I entered in the test plan before the test are different from what the results show afterwards. And what about the scaled values in the graph, like x10? Is that something related to threads?
Each JMeter thread, representing a virtual user, after initialization starts executing samplers from top to bottom (or according to the Logic Controllers).
If a thread doesn't have any more samplers to execute and no more loops to iterate, it is shut down. That looks just like your case. See the Max Users is Lower than Expected article for a more detailed explanation and workaround.
Usually people set Loop Count to "Forever" and use a Runtime Controller so the test finishes in the designed time (see the sketch below). Another option is using e.g. the Ultimate Thread Group, available via JMeter Plugins, which provides a convenient way of defining a load scenario.
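A minimal sketch of that first workaround (the 500 threads and 120 s ramp-up come from the question; the runtime value is illustrative):

Thread Group
  Number of Threads: 500
  Ramp-up Period: 120 seconds
  Loop Count: Forever
  Runtime Controller (Runtime: 600 seconds)
    your samplers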

Resources