When setting multiple executions under Testmodulelist -> Options, how can I determine the current execution round in CAPL code?
This is related to Properties NumberOfExecutions, ExecutionMode etc. mentioned in the CANoe help.
In the Test execution window (bottom right) there is an indicator that shows how many times the TM was repeated, e.g.
3 of 10 executions
I need to verify the TPS of one API.
The requirement is 6 TPS.
I ran a load of 6 users with 1 s pacing for 1 hour.
An output snapshot is attached.
From the output, how do I verify that the API achieved 6 TPS?
Thanks in advance.
In your case the throughput for the Get_id transaction is 1.9 TPS, so my expectation is that you either need to remove the pacing, increase the number of users, or both.
You can reach 6 TPS with 6 users only if the response time is 1 second (or less). Looking at your results, it can be as high as 5.6 seconds, so either your server cannot serve 6 transactions per second or you just need to add more users.
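The arithmetic can be sketched with Little's law: throughput ≈ concurrent users / time per iteration. The helper below is purely illustrative (the 5.6 s figure is taken from the results above, and it assumes pacing is a fixed delay added after each response):

```java
// Rough throughput estimate: each user completes one transaction per
// (responseTime + pacing) seconds, so TPS ≈ users / (responseTime + pacing).
public class TpsEstimate {

    static double estimateTps(int users, double responseTimeSec, double pacingSec) {
        return users / (responseTimeSec + pacingSec);
    }

    public static void main(String[] args) {
        // Worst case from the question: 6 users, 5.6 s responses, 1 s pacing
        System.out.printf("Worst-case TPS: %.2f%n", estimateTps(6, 5.6, 1.0));
        // With 1 s responses and no pacing, 6 users give exactly 6 TPS
        System.out.printf("Best-case TPS:  %.2f%n", estimateTps(6, 1.0, 0.0));
    }
}
```

With 5.6 s iterations, 6 users can only produce about 1 TPS, which is why more users (roughly 6 TPS x 5.6 s ≈ 34 of them) or faster responses are needed.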
If you want to check the throughput automatically and fail the test when the expected number of transactions per second is not met, you can consider running your JMeter test using the Taurus tool as a wrapper. Taurus provides a flexible and powerful pass/fail criteria subsystem which can check multiple metrics and return a non-zero exit status code if the throughput is lower than your expectations.
More information: Failure Criteria
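A minimal Taurus sketch, assuming a placeholder script name and the Get_id label from the question; the criterion uses the passfail module's "hits" (requests per second) subject:

```yaml
# Hypothetical wrapper config: run the existing JMeter plan through Taurus
# and fail the run if Get_id throughput stays below 6 hits per second.
execution:
- scenario:
    script: my-test-plan.jmx   # placeholder path to your .jmx

reporting:
- module: passfail
  criteria:
  - hits of Get_id<6 for 60s, continue as failed
```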
I am not an expert in JMeter, but I hope you can help.
How does one execute a test plan where, based on a CSV input of users, I want 50% of them to go down one flow (for each user - do stuff) and the other 50% to follow another flow of logic?
I'm not fussed whether the users from the file are picked at random (that would be good) or the file is gone through sequentially and split at the 50% mark - but I do want to know how to do this type of process within a test plan.
I am trying to create an even (or almost even) type of distribution "load" - some go one path, some go another path. Simple.
Let me give you a walk through:
we have students.
students have different tests.
for half of them, I want each student to do ALL tests
for the other half, I want each student to do 1 test (based on some parsed JSON that gives us the list of tests from the web end)
Also, if I have a jmx file that contains, say, 10 controllers (these would be tests), is it possible to pick one at random and then return to the parent? Maybe a decision can then be made whether to do the next test or not, and if so, cycle on to the next one.
Thoughts?
I am using JMeter 5.3 if it helps!
You can use Throughput Controller(s) to distribute the executions.
Set "Based on" to "Percent executions" and set the value to 50.
Note: Set the Sharing mode in the CSV Data Set Config appropriately
You can use a Random Controller to pick a random child from the sub-controllers and samplers.
Use modulo (%) operator on the studentID (or 'number' or 'rownumber' -- whatever is handy).
If the output is odd, send a request to scenario a (first 50%), and if even, send a request for scenario b (second 50%).
// Example in pseudo-Java:
int id = myStudentIDNumber; // e.g. taken from the CSV row
if (id % 2 == 0) { // remainder is 0, so even number -> scenario a (first 50%)
    fetch(mywebsite/api1);
} else { // remainder is 1, so odd number -> scenario b (second 50%)
    fetch(mywebsite/api2);
}
There are multiple ways of implementing your scenario; depending on your design, the options include:
Throughput Controller, which controls how often its children are executed (either as a percentage of total executions or in absolute values)
Switch Controller, which acts like a switch statement in programming languages - this one seems the most appropriate for your use case. Moreover, it can run a random child if you supply the __Random() function as its input
Weighted Switch Controller, which combines the above 2 approaches and provides an easy way of assigning a weight to each child
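As a sketch: assuming the Switch Controller has 10 child controllers (indexes 0 through 9, since the switch value is 0-based), setting its Switch Value to a random index makes it run one child at random on each iteration:

```
${__Random(0,9,)}
```

The trailing comma just leaves the __Random() function's optional variable name empty.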
I'm about a week into learning JMeter and I've run a few test scripts which generate a summary.csv containing the standard columns: Samples, Average, Median, etc.
[My Question]
I was wondering if there is a way to add a threshold to the summary.csv, so that if the average time is higher than x milliseconds, the user is informed that the specific result was slower than expected. (Maybe this can be displayed in the summary.csv; to be honest, I'm not sure what my options are for outputting this.)
I am aware that we can use assertions (specifically the Duration Assertion) in the test script, but the issue I have with assertions is that the test stops once an assertion fails, preventing it from generating a summary.csv.
Thank you for any input/opinions you guys have :) It is much appreciated!
Have a great day and stay safe everyone!
Such thresholds are there already, and they're controllable via the following JMeter properties:
jmeter.reportgenerator.apdex_satisfied_threshold
jmeter.reportgenerator.apdex_tolerated_threshold
There is also a property which can apply thresholds to specific Samplers or Transaction Controllers: jmeter.reportgenerator.apdex_per_transaction
Just declare the properties with the values of your choice in the user.properties file, and the next time you generate the dashboard its APDEX section will reflect the thresholds.
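A minimal user.properties sketch; the millisecond thresholds and the Login/Checkout labels are illustrative placeholders, not values from the question:

```
# dashboard-wide APDEX thresholds, in milliseconds
jmeter.reportgenerator.apdex_satisfied_threshold=1500
jmeter.reportgenerator.apdex_tolerated_threshold=3000
# per-sampler / per-Transaction-Controller overrides: label:satisfied|tolerated
jmeter.reportgenerator.apdex_per_transaction=Login:1000|2000;Checkout:2000|4000
```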
More information: JMeter HTML Reporting Dashboard - General Settings
I'm executing multiple scripts for 1 hour in non-GUI mode.
Test scripts: Script1, Script2, Script3
Test execution approach: keeping 3 thread groups in one script and giving an equal number of users to each, running in non-GUI mode.
JMeter version: 4.0
The number of samples differs across the scenarios. I need an equal distribution for all 3 scenarios. How can I achieve this?
You can define a fixed loop count in all 3 thread groups, something like 1000 or 10000 depending on your scenario. Do not use a time duration here, as you want JMeter to stop only after all iterations are completed.
Secondly, you can use a Throughput Controller to execute samplers a fixed number of times.
Hope it helps.
I have a JMeter Test Plan with many copies of almost exactly the same test. In each case there is a variable that is slightly different.
Here is the configuration:
There are two sets of user variables. There is a top level user variable list, that contains maximum_runs and there are Test Fragment level user variable lists with the User Defined Variable add_users, which goes up by 10 for each test case. users is a static 10.
I set maximum_runs to 100 and disable all but one Test Fragment. This gives me a number of samples = 100 for each Fragment. I enable a second Test Fragment and I still get 100 samples. But as soon as I enable the third Test Fragment my number of samples drops to 90; with the 4th, 80. But on the 5th one it goes right back up to 100 and the cycle starts over again. I don't see anything wrong with my math, so I believe it is something about how JMeter uses jexl2, or maybe variables are being changed by the number of Fragments running? I really need to be able to run this with the same number of samples no matter how many Fragments are enabled. Also note that I have "Run Thread Groups consecutively (i.e. run groups one at a time)" checked in the Test Plan.
I had a similar issue with one application: 1 out of 4 test components just would not ramp up past 50 percent of the required users.
The problem was that the component was a memory eater; when it reached the maximum heap size, it did not let the other threads in that component ramp up. But this is just a long shot.