Do Gatling reports req/s include pauses and pace? - performance

I'm running load tests in Gatling and noticed that when I ramp 250 users over 10 seconds, the report gives me an average of 31 req/s:
val combinedScenario = scenario("Combined")
  .feed(UuidFeeder.feeder)
  .exec(_.set("token", token))
  .exec(saveData)
  .exec(processDocumentRequest)
val scn = List(
  OAuthRequest.inject(atOnceUsers(1)),
  combinedScenario.inject(nothingFor(5 seconds), rampUsers(250) over (10 seconds))
)

setUp(scn).protocols(httpConf).maxDuration(60 minutes)
However, when I surround the scenario in a forever loop and put a 60-second pace between each set of requests, the report then says I average about 8 req/s:
val combinedScenario = scenario("Combined")
  .feed(UuidFeeder.feeder)
  .exec(_.set("token", token))
  .forever(
    pace(60 seconds)
      .exec(saveData)
      .exec(processDocumentRequest)
  )
Is this simply because the report factors in the 50 seconds in between iterations where 0 requests are being sent? Can I assume that it's still sending around 31 req/s for the short bursts of requests being sent each minute?

Yes - the reports just show what the actual throughput during the scenario was, not some hypothetical maximum. The number you get could be constrained by your scenario or by the application under test; you would need to run some experiments to confirm.
With the pace in the scenario, you should also be able to increase the number of concurrent users, based on your initial testing.
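As a back-of-the-envelope check (a sketch with assumed numbers, not output from the report): each of the 250 users issues the two exec requests once per 60-second pace window, which lands close to the reported figures.

// Rough arithmetic sketch: 250 users, 2 requests (saveData + processDocumentRequest)
// per 60-second pace window, response times ignored.
val users = 250
val requestsPerUser = 2
val paceSeconds = 60.0
val pacedAverage = users * requestsPerUser / paceSeconds // ≈ 8.3 req/s, close to the reported ~8 req/s
// Without the pace, the same ~500 requests finish in a short burst right after the ramp,
// so averaging over that shorter window gives the higher ~31 req/s figure.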

Related

100 requests per minute for a duration of 20 minutes - Load/performance testing

I am doing a load test on my system using JMeter. The requirement is to generate 150 requests per minute constantly for a duration of 20 minutes.
I tried the approaches below.
First I tried this configuration:
Number of threads - 3000 [150 req/min * 20 mins]
Ramp-up period - 1200 sec [20 mins * 60]
But here the test stopped after creating 2004 threads, giving this error:
Failed to start the native thread for java.lang.Thread “Thread Group 1-2004”
Uncaught Exception java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached in thread Thread[#51,StandardJMeterEngine,6,main]. See log file for details
Then I used the Concurrency Thread Group with these settings:
Target concurrency - 150
Ramp-up time - 1 min
Hold target rate time - 20 mins
But here the number of samples collected is more than 3000 [150 req/min * 20 mins], which I feel is not correct.
Is it possible to create the exact load according to my requirement in JMeter (150 req/min for a duration of 20 mins), or should I explore other tools like Locust?
I also tried with precise throughput timers (screenshots attached).
Your understanding of the relationship between users and hits per second is not correct.
When a JMeter thread (virtual user) is started, it begins executing Samplers as fast as it can. The throughput (number of requests per second) therefore depends mainly on the response time.
For example:
you have 1 user and a 1-second response time - the load will be 1 request per second
you have 1 user and a 2-second response time - the load will be 0.5 requests per second
you have 2 users and a 2-second response time - the load will be 1 request per second
you have 4 users and a 2-second response time - the load will be 2 requests per second
etc.
If you want to slow JMeter down to the desired number of requests per minute, it can be done using Timers, for example:
Constant Throughput Timer
Precise Throughput Timer
Throughput Shaping Timer
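As a hedged sizing sketch (the 1-second response time below is an assumed figure, not from the question), Little's law shows why only a handful of threads plus a timer are needed to hold 150 requests per minute:

// Sizing sketch: concurrency ≈ throughput * response time (Little's law).
val targetPerMinute = 150.0
val targetPerSecond = targetPerMinute / 60.0 // 2.5 req/s
val assumedResponseTimeSec = 1.0 // assumption for illustration
val threadsNeeded = math.ceil(targetPerSecond * assumedResponseTimeSec) // ≈ 3 threads
// A few threads with a throughput timer capped at 150 samples per minute is enough;
// 3000 threads are not required to reach 150 requests per minute.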

Gatling - problem with users creation with rampUsers when other scenarios are working

I have spotted some strange behaviour in Gatling.
I have 5 scenarios with the following setup:
scn[00-04].inject(
  constantUsersPerSec(simulationConfig.UsersPerSec)
    during (simulationConfig.maxDurationSeconds seconds).max(simulationConfig.maxDurationSeconds)
).protocols(protocol),
UsersPerSec = ~0.8
maxDurationSeconds = 240
and additionally one scenario with:
scn05.inject(nothingFor(120 seconds), rampUsers(100) during(2 seconds))
.protocols(protocol)
So the whole load lasts 240 seconds, and after 120 seconds the additional load should be created over 2 seconds. What I have observed:
after the creation of the additional 100 new "users" starts, the old "users" from the first 5 scenarios do not execute requests / stay inactive for some time (why?)
the amount of 100 "users" is not reached after the start (internally only one blocking request to a SOAP service is sent during one action - it can take from 5 seconds to ~30 seconds); it stays at a level of about 70 (dark blue line), but it should reach the amount of 100 "users" (why?)
OK, I have created another test project with:
scn00.inject(rampUsers(100) during(2 seconds))
.protocols(protocol),
scn01.inject(nothingFor(20 seconds),rampUsers(100) during(2 seconds))
.protocols(protocol)
In those scenarios I left only:
.feed(customFeeder)
.pause(1 seconds)
.exec(customAction("Action X")((testData) => {
  val y = testData.getConfig // sometimes reads a shared file during initialization
  Thread.sleep(20000)
}))
and the results looked like this:
Why in this chart can we see that:
the amount of 100 users was not reached? (rampUsers(100) during (2 seconds))
at that point this scenario should have ended (it contains almost only Thread.sleep(20000))
why could the second scenario reach 100 users even though it differs only by nothingFor(20 seconds)? Additionally, why did this scenario start after 40 seconds rather than 20 seconds?
why was the second scenario frozen for 60 seconds and only started finishing after the first scenario was completed by all users?
Why did it not look more or less like this:
I guess you're confusing the injection profile with something that would drive the virtual users' lifespan.
Injection profiles only drive when users are injected/started.
Lifespan is driven by your scenario: once a user reaches the end of its scenario, it terminates.
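A minimal Gatling sketch of that distinction (someRequest is a placeholder for your own request chain, such as the customAction above): the injection profile only schedules arrivals, while the scenario body - for example a during(...) loop - decides how long each user lives.

// Minimal sketch: rampUsers(100) during (2 seconds) only controls WHEN users start;
// each user then lives exactly as long as its scenario takes to run.
val shortLived = scenario("ShortLived")
  .exec(someRequest) // user terminates right after this single action

val longLived = scenario("LongLived")
  .during(120 seconds) { // user keeps looping for ~2 minutes, then terminates
    exec(someRequest)
      .pause(1 second)
  }

setUp(
  shortLived.inject(rampUsers(100) during (2 seconds)),
  longLived.inject(rampUsers(100) during (2 seconds))
).protocols(protocol)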

How to get the responses of only the spike added in the soak test in jmeter?

My scenario is: I'm running 50 threads for 15 mins and then running 100 threads for 15 mins. The total time of the test is 21 mins.
The 50 threads will start running after 10 seconds, slowly ramping up; the 50 threads will run simultaneously for 5 mins, and then after 5 mins the 100 threads will start slowly ramping up and run for 15 mins.
After the 100 threads finish, the 50 threads will continue running.
The images below show the jp@gc Thread Group and the jp@gc Ultimate Thread Group configuration.
I only want the responses (mainly in graph format) drilled down to only when the 100 users are present; I don't want an aggregate of the whole soak test. How can this be done? I have also tried loading the jtl.gz file on https://loadosophia.org, but it also gives the aggregate report, which I don't want.
I only want the specific report for the spike of 100 users for 15 mins.
Please let me know.
Thanks in advance.
You can grep your file to select only the interval of time you want and use it to generate the report.
Another option is to use this method:
http://www.ubik-ingenierie.com/blog/automatically-generating-nice-graphs-at-end-of-your-load-test-with-apache-jmeter-and-jmeter-plugins/
with this plugin:
http://jmeter-plugins.org/wiki/GraphsGeneratorListener/
and use these fields:
Start Offset
End Offset
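A rough Scala sketch of the grep-style option, assuming a CSV .jtl whose first column is the epoch-millisecond timeStamp; the file names and window values are examples, not from the question.

// Keep only samples recorded inside the 100-user spike window, then feed the
// trimmed file to the report generator.
import scala.io.Source
import java.io.PrintWriter

val spikeStartMillis = 1700000000000L // when the 100-thread group ramped up (example value)
val spikeEndMillis = spikeStartMillis + 15 * 60 * 1000L // 15-minute spike window

val lines = Source.fromFile("results.jtl").getLines().toList
val header = lines.head
val filtered = lines.tail.filter { line =>
  val ts = line.takeWhile(_ != ',').toLong // first CSV column is the timeStamp
  ts >= spikeStartMillis && ts <= spikeEndMillis
}

val out = new PrintWriter("spike-only.jtl")
(header +: filtered).foreach(line => out.println(line))
out.close()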

jmeter ultimate thread group with relation to constant timer

Scenario:
a. Ultimate Thread Group: thread count: 100, startup time: 60, hold load: 300
b. There are 10 HTTP(S) requests in the script and each has a 1-second constant timer, so the total constant timer value is 10 seconds.
In the above scenario, will the hold time become 300 + (100 * 10), 300 + (10), 300 - (100 * 10), or 300 - (10)?
Your timers on samplers don't have anything to do with your total test time, so in your example it will simply be 60 + 300 seconds.
When a thread finishes its 10 requests, it will start again, so once your test is ramped up each thread will execute them about 30 times. If you increased your timers, the 10 requests would take longer to complete, so fewer iterations would be done - but it wouldn't change your duration.
Timers and hold time work independently; they are not related.
In your example:
The test will start loading threads as soon as it begins, and by the end of 60 seconds all 100 threads will be up.
Individual thread execution depends on the response to each request sent to the server (in your case 10 requests per thread), and the constant timer will wait for 1 second before sending the next request of the same thread.
The hold time ensures the same load of 100 users (threads) on the server for the specified period: as soon as a thread completes its execution cycle (all 10 requests), it starts another cycle to maintain the same load during the hold time.
The test will be completed in 300 + 60 = 360 seconds.
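A short arithmetic sketch of the example above (response times assumed negligible next to the timers):

// Timers pace each thread but never extend the test window.
val rampUpSec = 60
val holdSec = 300
val requests = 10
val timerSec = 1 // constant timer per request
val totalSec = rampUpSec + holdSec // 360 s total test time
val loopSec = requests * timerSec // ~10 s per pass over the 10 requests
val iterationsPerThread = holdSec / loopSec // ~30 passes per thread during the hold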

Jmeter - I have run 2 test cases but result seems odd

I have run a load test for a website, but when I increased the number of users, I can see the throughput increasing instead of decreasing.
Test Case 1:
Number of threads: 15
Ramp-up time: 450 [as I want a delay of 30 seconds between 2 users]
Loop count: Forever
Scheduler: 1800 seconds [as I want to run the test for 30 minutes]
In the HTTP requests I have added 10 pages, and each request has a constant timer of 30000 milliseconds as I need a delay of 30 seconds between 2 requests.
Now when I look at the Aggregate Report, it shows a throughput of 3/min for each request.
Test Case 2:
Number of threads: 30
Ramp-up time: 900 [as I want a delay of 30 seconds between 2 users]
Loop count: Forever
Scheduler: 1800 seconds [as I want to run the test for 30 minutes]
In the HTTP requests I have added 10 requests/pages, and each request has a constant timer of 30000 milliseconds as I need a delay of 30 seconds between 2 requests.
Now when I look at the Aggregate Report, it shows a throughput of 6/min for each request.
I am confused: how is this possible? If my users increased from 15 to 30, there should be more load on the server and the throughput should decrease, like 1/min or 2/min.
Please let me know what I am doing wrong here.
Throughput is the number of completions per unit time. (A completion can be an HTTP request, a DB request - in short, anything that needs to be executed and takes > 0 execution time.)
E.g. requests per second or requests per minute.
By JMeter's definition, throughput is calculated as the total number of requests / total time.
In your first case, the number of requests generated per sampler in 1800 seconds by 15 users, with a 30-second delay before every request, is x. Thus the throughput is x/30, i.e. 3/min, which means ~90 requests were generated per sampler (verify this from the Aggregate Report or another reporter).
In your second case everything else is the same but the number of users is doubled, which creates roughly double the number of requests in the same time (1800 seconds).
Thus, according to the formula (number of requests generated / total time):
Throughput in the 2nd case = 2x/30 = 2 * throughput in the 1st case, which is 6/min (correctly shown by JMeter).
The key here is to check the number of requests generated in both cases.
I hope this clears your confusion. Let me know if you need further clarification. BTW, "when I have increased no. of users, I can see throughput time seems increasing instead of decrease" is not always true.
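A hedged sketch of the arithmetic behind the two per-request figures (assuming response times are negligible next to the 30-second timer):

// Each thread cycles through 10 samplers, each preceded by a 30 s constant timer,
// so one full pass takes about 300 s and each sampler is hit once per pass per thread.
def perSamplerThroughputPerMinute(threads: Int): Double = {
  val samplers = 10
  val timerSec = 30
  val testSec = 1800
  val loopSec = samplers * timerSec // ≈ 300 s per pass
  val hitsPerSampler = threads * (testSec.toDouble / loopSec) // 15 threads -> ~90, 30 -> ~180
  hitsPerSampler / (testSec / 60.0) // per-minute throughput per sampler
}

perSamplerThroughputPerMinute(15) // ≈ 3.0/min, as in Test Case 1
perSamplerThroughputPerMinute(30) // ≈ 6.0/min, as in Test Case 2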
Throughput increased by a factor of 2.
Test Case 1: 3 requests per minute - 1 request every 20 seconds
Test Case 2: 6 requests per minute - 1 request every 10 seconds
As per JMeter Glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
You may also be interested in the following plugins:
Server Hits Per Second
Transactions Per Second
or alternatively the Loadosophia.org service, which can convert your JMeter .jtl results files into an easy-to-understand professional load report
