JMeter threads and ramp-up understanding

I've just started to play with JMeter for my performance/load tests and have a basic question despite reading the official documentation. It would be helpful if someone could validate my understanding of threads and ramp-up time.
Example 1:
Threads: 4
RampupTime: 0.1
No of requests (test cases): 1000
How does the thread distribution happen above?
Example 2:
Threads: 4
RampupTime: 1
No of requests (test cases): 1000
How does the thread distribution happen above?
My understanding in this case is that JMeter would take 1 second to spin up the 4 threads, and for the tests that run after that second (say from test case 10 onwards) 4 concurrent threads will be hitting 4 different tests (i.e. in concurrent batches of 4)? Is this correct?
Please help. I'm a bit confused with the correlation between the above 3 parameters. Any illustration would be much appreciated. Thanks.

So for the first question: no, there is no ramp-up for anything less than 1 second.
Why?
Because the ramp-up time is an int, so anything less than one is treated as 0.
http://svn.apache.org/viewvc/jmeter/branches/doc-v2_3_1/src/core/org/apache/jmeter/threads/ThreadGroup.java?revision=1196285&view=markup#l227
public void setRampUp(int rampUp) {
    setProperty(new IntegerProperty(RAMP_TIME, rampUp));
}
For the second question, a thread is spawned every 250 milliseconds, so after one second you will have all 4 threads running.
http://svn.apache.org/viewvc/jmeter/branches/doc-v2_3_1/src/core/org/apache/jmeter/engine/StandardJMeterEngine.java?revision=1196285&view=markup#l399
int rampUp = group.getRampUp();
float perThreadDelay = ((float) (rampUp * 1000) / (float) numThreads);
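Just to make the arithmetic concrete, here is a small standalone Java sketch - not JMeter's actual scheduling code, only the same perThreadDelay calculation applied to your Example 2 (4 threads, 1 second ramp-up):
public class RampUpDemo {
    public static void main(String[] args) {
        int numThreads = 4;
        int rampUp = 1; // ramp-up in seconds, as in Example 2
        float perThreadDelay = ((float) (rampUp * 1000)) / numThreads; // 250 ms
        for (int i = 0; i < numThreads; i++) {
            // thread 1 starts at 0 ms, thread 2 at 250 ms, thread 3 at 500 ms, thread 4 at 750 ms
            System.out.printf("Thread %d starts at %.0f ms%n", i + 1, i * perThreadDelay);
        }
    }
}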
Coming back to your understanding of a concurrent batch: no, it is not so. Each thread runs independently; for example, if for some reason one of the threads gets hung, the other three will still be running. They do not wait for the first thread to complete before starting a second batch of requests.

Related

Do Gatling reports req/s include pauses and pace?

I'm running load tests in Gatling, and noticed that when I ramp 250 users over 10 seconds, the report gives me an average of 31 req/s:
val combinedScenario = scenario("Combined")
  .feed(UuidFeeder.feeder)
  .exec(_.set("token", token))
  .exec(saveData)
  .exec(processDocumentRequest)

val scn = List(OAuthRequest.inject(atOnceUsers(1)),
  combinedScenario.inject(nothingFor(5 seconds),
    rampUsers(250) over (10 seconds)))

setUp(scn).protocols(httpConf).maxDuration(60 minutes)
However, when I surround the scenario in a forever loop and put a 60 second pace in between each set of requests, the report then says I average about 8 req/s:
val combinedScenario = scenario("Combined")
  .feed(UuidFeeder.feeder)
  .exec(_.set("token", token))
  .forever(
    pace(60 seconds)
      .exec(saveData)
      .exec(processDocumentRequest)
  )
Is this simply because the report factors in the 50 seconds in between iterations where 0 requests are being sent? Can I assume that it's still sending around 31 req/s for the short bursts of requests being sent each minute?
Yes - the reports just show what the actual throughput during the scenario was, not some hypothetical maximum. The number you get could be constrained by your scenario or by the application under test; you would need to run some experiments to confirm.
With the pace in the scenario, you should also be able to increase the number of concurrent users, based on your initial testing.
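A rough sanity check, assuming each of the 250 users issues the two requests in the combined scenario (about 500 requests per iteration): 500 requests spread over roughly 16 seconds of activity is about 31 req/s, while the same 500 requests averaged over a 60-second pace window is about 8 req/s. So the lower number in the second report reflects the idle time being included in the average, not the burst rate dropping.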

Runtime Controller in JMeter

Can you help me understand the relationship between the time in the Runtime Controller and the Ramp-Up Period value of the Thread Group?
I tested:
Number of Threads: 1
Ramp-Up Period: 1
Loop Count: 1
Runtime Controller: 5s
-> Elapsed time of the running test: 5s
But with this case:
Number of Threads: 5
Ramp-Up Period: 5
Loop Count: 1
Runtime Controller: 5s
-> Elapsed time of the running test: 10s
I don't understand why it becomes 10s.
Could you help explain further?
Ramp-up is the time taken to start all the threads; the Runtime Controller controls how long each thread executes its children.
In your case, a ramp-up of 5 seconds means the last thread starts towards the end of those 5 seconds (one new thread per second with 5 threads). That last thread then enters the Runtime Controller, which runs its children for 5 seconds. Thus roughly 10 seconds is the maximum length of your execution.
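As a rough timeline, ignoring response times and shutdown overhead: with a 5-second ramp-up and 5 threads, the threads start at roughly 0, 1, 2, 3 and 4 seconds; each then spends 5 seconds inside the Runtime Controller, so the last one finishes around the 9-10 second mark, which matches the elapsed time you observed.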
Runtime Controller acts according to JMeter Scoping Rules so it defines how long its children are allowed to run.
Normally you should be using it in conjunction with Loop Count = Forever or -1 on either Thread Group or Loop Controller level.
So
if you want the whole test to run for 5 seconds - use "Scheduler" section of the Thread Group
if you want only certain sampler(s) to run for 5 seconds - put them under the Runtime Controller, however the whole test duration will depend on when the last sampler enters the Runtime Controller
Also be aware that JMeter "asks" threads to stop so it might take some extra time to let them gracefully shut down.
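As a concrete sketch of the first option, assuming you want the whole test to stop after about 5 seconds: in the Thread Group set Loop Count to Forever, tick the Scheduler checkbox and set Duration to 5. JMeter then stops scheduling new iterations after 5 seconds (plus ramp-up and graceful-shutdown overhead), without needing a Runtime Controller at all.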

JMeter - I have run 2 test cases but the result seems odd

I have run load testing for a website, but when I increased the number of users, I can see the throughput increasing instead of decreasing.
Test Case 1 :
No. of Threads : 15
Ramp up time : 450 [As I want to put delay of 30 seconds between 2 users]
Loop count : Forever
Scheduler : 1800 Seconds [As I want to run test for 30 minutes]
In the HTTP requests I have added 10 pages, and each request has a Constant Timer of 30000 milliseconds, as I need a delay of 30 seconds between 2 requests.
Now when I look at the Aggregate Report, it shows a Throughput of 3/min for each request.
Test Case 2 :
No. of Threads : 30
Ramp up time : 900 [As I want to put delay of 30 seconds between 2 users]
Loop count : Forever
Scheduler : 1800 Seconds [As I want to run test for 30 minutes]
In the HTTP requests I have added 10 requests/pages, and each request has a Constant Timer of 30000 milliseconds, as I need a delay of 30 seconds between 2 requests.
Now when I look at the Aggregate Report, it shows a Throughput of 6/min for each request.
I am confused as to how this is possible. If my users increased from 15 to 30 then there should be more load on the server and throughput should decrease, e.g. to 1/min or 2/min.
Please let me know what I am doing wrong here.
Throughput is the number of completions per unit time. (A completion can be an HTTP request, a DB request - in short, anything that needs to be executed and takes > 0 execution time.)
For example: requests per second, requests per minute, etc.
By JMeter's definition, throughput is calculated as total number of requests / total time.
In your first case, say the number of requests generated per sampler in 1800 seconds (with a 30-second delay before every request) by 15 users is x. Throughput is then x/30 per minute; 3/min means about 90 requests were generated per sampler (verify this from the Aggregate Report or another listener).
In your second case everything else is the same, but the number of users is doubled, which creates roughly double the number of requests in the same 1800 seconds.
So, applying the formula (number of requests generated / total time):
Throughput in the 2nd case = 2x/30 = 2 * throughput in the 1st case,
which is 6/min (correctly shown by JMeter).
The key here is to check the number of requests generated in both cases.
I hope this clears your confusion. Let me know if you need further clarification. BTW, "when I increase the number of users, the throughput increases instead of decreasing" is not always true.
Throughput increased by a factor of 2:
Test Case 1: 3 requests per minute - 1 request every 20 seconds
Test Case 2: 6 requests per minute - 1 request every 10 seconds
As per JMeter Glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
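Applied to this test, roughly and ignoring response times: one loop of 10 samplers, each preceded by a 30-second Constant Timer, takes about 300 seconds, so each thread hits a given sampler about once every 300 seconds. With 15 threads that is 15 / 300 s = 0.05/s, i.e. about 3/min per sampler; with 30 threads it is about 6/min. Doubling the users doubles the number of requests in the same 30 minutes, hence the doubled throughput.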
You may also be interested in the following plugins:
Server Hits Per Second
Transactions Per Second
or, alternatively, the Loadosophia.org service, which can convert your JMeter .jtl results files into an easy-to-understand professional load report

Total time taken by JMeter to execute the given load

I am performing load test with these parameters :
threads=4
ramp_up_period=90
loop_count=60
So according to the above numbers, my assumption is that each of the four threads will be started 22.5 seconds apart and this 4-thread cycle will be repeated 60 times.
Below was the summarised load test report (screenshot not reproduced here).
According to the JMeter manual, the ramp-up period is:
The ramp-up period tells JMeter how long to take to "ramp-up" to the full number of threads chosen. If 10 threads are used, and the ramp-up period is 100 seconds, then JMeter will take 100 seconds to get all 10 threads up and running. Each thread will start 10 (100/10) seconds after the previous thread was begun. If there are 30 threads and a ramp-up period of 120 seconds, then each successive thread will be delayed by 4 seconds.
So according to the above, my estimate of the approximate total time for executing the load test with the mentioned thread group parameters is:
TotalTime = ramp_up_period * loop_count
which in my case evaluates to 90 * 60 = 5400 seconds, but according to the summariser the Total Time comes out at 74 seconds.
JMeter version is 2.11.
Is there a problem in my understanding, or is there some issue with JMeter?
Initially JMeter will start 1 thread, which will begin doing whatever is under your Loop Controller. About 22.5 seconds later (90 seconds ramp-up / 4 threads) the second thread will join, 22.5 seconds after that the 3rd thread will start, and around the 67.5-second mark the 4th thread will start.
From that point on, all 4 threads will be doing "what is under your loop controller".
There is no way to determine up front how long that will take, especially under load. If you need a load test to last approximately N seconds you can use the Duration input under Scheduler in the Thread Group.
If you want to forcefully stop the test if certain conditions are met there are 2 more options:
Use Test Action Sampler
Use Beanshell Sampler
Example Beanshell code (assumed to run in a separate Thread Group, in an endless loop, with a reasonable delay between checks):
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

// TESTSTART.MS is set by JMeter at test start; "test_run_time" is assumed to be a user-defined property holding the allowed run time in ms
long teststart = Long.parseLong(props.get("TESTSTART.MS").toString());
long currenttime = System.currentTimeMillis();

if (currenttime - teststart > Long.parseLong(props.get("test_run_time").toString())) {
    try {
        // Send "StopTestNow" to JMeter's shutdown listener (UDP port 4445 by default, the same mechanism the stoptest scripts use)
        DatagramSocket socket = new DatagramSocket();
        byte[] buf = "StopTestNow".getBytes("ASCII");
        InetAddress address = InetAddress.getByName("localhost");
        DatagramPacket packet = new DatagramPacket(buf, buf.length, address, 4445);
        socket.send(packet);
        socket.close();
    } catch (Throwable ex) {
    }
}
TotalTime would only be that if you were working without concurrency: when working in a multi-threaded environment, thread 1 may already be performing its second call while thread 3 is still firing up.
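A rough back-of-the-envelope check, assuming each iteration completes in about 100 ms: the 4th thread starts around 67.5 seconds into the test and then needs only about 60 * 0.1 = 6 seconds for its 60 loops, which lands close to the 74 seconds the summariser reported. The loops run concurrently with the ramp-up; they are not serialised after it.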

JMeter - How to implement "N users fire up N different queries simultaneously" scenario

I have trouble implementing the following scenario and Google didn't help - maybe I am missing something obvious?
Scenario is :
Step 1. 9 sessions simultaneously running 3 different JDBC queries, i.e. 3*Q1, 3*Q2, 3*Q3, all starting and running at the same time.
Clarification: at the beginning of step 1, the following queries will start in 9 different sessions - Q1,Q1,Q1,Q2,Q2,Q2,Q3,Q3,Q3
Step 2. 27 sessions like above (9 of each query)
Step 3. 54 sessions (18 of each query)
Steps must run sequentially.
To do so:
Step 1)
3 thread groups, each one with 3 threads, each thread group calling a different Qi
Step 2)
3 thread groups, each one with 9 threads, each thread group calling a different Qi, with the scheduler delayed so that it starts after step 1 has finished
Step 3)
Same as step 2, with 18 threads and delayed so that it starts after step 2
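A sketch of how that could look as a test plan outline (the Qi JDBC samplers, group names and delays are placeholders to be filled in):
Test Plan
  Thread Group "Step1-Q1" (3 threads, start delay 0)
    JDBC Request Q1
  Thread Group "Step1-Q2" (3 threads, start delay 0)
    JDBC Request Q2
  Thread Group "Step1-Q3" (3 threads, start delay 0)
    JDBC Request Q3
  Thread Group "Step2-Q1" (9 threads, scheduler start delay > duration of step 1)
    JDBC Request Q1
  ... same pattern for Q2 and Q3, and again with 18 threads per group for step 3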
But I must say I don't understand why you need such behaviour
