I am performing a load test with these parameters:
threads=4
ramp_up_period=90
loop_count=60
So according to the above numbers, my assumption is that one of the four threads will be created every 22.5 seconds (90/4), and this 4-thread cycle will be repeated 60 times.
Below is the summarized load test report (screenshot not reproduced here).
According to the JMeter manual, the ramp-up period is:
The ramp-up period tells JMeter how long to take to "ramp-up" to the full number of threads chosen. If 10 threads are used, and the ramp-up period is 100 seconds, then JMeter will take 100 seconds to get all 10 threads up and running. Each thread will start 10 (100/10) seconds after the previous thread was begun. If there are 30 threads and a ramp-up period of 120 seconds, then each successive thread will be delayed by 4 seconds.
So according to the above, the approximate total time for executing the load test with the mentioned thread group parameters is:
TotalTime = ramp_up_period * loop_count
which in my case evaluates to 90 * 60 = 5400 seconds, but according to the summariser the total time comes out to 74 seconds.
JMeter version is 2.11.
Is there any problem in my understanding, or is there some issue with JMeter?
Initially JMeter will start 1 thread, which will be doing whatever is under your Loop Controller. After 22.5 seconds (90/4) the second thread will join, 22.5 seconds later the 3rd thread will start, and at around the 67.5-second mark the 4th and final thread will start.
From that point on, all 4 threads will be doing "what is under your loop controller".
There is no way to determine in advance how long that will take, especially under load. If you need a load test to last approximately N seconds you can use the Duration input under the Scheduler section of the Thread Group.
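For instance, a fixed-length run could be configured directly in the Thread Group (the values here are illustrative):

    Loop Count: Forever
    Scheduler: checked
    Duration (seconds): 300

so threads keep iterating until the 300-second limit is reached.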
If you want to forcefully stop the test when certain conditions are met, there are 2 more options:
Use Test Action Sampler
Use Beanshell Sampler
Example Beanshell code (assumed to run in a separate Thread Group in an endless loop with a reasonable delay between firing events; the test_start_time and test_run_time properties are expected to have been set elsewhere):
import java.net.*;

// "test_start_time" and "test_run_time" (both in milliseconds) are assumed
// to have been stored as JMeter properties earlier in the test
long currenttime = System.currentTimeMillis();
long teststart = Long.parseLong(props.get("test_start_time").toString());
if (currenttime - teststart > Long.parseLong(props.get("test_run_time").toString())) {
    try {
        // ask JMeter to stop by sending "StopTestNow" to its shutdown
        // listener (UDP port 4445 by default)
        DatagramSocket socket = new DatagramSocket();
        byte[] buf = "StopTestNow".getBytes("ASCII");
        InetAddress address = InetAddress.getByName("localhost");
        DatagramPacket packet = new DatagramPacket(buf, buf.length, address, 4445);
        socket.send(packet);
        socket.close();
    } catch (Throwable ex) {
        log.error("Failed to send StopTestNow message", ex);
    }
}
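One way to feed those properties (the names follow the snippet above; they are not built-in JMeter properties) is to record the start timestamp once, e.g. from a setUp Thread Group, and pass the allowed run time on the command line:

    props.put("test_start_time", String.valueOf(System.currentTimeMillis()));

    jmeter -n -t plan.jmx -Jtest_run_time=300000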
TotalTime would only be that if you were working without concurrency. When working in a multi-threaded environment it can happen that thread 1 is already performing its second call while thread 3 is still firing up.
I am doing a load test on my system using JMeter. The requirement is that I need to constantly generate 150 requests per minute for a duration of 20 minutes.
I tried the approaches below.
First, I gave this configuration:
No. of threads - 3000 [150 req/min * 20 mins]
Ramp-up period - 1200 sec [20 mins * 60]
But here the test stopped after creation of the 2004th thread, giving this error:
Failed to start the native thread for java.lang.Thread “Thread Group 1-2004”
Uncaught Exception java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached in thread Thread[#51,StandardJMeterEngine,6,main]. See log file for details
Then I used the Concurrency Thread Group with the below details:
Target concurrency - 150
ramp up time - 1 min
hold target rate time - 20 mins
But here the number of samples collected was more than 3000 [150 req/min * 20 mins], which I feel is not correct.
Is it possible to create the exact load according to my requirement (150 req/min for a duration of 20 minutes) in JMeter, or should I explore other tools like Locust?
I also tried with precision timers (screenshots not reproduced here).
Your understanding of the relationship between users and hits per second is not correct.
When a JMeter thread (virtual user) is started it begins executing Samplers as fast as it can. The throughput (number of requests per second) therefore depends mainly on the response time.
For example:
you have 1 user and 1 second response time - the load will be 1 request per second
you have 1 user and 2 seconds response time - the load will be 0.5 requests per second
you have 2 users and 2 seconds response time - the load will be 1 request per second
you have 4 users and 2 seconds response time - the load will be 2 requests per second
etc.
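A minimal sketch of that arithmetic (the class and method names are illustrative, not part of JMeter):

    public class ThroughputModel {
        // One virtual user fires its next request as soon as the previous response
        // arrives, so it completes 1/responseTime requests per second; N users
        // running in parallel complete N times that.
        static double expectedThroughput(int users, double responseTimeSeconds) {
            return users / responseTimeSeconds;
        }

        public static void main(String[] args) {
            System.out.println(expectedThroughput(1, 1.0)); // 1.0 request/second
            System.out.println(expectedThroughput(1, 2.0)); // 0.5 requests/second
            System.out.println(expectedThroughput(2, 2.0)); // 1.0 request/second
            System.out.println(expectedThroughput(4, 2.0)); // 2.0 requests/second
        }
    }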
If you want to slow JMeter down to the desired number of requests per minute, it can be done using Timers. For example:
Constant Throughput Timer
Precise Throughput Timer
Throughput Shaping Timer
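For the original 150 requests/minute goal, a minimal Constant Throughput Timer configuration could look like this (the values are specific to that scenario):

    Target throughput (in samples per minute): 150.0
    Calculate Throughput based on: all active threads

Note the timer can only slow threads down, never speed them up, so there must be enough threads (and low enough response times) for the target rate to be reachable.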
I am using an Ultimate Thread Group to log in 1000 users over a period of 30 minutes. Only after all the users have logged in do I want to execute further scenarios.
The way I thought about doing this was to start a global timer and delay each thread for 30 minutes - (current time - start time). E.g. the test starts at 9:00 and thread 1 completes login in 10 seconds, so it would be delayed for 30 minutes - (9:00:10 - 9:00:00), i.e. 29 minutes and 50 seconds. And, for example, if thread 500 starts at 9:15 and its login takes 45 seconds, then the delay for this thread would be 30 minutes - (9:15:45 - 9:00:00), i.e. 14 minutes and 15 seconds. In this way, after 30 minutes I'll have 1000 users all logged in, ready to execute the next steps. Does this make sense?
Is there a more elegant way of doing this, perhaps with built-in JMeter functionality?
You're using the wrong timer; the easier solution would be going for the Synchronizing Timer:
Add it as a child of the second sampler (or whatever is doing the real stuff after the login)
Set "Number of Simulated Users to Group by" to 1000
This way the ramp-up/login will happen according to the Ultimate Thread Group schedule; after that, JMeter will wait until there are 1000 active threads at the location of the Synchronizing Timer, and once there are 1000 users they will all be released at exactly the same moment.
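A minimal sketch of the resulting test plan layout (sampler names are placeholders):

    Ultimate Thread Group (1000 users ramped up over 30 minutes)
        Login request
        First post-login request
            Synchronizing Timer (Number of Simulated Users to Group by: 1000)
        ...further samplers...

Because the timer is scoped to the post-login sampler only, the logins trickle in on the thread group's schedule while the barrier holds every thread at the next step.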
More information: Using the JMeter Synchronizing Timer
Can you help me explain the relationship between the time in the Runtime Controller and the Ramp-Up Period value of the Thread Group?
I tested
Number of Thread: 1
Ramp_Up Period: 1
Loop count: 1
Runtime Controller: 5s
->Elapsed time of current running test: 5s
But with this case:
Number of Thread: 5
Ramp_Up Period: 5
Loop count: 1
Runtime Controller: 5s
->Elapsed time of current running test: 10s
I don't understand why it becomes 10s. Could you explain in more detail?
Ramp-up is the time to start all threads; the Runtime Controller limits how long each thread executes its children.
In your case, a ramp-up of 5 seconds means the last thread starts at around the 5-second mark. That last thread then enters the Runtime Controller, which runs for 5 seconds of execution. Thus roughly 10 seconds is the maximum length of your run.
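Spelling the arithmetic out (start times are approximate; JMeter spaces thread starts by ramp-up / number of threads = 5 / 5 = 1 second):

    thread 1: starts at ~0s, its Runtime Controller ends at ~5s
    thread 2: starts at ~1s, ends at ~6s
    ...
    thread 5: starts at ~4-5s, ends at ~9-10s

so the elapsed time is governed by the last thread and lands around 10 seconds.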
Runtime Controller acts according to JMeter Scoping Rules so it defines how long its children are allowed to run.
Normally you should be using it in conjunction with Loop Count = Forever or -1 on either Thread Group or Loop Controller level.
So
if you want the whole test to run for 5 seconds - use "Scheduler" section of the Thread Group
if you want only certain sampler(s) to run for 5 seconds - put them under the Runtime Controller, however the whole test duration will depend on when the last sampler enters the Runtime Controller
Also be aware that JMeter "asks" threads to stop so it might take some extra time to let them gracefully shut down.
Scenario:
a. Ultimate Thread Group: Thread count: 100, Startup time: 60, Hold load: 300
b. There are 10 HTTP(S) requests in the script and each has a 1-second Constant Timer, so the total constant timer value = 10 seconds.
In the above scenario, will the hold time become 300 + (100 * 10), 300 + 10, 300 - (100 * 10), or 300 - 10?
Your timers on samplers don't have anything to do with your total test time, so in your example it will simply be 60 + 300 seconds.
When a thread finishes its 10 requests, it will start over. So once your test is ramped up, each thread will execute them roughly 300 / 10 = 30 times (ignoring server response time). If you increased your timers, the 10 requests would take longer to complete, so fewer iterations of them would be done - but it wouldn't change your duration.
Timers and hold time work independently; they are not related.
In your example:
The test will start loading threads as it begins, and by the end of 60 seconds all 100 threads will be up.
Individual thread execution depends on the response to each request sent to the server (in your case 10 requests per thread); the Constant Timer will wait 1 second before sending the next request of the same thread.
So the hold time ensures the same 100-user (thread) load on the server for the specified period: as soon as a thread completes its execution cycle (all 10 requests), it starts another iteration to maintain the same load during the hold time.
The test will complete in 60 + 300 = 360 seconds.
I have run a load test for a website, but when I increased the number of users, I saw the throughput increase instead of decrease.
Test Case 1 :
No. of Threads : 15
Ramp up time : 450 [As I want to put delay of 30 seconds between 2 users]
Loop count : Forever
Scheduler : 1800 Seconds [As I want to run test for 30 minutes]
In HTTP requests I have added 10 pages, and each request has a Constant Timer with 30000 milliseconds as I need to put a delay of 30 seconds between 2 requests.
Now when I look at the Aggregate Report, it shows me a Throughput of 3/min for each request.
Test Case 2 :
No. of Threads : 30
Ramp up time : 900 [As I want to put delay of 30 seconds between 2 users]
Loop count : Forever
Scheduler : 1800 Seconds [As I want to run test for 30 minutes]
In HTTP requests I have added 10 requests/pages, and each request has a Constant Timer with 30000 milliseconds as I need to put a delay of 30 seconds between 2 requests.
Now when I look at the Aggregate Report, it shows me a Throughput of 6/min for each request.
I am confused about how this is possible. If my users increased from 15 to 30, then there should be more load on the server and throughput should decrease, to something like 1/min or 2/min.
Please let me know what I am doing wrong here.
Throughput is the number of completions per unit time (a completion can be an HTTP request, a DB request - in short, anything that needs to be executed and takes > 0 execution time), e.g. requests per second or requests per minute.
By JMeter's definition, throughput is calculated as total number of requests / total time.
In your first case, say the number of requests generated in 1800 seconds (30 minutes) by 15 users with a 30-second delay before every request is x. The throughput is then x/30 per minute, i.e. 3/min, which means x ≈ 90 requests were generated (verify this from the Aggregate Report or another reporter).
In your second case everything else is the same, but the number of users is doubled, which creates roughly double the number of requests in the same time (1800 seconds).
Thus, according to the formula (number of requests generated / total time):
Throughput in the 2nd case = 2x/30 = 2 * throughput in the 1st case,
which is 6/min (correctly shown by JMeter).
The key here is to check the number of requests generated in both cases.
I hope this clears your confusion; let me know if you need further clarification. BTW, "when I have increased the no. of users, I can see throughput seems to be increasing instead of decreasing" is not always true.
Throughput increased by a factor of 2:
Test Case 1: 3 requests per minute - 1 request every 20 seconds
Test Case 2: 6 requests per minute - 1 request every 10 seconds
As per JMeter Glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
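Plugging this question's numbers into that formula (the request counts are the estimates from the earlier answer):

    Test Case 1: ~90 requests / 30 minutes = 3/min
    Test Case 2: ~180 requests / 30 minutes = 6/min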
You may also be interested in the following plugins:
Server Hits Per Second
Transactions Per Second
or, alternatively, the Loadosophia.org service, which can convert your JMeter .jtl results files into easy-to-understand professional load reports.