I need to test if our system can perform N requests per second.
Technically, it's 2 requests to one API, 2 requests to another, and 6 requests to a third one.
But the important thing is that they should happen simultaneously - so 10 requests per second in total.
So, in JMeter I've created three Thread Groups: the first defines a number of threads of 1 and a ramp-up time of 0.
The second thread group is the same, and the third thread group defines a number of threads of 6 and a ramp-up time of 0.
But that doesn't really guarantee they will all be run within one second.
How do I emulate that? And how do I see the results -- whether the system was able to keep up or not?
Thanks!
You could use the Constant Throughput Timer.
Quote from JMeter help files below:
18.6.4 Constant Throughput Timer
This timer introduces variable pauses, calculated to keep the total throughput (in terms of samples per minute) as close as possible to a given figure. Of course the throughput will be lower if the server is not capable of handling it, or if other timers or time-consuming test elements prevent it.
N.B. although the Timer is called the Constant Throughput timer, the throughput value does not need to be constant. It can be defined in terms of a variable or function call, and the value can be changed during a test.
For example I've used it to generate 40 requests per second:
<ConstantThroughputTimer guiclass="TestBeanGUI" testclass="ConstantThroughputTimer" testname="Constant Throughput Timer" enabled="true">
  <stringProp name="calcMode">all active threads in current thread group</stringProp>
  <doubleProp>
    <name>throughput</name>
    <value>2400.0</value>
    <savedValue>0.0</savedValue>
  </doubleProp>
</ConstantThroughputTimer>
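Note that the timer's throughput value is expressed in samples per minute, so the 2400.0 above corresponds to the 40-requests-per-second target. A one-line sanity check of that conversion (Python, just arithmetic):

# The Constant Throughput Timer expects its target in samples per MINUTE,
# so a per-second target has to be multiplied by 60.
target_rps = 40
throughput_per_minute = target_rps * 60.0
print(throughput_per_minute)   # 2400.0 -- the value used in the XML above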
And that's the summary:
Created the tree successfully using performance/search-performance.jmx
Starting the test # Tue Mar 15 16:28:39 CET 2011 (1300202919244)
Waiting for possible shutdown message on port 4445
Generate Summary Results + 3247 in 80,3s = 40,4/s Avg: 18 Min: 0 Max: 1328 Err: 108 (3,33%)
Generate Summary Results + 7199 in 180,0s = 40,0/s Avg: 15 Min: 1 Max: 2071 Err: 378 (5,25%)
Generate Summary Results = 10446 in 260,3s = 40,1/s Avg: 16 Min: 0 Max: 2071 Err: 486 (4,65%)
Generate Summary Results + 7200 in 180,0s = 40,0/s Avg: 14 Min: 0 Max: 152 Err: 399 (5,54%)
Generate Summary Results = 17646 in 440,4s = 40,1/s Avg: 15 Min: 0 Max: 2071 Err: 885 (5,02%)
Generate Summary Results + 7199 in 180,0s = 40,0/s Avg: 14 Min: 0 Max: 1797 Err: 436 (6,06%)
Generate Summary Results = 24845 in 620,4s = 40,0/s Avg: 15 Min: 0 Max: 2071 Err: 1321 (5,32%)
But I ran this test inside my own network.
As with any network test, there are always going to be problems, especially with latency: even if you could send exactly 6 requests per second, they're going to be sent sequentially (that's just how packets get sent) and may not all hit within that second, and there's processing time on top.
Generally, when performance metrics specify x per second, it's measured over a period of time. Your API may even have a buffer, so you could technically send 6 per second but process only 5 per second with a buffer of 20, meaning it'd be fine for 20 seconds of traffic: you'd have sent 120 requests, which would take 120/5 = 24 seconds to process. Anything more than that would overflow the buffer, so just sending exactly 6 in one second is not a sufficient test.
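Just to make that buffer arithmetic concrete, here's a minimal sketch (Python, using the hypothetical numbers from the paragraph above - they aren't measurements of any real API):

# Hypothetical numbers from the paragraph above: 6 req/s arrive, 5 req/s
# get processed, and at most 20 requests can sit in the buffer.
send_rate = 6          # requests sent per second
process_rate = 5       # requests processed per second
buffer_size = 20       # maximum queued (unprocessed) requests

backlog = 0
for second in range(1, 61):
    backlog += send_rate - process_rate    # the backlog grows by 1 every second
    if backlog > buffer_size:
        print(f"Buffer overflows after {second} seconds of sustained traffic")
        break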
In the thread group, you're right to set the number of threads (users) to 6. Then run it looping forever (tick the box or put it in a While Controller) and add listeners such as Aggregate Report and View Results Tree. The results tree lets you check that the right requests are being sent and responded to (assuming you validate the responses), and in the Aggregate Report you can see how much of each activity is happening per hour (divide by 3600 to get the per-second rate; because of this inaccuracy it's best to run the test for a good length of time).
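That per-hour to per-second conversion is just a division; a trivial sketch (the 21600 figure is only an illustration, not a number from this test):

# The Aggregate Report rate can be read per hour; divide by 3600 for per-second.
per_hour = 21600          # illustrative hourly throughput, not a measured value
per_second = per_hour / 3600
print(per_second)         # 6.0 requests per second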
The initial load test can now be run, and as a more accurate test, you can leave it running for longer (soak test) to see if any other problems surface - buffer overflows, memory leaks, or other unexpected events.
Use the Throughput Shaping Timer
I had a similar problem, and here are two solutions I found:
Solution 1:
You can use the Stepping Thread Group (it allows you to define stages in which the number of threads increases over set periods of time) with a Constant Throughput Timer in it.
The Constant Throughput Timer lets you set the number of samples a thread can send per minute (e.g. if you set it to 1, the thread will only send one request per minute). You can also apply the timer to all threads in your Thread Group, or give each thread its own timer with its own settings.
Read more about Throughput Timer here: https://www.blazemeter.com/blog/how-use-jmeters-throughput-constant-timer
Solution 2:
Use "SetUp Thread Group". You can calculate thread number and rump up time to get Threads per Second desired.
You can use the Schedule Feedback Function; you will also need the Concurrency Thread Group.
The same can also be done from the UI by adding a "Constant Throughput Timer" as suggested above: right-click the Thread Group, choose Add > Timer, and then choose "Constant Throughput Timer".
Related
I am doing a load test on my system using JMeter. The requirement is to generate 150 requests per minute constantly for a duration of 20 minutes.
I tried the below approaches.
First, I tried this configuration:
No. of threads - 3000 [150 req/min * 20 mins]
Ramp-up period - 1200 sec [20 mins * 60]
But here the test stopped after the creation of 2004 threads, giving this error:
Failed to start the native thread for java.lang.Thread “Thread Group 1-2004”
Uncaught Exception java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached in thread Thread[#51,StandardJMeterEngine,6,main]. See log file for details
Then I used the Concurrency Thread Group with the below details:
Target concurrency - 150
ramp up time - 1 min
hold target rate time - 20 mins
But here the number of samples collected is more than 3000 [150 req/min * 20 min], which I feel is not correct.
Is it possible to create the exact load according to my requirement in JMeter (150 req/min for a duration of 20 mins), or should I explore other tools like Locust?
I also tried with precision timers (screenshots attached).
Your understanding of the relationship between users and hits per second is not correct.
When a JMeter thread (virtual user) is started, it begins executing Samplers as fast as it can. The throughput (number of requests per second) therefore mainly depends on the response time; a small sketch of this arithmetic follows the examples below.
For example:
you have 1 user and 1 second response time - the load will be 1 request per second
you have 1 user and 2 seconds response time - the load will be 0.5 requests per second
you have 2 users and 2 seconds response time - the load will be 1 request per second
you have 4 users and 2 seconds response time - the load will be 2 requests per second
etc.
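The same arithmetic as a tiny sketch (Python), reproducing the numbers from the list above (and ignoring any timers):

def requests_per_second(threads, response_time_s):
    # Each thread fires its next request as soon as the previous one finishes,
    # so its individual rate is 1 / response_time, multiplied by the thread count.
    return threads / response_time_s

print(requests_per_second(1, 1))   # 1.0 request per second
print(requests_per_second(1, 2))   # 0.5 requests per second
print(requests_per_second(2, 2))   # 1.0 request per second
print(requests_per_second(4, 2))   # 2.0 requests per second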
If you want to slow JMeter down to the desired number of requests per minute, it can be done using Timers.
For example:
Constant Throughput Timer
Precise Throughput Timer
Throughput Shaping Timer
I want to simulate 100 rps for the application I am working on. I am planning to use the Concurrency Thread Group and the Throughput Shaping Timer. I have created a sample example to test how it works. Below is my script.
I have added this line to the log4j2.xml file:
<Logger name="kg.apc.jmeter.timers.VariableThroughputTimer" level="debug" />
jmeter.log has the below logs:
2021-07-21 14:11:22,402 INFO c.b.j.c.VirtualUserController: Need to decrease concurrency, thread is done: bzm - Concurrency Thread Group-ThreadStarter 1-217
2021-07-21 14:11:22,402 INFO o.a.j.t.JMeterThread: Thread is done: bzm - Concurrency Thread Group-ThreadStarter 1-217
2021-07-21 14:11:22,402 INFO o.a.j.t.JMeterThread: Thread finished: bzm - Concurrency Thread Group-ThreadStarter 1-217
2021-07-21 14:11:22,407 DEBUG k.a.j.t.VariableThroughputTimer: Calculating 407 380.0 38
2021-07-21 14:11:22,427 INFO c.b.j.c.VirtualUserController: Need to decrease concurrency, thread is done: bzm - Concurrency Thread Group-ThreadStarter 1-218
2021-07-21 14:11:22,427 INFO o.a.j.t.JMeterThread: Thread is done: bzm - Concurrency Thread Group-ThreadStarter 1-218
2021-07-21 14:11:22,427 INFO o.a.j.t.JMeterThread: Thread finished: bzm - Concurrency Thread Group-ThreadStarter 1-218
........
........
2021-07-21 14:11:23,007 DEBUG k.a.j.t.VariableThroughputTimer: Second changed 60.0 , waiting: 0, samples sent 94, current rps: 100.0 rps
2021-07-21 14:11:23,007 WARN k.a.j.t.VariableThroughputTimer: No free threads available in current Thread Group bzm - Concurrency Thread Group, made 94 samples/s for expected rps 100.0 samples/s, increase your number of threads
2021-07-21 14:11:23,007 DEBUG k.a.j.t.VariableThroughputTimer: Calculating 7 0.0 0
My questions are:
Q1. Have I configured my test correctly to simulate a throughput of 100 rps, or am I missing something?
Q2. How do I calculate in advance how many users I need to add as the Target Concurrency? If I go with the formula
(rps * Maximum response time) / 1000
then do I need to add up the maximum response times of all the samplers from 1 to 6, or how should it be done?
Q3. How do we calculate the throughput? (Refer to the 3rd image with the Aggregate Report.)
Is the total throughput = the sum of the throughput of samplers 1 to 6, i.e. (15.8 + 15.8 + 15.8 + 15.7 + 15.6 + 15.6) = 94.3 rps? Is my calculation correct?
Q4. In the jmeter.log, it says "Need to decrease concurrency, thread is done: bzm - Concurrency Thread Group-ThreadStarter 1-217".
Does that mean the number of threads (users) needed to simulate 100 rps is higher than required and hence JMeter needs to decrease the threads (users)?
Then again in the logs, it says, "No free threads available in current Thread Group bzm - Concurrency Thread Group, made 94 samples/s for expected rps 100.0 samples/s, increase your number of threads"
Is it asking me (the user) to increase the threads, or is it just JMeter talking to itself? JMeter already has 150 threads to use. I actually started with 50 and received the message to increase the number of threads; then I increased the threads to 100 and got the same message, and finally I increased it to 150 and am still getting that message in the logs.
As you can see from the above image, at the 51st second JMeter was using only 29 threads (users) out of 150, which means it still had 121 threads left to use. Also, I observed that when I started the script, 150 threads were immediately in use, but then they started rapidly decreasing and increasing. However, they were never at 150 again during the 60-second run (150 threads were used only at the start, for a fraction of a second, and then got reduced!).
Then why was there a message in the logs to increase the users, when in fact there are users available which JMeter can use?
You only forgot to ask the "Q5": the Ultimate Question of Life, the Universe, and Everything
Q1. We don't know; it depends on the application response time. Your 150 threads may or may not be sufficient to produce a load of 100 requests per second.
Q2. Go for the largest response time you ever expect to see. According to the JMeter threads model, each thread waits for the previous request to finish before starting the next one, so the whole sequence of 6 samplers will act at the speed of the slowest one.
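As a sketch of that sizing formula from the question, (rps * maximum response time in ms) / 1000, with purely illustrative response times (Python; the numbers are not measurements from this test):

import math

target_rps = 100
# Illustrative response times in ms for the 6 samplers -- not measured values.
sampler_response_times_ms = [120, 250, 90, 400, 310, 180]

# Size for the largest response time you expect to see, per the point above.
max_rt_ms = max(sampler_response_times_ms)
threads_needed = math.ceil(target_rps * max_rt_ms / 1000)
print(threads_needed)   # 40 threads for a 400 ms worst case at 100 rps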
Q3. The Throughput Shaping Timer tries to reach and maintain the defined throughput for all the samplers in its scope, so if you have 6 requests in scope the throughput for each individual request would be 100 / 6, i.e. roughly 16.7 rps.
Q4. "Need to decrease concurrency" - the timer is "talking to itself", noting that it is going too fast and hence needs to shut down a couple of threads to slow down the request rate. "Increase your number of threads" is addressed to you, but I don't think it's applicable in your case; if you run the real test and see tons of messages like this in the log, it will indicate that the current number of threads is not sufficient to reach the target throughput.
Don't run your test in GUI mode; it's only for test development and debugging. When it comes to execution, you should be using command-line non-GUI mode.
According to JMeter Best Practices you should always be using the latest version of JMeter, so consider upgrading to JMeter 5.4.1 or whatever is the latest stable version available at the JMeter Downloads page.
I am still confused by some of the JMeter logs displayed here. Can someone please shed some light on this?
Below is a log generated by JMeter for my tests.
Waiting for possible Shutdown/StopTestNow/Heapdump message on port 4445
summary + 1 in 00:00:02 = 0.5/s Avg: 1631 Min: 1631 Max: 1631 Err: 0 (0.00%) Active: 2 Started: 2 Finished: 0
summary + 218 in 00:00:25 = 8.6/s Avg: 816 Min: 141 Max: 1882 Err: 1 (0.46%) Active: 10 Started: 27 Finished: 17
summary = 219 in 00:00:27 = 8.1/s Avg: 820 Min: 141 Max: 1882 Err: 1 (0.46%)
summary + 81 in 00:00:15 = 5.4/s Avg: 998 Min: 201 Max: 2096 Err: 1 (1.23%) Active: 0 Started: 30 Finished: 30
summary = 300 in 00:00:42 = 7.1/s Avg: 868 Min: 141 Max: 2096 Err: 2 (0.67%)
Tidying up ... # Fri Jun 09 04:19:15 IDT 2017 (1496971155116)
Does this log mean that [in the last step] 300 requests were fired, the whole test took 00:00:42, and 7.1 threads/sec or 7.1 requests/sec were fired?
How can I increase the TPS? The same tests were run from a different site and they are getting 132 TPS for the same tests on the same server. Can someone shed some light on this?
Here, the total number of requests is 300 and the throughput is about 7 requests per second. These 300 requests were generated by the number of threads given in your Thread Group configuration. You can also see the number of active threads in the log results; these threads become active depending on your ramp-up time.
Ramp-up time is the speed at which users or threads arrive on your application.
Check this for an example: How should I calculate Ramp-up time in Jmeter
You can give enough duration in your script and also check "Loop Count: Forever", so that all of the threads keep hitting those requests on your application server until the test finishes.
When all the threads become active, they will all be hitting those requests on the server at the same time.
To increase the TPS, you have to increase the number of threads, because those threads are what hit your desired requests on the server.
It also depends on the response time of your requests.
Suppose,
If you have 500 virtual users and application response time is 1 second - you will have 500 RPS
If you have 500 virtual users and application response time is 2 seconds - you will have 250 RPS
If you have 500 virtual users and application response time is 500 ms - you will have 1000 RPS.
First of all, a little theory:
You have Sampler(s) which should mimic real user actions
You have Threads (virtual users) defined under Thread Group which mimic real users
JMeter starts the threads, which execute the samplers as fast as they can and generate a certain number of requests per second. This "requests per second" value depends on 2 factors:
number of virtual users
your application response time
The JMeter Summarizer doesn't tell the full story; I would recommend generating the HTML Reporting Dashboard from the .jtl results file. It provides much more comprehensive load test result data which is easier to analyze by looking at tables and charts, and it can be done as simply as:
jmeter -g /path/to/testresult.jtl -o /path/to/dashboard/output/folder
Looking at the current results, you achieved a maximum throughput of 7.1 requests per second with an average response time of 868 milliseconds.
So in order to have more "requests per second" you need to increase the number of "virtual users". If you increase the number of virtual users and the "requests per second" figure does not increase, it means you have identified the so-called saturation point and your application is not capable of handling more load.
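As a very rough sizing sketch using the figures above (7.1 req/s at ~868 ms average) and the 132 TPS mentioned in the question, and assuming the response time stays flat as the load grows - which is a big assumption, so treat it as an estimate only (Python):

current_rps = 7.1
avg_response_time_s = 0.868    # 868 ms average from the summary above
target_rps = 132               # the figure quoted from the other site

# Rough concurrency estimate, assuming response time does NOT degrade under load.
current_busy_users = current_rps * avg_response_time_s
required_users = target_rps * avg_response_time_s
print(round(current_busy_users, 1))   # ~6.2 users effectively kept busy now
print(round(required_users))          # ~115 users needed to sustain 132 req/s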
Scenario:
a. Ultimate Thread Group: thread count: 100, startup time: 60, hold load: 300
b. There are 10 HTTP(S) requests in the script and each has a 1-second Constant Timer, so the total constant timer value = 10 seconds.
In the above scenario, will the hold time become 300 + (100 * 10), 300 + (10), 300 - (100 * 10), or 300 - (10)?
Your timers on samplers don't have anything to do with your total test time. So in your above example, it will simply be 60+300 seconds.
When a thread finishes its 10 requests, it will start again. So once your test is ramped up, each thread will execute them 30 times. If you increased your timers, the 10 requests would take longer to complete, so fewer iterations of them would be done - but it wouldn't change your duration.
Timers and hold time work independently; they are not related.
In your example:
The test will start loading threads as it begins, and by the end of 60 seconds all 100 threads will be up.
Individual thread execution depends on the response to each request sent to the server (in your case, 10 requests per thread), and the constant timer will wait for 1 second before sending the next request of the same thread to the server.
So, the hold time ensures the same load of 100 users (threads) on the server for the specified period. As and when one thread completes its execution cycle (all 10 requests), another one is started to maintain the same load during the test time specified as the hold time.
The test will be completed in 300 + 60 = 360 seconds.
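A sketch of that timing arithmetic (Python), using the numbers from the scenario and ignoring server response times for simplicity:

threads = 100
startup_time_s = 60            # Ultimate Thread Group startup (ramp-up) time
hold_load_s = 300              # Ultimate Thread Group hold time
requests_per_iteration = 10    # HTTP(S) requests in the script
constant_timer_s = 1           # Constant Timer on every request

total_test_time_s = startup_time_s + hold_load_s
# Ignoring response times, one full iteration costs about 10 x 1 s of think time.
iteration_time_s = requests_per_iteration * constant_timer_s
iterations_per_thread = hold_load_s // iteration_time_s
total_samples = threads * iterations_per_thread * requests_per_iteration

print(total_test_time_s)       # 360 -- the timers do not stretch the test duration
print(iterations_per_thread)   # 30 iterations per thread during the hold period
print(total_samples)           # ~30000 samples, a rough upper bound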
I have run a load test for a website, but when I increased the number of users, I can see the throughput seems to be increasing instead of decreasing.
Test Case 1 :
No. of Threads : 15
Ramp up time : 450 [As I want to put delay of 30 seconds between 2 users]
Loop count : Forever
Scheduler : 1800 Seconds [As I want to run test for 30 minutes]
For the HTTP requests I have added 10 pages, and each request has a Constant Timer of 30000 milliseconds, as I need to put a delay of 30 seconds between 2 requests.
Now when I look at the result in the Aggregate Report, it shows me a throughput of 3/min for each request.
Test Case 2 :
No. of Threads : 30
Ramp up time : 900 [As I want to put delay of 30 seconds between 2 users]
Loop count : Forever
Scheduler : 1800 Seconds [As I want to run test for 30 minutes]
For the HTTP requests I have added 10 requests/pages, and each request has a Constant Timer of 30000 milliseconds, as I need to put a delay of 30 seconds between 2 requests.
Now when I look at the result in the Aggregate Report, it shows me a throughput of 6/min for each request.
I am confused: how is this possible? If my users increased from 15 to 30, then there should be more load on the server and the throughput should decrease, to something like 1/min or 2/min.
Please let me know what I am doing wrong here.
Throughput is the number of completions per unit time. (A completion can be an HTTP request, a DB request - in short, anything that needs to be executed and takes > 0 execution time.)
E.g. requests per second or requests per minute.
By definition of throughput in JMeter, it is calculated as total no. of requests/total time.
In your first case, the number of requests generated by 15 users in 1800 seconds, with a 30-second delay on every request, is x. Thus the throughput is x/30 per minute, i.e. 3/min, which means ~90 requests per sampler were generated (verify this from the Aggregate Report or another reporter).
In your second case, everything else is the same but the number of users is doubled, which creates ~double the number of requests in the same time (1800 seconds).
Thus, according to the formula throughput = number of requests generated / total time,
throughput in the 2nd case = 2x/30 = 2 * throughput in the 1st case,
which is 6/min (correctly shown by JMeter).
The key here is to check the number of requests generated in both cases.
I hope this clears up your confusion. Let me know if you need further clarification. BTW, "when I have increased the number of users, I can see the throughput seems to be increasing instead of decreasing" is not always true.
The throughput increased by a factor of 2:
Test Case 1: 3 requests per minute, i.e. 1 request every 20 seconds
Test Case 2: 6 requests per minute, i.e. 1 request every 10 seconds
As per JMeter Glossary:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
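Plugging the two test cases into that formula (Python; response times are ignored since they are small compared to the 30-second timers):

def per_request_throughput_per_min(threads, requests_per_loop, timer_s):
    # With a constant timer on every request, one full loop of the script
    # takes roughly requests_per_loop * timer_s per thread, and each individual
    # request is hit once per loop per thread.
    loop_time_s = requests_per_loop * timer_s
    return threads * 60 / loop_time_s

print(per_request_throughput_per_min(15, 10, 30))   # 3.0 per minute -- Test Case 1
print(per_request_throughput_per_min(30, 10, 30))   # 6.0 per minute -- Test Case 2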
You may also be interested in the following plugins:
Server Hits Per Second
Transactions Per Second
or alternatively the Loadosophia.org service, which can convert your JMeter .jtl results files into an easy-to-understand professional load report.