I am doing a load test on my system using JMeter. The requirement is to generate 150 requests per minute constantly for a duration of 20 minutes.
I tried the approaches below.
First, I used this configuration:
Number of threads: 3000 [150 req/min * 20 mins]
Ramp-up period: 1200 sec [20 mins * 60]
But the test stopped after creating 2004 threads, giving this error:
Failed to start the native thread for java.lang.Thread “Thread Group 1-2004”
Uncaught Exception java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached in thread Thread[#51,StandardJMeterEngine,6,main]. See log file for details
Second, I used the Concurrency Thread Group with these details:
Target Concurrency: 150
Ramp Up Time: 1 min
Hold Target Rate Time: 20 mins
But here the number of samples collected was more than 3000 [150 req/min * 20 mins], which I feel is not correct.
Is it possible to create the exact load according to my requirement in JMeter (150 req/min for a duration of 20 mins), or should I explore other tools like Locust?
I also tried with the Precise Throughput Timer (screenshots attached).
Your understanding of the relationship between users and hits per second is not correct.
When a JMeter thread (virtual user) is started, it begins executing Samplers as fast as it can. The throughput (number of requests per second) therefore mainly depends on the response time.
For example:
you have 1 user and 1 second response time - the load will be 1 request per second
you have 1 user and 2 seconds response time - the load will be 0.5 requests per second
you have 2 users and 2 seconds response time - the load will be 1 request per second
you have 4 users and 2 seconds response time - the load will be 2 requests per second
etc.
If you want to slow JMeter down to the desired number of requests per minute, it can be done using Timers.
For example:
Constant Throughput Timer
Precise Throughput Timer
Throughput Shaping Timer
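For instance, to cap the load from the question at 150 requests per minute (2.5 requests per second), a Constant Throughput Timer can be placed under the Thread Group with a target of 150 samples per minute. Below is a minimal sketch of the JMX fragment, modelled on the stock Constant Throughput Timer element; treat it as illustrative rather than a drop-in test plan, and remember you still need enough threads (a handful is plenty for ~1-second responses) and a 1200-second duration to hold the rate for 20 minutes.

<ConstantThroughputTimer guiclass="TestBeanGUI" testclass="ConstantThroughputTimer" testname="Constant Throughput Timer" enabled="true">
  <!-- spread the target across every thread in the group -->
  <stringProp name="calcMode">all active threads in current thread group</stringProp>
  <doubleProp>
    <!-- the throughput figure is expressed in samples per MINUTE -->
    <name>throughput</name>
    <value>150.0</value>
    <savedValue>0.0</savedValue>
  </doubleProp>
</ConstantThroughputTimer>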
I have an API that returns student data when I send a student ID to it. Now I want to test my API with 1000 users hitting it every second.
My laptop configuration: Core i5, 8 GB RAM.
My JMeter completes the test, but the threads show errors:
summary + 2642 in 00:00:30 = 88.0/s Avg: 938 Min: 59 Max: 130375 Err: 3 (0.11%) Active: 993 Started: 1000 Finished: 7
Generate Summary Results = 14824 in 00:02:54 = 85.3/s Avg: 1699 Min: 59 Max: 130375 Err: 54 (0.36%)
summary = 14824 in 00:02:54 = 85.3/s Avg: 1699 Min: 59 Max: 130375 Err: 54 (0.36%)
Generate Summary Results + 2636 in 00:00:30 = 87.9/s Avg: 613 Min: 59 Max: 15489 Err: 2 (0.08%) Active: 977 Started: 1000 Finished: 23
Generate Summary Results = 17460 in 00:03:24 = 85.7/s Avg: 1535 Min: 59 Max: 130375 Err: 56 (0.32%)
summary + 2636 in 00:00:30 = 87.9/s Avg: 614 Min: 59 Max: 15489 Err: 2 (0.08%) Active: 977 Started: 1000 Finished: 23
summary = 17460 in 00:03:24 = 85.7/s Avg: 1535 Min: 59 Max: 130375 Err: 56 (0.32%)
...
summary = 17460 in 00:010:24 = 123.7/s Avg: 5535 Min: 59 Max: 130375 Err: 723 (70.3%)
By the end, around 723 requests had failed.
My API does return responses: if I run the test with 100 users, the test is successful, but when I run it with 1000 users, most of the threads fail or remain active after completion.
JMeter's default configuration is not suitable for high loads; you need to tune it in order to be able to kick off 1000 threads.
Make sure to use the latest JMeter version and a 64-bit version of the Server JRE or JDK.
Increase the JVM heap size allocated to JMeter to ~6 gigabytes (see the sketch below)
Disable (or delete) listeners in Test Plan (if any)
Make sure to monitor CPU and RAM usage on the machine where JMeter is running during the test; you can use the JMeter PerfMon plugin for that. JMeter must have enough headroom to operate: if it lacks RAM or CPU, it will not be able to send requests fast enough. If you see that the JMeter machine is overloaded, you will have to consider distributed testing.
Hardware requirements vary greatly depending on the nature of your test, i.e. the number of pre/post-processors and assertions, request and response sizes, application response time, etc., so there is no mapping like X hardware = Y virtual users; you will need to assess it for each test plan. Remember to increase the load gradually and keep an eye on the health metrics.
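As a concrete illustration of the heap point above, the JMeter startup scripts (bin/jmeter on Linux/macOS, bin/jmeter.bat on Windows) define a HEAP variable that you can raise. A sketch, assuming the machine has that much free RAM (the exact default line differs between JMeter versions):

# in apache-jmeter/bin/jmeter (use the equivalent "set HEAP=..." line in jmeter.bat)
HEAP="-Xms6g -Xmx6g -XX:MaxMetaspaceSize=256m"

Restart JMeter after the change so the new heap size takes effect.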
Let's calculate it using simple math. At most, each HTTP request thread will take approximately 1 MB (apart from the payload).
1 MB * 1000 = 1000 MB = 1 GB of heap is needed, at a minimum, for 1000 concurrent users.
Add roughly 500 MB for additional tasks like listeners and the Aggregate Report.
In total, at least 1500 MB is required to run this test. Configure JMeter to use that much memory in the /apache-jmeter/bin/jmeter file, for example as shown below.
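A sketch of the corresponding HEAP line in that file (the variable name is taken from the stock startup script; adjust the figures to your own calculation):

HEAP="-Xms1500m -Xmx1500m"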
Multithreading also depends on CPU capability, so try to use a multi-core CPU.
What is a thread on a CPU?
In computer architecture, multithreading is the ability of a central processing unit (CPU) (or a single core in a multi-core processor) to execute multiple processes or threads concurrently, supported by the operating system.
For my JMeter script, when I use:
Number of Threads = 10
Ramp-up Period = 40
Loop Count = 1
Then 6 out of 40 samples failed.
When I increase the Ramp-up Period to 60, all the samples pass.
For the failed requests, the response code returned is 522:
Sampler result
Thread Name: Liberty Insight 1-4
Sample Start: 2018-02-23 20:43:12 IST
Load time: 1
Connect Time: 0
Latency: 1
Size in bytes: 112
Sent bytes:584
Headers size in bytes: 112
Body size in bytes: 0
Sample Count: 1
Error Count: 1
Data type ("text"|"bin"|""):
Response code: 522
Response message:
Response headers:
HTTP/1.1 522
Server: nginx
Date: Fri, 23 Feb 2018 15:13:12 GMT
Content-Length: 0
Connection: keep-alive
HTTPSampleResult fields:
ContentType:
DataEncoding: null
I am unable to figure out the reason for this behaviour. Any pointers on what could be the issue?
If you choose a Ramp-up Period of 40 with 10 threads, the calls to the server come in at about 4 transactions a second.
When you use Cloudflare services, one of its features is to protect the server from being overloaded.
There are a few main causes of this:
The origin server was too overloaded to respond.
The origin web server has a firewall that is blocking our requests, or packets are being dropped within the host’s network.
Error 522 diagram (source: cloudflare.com)
If you need to load test your server, use a different route than Cloudflare; consult your IT department about such an option.
If that is not possible, reduce the transactions-per-second rate.
Ensure that the origin server isn’t overloaded. If it is, it could be dropping requests.
I configured a load test in Visual Studio with the following settings:
Load Pattern: Step
Initial User Count: 500
Maximum User Count: 1000
Step Duration (seconds): 300
Step Ramp Time (seconds): 0
Step User Count: 250
I need to start the test with 500 users, then increment to 750 users and finish with 1000 users.
So, in my report, in the Key Indicators section, I expect to see:
Load users:
min: 500
max: 1000
avg: 750
But really I see:
Load users:
min: 500
max: 500
avg: 500
I don't know what I'm doing wrong.
I need to test if our system can perform N requests per second.
Technically, it's 2 requests to one API, 2 requests to another, and 6 requests to a third one.
But the important thing is that they should happen simultaneously, so 10 requests per second.
So, in JMeter I've created three Thread Groups: the first defines a number of threads of 1 and a ramp-up time of 0.
The second Thread Group is the same, and the third defines a number of threads of 6 and a ramp-up time of 0.
But that doesn't really guarantee it's going to run them every second.
How do I emulate that? And how do I see from the results whether it was able to keep up or not?
Thanks!
You could use ConstantThroughputTimer.
Quote from JMeter help files below:
18.6.4 Constant Throughput Timer
This timer introduces variable pauses, calculated to keep the total throughput (in terms of samples per minute) as close as possible to a given figure. Of course, the throughput will be lower if the server is not capable of handling it, or if other timers or time-consuming test elements prevent it.
N.B. although the Timer is called the Constant Throughput timer, the throughput value does not need to be constant. It can be defined in terms of a variable or function call, and the value can be changed during a test.
For example, I've used it to generate 40 requests per second (the timer works in samples per minute, hence the value of 2400.0 below):
<ConstantThroughputTimer guiclass="TestBeanGUI" testclass="ConstantThroughputTimer" testname="Constant Throughput Timer" enabled="true">
  <stringProp name="calcMode">all active threads in current thread group</stringProp>
  <doubleProp>
    <name>throughput</name>
    <value>2400.0</value>
    <savedValue>0.0</savedValue>
  </doubleProp>
</ConstantThroughputTimer>
And that's a summary:
Created the tree successfully using performance/search-performance.jmx
Starting the test # Tue Mar 15 16:28:39 CET 2011 (1300202919244)
Waiting for possible shutdown message on port 4445
Generate Summary Results + 3247 in 80,3s = 40,4/s Avg: 18 Min: 0 Max: 1328 Err: 108 (3,33%)
Generate Summary Results + 7199 in 180,0s = 40,0/s Avg: 15 Min: 1 Max: 2071 Err: 378 (5,25%)
Generate Summary Results = 10446 in 260,3s = 40,1/s Avg: 16 Min: 0 Max: 2071 Err: 486 (4,65%)
Generate Summary Results + 7200 in 180,0s = 40,0/s Avg: 14 Min: 0 Max: 152 Err: 399 (5,54%)
Generate Summary Results = 17646 in 440,4s = 40,1/s Avg: 15 Min: 0 Max: 2071 Err: 885 (5,02%)
Generate Summary Results + 7199 in 180,0s = 40,0/s Avg: 14 Min: 0 Max: 1797 Err: 436 (6,06%)
Generate Summary Results = 24845 in 620,4s = 40,0/s Avg: 15 Min: 0 Max: 2071 Err: 1321 (5,32%)
But I ran this test inside my network.
As with any network test, there are always going to be problems, especially with latency: even if you could send exactly 6 per second, they're going to be sent sequentially (that's just how packets get sent) and may not all hit within that second, plus there's processing time.
Generally, when performance requirements specify x per second, it's measured over a period of time. Your API may even have a buffer, so you could technically send 6 per second but only process 5 per second with a buffer of 20, meaning it would be fine for 20 seconds of traffic: you'd have sent 120 requests, which would take 120/5 = 24 seconds to process. Any more than that would overflow the buffer. So just sending exactly 6 in one second is an insufficient test.
In the Thread Group, you're right to set the number of threads (users) to 6. Then run it looping forever (tick the option or put the samplers in a While loop) and add listeners like the Aggregate Report and View Results Tree. You can use the results to check that the right requests are being sent and responded to (assuming you validate the responses), and in the Aggregate Report you can see how much of each activity is happening per hour (divide by 3600 for the per-second rate; because of this inaccuracy it's best to run the test for a good length of time).
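A minimal sketch of such a Thread Group in .jmx form, assuming standard JMeter element and property names; treat it as illustrative rather than a complete plan, since the HTTP samplers, listeners and any timer still need to be added under it:

<ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="6 users, loop forever" enabled="true">
  <stringProp name="ThreadGroup.num_threads">6</stringProp>
  <stringProp name="ThreadGroup.ramp_time">1</stringProp>
  <!-- loops = -1 corresponds to ticking "Forever" in the GUI -->
  <elementProp name="ThreadGroup.main_controller" elementType="LoopController" guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
    <boolProp name="LoopController.continue_forever">false</boolProp>
    <intProp name="LoopController.loops">-1</intProp>
  </elementProp>
  <stringProp name="ThreadGroup.on_sample_error">continue</stringProp>
</ThreadGroup>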
The initial load test can now be run, and as a more accurate test, you can leave it running for longer (soak test) to see if any other problems surface - buffer overflows, memory leaks, or other unexpected events.
Use the Throughput Shaping Timer
I had a similar problem, and here are two solutions I found:
Solution 1:
You can use the Stepping Thread Group (it lets you define stages in which the thread count increases over set periods of time) with a Constant Throughput Timer inside it.
The Constant Throughput Timer lets you set the number of samples a thread can send per minute (e.g. if you set it to 1, the thread will only send one request per minute). You can also apply the timer to all threads in your Thread Group, or give each thread its own timer with its own settings.
Read more about Throughput Timer here: https://www.blazemeter.com/blog/how-use-jmeters-throughput-constant-timer
Solution 2:
Use the "setUp Thread Group". You can calculate the thread count and ramp-up time to get the desired threads per second (e.g. 10 threads with a 10-second ramp-up starts one thread per second).
You can use the Schedule Feedback Function; you will also need the Concurrency Thread Group.
The same can also be done from the UI by adding the "ConstantThroughputTimer" suggested above: right-click the Thread Group, choose Add > Timer, and then select "Constant Throughput Timer".