JMeter showing different request times

I have written a script combining 10 HTTP requests, with a different number of threads for each but the same ramp-up period of 1 second. Looking at Kibana after the test execution, I can see that the API requests are spread over 5 seconds even though the ramp-up period is set to 1 second. The command I used to generate the graphs and the results file is below.
jmeter -n -t S:\roshTests\Cocktail\Cocktail.jmx -l S:\roshTests\Cocktail\results.csv
Even the results.csv file shows timestamps spread over a gap of 5 seconds.
Can someone answer the following?
a) Does the results.csv file show the response time or the time the request was sent by JMeter?
b) A few thread groups are in the 1000 range and a few are below 100. Is there any restriction on the maximum number of threads that can be started in a second using JMeter?
c) Why does results.csv show a gap of 5 seconds when the ramp-up period is set to 1 second?
d) Is there any graph which shows that the requests are sent within a second?
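One way to answer a) and c) yourself is to inspect the timestamps in the results file directly. Below is a quick Python sketch (not part of JMeter) that reads a default-format JMeter CSV results file; by default the timeStamp column records each sample's start time in epoch milliseconds (controlled by the sampleresult.timestamp.start property), so the span between the first and last timestamps shows when requests were actually fired, not how long they took.

```python
import csv

# Sketch: read a JMeter results CSV (default .jtl/.csv format with a header
# row) and compute the span between the first and last sample start times.
# timeStamp is the sample start time in epoch milliseconds by default.
def request_start_span(path):
    with open(path, newline="") as f:
        starts = [int(row["timeStamp"]) for row in csv.DictReader(f)]
    return (max(starts) - min(starts)) / 1000.0  # span in seconds

# Example (path from the question):
# request_start_span(r"S:\roshTests\Cocktail\results.csv")
```

If the span is around 5 seconds with a 1 second ramp-up, the threads were started within 1 second but later samples were delayed waiting for earlier responses.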

Related

Unable to reach the requested number of requests while running jmeter distributed

I am working with 1 master and 50 slaves. Number of threads: 20.
I have 1 thread group and there are 3 samplers in it. I'm starting the tests, but as a result of the test I see only 500-700 requests going through.
1000 requests should go, but they don't.
You have 20 * 50 = 1000 threads in total.
You can reach 1000 requests per second only if the response time is exactly 1 second.
If the response time is 0.5 seconds, you will get 2000 requests per second.
If the response time is 2 seconds, you will get 500 requests per second.
If you need to simulate X requests per second, you need to add more threads (or alternatively go for the Concurrency Thread Group and Throughput Shaping Timer combination).
But remember that the application under test must be able to respond fast enough, because if it doesn't, no JMeter tweaks will make any difference.
By the way, 1000 requests per second can be simulated from a modern mid-range laptop without any issues (unless your requests are "very heavy"); just make sure to follow JMeter Best Practices.
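The relationship described above can be written as a one-line formula: with a fixed pool of synchronous threads, the best achievable rate is threads divided by response time. A quick Python sketch of that arithmetic:

```python
# With a fixed pool of synchronous threads (each thread waits for the
# response before sending the next request), the best-case rate is:
def max_throughput(threads, response_time_s):
    """Best-case requests per second for a synchronous thread pool."""
    return threads / response_time_s

threads = 50 * 20  # 50 slaves x 20 threads = 1000 threads in total
print(max_throughput(threads, 1.0))  # 1000.0 rps at 1 s responses
print(max_throughput(threads, 0.5))  # 2000.0 rps at 0.5 s responses
print(max_throughput(threads, 2.0))  # 500.0 rps at 2 s responses
```

This is why observing only 500-700 requests per second from 1000 threads suggests response times somewhere between 1.4 and 2 seconds.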

Jmeter to determine the time taken to send 1000 requests

I am using a normal JMeter Thread Group to simulate 1 user looping to send a total of 1000 requests. Is the total time taken to send 1000 requests the total runtime?
E.g. 1 user, 1000 requests (looped to do so); at the top right, after it finishes, I see a timing of 1 min 40 seconds. Does 1 user spend 100 seconds to send 1000 requests?
So an average of 1 request per 0.1 seconds?
Yes, it's a viable approach. However, if you want the total time to execute 1000 requests in the loop to appear in the .jtl results file and/or the HTML Reporting Dashboard, you could amend your test plan a little, like:
Thread Group with 1 user and 1 loop and 0 ramp-up
Transaction Controller
Loop Controller with 1000 loops
Your Sampler
This way the Transaction Controller will generate a synthetic Sample Result whose elapsed time is the sum of all its children (the 1000 Sampler executions).
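The same "sum of all children" figure can also be computed after the fact from the results file. Below is an illustrative Python sketch (not a JMeter feature) that totals the elapsed column of a default-format CSV results file for a given sampler label; the label "Your Sampler" is just a placeholder from the test plan outline above.

```python
import csv

# Sketch: sum the elapsed column (milliseconds) of every sample with the
# given label, mirroring what the Transaction Controller's parent sample
# reports when "generate parent sample" is used.
def total_elapsed_ms(path, label):
    with open(path, newline="") as f:
        return sum(int(row["elapsed"]) for row in csv.DictReader(f)
                   if row["label"] == label)

# total_elapsed_ms("results.jtl", "Your Sampler") / 1000.0  -> seconds
```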

Understanding difference between thread group properties

I've started distributed performance testing using JMeter. If I give scenario 1:
no.of threads: 10
ramp up period: 1
loop count: 300
Everything runs smoothly, as scenario 1 translates to 3000 requests in 300 seconds, i.e. 10 requests per second.
If I give scenario 2:
no.of threads: 100
ramp up period: 10
loop count: 30
AFAIK, scenario 2 also executes 3000 requests in 300 seconds, i.e. 10 requests per second.
But things started failing: the server faces heavy load and requests fail. In theory scenario 1 and scenario 2 should be the same, right? Am I missing something?
All of these are heavy calls; each one takes 1-2 seconds under normal load.
In an ideal world, for scenario 2 you would have 100 requests per second and the test would finish in 30 seconds.
The fact that in the 2nd case you have the same execution time indicates that your application cannot process incoming requests faster than 10 per second.
Try increasing the ramp-up time for the 2nd scenario and look into the following charts:
Active Threads Over Time
Response Times Over Time
Transactions Per Second
Normally, when you increase the load, the number of "Transactions Per Second" should increase by the same factor and "Response Time" should remain the same. Once response time starts growing and the number of transactions per second starts decreasing, you have passed the saturation point and discovered the bottleneck. Report the point of maximum performance and investigate the reasons for the first bottleneck.
More information: What is the Relationship Between Users and Hits Per Second?
In scenario 2, after 10 seconds you have 100 concurrent users executing requests in parallel; your server may not handle such load well, or may actively prevent it.
Concurrent user load testing sends simultaneous artificial traffic to a web application in order to stress the infrastructure and record system response times during periods of sustained heavy load.
In scenario 1, after 1 second you have 10 concurrent users looping through the flow, without causing a comparable load on the server.
Note that your server may restrict the number of concurrent users only on specific request(s).
We should be very clear about the ramp-up time.
The following is based on the official documentation.
Scenario 1 : no.of threads: 10
ramp up period: 1
loop count: 300
In the above scenario, 10 threads (virtual users) are created in 1 second. Each user will loop 300 times, hence there will be 3000 requests to the server. Throughput cannot be calculated in advance with this configuration; it fluctuates based on server capability, network conditions, etc. You can control the throughput with certain components and plugins.
Scenario 2 : no.of threads: 100
ramp up period: 10
loop count: 30
In scenario 2, 100 threads (virtual users) are created in 10 seconds. The 100 virtual users will send requests to the server concurrently, each sending 30 requests. In the second scenario you will have a higher throughput (number of requests per second) compared to scenario 1. It looks like the server cannot handle 100 users sending requests concurrently.
Ramp-up time applies only to the first cycle of each thread: it spaces out the first request of each user in their first iteration.
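The difference between the two scenarios can be made concrete with a rough model. The Python sketch below assumes each request takes a fixed response time (here 1 second, in line with the "1-2 seconds per call" mentioned in the question) and no think time; under those assumptions the two scenarios send the same 3000 requests but at very different rates.

```python
# Rough model: each thread runs its loops serially, each request taking
# response_time_s seconds, with no think time between iterations.
def scenario(threads, loops, response_time_s):
    total_requests = threads * loops
    duration_s = loops * response_time_s  # one thread's serial run time
    rps = threads / response_time_s       # steady-state requests per second
    return total_requests, duration_s, rps

print(scenario(10, 300, 1.0))   # scenario 1: (3000, 300.0, 10.0)
print(scenario(100, 30, 1.0))   # scenario 2: (3000, 30.0, 100.0)
```

Same total work, but scenario 2 hits the server ten times harder while it runs, which is why it can fail while scenario 1 does not.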

How to verify JMeter's performance on a distributed performance test?

I'm doing a REST API performance test where I have to send a lot of requests simultaneously. To do so I'm using 3 JMeter instances (1 master and 2 slaves).
To give you some more context, I wrote a JMeter script with 2 thread groups, and in each group I have 150 threads and a Constant Throughput Timer.
Here is the command line I use to launch the test:
./jmeter -n -t ./script.jmx -l ./samples.csv -e -o ./dashboard -R 127.0.0.1,192.168.1.96,192.168.1.175 -Gthroughput=900000 -Gduration=10 -Gvmnb=3 -G ./API.properties
In this command line, throughput is the total throughput that I'm aiming for across the 3 servers (its value is divided by vmnb, my 3rd variable, and each server then handles its share of the throughput), and duration is the duration of the test.
In this case, the constant throughput should be 900K (300K per server) for 10 minutes. The ramp-up period is 5 minutes (duration/2).
Now my question:
If I understood correctly, at the end I should have 900K * 10 minutes = 9000K samples in my results file (per API).
On my JMeter dashboard, I have only 200K and 160K samples for each URL. Even if it only shows the master server (I think), I'm far away from the expected results, no?
dashboard image (I can't upload an image yet...)
Am I missing something, or am I having performance issues with my VMs such that they aren't able to deliver the high throughput?
I would like to thank you all in advance for your help,
Best regards,
Marc
The JMeter master doesn't generate any load; it only loads the test plan and sends it to the slave instances, so in your setup you have 2 load generators.
The Constant Throughput Timer can only pause the threads to limit JMeter's throughput to the given value, so you need to ensure that you have enough threads to produce the desired throughput. If your target is 9M samples in 10 minutes, that is 900k samples per minute, or 450k samples per minute per slave, which gives 7500 requests per second. In order to reach 7500 requests per second with 150 threads, you would need a 0.02 second response time, while your average response time is around 1 second.
Given the above, I would recommend switching to the Throughput Shaping Timer and Concurrency Thread Group combination. They can be connected via the Schedule Feedback Function so JMeter can kick off extra threads to reach and maintain the defined throughput.
Also make sure to follow JMeter Best Practices, as 7500 RPS is quite a high load and you need confidence that JMeter is capable of sending requests fast enough.
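The arithmetic in this answer, spelled out as a small Python sketch (the slave count of 2 follows from the master generating no load; the thread count of 150 is per thread group, as stated in the question):

```python
target_samples = 9_000_000   # 900k samples/min for 10 minutes
duration_min = 10
slaves = 2                   # the master only coordinates, it sends no load
threads = 150                # threads in one thread group

per_slave_per_min = target_samples / duration_min / slaves  # 450,000
rps_per_slave = per_slave_per_min / 60                      # 7500.0
needed_response_time_s = threads / rps_per_slave            # 0.02 s
print(rps_per_slave, needed_response_time_s)
```

With real response times around 1 second, 150 threads can deliver only about 150 requests per second per slave, which matches the sample counts on the dashboard being far below target.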

How to correctly configure number of requests in JMeter?

My performance test strategy is to use JMeter to continually send 3 different requests to a Linux server: Init, Calculate and Apply.
The number of active users at peak hour is 40, and the number of each request per hour is 200. The load test should be conducted at peak usage for no less than one hour.
If my understanding is correct, running the test for two hours should eventually produce 1200 samples in the result table (200 requests * 3 * 2 hours). However, with the following configuration there are far more samples sent to the server.
Thread Group:
- Number of threads: 200
- Ramp-up time: 3600 seconds
- Duration: 7200 seconds
I have also tried setting the number of threads to 50; the result is still far more than my expectation.
How can I configure JMeter correctly?
Your configuration should be:
Number of threads: 40
Ramp-up time: should be short in your case; its value tells in how much time the threads will go from 0 to 40.
Duration is OK.
Finally, as you want 200 requests per hour for each request, which is 600 for the 3 of them, i.e. 10 per minute, you need to use a Constant Throughput Timer inside a Test Action:
Where Test Action is :
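The arithmetic behind the recommended timer setting, as a quick Python sketch (note that the Constant Throughput Timer's target is expressed in samples per minute):

```python
requests_per_hour_each = 200
samplers = 3                                        # Init, Calculate, Apply
total_per_hour = requests_per_hour_each * samplers  # 600 samples/hour
ctt_target_per_minute = total_per_hour / 60         # 10.0 (CTT unit)
expected_samples_2h = total_per_hour * 2            # 1200, as in the question
print(ctt_target_per_minute, expected_samples_2h)
```

Without the timer, 40 (or 200) threads looping freely send requests as fast as the server responds, which is why the observed sample count was far above 1200.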
