(Attached as image)
"In My summary report
Total Samplers = 11944
My total Average response = 2494 mili-second = 2.49 seconds.
What i understand from here 11944 samplers are processed in average of 2.49 seconds.That means my test actually should processed for 11944 x 2.49 Seconds = 82 hours.But it actually ran about 15-20 mints max.
So trying to understand,is it reduced execution time due to JMeter parallel/multiple thread execution or i am understanding it wrong way.
I want to know a single request average response time"
JMeter calculates the average response time as:
the sum of all samplers' response times
divided by the number of samplers
i.e. it is the arithmetic mean of all the samplers' response times.
11944 x 2.49 / 3600 gives approximately 8.2 hours, and yes, this is how long it would take to execute the test with a single user; the elapsed time decreases roughly proportionally with the number of threads used.
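As an illustration, here is a minimal Java sketch of the same arithmetic (it mirrors the idea behind JMeter's Calculator class, not its actual API; the per-sampler times are made-up values):

```java
public class AverageResponseTime {
    public static void main(String[] args) {
        // Hypothetical per-sampler response times in milliseconds
        long[] responseTimesMs = {2100, 2600, 2450, 2800, 2520};

        long sum = 0;
        for (long t : responseTimesMs) {
            sum += t;
        }
        // Arithmetic mean: sum of all sampler response times divided by the number of samplers
        double averageMs = (double) sum / responseTimesMs.length;
        System.out.printf("Average response time: %.0f ms%n", averageMs); // 2494 ms

        // The question's numbers: 11944 samplers averaging ~2.49 s each, executed sequentially
        double singleUserHours = 11944 * 2.49 / 3600.0;
        System.out.printf("Single-user duration: %.2f hours%n", singleUserHours); // ~8.26 hours
    }
}
```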
More information:
Calculator class source code
JMeter Glossary
Understanding Your Reports: Part 2 - KPI Correlations
It depends on the number of threads you used.
For example, if you used 50 threads for 12K samples/requests and each request took an average of 2.5 seconds:
12000 requests * 2.5 s average / 50 threads / 60 s per minute = 10 minutes
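The same estimate can be wrapped in a small helper; a minimal sketch, assuming the samples are spread evenly across threads and every sample takes the average response time:

```java
public class DurationEstimate {
    // Rough wall-clock estimate: total sequential work divided by the number of threads
    static double estimateMinutes(int samples, double avgResponseSeconds, int threads) {
        return samples * avgResponseSeconds / threads / 60.0;
    }

    public static void main(String[] args) {
        System.out.println(estimateMinutes(12000, 2.5, 50)); // 10.0 minutes
        System.out.println(estimateMinutes(12000, 2.5, 1));  // 500.0 minutes, i.e. ~8.3 hours
    }
}
```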
Related
I am using JMeter to run a performance test with different numbers of users. With 1 user the average response time is 1.4 seconds; with more users it would be logical for the average response time to go up, but instead it is decreasing. Can anyone explain why? The test scenario is that I interact a few times (2-3 interactions) with a chat bot.
Please help me understand these confusing results below:
1 user, 30-second test: 1.3 seconds average response time
5 users, 60-second test: 0.92 seconds average response time
10 users, 60-second test: 0.93 seconds average response time
20 users, 120-second test: 0.92 seconds average response time
The first iteration of the first user often involves some overhead on the client side (most commonly DNS resolution) and can involve some overhead on the server side (server "warm-up"). That overhead is not incurred in the following iterations or by subsequent users.
Thus what you see as a reduction in average time is actually a reduction of the impact of the slower "first user, first iteration" execution time on the overall outcome. This is why it's important to collect a sufficient sample, so that such a local spike no longer matters much. My rule of thumb is at least 10000 iterations before looking at any averages, although the level of comfort is up to every tester to set.
Also, when increasing the number of users you should not expect the average to get worse unless you have reached a saturation point; it should rather stay stable. So if you expect your app to support no more than 20 users, then your result is surprising, but if you expect the application to support 20000 users, you should not see any degradation of the average at 20 users.
To test whether this is what happens, try running 1 user for much longer, so that the total number of iterations is similar to running 20 users. Roughly, you need to increase the duration of the 1-user test to about 40 minutes to reach a similar number of iterations (the 20-user test is the same 120 seconds long but performs about 20x the iterations, i.e. roughly 2400 seconds of single-user work).
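As a quick sanity check of that figure, a minimal sketch of the conversion (it deliberately ignores the difference in per-iteration response time between 1 and 20 users):

```java
public class EquivalentDuration {
    public static void main(String[] args) {
        int users = 20;
        int testDurationSeconds = 120;

        // Total "user-seconds" of work done by the multi-user test;
        // a single user needs roughly this long to perform a similar number of iterations
        int singleUserSeconds = users * testDurationSeconds;
        System.out.println(singleUserSeconds / 60 + " minutes"); // 40 minutes
    }
}
```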
My performance test strategy is to use JMeter to send 3 different requests to a Linux server continuously. They are Init, Calculate and Apply.
The number of active users at peak hour is 40, and each request is sent 200 times per hour. The load test should be conducted at peak usage for no less than one hour.
If my understanding is correct, running the test for two hours should eventually show 1200 samples in the result table (200 requests * 3 request types * 2 hours). However, with the following configuration, far more samples are sent to the server.
Thread Group:
- Number of threads: 200
- Ramp-up time: 3600 seconds
- Duration: 7200 seconds
I have also tried setting the number of threads to 50; the result is still far more than I expect.
How do I configure JMeter correctly?
Your configuration should be:
- Number of threads: 40
- Ramp-up time: should be short in your case; its value defines how much time it takes for the thread count to go from 0 to 40.
- Duration: OK as is
Finally, as you want 200 requests per hour per request type, i.e. 600 per hour in total or 10 per minute, you need to use a Constant Throughput Timer inside a Test Action sampler (Constant Throughput Timer and Test Action configurations attached as images).
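As a back-of-the-envelope check of the timer value (the Constant Throughput Timer's target field is expressed in samples per minute), a minimal sketch:

```java
public class CttTarget {
    public static void main(String[] args) {
        int requestTypes = 3;             // Init, Calculate, Apply
        int requestsPerHourPerType = 200;

        int totalPerHour = requestTypes * requestsPerHourPerType; // 600 requests/hour
        double samplesPerMinute = totalPerHour / 60.0;            // 10.0

        // Value to enter in the Constant Throughput Timer's
        // "Target throughput (in samples per minute)" field
        System.out.println("CTT target: " + samplesPerMinute + " samples/minute");
    }
}
```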
I have about 300 users (configured in the thread group) who would each perform an activity (e.g. run an e-learning course) twice. That means I should expect about 600 iterations, i.e. 300 users performing an activity twice.
My thread group contains the following transaction controllers:
Login
Dashboard
Launch Course
Complete Course
Logout
As I need 600 iterations per 5400 seconds, i.e. 3600 + 900 + 900 seconds (1 hour steady state + 15 minutes ramp-up + 15 minutes ramp-down), and the total number of sampler requests within the thread group is 18, would I be correct to say I need about 2 RPS?
Total number of iterations * number of requests per iteration = Total number of requests
600 * 18 = 10800
Total number of requests / Total test duration in seconds = Requests per second
10800 / 5400 = 2
Are my calculations correct?
In addition, what is the best approach to achieve the expected throughput?
Your calculation looks more or less correct. If you need to limit your test throughput to 2 RPS you can do it using Constant Throughput Timer or Throughput Shaping Timer.
However, 2 RPS is nothing more than statistical noise; my expectation is that you need a much higher load to really test your application's performance, i.e.:
- Simulate the anticipated number of users for a short period. Don't worry about iterations; just let the test run, e.g. for an hour, with the number of users you expect. This is called load testing.
- Do the same, but for a longer period of time (e.g. overnight or over a weekend). This is called soak testing.
- Gradually increase the number of users until you see errors or the response time starts exceeding acceptable thresholds. This is called stress testing.
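If you do go with the 2 RPS cap, a minimal sketch of the supporting arithmetic (the 0.5 s average response time used for the thread check is an assumed figure, not from the question):

```java
public class TargetRps {
    public static void main(String[] args) {
        int iterations = 600;           // 300 users x 2 runs each
        int requestsPerIteration = 18;  // samplers in the thread group
        int testDurationSeconds = 5400; // 900 s ramp-up + 3600 s steady state + 900 s ramp-down

        double rps = (double) iterations * requestsPerIteration / testDurationSeconds;
        System.out.println("Target throughput: " + rps + " requests/second"); // 2.0

        // Constant Throughput Timer is configured in samples per minute
        System.out.println("CTT target: " + rps * 60 + " samples/minute"); // 120.0

        // Little's Law sanity check: busy threads = throughput x average response time,
        // so at 2 RPS only a handful of the 300 configured threads would ever be active
        double assumedAvgResponseSeconds = 0.5;
        System.out.println("Threads kept busy at 2 RPS: " + rps * assumedAvgResponseSeconds); // 1.0
    }
}
```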
I have set the number of threads and the ramp-up time to 1/1, and I am iterating over my 1000 records from data.csv for 1800 seconds.
Now, given these numbers, I have set the CTT (Constant Throughput Timer) to 2000 per minute and expected the average throughput to be 2000/60 = 33.3/sec, but I get 18.7/sec. When I increased the target to 4000 per minute, I still get 18 or 19/sec.
The Constant Throughput Timer cannot force threads to execute faster; it can only pause threads to limit JMeter's throughput to the defined value.
Each JMeter thread executes samplers as fast as it can; however, the next iteration won't start until the previous one has finished, so with 1 thread the throughput cannot exceed 1 / (application response time).
Also be aware that the Constant Throughput Timer is only accurate at the minute level, so it really controls "requests per minute" rather than "requests per second"; if your test is shorter than 1 minute, consider using the Throughput Shaping Timer instead.
So I would recommend increasing the number of virtual users, e.g. to 50.
See How to use JMeter's Constant Throughput Timer for more details.
I guess your application's average response time is around 50 ms, which means a single thread can only perform about 20 hits/sec (1 sec / 0.05 sec per hit = 20 hits/sec).
You have 2 solutions:
- increase the number of threads to send requests in parallel,
- or make your app respond faster (obviously harder).
At some point, when your application can't handle more load, you will see the hits/sec drop and the average response time increase.
The graph (attached as an image) shows an example of an application that has a steady response time with up to 20 concurrent threads.
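Putting the two answers' arithmetic together, a minimal sketch of the thread-count estimate (the 50 ms response time is the answer's guess and the 33.3/sec target comes from the question):

```java
public class ThreadsForThroughput {
    public static void main(String[] args) {
        double avgResponseSeconds = 0.05;       // ~50 ms, guessed from the observed 18-19 hits/sec
        double targetPerSecond = 2000.0 / 60.0; // CTT target of 2000/min, i.e. ~33.3/sec

        // A single thread can do at most 1 / responseTime requests per second
        double maxPerThread = 1.0 / avgResponseSeconds; // 20 hits/sec

        // Little's Law: concurrent threads needed = target throughput x response time;
        // in practice you want headroom above this idealized minimum (hence the 50 suggested above)
        long threadsNeeded = (long) Math.ceil(targetPerSecond * avgResponseSeconds); // 2

        System.out.println("Max throughput per thread: " + maxPerThread + "/sec");
        System.out.println("Theoretical minimum threads: " + threadsNeeded);
    }
}
```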
I have run a test with a load of 100 and got the result "100 in 13.2/s = 7.4/s".
So what is the meaning of "100 in 13.2/s = 7.4/s"?
It means the number of executed samples (requests) is 100, the test duration is 13.2 seconds, and the throughput is 7.4/s. In other words, your application handled on average 7.4 requests per second during those 13.2 seconds, and the total number of requests in that test was 100.
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
In fact, there is a mistake in the question: it should be "100 in 13.2s", not "100 in 13.2/s".
For further detail, go through the Apache JMeter User Manual: Glossary & Elements of a Test Plan.
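For completeness, a minimal sketch of that throughput formula (the small difference from the reported 7.4/s presumably comes from the exact measurement window JMeter uses, from the start of the first sample to the end of the last):

```java
public class Throughput {
    public static void main(String[] args) {
        int requests = 100;
        double totalSeconds = 13.2; // start of first sample to end of last sample

        // Throughput = (number of requests) / (total time)
        double throughput = requests / totalSeconds;
        System.out.printf("%.1f requests/second%n", throughput); // ~7.6 with these rounded inputs
    }
}
```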