JMeter: interpreting results in simple terms

So I'm trying to test a website and to interpret the aggregate report by "common sense" (I tried looking up the meanings of each result, but I cannot understand how they should be interpreted).
TEST 1
Thread Group: 1
Ramp-up: 1
Loop Count: 1
- Samples 1
- Average 645
- Median 645
- 90% Line 645
- Min 645
- Max 645
- Throughput 1.6/sec
So I am under the assumption that the first result is the best outcome.
TEST 2
Thread Group: 5
Ramp-up: 1
Loop Count: 1
- Samples 1
- Average 647
- Median 647
- 90% Line 647
- Min 643
- Max 652
- Throughput 3.5/sec
I am assuming the TEST 2 result is not so bad, given that the results are close to TEST 1's.
TEST 3
Thread Group: 10
Ramp-up: 1
Loop Count: 1
- Samples 1
- Average 710
- Median 711
- 90% Line 739
- Min 639
- Max 786
- Throughput 6.2/sec
Given the dramatic difference, I am assuming that if 10 users concurrently request the website, it will not perform well. How would this set of tests be interpreted in simple terms?

It is as simple as available resources.
Response times depend on many things; the following are critical factors:
Server Machine Resources (Network, CPU, Disk, Memory etc)
Server Machine Configuration (type of server, number of nodes, no. of threads etc)
Client Machine Resources (Network, CPU, Disk, Memory etc)
As you can see, it mostly comes down to how busy the server is responding to other requests and how busy the client machine is generating/processing the load (I assume you run all 10 users on a single machine).
The best way to find the actual reason is to monitor these resources, using nmon for Linux and PerfMon or Task Manager for Windows (or any other monitoring tool), and compare the differences when you run 1, 5, and 10 users.
Theory aside, I assume it is taking time because you are applying a sudden load, so the server is still busy processing the previous requests.
Are you running the client and server on the same machine? If yes, that would tell us that the system resources are being used both for the client threads (10 threads) and the server threads.
Response Time = time for the client to send the request to the server + server processing time + time for the server to send the response back to the client
In your case, one or more of these times may have increased.
If you have good bandwidth, then it is most likely the server processing time.
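From the client side, JMeter only ever sees the sum of those three times. A minimal sketch of measuring it in plain Java (the URL is just a placeholder):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ResponseTimeDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://example.com/")) // placeholder URL
                .build();

        long start = System.nanoTime();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // This one number bundles request transfer time, server processing
        // time and response transfer time -- the client alone cannot
        // separate the three components.
        System.out.println("Status " + response.statusCode()
                + ", response time ~" + elapsedMs + " ms");
    }
}
```

That is why monitoring the server itself (nmon, PerfMon) is needed to tell which of the three components grew.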

Your results are confusing.
For thread counts of 5 and 10, you have reported the same number of samples: 1. It should be 1 sample (1 thread), 5 samples (5 threads), and 10 samples (10 threads). Your experiment has too few samples to conclude anything statistically. You should model your load so that the 1-thread load is sustained for a longer period before you ramp up to 5 and 10 threads. If you are running a small test to assess the scalability of your application, you could do something like
1 thread - 15 mins
5 threads - 15 mins
10 threads - 15 mins
and record the observations for each 15-minute period. If your application really scales, it should maintain the same response time even under the increased load.
Looking at your results, I don't see any issues with your application; nothing is varying. Again, you don't have enough samples to reach a statistically relevant conclusion.
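To get a feel for how many samples such a staged plan would produce, here is a back-of-the-envelope sketch (assuming the ~0.7 s response times observed above and threads looping continuously):

```java
public class SampleCountEstimate {
    public static void main(String[] args) {
        double avgResponseSec = 0.7;     // ~645-710 ms observed above
        int stageDurationSec = 15 * 60;  // 15 minutes per stage

        for (int threads : new int[] {1, 5, 10}) {
            // each thread completes roughly duration / responseTime iterations
            long samples = Math.round(threads * stageDurationSec / avgResponseSec);
            System.out.printf("%2d threads x 15 min -> ~%d samples%n",
                    threads, samples);
        }
    }
}
```

Even the 1-thread stage then yields over a thousand samples, which is far more meaningful statistically than a single sample per stage.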

Related

Concurrency Thread Group is running more users than defined in the configuration

Total Concurrency 12
Ramp Up Time (sec) 120
Ramp up step 4
In the listener, I have sometimes seen 17 users and sometimes 21.
I am expecting the total users to increase in groups of 3.
The Concurrency Thread Group runs exactly the number of users you define in the "Target Concurrency" field, no less, no more.
Here is the demo; I only changed 120 seconds to 12 to keep it reasonably short.
I can only think of one situation in which the described behaviour is possible: running a JMeter test in distributed mode. In that case the actual load will be 12 multiplied by the number of JMeter slave machines, for example:
1 slave - 12 threads
2 slaves - 24 threads
3 slaves - 36 threads
etc.
You can also check the jmeter.log file; it normally contains entries about threads starting and stopping and the reason why. It might be the case that you have more than one Thread Group, or another component such as a Precise Throughput Timer, a Throughput Shaping Timer, or custom code in JSR223 Test Elements that kicks off extra threads.
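For reference, assuming the Concurrency Thread Group divides Target Concurrency evenly across the ramp-up steps (its documented behaviour), the expected schedule for the configuration above can be sketched like this:

```java
public class ConcurrencyRampSchedule {
    public static void main(String[] args) {
        int targetConcurrency = 12;
        int rampUpSeconds = 120;
        int steps = 4;

        int usersPerStep = targetConcurrency / steps;  // 3 users per step
        int secondsPerStep = rampUpSeconds / steps;    // 30 s per step

        for (int step = 1; step <= steps; step++) {
            System.out.printf("t=%3d s: %d active users%n",
                    step * secondsPerStep, step * usersPerStep);
        }
        // prints 3, 6, 9, 12 -- never more than Target Concurrency
    }
}
```

If a listener shows 17 or 21 users instead, the extra threads must come from one of the sources above (slave machines, extra Thread Groups, timers, or JSR223 code), not from the Concurrency Thread Group itself.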

Understanding the difference between thread group properties

I've started distributed performance testing using JMeter. If I give scenario 1:
no.of threads: 10
ramp up period: 1
loop count: 300
Everything runs smoothly, as scenario 1 translates to 3000 requests in 300 seconds, i.e. 10 requests per second.
If I give scenario 2:
no.of threads: 100
ramp up period: 10
loop count: 30
AFAIK, scenario 2 also executes 3000 requests in 300 seconds, i.e. 10 requests per second.
But things started failing, i.e. the server faces heavy load and requests fail. In theory both scenario 1 and scenario 2 should be the same, right? Am I missing something?
All of these are heavy calls; each one takes 1-2 seconds under normal load.
In an ideal world, scenario 2 would give you 100 requests per second and the test would finish in 30 seconds.
The fact that in the 2nd case you have the same execution time indicates that your application cannot process incoming requests faster than 10 per second.
Try increasing the ramp-up time for the 2nd scenario and look into the following charts:
Active Threads Over Time
Response Times Over Time
Transactions Per Second
Normally, when you increase the load, the number of "Transactions Per Second" should increase by the same factor while "Response Time" remains the same. Once response time starts growing and transactions per second start decreasing, you have passed the saturation point and discovered the bottleneck. You should report the point of maximum performance and investigate the reasons for the first bottleneck.
More information: What is the Relationship Between Users and Hits Per Second?
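A back-of-the-envelope comparison of the two scenarios (a sketch, assuming each heavy call takes ~1 s, as the question states):

```java
public class ScenarioMath {
    public static void main(String[] args) {
        double responseTimeSec = 1.0;  // "each one will take 1-2 seconds"

        print("Scenario 1", 10, 300, responseTimeSec);
        print("Scenario 2", 100, 30, responseTimeSec);
    }

    static void print(String name, int threads, int loops, double rt) {
        // each thread fires one request per response-time interval,
        // so the ideal offered load is threads / responseTime
        double idealRps = threads / rt;
        double idealDurationSec = loops * rt;  // per thread, after ramp-up
        System.out.printf("%s: ideal %.0f req/s, ideal duration ~%.0f s, total %d requests%n",
                name, idealRps, idealDurationSec, threads * loops);
    }
}
```

Both scenarios total 3000 requests, but the ideal offered load differs by a factor of 10 (10 req/s vs 100 req/s), which is why only the second one pushes the server past its limit.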
In scenario 2, after 10 seconds you have 100 concurrent users executing requests in parallel; your server may not handle such load well, or may actively block it.
Concurrent user load testing sends simultaneous artificial traffic to a web application in order to stress the infrastructure and record system response times during periods of sustained heavy load.
In scenario 1, after 10 seconds you have 10 concurrent users looping through the flow, without putting much load on the server.
Note that your server may also restrict the number of users making requests, possibly only for specific request(s).
We should be very clear about the ramp-up time.
The following is an extract from the official documentation: "The ramp-up period tells JMeter how long to take to 'ramp-up' to the full number of threads chosen. If 10 threads are used, and the ramp-up period is 100 seconds, then JMeter will take 100 seconds to get all 10 threads up and running. Each thread will start 10 (100/10) seconds after the previous thread was started."
Scenario 1 : no.of threads: 10
ramp up period: 1
loop count: 300
In the above scenario, 10 threads (virtual users) are created in 1 second. Each user loops 300 times, so there will be 3000 requests to the server. Throughput cannot be calculated in advance with this configuration; it fluctuates based on server capability, network, etc. You can control the throughput with certain components and plugins.
Scenario 2 : no.of threads: 100
ramp up period: 10
loop count: 30
In scenario 2, 100 threads (virtual users) are created in 10 seconds. The 100 virtual users send requests to the server concurrently, each user sending 30 requests. In the second scenario you will therefore have a higher throughput (number of requests per second) than in scenario 1. It looks like the server cannot handle 100 users sending requests concurrently.
Ramp-up time applies only to the first cycle of each thread; it simulates the delays between the first request of each user in their first iteration.
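To make the ramp-up arithmetic concrete: threads are started at an even rate over the ramp-up period (per the documentation quoted above), so in scenario 2 a new thread starts every 0.1 s. A quick sketch:

```java
public class RampUpSchedule {
    public static void main(String[] args) {
        // scenario 2: 100 threads over a 10-second ramp-up
        int threads = 100;
        double rampUpSec = 10.0;

        double delayBetweenStarts = rampUpSec / threads;  // 0.1 s
        System.out.printf("one new thread every %.2f s (%.0f thread starts/s)%n",
                delayBetweenStarts, threads / rampUpSec);
        System.out.printf("thread #1 starts at t=0.0 s, thread #%d at t=%.1f s%n",
                threads, (threads - 1) * delayBetweenStarts);
        // after t=10 s all 100 threads loop concurrently with no further delays
    }
}
```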

Why is the average response time decreasing when we increase the number of users?

I am using JMeter to run a performance test with different numbers of users. With 1 user, the average response time is 1.4 seconds; with more users, you would logically expect the average response time to go up, but instead it is decreasing. Can anyone explain why? The test scenario is that I interact a few times (2-3 interactions) with a chat bot.
Please help me understand these confusing results below:
1 user - 30 seconds - 1.3 seconds (average response time)
5 users - 60 seconds - 0.92 seconds (average response time)
10 users - 60 seconds - 0.93 seconds (average response time)
20 users - 120 seconds - 0.92 seconds (average response time)
The first iteration of the first user often involves some overhead on the client side (most commonly DNS resolution) and can involve some overhead on the server side (server "warm-up"). That overhead is not incurred in the following iterations or by later users.
Thus what you see as a reduction in average time is actually a reduction of the impact that the slower "first user, first iteration" execution time has on the overall outcome. This is why it's important to collect a sufficient sample, so that such a local spike no longer matters much. My rule of thumb is at least 10,000 iterations before looking at any averages, although the comfort level is up to each tester to set.
Also, when increasing the number of users, you should not expect the average to get worse unless you have reached a saturation point; it should rather stay stable. So if you expect your app to support no more than 20 users, then your result is surprising; but if you expect the application to support 20,000 users, you should not see any degradation of the average at 20 users.
To test whether this is what happens, try running 1 user for much longer, so that the total number of iterations is similar to running 20 users. Roughly, you need to increase the duration of the 1-user test to about 40 minutes to get a similar number of iterations (the test length is the same 120 sec, but 20 users give 20x the iterations of a single user, so 120 sec x 20 = 2400 sec, i.e. roughly 40 min total for 1 user).
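Refining that rough estimate with the actual response times reported above (a sketch; exact numbers will vary from run to run):

```java
public class EquivalentDuration {
    public static void main(String[] args) {
        // 20 users for 120 s at ~0.92 s per iteration (from the results above)
        int users = 20;
        int durationSec = 120;
        double avgRt20Users = 0.92;
        double avgRt1User = 1.3;  // single-user response time

        double iterations = users * durationSec / avgRt20Users;  // ~2600
        double singleUserSec = iterations * avgRt1User;          // ~3400 s
        System.out.printf("~%.0f iterations -> ~%.0f s (~%.0f min) with 1 user%n",
                iterations, singleUserSec, singleUserSec / 60);
    }
}
```

Because the single user is also slower per iteration (1.3 s vs 0.92 s), matching the iteration count actually takes closer to an hour than to 40 minutes.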

How to correctly configure number of requests in JMeter?

My performance test strategy is to use JMeter to continually send 3 different requests to a Linux server: Init, Calculate, and Apply.
The number of active users at the peak hour is 40, and each request is sent 200 times per hour. The load test should be conducted at the peak usage for no less than one hour.
If my understanding is correct, running the test for two hours should eventually show 1200 samples in the result table (200 requests × 3 × 2 hours). However, with the following configuration, far more samples are sent to the server.
Thread Group:
- Number of threads: 200
- Ramp-up time: 3600 seconds
- Duration: 7200 seconds
I have also tried setting the number of threads to 50; the result is still far more than I expected.
May I know how to configure JMeter correctly?
Your configuration should be:
Number of threads: 40
Ramp-up time: should be short in your case; its value tells JMeter how long to take to go from 0 to 40 threads.
Duration is OK.
Finally, as you want 200 requests per hour per request type, i.e. 600 per hour for the 3 of them, which is 10 per minute, you need to use a Constant Throughput Timer (whose target is expressed in samples per minute) inside a Flow Control Action (formerly Test Action) sampler.
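The arithmetic behind the timer value (the Constant Throughput Timer target is expressed in samples per minute):

```java
public class ThroughputTarget {
    public static void main(String[] args) {
        int requestsPerHourEach = 200;  // per request type
        int requestTypes = 3;           // Init, Calculate, Apply

        int totalPerHour = requestsPerHourEach * requestTypes;  // 600
        double perMinute = totalPerHour / 60.0;                 // 10.0
        System.out.printf("Constant Throughput Timer target: %.1f samples/min%n",
                perMinute);

        // sanity check against the expectation in the question
        System.out.println("Expected samples in 2 h: " + totalPerHour * 2);  // 1200
    }
}
```

With the timer pacing the load, the 40 threads only define how many users are available; the timer caps how often they actually fire.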

web stress and load testing with jmeter - How good is good? benchmark guide?

I am going to set up some load testing test with jmeter.
However, I can't find on the net what a good benchmark is.
How good is good?
My site gets about 80,000 hits/day on average.
After mining data from one month of access logs, I managed to find out:
Average low traffic is about 1 request / sec
Average medium traffic is about 30 requests / sec
Average high traffic is about 60 requests / sec
I came up with the following plan to test 8 kinds of pages; their respective average response times on a single page load are:
Page 1 - 409ms
Page 2 - 730ms
Page 3 - 1412ms
Page 4 - 1811ms
Page 5 - 853ms
Page 6 - 657ms
Page 7 - 10ms
Page 8 - 180ms
Simulate average traffic scenario - 10 requests / second test
Simulated Users: 10 threads
Ramp Period: 1 sec
Duration: 30 min
Simulate medium traffic scenario - 30 requests / second test
Simulated Users: 30 threads
Ramp Period: 1 sec
Duration: 30 min
Simulate super high traffic scenario - 100 requests / second test
Simulated Users: 100 threads
Ramp Period: 1 sec
Duration: 30 min
Simulate an attack scenario - 350 requests / sec (based on MySQL max connections of 500)
Simulated Users: 100 threads
Ramp Period: 1 sec
Duration: 10 min
While running these tests, I plan to monitor the following:
CPU load average (web server)
CPU load average (DB server)
RAM usage (web server)
RAM usage (DB server)
Average response time
This is to see the memory and CPU utilisation and whether there is a need to increase RAM, CPU, etc. Also, I capped MySQL max connections at 500.
In all tests, I expect response times to ideally stay below a capped benchmark of 10 seconds.
How does this sound? I have no SLA to follow; this is just a plan based on research into current web traffic. The thing is, I don't know what the server's threshold is. I believe that with the hardware below it should already exceed what's needed to host our pages.
Web server: 8 GB RAM, 4 cores (1 server, plus a cold backup server; no load balancer)
MySQL server: 4 GB RAM, 2 cores
We are planning to move to the cloud, so ideally I also need to find out through this load testing what kind of instance best fits our scenario.
Would really appreciate any good advice here.
Read about SLA
I recommend that you determine your website performance requirements first, but by default you can use a "standard" response time limit of about 10 seconds.
Next, find the user activity profile over a day, a week, and a month. Then you can choose base load values for each profile.
Find the most frequent and the "heaviest" user actions, and create a test scenario with these actions (keep the percentage ratio of actions in mind; don't run all actions at the same frequency).
Limit the load with a throughput timer (set the needed value in hits per minute). Select the number of threads in the thread group experimentally (HTTP load does not need many of them); a rough starting point is sketched below.
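A rough way to pick that starting thread count is Little's Law (concurrency = throughput × response time); a sketch using the page times listed in the question:

```java
public class ThreadEstimate {
    public static void main(String[] args) {
        double targetRps = 30.0;  // the medium-traffic profile above

        // slowest page (Page 4) and a fast one (Page 1), in seconds
        double[] responseTimesSec = {1.811, 0.409};

        for (double rt : responseTimesSec) {
            // Little's Law: required threads = arrival rate * time in system
            double threads = targetRps * rt;
            System.out.printf("%.0f req/s at %.3f s/page -> ~%.0f threads%n",
                    targetRps, rt, Math.ceil(threads));
        }
        // 30 threads only deliver 30 req/s if each page answers in ~1 s;
        // slower pages need proportionally more threads (or a throughput timer)
    }
}
```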
Add the JMeter plugins from google-code. Use the "number of threads per second" and "response time" graphs and the standard Summary Report listener. Don't use the standard graphs with high load; they are too slow. You can also save the data from each listener to a file (I strongly recommend this; select the CSV format). Graphs show averaged points, so if you want a precise picture, build the graphs manually in Excel or Calc. Grab performance metrics from the server (CPU, memory, I/O, DB, network interface).
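If you save the listener data as CSV, a small sketch like this can rebuild per-label averages outside of JMeter (assuming the default JTL CSV layout with a header row and the standard timeStamp, elapsed, label columns; results.jtl is a placeholder path):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class JtlSummary {
    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Path.of("results.jtl"));

        Map<String, long[]> stats = new HashMap<>();  // label -> {count, totalMs, maxMs}
        for (String line : lines.subList(1, lines.size())) {  // skip the header row
            String[] cols = line.split(",");  // naive split: assumes no commas in labels
            long elapsed = Long.parseLong(cols[1]);  // "elapsed" column, in ms
            String label = cols[2];                  // sampler label
            long[] s = stats.computeIfAbsent(label, k -> new long[3]);
            s[0]++;
            s[1] += elapsed;
            s[2] = Math.max(s[2], elapsed);
        }

        stats.forEach((label, s) -> System.out.printf(
                "%s: %d samples, avg %d ms, max %d ms%n",
                label, s[0], s[1] / s[0], s[2]));
    }
}
```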
Set the ramp-up period to about 20-60 minutes and run the test for about 1-2 hours. If all is well, then test with a 200% (or higher) load profile to find the peak performance. Also run a test of about 1-2 days, so you'll see the server's stability.
