How to properly load test JMS with JMeter?

I've set up a thread group with a JMS Point-to-Point sampler and it works fine with our application.
I send an XML message, using the ${__UUID()} function for some fields in order to guarantee that each message is unique, and I expect a response within a 60 s timeout whose content must contain a simple pattern (Response Assertion).
I tried a simple load test sending 1000 messages, but I'm confused about how I should configure the threads: I get different results with different configurations.
Case 1:
- Threads: 1000
- Ramp-up: 1
- Loop: 1
- Avg time/sample: ~80s
- Total time: 02:41
Case 2:
- Threads: 1000
- Ramp-up: 10
- Loop: 1
- Avg time/sample: ~60s
- Total time: 01:43
- Errors: 3%
Case 3:
- Threads: 1000
- Ramp-up: 100
- Loop: 1
- Avg time/sample: ~12s
- Total time: 02:13
Case 4:
- Threads: 1
- Ramp-up: 1
- Loop: 1000
- Avg time/sample: ~1.2s
- Total time: >16min
Case 5:
- Threads: 10
- Ramp-up: 1
- Loop: 100
- Avg time/sample: ~1.1s
- Total time: 02:12
Case 6:
- Threads: 100
- Ramp-up: 1
- Loop: 10
- Avg time/sample: ~7.3s
- Total time: 01:30
How am I supposed to interpret these results? Which configuration should I use?

It depends on what you're trying to achieve; the main performance testing types are:
Load Testing - when you put your system under its anticipated load and check whether response times and transactions per second meet your non-functional requirements or SLAs. If this is the case, just configure JMeter to replicate the expected usage of your application exactly, and that is your "configuration".
Stress Testing - when you're identifying the limits of your system and looking for the bottleneck. In this case I would recommend starting from 1 thread and gradually increasing the load while watching the following charts:
Active Threads Over Time
Response Times Over Time
Transactions Per Second
Response Codes Per Second
Ideally the number of transactions per second should increase as you add users while response time stays the same. At some point, however, you will see response time go up and transactions per second go down - that is the saturation point, the point of maximum system performance. You can record the number of active threads or requests per second at that stage and report it. Additionally, you can look for the root cause of the performance problem and try to fix it.
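To relate this back to the six configurations in the question, the basic arithmetic is: total samples = threads * loops, and roughly threads / ramp-up new threads are started per second. Below is a minimal sketch that only reproduces the configured numbers; actual throughput also depends on the JMS response time and the 60 s timeout.

```python
# Rough arithmetic for the six configurations in the question,
# assuming a single JMS Point-to-Point sampler per thread group.
cases = [
    ("Case 1", 1000, 1,   1),
    ("Case 2", 1000, 10,  1),
    ("Case 3", 1000, 100, 1),
    ("Case 4", 1,    1,   1000),
    ("Case 5", 10,   1,   100),
    ("Case 6", 100,  1,   10),
]

for name, threads, ramp_up, loops in cases:
    total_samples = threads * loops       # every case sends 1000 messages in total
    start_rate = threads / ramp_up        # threads started per second during ramp-up
    print(f"{name}: {total_samples} samples, "
          f"~{start_rate:g} threads started per second, "
          f"peak concurrency up to {threads}")
```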

Related

Does the loop define the number of requests or calls?

Can someone explain whether the loop sets the number of requests, or is it there to produce an average?
If I have 100 users, ramp-up set to 0 and loop 1, does it mean that I have only 100 users, and if I increase the loop to 2, does it mean that there will be 200 users making requests?
If I needed to test with 200 users, why would I not simply set the number of users to 200? What does the loop do differently and how does it affect the result?
If you have only one request (sampler) under the Thread Group:
With loop count 1 - 100 users will execute 1 request 1 time each, 100 requests in total
With loop count 2 - 100 users will execute 1 request 2 times each, 200 requests in total
With regards to setting the ramp-up to 0 - it's not the best idea. It is better to increase the load gradually; this way you will be able to correlate the increasing load with other performance metrics, in particular response time and requests per second.
More information: JMeter Ramp-Up - The Ultimate Guide
P.S. It might be easier to use e.g. the Ultimate Thread Group, which makes defining the workload simpler; for example, you can configure JMeter to start with 1 user, ramp up to 200 users over 60 seconds, hold that load for another 60 seconds and then gradually decrease it.
You can install the Ultimate Thread Group as part of the Custom Thread Groups bundle using the JMeter Plugins Manager.
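For reference, a minimal sketch of the thread/loop arithmetic described above (the numbers are just the examples from the question; samplers_per_loop is a hypothetical extra parameter for thread groups with more than one sampler):

```python
# Thread Group arithmetic: each thread runs every sampler under the
# Thread Group loop_count times, and ramp-up only spreads the thread starts.
def total_requests(threads: int, loop_count: int, samplers_per_loop: int = 1) -> int:
    return threads * loop_count * samplers_per_loop

def thread_start_interval(threads: int, ramp_up_seconds: float) -> float:
    # JMeter spaces thread starts evenly across the ramp-up period.
    return ramp_up_seconds / threads

print(total_requests(threads=100, loop_count=1))               # 100 requests in total
print(total_requests(threads=100, loop_count=2))               # 200 requests in total
print(thread_start_interval(threads=200, ramp_up_seconds=60))  # a new user every 0.3 s
```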

JMeter Throughput Shaping Timer sending more requests than desired

I am using JMeter 4.0 with the Throughput Shaping Timer, and my configuration is as follows:
bzm - Concurrency Thread Group:
Target Concurrency: 1000
Ramp-up time: 1
Ramp-up steps: 1
Hold Target Rate: 100 min
jp@gc - Throughput Shaping Timer:
Start RPS: 333 || End RPS: 333 || Duration (sec): 1200
Since the test duration is set to 1200 seconds and the RPS is 333/sec, the number of requests over the whole test should be 333 * 1200 = 399600. But the actual number of hits comes out in the range of 400000 - 410000 requests.
How can the Throughput Shaping Timer be restricted so that it does not send extra requests?
Your total test duration isn't 1200 seconds. Looking at your Concurrency Thread Group configuration, the test duration is actually 6001 seconds (the ramp-up for 1000 users is 1 second and the Hold Target Rate time is 6000 seconds).
To get your desired RPS, you have to use the following formula to define the number of threads in the Concurrency Thread Group:
Thread pool size = RPS * <max response time> / 1000
If your response time is 1 second, then 333 threads are enough to achieve this RPS; I guess you have used more threads than that.
According to your test plan, 1000 users become active within 1 second, they are throttled to 333 RPS for the first 1200 seconds, and then they keep sending requests for the remaining time (6001 - 1200 = 4801 seconds), because you told the 1000 users to hold the load for 100 minutes. That is why you are getting more requests than desired.
So, define the number of threads and the ramp-up time accordingly in your Thread Group, and also align the test duration (in this case the hold-load time should be 20 minutes, not 100 minutes).
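A minimal sketch of the two calculations above - the thread-pool-size formula and the schedule mismatch (the 1-second response time is only an assumption):

```python
import math

def threads_needed(target_rps: float, max_response_time_ms: float) -> int:
    # Thread pool size ~= RPS * <max response time> / 1000 (the formula quoted above).
    return math.ceil(target_rps * max_response_time_ms / 1000)

print(threads_needed(333, 1000))   # 333 threads if responses take ~1 s

# Scheduled duration of the Concurrency Thread Group as configured:
ramp_up_s = 1
hold_s = 100 * 60                  # "Hold Target Rate: 100 min"
print(ramp_up_s + hold_s)          # 6001 s, far longer than the 1200 s shaping profile

# Hits expected from the shaping profile alone:
print(333 * 1200)                  # 399600
```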
JMeter is not capable of immediately stopping 1000 threads when the Throughput Shaping Timer reaches its duration limit; JMeter "tells" the threads to stop once the 1200 seconds have passed, and it can take a while to shut them down gracefully.
Given your setup, the only way of getting exactly 399600 samples is to use a Throughput Controller in Total Executions mode.
This way you can be confident that no more than 399600 samples will be executed (the number can be lower, by the way, if your application's response time exceeds the available budget of roughly 1000 threads / 333 RPS ≈ 3 seconds).

How to correctly configure number of requests in JMeter?

My performance test strategy is to use JMeter to send 3 different requests to a Linux server continually. They are Init, Calculate and Apply.
The number of active users at the peak hour is 40, and the number of each request per hour is 200. The load test should be conducted at peak usage for no less than one hour.
If my understanding is correct, running the test for two hours should eventually produce 1200 samples in the result table (200 requests * 3 * 2 hours). However, with the following configuration many more samples are sent to the server.
Thread Group:
- Number of threads: 200
- Ramp-up time: 3600 seconds
- Duration: 7200 seconds
I have also tried setting the number of threads to 50; the result is still far more than I expect.
May I know how to configure JMeter correctly?
Your configuration should be:
Number of threads : 40
Ramp-up time: should be short in your case; its value defines how long it takes for the thread count to go from 0 to 40.
Duration is ok
Finally, since you want 200 requests per hour for each of the 3 request types, i.e. 600 per hour or 10 per minute in total, you need to use a Constant Throughput Timer inside a Test Action sampler.
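A quick arithmetic check behind that 10-per-minute target (the Constant Throughput Timer works in samples per minute):

```python
# Arithmetic behind the Constant Throughput Timer target described above.
requests_per_hour_per_type = 200
request_types = 3                     # Init, Calculate and Apply

total_per_hour = requests_per_hour_per_type * request_types   # 600 requests per hour
target_samples_per_minute = total_per_hour / 60                # the timer's unit
print(target_samples_per_minute)                               # 10.0

# Over a 2-hour (7200 s) test this gives the sample count the question expects:
print(total_per_hour * 2)                                      # 1200 samples
```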

Jmeter interpreting results in simple terms

So I'm trying to test a website and trying to interpret the Aggregate Report using "common sense" (I tried looking up the meanings of each result, but I cannot understand how they should be interpreted).
TEST 1
Thread Group: 1
Ramp-up: 1
Loop Count: 1
- Samples 1
- Average 645
- Median 645
- 90% Line 645
- Min 645
- Max 645
- Throughput 1.6/sec
So I am under the assumption that the first result is the best outcome.
TEST 2
Thread Group: 5
Ramp-up: 1
Loop Count: 1
- Samples 1
- Average 647
- Median 647
- 90% Line 647
- Min 643
- Max 652
- Throughput 3.5/sec
I am assuming TEST 2 result is not so bad, given that the results are near TEST 1.
TEST 3
Thread Group: 10
Ramp-up: 1
Loop Count: 1
- Samples 1
- Average 710
- Median 711
- 90% Line 739
- Min 639
- Max 786
- Throughput 6.2/sec
Given the dramatic difference, I am assuming that if 10 users request the website concurrently, it will not perform well. How should this set of tests be interpreted in simple terms?
It is as simple as available resources.
Response times depend on many things; the following are critical factors:
Server Machine Resources (Network, CPU, Disk, Memory etc)
Server Machine Configuration (type of server, number of nodes, no. of threads etc)
Client Machine Resources (Network, CPU, Disk, Memory etc)
As you can see, it is mostly about how busy the server is responding to other requests and how busy the client machine is generating and processing the load (I assume you run all 10 users on a single machine).
The best way to find the actual reason is to monitor these resources using nmon for Linux and perfmon or Task Manager for Windows (or any other monitoring tool), and compare the differences when you run 1, 5 and 10 users.
Theory aside, I assume it is taking longer because you are applying the load suddenly, so the server is still busy processing the previous requests.
Are you running the client and the server on the same machine? If yes, that would mean the system resources are shared between the client threads (10 threads) and the server threads.
Response time = time for the client to send the request to the server + server processing time + time for the server to send the response back to the client.
In your case, one or more of these times may have increased.
If you have good bandwidth, then it is probably the server processing time.
Your results are confusing.
For thread counts of 5 and 10 you have reported the same number of samples - 1. It should be 1 sample (1 thread), 5 samples (5 threads) and 10 samples (10 threads). Your experiment has too few samples to conclude anything statistically. You should model your load so that the 1-thread load is sustained for a longer period before you ramp up to 5 and 10 threads. If you are running a small test to assess the scalability of your application, you could do something like
1 thread - 15 mins
5 threads - 15 mins
10 threads - 15 mins
and provide observations for each 15-minute period. If your application really scales, it should maintain the same response time even under the increased load.
Looking at your results, I don't see any issues with your application - nothing is varying much. Again, you don't have enough samples to draw a statistically relevant conclusion.
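As a sanity check on the Aggregate Report numbers: JMeter's Throughput column is simply the number of samples divided by the elapsed test time. A minimal sketch, assuming the sample counts are really 1, 5 and 10 as they should be (the elapsed times below are back-calculated from the reported throughput, not measured):

```python
# Throughput in the Aggregate Report = samples / elapsed time,
# so the reported values can be cross-checked against the sample counts.
tests = [
    ("TEST 1", 1, 1.6),    # (label, samples, reported throughput per second)
    ("TEST 2", 5, 3.5),
    ("TEST 3", 10, 6.2),
]

for label, samples, reported_tps in tests:
    elapsed = samples / reported_tps
    print(f"{label}: ~{elapsed:.2f} s elapsed for {samples} sample(s)")
```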

Jmeter - What is the functioning of “Per User” checkbox under Throughput Controller?

I need to divide the load on my application by percentages, i.e. Login module - 60%, Accounts - 10%, other modules - 30%. After some research I found an option in JMeter, the Throughput Controller, with which I can control these percentages. It has a checkbox named "Per User", and I don't understand what it does.
As per the BlazeMeter blog here, I tried one scenario as below with the "Per User" checkbox checked.
Select "Total Executions" from the dropdown.
Set Throughput to 40.
Threads used - 10, loop count 1
Now, as per the blog, the specific transaction should execute 400 times, but there were zero executions of that transaction.
I tried another scenario, also with the "Per User" checkbox checked.
Select "Total Executions" from the dropdown.
Set Throughput to 60.
Threads used - 10, loop count 1
Now, as per the blog, the specific transaction should execute 600 times, but it executed 10 times.
Can any experts out there tell me what I am doing wrong here, or is more clarity needed on how this checkbox works?
To understand the Throughput Controller (TC), just add one TC and one sampler (inside the TC) plus an Aggregate Report, then play with all the parameters of the Throughput Controller.
From Official Documentation:
Total executions:
causes the controller to stop executing after a certain number of executions have occurred.
and
Per User: If checked, per user will cause the controller to calculate whether it should execute on a per user (per thread) basis. If unchecked, then the calculation will be global for all users. For example, if using total execution mode, and uncheck "per user", then the number given for throughput will be the total number of executions made. If "per user" is checked, then the total number of executions would be the number of users times the number given for throughput.
Read both the statements carefully multiple times.
In both of your scenarios the maximum possible number of executions is 10 (thread count * loop count). Although you specified Total Executions as 40 or 60, you would first need to provide more than 40/60 iterations in order to see all of them executed. So always specify more iterations (via thread count and loop count) than the Total Executions value.
You should use Percent Executions instead of Total Executions to match your requirement. Again, I suggest simulating with one sampler and understanding the behaviour by varying the percentages.
Following are some scenarios and expected behaviour (EB).
Scenario:1
Thread Group - 10, Loop Count - 1, Throughput - 40 (Total Executions), Per User - Checked.
EB: Sampler will run only 10 times.
Scenario:2
Thread Group - 40, Loop Count - 1, Throughput - 40 (Total Executions), Per User - Checked.
EB: Sampler will run only 40 times.
Scenario:3
Thread Group - 40, Loop Count - 1, Throughput - 40 (Total Executions), Per User - Unchecked.
EB: Sampler will run only 40 times.
Scenario:4
Thread Group - 100, Loop Count - 1, Throughput - 40 (Total Executions), Per User - Checked.
EB: Sampler will run 100 times. The limit is calculated per user (40 executions each); as no user reaches it, all 100 iterations execute.
Scenario:5
Thread Group - 100, Loop Count - 1, Throughput - 40 (Total Executions), Per User - Unchecked.
EB: Sampler will run only 40 times. The count is calculated globally; once the sampler has executed 40 times across all threads, it stops executing.
Scenario:6
Thread Group - 100, Loop Count - 40, Throughput - 40 (Total Executions), Per User - Checked.
EB: Sampler will run 4000 times (each user -> 40 times, 100*40). The limit is calculated per user; here each user reaches its 40-execution limit, so there are no further executions after 40 per user.
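Below is a toy model of the counting logic (not JMeter's actual implementation) that reproduces the expected behaviour in the scenarios above:

```python
# Toy model of the Throughput Controller in "Total Executions" mode,
# only to reproduce the scenario table above - this is not JMeter's real code.
def executions(threads: int, loops: int, throughput: int, per_user: bool) -> int:
    if per_user:
        # Each thread may execute the controller at most `throughput` times.
        return sum(min(loops, throughput) for _ in range(threads))
    # Global counter: all threads together share `throughput` executions.
    return min(threads * loops, throughput)

print(executions(10,  1,  40, per_user=True))    # 10   (Scenario 1)
print(executions(40,  1,  40, per_user=True))    # 40   (Scenario 2)
print(executions(40,  1,  40, per_user=False))   # 40   (Scenario 3)
print(executions(100, 1,  40, per_user=True))    # 100  (Scenario 4)
print(executions(100, 1,  40, per_user=False))   # 40   (Scenario 5)
print(executions(100, 40, 40, per_user=True))    # 4000 (Scenario 6)
```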
