How to spread requests over specific times in JMeter

I'm new to JMeter and having some trouble understanding how to spread requests over specific times.
I need to run the test for 16 hours and spread it as follows:
Morning shift – “low traffic”: 07:00-15:00 (8h) – 20% of total traffic
Noon shift – “high traffic”: 15:00-19:00 (4h) – 50% of total traffic
Evening shift 1 – “very high traffic”: 19:00-20:00 (1h) – 20% of total traffic
Evening shift 2 – “low traffic”: 20:00-23:59 (3h) – 10% of total traffic
Traffic is whatever number of requests comes from 500 threads, a 500-second ramp-up, and 1000 loops.
So, for example, if we have 100,000 requests, I need:
20% of it in the Morning shift thread group over 8 hours
50% of it in the Noon shift thread group over 4 hours
20% of it in the Evening shift 1 thread group over 1 hour
and 10% of it in the Evening shift 2 thread group over 3 hours
Any help is much appreciated.

You can make use of the Ultimate Thread Group.
It can help you spread your load over time; have a look at the documentation and you should be done. A rough sketch of a possible schedule is below.
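For illustration only, here is roughly how the Ultimate Thread Group's "Threads Schedule" rows could map onto the four shifts, taking time zero as 07:00. The thread counts are placeholders chosen to mirror the 20/50/20/10 split; how many threads you actually need depends on your response times:

    Start Threads Count | Initial Delay, sec | Startup Time, sec | Hold Load For, sec | Shutdown Time, sec
    100 (placeholder)   |      0             | 60                | 28800              | 60    (Morning, 8h)
    250 (placeholder)   |  28800             | 60                | 14400              | 60    (Noon, 4h)
    100 (placeholder)   |  43200             | 60                |  3600              | 60    (Evening 1, 1h)
     50 (placeholder)   |  46800             | 60                | 10800              | 60    (Evening 2, 3h)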

Traffic is whatever number of requests comes from 500 threads, a 500-second ramp-up, and 1000 loops.
The aforementioned configuration results in 500,000 total requests (500 threads × 1000 loops).
So you need to send 500,000 requests in 16 hours, of which:
Morning shift - 100,000 requests in 8 hours (100,000 requests in 28,800 seconds ≈ 3.47 requests per second)
Noon shift - 250,000 requests in 4 hours (250,000 requests in 14,400 seconds ≈ 17.36 requests per second)
Evening shift 1 - 100,000 requests in 1 hour (100,000 requests in 3,600 seconds ≈ 27.78 requests per second)
Evening shift 2 - 50,000 requests in 3 hours (50,000 requests in 10,800 seconds ≈ 4.63 requests per second)
I would recommend going for the Concurrency Thread Group and Throughput Shaping Timer combination; an example setup follows.
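A hedged sketch of the Throughput Shaping Timer "Request Per Second" schedule matching the rates above (each row holds a constant rate for its shift's duration; the four durations sum to 57,600 seconds, i.e. 16 hours):

    Start RPS | End RPS | Duration, sec
     3.47     |  3.47   | 28800
    17.36     | 17.36   | 14400
    27.78     | 27.78   |  3600
     4.63     |  4.63   | 10800

Pair it with a Concurrency Thread Group sized generously enough to sustain the peak rate; the plugin's __tstFeedback function can adjust the thread pool dynamically (check the plugin documentation for the exact usage).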

Related

Understanding difference between thread group properties

I've started distributed performance testing using JMeter. If I give scenario 1:
Number of threads: 10
Ramp-up period: 1
Loop count: 300
Everything runs smoothly, as scenario 1 translates to 3000 requests in 300 seconds, i.e. 10 requests per second.
If I give scenario 2:
Number of threads: 100
Ramp-up period: 10
Loop count: 30
AFAIK, scenario 2 is also executing 3000 requests in 300 seconds, i.e. 10 requests per second.
But things started failing, i.e. the server faces heavy load and requests fail. In theory both scenarios should be the same, right? Am I missing something?
All of these are heavy calls, each one will take 1-2 seconds under normal load.
In an ideal world, for scenario 2 you would have 100 requests per second and the test would finish in 30 seconds.
The fact that in the 2nd case you get the same execution time indicates that your application cannot process incoming requests faster than 10 per second.
Try increasing the ramp-up time for the 2nd scenario and look into the following charts:
Active Threads Over Time
Response Times Over Time
Transactions Per Second
Normally, when you increase the load, the number of "Transactions Per Second" should increase by the same factor while "Response Time" remains the same. Once response time starts growing and the number of transactions per second starts decreasing, it means you have passed the saturation point and discovered the bottleneck. You should report the point of maximum performance and investigate the reasons for the first bottleneck.
More information: What is the Relationship Between Users and Hits Per Second?
In scenario 2, after 10 seconds you have 100 concurrent users executing requests in parallel; your server may not handle such a load well, or may deliberately throttle it.
Concurrent user load testing sends simultaneous artificial traffic to a web application in order to stress the infrastructure and record system response times during periods of sustained heavy load.
In scenario 1, after just 1 second you have only 10 concurrent users looping through the flow, which puts far less load on the server.
Note that your server may also restrict the number of concurrent users, possibly only for specific request(s).
We should be very clear about the ramp-up time. The following is an extract from the official documentation:
"The ramp-up period tells JMeter how long to take to 'ramp-up' to the full number of threads chosen. If 10 threads are used, and the ramp-up period is 100 seconds, then JMeter will take 100 seconds to get all 10 threads up and running. Each thread will start 10 (100/10) seconds after the previous thread was started."
Scenario 1: Number of threads: 10
Ramp-up period: 1
Loop count: 300
In the above scenario, 10 threads (virtual users) are created in 1 second, and each user loops 300 times; hence there will be 3000 requests to the server. Throughput cannot be calculated in advance with this configuration, as it fluctuates based on server capability, network, etc. You can, however, control the throughput with certain components and plugins.
Scenario 2: Number of threads: 100
Ramp-up period: 10
Loop count: 30
In scenario 2, 100 threads (virtual users) are created in 10 seconds, and those 100 virtual users send requests to the server concurrently, each sending 30 requests. In the second scenario you will therefore have a higher throughput (number of requests per second) than in scenario 1. It looks like the server cannot handle 100 users sending requests concurrently.
Ramp-up time applies only to the first cycle of each thread; it simulates the delay between the first requests of the users in their first iteration. A small sketch of the arithmetic is below.
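One detail worth spelling out (my own arithmetic, not part of the answers above): both scenarios start new threads at exactly the same pace, but scenario 2 keeps ramping ten times longer and therefore ends at ten times the concurrency. A minimal Java sketch:

    public class RampUpMath {
        public static void main(String[] args) {
            // Scenario 1: 10 threads over 1 second -> one new thread every 0.1 s,
            // peak concurrency of 10 users
            System.out.printf("Scenario 1: spacing %.1f s, peak 10 users%n", 1.0 / 10);
            // Scenario 2: 100 threads over 10 seconds -> also one new thread every 0.1 s,
            // but the ramp-up lasts 10 s, so peak concurrency reaches 100 users
            System.out.printf("Scenario 2: spacing %.1f s, peak 100 users%n", 10.0 / 100);
        }
    }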

JMeter loop count value behaviour

This is with respect to JMeter's loop count behaviour.
Number of threads: 4000
Ramp-up period: 800
Loop count: 2
Action to be taken after a sampler error: Continue
Same user on each iteration: Yes
Delay thread creation until needed: Yes
This results in 8000 requests being made in 800 seconds. However, my use case is 4000 requests in 800 seconds (count=1), then another 4000 in the next 800 seconds (count=2).
What changes can I make for this?
Ramp-up period doesn't mean 8000 requests in 800 seconds; it results in the following:
JMeter starts with 1 virtual user and adds 5 virtual users each second for 800 seconds (4000 threads / 800 seconds)
Each virtual user executes the Samplers top to bottom for the specified number of iterations
When there are no more Samplers to execute and loops to iterate, the thread is shut down
My expectation is that you have only 1 Sampler and its response time is relatively low (less than 1 second); you can check the actual number of virtual users and the produced load using the Active Threads Over Time and Transactions Per Second listeners.
If you need to implement 4000 requests in 800 seconds twice, the easiest option would be going for the Throughput Shaping Timer and configuring it to reach and maintain 5 requests per second (4000 / 800) for 800 seconds, twice, with a 10-second window of doing nothing between the loops, as sketched below.
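A hedged sketch of the corresponding "Request Per Second" schedule rows (three rows: hold, idle, hold):

    Start RPS | End RPS | Duration, sec
    5         | 5       | 800
    0         | 0       | 10
    5         | 5       | 800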

150 TPS for 30 users with 50,000 requests in 6 hrs

Suppose we have to achieve 150 TPS for 30 users with 50,000 requests in JMeter, where the test runs for 6 hours.
I want to hit 3 HTTP requests in this scenario.
Can you please suggest how I can configure this?
I have tried to create a thread group where users are 25 and duration is 28,800 seconds, but I am unable to achieve the above.
I need 150 TPS for 50,000 requests in 6 hours.
If you want to execute 100,000 requests in 8 hours with 25 users, you need to perform approximately 3.5 TPS. In this case use the Constant Throughput Timer to limit the request execution rate to 208.3 requests per minute (the timer is configured in samples per minute, not per second).
If you want to achieve a 150 TPS rate, I doubt you will be able to do this with 25 users (unless the response time of your application is around 0.16 seconds, since 25 users / 150 TPS ≈ 166 ms per request). You might want to allocate more virtual users in order to reach 150 TPS, using the Concurrency Thread Group and Throughput Shaping Timer combination. However, given 150 TPS and an 8-hour test duration you will get 4,320,000 requests.
So double-check your SLA/NFR, as the requirements you listed above are mutually exclusive and cannot be put together.
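For reference, a minimal Java sketch of the arithmetic behind these numbers (nothing JMeter-specific, just the calculations from the answer):

    public class TpsMath {
        public static void main(String[] args) {
            // 100,000 requests over 8 hours
            double tps = 100_000.0 / (8 * 3600);   // ~3.47 TPS
            double perMinute = tps * 60;           // ~208.3 requests/minute for the Constant Throughput Timer
            // Response time required for 25 users to sustain 150 TPS (Little's law: N = X * R)
            double responseTime = 25.0 / 150;      // ~0.167 s per request
            // Total requests produced by 150 TPS over 8 hours
            long total = 150L * 8 * 3600;          // 4,320,000
            System.out.printf("TPS=%.2f, per-minute=%.1f, required RT=%.3f s, total=%d%n",
                    tps, perMinute, responseTime, total);
        }
    }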

How do I achieve the expected throughput in JMeter for a given scenario?

I have about 300 users (configured in the thread group) who would each perform an activity (e.g. run an e-learning course) twice. That means I should expect about 600 iterations, i.e. 300 users performing the activity twice.
My thread group contains the following transaction controllers:
Login
Dashboard
Launch Course
Complete Course
Logout
As I need 600 iterations per 5400 seconds, i.e. 3600 + 900 + 900 seconds (1 hour steady state + 15 min ramp-up + 15 min ramp-down), and the sum of sampler requests within the whole thread group is 18, would I be correct to say I need about 2 RPS?
Total number of iterations * number of requests per iteration = Total number of requests
600 * 18 = 10800
Total number of requests / Total test duration in seconds = Requests per second
10800 / 5400 = 2
Are my calculations correct?
In addition, what is the best approach to achieve the expected throughput?
Your calculation looks more or less correct. If you need to limit your test throughput to 2 RPS you can do it using the Constant Throughput Timer or the Throughput Shaping Timer; see the sketch below.
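One hedged practical note: the Constant Throughput Timer's target field is expressed in samples per minute, so 2 RPS corresponds to 120, while the Throughput Shaping Timer takes requests per second directly:

    Constant Throughput Timer:  target throughput = 2 RPS x 60 = 120.0 samples/minute
    Throughput Shaping Timer:   Start RPS = 2 | End RPS = 2 | Duration = 5400 sec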
However, 2 RPS is little more than statistical noise; my expectation is that you need a much higher load to really test your application's performance, i.e.:
Simulate the anticipated number of users for a short period. Don't worry about iterations, just let your test run, e.g. for an hour, with the number of users you expect. This is called load testing.
Do the same but for a longer period of time (e.g. overnight or over a weekend). This is called soak testing.
Gradually increase the number of users until you see errors or response times start exceeding acceptable thresholds. This is called stress testing.

Simultaneous SOAP Requests with JMeter

We have test-plan like below:
Test Plan
Thread Group
SOAP/XML-RPC Request 1
SOAP/XML-RPC Request 2
SOAP/XML-RPC Request 3
We have an issue where our service goes down on certain days under large load. We want to load test requests/responses per second, ranging from 500 to 10,000 requests over 20 minutes to 1 hour.
Setting the thread value to 1200, for example, only gives us roughly 60 requests per second. Any help to get this value up would be great, folks.
ranging from 500 to 10,000 requests over 20 mins to 1 hour.
Do you mean 500 to 10,000 requests per second, or, for example, 10,000 requests over 30 minutes, which is a little over 5 requests per second?
If you are starting 1200 threads and not getting more than 60 requests per second, it is probably not JMeter limiting the throughput. If you are sure the system can manage a higher throughput (i.e. it is not a service hardware problem), then I would check the network capacity between the load-generating server and the service for bottlenecks.
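A hedged aside (my own arithmetic via Little's law, not part of the original answer): 1200 concurrent threads producing only 60 requests per second implies an average of about 20 seconds per request, including any waits, which is consistent with a bottleneck between the load generator and the service:

    public class LittlesLaw {
        public static void main(String[] args) {
            int threads = 1200;        // concurrent virtual users (N)
            double throughput = 60.0;  // observed requests per second (X)
            // Little's law: N = X * R  =>  R = N / X
            double avgTimePerRequest = threads / throughput; // ~20 seconds
            System.out.printf("Implied average time per request: %.1f s%n", avgTimePerRequest);
        }
    }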
