Is it possible to hold a session for a certain period of time in JMeter while using the Ultimate Thread Group?
Ultimate Thread Group: 100 users ramp up and run for 15 minutes, then another set of 100 users ramps up after 15 minutes, and this continues until 1000 users are reached. No iteration, and the total sample count should be 1000. The application session timeout is 15 minutes. The logout transaction should execute once all 1000 users are reached.
Attached is the load profile:
Load Profile
Total sample count should be 1000
This is not something you can guarantee, because the total number of requests you will be able to make with the above setup depends mostly on:
your application response time
the nature of your test (e.g. think times)
You can use, for example, a Dummy Sampler, where you can mimic various connect times, latencies and response times, and see how many requests you end up making in 2.5 hours. My expectation, however, is that 1000 concurrent users will execute far more than 1000 requests over that test duration, as the estimate below illustrates.
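As a rough illustration (the cycle time and the steady 1000-user concurrency below are assumptions, not taken from the question, and the stepped ramp-up is ignored), here is a back-of-the-envelope estimate of how many samples such a load can produce:

```python
# Rough estimate only: samples ~= users * duration / (response time + think time),
# because each user completes roughly one iteration per "cycle".
def estimated_samples(concurrent_users, test_duration_sec, cycle_time_sec):
    return int(concurrent_users * test_duration_sec / cycle_time_sec)

# Hypothetical: 1000 users for 2.5 hours with a 60-second cycle per iteration
print(estimated_samples(1000, 2.5 * 3600, 60))  # -> 150000, far more than 1000
```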
More information: What is the Relationship Between Users and Hits Per Second?
If you want to introduce a "hard limit" on the number of requests, you can consider adding a Throughput Controller in Total Executions mode.
Related
I have to load test APIs at 10k RPM. What should the ramp-up time and execution time be?
10k RPM = 10,000 / 60 ≈ 166 req/sec, i.e. 166 concurrent users.
But I don't know how to calculate the exact ramp-up time and execution time.
Currently the standard in our org is a 60-minute execution with a 15-minute ramp-up.
10k RPM = 10,000 / 60 ≈ 166 req/sec, i.e. 166 concurrent users
166 concurrent users don't necessarily mean 166 requests per second; that only holds if the response time of each Sampler is exactly 1 second (see the sketch below).
If the response time is 2 seconds, you will get 83 requests per second.
If the response time is 0.5 seconds, you will get 332 requests per second.
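A minimal sketch of that relationship, assuming each thread sends its next request as soon as the previous response arrives and there is no think time:

```python
# With a fixed number of threads, the achievable rate is bounded by
# threads / response_time, since each thread waits for its response.
def max_throughput(threads, response_time_sec):
    return threads / response_time_sec

for rt in (1.0, 2.0, 0.5):
    print(f"166 threads, {rt}s response time -> {max_throughput(166, rt):.0f} req/sec")
# 1.0 s -> 166, 2.0 s -> 83, 0.5 s -> 332 requests per second
```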
So you need to supply a sufficient number of threads and add a relevant timer like the Constant Throughput Timer or the Throughput Shaping Timer to slow JMeter down to the desired throughput. The latter can be connected with the Concurrency Thread Group so JMeter can kick off extra threads if the current number is not sufficient to conduct the required load.
The system under test also needs to respond fast enough, because JMeter waits for the response to the previous request before starting a new one.
With regards to ramp-up, there are no exact recommended numbers; the only approach you can follow is increasing the load gradually, so that you can correlate the increasing load with throughput, response time, number of errors and so on.
If you don't have a better idea, you can follow the advice from the JMeter user manual:
The ramp-up period tells JMeter how long to take to "ramp-up" to the full number of threads chosen. If 10 threads are used, and the ramp-up period is 100 seconds, then JMeter will take 100 seconds to get all 10 threads up and running. Each thread will start 10 (100/10) seconds after the previous thread was begun. If there are 30 threads and a ramp-up period of 120 seconds, then each successive thread will be delayed by 4 seconds.
Ramp-up needs to be long enough to avoid too large a work-load at the start of a test, and short enough that the last threads start running before the first ones finish (unless one wants that to happen).
Start with Ramp-up = number of threads and adjust up or down as needed.
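The arithmetic from the quote, as a quick sketch:

```python
# JMeter starts threads evenly across the ramp-up period, so the delay
# between consecutive thread starts is ramp_up / threads.
def thread_start_delay(threads, ramp_up_sec):
    return ramp_up_sec / threads

print(thread_start_delay(10, 100))  # 10.0 s between threads (first example in the quote)
print(thread_start_delay(30, 120))  # 4.0 s between threads (second example in the quote)
```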
It completely depends on what exactly needs to be achieved.
Things to consider:
Test execution duration and ramp-up/ramp-down of users need to be evaluated based on real user utilization of the AUT (Application Under Test) in production (capture the number of users active in the system at a particular point in time).
Watch out for the pattern of users entering/exiting the system.
Calculate user load / RPM using Little's Law (a small worked example follows this list).
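The worked example mentioned above; the response and think times are purely illustrative:

```python
# Little's Law as commonly applied to load models:
#   concurrent_users = throughput * (response_time + think_time)
def users_from_littles_law(throughput_per_sec, response_time_sec, think_time_sec=0.0):
    return throughput_per_sec * (response_time_sec + think_time_sec)

# Hypothetical: 166 req/sec with 0.8 s response time and 5 s think time
print(users_from_littles_law(166, 0.8, 5.0))  # ~963 concurrent users needed
```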
Some of the common test types include:
Baseline test (initial testing conducted with very minimal load to check the stability of the application)
Load test (to evaluate the application's performance under gradually increasing load up to the expected number of concurrent users)
Stress test (testing beyond normal conditions, increasing the user load until the response time exceeds the acceptance criteria; usually done to find the breaking point of the application and its responsiveness under higher traffic volumes)
Endurance test (testing over a longer duration, applying varying loads)
I am conducting a performance test (TPS) using JMeter.
I am requesting about 10,000 TPS, but the following two configurations give different results
(assuming the system responds normally at 10,000 TPS):
1000 threads x 600 target throughput (in samples per minute)
100 threads x 6000 target throughput (in samples per minute)
I think the two results should be the same, but why is the response time delayed as the thread count increases?
I think the two results should be the same - why would they be the same?
Let's imagine your system has a fixed response time of 1 second. In that case:
With 1000 threads you will get 1000 requests per second, and you can limit the throughput to 10 requests per second using the Constant Throughput Timer
With 100 threads you will get 100 requests per second, so no limiting is required
And what if the response time is 2 seconds?
With 1000 threads you will get 500 requests per second
With 100 threads you will get 50 requests per second (the sketch after these examples spells out the arithmetic)
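A sketch of the arithmetic behind these examples, assuming no timers and a fixed response time:

```python
# Without any timer the offered load is threads / response_time,
# so the thread count caps what each configuration can reach.
def offered_load(threads, response_time_sec):
    return threads / response_time_sec

for rt in (1.0, 2.0):
    print(f"response time {rt} s: "
          f"1000 threads -> {offered_load(1000, rt):.0f} req/sec, "
          f"100 threads -> {offered_load(100, rt):.0f} req/sec")
# 1.0 s: 1000 vs 100 req/sec; 2.0 s: 500 vs 50 req/sec
```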
Constant Throughput Timer:
is only precise at the minute level; if your test lasts less than a minute it might not apply the throughput at all
can only pause the threads to limit the throughput (requests per minute) to the desired value; if the current number of threads is not enough to conduct the required load, the timer won't have any effect
If you want to send requests at a rate of 10000 TPS, it is worth considering the Throughput Shaping Timer and Concurrency Thread Group combination connected via the Feedback Function; in this case JMeter will be able to kick off extra threads if the current number is not sufficient.
But also be aware that:
JMeter must be able to start as many threads as needed to send 10000 TPS, so make sure to follow JMeter Best Practices or even consider going for Distributed Testing Mode (a rough thread-sizing sketch follows this list)
The application needs to be able to handle the load and respond fast enough; JMeter waits for the previous response before starting the next request, so if the application can only serve, say, 5000 requests per second, you won't be able to reach 10000 by any means
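The thread-sizing sketch mentioned above; the response times are hypothetical:

```python
# Rule of thumb implied by the answer: every thread is blocked for one full
# response time per request, so you need at least target_tps * response_time threads.
import math

def threads_needed(target_tps, response_time_sec):
    return math.ceil(target_tps * response_time_sec)

for rt in (0.1, 0.5, 1.0):
    print(f"response time {rt} s -> at least {threads_needed(10_000, rt)} threads for 10000 TPS")
# 0.1 s -> 1000, 0.5 s -> 5000, 1.0 s -> 10000 threads
```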
I've started distributed performance testing using JMeter. If I use scenario 1:
no. of threads: 10
ramp-up period: 1
loop count: 300
Everything runs smoothly, as scenario 1 translates to 3000 requests in 300 seconds, i.e. 10 requests per second.
If I use scenario 2:
no. of threads: 100
ramp-up period: 10
loop count: 30
AFAIK, scenario 2 also executes 3000 requests in 300 seconds, i.e. 10 requests per second.
But things start failing, i.e. the server faces heavy load and requests fail. In theory both scenario 1 and scenario 2 should be the same, right? Am I missing something?
All of these are heavy calls, each one will take 1-2 seconds under normal load.
In an ideal world, for scenario 2 you would get 100 requests per second and the test would finish in about 30 seconds, as the sketch below shows.
The fact that in the 2nd case you see the same execution time indicates that your application cannot process incoming requests faster than 10 per second.
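The ideal-world math, assuming each request takes about 1 second (the question says 1-2 seconds under normal load) and the server never slows down:

```python
# total = threads * loops; rate = threads / response_time; duration = total / rate
def ideal(threads, loops, response_time_sec=1.0):
    total = threads * loops
    rps = threads / response_time_sec
    return total, rps, total / rps

print(ideal(10, 300))  # (3000, 10.0, 300.0)  -> scenario 1: ~10 req/sec for 300 s
print(ideal(100, 30))  # (3000, 100.0, 30.0)  -> scenario 2: ~100 req/sec for ~30 s
```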
Try increasing the ramp-up time for the 2nd scenario and look into the following charts:
Active Threads Over Time
Response Times Over Time
Transactions Per Second
Normally, when you increase the load, the number of transactions per second should increase by the same factor and the response time should remain the same. Once response time starts growing and the number of transactions per second starts decreasing, it means you have passed the saturation point and discovered the bottleneck. You should report the point of maximum performance and investigate the reasons for the first bottleneck.
More information: What is the Relationship Between Users and Hits Per Second?
In scenario 2, after 10 seconds you have 100 concurrent users executing requests in parallel; your server may not handle such load well, or it may actively prevent it.
Concurrent user load testing sends simultaneous artificial traffic to a web application in order to stress the infrastructure and record system response times during periods of sustained heavy load.
In scenario 1, after 10 seconds you have only 10 concurrent users looping through the flow, which does not put comparable load on the server.
Note that your server may have a restriction on the number of users that applies only to specific request(s).
We should be very clear about the ramp-up time.
The relevant extract from the official documentation is quoted earlier on this page.
Scenario 1: no. of threads: 10
ramp-up period: 1
loop count: 300
In the above scenario, 10 threads (virtual users) are created in 1 second. Each user will loop 300 times, hence there will be 3000 requests to the server. The throughput cannot be calculated in advance with this configuration; it fluctuates based on server capability, network conditions, etc. You could control the throughput with some components and plugins.
Scenario 2: no. of threads: 100
ramp-up period: 10
loop count: 30
In scenario 2, 100 threads (virtual users) are created in 10 seconds. These 100 virtual users send requests to the server concurrently, and each user sends 30 requests. In the second scenario you will get higher throughput (requests per second) compared to scenario 1. It looks like the server cannot handle 100 users sending requests concurrently.
Ramp-up time is applicable only to the first cycle of each thread; it spaces out the first request of each user in their first iteration, as the sketch below illustrates.
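A sketch of how the two scenarios ramp up and what concurrency they reach, assuming every thread keeps looping once it has started:

```python
# New threads start every ramp_up / threads seconds; once the ramp-up is over,
# all of them run in parallel for the rest of the test.
def ramp_profile(threads, ramp_up_sec):
    spacing = ramp_up_sec / threads
    return spacing, threads  # seconds between thread starts, final concurrency

print(ramp_profile(10, 1))    # (0.1, 10)  -> scenario 1: 10 concurrent users after 1 s
print(ramp_profile(100, 10))  # (0.1, 100) -> scenario 2: 100 concurrent users after 10 s
```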
In Visual Studio load testing there is an option in the test mix to set, for each specific test, the number of times it should be run by all the users assigned to that scenario. Therefore, if you want to increase the load by increasing the number of times a transaction is executed in a test, you can either increase the user load or set the number of times a particular transaction is run. This facility also spaces out each execution of a transaction, so that the transaction runs throughout the test.
With JMeter, if I want to achieve a similar effect, I would need to change the value of the Constant Timer (I think) for each new user load.
Example:
Test time 300 sec, transaction count required = 20, thread count = 1, Constant Timer = 15 sec
Test time 300 sec, transaction count required = 20, thread count = 5, Constant Timer = 75 sec
Is the Constant Timer the correct timer to use (with a loop, obviously) for this?
If you want to execute a specific number of requests per unit of time, e.g. 20 transactions in 300 seconds, the most obvious choice is the Constant Throughput Timer.
20 transactions in 300 seconds is 4 transactions per minute, and given you configure the Constant Throughput Timer with that target,
you will "tell" JMeter to pause its threads so that no more than 4 transactions per minute are executed, no matter how many threads you have.
A more flexible option is the Throughput Shaping Timer and Concurrency Thread Group combination: this way you will not only be able to pause threads to limit JMeter's throughput to the given number of requests per minute, but also to kick off new threads if the current number is not sufficient to reach/maintain the target throughput. The arithmetic behind both options is sketched below.
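A sketch of that arithmetic, using the numbers from the question:

```python
# The Constant Throughput Timer target is expressed in samples per minute;
# the Constant Timer alternative spreads each thread's iterations with a fixed delay.
def ctt_target_per_minute(transactions, test_time_sec):
    return transactions / test_time_sec * 60

def constant_timer_delay_sec(transactions, test_time_sec, threads):
    # each thread only has to produce transactions / threads samples
    return test_time_sec / (transactions / threads)

print(ctt_target_per_minute(20, 300))        # 4.0 samples per minute
print(constant_timer_delay_sec(20, 300, 1))  # 15.0 s (matches the 1-thread example)
print(constant_timer_delay_sec(20, 300, 5))  # 75.0 s (matches the 5-thread example)
```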
Our requirement is to execute
1 user/thread, 10 requests/sec, Total requests = 10,000
With the current Thread Properties we are not able to achieve this.
Only one user/thread should send 10 requests/sec, and the total should be 10,000 requests. Is there any other way to achieve this in JMeter?
Is the following approach correct? We used a Loop Controller, so each request will repeat 10 times.
You will be able to achieve 10 requests/second with 1 virtual user only if your response time is 100 milliseconds (or less). If the response time is more than 100 milliseconds, you will not be able to reach the desired load; see the feasibility check below.
If your application fails to respond within that 100-millisecond time frame, most probably you've found a performance bottleneck and you can report this to your application developers.
If you have some time to investigate the issue, you can try to provide more information, i.e. what the average response time is, what the minimum, maximum, quantiles and the actual number of requests per second are, etc.; all this information can be retrieved using the Aggregate Report listener.
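The feasibility check referred to above, as a sketch:

```python
# A single thread sends the next request only after the previous response arrives,
# so the whole per-request budget must fit into 1 / target_rps seconds.
target_rps = 10
total_requests = 10_000

max_response_time_ms = 1000 / target_rps          # 100 ms per request, at best
test_duration_sec = total_requests / target_rps   # 1000 s (~16.7 minutes)

print(max_response_time_ms, test_duration_sec)    # 100.0 1000.0
```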
Normally, when people are looking for the answer to whether the application under test can support X requests per second, they use more than 1 virtual user, as a load test should represent real-life usage of the system under test, and 1 thread (virtual user) isn't normally associated with performance testing and its derivatives.
So you should probably reconsider your approach and try increasing the number of threads (virtual users). The throughput can be controlled using the Precise Throughput Timer or the Constant Throughput Timer; however, be aware that these timers can only pause JMeter to slow it down to the desired throughput. Another approach is the Concurrency Thread Group and Throughput Shaping Timer combination; they can be connected using the Feedback Function so JMeter will kick off extra threads in order to reach/maintain the desired number of requests per second.
You can install Concurrency Thread Group and Throughput Shaping Timer using JMeter Plugins Manager