I am running JMeter with the number of threads set to 10, 60, and 140 across multiple thread groups, and we are getting high response times.
If we change the Recording Controller to a Loop Controller and give the same values as the loop count, then we get much lower response times.
Why is there a difference between them? Which response time should we consider?
Threads are executed in parallel, while a loop executes samplers sequentially.
Executing numerous calls in parallel on the same machine, versus sequentially, puts more stress on the server (more hits per second).
When the server is under stress, waits/locks may appear because the maximum number of X is reached, where X can be a database/server/resource/...
Therefore your response time will be higher when using threads rather than loops.
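As a rough illustration with made-up numbers (a 1-second response time, comparing 10 threads against 10 loops):

```python
# 10 threads, 1 loop each: 10 requests are in flight at the same time.
threads, response_s = 10, 1.0
parallel_hits_per_s = threads / response_s    # ~10 hits/second on the server

# 1 thread, 10 loops: requests go out one after another.
sequential_hits_per_s = 1 / response_s        # ~1 hit/second on the server
```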
Instead of these approaches, you should probably try to simulate real user behavior; see this answer for more details.
This is the sort of traffic pattern I'm consistently seeing.
I understand that RPS roughly equals number of users/(response time + sleep time), hence my RPS will be roughly flat if my number of users and my response times are increasing at a similar rate (I'm using 0 sleep time).
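For concreteness, a quick check of that formula with made-up numbers:

```python
# RPS ~= users / (response_time + sleep_time); sleep time is 0 here.
users, response_s = 100, 0.5
print(users / response_s)   # 200 RPS

# Double the users while the response time also doubles:
users, response_s = 200, 1.0
print(users / response_s)   # still 200 RPS, hence the flat curve
```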
I also understand that you can't help me debug the underlying system whose response time is increasing! That's another thread I'll be pursuing separately. The increasing response time is not a Locust issue.
My question is how can I get Locust to ignore response time, in order to produce a constantly increasing RPS? I would like to take response time out of the equation entirely so that RPS is proportional to number of users.
(Why do I want to do this? In order to effectively load test my particular system.)
An individual Locust user is synchronous/sequential and cannot "ignore response times" any more than any other Python program can "ignore the time spent executing a line of code".
But you can use wait_time = constant_pacing(seconds_per_iteration) to ensure a fixed iteration time for each user: https://docs.locust.io/en/stable/writing-a-locustfile.html#wait-time-attribute
Or wait_time = constant_pacing(1/iterations_per_second) if you prefer.
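For instance, a minimal locustfile sketch (the host, endpoint, and 2-second pacing are placeholder choices):

```python
from locust import HttpUser, task, constant_pacing

class PacedUser(HttpUser):
    host = "https://example.com"   # hypothetical target system

    # Start a new iteration every 2 seconds per user, regardless of how
    # long the previous request took (unless it took longer than 2 s).
    wait_time = constant_pacing(2)

    @task
    def get_index(self):
        self.client.get("/")       # hypothetical endpoint
```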
For a "global" version of the same type of wait, use https://github.com/SvenskaSpel/locust-plugins/blob/master/examples/constant_total_ips_ex.py
Make sure your user count is high enough, as none of these methods can launch additional users/concurrent requests.
You may also want to have a look at https://github.com/locustio/locust/wiki/FAQ#increase-my-request-raterps
Building on cyberwiz's answer, you can't make individual Locust users ignore response time. Each one makes a request and can't do anything else until it gets a response. With ever-increasing response times, all you can do is make Locust spawn more and more users. You'd need to run in distributed mode and add more workers that can spawn more users. You can specify a higher user count and maybe even a higher hatch rate, depending on the behavior you're trying to achieve.
I'm trying to understand a significant performance increase in my Jmeter test.
In a multi-tenancy database environment, I have a single RESTful service test containing a Thread Group with a single HTTP Request sampler posting an XML payload. The XML payload is then evaluated via stored procedures, and a response is received stating if the claim was qualified. I run this test from a .bat file (non-gui mode) in an Apache 7 environment with a single JVM running.
Test Thread Group Properties
# of Threads: ${__P(test.threads,200)}
Ramp-Up Period: ${__P(test.rampup,1)}
Loop Count: Forever
Delay Thread: Enabled
Scheduler: Enabled
Duration: ${__P(test.duration,1800)}
HTTP Request
Method: POST
https://serverName:port/database/.../${__P(tenant,1111)}/Claim/${__property(contractId)}
When I duplicate the HTTP Request sampler within the TG and change the tenant ID within the URL, for some reason the performance seems to increase by > 55% (i.e., the number of claims/second increases by 55%). The test does not appear to have failed, so I cannot attribute the performance increase to an increased error rate.
I would have expected an increase if I had enabled another JVM to let the Load Balancer perform optimization, but this is not the case (still using only 1 JVM).
HTTP Request 1
https://serverName:port/database/.../${__P(tenant,1111)}/Claim/${__property(contractId)}
HTTP Request 2
https://serverName:port/database/.../${__P(tenant,2222)}/Claim/${__property(contractId)}
The theory going around here is that JMeter generates a workload at a higher rate for multiple requests than for a single request. I'm skeptical, but haven't found anything "solid" to support my skepticism.
Is this theory true? If so, why would two HTTP Requests increase the performance?
In short: it's OK.
Longer version:
Here is how JMeter works:
JMeter starts all the threads during the ramp-up period
Each thread starts executing samplers from top to bottom (or according to the Logic Controllers)
When a thread has no more samplers to execute and no more loops to iterate, it is shut down
So how does the number of virtual users correlate with "performance"? When you increase the number of virtual users for a load test, it affects Throughput:
Throughput is calculated as requests/unit of time. The time is calculated from the start of the first sample to the end of the last sample. This includes any intervals between samples, as it is supposed to represent the load on the server.
The formula is: Throughput = (number of requests) / (total time).
So if you increase the load on a well-behaved system, throughput should increase by the same factor, i.e. linearly.
When you increase the load but throughput does not increase, that situation is known as the "saturation point", where you get the maximum performance out of the system. Increasing the load further will cause throughput to go down.
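For example, plugging hypothetical numbers into that formula:

```python
# Throughput = (number of requests) / (total time), where total time runs
# from the start of the first sample to the end of the last sample.
requests = 9000              # hypothetical count from a test run
total_time_s = 1800          # e.g. a 30-minute test window
print(requests / total_time_s)   # 5.0 requests/second
```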
References:
Apache JMeter Glossary
An extended Glossary version
And how do you measure your performance? According to your "theory", your measurements include JMeter overhead, and that would be wrong. Moreover, is the response the same in both cases? That is, is the backend doing the same work in both cases?
Maybe the first request returns different output than the other one. Maybe it is more expensive to generate the output for one of the requests. That is why you notice "increased" performance: normally you would do N × heavy task in X seconds, while in the second case you do G × heavy task + H × light task in the same time, where G < N/2. More requests in the same time? Sure! Increased performance? Nope.
So to completely investigate what is happening, you need to review your measurement method. I would start by comparing the actual times for both requests.
I'm trying to create a JMeter script that sends 2 HTTP requests (each with a different path). I managed to get it to send the requests randomly, but I also need it to send each request exactly 50% of the time. Any ideas?
Divide your requests into 2 separate Thread Groups. Set an identical number of threads and loops in each Thread Group.
Alternatively, put the 2 requests under the same Thread Group. Add a Throughput Controller as a child of each request and either set the same "Total Executions" value for both Throughput Controllers or use a value of 50.0 in "Percent executions" mode.
See the Running JMeter Samplers with Defined Percentage Probability article for detailed information on the above approaches and for more complex distribution scenarios.
Option 1: Math
Run a large number of users or a large number of loops, picking randomly each time. On average, that's 50% of the time. The easiest to do, but not exact; see the sketch below.
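A quick, hypothetical simulation of why random picking only averages out to 50%:

```python
import random

# Simulate 1,000 random picks between request A and request B.
n = 1000
a_count = sum(1 for _ in range(n) if random.random() < 0.5)
print(a_count / n)   # close to 0.5, but rarely exactly 50% for a finite run
```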
Option 2: Alternate
Use a variable to alternate a single thread back and forth over the course of multiple loops. I assume you have some sort of If Controller that you're using to split them. In its condition, use "${alternating_variable}"=="1". Then use a Beanshell PostProcessor to switch the value: vars.put("alternating_variable", "2"); (note that vars.put takes two String arguments). Obviously, you'll need the reverse for the other HTTP Request (both the If condition and the Beanshell). A little involved, and it requires each thread to loop multiple times.
Option 3: Determined by Thread Number
Inside your If Controllers, use ${__threadNum}%2!=0 and ${__threadNum}%2==0. This takes the number of the thread, divides it by 2, and compares the remainder to 0. Any even-numbered thread will go into one If and any odd-numbered thread into the other. Easy once it's set up, but it requires multiple threads and is not necessarily easy to understand at a glance.
Apply 2 Throughput Controllers and place your first HTTP request into the first Throughput Controller and the 2nd request into the other controller. Then change the mode to Percent Executions and enter 50 in the throughput textbox.
Please refer to this link for more details.
Are the simulation loops separate? By separate I mean: does JMeter wait for all threads to finish before beginning a new iteration of the loop, or does JMeter just let every thread perform its requests X times without stopping?
Additional question: can one change the number of threads dynamically? Running a simulation across a range of thread counts (e.g. 100-1500) would be nice.
Each thread is completely independent. So when you have a loop count set, once a thread finishes its first iteration it goes for another round (as per the loop count), irrespective of the completion of other threads.
You can use a variable for the number of threads and set the value via property files etc. But once the test is running, you cannot change the number of threads for the test.
Hope it is clear!
In addition to vIns' answer:
You CAN change the load dynamically during execution. The thread count is static, but the threads' fire rate is something you can impact.
Look into the combination of a Beanshell Server and a Constant Throughput Timer.
I would like to run the test at a given execution rate per second. The next iteration should start asynchronously at the 2nd second, without waiting for the completion of the first iteration.
I tried the Constant Throughput Timer, but it doesn't proceed to the next iteration until it finishes getting the responses for the first iteration's threads.
You can use 2 separate Thread Groups for this (make sure that you have the Run Thread Groups Consecutively box unchecked at Test Plan level).
Also check your Constant Throughput Timer's "Calculate Throughput based on" setting; you may wish to have a separate timer for each Thread Group.
By the way, there is a more advanced Throughput Shaping Timer element, available via a plugin, which provides an easy-to-read graph demonstrating the load pattern.
If you are considering separate Thread Groups, remember that JMeter Variables have a scope local to the Thread Group where they are defined. To use them across different Thread Groups you'll need to convert them to JMeter Properties, which have "global" scope. See the How to Use Variables in Different Thread Groups guide for how to implement it.
A single thread can only handle one request at a time, so you'll need more than one thread for what you're asking. The Constant Throughput Timer can indeed do what you want, as long as you have enough threads.
To achieve what you're asking for (let's say 1 request every second, regardless of how long each request takes), I would suggest using a large number of threads and setting the CTT to 60 samples per minute (the CTT target is specified per minute), as sketched below.
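A rough sanity check of the pacing math, with hypothetical numbers:

```python
# The CTT target is configured in samples per minute, so convert from
# the desired requests/second (hypothetical numbers below).
target_rps = 1
ctt_target_per_minute = target_rps * 60           # set the CTT to 60

# Threads needed so that pacing, not thread availability, limits the rate:
worst_case_response_s = 5
min_threads = target_rps * worst_case_response_s  # keep at least 5 threads busy
print(ctt_target_per_minute, min_threads)
```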