I'm using Locust for load testing a site of mine and I'm curious about the difference between it and Apache Bench in terms of terminology.
With Apache Bench, you specify the number of requests (-n) and the number of multiple requests, or concurrency (-c).
Locust uses slightly different terminology. It has "Users to simulate" and "Hatch rate (users spawned/second)".
It is my understanding that "Users to simulate" would be the equivalent of number of requests in Apache Bench. Is that also true of -c and "Hatch rate" where "Hatch rate" is essentially how many concurrent requests will be made?
For example, are these two essentially or close to equivalent?
ab -n 1000 -c 100 url and Locust with 1000 users at a hatch rate of 100/second?
Note: I realize these two tools have very different capabilities and that Locust is a lot more flexible than Apache Bench. I'm really trying to understand the terminology difference.
It's not exactly the same, because with Locust you can specify multiple requests per user, to play out a whole scenario.
So while the whole scenario for a user might take 10 seconds to complete, if you hatch at 100/second you will end up with around 1000 concurrent requests: the users hatched in the first second won't make their final request until 10 seconds in, by which point 900 more users have been hatched and are also making requests.
If, on the other hand, you only make one request per user, then it's comparable to Apache Bench.
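As a back-of-the-envelope sketch (plain Python; the function name and numbers are mine, chosen to match the 100/second hatch rate and 10-second scenario above):

```python
def concurrent_users(hatch_rate, scenario_seconds, total_users):
    """Little's-law style estimate of how many users are active at once
    at steady state, capped by the total user count."""
    return min(total_users, hatch_rate * scenario_seconds)

# 1000 users hatched at 100/s, each running a 10 s scenario:
print(concurrent_users(hatch_rate=100, scenario_seconds=10, total_users=1000))  # 1000
# Same hatch rate, but each user makes a single ~1 s request:
print(concurrent_users(hatch_rate=100, scenario_seconds=1, total_users=1000))   # 100
```

The second case is the one that resembles ab -c 100: short-lived users, so only about one hatch interval's worth of them overlap.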
The Apache Bench parameters and Locust parameters are not really comparable. How the number of simulated Locust users affects the effective requests/second depends very much on the Python code in your Locust and TaskSet classes.
With Locust the aim is to define user behaviour with code, and then you can simulate a large number of these users.
You could have a Locust class that doesn't make any requests (though that would be kind of pointless), and that would result in an effective RPS of 0 no matter how many users you choose to simulate. Likewise, you could write a Locust class that just loops, constantly making HTTP requests with no wait time, and in that case the number of users simulated would correspond to the -c parameter of Apache Bench.
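The relationship between users and RPS in the steady state can be sketched with plain arithmetic (function name is mine; this is the usual users / (response time + wait time) approximation, not Locust's own API):

```python
def effective_rps(users, response_time_s, wait_time_s):
    """Steady-state requests/second for users that loop request -> wait."""
    return users / (response_time_s + wait_time_s)

# Tight loop with no wait time, roughly equivalent to ab -c 50:
print(effective_rps(50, 0.2, 0.0))   # 250.0 rps
# Same 50 users, but 4.8 s of wait per iteration (5 s total):
print(effective_rps(50, 0.2, 4.8))   # 10.0 rps
```

With zero wait time the user count behaves like ab's concurrency; with think time the same user count yields far fewer requests per second.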
Looking into Locust issue #646 (Allow a fixed RPS rate), Locust currently doesn't support defining the desired throughput in terms of requests per unit of time.
You can consider Apache JMeter, which has a Constant Throughput Timer out of the box and a Throughput Shaping Timer plugin if you need more flexibility.
To compare Apache Bench with Locust, simply test a single request: set the same time limit (-t 5), the same number of users (-c 50), and hatch as quickly as possible in Locust (-r 50).
-n is the total number of requests, but there is no such parameter in Locust, so we need a number large enough (100000) that it cannot be completed within the time limit (-t 5):
ab -n 100000 -c 50 -t 5 url
locust -c 50 -r 50 -t 5 url xxx
In my case I find that ab is up to 50% faster than Locust, which is written in pure Python.
This is the sort of traffic pattern I'm consistently seeing.
I understand that RPS roughly equals number of users/(response time + sleep time), hence my RPS will be roughly flat if my number of users and my response times are increasing at a similar rate (I'm using 0 sleep time).
I also understand that you can't help me debug the underlying system whose response time is increasing! That's another thread I'll be pursuing separately. The increasing response time is not a Locust issue.
My question is how can I get Locust to ignore response time, in order to produce a constantly increasing RPS? I would like to take response time out of the equation entirely so that RPS is proportional to number of users.
(Why do I want to do this? In order to effectively load test my particular system.)
An individual Locust user is synchronous/sequential and cannot "ignore response times" any more than any other Python program can "ignore the time spent executing a line of code".
But you can use wait_time = constant_pacing(seconds_per_iteration) to ensure a fixed iteration time for each user https://docs.locust.io/en/stable/writing-a-locustfile.html#wait-time-attribute
Or wait_time = constant_pacing(1/iterations_per_second) if you prefer.
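What constant_pacing does can be sketched with the stdlib only (the helper name is mine; in a real locustfile you would just set wait_time = constant_pacing(...) on the user class):

```python
import time

def paced_iteration(task, seconds_per_iteration):
    """Run one task iteration, then sleep off whatever time is left so the
    whole iteration takes a fixed wall-clock duration - the idea behind
    Locust's constant_pacing wait time."""
    start = time.monotonic()
    task()
    leftover = seconds_per_iteration - (time.monotonic() - start)
    time.sleep(max(0.0, leftover))

t0 = time.monotonic()
paced_iteration(lambda: time.sleep(0.05), 0.2)  # a fast 0.05 s "request", padded to 0.2 s
print(round(time.monotonic() - t0, 1))  # 0.2
```

Note the max(0.0, ...): once the task itself takes longer than the pacing interval, the wait drops to zero and the iteration time is dictated by response time again, which is why the user count still has to be high enough.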
For a "global" version of the same type of wait, use https://github.com/SvenskaSpel/locust-plugins/blob/master/examples/constant_total_ips_ex.py
Make sure your user count is high enough, as none of these methods can launch additional users/concurrent requests.
You may also want to have a look at https://github.com/locustio/locust/wiki/FAQ#increase-my-request-raterps
Building on cyberwiz's answer, you can't make individual Locust users ignore response time: each makes a request and can't do anything else until it gets a response. With ever-increasing response times, all you can do is make Locust spawn more and more users. You'd need to run in distributed mode and add more workers that can spawn more users. You can specify a higher user count, and maybe even a higher hatch rate, depending on the behavior you're trying to achieve.
I am using JMeter for load testing and I'm new to this. I have an API to which I want to send around 36000 requests in a given time, which is 5 minutes. What should the configuration of threads, ramp-up time, loop count, and Constant Throughput Timer be for this scenario?
I am using the following configuration, but I am unable to reach the desired RPS:
Threads: 1000
Ramp-up: 5 minutes
Loop count: 36
Constant Throughput Timer: 7200
Where is my configuration wrong?
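Sanity-checking the question's numbers with plain arithmetic (variable names are mine):

```python
total_requests = 36_000
window_minutes = 5

target_rps = total_requests / (window_minutes * 60)        # requests/second goal
timer_samples_per_min = total_requests / window_minutes    # Constant Throughput Timer units
planned_requests = 1000 * 36                               # threads * loop count

print(target_rps, timer_samples_per_min, planned_requests)  # 120.0 7200.0 36000
```

So the request budget (1000 × 36) and the timer value (7200 samples/minute = 120/second) are internally consistent; if the measured rate falls short, the likely culprits are the 5-minute ramp-up (threads started late contribute fewer requests) or the system under test itself.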
You can try reducing the ramp-up period to close to zero and setting the number of loops to "infinite"; the total number of requests can then be limited using a Throughput Controller.
In general, there are two main reasons for not being able to generate the required load:
JMeter cannot produce the desired number of hits per second. Things to try:
Make sure to follow JMeter Best Practices
Increase number of threads in Thread Group
Consider switching to Distributed Testing mode
Application cannot handle that many requests per second. Things to try:
Inspect the application configuration and make sure it's suitable for high loads
Inspect CPU, RAM, disk, etc. usage during the load test - it might simply be a lack of resources; this can be done using the JMeter PerfMon Plugin
Re-run your test with profiler tool telemetry enabled
Raise a ticket, as it is a performance bottleneck
Hello,
What is the maximum number of virtual users that can be tested in a JMeter distributed test? Is it possible to reach one million virtual users?
Thank you.
It depends on many factors. Technically the limit on the JMeter end is very high (it should be 2^31 − 1, i.e. 2,147,483,647 virtual users); in practice it comes down to:
Nature of your application: use cases, whether it's more about consuming or creating content, average request and response size, response time, etc.
Nature of your test: again, request and response size, plus the need for pre/post-processors and assertions
Hardware specifications of your load generators
Number of load generators
So I would recommend the following approach:
Start with a single JMeter instance
Make sure you have optimal JMeter configuration and amended your test according to JMeter best practices
Make sure you have monitoring of baseline OS health metrics on that machine
Start with 1 virtual user and gradually increase the number of running users until you start running out of hardware resources (CPU, RAM, network, or disk I/O close to maximum)
Mind the number of active users at this stage (you can use, for example, the Active Threads Over Time listener) - this is how many users you can simulate for that particular test scenario. Note that the number may differ for another application or another test scenario.
Multiply that number by the number of load generators you have - if the result is > 1M, you are good to go.
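The multiplication step as a trivial sketch (both numbers are hypothetical, standing in for whatever your baseline test finds):

```python
def max_simulated_users(users_per_generator, generators):
    """Capacity estimate: per-generator user count times generator count."""
    return users_per_generator * generators

# Assumption: one generator topped out at 500 users, and 2200 generators are available.
print(max_simulated_users(500, 2200) >= 1_000_000)  # True
```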
If you won't be able to simulate that many users, there is a workaround, though personally I don't really like it. The idea is that real users don't hammer an application non-stop; they need some time to "think" between actions. Normally you should simulate these "think times" using JMeter Timers. But if you lack load generators, you can consider the following:
Given that 1 virtual user needs 15 seconds to think between operations and the response time of your application is 5 seconds, each user will be able to execute only 3 requests per minute. So 1M users will execute 3M requests per minute, which gives us 50,000 requests per second - still high, but more likely to be achievable.
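The same arithmetic in code (variable names are mine, numbers from the paragraph above):

```python
think_s, response_s = 15, 5

# One full iteration is think time plus response time: 20 s per request.
requests_per_user_per_min = 60 / (think_s + response_s)   # 3.0

users = 1_000_000
total_rps = users * requests_per_user_per_min / 60        # 50000.0

print(requests_per_user_per_min, total_rps)  # 3.0 50000.0
```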
While performance testing an application, I was unable to get past handling a large number of threads using JMeter, so I would like to know the maximum number of threads allowed in JMeter. Is JMeter capable of handling 150,000 threads?
There is no upper limit; it strongly depends on what your test is doing, the response size, etc.
Also keep in mind that real users don't hammer the application non-stop: they need some time to "think" between operations, plus they have to wait for a response before they start "thinking" about the next action.
For example, given that users "think" for 10 seconds and the response time is 2 seconds, each virtual user will execute 5 requests per minute.
In the above scenario, 150,000 users will execute 750,000 requests per minute, which is 12,500 requests per second - more than 10x fewer requests per second than users being simulated.
So:
First of all, make sure that your JMeter configuration is optimal; the default settings are good for test development and debugging but not for load test execution. You need to:
tune the Java parameters (heap size, GC, etc.)
disable all listeners
make sure you have only those assertions and post-processors which are absolutely required
store only the metrics you need and don't save any excessive results, especially response data
See the 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure article for a comprehensive explanation of the above points and some more tips.
Even if you apply the above tweaks, I don't think you'll be able to generate a load of 150,000 users from a single host (unless you have a supercomputer in your QA lab), so I expect you'll need to consider JMeter Distributed Testing, where one master machine orchestrates multiple load generators (aka "slaves") acting as a single instance. This way you will be able to increase the load by a factor equal to the number of slaves.
Yes, of course. It depends a lot on the machine running JMeter, but for what it's worth, I can give you some hints.
JMeter allows you to run multiple processes on the same box, and it's usually pretty reliable generating up to 200-300 threads per JMeter instance. If you need more than that, I'd recommend using multiple JMeter instances.
See the link below for a better description of how JMeter can handle 150,000 threads across multiple instances:
https://blazemeter.com/blog/how-run-load-test-50k-concurrent-users
My customer gave me a traffic figure of 600 requests/second, with an RX of 30 Mbps.
Please help me put together a suitable test plan scenario for this.
My customer and I are in different countries - will the network affect the results?
Many thanks for any pointers.
First of all, a rate of 600 requests/second isn't something that's recommended to be generated from a single node.
You need to consider JMeter Remote Testing which assumes running the test from multiple JMeter instances. Make sure that you're following JMeter Performance and Tuning Tips guidelines while developing your test.
In order to achieve a rate of exactly 600 requests/second - not more, not less - you need to use a Constant Throughput Timer.
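One unit detail worth spelling out: the Constant Throughput Timer's target throughput is expressed in samples per minute, so the 600 requests/second goal translates like this (variable names are mine):

```python
target_rps = 600

# JMeter's Constant Throughput Timer is configured in samples per *minute*,
# so convert the per-second goal before entering it in the timer.
target_throughput_per_min = target_rps * 60

print(target_throughput_per_min)  # 36000
```

Also note that the timer can only delay threads, never speed them up, so the Thread Group still needs enough threads to be capable of reaching 600/second in the first place.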