I am using locust to load test an application. I wrote and tested the script on my local ubuntu system, and all went well.
I created an EC2 instance using an Amazon Linux image, and after adjusting the file limits in the /etc/security/limits.conf file I loaded up Locust, and things went normally for a small test (a simple GET test, just to check the plumbing: 2000 users, 20 hatch rate).
However, when I loaded up a larger test, 8000 users with a 40 hatch rate, I noticed that somewhere around 3,000 or 4,000 users the hatch rate appeared to slow down, adding only 4-5 rather than the expected 40 new "users" at a time. So it took a long time to reach 8000. Is that expected behavior, and if not, any idea what the problem might be?
What Locust calls "users" are actually gevent-spawned TaskSets. This means you're spawning thousands of greenlets in a single Python process, which means a great deal of overhead managing those greenlets.
If you want to spawn thousands of TaskSets, I'd recommend running Locust in distributed mode. You can have many slaves running on the same hardware, or distribute your slaves across many instances. Google has written up a neat article and open-sourced some Kubernetes containers for just such a purpose. We wrote our own Docker container based on Alpine with a heavily modified Locust; our ratio of slaves to TaskSets ended up being 1:100. The ratio of slaves to instances depends heavily on what instance size you get.
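To put rough numbers on that sizing, here's a back-of-the-envelope sketch in Python (the 1:100 slave-to-TaskSet ratio is just what worked for us; yours will differ):

```python
import math

def slaves_needed(task_sets, task_sets_per_slave=100):
    """Estimate how many Locust slave processes are needed
    for a target number of simulated users (TaskSets)."""
    return math.ceil(task_sets / task_sets_per_slave)

print(slaves_needed(8000))  # 80 slaves at a 1:100 ratio
```

At 100 slaves per instance-worth of headroom you can then work backwards to an instance count for whatever hardware you pick.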
Related
My scenario is to run 4000 users for 1 hour against a product search. I tried to set it up using the GUI and ran with 1000 users, but on my local system it wouldn't run and JMeter kept hanging. So I created an Azure VM (8-core processor, 32 GB RAM, Windows) and established a baseline with 5 users in GUI mode, intending to increase the load to 50, 100, 500, 1000 and eventually 4000. But even with 100 users in GUI mode JMeter hangs. I then ran the script in non-GUI mode: with 50 users it runs and I get results, but when I increase the load to 100 I get a Java heap memory exception. Can anyone suggest how to run this scenario on the Azure VM? It didn't work on a regular machine, which is why I moved to the Azure VM in the first place.
Let me know if anything is needed from my end.
Don't run your tests in GUI mode, it's only for tests development and debugging. For real load test execution always use non-GUI mode
Make sure to follow JMeter Best Practices
Increase the JVM heap size; by default JMeter 5.4.1 comes with a 1 GB heap allocation, which might not be sufficient for your case. E.g. line #151 of the jmeter.bat startup script looks like:
set HEAP=-Xms1g -Xmx1g -XX:MaxMetaspaceSize=256m
The -Xmx1g stanza tells the JVM not to use more than 1 GB for heap space; you might want to bump it up to e.g. 24g
See 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure article for more details
Even if you manage to run the test with 4000 users on this VM, make sure to monitor its resources (CPU, RAM, network, pagefile usage, etc.), as it might become overloaded and unable to send requests fast enough; in that case you will need to add another VM and run JMeter in distributed mode
There are a couple of comments I have to make here
First things first: you should never run your JMeter performance test using the GUI, as specified in their documentation
Don't run load test using GUI mode !
GUI mode is for designing and quick testing things. I'd say that even JMeter tells you that when you start it from the CLI
Second, it seems that you're facing vertical scaling limitations (JIC: we say we're scaling vertically when we put "a bigger machine" to work). When you get into a high number of users (such as 4000), vertical scaling starts to show its limits, and while it's possible, you should instead go for a horizontal scaling strategy (JIC: we say we're scaling horizontally when we put more machines to work in parallel)
Luckily, JMeter supports horizontal scaling out of the box. They call it Distributed Testing
As a summary, you'll need to perform the following steps:
Set up several machines; they don't need to be super big machines. I'd recommend playing around a little bit with the specs. I'd say go with a machine that can support 250-500 users
Configure these machines to act as worker nodes
Make sure to start them in CLI mode!
Start your controller node. This can be your own local machine
Since you won't be running load, you can start it in GUI mode!
Run your tests
While they're running, I'd recommend monitoring the worker nodes as well
Start just with one node and, once you have it working, add the rest of nodes
As an extra step, you could configure some scripts (or even better, a CI/CD pipeline) to dynamically ramp up the required number of worker nodes based on the number of users you want. If we keep the 500 users/machine figure, you'd need 8 nodes for your 4k users. But potentially you might need to repeat the scenario with 10k or more users
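The controller-side invocation for such a distributed run is a single CLI command; here's a small Python sketch that builds it from a list of worker hosts (the hostnames are made up, but -n/-t/-R are JMeter's real non-GUI, test-plan and remote-hosts flags):

```python
def distributed_cmd(test_plan, workers):
    """Build the controller-side JMeter CLI invocation for a
    distributed (non-GUI) run against a list of worker hosts."""
    return f"jmeter -n -t {test_plan} -R {','.join(workers)}"

# Hypothetical test plan and worker IPs, just for illustration
print(distributed_cmd("search.jmx", ["10.0.0.1", "10.0.0.2"]))
# jmeter -n -t search.jmx -R 10.0.0.1,10.0.0.2
```

A script like this is also a natural building block for the dynamic ramp-up pipeline mentioned above.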
I have a 32 GB, i7 processor machine running Windows 10 and I am trying to generate a 10k VU concurrent load via JMeter. For some reason I am unable to go beyond 1k concurrent, and I start getting a BindException or socket connection error. Can someone help me with the settings to achieve that kind of load? Also, if someone is up for freelancing I am happy to consider that as well. Any help would be great as I am nearing production and am unable to load test this use case. If you know any other tools that I can use effectively, that would also help.
You've reached the limit of one computer, so you must execute the test in a distributed environment of multiple computers.
You can set up JMeter's distributed testing in your own environment, or use BlazeMeter or another cloud-based load testing tool
We can use BlazeMeter, which provides an easy way to handle our load tests. All we need to do is upload our JMX file to BlazeMeter. We can also upload a consolidated CSV file with all the necessary data, and BlazeMeter will take care of splitting it depending on the number of engines we have set.
On BlazeMeter we can set the number of users, or the combination of engines (slave systems) and threads that we want to apply to our tests. We can also configure additional values like multiple locations.
1k concurrent sounds low enough that it's probably something else... it's also close to the default open file descriptor limit (1024) on a lot of Linux distributions, so maybe try raising the limit.
ulimit -Sn
will show you your current limit and
ulimit -Hn
will show you the hard limit you can go before you have to touch configuration files. Editing /etc/security/limits.conf as root and setting something like
yourusername soft nofile 50000
yourusername hard nofile 50000
yourusername will have to be the username of the user with which you run jmeter.
After this you will probably have to restart for the changes to take effect. If you're not on Linux I don't know how to actually do this, so you will have to google :D
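If you'd rather check the current limits programmatically (say, from the same Python environment you drive your tests with), the stdlib exposes them on Linux/macOS:

```python
import resource

# Soft and hard limits on open file descriptors for the current process
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")

# RLIM_INFINITY means "unlimited"
assert soft == resource.RLIM_INFINITY or soft > 0
```

Handy as a pre-flight check in a test-runner script, so you fail fast instead of mid-run with BindExceptions.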
Recommendation:
As a k6 developer I can propose it as an alternative tool, but running 10k VUs on a single machine will be hard with it as well. Every VU takes some memory, at least 1-3 MB, and this goes up the larger your script is. But with 32 GB you could still run up to 1-2k VUs and use http.batch to make concurrent requests, which might simulate the 10k VUs depending on what your actual workflow is like.
I managed to run the stages sample with 300 VUs on a single 3770 i7 CPU and 4 GB of RAM in a virtual machine, and got 6.5k+ RPS to another virtual machine on a neighboring physical machine (the latency is very low). So maybe 1.5-2k VUs with a somewhat more interesting script and some higher latency, as this gives the Go runtime time to actually run GC while waiting for TCP packets. I highly recommend using discardResponseBodies if you don't need the bodies, and even if you need some, fetch the response only for those. This helps a lot with the memory consumption of each VU.
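To sanity-check the memory side of that rule of thumb, a quick back-of-the-envelope sketch (3 MB/VU is the pessimistic end of the 1-3 MB range above; real usage depends on your script):

```python
def ram_needed_gb(vus, mb_per_vu):
    """Rough RAM needed for a given VU count, using the
    per-VU memory rule of thumb discussed above."""
    return vus * mb_per_vu / 1024

print(ram_needed_gb(10_000, 3))  # ~29.3 GB: 10k VUs is tight on a 32 GB box
```

Which is why 10k VUs on a single 32 GB machine leaves essentially no headroom for the OS or response handling.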
I need to test 200,000 VUs hitting an app in 10 seconds, so I started with a test of 10,000 VUs, running JMeter in non-GUI mode, to see how my computer, my internet connection and the site would respond, but I got 83.50% errors.
95% of the errors were these:
Non HTTP response code: java.net.ConnectException/Non HTTP response message: Connection timed out: connect
Does this mean that the internet connection was not sufficient for the short duration of the test?
Thanks.
Running 200K users
Generally speaking, in traditional HTTP, running 200,000 users from one machine is impossible: there aren't that many ports. I.e. if you maximize your port usage (and you'll likely need to change OS settings to do that, since the OS usually limits the number of open ports to somewhere between 1,000 and 10,000), JMeter will have about 64,500 ports to run requests on. Each JMeter HTTP sampler needs a separate port, so you need 200K ports. Thus you need at least 4 machines to run 200K requests concurrently.
But that may not be enough: if you have more than one request in sequence (like most performance tests do), you will be able to run even fewer concurrent requests, since ports are usually not closed right away after a request is done, so the next request has to use a different port.
Don't forget that server also must be able to receive similar load.
But even that may not be enough: JMeter needs enough memory to accommodate 10-30K threads. The in-memory size of a thread depends on a few things, your script design among them.
Bottom line: with all the tweaking, realistically, port availability limits the number of concurrent requests JMeter can run from one machine to 10-30K concurrent users. Thus to test 200K users, you need about 7-20 JMeter machines.
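The arithmetic behind those machine counts, as a sketch (64,500 usable ports is the rough figure from above; 10K/machine is the realistic ceiling once memory and sequential requests are factored in):

```python
import math

USABLE_PORTS = 64_500  # roughly, after OS ephemeral-port tuning

def machines_needed(concurrent_users, per_machine=USABLE_PORTS):
    """Minimum load-generator machines for a concurrency target."""
    return math.ceil(concurrent_users / per_machine)

print(machines_needed(200_000))          # 4 machines, on port math alone
print(machines_needed(200_000, 10_000))  # 20 machines at a realistic 10K/machine
```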
Running 10K users
If you were testing in a designated environment (where clients and servers are next to each other with an optimized network between them), you should be able to run 10K users from one machine, provided other limits, e.g. memory and max ports, were properly tweaked. But it sounds like you are trying to test over an internet connection?
Well, 2 problems here:
Performance testing over an internet connection is absolutely pointless. You don't know what is between you and the servers, and how those things in between are changing the shape of the load. You won't know if it was 10K concurrent requests or 10K sequential requests. And the results will only tell you how fast your internet is.
Any ISP will have a limit on the number of connections from one IP, and it will be well below 10K. Not to mention that some ISPs may flag or temporarily ban your IP for such a flood.
Bottom line: whoever asked you to test 10K or 200K concurrent users, should also provide a set of JMeter machines to run this test from. Those machines should be close to tested servers, preferably without any extra routing in between (or with well known and well configured routing)
I don't think that stressing your application by kicking off 200k users at once is a good idea (the same applies to 10k users), as the results, even in case of success, won't tell the full story. Moreover, in case of errors you will only be able to state that 10k users in 10 seconds is not possible; you won't have information like:
What was the number of users when errors start occurring
What is the correlation between number of concurrent users and response time and/or throughput
What is the saturation point (the maximum system performance)
So I would recommend re-running your test, increasing the load gradually from one virtual user up to 10,000, and seeing when it breaks. The breaking point is called the bottleneck, and the cause can be determined like this:
First of all, make sure you're following JMeter Best Practices, as the default JMeter configuration is not suitable for high loads, and if JMeter is not capable of sending requests fast enough you will not get accurate results. Most probably you will have to run JMeter in distributed mode; it is highly unlikely you will be able to mimic 20k requests per second from a single machine (or it would have to be a very powerful one)
Set up monitoring of the application under test in order to ensure that it has enough headroom in terms of CPU, RAM, Disk, etc. You can use JMeter PerfMon Plugin for this
Check your application infrastructure: like JMeter, the majority of middleware components (web/application servers, load balancers, databases, etc.) come with default configurations suitable for development and debugging; they need to be tuned for high throughput.
Check your application code using profiler tools' telemetry; the reason could be e.g. a slow DB query, an inefficient algorithm, a large object, a heavy function, etc.
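As a tiny sketch of what "increase the load gradually" can look like in practice, here's one way to generate an evenly spaced step schedule (the step count is arbitrary; pick whatever resolution you need to locate the breaking point):

```python
def ramp_schedule(target_users, steps):
    """Evenly spaced user-count steps from ~0 up to the target."""
    return [round(target_users * i / steps) for i in range(1, steps + 1)]

print(ramp_schedule(10_000, 5))  # [2000, 4000, 6000, 8000, 10000]
```

Run one test per step and record response time and throughput at each; the knee in that curve is your saturation point.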
I want to simulate up to 100,000 requests per second and I know that tools like Jmeter and Locust can run in distributed mode to generate load.
But since there are cloud VMs with up to 64 vCPUs and 240GB of RAM on a single VM, is it necessary to run in a cluster of smaller machines, or can I just use 1 large VM?
Will I be able to achieve more "concurrency" with more machines due to a network bottleneck coming from the 1 large machine?
If I just use one big machine, would I be limited by the number of ports there are?
In the load generator, does every simulated "user" that sends a request also require a port on the machine to receive a 200 response? (Sorry, my understanding of how TCP ports work is a bit weak.)
Also, we use Kubernetes pretty heavily, but with Jmeter or Locust, I feel like it'd be easier to run it on bare VM, without containerizing (even in distributed mode) while still maintaining reproducibility. Should I be trying to containerize Jmeter or Locust and running in Kubernetes instead?
According to the KISS principle, it is better to go for a single machine, assuming it is capable of generating the required load.
Make sure you're following JMeter Best Practices
Make sure you have monitoring of baseline OS health metrics (CPU, RAM, swap, network and disk IO, JVM statistics, etc.)
Start with a low number of users and gradually increase the load until you reach the desired throughput or the limit of any of the monitored metrics, whichever comes first. If there is a lack of CPU or RAM or something else, see what can be done to overcome the limitation.
More information: What’s the Max Number of Users You Can Test on JMeter?
I created a test with JMeter to test the performance of the Ghost blogging platform. Ghost is written in Node.js and was installed on a cloud server with 1 GB RAM and 1 CPU.
I noticed that after 400 concurrent users JMeter starts getting errors. Up to 400 concurrent users the load is normal. I decided to increase CPU and added 1 CPU.
But the errors reproduced, so I added 2 more CPUs, for 4 CPUs total. The problem still occurs after 400 concurrent users.
I don't understand why 1 CPU can handle 400 users yet I get the same results with 4 CPUs.
During monitoring I noticed that only one CPU is busy and the other 3 CPUs are idle. When I checked the JMeter summary in the console there were errors on about 5% of requests. See screenshot.
I would like to know is it possible to balance load between CPUs?
Are you using the cluster module to load-balance, and Node 0.10.x?
If so, please update your Node.js to 0.11.x.
Node 0.10.x used the balancing algorithm provided by the operating system. In 0.11.x the algorithm was changed, so connections will be distributed more evenly from now on.
Node.js is famously single-threaded (see this answer): a single Node process will only use one core (see this answer for a more in-depth look), which is why you see your program fully using one core while all the other cores sit idle.
The usual solution is to use the cluster core module of Node, which helps you launch a cluster of Node processes to handle the load, by allowing you to create child processes that all share the same server ports.
However, you can't really use this without modifying Ghost's code. An option is to use pm2, which can wrap a Node program and use the cluster module for you. For instance, with four cores:
$ pm2 start app.js -i 4
In theory this should work, except if Ghost relies on some global variables (that can't be shared by every process).
Use the cluster core module, and nginx for load balancing. That's the bad part about Node.js: fantastic framework, but the developer has to wade into the load-balancing mess themselves, while Java and other runtimes make it seamless. Anyway, nothing is perfect.