Our client asked us to run a stress test on their web application simulating 62,000 users (threads). The test consists of 13-15 HTTP requests with a 1-second delay between every request, and it should run continuously for 10.5 hours.
I have previous experience with JMeter running up to 10,000 users, but I have not tried a larger number.
Is there a limit on the number of threads that JMeter can handle, or is it limited by the hardware of the machine running the test?
Agreed with Ray: when you run JMeter with this many threads, distributed testing is the best option, and the hardware and network must be capable of handling this kind of load. If you want a shortcut, go for http://blazemeter.com; they scale from 1,000 to 100,000 concurrent users.
In principle it is limited by several factors, including the configuration (CPU/memory) of your JMeter machine. That said, it is a VERY large number of threads for a single JMeter instance. To be honest, I wouldn't run even 10,000 threads on one machine. You might want to look into using JMeter in distributed mode; see the manual (http://jmeter.apache.org/usermanual/jmeter_distributed_testing_step_by_step.pdf).
Besides running the script remotely, you'll need to configure your script for minimal CPU and memory usage.
Follow JMeter best practices and the JMeter tuning tips to achieve this.
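As an illustration, a minimal sketch of such a tuned run on the command line; the test plan and file names are placeholders, and the HTML dashboard option assumes JMeter 3.0 or newer:

# non-GUI run: no GUI listeners eating CPU/memory, raw results go to a .jtl file for later analysis
jmeter -n -t stress_test.jmx -l results.jtl
# optionally generate the HTML report dashboard at the end of the run (JMeter 3.0+)
jmeter -n -t stress_test.jmx -l results.jtl -e -o report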
I have come across a situation where I can't decide what scenario would be best. I have written my test in JMeter as follows:
I have one test plan that runs its thread groups consecutively.
I have 4 thread groups and each thread group has the following properties:
No of threads: 8000
Ramp-up period: 60 sec
Loop count : 10
Same user on each iteration: true
I was getting connection errors and connection timeout errors.
So, to make it work, when I test from localhost (the same machine), I have to enter a response timeout of 1800000 ms (30 minutes), whereas when I run the same test against a remote server, I have to enter a response timeout of 3600000 ms (1 hour).
Can someone please advise:
Is it a good idea to set such a response timeout? Is there another issue I should look for instead of raising the timeout? Is it a warning sign of some other problem?
Can I improve the test without setting such a timeout?
First of all, a couple of questions to you:
Do you really think that a real user will wait for 1 hour to get a response from the application?
How did you arrive at this figure of 8000 users?
Now recommendations:
Never have JMeter and the system under test on the same machine. They are both resource intensive and will start competing for CPU cycles, memory pages, etc., and you won't be able to tell whether JMeter is not capable of sending requests fast enough or your application cannot respond properly.
Follow JMeter Best Practices
Although the number of virtual users you can simulate with JMeter is very high, it is limited by the machine/operating system resources, so make sure JMeter has enough headroom to operate in terms of CPU, RAM, network and disk IO. These metrics can be checked using the JMeter PerfMon Plugin (a plugin-free alternative is sketched right after this list). Once you notice that any of the monitored metrics starts exceeding, say, 80% of total available capacity, note how many virtual users are active; that is how many you can simulate from this machine for this test. If you need more, go for Distributed Testing.
The same for the system under test, if you hit the resources consumption limits you need either to upgrade the hardware or to deploy your application in clustered mode (if it supports such a mode)
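If you prefer a plugin-free sanity check, something like the following on a Linux load generator gives a rough picture of CPU, memory and network headroom while the test runs (sar is part of the sysstat package; the 5-second sampling interval is arbitrary):

sar -u 5        # CPU utilisation every 5 seconds
sar -r 5        # memory usage
sar -n DEV 5    # per-interface network throughput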
This is for an e-commerce project where a large number of users will be logging in. I have been given a benchmark: 8000 concurrent users need to log in, and the response time should be within 3 minutes.
#abi, hi.
Let me provide a couple of notes here.
Depending upon your connection bandwidth, from my experience as a performance test engineer, I'd say a single JMeter instance usually holds up to 1k (1,000) - 2k (2,000) users of load in the best case.
Considering you have a requirement for 8k (8,000 users) of load, you need to launch JMeter in distributed mode (master <-> slaves).
For this setup I'd recommend going with 1 master node and 4 slaves. For that you will need 5 machines (AWS/Azure, whatever) in the same sub-network.
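In case it helps, a rough sketch of launching that topology from the command line; the host names and test plan name are placeholders, and recent JMeter versions additionally require an RMI keystore (or disabling SSL for RMI), which the step-by-step manual below covers:

# on each of the 4 slave machines
jmeter-server
# on the master: run the plan in non-GUI mode against all slaves
jmeter -n -t login_test.jmx -R slave1,slave2,slave3,slave4 -l results.jtl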
For more technical details on the distributed setup, please take a look:
at the public JMeter documentation
and also at this step-by-step setup manual
Also, when I was doing the setup for 10k load on one of my recent projects, I made a couple of notes for myself in a g-doc. Let me know if it opens fine for you.
Last note: if you need to do load/performance tests on APIs that require authorization (AUTHZ), I'd recommend splitting the authorization step (IDP bypass) and the performance scenario itself into different thread groups, since the IDP in DEV/staging environments usually does not hold much load.
So first you authorize without any load (1st thread group).
And in the 2nd thread group you start calling the target APIs under test.
It depends on:
Your machine specifications (CPU, RAM, NIC card, hard drive, etc.)
The nature of your Test Plan (number of requests, size of requests/responses, number of pre/post processors, assertions, timers, etc.)
Response time of your application
So if your test is a simple GET request which returns a small text response, you might simulate 10,000 users on a mid-range modern laptop. If your test involves heavy requests, large responses, file uploads, etc., it might be 1,000 users.
Make sure to follow recommendations from JMeter Best Practices
Make sure to have monitoring of resources usage of your system (CPU, RAM, Swap, etc.). You can use JMeter PerfMon Plugin for this.
Make sure that your test behaves like a real browser
Start with 1 virtual user and gradually increase the load until you reach 8000 virtual users or JMeter starts running out of resources, whichever comes first. If you can simulate 8000 users from a single machine, you're good to go. If not, you will have to consider Distributed Testing.
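One convenient way to step the load up without editing the test plan each time is to drive the thread count from a JMeter property. A sketch, assuming the Thread Group's "Number of Threads" field is set to ${__P(threads,1)} and the file names are placeholders:

jmeter -n -t shop_test.jmx -Jthreads=100  -l results_100.jtl
jmeter -n -t shop_test.jmx -Jthreads=1000 -l results_1000.jtl
jmeter -n -t shop_test.jmx -Jthreads=8000 -l results_8000.jtl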
I need to test 200,000 VU hitting an app within 10 seconds, so I started with a test of 10,000 VU, running JMeter in non-GUI mode, to see how my computer, my internet connection and the site would respond, but I got 83.50% errors.
95% of the errors were these:
Non HTTP response code: java.net.ConnectException/Non HTTP response message: Connection timed out: connect
Does this mean that my internet connection was not enough for such a short test?
Thanks.
Running 200K users
Generally speaking, in traditional HTTP, running 200,000 users from one machine is impossible: there aren't that many ports. Even if you maximize your port usage (and you will likely need to change OS settings to do that, since the OS usually limits the number of open ports to somewhere between 1,000 and 10,000), JMeter will have about 64,500 ports to run requests on. Each JMeter HTTP sampler needs a separate port, so you would need 200K ports. Thus you need at least 4 machines to run 200K requests concurrently.
But that may not be enough: if you have more than one request in sequence (like most performance tests do), you will be able to run even fewer concurrent requests, since ports are usually not closed right away after a request is done, so the next request has to use a different port.
Don't forget that the server must also be able to handle a similar load.
But even that may not be enough: JMeter needs enough memory to accommodate 10-30K threads. The size of a thread in memory depends on a few things, among them how your script is designed.
Bottom line: even with all the tweaking, port availability realistically limits the number of concurrent requests JMeter can run from one machine to 10-30K concurrent users. Thus, to test 200K users, you need about 7-20 JMeter machines.
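For reference, the OS tweaking mentioned above usually means widening the ephemeral (client-side) port range and raising the open-file limit. A Linux sketch with illustrative values (Windows uses registry settings instead):

sysctl -w net.ipv4.ip_local_port_range="1024 65535"   # allow more outgoing ports
sysctl -w net.ipv4.tcp_tw_reuse=1                     # reuse sockets stuck in TIME_WAIT for new outgoing connections
ulimit -n 65535                                       # raise the open file descriptor limit for the JMeter shell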
Running 10K users
If you were testing in a dedicated environment (where clients and servers are next to each other with an optimized network between them), you should be able to run 10K users from one machine, provided other limits, e.g. memory and max ports, were properly tweaked. But it sounds like you are trying to test over an internet connection?
Well, 2 problems here:
Performance testing over an internet connection is absolutely pointless. You don't know what is between you and the servers, and how those things in between are changing the shape of the load. You won't know whether it was 10K concurrent requests or 10K sequential requests. And the results will only tell you how fast your internet connection is.
Any ISP will have a limit on the number of connections from one IP, and it will be well below 10K. Not to mention that some ISPs may flag or temporarily ban your IP for such a flood.
Bottom line: whoever asked you to test 10K or 200K concurrent users should also provide a set of JMeter machines to run this test from. Those machines should be close to the tested servers, preferably without any extra routing in between (or with well-known and well-configured routing).
I don't think that stressing your application by kicking off 200k users at once is a good idea (the same applies to 10k users), as the results, even in case of success, won't tell the full story. Moreover, in case of errors you will only be able to state that 10k users in 10 seconds is not possible; you won't have information like:
What was the number of users when errors started occurring
What is the correlation between number of concurrent users and response time and/or throughput
What is the saturation point (the maximum system performance)
So I would recommend re-running your test, increasing the load gradually from one virtual user to 10,000, and seeing where it breaks. The breaking point is called the bottleneck, and its cause can be determined as follows:
First of all, make sure you're following JMeter Best Practices, as the default JMeter configuration is not suitable for high loads, and if JMeter is not capable of sending requests fast enough you will not get accurate results. Most probably you will have to run JMeter in distributed mode; it is highly unlikely you will be able to mimic 20k requests per second from a single machine (or it would have to be a very powerful one)
Set up monitoring of the application under test in order to ensure that it has enough headroom in terms of CPU, RAM, Disk, etc. You can use JMeter PerfMon Plugin for this
Check your application infrastructure: like JMeter, the majority of middleware components (web/application servers, load balancers, databases, etc.) ship with default configurations suitable for development and debugging; they need to be tuned for high throughput.
Check your application code using profiler tools / telemetry; the reason could be, for example, a slow DB query, an inefficient algorithm, a large object, a heavy function, etc.
I have created a test plan for creating user profiles.
I want to run my test plan for 100 users. When I run it for 10 users it runs successfully with a ramp-up time of 2 seconds, but when I try it for 100 users or more it fails; I am giving a ramp-up time of 40 seconds for 100 users.
I am not able to understand what the problem might be.
In my test plan the thread users are differentiated by id.
Thanks in Advance.
It's a broad question; this behavior can be caused by:
Your application under test can't handle a load of 100 threads. Check the logs for errors and make sure that the application/web server and/or database configuration allows 100+ concurrent connections. You can also check the "Latency" metric to see whether the problem lies in the infrastructure or the application itself.
Your load generator machine can't create 100 concurrent threads. If so - you'll need to consider JMeter Distributed Testing
Your script isn't optimized, e.g. it uses memory-consuming listeners like "View Results Tree", graph listeners, or regular expression extractors. Try following the JMeter Performance and Tuning Tips guide and see whether that resolves your issue.
Agreed with Dmitri, the reason could be one of the above three.
One more thing you can try.
You can run JMeter in GUI mode to validate your script, and after validation you can run it in non-GUI mode, which saves a lot of memory and CPU processing (the GUI is basically the heaviest part of JMeter).
You can run your JMeter script in non-GUI mode like this (the test plan file and proxy values are placeholders):
jmeter -n -t test_plan.jmx -l results.jtl -H proxy_host -P proxy_port
Generally, on a single dual-core machine with 2 GB RAM (the load generator in your case), a 100-user test can be carried out successfully.
Some more things you can look at to find the actual bottleneck:
1. Check the application server logs (the server on which your application is hosted).
If there are any failures there, look at the performance counters on the server (CPU, memory, network, etc.) to see whether anything is overloaded.
(If the server is Windows, check using perfmon; if Linux, try sar.)
If something is overloaded, the reason is that your app server can't take the load of 100 users; try tuning it further.
2. Check the load generator's performance counters (JVM heap usage, CPU, memory, etc.).
If the JVM heap size is too small, try increasing it (a concrete sketch follows after this list); if other counters are overloaded, try distributed load testing.
3. Remove unwanted/heavy listeners and assertions from the script.
maybe this will help :)
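To make point 2 concrete, a minimal sketch for a Linux load generator; jps/jstat ship with the JDK, <jmeter_pid> is a placeholder, and the HEAP variable is read by recent JMeter startup scripts (older versions need bin/jmeter or jmeter.bat edited instead):

jps -l | grep -i apachejmeter      # find the JMeter JVM's process id
jstat -gcutil <jmeter_pid> 5000    # watch heap occupancy every 5 seconds during the 100-user run
export HEAP="-Xms2g -Xmx2g"        # if heap is the limit, give JMeter more before the next run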
I'm planning to do a load test of our ASP/.NET web application and need to simulate about 600 concurrent users on our system.
Initially we'll just be running the load test tools (probably JMeter or WCAT/WAST) from our personal workstations, which are Windows 7 32-bit Dells (dual-core processors). I was wondering how many users I can expect to be able to simulate from one client.
If I can easily do 200 users per client, I'll need to identify 2-3 more clients for the test.
I wanted to ask the community based on their experience how many users I should expect per client on a standard windows box.
Any help is appreciated!
This highly depends on the test plan itself and cannot be answered that easily.
If, for example, you have 500 users that each make one request and then wait on a five-minute timer, this should work. If all users constantly send requests without waiting, this puts much more load on your machine.
It depends on the samplers in use. HTTP requests are less costly than SOAP requests for example.
It also depends on the listeners you have active.
For a normal load test I usually have around 100-300 threads active. I would suggest starting with such a number and monitoring the load (CPU, network) on your client to see how much headroom there is.
Without more details about the test scenarios and the hardware, it is hard to give specific answers. But our Load Tester product can (usually) handle this level of users pretty easily on a single machine (assuming relatively modern hardware). The testing tool should scale linearly up to a point, so you should be able to get a good estimate by running 50 users through a scenario that is similar to what you expect to test.