How to do a load test to find the maximum number of virtual users (vusers) the application can handle? - JMeter

We have a PHP application that we need to load test. The application team wants to know how many users the application can withstand without crashing.
How should we go about this load test? Please help us.
Thanks in advance.

It's an open-ended question: an application's capacity depends on the application design and on the server where it is hosted. Ultimately it also depends on the purpose of the application (public or private) and on the customer requirements (expected number of users).
You can find notes on load-test strategy in the MSDN article Real-World Load Testing Tips to Avoid Bottlenecks When Your Web App Goes Live. My suggestion is that the application should be able to handle at least 10% of the maximum expected users simultaneously (at least until it becomes popular).

Try recording and running the test in JMeter.
Use the Summary Report, Summary Error Report and View Results Tree listeners to check your server's health.
Keep increasing the thread count until your thread group assertions start failing.
My application involves complex data retrieval from RDC using XML. If errors stay below 2% of the total samples, I consider it healthy. My app can easily handle 50 concurrent threads. Try yours, and good luck.
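To make the "errors below 2% of total samples" criterion concrete outside of JMeter, here is a minimal Java sketch that drives a fixed number of concurrent threads against one URL and reports the error percentage. The target URL, thread count and requests-per-thread value are assumptions to replace with your own; JMeter's Summary Error Report gives you the same figure without any coding.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ErrorRateCheck {
    public static void main(String[] args) throws Exception {
        URI target = URI.create("http://localhost:8080/login.php");  // hypothetical URL
        int threads = 50;              // concurrency level under test
        int requestsPerThread = 20;

        HttpClient client = HttpClient.newHttpClient();
        AtomicInteger errors = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(threads);

        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < requestsPerThread; i++) {
                    try {
                        HttpResponse<Void> resp = client.send(
                                HttpRequest.newBuilder(target).GET().build(),
                                HttpResponse.BodyHandlers.discarding());
                        if (resp.statusCode() >= 400) {
                            errors.incrementAndGet();
                        }
                    } catch (Exception e) {
                        errors.incrementAndGet();   // connection failures count as errors too
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);

        double errorRate = 100.0 * errors.get() / (threads * requestsPerThread);
        // Applying the "below 2% is healthy" rule of thumb from the answer above.
        System.out.printf("%d threads: %.2f%% errors -> %s%n",
                threads, errorRate, errorRate < 2.0 ? "healthy" : "unhealthy");
    }
}
```

Re-run the sketch with a higher thread count each time and stop once the reported rate crosses your threshold; that crossing point is the capacity estimate this answer describes.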

You can identify the maximum number of users experimentally.
First, make the virtual users behave like real users. Follow this link to see how.
Then increase the number of users and observe the application's behaviour. To schedule the increasing load, you can use the Throughput Shaping Timer.
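For intuition about what a shaped, gradually increasing load looks like (the Throughput Shaping Timer itself is configured inside JMeter, not in code), here is a rough Java sketch that steps the request rate up once a minute; the target URL and the requests-per-second steps are assumptions.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ShapedLoadSketch {
    public static void main(String[] args) throws Exception {
        URI target = URI.create("http://localhost:8080/index.php");  // hypothetical target
        HttpClient client = HttpClient.newHttpClient();
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(4);

        // Ramp the request rate in steps of 5, 10, 15, ... requests per second,
        // holding each step for 60 seconds, similar to a shaped load profile.
        int[] rpsSteps = {5, 10, 15, 20, 25};
        for (int rps : rpsSteps) {
            long periodMillis = 1000L / rps;
            var task = scheduler.scheduleAtFixedRate(
                    () -> client.sendAsync(
                                    HttpRequest.newBuilder(target).GET().build(),
                                    HttpResponse.BodyHandlers.discarding())
                            .thenAccept(r -> System.out.println("status " + r.statusCode())),
                    0, periodMillis, TimeUnit.MILLISECONDS);
            Thread.sleep(60_000);   // hold this step for one minute
            task.cancel(false);     // then move on to the next, higher rate
        }
        scheduler.shutdown();
    }
}
```

The point to observe is the same as with the plugin: at which step the error rate or response time starts to degrade.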

Related

How to determine the number of threads for each user in Jetty

While reading about Jetty I came across this passage, and I am a newbie to Jetty:
Load generators should be written in an asynchronous programming style, so that limited threads do not limit the maximum number of users that can be simulated. If the generator is not asynchronous, then a thread pool of 2000 may only be able to simulate 500 or fewer users. The Jetty HttpClient is an ideal basis for building a load generator, as it is asynchronous and can be used to simulate many thousands of connections (see the CometD Load Tester for a good example of a realistic load generator).
I wonder how to determine the number of threads for each user in Jetty.
I don't know how to test this or which tool I should use.
Any hint will be appreciated.
The number of threads you will require depends on the kind of load your application needs to handle.
A typical web page is currently about 30 resources totaling about 40 MB, all of which need to be requested before the page is "complete" and can be rendered fully.
If you use HTTP/2, which fetches the data faster, you will place a greater demand on the thread pool than if you use HTTP/1.1.
But say you need Jetty to supply data to a mobile chess-playing app: fewer connections overall, so a much lower demand for threads per user.
The advice is the same as it always is: monitor your application's behavior now, in testing, in QA, and in production. Learn the behavior and adjust your configuration accordingly.
It is impossible to "do the math" and set up the perfect configuration in advance without monitoring. You will keep adjusting these settings over time as your application evolves and you learn more about its behavior.
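To illustrate the asynchronous style the quoted passage recommends, here is a minimal sketch using Jetty's HttpClient (Jetty 9.x-style API); the target URL and the number of simulated users are assumptions, and a realistic generator such as the CometD load tester does far more (think times, sessions, varied requests).

```java
import org.eclipse.jetty.client.HttpClient;
import java.util.concurrent.CountDownLatch;

public class AsyncLoadSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        client.start();

        int simulatedUsers = 1000;                       // assumption: adjust to taste
        CountDownLatch done = new CountDownLatch(simulatedUsers);

        for (int i = 0; i < simulatedUsers; i++) {
            // send(Response.CompleteListener) is asynchronous: the calling thread is
            // not blocked waiting for a response, so a small thread pool can drive
            // many simulated users at once.
            client.newRequest("http://localhost:8080/")  // hypothetical target
                  .send(result -> {
                      if (result.isFailed()) {
                          System.out.println("request failed: " + result.getFailure());
                      }
                      done.countDown();
                  });
        }

        done.await();   // wait for all responses before shutting down
        client.stop();
    }
}
```

Compare this with a blocking client, where each in-flight user would pin a thread for the full duration of its request; that is the effect the Jetty documentation is warning about.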

503 error while running JMeter with 400 threads - is it because of server issues?

I am getting a 503 error while running JMeter with 400 thread users. Is it because of server issues? When I run the thread group with 100 users and a ramp-up period of 25 seconds, everything works fine, but with 400 users I get 503 errors.
Given that you don't experience any issues with 100 users but do with 400, it is most probably a server-side issue connected with overload, so congratulations on finding the bottleneck.
You can either report it as is or dig a little deeper to find the cause. Suggested steps:
Instead of kicking off all 400 users at once, try increasing the load gradually while looking at the Response Times vs Threads and Transaction Throughput vs Threads charts. Ideally the response time should remain flat and the throughput should grow as the number of threads increases. When the response time starts climbing and the throughput starts dropping, you have reached the saturation point, and at that stage you can state the maximum number of users your application can support (see the sketch after these steps).
Check your application logs and configuration, as the application might not be properly tuned for high loads. You can use 15 Simple ASP.NET Performance Tuning Tips as a reference, or look for a similar guide for your technology stack.
Ensure that your application has enough headroom to operate in terms of CPU, RAM, network, etc., as the problem may simply be a lack of resources. This can be checked using e.g. the JMeter PerfMon Plugin.
Repeat your test with profiler telemetry in place; this way you will be able to localize the problem and point at the problematic piece of code or inefficient algorithm.
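As a rough, non-JMeter illustration of how the saturation point shows up in the numbers, the sketch below steps the thread count up and prints the average response time and throughput for each step; the target URL, step sizes and step duration are assumptions. The Response Times vs Threads and Transaction Throughput vs Threads listeners plot exactly this for you.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class SaturationSweep {
    public static void main(String[] args) throws Exception {
        URI target = URI.create("http://localhost:8080/");   // hypothetical target
        HttpClient client = HttpClient.newHttpClient();

        for (int threads = 50; threads <= 400; threads += 50) {
            AtomicLong samples = new AtomicLong();
            AtomicLong totalMillis = new AtomicLong();
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            long stepEnd = System.currentTimeMillis() + 30_000;   // 30 s per step

            for (int t = 0; t < threads; t++) {
                pool.submit(() -> {
                    while (System.currentTimeMillis() < stepEnd) {
                        long start = System.currentTimeMillis();
                        try {
                            client.send(HttpRequest.newBuilder(target).GET().build(),
                                        HttpResponse.BodyHandlers.discarding());
                        } catch (Exception ignored) {
                        }
                        totalMillis.addAndGet(System.currentTimeMillis() - start);
                        samples.incrementAndGet();
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(2, TimeUnit.MINUTES);

            double avgResponse = (double) totalMillis.get() / samples.get();
            double throughput = samples.get() / 30.0;   // requests per second
            // When avgResponse starts rising and throughput stops growing,
            // you have found the saturation point.
            System.out.printf("%d threads: avg %.0f ms, %.1f req/s%n",
                    threads, avgResponse, throughput);
        }
    }
}
```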
If the server isn't down or being restarted, then yes, a 503 indicates overload.
Common causes are a server that is down for maintenance or one that is overloaded.
You need to find what stops the server from serving 400 concurrent requests/users.
Note that if you are testing on a test environment which isn't equal or similar to the production environment, the result may not reflect the load the production server can endure.

What's the impact of response codes 400 and 503? Can we ignore these codes if my primary focus is to measure the loading time of a web application?

I am testing a web application's login page loading time with 300 thread users and a ramp-up period of 300 seconds. Most of my samples return response code 200, but a few of them return response codes 400 and 503.
My goal is simply to check the performance of the web application when 300 users start using it.
I am new to JMeter and have basic knowledge of programming.
My questions:
1. Can I ignore these errors and focus just on the timings from the Summary Report?
2. If I really need to fix these errors, how do I fix them?
These errors indicate two different problems:
HTTP status 400 stands for Bad Request - it means you are sending malformed requests which the server cannot understand. You should inspect the request details and amend the JMeter configuration, as the problem is in your script.
HTTP status 503 stands for Service Unavailable - it indicates a problem on the server side, i.e. the server is not capable of handling the load you are generating. This is something you can already report as an application issue. You can try to identify the underlying cause by:
looking into your application log files
checking whether your application has enough headroom to operate in terms of CPU, RAM, network, disk, etc.; this can be done using an APM tool or the JMeter PerfMon Plugin
re-running your test with profiler telemetry to see what is going on under the hood during the longest response times
So first of all, make sure your test is doing what it is supposed to do by running it with 1-2 users/loops and inspecting the request/response details; at this stage you should not see any errors (a single-request check like the sketch below can help here).
Going forward, increase the load gradually and correlate the growing number of virtual users with the growing response times and number of errors.
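Since an HTTP 400 normally means the request itself is malformed, it can help to reproduce a single request outside the load test and look at exactly what comes back. Below is a minimal Java sketch; the URL, header and form parameters are assumptions standing in for whatever your JMeter sampler actually sends.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SingleRequestCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Hypothetical login request; mirror whatever your JMeter sampler sends.
        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("http://localhost:8080/login"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString("user=test&password=secret"))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // If this prints 400, the request itself is malformed and the JMeter script
        // needs fixing; a 200 here but 503s under load points at the server instead.
        System.out.println("Status:  " + response.statusCode());
        System.out.println("Headers: " + response.headers().map());
        System.out.println("Body:    " + response.body());
    }
}
```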
Performance testing is different from load testing. What you are doing is load testing.
Performance testing is more about how long an action takes. I typically capture performance figures for a given action on a system that is not under load.
This gives a baseline that I can then refer to during load tests.
Hopefully you've been given some performance figures to test against, e.g. the system must be able to handle 300 requests in two minutes.
When moving on to load, I run a series of load tests with an increasing number of users/threads and capture the results from each test.
Armed with this, I can see how load degrades performance to the point where errors start to show up. This gives you an idea of how much typical load the system can handle.
I'd also run soak tests. This is where I'd run JMeter for a long period with typical (not peak) load to make sure the system can handle sustained load.
In terms of the errors you're seeing, no, I would not ignore them. Assuming your test is calling the same endpoint, it seems safe to say the code is fine; it's the infrastructure struggling with the load you're throwing at it.

Apache JMeter Concurrent Users Performance Testing

I want to test 400 concurrent users so that we can pass our load-testing scenario. I am using the configuration below in Apache JMeter, which throws a lot of errors.
Number of Threads (Users): 400
Ramp-Up Time: 1
Loop Count: Forever (for a duration of 1 minute)
We are not telepathic enough to tell what's wrong with your setup without seeing the configuration and the nature of the errors.
Several generic hints:
Run your test with 1-2 users/iterations to ensure it works and does what it is supposed to do. Check request and response details using the View Results Tree listener.
Make sure to run your test in command-line non-GUI mode and disable all the listeners while the test is running.
It is better to increase and decrease the load gradually, so consider using a longer ramp-up time and increasing the test duration accordingly, i.e.:
During the first minute virtual users arrive
They then hold the load for another minute
During the last minute virtual users leave
This way you will be able to tell what the load was when the errors started occurring, what the maximum number of users your application can support is, where the saturation point is, whether the application recovers when the load returns to normal, etc. (see the sketch after these hints). See the JMeter Ramp-Up - The Ultimate Guide article for more details.
It might be the case that you have found the bottleneck, i.e. your application fails to support 400 concurrent users; now you need to find the reason, which may be:
incorrect middleware configuration (wrong web server, database or load balancer settings)
your application simply lacking resources (CPU, RAM, network, swap, etc.); you can check this using the JMeter PerfMon Plugin
if the infrastructure configuration is OK and there is enough headroom for the application to operate, the reason is most probably in the application code; inspect what it is doing using APM or profiler tools and report the issue.
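To make the one-minute ramp-up / one-minute hold / one-minute ramp-down profile mentioned above concrete, here is a rough Java sketch that staggers when each virtual user starts and stops; the target URL and user count are assumptions, and in JMeter this profile is normally achieved with the ramp-up setting or a plugin thread group rather than with code.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class RampProfileSketch {
    public static void main(String[] args) throws Exception {
        URI target = URI.create("http://localhost:8080/");   // hypothetical target
        HttpClient client = HttpClient.newHttpClient();

        int users = 400;
        long rampUpMillis = 60_000;   // minute 1: virtual users arrive
        long holdMillis   = 60_000;   // minute 2: everyone active
                                      // minute 3: users leave in arrival order

        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(users);
        for (int u = 0; u < users; u++) {
            long startDelay = rampUpMillis * u / users;       // staggered start
            scheduler.schedule(() -> {
                // Each user stays active for ramp-up + hold, so the last user to
                // arrive is also the last to leave: a symmetric load profile.
                long stopAt = System.currentTimeMillis() + rampUpMillis + holdMillis;
                while (System.currentTimeMillis() < stopAt) {
                    try {
                        client.send(HttpRequest.newBuilder(target).GET().build(),
                                    HttpResponse.BodyHandlers.discarding());
                    } catch (Exception ignored) {
                    }
                }
            }, startDelay, TimeUnit.MILLISECONDS);
        }
        scheduler.shutdown();
        scheduler.awaitTermination(10, TimeUnit.MINUTES);
    }
}
```

Correlating the timestamps of the first errors with this schedule tells you roughly how many active users the application was carrying when it started to fail.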

How many threads/users can one Windows client simulate during my load test?

I'm planning to do a load test of our ASP/.NET web application and need to simulate about 600 concurrent users on our system.
Initially we'll just be running the load test tools (probably JMeter or WCAT/WAST) from our personal workstations, which are Windows 7 32-bit Dells with dual-core processors. I was wondering how many users I can expect to be able to simulate from one client.
If I can easily do 200 users per client, I'll need to identify 2-3 more clients for the test.
I wanted to ask the community, based on their experience, how many users I should expect per client on a standard Windows box.
Any help is appreciated!
This depends heavily on the test plan itself and cannot be answered that easily.
If, for example, you have 500 users that each make one request and then wait on a timer for five minutes, this should work. If all users constantly make requests without waiting, this will put much more load on your machine.
It also depends on the samplers in use. HTTP requests are less costly than SOAP requests, for example.
It also depends on the listeners you have active.
For a normal load test I usually have around 100-300 threads active. I would suggest starting with such a number and monitoring the load (CPU, network) on your client to see how much headroom there is.
Without more details about the test scenarios and the hardware, it is hard to give specific answers. But our Load Tester product can (usually) handle this level of users pretty easily on a single machine (assuming relatively modern hardware). The testing tool should scale linearly up to a point, so you should be able to get a good estimate by running 50 users through a scenario that is similar to what you expect to test.
