JMeter: how do I get the right number for load?

I have recorded a script against the application that I want to test. Now I am having a hard time deciding what number of users the application can handle without any issue, and finding the maximum number of users. Here is what I have done:
I have run the JMeter script for 10, 50, 100, and 150 users.
Up to 50 users it runs like a charm. After about 80 users the throughput starts to come down and some samples do not show up in the Aggregate Report.
At about 150 users I see heap memory problems in my console over a period of time. Is it an application problem or a problem with my machine?
Do you have an article where I could read about how to come to a conclusion about THE number?
UPDATE: after increasing the heap size, it runs smoothly for 100 users. I am even more confused now.
Thank you

The problem can be anywhere!
Server Performance Metrics collector:
First you need an agent running in the application server to monitor the server performance while you are running the test.
This link will give you an idea about the set up.
JMeter Best Practices:
I think that you are running your test in GUI mode with listeners. Most likely the problem is with your machine/your test. Ensure that you follow this.
Samples not showing in aggregate Report:
You already asked a question about this on SO. Do not select 'Successes' in the Log/Display Only section of the listener when writing the results to the JTL: it will not write the failed requests' details, and you might need all the results. Once the JTL is created, you can always filter for 'Success'-only results as and when you want.
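Filtering after the fact is straightforward. A minimal sketch in Python, assuming the default CSV-format JTL with a header row and its standard `success` column (the file names are placeholders):

```python
import csv

def filter_successes(jtl_in, jtl_out):
    """Copy only the rows whose 'success' column is 'true' into a new JTL."""
    with open(jtl_in, newline="") as src, open(jtl_out, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            if row["success"] == "true":
                writer.writerow(row)
```

Because the original JTL keeps every sample, you can re-run this with any other predicate (by label, by response code) without repeating the load test.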

Related

Visual Studio Cloud Load Test Average Test Time Seems Long

I have a WebAPI service that I put together to test throughput, hosted in Azure. I have it set up to call Task.Delay with a configurable number (e.g. webservice/api/endpoint?delay=500). When I run against the endpoint via Fiddler, everything works as expected: delays, etc.
I created a Load Test using VS Enterprise and used some of my free cloud load testing minutes to slam it with 500 concurrent users over 2 minutes. After multiple runs of the load test, it says the average test time is roughly 1.64 seconds. I have turned off think times for the test.
When I run my request in Fiddler concurrently with the Load test, I am seeing sub-second responses, even when spamming the execute button. My load test is doing effectively the same thing and getting 1.64 second response times.
What am I missing?
Code running in my unit test (which is then called for my load test):
var client = new HttpClient { BaseAddress = new Uri(CloudServiceUrl) };
// GetAsync returns a Task<HttpResponseMessage>; block on .Result (or await it)
// so the test actually waits for the response to complete.
var response = client.GetAsync($"{AuthAsyncTestUri}&bankSimTime={bankDelay}&databaseSimTime={databaseDelay}").Result;
AuthAsyncTestUri is the endpoint for my cloud-hosted service.
There are several delay(), sleep(), pause(), etc. methods available to a process. These methods cause the thread (or possibly the program or process, for some of them) to pause execution. Calling them from code used in a load test is not recommended; see the bottom of page 187 of the Visual Studio Performance Testing Quick Reference Guide (Version 3.6).
Visual Studio load tests do not have one thread per virtual user. Each operating system thread runs many virtual users. On a four-core computer I have seen a load test using four threads for the virtual users.
Suppose a load test is running on a four-core computer and Visual Studio starts four threads to execute the test cases. Suppose one virtual user calls sleep() or similar. That will suspend that thread, leaving three threads available to execute other virtual user activity. Suppose that four virtual users call sleep() or similar at approximately the same time. That will stop all four threads and no virtual users will be able to execute.
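The effect is easy to demonstrate outside Visual Studio. A hedged sketch in Python, where a small thread pool stands in for the load engine's worker threads (the sizes and timings are illustrative, not taken from the question):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def virtual_user(_):
    # Each "virtual user" blocks its worker thread for 0.2 s,
    # much as sleep()/Task.Delay(...).Wait() would in a load test.
    time.sleep(0.2)

start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:   # 4 worker threads, like a 4-core engine
    list(pool.map(virtual_user, range(8)))        # 8 virtual users to run
elapsed = time.monotonic() - start
# With 8 sleeping users on 4 threads the run takes about 0.4 s, not 0.2 s:
# a blocked worker thread cannot execute any other virtual user in the meantime.
```

The same arithmetic scales up: with 500 virtual users on a handful of worker threads, every blocking sleep inflates the measured test time even though the server itself responds quickly.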
Responding to the following comment that was added to the question:
I did try running it with a 5 user load, and saw average test times of less than 500 ms, which match what I see in my Fiddler requests. I'm still trying to figure out why the time goes up dramatically for the 500 user test while staying the same for Fiddler requests run in the middle of the 500 user test.
I think that this comment highlights the problem. At a low user load, the Visual Studio load test and the Fiddler test give similar times. At higher loads something between the load test and the server is limiting throughput and causing the slowdown. It would be worth examining the network route between the computer running the tests and the system being tested. Are there any slow segments on that path? Are there any segments that might see the load test as a denial of service attack and hence might slow down the traffic?
Running a test for as little as 2 minutes does not really show how the test runs. The details in the question do not tell how many tests started, how many finished, and how many were abandoned at the end of the two-minute run. It is possible that many test cases were abandoned and that the average time of those that completed was 1.6 seconds.
If you have the results of the problem run then look at the "details" section of the results. Expand the slider below the image to include the whole run. Tick the option (top left corner) to highlight failing tests. I would expect to see a lot of red at the two minute mark for failing tests. However, the two minute run may be too short compared to the sampling interval (in the run settings) to see much.
Running a first test at 500 users tells you very little. It tells you either that the system copes with that load or that it does not. You need to run the test at several different user loads. Then you start to learn where the boundary between working and not working lies. Hence I recommend using a stepped load.
I believe you need at least one more test run to understand what is happening. I suggest doing a run as follows. Set a one minute cool-down period. Set a stepped load: start at 5 users as you know that that works. Increment by 1 user every two seconds until 100 users. That will take 190 seconds. Run for about another minute at that 100 user load. Total of 4 minutes 10 seconds. Call it 4 minutes. Adding in the one minute cool down makes (5 minutes) x (100 VU) = 500 VUM, which is a small portion of the free minutes per month. After the run look at the graphs of average test times. If all is OK on that test then you could try another that ramps up more quickly to say 500 users.
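The arithmetic behind that suggested run can be sketched as a quick calculation (the figures are the ones proposed above; the billing model of peak users times wall-clock minutes is an assumption about how the cloud service meters usage):

```python
def stepped_load_plan(start_users, end_users, step_seconds, hold_seconds, cooldown_seconds):
    """Rough timing/cost for a stepped load run that adds 1 user per step."""
    ramp_s = (end_users - start_users) * step_seconds
    total_s = ramp_s + hold_seconds + cooldown_seconds
    # Assumed billing: peak virtual users x wall-clock minutes.
    vum = end_users * total_s / 60
    return ramp_s, total_s, vum

# The run suggested above: 5 -> 100 users, +1 every 2 s, ~1 min hold, 1 min cool-down.
ramp, total, vum = stepped_load_plan(5, 100, 2, 60, 60)
# ramp = 190 s; total = 310 s (a little over 5 minutes); vum ≈ 517,
# the same ballpark as the 500 VUM estimated above.
```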

Generating graphs from distributed test seems to give results for one client/slave

I'm running a distributed test using 5 JMeter clients (slaves). Each client is set to run 50 users. At the conclusion of the test I generate a series of graphs from the resulting JTL along with a SynthesisReport. The SynthesisReport details 250 samples for each request, as you'd expect, however the TimeVsThreads and the ThreadsStateOverTime peak at 50 users, as if they were showing the results from just one of the clients.
I've confirmed that the jmeter.properties files for each client are the same, as I suspected it was possibly an issue with each client's results file configuration and saveservice settings.
I can't imagine this is by design. Has anyone experienced something similar, and if so, how was it solved?
As per documentation:
http://jmeter-plugins.org/wiki/ActiveThreadsOverTime/
Just name your thread groups using a unique id for each generator (the hostname, or a property you pass to the injector and use with the __P function if you have more than one injector per host) and it will work fine.
This is normal for Jmeter Distributed Testing.
The reason this occurs is that each load generator separately starts User Threads 1-50, so when cmdrunner runs, it sees one response from each generator's User Thread 1 (5 in total) but can't differentiate between them.
If you're using a custom reporter tool (that wraps cmdrunner), you can multiply your peak users by your load generators to display a more accurate number at the top of your Report. But as long as you're calling cmdrunner, you won't be able to see the actual number of users on your graphs.
This is normal behavior of JMeter: 5 clients will each run 50 threads.
Open JMeter, run on all remote hosts, and check Active Threads: each client will run 50 threads.
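The collision is easy to see if you aggregate a merged result set by thread name. A small illustration (the generator hostnames and thread-group names are made up):

```python
# Five generators each run threads named "Thread Group 1-1" .. "Thread Group 1-50",
# so a merged result set contains five samples per thread name.
samples = [(host, f"Thread Group 1-{t}")
           for host in ("gen1", "gen2", "gen3", "gen4", "gen5")
           for t in range(1, 51)]

plain = {thread for _host, thread in samples}                 # names collide across generators
prefixed = {f"{host} {thread}" for host, thread in samples}   # unique per generator

# len(plain) == 50: the thread graphs peak at 50 "users".
# len(prefixed) == 250: prefixing names with the generator restores the true count.
```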

Difference between Jmeter load test scenarios

I am load testing an ASP.NET website using JMeter with the two scenarios below. Scenario 1 gives me the result I expect (which may itself be wrong) and Scenario 2 does not give the same result, even though I have used the same number of requests within the same time. Can someone explain why?
Scenario 1.
Scenario 2.
Ramp-up time does not determine when any of your tests are going to complete; it only controls when your threads start.
Also, the number of threads any test can create concurrently is limited by the memory you've allocated to JMeter. Even though you've set the thread count to 60000, if you've hit the maximum memory you've allocated, the threads will either queue up or never be created (watch the JMeter logs for thread creation errors).
I recommend tuning your JMeter instance so you have some stability to your tests, here's a good guide. LINK
The number of requests you have sent might be the same, but the concurrent user load on the server is completely different.
I clarified a similar question a few weeks ago; you can check the answer here.
Check Here
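Why the same request count can produce a very different concurrent load is worth spelling out. A toy model (the numbers are illustrative, not the question's actual scenarios):

```python
def peak_concurrency(threads, rampup_s, request_duration_s):
    """Rough peak number of threads active at once, assuming threads start
    evenly over the ramp-up and each runs for request_duration_s."""
    if rampup_s <= request_duration_s:
        return threads  # everything overlaps at the end of the ramp-up
    # Otherwise only threads started within one request-duration window overlap.
    return max(1, int(threads * request_duration_s / rampup_s))

# 600 requests either way, but very different pressure on the server:
slow_ramp = peak_concurrency(threads=600, rampup_s=600, request_duration_s=60)  # about 60 at once
fast_ramp = peak_concurrency(threads=600, rampup_s=30, request_duration_s=60)   # all 600 at once
```

Two test plans with identical totals can therefore stress the server very differently, which is exactly the difference between the two scenarios.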

With JMeter, if we increase load,is it possible that our tested server crash?

Suppose I run a load test with 5000 threads, or maybe more.
Will the server under test crash at a certain level?
As EJP said, it is JMeter's purpose to find the limit of the tested application and see how it reacts under load.
So yes it is perfectly possible.
You should read JMeter Manual.
If you are doing any kind of performance test (load, stress, soak, etc.) you will want to know at what point your application server falls over, i.e. its breaking point.
Once you've found your upper limit, start dialing back the number of threads until you find your application's "sweet spot", for example CPU usage between 70% and 80%.

does testing a website through JMeter actually overload the main server

I am using JMeter to test my web server https://buyandbrag.in .
I have tested it with 100 users, but the main server does not appear to be under pressure.
I want to know whether the test is really pressuring the main server (a cloud server I am using), or just using the resources of the client machine where the tool is installed.
Yes, as mentioned, you should be monitoring both servers to see how they handle the load. The simplest way to do this is with top (if your server OS is *NIX). You should also be watching the network activity, i.e. bandwidth and connection status (TIME_WAIT, CLOSE_WAIT and so on).
Also, if you're using Apache, keep an eye on the logs; you should see the requests being logged there.
Good luck with the tests
I want to know how many users my website can handle. When I tested with 50 threads, the CPU usage of my server increased, but the connection log showed just 2 connections; the bandwidth usage is also not that much.
Firstly, what connections are you referring to? Apache, DB, etc.?
Secondly, if you want to see how many users your current setup can handle, you need to create a profile or traffic model of what an average user will do on your site.
For example:
Say 90% of the time they will search for something
5% of the time they will purchase x
5% of the time they login.
Once you have your traffic model defined, implement it in JMeter, then start increasing your load in increments: run your load test for 10 minutes with x users, after 10 minutes increment that number, and so on until you find your breaking point.
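That 90/5/5 split can be sanity-checked before you build it into a test plan. A small sketch (the action names mirror the example above; the seed is arbitrary):

```python
import random

def pick_action(rng):
    """Weighted choice matching the example traffic model above:
    90 % search, 5 % purchase, 5 % login."""
    return rng.choices(["search", "purchase", "login"], weights=[90, 5, 5])[0]

rng = random.Random(42)  # fixed seed so the tally is reproducible
tally = {"search": 0, "purchase": 0, "login": 0}
for _ in range(10_000):
    tally[pick_action(rng)] += 1
# tally["search"] lands close to 9000, i.e. ~90 % of simulated user actions.
```

In JMeter itself the equivalent is typically a Throughput Controller (or Random Controller with weights) per action.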
If you graph your responses you should see two main things:
1) The optimum response time / number of users before the service degrades.
2) The tipping point, i.e. at what point you start returning 503s, etc.
Now you'll have enough data to scale your site or to start making performance improvements from a code point of view.
