Why is VS2013 load testing only running 7 requests per second?

I am running some load tests, and for some reason VS is reporting only 7 req/sec. Is this normal?
I have a stepped profile, starting at 10, ending at 100, and I would have thought it would run the test for each user.
I.e., 10 users, 10 requests per second?

First, you're running the load test from your local machine (Controller = Local Run). You can run load tests from your developer machine, but you usually can't generate enough traffic to really see how the application responds. To simulate a lot of users, you need a Load Test Rig (on-premises, or using Windows Azure cloud testing). This can be a problem especially if you're testing a web site hosted on the same computer.
Check the CPU on your machine when running the load test (in the graph): if it's over 70%, the results are biased.
Second, how did you record the web tests? When using the Web Test Recorder (in IE), it adds a think time to each request. Think times are used to simulate the human behavior that causes people to wait between interactions with a web site: a real user will never open 4 pages in the same second. You can check Think Time in each request's properties. A high value may explain why you get only a few requests/sec while the CPU stays low.
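To see how think time alone can produce a figure like 7 req/sec, here is a back-of-the-envelope sketch in Java (the response and think time values are illustrative assumptions, not measurements from the question):

    // Rough throughput estimate for a load test with think times.
    // The timing values are illustrative assumptions.
    public class ThroughputEstimate {
        public static void main(String[] args) {
            int users = 10;                // concurrent virtual users
            double responseTimeSec = 0.2;  // assumed average server response time
            double thinkTimeSec = 1.2;     // assumed average recorded think time

            // Each virtual user sends a request, waits for the response,
            // then pauses for the think time before the next request.
            double secondsPerRequest = responseTimeSec + thinkTimeSec;
            double requestsPerSec = users / secondsPerRequest;

            System.out.printf("~%.1f req/sec with %d users%n", requestsPerSec, users);
            // -> ~7.1 req/sec: recorded think times alone can explain a low rate.
        }
    }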
I have a stepped profile, starting at 10, ending at 100, and I would have thought it would run the test for each user.
In the run settings, you have the option to configure a maximum number of test iterations: this runs N scenarios without any time limit. It's not enabled by default.
You have to understand the notion of a virtual user: basically, a virtual user executes only one test case at a time, taken from the configured web tests according to the test mix/percentages/scenarios. So 10 concurrent virtual users will execute at most 10 tests at the same time. The step goal is usually used to increase the load until the server reaches a point where performance diminishes significantly.
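As a minimal sketch of how a stepped profile grows the user count over time (the step duration and increment below are assumptions; the question only gives the start and end values):

    // Stepped load profile: start at 10 users, add 10 more every 30 seconds,
    // capped at 100. Step duration and increment are assumed for illustration.
    public class SteppedProfile {
        static int activeUsers(int elapsedSec) {
            int initial = 10, increment = 10, stepDurationSec = 30, max = 100;
            int stepsCompleted = elapsedSec / stepDurationSec;
            return Math.min(initial + stepsCompleted * increment, max);
        }

        public static void main(String[] args) {
            for (int t = 0; t <= 300; t += 30)
                System.out.printf("t=%3ds -> %3d users%n", t, activeUsers(t));
        }
    }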
A complete description of all load patterns is available here.
Finally, if the number of requests/sec is still low and it's not caused by the load test configuration, you may have a problem in your web site ;-)

It all depends on your test configuration, but if your test is set up to do ~1 req/s with one user, it should deliver ~10 req/s with 10 users.
I would say that it's probably because your server can't handle responding with more than 7 req/s. To find out where the bottleneck is, try running smaller steps and see where the breaking point is; you can monitor the servers at the same time to find out which resources are running out and on which server (CPU, memory, bandwidth, etc.). As mentioned in the comments, profiling is a very good approach to find out which parts of the code and which queries are the resource hogs.
Hope this helps!

There are a variety of reasons throughput could be low.
Check your setting for "Think Time Between Test Iterations"; with a step load pattern, the step duration is another setting you could modify.
Remember to keep the test moving: look at the think times for each request and make sure you are not taking too long to perform each test end to end.
I have seen these settings extend the overall iteration time to more than a few minutes, which reduces the minute-by-minute transaction count.
Check the end-to-end run time of each web test when run independently of the load test, so you know how much time one iteration takes overall.
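As a quick illustration of how these settings compound (all numbers below are assumed, not taken from the question):

    // How per-iteration timing caps transactions per minute for one user.
    // End-to-end time and inter-iteration think time are assumed values.
    public class IterationPacing {
        public static void main(String[] args) {
            double endToEndSec = 45;               // one full web test, run standalone
            double thinkBetweenIterationsSec = 30; // "Think Time Between Test Iterations"

            double iterationSec = endToEndSec + thinkBetweenIterationsSec;
            double iterationsPerMinutePerUser = 60.0 / iterationSec;

            System.out.printf("%.2f iterations/min per user%n", iterationsPerMinutePerUser);
            // -> 0.80 iterations/min: a long end-to-end time plus inter-iteration
            // think time directly shrinks the minute-by-minute transaction count.
        }
    }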
Hope this helps.
- Jim Nye

Related

503 error while running JMeter with 400 threads. Is it because of server issues?

Getting a 503 error while running JMeter with 400 thread users. Is it because of server issues? When I run the thread group with 100 users and a ramp-up period of 25 seconds it works fine, but with 400 users it gives a 503 error.
Given that you don't experience any issues with 100 users but do with 400 users, most probably it's a server issue connected with overload, so congratulations on finding the bottleneck.
You can either report it as is or perform a little bit deeper investigation in order to find the cause, suggested steps:
Instead of kicking off 400 users at once, try increasing the load gradually while looking at the Response Times vs Threads and Transaction Throughput vs Threads charts. Ideally, response time should remain the same and throughput should grow as the number of threads increases. When response time starts increasing and throughput starts decreasing, that indicates the saturation point, and at this stage you can state that this is the maximum number of users your application can support (see the sketch after this list).
Check your application logs and configuration, as it might not be properly tuned for high loads; you can use 15 Simple ASP.NET Performance Tuning Tips as a reference, or look for a similar guide for your application's technology stack.
Ensure that your application has enough headroom to operate in terms of CPU, RAM, network, etc., as it might simply be a lack of resources; this can be checked using, e.g., the JMeter PerfMon Plugin.
Repeat your test with profiler telemetry in place; this way you will be able to localize the problem and state where the problematic piece of code or inefficient algorithm lives.
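As a rough sketch of the saturation-point idea from the first step above, the following scans (threads, throughput) samples and reports the load level where throughput stops growing (the sample data is made up for illustration):

    // Find the saturation point in (threads, throughput) samples:
    // the last load level at which throughput was still increasing.
    // The sample data below is made up for illustration.
    public class SaturationPoint {
        public static void main(String[] args) {
            int[] threads       = {50, 100, 150, 200, 250, 300, 350, 400};
            double[] throughput = {48,  95, 140, 180, 205, 210, 190, 150}; // req/sec

            int saturation = threads[0];
            double best = throughput[0];
            for (int i = 1; i < threads.length; i++) {
                if (throughput[i] > best) {
                    best = throughput[i];
                    saturation = threads[i];
                }
            }
            System.out.println("Throughput peaks at ~" + saturation + " threads ("
                    + best + " req/sec); beyond that the server is saturated.");
        }
    }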
If the server isn't down or being restarted, then yes, a 503 indicates overload.
Common causes are a server that is down for maintenance or one that is overloaded.
You need to find what stops the server from serving 400 concurrent requests/users.
Note that if you are testing in a test environment that isn't equal or similar to the production environment, it may not reflect the load that the production server can endure.

How to determine if connection error is due to test or host machine

I have come across a situation where I can't decide which scenario would be best. I have written my test in JMeter as follows:
I have one test plan that runs tests consecutively.
I have 4 thread groups and each thread group has the following properties:
No of threads: 8000
Ramp-up period: 60 sec
Loop count: 10
Same user on each iteration: true
I was getting connection errors and connection timeout errors.
So, to make it work when I test from localhost (same machine), I have to set a response timeout of 1800000 ms, whereas when I run the same test against a remote server, I have to set a response timeout of 3600000 ms.
Can someone please advise:
Is it a good idea to set such a response timeout? Is there any other issue I should look for instead? Is it an alert for another problem?
Can I improve the test without relying on the response timeout?
First of all, a couple of questions to you:
Do you really think a real user will wait for 1 hour to get a response from the application?
How did you arrive at these 8000 users?
Now recommendations:
Never have JMeter and the system under test on the same machine; they are both resource-intensive and will start competing for CPU cycles, memory pages, etc., and you won't be able to tell whether JMeter is not capable of sending requests fast enough or your application cannot respond properly.
Follow JMeter Best Practices
Although the number of virtual users you can simulate with JMeter is very high, it's limited by the machine/operating-system resources, so make sure JMeter has enough headroom to operate in terms of CPU, RAM, network, and disk IO. The metrics can be checked using the JMeter PerfMon Plugin. Once you notice that any monitored metric starts exceeding, say, 80% of the total available capacity, note how many users are online: this is how many you can simulate from this machine for this test (see the sketch after this list). If you need more, go for Distributed Testing.
The same goes for the system under test: if you hit its resource consumption limits, you need either to upgrade the hardware or to deploy your application in clustered mode (if it supports such a mode).
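As a minimal illustration of the headroom check (the PerfMon Plugin does this properly across many metrics), here is a sketch that polls the system CPU load on a HotSpot JVM; the 80% threshold and 5-second polling interval are assumptions:

    import java.lang.management.ManagementFactory;

    // Minimal CPU-headroom watch for the load generator machine.
    // Uses the HotSpot-specific com.sun.management MXBean; the 80% threshold
    // and 5-second polling interval are assumptions.
    public class HeadroomWatch {
        public static void main(String[] args) throws InterruptedException {
            com.sun.management.OperatingSystemMXBean os =
                    (com.sun.management.OperatingSystemMXBean)
                            ManagementFactory.getOperatingSystemMXBean();
            while (true) {
                double cpuPercent = os.getSystemCpuLoad() * 100; // 0.0-1.0 -> percent
                if (cpuPercent > 80) {
                    System.out.printf("CPU at %.0f%% - this machine is out of headroom%n",
                            cpuPercent);
                }
                Thread.sleep(5000);
            }
        }
    }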

How many concurrent users can JMeter run on one machine?

This is for an e-commerce project where the number of user logins will be high. I have been given a benchmark: 8000 concurrent users need to log in, and the response time should be within 3 minutes.
Hi @abi, let me provide a couple of notes here.
Depending on your connection bandwidth, from my experience as a performance test engineer, I'd say a single JMeter instance usually holds up to 1k (1000) users, or 2k (2000) in the best case.
Considering you have a requirement for an 8k (8000 users) load, you need to launch JMeter in distributed mode (master <-> slaves).
For this setup I'd recommend going with 1 master node and 4 slaves. For that you will need 5 machines (AWS/Azure, whatever) in the same sub-network.
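A quick sizing sketch behind that recommendation (the ~2000-users-per-instance figure is the best-case rule of thumb from above, not a hard limit):

    // Back-of-the-envelope sizing for JMeter distributed mode.
    // Per-slave capacity is the best-case rule of thumb quoted above.
    public class DistributedSizing {
        public static void main(String[] args) {
            int targetUsers = 8000;
            int usersPerSlave = 2000;

            int slaves = (int) Math.ceil((double) targetUsers / usersPerSlave);
            System.out.println(slaves + " slaves + 1 master = "
                    + (slaves + 1) + " machines");
            // -> 4 slaves + 1 master = 5 machines, matching the recommendation
        }
    }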
For more technical details on the distributed setup, please take a look at:
the public JMeter documentation
this step-by-step setup manual
Also, when I was doing the setup for a 10k load on one of my recent projects, I made a couple of notes for myself in a g-doc. Let me know if it opens fine for you.
Last note: if you need to do load/performance tests on APIs that require AUTHZ, I'd recommend splitting authorization (IDP bypass) and the performance scenario itself into different thread groups, as the IDP in dev/staging environments usually does not hold much load.
So first you need to authorize without any load (1st thread group).
And in the 2nd thread group, start calling the target APIs under test.
It depends on:
Your machine specifications (CPU, RAM, NIC card, hard drive, etc.)
The nature of your Test Plan (number of requests, size of requests/responses, number of pre/post processors, assertions, timers, etc.)
Response time of your application
So if your test is a simple GET request which returns a small text response, you might simulate 10,000 users on a mid-range modern laptop. And if your test involves heavy requests, large responses, file uploads, etc., it might be 1000 users.
Make sure to follow recommendations from JMeter Best Practices
Make sure to monitor the resource usage of your system (CPU, RAM, swap, etc.). You can use the JMeter PerfMon Plugin for this.
Make sure that your test behaves like a real browser
Start with 1 virtual user and gradually increase the load until you reach 8000 virtual users or JMeter starts running out of resources, whichever comes first. If you can simulate 8000 users from a single machine, you're good to go. If not, you will have to consider Distributed Testing.

Apache Jmeter Concurrent Users Performance Testing

I want to test 400 concurrent users, which would allow us to pass our load testing scenario, but the configuration settings below in Apache JMeter give us lots of errors.
Number of Thread (Users): 400
Ramp-Up Time: 1
Loop Count: Forever, for a period of 1 minute
We are not telepathic enough to tell what's wrong with your setup without seeing the configuration and the nature of errors.
Several generic hints:
Run your test with 1-2 users/iterations to ensure it works fine and does what it is supposed to do. Check request and response details using the View Results Tree listener.
Make sure to run your test in command-line non-GUI mode and disable all the Listeners while your test is running.
It is better to increase and decrease the load gradually, so consider using a longer ramp-up time and increasing the test duration accordingly, i.e.:
During the first minute virtual users arrive
They then hold the load for another minute
During the last minute virtual users leave
This way you will be able to tell what the load was when the errors started occurring, what the maximum number of users your application can support is, where the saturation point is, whether it recovers when the load gets back to normal, etc. See the JMeter Ramp-Up - The Ultimate Guide article for more details.
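Here is a sketch of that ramp-up / hold / ramp-down shape as a function of elapsed time, using one-minute phases as in the list above and the 400-user target from the question:

    // Trapezoid load profile: ramp up for 60 s, hold for 60 s, ramp down for 60 s.
    // Phase durations and the 400-user target mirror the scenario above.
    public class TrapezoidProfile {
        static int activeUsers(int t) {
            int max = 400, rampUp = 60, hold = 60, rampDown = 60;
            if (t < rampUp) return max * t / rampUp;                    // users arrive
            if (t < rampUp + hold) return max;                          // hold the load
            if (t < rampUp + hold + rampDown)
                return max * (rampUp + hold + rampDown - t) / rampDown; // users leave
            return 0;
        }

        public static void main(String[] args) {
            for (int t = 0; t <= 180; t += 30)
                System.out.printf("t=%3ds -> %3d users%n", t, activeUsers(t));
        }
    }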
It might be the case that you have found the bottleneck, i.e. your application fails to support 400 concurrent users. Now you need to find the reason, which may be:
incorrect middleware configuration (wrong web server, database, or load balancer settings)
your application simply lacks resources (CPU, RAM, network, swap, etc.); you can check this using the JMeter PerfMon Plugin
if the infrastructure configuration is OK and there is enough headroom for the application to operate, most probably the reason is in the application code; you need to inspect what it is doing using APM or profiler tools and report the issue

Why the difference in output when using JMeter to load test vs HP LoadRunner?

Here is the scenario:
We are load testing a web application. The application is deployed on two VM servers with a hardware load balancer distributing the load.
There are two tools used here:
1. HP Load Runner (an expensive tool).
2. JMeter - free
JMeter was used by the development team to test with a huge number of users. It also does not have any licensing limit like LoadRunner.
How are the tests run?
A URL is invoked with some parameters, and the web application reads the parameters, processes the results, and generates a PDF file.
When running the test we found that for a load of 1000 users spread over a period of 60 seconds, our application took 4 minutes to generate 1000 files.
Now when we pass the same URL through JMeter, with 1000 users and a ramp-up time of 60 seconds, the application takes 1 minute and 15 seconds to generate 1000 files.
I am baffled as to why there is this huge difference in performance.
LoadRunner has the rstat daemon installed on both servers.
Any clues?
You really have four possibilities here:
You are measuring two different things. Check your timing record structure.
Your request and response information is different between the two tools. Check with Fiddler or Wireshark.
Your test environment initial conditions are different yielding different results. Test 101 stuff, but quite often overlooked in tracking down issues like this.
You have an overloaded load generator in your LoadRunner environment, which is causing all virtual users to slow down. For example, you may be logging everything, resulting in your file system becoming a bottleneck for the test. Deliberately underload your generators, reduce your logging levels, and watch how you are using memory for correlations so you don't create a physical-memory oversubscription condition that results in high swap activity.
As to the comment above about JMeter being faster: I have benchmarked both, and for very complex code the C-based solution in LoadRunner is faster from iteration to iteration than the Java-based solution in JMeter. (Method: a complex algorithm for creating data files on the fly for upload for batch mortgage processing. P3, 800 MHz, 2 GB of RAM: LoadRunner ran 1.8 million iterations per hour ungoverned for a single user; JMeter, 1.2 million.) Once you add in pacing, it is the response time of the server which is determinative for both.
It should be noted that LoadRunner tracks its internal API time to directly address accusations of the tool influencing the test results. If you open the results database (.mdb or Microsoft SQL Server instance, as appropriate) and take a look at the [event meter] table, you will find a reference to "Wasted Time." The definition of wasted time can be found in the LoadRunner documentation.
Most likely the culprit is in HOW the scripts are structured.
Things to consider:
Think / wait time: when recording, JMeter does not automatically put in waits.
Items being requested: is JMeter ONLY requesting/downloading HTML pages while LoadRunner gets all embedded files?
Invalid responses: are all 1000 JMeter responses valid? If you have 1000 threads from a single desktop, I would suspect you killed JMeter and not all your responses were valid.
Don't forget that the testing application measures itself, since the arrival of the response is timed on the testing machine. So from this perspective the answer could be that JMeter is simply faster.
The second thing to mention is the wait times mentioned by BlackGaff.
Always check results with the View Results Tree listener in JMeter.
And always put the testing application on separate hardware to get real results, since the testing application itself loads the machine it runs on.
