Visual Studio Load Testing framework vs Console App - strange results - visual-studio

I've been using the Visual Studio Load Testing framework to load test a web service.
If I keep my test simple and use a constant load pattern of 1 user from my local machine, I am able to generate 'x' requests per second.
Alternatively, if I use a console app that runs the same test, making synchronous calls to the web service, the console app generates twice the load that I get using the Visual Studio Load Testing framework.
The same is true if I try to scale my load tests to use multiple test agents (8 cores) - the VS framework does not generate near the amount of load as a console app running multiple instances.
These are the two different unit tests I am using to generate load:
// Unit test used for load testing
[TestMethod]
public void HappyReturnCase_Test()
{
    HttpWebRequest req = WebRequest.Create("http://myurl") as HttpWebRequest;
    req.Method = "GET";
    req.GetResponse().Close();
}
// Console app version
private static void Main(string[] args)
{
    for (int i = 0; i < 200000; i++)
    {
        HttpWebRequest req = WebRequest.Create("http://myurl") as HttpWebRequest;
        req.Method = "GET";
        req.GetResponse().Close();
    }
}
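For reference, here is a minimal sketch (my addition, not part of the original question) of how the console loop's throughput could be measured directly with a Stopwatch; the URL is the same placeholder used above and the iteration count is reduced for illustration.

using System;
using System.Diagnostics;
using System.Net;

internal static class ThroughputCheck
{
    private static void Main()
    {
        const int iterations = 1000; // smaller sample than the original 200,000-iteration loop
        var timer = Stopwatch.StartNew();

        for (int i = 0; i < iterations; i++)
        {
            // Same synchronous GET as the console app above
            HttpWebRequest req = WebRequest.Create("http://myurl") as HttpWebRequest;
            req.Method = "GET";
            req.GetResponse().Close();
        }

        timer.Stop();
        Console.WriteLine("Requests/sec: {0:F1}", iterations / timer.Elapsed.TotalSeconds);
    }
}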
Can anyone explain to me why I might be seeing this kind of behaviour?
Thanks in advance.
Kevin

There are a few things that come to mind when dealing with load testing in Visual Studio.
What kind of data capture are you using?
Is there instrumentation for Code Coverage?
You also need to keep in mind that in Visual Studio you're running the code within a test framework that does far more than simply call your code. It analyses the results, checks for exceptions, logs all the data going to and from the code being called, generates reports with the captured data...
All of this comes out of the box, and although we think of it as "free", it does take its toll on the performance of the test.
So while the number of requests per second, as you put it, is lower in VS than in its console app counterpart, you also need to weigh in all the other work the testing framework is doing for you.

With the console application, there's no throttling of the requests--you're just going full-bore from the client. With the VS Load Tests, there are other factors that limit the number of requests (like the total number of iterations).
For example, if you have test iterations enabled, they'll be spread throughout the duration of the load test. Generally, that will bring your test frequency down. If you have 100 test iterations set, you're running your test for an hour, and each test takes 30 seconds, then about 120 back-to-back runs would fit in the hour, but the 100-iteration cap (spread evenly throughout the hour) means you'll run 20 fewer tests.
There is also a callback model going on here. The load tests support a load test plugin model and a request plugin model, so the unit test will yield to the load test runner which may be swapping out to a new virtual user; even if the test is set for 1 virtual user, it may not be the same virtual user throughout the test. You'll be reporting and logging, plus you may be starting up a new application host "container" for your unit test, and a few other activities. Even if that's not the case, you're not spending all your time in the context of the unit test.
Even inside the unit test, there are other methods running, like ClassInitialize, TestInitialize, setting timers, etc. Plus, there is a thread pool being used, even if only for one user. See http://blogs.msdn.com/b/billbar/archive/2007/10/12/features-and-behavior-of-load-tests-containing-unit-tests-in-vsts-2008.aspx for some more information on how unit tests are run by the load test runner. Even if you data bound that unit test to run 100 rows of data, it probably wouldn't run as quickly as the loop that you'd written, but it's got the benefit of easily configuring extra work and running multiple unit tests together.
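To make that per-iteration machinery concrete, here is a minimal sketch (illustrative only, not taken from the question) of the MSTest hooks the load test runner drives around each iteration of the unit test:

using System;
using System.Net;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class HappyReturnCaseLoadTests
{
    // Runs once for the class, before any of its tests execute.
    [ClassInitialize]
    public static void ClassSetup(TestContext context)
    {
        Console.WriteLine("ClassInitialize: one-off setup");
    }

    // Runs before every test iteration the load test schedules.
    [TestInitialize]
    public void TestSetup()
    {
        Console.WriteLine("TestInitialize: per-iteration setup");
    }

    [TestMethod]
    public void HappyReturnCase_Test()
    {
        HttpWebRequest req = WebRequest.Create("http://myurl") as HttpWebRequest;
        req.Method = "GET";
        req.GetResponse().Close();
    }

    // Runs after every test iteration, before the result is recorded.
    [TestCleanup]
    public void TestTeardown()
    {
        Console.WriteLine("TestCleanup: per-iteration teardown");
    }
}

None of these hooks exist in the bare console loop, which is part of why the raw requests-per-second numbers differ.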
You may want to take a read through of the performance testing quick reference guide at http://vsptqrg.codeplex.com/.
Now, setting the constant load to 1 user doesn't take advantage of any of the benefits of the load rig--you've taken on the overhead of the thread pool without running multiple users. You'd expect to start seeing benefits if you increase the number of users and let the VS load test manage that context switching for you. Another benefit is creating a test mix that you can easily alter, plus collecting the perfmon statistics, applying threshold rules, etc. You're not really doing any of those in the console app.

Related

What's the impact of response codes 400 and 503? Can we ignore these codes if my primary focus is to measure the loading time of a web application?

I am testing a web application's login page loading time with 300 thread users and a ramp-up period of 300 seconds. Most of my samples return response code 200, but a few of them return response codes 400 and 503.
My goal is to just check the performance of the web application if 300 users start using it.
I am new to Jmeter and have basic knowledge of programming.
My questions:
1. Can I ignore these errors and focus just on the timings from the summary report?
2. If I really need to fix these errors, how do I fix them?
There are 2 different problems indicated by these errors:
HTTP Status 400 stands for Bad Request - it means you're sending malformed requests which the server cannot understand. You should inspect the request details and amend your JMeter configuration, as the problem is in your script.
HTTP Status 503 stands for Service Unavailable - it indicates a problem on the server side, i.e. the server is not capable of handling the load you're generating. This is something you can already report as an application issue. You can try to identify the underlying cause by:
looking into your application log files
checking whether your application has enough headroom to operate in terms of CPU, RAM, Network, Disk, etc. This can be done using an APM tool or the JMeter PerfMon Plugin
re-running your test with profiler tool telemetry to dig into what's behind the longest response times
So first of all you should ensure that your test is doing what it is supposed to do by running it with 1-2 users/loops and inspecting the request and response details. At this stage you should not be seeing any errors.
Going forward, increase the load gradually and correlate the growing number of virtual users with the increasing response times and number of errors.
Performance testing is different from load testing. What you are doing is load testing.
Performance testing is more about how quickly an action takes. I typically capture performance on a system not under load for a given action.
This gives a baseline that I can then refer to during load tests.
Hopefully, you’ve been given some performance figures to test. E.g. must be able to handle 300 requests in two minutes.
When moving onto load, I run a series of load tests with increasing number of users/threads and capture the results from each test.
Armed with this, I can see how load degrades performance to the point where errors start to show up. This gives you an idea of how much typical load the system can handle.
I'd also look to run soak tests too. This is where I'd run JMeter for a long period with typical (not peak) load to make sure the system can handle sustained load.
In terms of the errors you're seeing, no, I would not ignore them. Assuming your test is calling the same endpoint, it seems safe to say the code is fine; it's the infrastructure struggling with the load you're throwing at it.

Can a Specflow scenario be used with a Visual Studio 2013 Load Test?

I plan to reuse existing Specflow scenarios (currently used for acceptance and automated tests) for VS Load Test as well, to avoid duplication and extra work. Specflow works fine for those tests since it runs them once, but in the context of a load test, where each Specflow scenario is executed more than once and in parallel, it runs into issues and errors, and with a higher number of users it gets worse.
These errors can fail some of the tests, which in the end produces incorrect results. For instance, using one Specflow scenario with a load test of 20 users over a period of 2 minutes can cause 50 errors similar to the one below. The test result then shows that the scenario was executed 200 times, with 150 passed and 50 failed, where the failures are caused by Specflow errors. In the context of a load test this result is simply wrong, since the test itself has issues.
Error message:
ScenarioTearDown threw exception. System.NullReferenceException: System.NullReferenceException: Object reference not set to an instance of an object.
TechTalk.SpecFlow.Infrastructure.TestExecutionEngine.HandleBlockSwitch(ScenarioBlock block)
TechTalk.SpecFlow.Infrastructure.TestExecutionEngine.ExecuteStep(StepInstance stepInstance)
TechTalk.SpecFlow.Infrastructure.TestExecutionEngine.Step(StepDefinitionKeyword stepDefinitionKeyword, String keyword, String text, String multilineTextArg, Table tableArg)
TechTalk.SpecFlow.TestRunner.Then(String text, String multilineTextArg, Table tableArg, String keyword)
After some investigation it seems Specflow cannot generate and run the same scenario in parallel, which causes this conflict and fails some tests. I still have some doubt about that, though, so I'm looking for a workaround, wondering if I'm missing anything, and asking whether Specflow scenarios can be used for load tests at all.
I understand wanting to reuse your tests for load testing (Don't Repeat Yourself), however a load test has a very different purpose than acceptance tests. Load tests should take realistic every day usage scenarios, and throw increasing numbers of users at them. For this reason I would urge you to keep your load tests separate from your acceptance and automated tests. They really are testing different things.
Load tests should test the performance of the application under high usage for every day scenarios, and acceptance and automated tests make sure the application is functioning according to spec.
Load testing is the process of putting demand on a system or device and measuring its response. Load testing is performed to determine a system’s behavior under both normal and anticipated peak load conditions. It helps to identify the maximum operating capacity of an application as well as any bottlenecks and determine which element is causing degradation.
Source: Wikipedia: Load testing
An acceptance test is a formal description of the behaviour of a software product, generally expressed as an example or a usage scenario. ... For many Agile teams acceptance tests are the main form of functional specification; sometimes the only formal expression of business requirements. In other cases, they merely complement a specification document resulting from a less specifically Agile technique or formalism, such as use cases or more narrative documents.
Source: Agile Alliance: Acceptance Testing
They are different things, so the tests, and the test frameworks, should be different as well. You aren't really "repeating yourself" by keeping a separate suite for load tests.
As for the technical reason why this is failing? SpecFlow, when run using the normal Visual Studio test runner, was not built to run tests in parallel. There are parallel test runners available, but most are paid software.

Why is VS2013 load testing only running 7 requests per second?

I am running some load tests, and for some reason VS is reporting only 7 req/sec - is this normal?
I have a stepped profile, starting at 10 and ending at 100, and I would have thought it would run the test for each user.
I.e. 10 users = 10 requests per second?
First, you're running the load test from your local machine (Controller = Local Run). You can run load tests from your developer machine, but you usually can't generate enough traffic to really see how the application responds. To simulate a lot of users, you need a Load Test Rig (on premises, or using Windows Azure cloud testing). This can be a problem especially if you're testing a web site hosted on the same computer.
Check the CPU on your machine when running the load test (in the graph): if it's over 70%, the results are biased.
Second, how did you record your web tests? When using the web test recorder (in IE), it adds a think time to each request. Think times are used to simulate the human behavior that causes people to wait between interactions with a web site: a real user will never open 4 pages in the same second. You can check the think time in each request's properties. A high value may explain why you only get a few requests/sec while the CPU stays low.
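If you want to rule think times out entirely, one option (a sketch only, not something from the original answer) is a WebTestPlugin that zeroes the recorded think time before each request; think times can also simply be disabled in the load test scenario properties.

using Microsoft.VisualStudio.TestTools.WebTesting;

// Illustrative plugin: strips the recorded think times so each request
// fires as soon as the previous one completes.
public class RemoveThinkTimesPlugin : WebTestPlugin
{
    public override void PreRequest(object sender, PreRequestEventArgs e)
    {
        // ThinkTime is expressed in seconds on each recorded request.
        e.Request.ThinkTime = 0;
    }
}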
I have a stepped profile, starting at 10, ending at 100, and I would have thought it would run the test for each user.
In the run settings, you have the option to configure a maximum number of test iterations: this will run N iterations, without any time limit. It's not activated by default.
You also have to understand the notion of a virtual user: basically, a virtual user executes only one test case at a time, taken from the configured web tests according to the test mix/percentages/scenarios... So 10 concurrent virtual users will execute at most 10 tests at the same time. The step goal is usually used to increase the load until the server reaches a point where performance diminishes significantly.
A complete description of all load patterns is available here.
In the end, if the number of requests/sec is still low and it's not because of the load test configuration, you may have a problem on your web site ;-)
It all depends on your test configuration, but if your test is set up to do ~1 req/s with one user, it should deliver ~10 req/s with 10 users.
I would say that it's probably because your server can't handle responding with more than 7 req/s. To find out where the bottleneck is, try running smaller steps and see where the breaking point is; you can do some monitoring on the servers at the same time to find out which resources are running out and on which server (CPU, memory, bandwidth, etc.). As mentioned in the comments, profiling is a very good approach to find out which parts of the code and which queries are the resource hogs.
Hope this helps!
There are a variety of reasons throughput could be low.
Check your settings for "Think Time Between Test Iterations", step load pattern - step duration is another setting you could modify.
Remember to keep the test moving: look at the think times for each request and make sure you are not taking too long to perform each test end to end.
I have seen these settings extend the overall time to more than a few minutes, thus reducing the minute-by-minute transactions.
Check your end-to-end run time per web test when run independently of the load test, so you know how much time the test takes overall.
Hope this helps.
- Jim Nye

Web Performance Test that requires login: How do you make it work in isolation and in load test?

I have a Visual Studio 2010 load test, which contains a number of web performance tests. Running the web performance tests requires you to be logged in to the website under test. Accordingly, the load test contains an initialization step - a small web performance test which does the login, and which uses a plug-in to cache the cookie so obtained. The 'real' web performance tests - the ones that actually do the work - also each have a plug-in that reads the cached cookie and adds it to the test, so that each test functions correctly:
public override void PreWebTest(object sender, PreWebTestEventArgs e)
{
    if (CookieCache.Cookies != null) // CookieCache is a static class of mine
        e.WebTest.Context.CookieContainer.Add(CookieCache.Cookies);
}
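For readers following along, here is a rough sketch of what the cookie-caching pieces described above might look like; the asker's actual CookieCache class and login plugin are not shown in the question, so everything below is a hypothetical reconstruction.

using System;
using System.Net;
using Microsoft.VisualStudio.TestTools.WebTesting;

// Hypothetical stand-in for the asker's static cookie cache.
public static class CookieCache
{
    public static CookieCollection Cookies { get; set; }
}

// Hypothetical plugin attached to the small login web test: after the login
// test has run, it copies the session cookies into the static cache so the
// "real" web tests can pick them up in their PreWebTest plugin.
public class CacheLoginCookiesPlugin : WebTestPlugin
{
    public override void PostWebTest(object sender, PostWebTestEventArgs e)
    {
        var siteUri = new Uri("http://myurl"); // placeholder, matching the question's example URL
        CookieCache.Cookies = e.WebTest.Context.CookieContainer.GetCookies(siteUri);
    }
}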
The problem is that while this all works absolutely fine when I run the load test, it means I can't run any of the web performance tests in isolation because if the load test initializer hasn't run then there's no cookie, so the web performance test won't be logged in and will fail.
Is there any recommended solution for this situation? In other words, if a web performance test needs to have logged in, is there any way to get it to run both in isolation and when it's part of a load test?
The obvious way to run each web performance test in isolation would be to have it call the login test first, but I can't do that because that'll be incorrect behaviour for the load test (where logging in should happen only once per user, right at the beginning of the load test).
The solution is to add the Login test to your individual web performance tests (via "Insert Call to Web Test"), but gated by a "Context Parameter Exists" Conditional Rule that looks for the absence of the context parameter $LoadTestUserContext. That parameter only exists if the web test is running in a load test.
This way you get just one Login whether in or outside of a load test.
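If you prefer to keep that check in code rather than in a conditional rule, a similar effect can be sketched by testing for the same context parameter in a plugin. This is only an illustration built on the question's CookieCache idea, not the accepted answer's exact mechanism; $LoadTestUserContext is the context parameter the load test runner adds, and a standalone run does not.

using Microsoft.VisualStudio.TestTools.WebTesting;

// Sketch: detect whether the web test is running inside a load test by
// checking for the $LoadTestUserContext context parameter.
public class ApplyCachedLoginPlugin : WebTestPlugin
{
    public override void PreWebTest(object sender, PreWebTestEventArgs e)
    {
        bool inLoadTest = e.WebTest.Context.ContainsKey("$LoadTestUserContext");

        if (inLoadTest && CookieCache.Cookies != null)
        {
            // Load test run: reuse the cookie captured by the login initialization test.
            e.WebTest.Context.CookieContainer.Add(CookieCache.Cookies);
        }
        // Standalone run: no cached cookie is available, so the web test should
        // include its own call to the Login web test, gated by the same
        // "Context Parameter Exists" check described above.
    }
}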
Why not try using the PreRequest function instead of the PreWebTest function?
Public Overrides Sub PreRequest(sender As Object, e As PreRequestEventArgs)
    MyBase.PreRequest(sender, e)
    Dim cookie As System.Net.Cookie = New System.Net.Cookie(...)
    e.Request.Cookies.Add(cookie)
End Sub
That way both the Load test and the Web Test will work.
I'm not familiar with Visual Studio 2010 Load Testing, but it sounds like you need the equivalent of NUnit's SetUp and TearDown methods which run once for all tests, whether you have selected a single test or all the tests in an assembly.
A bit of searching implies that the equivalent is the Init and Term tests.
1) Right click on a scenario node in the load test and select Edit Test Mix...
2) In the Edit Test Mix dialog, look at the bottom of the form. You will see 2 check boxes: one for an init test and one for a term test.
The init test will run prior to each user and the term test will run when the user completes. To make sure the term test runs, you also need to set the cooldown time for the load test. The cooldown time is a property on the run settings node. This setting gives tests a chance to bleed out when the duration completes. You can set this to 5 minutes. The cooldown period does not necessarily run for 5 minutes; it will end when all term tests have completed. If that takes 20 seconds, then that is when the load test will complete.

Why the difference in output when using JMeter to load test vs HP LoadRunner?

Here is the scenario
We are load testing a web application. The application is deployed on two VM servers with a hardware load balancer distributing the load.
There are two tools used here:
1. HP Load Runner (an expensive tool).
2. JMeter - free
JMeter was used by the development team to test for a huge number of users. It also does not have any licensing limit like Load Runner.
How are the tests run?
A URL is invoked with some parameters, and the web application reads the parameters, processes them and generates a PDF file.
When running the test we found that for a load of 1000 users spread over a period of 60 seconds, our application took 4 minutes to generate 1000 files.
Now when we pass the same URL through JMeter, with 1000 users and a ramp-up time of 60 seconds, the application takes 1 minute and 15 seconds to generate 1000 files.
I am baffled as to why there is such a huge difference in performance.
LoadRunner has the rstat daemon installed on both servers.
Any clues ?
You really have four possibilities here:
You are measuring two different things. Check your timing record structure.
Your request and response information is different between the two tools. Check with Fiddler or Wireshark.
Your test environment initial conditions are different yielding different results. Test 101 stuff, but quite often overlooked in tracking down issues like this.
You have an overloaded load generator in your loadrunner environment which is causing all virtual users to slow. For example you may be logging everything resulting in your file system becoming a bottleneck for the test. Deliberately underload your generators, reduce your logging levels and watch how you are using memory for correlations so you don't create a physical memory oversubscribed condition which results in high swap activity.
As to the comment above about JMeter being faster: I have benchmarked both, and for very complex code the C-based solution in LoadRunner executes faster from iteration to iteration than the Java-based solution in JMeter. (Method: a complex algorithm for creating data files on the fly for upload for batch mortgage processing. P3, 800 MHz, 2 GB of RAM. LoadRunner: 1.8 million iterations per hour ungoverned for a single user. JMeter: 1.2 million.) Once you add in pacing, it is the response time of the server which is determinate for both.
It should be noted that LoadRunner tracks its internal API time to directly address accusations of the tool influencing the test results. If you open the result set database (.mdb or Microsoft SQL Server instance, as appropriate) and take a look at the [event meter] table, you will find a reference to "Wasted Time." The definition of wasted time can be found in the LoadRunner documentation.
Most likely the culprit is in HOW the scripts are structured.
Things to consider:
Think / wait time: when recording, JMeter does not automatically put in waits.
Items being requested: is JMeter ONLY requesting/downloading HTML pages while LoadRunner gets all embedded files?
Invalid responses: are all 1000 JMeter responses valid? If you have 1000 threads from a single desktop, I would suspect you killed JMeter and not all your responses were valid.
Don't forget that the testing application measures itself, since the arrival of each response is timed on the testing machine. So from this perspective the answer could simply be that JMeter is faster.
The second thing to mention is the wait times mentioned by BlackGaff.
Always check results with the results tree in JMeter.
And always put the testing application onto separate hardware to see real results, since the testing application itself puts load on the hardware it runs on.
