Recorded Web Performance Test in VS 2015 shows error - visual-studio

I'm new to load testing using VS 2015. Right now, I'm working on load testing for a web project which will need recorded web performance tests for each interaction that users would typically do with our application.
I recorded a web performance test for a simple user login on the website. After clicking the stop button in the browser, the web performance test was generated in VS 2015, but with an error.
Although I successfully logged in during the recorded web performance test, I was wondering whether I should be worried about the displayed error and whether it would affect the load testing that this recorded web performance test will be used for.
Error message: 1 primary requests, 1 dependant requests and 0 conditional rules failed
When the error message is clicked, the following details show up:
Access denied (authentication_failed)
404 - File or Directory not found SERVER ERROR
Please help. Thanks

After the stop button on the browser is pressed, Visual Studio runs the test once to help it find dynamic data, etc. Commonly this execution of the test fails, so do not worry about this failure.
You should then expect to run the test yourself a number of times to make it work properly. Before each execution you may need to make changes, for example:
to add data driving
to add credentials
to add validation rules
to sort out dynamic data that Visual Studio has not detected; this will probably mean adding extraction rules
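For the last point, extraction rules are normally added in the web test editor, but if the test is converted to a coded web test they appear as C#. A minimal sketch of wiring up the built-in ExtractHiddenFields rule (the URL and the context parameter name are placeholders, not from the question):

```csharp
// Inside GetRequestEnumerator() of a coded web test (a class deriving from
// WebTest in Microsoft.VisualStudio.TestTools.WebTesting).
WebTestRequest loginPage = new WebTestRequest("http://example.com/Login.aspx");

// Capture hidden fields (e.g. __VIEWSTATE) into the test context so the
// recorded POST substitutes the live values instead of the recorded ones.
ExtractHiddenFields extractHidden = new ExtractHiddenFields();
extractHidden.Required = true;
extractHidden.HtmlDecode = true;
extractHidden.ContextParameterName = "1";
loginPage.ExtractValues += new EventHandler<ExtractionEventArgs>(extractHidden.Extract);

yield return loginPage;
```

This is the same pattern Visual Studio generates itself when a recorded test containing extraction rules is converted to code.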

Related

Visual Studio web test transaction total request time wrong

I have a quick question. I have recorded web scenarios and created transactions on top of those recorded web requests, as well as load tests on top of them.
When I ran the load test and looked at the details of each run, I noticed that the transaction total time does not match the total of the individual request times under it.
Do you have any idea why?

VS Load Test - why am I seeing more requests in results than recorded

Newbie to Visual Studio load testing
When I record a web performance test I get a list of requests; when I run this test some of the 'requests' expand to show more.
Q: why can I not see all these requests in the recorded list?
If I run the same actions with a tool like LoadUIWeb I see all these requests in the recording, none hidden.
I want to be able to test the website with and without calls to external sites like google-analytics, etc.
I found from googling that it appears that the only way around this is to write a plugin.
I'm surprised that I'd have to do this as other tools will show all requests on the recorded script. I want to ensure that there isn't something I'm missing...
Thanks
The "expanded requests" or "child requests" are actually dependent requests. If your site has dependencies on other resources (images, scripts, CSS files, etc.), additional HTTP requests are made to obtain those resources (assuming they are not cached locally). VS simulates this behavior by analyzing the response body and issuing the dependent requests itself.
The reason they are not recorded is that they may change between runs.
The following question provides very useful information about the behavior of dependent requests: How can I turn off request caching in a Visual Studio load test

How to amend expected status codes in Visual Studio Web Performance Tests

How do you (or even can you) amend the expected HTTP status codes in a Visual Studio Web Performance Test?
I'm checking some new bits of a clients site, so using some Visual Studio Web Performance Tests to drive the pages.
Run the tests and the expected actions occur on the server ... BUT the tests are failing
The reason is that there are a few hidden links to some missing GIF files, which return a 404 status.
I can't get the client to add the files, but I don't want to inspect the various tests after each run to see whether a "fail" is one of the expected 404s or a real failure.
In the properties of the request you can change "Expected HTTP Status Code" to the expected (Int32) HTTP status code.
Instead of expecting a different status code you can write a web test plugin that removes dependent requests. Basically, the plugin scans the dependent requests and removes any that match some criteria. I have done this on a few performance test projects to avoid getting test failures for things that I knew would fail every test. Of course, your test report should describe the missing files. See also http://blogs.msdn.com/b/edglas/archive/2008/07/17/dependent-request-filter.aspx
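A minimal sketch of such a plugin, following the dependent-request-filter pattern from the linked post (the class name and the ".gif" filter string are examples only; adjust the filter to match the requests you want to drop):

```csharp
using System;
using Microsoft.VisualStudio.TestTools.WebTesting;

// Attach via the web test's "Add Web Test Plug-in" command.
public class FilterDependentRequests : WebTestPlugin
{
    // Substring matched against dependent request URLs; example value only.
    public string UrlFilter { get; set; } = ".gif";

    public override void PostRequest(object sender, PostRequestEventArgs e)
    {
        // PostRequest fires after the response body has been parsed but
        // before the dependent requests are issued, so removing entries
        // here prevents them from ever being sent.
        // Walk backwards so removal does not skip entries.
        for (int i = e.Request.DependentRequests.Count - 1; i >= 0; i--)
        {
            WebTestRequest dependent = e.Request.DependentRequests[i];
            if (dependent.Url.IndexOf(UrlFilter, StringComparison.OrdinalIgnoreCase) >= 0)
            {
                e.Request.DependentRequests.RemoveAt(i);
            }
        }
    }
}
```

The same approach answers the earlier question about excluding calls to external sites such as google-analytics: filter on the external host name instead of a file extension.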

Ending a Visual Studio loadtest user session if a website error is hit

I'm using a VS2010 loadtest against a website, but the site being tested is throwing some errors (eg, SiteUnavailable or other general site-specific errors).
The loadtest continues execution even if an error is returned in the response - so our .NET server logs show many errors for a single user session - and the subsequent errors may well be caused by trying to continue a web journey that should really have ended.
So is it possible to end the erroring user session as soon as an error is hit in a loadtest without ending the whole loadtest? I would then want the virtual user to continue with another new web journey.
My loadtest is not scripted (it's using the default view) as I read somewhere that loadtests are less efficient when scripted.
However I can't see a setting that would enable me to do what I want, so I'm thinking that scripting would be the way to go.
Any pointers/suggestions gratefully received.
Dave
In case anyone else needs the answer to this: I had it answered via the MS forum. There is a setting on the webtest, "StopOnError" - set it to True and an error will end the webtest run, NOT the loadtest. This setting avoids the chain of potentially unrelated errors that can follow from a single error.
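For a coded web test, the same setting can be applied in the constructor; a minimal sketch (the class name and URL are placeholders):

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.WebTesting;

public class LoginJourney : WebTest
{
    public LoginJourney()
    {
        // Abort this virtual user's current web test iteration on the first
        // failed request. The load test itself keeps running, and the
        // virtual user then starts a fresh iteration (a new web journey).
        this.StopOnError = true;
    }

    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        yield return new WebTestRequest("http://example.com/");
    }
}
```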

Can I exclude counts from failed webtests in a VS.Net 2010 Loadtest?

I am using Visual Studio 2010 Ultimate to perform loadtests. These loadtest use recorded webtests.
When running a loadtest with an increasing number of concurrent users, some steps in my webtests start to fail. The first error is often an internal server error 500. This gives a wrong impression of the average page load time, because these internal server errors are often returned very quickly, in contrast to the generation of a successful response. So, when the load increases, the average page load time drops.
Of course, I need to attend to these internal server errors, but in the meantime, I would like to exclude failed webtests from my measurements.
Does anybody know if this can be done?
Thanks in advance.
It may be possible to run your own query on the test results database that ignores errors, but even that will be inaccurate.
Remember that the page return stats are really only useful when read in conjunction with the load on the hardware.
Essentially, the load test is recording the effect on your hardware of a given load. If your website is returning a large number of 500 error pages quickly, the load on the hardware will be affected and any page stats will reflect the change in server loading.
You will have to investigate the cause of the 500 errors and either fix the issue or report in your load testing results that once a load of 'x' is reached on the servers, the pages 'y' will give an internal server error 500 result instead of the requested page.
This gives the business owners of your app some information to make the decision whether to fix the problem or live with it.
