I've created a VS 2010 load test. The test executes on a dedicated test agent which fires http posts at a couple of web servers.
After running the tests I realised that I needed to make a few changes to the counter sets that I'd assigned. The problem is, even though I've changed the counter sets and updated the Counter Set Mappings under the active run settings, I'm still getting back results using the counter sets from my first load test run.
Also, the tests run OK, but I get lots of pop-up message boxes saying "Index was outside the bounds of the array".
Any ideas on what I need to do to fix this would be greatly appreciated.
After rebooting the server, the counters reset.
I'm new to load testing using VS 2015. Right now, I'm working on load testing for a web project which will need recorded web performance tests for each interaction that users would typically do with our application.
I recorded a web performance test for a simple user log-in to the website. After clicking the stop button in the browser, the web performance test was generated in VS 2015, but with an error.
Although I successfully logged in during the recording, I was wondering whether I should be worried about the error displayed, and whether it would affect the load testing that I will be using the recorded web performance test for.
Error message: 1 primary requests, 1 dependant requests and 0 conditional rules failed
When the error message is clicked, the following details would show up:
Access denied (authentication_failed)
404 - File or Directory not found SERVER ERROR
Please help. Thanks
After the stop button on the browser is pressed, Visual Studio runs the test once to help it find dynamic data, etc. Commonly this execution of the test fails, so do not worry about this failure.
You should then expect to run the test yourself a number of times, to make it work properly. Before each execution you may need to make changes, for example:
for data driving
for adding credentials
for adding verification rules
to sort out dynamic data that Visual Studio has not detected; this will probably include adding extraction rules (a coded example is sketched below)
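Where the built-in extraction rules are not enough, a custom rule can be written against the VS 2010 WebTesting API. The following is only a minimal sketch, assuming a hypothetical "ticket" value embedded in the response body; the class name, parameter name and parsing logic are illustrative, not part of the question:

using System.ComponentModel;
using Microsoft.VisualStudio.TestTools.WebTesting;

// Hypothetical custom extraction rule: pulls a value out of the response body
// and stores it in the test context so later requests can reuse it.
public class TicketExtractionRule : ExtractionRule
{
    [Description("Name of the context parameter that receives the extracted value.")]
    public string ParameterName { get; set; }

    public override void Extract(object sender, ExtractionEventArgs e)
    {
        // Deliberately naive parsing - a real rule would match the page's actual markup.
        string body = e.Response.BodyString;
        int start = body.IndexOf("ticket=");
        if (start >= 0)
        {
            string value = body.Substring(start + "ticket=".Length).Split('&', '"', ' ')[0];
            e.WebTest.Context[ParameterName] = value;
            e.Success = true;
            return;
        }
        e.Success = false;
        e.Message = "ticket value not found in response";
    }
}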
We have a huge number of tests running against our web application and I have come across a very strange error.
We have a function that will upload a file into the application; within this it will click the browse button, input a location and click OK, followed by upload.
This works in 90% of the tests, and it's the same function being called in all the separate scripts, but in some tests it's failing because it's unable to locate the object (in this case the browse button on the dialog box).
It's being tested on multiple machines, it's the same target server we are testing against, and it's the same IE version, but we are getting different results and I'm running out of ideas.
Although when you map the object in TestComplete and compare this against what the test is looking for, they are identical.
Mapped Object using the object spy
Aliases.browser.pageModspace.panelMangoentryformC.panelMangoentryformAddFile.panelBd.panelEntryformcontent.panelModspacedialog.formEntryform.tableFilesourceTable.cellFilesourceOptionFile.fileFilesourceinputfield
The object it failed to find
Aliases.browser.pageModspace.panelMangoentryformC.panelMangoentryformAddFile.panelBd.panelEntryformcontent.panelModspacedialog.formEntryform.tableFilesourceTable.cellFilesourceOptionFile.fileFilesourceinputfield
Does anyone have any ideas?
Loading objects can take a different amount of time on different computers. You can try the following:
Modify your test to get the problematic object via the WaitAliasChild method. In this case, TestComplete will wait for the object for the specified time:
Aliases.browser.pageModspace.panelMangoentryformC.panelMangoentryformAddFile.
panelBd.panelEntryformcontent.panelModspacedialog.formEntryform.tableFilesourceTable.
cellFilesourceOptionFile.WaitAliasChild("fileFilesourceinputfield", 20000)
Details: http://smartbear.com/viewarticle/55413/
Increase the Auto-wait timeout project option. This will make TestComplete wait for objects longer. However, you need to use this option very carefully as it can affect the total execution time. Details: http://smartbear.com/viewarticle/55316/
I have a Visual Studio 2010 Load test, which contains a number of web performance tests. Running the web performance tests requires you to be logged in to the website under test. Accordingly, the load test contains an initialization step - a small web performance test which does the login, and which uses a plug-in to cache the cookie so obtained. The 'real' web performance tests - the ones that actually do the work - also each have a plug-in that reads the cached cookie and adds it to the test, so that each test functions correctly:
public override void PreWebTest(object sender, PreWebTestEventArgs e)
{
    if (CookieCache.Cookies != null) // CookieCache is a static class of mine
        e.WebTest.Context.CookieContainer.Add(CookieCache.Cookies);
}
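For context, the CookieCache class is not shown in the question; a minimal sketch of what such a helper might look like (the names are the question's, the body is an assumption) is:

using System.Net;

// Assumed shape of the question's CookieCache helper: a static holder that the
// login test's plug-in fills and the worker tests' plug-ins read from.
public static class CookieCache
{
    // Set by the initialization (login) web test's plug-in after a successful login.
    public static CookieCollection Cookies { get; set; }
}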
The problem is that while this all works absolutely fine when I run the load test, it means I can't run any of the web performance tests in isolation because if the load test initializer hasn't run then there's no cookie, so the web performance test won't be logged in and will fail.
Is there any recommended solution for this situation? In other words, if a web performance test needs to have logged in, is there any way to get it to run both in isolation and when it's part of a load test?
The obvious way to run each web performance test in isolation would be to have it call the login test first, but I can't do that because that'll be incorrect behaviour for the load test (where logging in should happen only once per user, right at the beginning of the load test).
The solution is to add the Login test to your individual web performance tests (via "Insert Call to Web Test"), but gated by a "Context Parameter Exists" Conditional Rule that looks for the absence of the context parameter $LoadTestUserContext. That parameter only exists if the web test is running in a load test.
This way you get just one Login whether in or outside of a load test.
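For coded web tests, the same gating idea can be expressed in a plug-in. This is only a sketch of the approach, assuming (as the answer above does) that the $LoadTestUserContext context parameter exists only when the test runs inside a load test; the class and parameter names are made up for illustration:

using Microsoft.VisualStudio.TestTools.WebTesting;

// Sketch: flag whether an explicit login is needed, so the call to the login
// web test can be skipped when a load test has already handled logging in.
public class LoginGatePlugin : WebTestPlugin
{
    public override void PreWebTest(object sender, PreWebTestEventArgs e)
    {
        bool inLoadTest = e.WebTest.Context.ContainsKey("$LoadTestUserContext");
        if (!inLoadTest)
        {
            // A "Context Parameter Exists" conditional rule (or coded logic)
            // can check for this parameter and include the login call.
            e.WebTest.Context["NeedsStandaloneLogin"] = "true";
        }
    }
}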
Why not try using the PreRequest method instead of the PreWebTest method?
Public Overrides Sub PreRequest(sender As Object, e As PreRequestEventArgs)
    MyBase.PreRequest(sender, e)
    ' The cookie's name, value, path and domain go in the constructor
    Dim cookie As System.Net.Cookie = New System.Net.Cookie(...)
    e.Request.Cookies.Add(cookie)
End Sub
That way both the Load test and the Web Test will work.
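Since the question's plug-in is written in C#, a roughly equivalent C# version of that PreRequest override is sketched below; the cookie name, value, path and domain are placeholders rather than values taken from the question:

using Microsoft.VisualStudio.TestTools.WebTesting;

public class AddSessionCookiePlugin : WebTestPlugin
{
    public override void PreRequest(object sender, PreRequestEventArgs e)
    {
        base.PreRequest(sender, e);

        // Placeholder values - substitute the real session cookie details here.
        var cookie = new System.Net.Cookie("SessionCookie", "cookie-value", "/", "example.com");
        e.Request.Cookies.Add(cookie);
    }
}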
I'm not familiar with Visual Studio 2010 Load Testing, but it sounds like you need the equivalent of NUnit's SetUp and TearDown methods which run once for all tests, whether you have selected a single test or all the tests in an assembly.
A bit of searching implies that the equivalent is the Init and Term tests.
1) Right click on a scenario node in the load test and select Edit Test Mix...
2) In the edit test mix dialog, look at the bottom of the form. You will see 2 check boxes: one for an init test and one for a term test.
The init test will run prior to each user and the term test will run when the user completes. To make sure the term test runs, you also need to set the cooldown time for the load test. The cooldown time is a property on the run settings node. This setting gives tests a chance to bleed out when the duration completes. You can set this to 5 minutes. The cooldown period does not necessarily run for 5 minutes; it will end when all term tests have completed. If that takes 20 seconds, then that is when the load test will complete.
I'm using a VS2010 loadtest against a website, but the site being tested is throwing some errors (e.g. SiteUnavailable or other general site-specific errors).
The loadtest continues execution even if an error is returned in the response - so our .NET server logs are showing many errors for a single user session - and the subsequent errors may well be caused by the fact that we are trying to continue a web journey that should really have ended.
So is it possible to end the erroring user session as soon as an error is hit in a loadtest without ending the whole loadtest? I would then want the virtual user to continue with another new web journey.
My loadtest is not scripted (it's using the default view) as I read somewhere that loadtests are less efficient when scripted.
However I can't see a setting that would enable me to do what I want, so I'm thinking that scripting would be the way to go.
Any pointers/suggestions gratefully received.
Dave
In case anyone else needs the answer to this, I had it answered via the MS forum. There is a setting in the webtest "StopOnError" - this should be set to True and will end the webtest running, NOT the loadtest, if an error occurs. This setting avoids the chain of potentially unrelated errors that may occur as a result of a single error.
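StopOnError is a property of the web test itself; it can be set in the Properties window for a declarative test, or programmatically from a plug-in or coded test. A minimal plug-in sketch (the class name is illustrative) might look like:

using Microsoft.VisualStudio.TestTools.WebTesting;

// Sketch: abort the current web test iteration on the first error,
// without stopping the load test that is driving it.
public class StopIterationOnErrorPlugin : WebTestPlugin
{
    public override void PreWebTest(object sender, PreWebTestEventArgs e)
    {
        e.WebTest.StopOnError = true;
    }
}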
I am using Visual Studio 2010 Ultimate to perform loadtests. These loadtest use recorded webtests.
When running a loadtest with an increasing number of concurrent users, some steps in my webtests will start to fail. The first error is often an internal server error 500. This gives a wrong impression of the average page_load, because these internal server errors are often returned very fast, in contrast to the generation of a successful response. So, when the load increases, the average page_load drops.
Of course, I need to attend to these internal server errors, but in the meantime, I would like to exclude failed webtests from my measurements.
Does anybody know if this can be done?
Thanks in advance.
It may be possible to run your own query on the test results database that ignores errors, but even that will be inaccurate.
Remember that the page return stats are really only useful when read in conjunction with the load on the hardware.
Essentially, the load test is recording the effect on your hardware of a given load. If your website is returning a large number of 500 error pages quickly, the load on the hardware will be affected and any page stats will reflect the change in server loading.
You will have to investigate the cause of the 500 errors and either fix the issue or report in your load testing results that once a load of 'x' is reached on the servers, the pages 'y' will give an internal server error 500 result instead of the requested page.
This gives the business owners of your app some information to make the decision about whether to fix the problem or live with it.