I'm using Visual Studio Online Load Testing to test an API with variable parameters coming from a CSV file.
My setup looks like this:
In the properties I set "Show Separate Request Results" to True, hoping that I would be able to see which parameters were used during the test, but I cannot find anything about this in the report.
Is this the way to do this or am I doing something wrong?
Visual Studio load tests are not great at showing how individual test cases worked. The test case logs show the data source values used by a test; look in the context section of the log. By default, logs of the first 200 test cases that fail are retained; this can be altered via the "Maximum test logs" value in the run settings. Logs of successful tests can also be retained by altering "Save log frequency for completed tests" in the run settings.
Whilst the log files have the data in their context sections, it is hard work (i.e. lots of mouse waving and clicking) to open each log file, view the context, scroll the right section into view, close the log file, and so on.
The mechanism I use to record data source usage etc is to have a web test plugin with a PostWebTest method. It writes useful data to a simple text file as each test case finishes. I write one line per test case, formatted as a CSV so it can easily be read and analysed in a spreadsheet. The data written includes date, time, test outcome, some data source values, and some context parameter values extracted or generated during the run. Tests run with multiple agents will get one file written on each agent. Gathering these files will be a little work but less than viewing individual test case log files. Unfortunately I have not found a way of collecting these files from load tests run with Visual Studio Team Services (previously known as Visual Studio Online).
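A minimal sketch of such a plugin is below. The class name, log file path, and data source column are my own inventions; the `WebTestPlugin` base class, `PostWebTest` override, and the `DataSourceN.FileName#csv.ColumnName` context key format come from the Microsoft.VisualStudio.TestTools.WebTesting namespace, but verify the exact key names against your own test's context.

```csharp
using System;
using System.IO;
using Microsoft.VisualStudio.TestTools.WebTesting;

// Sketch of a plugin that appends one CSV line per finished test case.
public class CsvResultLoggerPlugin : WebTestPlugin
{
    // Path is an assumption; on a multi-agent rig each agent writes its own file.
    private const string LogFile = @"C:\LoadTestLogs\TestCaseResults.csv";

    public override void PostWebTest(object sender, PostWebTestEventArgs e)
    {
        // "Customers#csv.CustomerId" is a made-up data source binding;
        // substitute the data source and column names from your own test.
        string line = string.Format("{0:yyyy-MM-dd},{0:HH:mm:ss},{1},{2}",
            DateTime.Now,
            e.WebTest.Outcome,
            e.WebTest.Context["DataSource1.Customers#csv.CustomerId"]);
        File.AppendAllText(LogFile, line + Environment.NewLine);
    }
}
```

Attach the plugin to each web test; one line is then written every time a test case completes, and the resulting file opens directly in a spreadsheet.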
An early version of the plugins I wrote can be found here.
Related
I have a solution where I am running various load test scenarios via the command line for Visual Studio 2017. I know I can use MSTest to do this, and am using the /testcontainer argument to pick which .loadtest file, and the Test.UseRunSetting=WhateverRunSetting argument to pick which run setting I want (controller duration, context parameters, etc.).
I can't seem to find a way to change the Step Load Pattern from command line, however. This would allow me to, from the command line, set things like ramp up time, max users, initial users, and other flags. The other way I can think to do this is if I can say which scenario to run (instead of all scenarios in the .loadtest file) based on a command line arg.
Well, the route I used was to write a PowerShell script that accepts the parameters I want, such as initial users, step duration/ramp time, context parameters, max users, etc. The script takes those and edits my .loadtest file (since it's just an XML file), then saves the .loadtest with the edits, including setting the desired run setting as the active one.
Then using MSTest as noted above, I can make sure that run setting and .loadtest is used and kick off my test(s).
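The editing step can be sketched roughly as below. The element and attribute names (`LoadProfile`, `InitialUsers`, `MaxUsers`, `StepDuration`) are taken from my own .loadtest file and may differ between Visual Studio versions, so inspect your file before relying on them; setting the active run setting works the same way but is omitted here.

```powershell
param(
    [string]$LoadTestFile,
    [int]$InitialUsers,
    [int]$MaxUsers,
    [int]$StepDuration
)

# A .loadtest file is plain XML under the covers.
[xml]$doc = Get-Content $LoadTestFile

# Attribute names below match my .loadtest file; verify against yours.
$profile = $doc.LoadTest.Scenarios.Scenario.LoadProfile
$profile.InitialUsers = "$InitialUsers"
$profile.MaxUsers     = "$MaxUsers"
$profile.StepDuration = "$StepDuration"

$doc.Save((Resolve-Path $LoadTestFile).Path)
```

After the save, MSTest picks up the modified file exactly as if you had edited it in the Visual Studio designer.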
Another way to do this is to use plug-ins. You can create a plug-in for your load test that reads an external Excel or XML file to control the user load. You can then modify the Excel or XML file through automation.
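A hedged sketch of that idea, using a plain text file instead of Excel for brevity: the `ILoadTestPlugin` interface, the `Heartbeat` event, and the settable `LoadTestScenario.CurrentLoad` property are part of Microsoft.VisualStudio.TestTools.LoadTesting, but the file name, format, and polling-every-heartbeat design are my own choices.

```csharp
using System.IO;
using Microsoft.VisualStudio.TestTools.LoadTesting;

// Sketch: re-read an external control file on every heartbeat
// and adjust the user load of the first scenario accordingly.
public class ExternalLoadControlPlugin : ILoadTestPlugin
{
    private LoadTest loadTest;

    public void Initialize(LoadTest loadTest)
    {
        this.loadTest = loadTest;
        this.loadTest.Heartbeat += OnHeartbeat;
    }

    private void OnHeartbeat(object sender, HeartbeatEventArgs e)
    {
        int users;
        // "userload.txt" (a made-up name) simply contains the desired user count.
        if (int.TryParse(File.ReadAllText(@"C:\Control\userload.txt"), out users))
        {
            loadTest.Scenarios[0].CurrentLoad = users;
        }
    }
}
```

Any external process can then steer the running test just by rewriting that file.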
I have the same thought as you about modifying load test files externally. Never tried it, though.
We have a solution that contains a series of projects used for our tests. One of those projects contains a set of files that are needed for the tests. Those files are schemas that will get validated against every time an API route is called. Tests, naturally, call one or more API routes.
The solution has a .testsettings file. This file has deployment enabled, and it specifies that these schemas need to be deployed. In addition, every schema file is set to Copy Always. Also, the .testsettings file is in the solution, under Solution Items.
The problem is that the .testsettings file is only occasionally respected. Sometimes the files are copied; sometimes they are not. When they don't copy, we can do the following to fix it:
1. Go to the Test -> Test Settings menu and choose Select Test Settings
2. Select the .testsettings file in the solution
3. Rebuild the solution
4. Rerun the tests
This usually works at least once. But inevitably, it stops working and the files aren't deployed again.
Note that when you go to the Test -> Test Settings menu, our current .testsettings file is always already checked. So choosing a new .testsettings file just means choosing the one that the UI already says is chosen.
We thought of going the DeploymentItem route, but this is impractical for two reasons, both surrounding code maintenance.
From what I can tell, DeploymentItem can only be placed on individual tests. With literally hundreds of tests, we'd be sprinkling it everywhere. It'd become a code maintenance nightmare. I thought of placing it on the global TestInitialize method, but that would just re-copy the files every time a test is run, which seems unnecessary. Not to mention that I'd have to put literally dozens of DeploymentItem attributes on the method and we'd need to keep that up-to-date every time a new schema is added.
Related to that, adding new schemas means altering existing tests where necessary. Again, we have hundreds of tests.
A far better solution would be to have the files copied over once, and then have the code look in the communal pool of schemas whenever it needs one.
I also looked at replacing .testsettings with .runsettings, but it doesn't seem to have a DeploymentEnabled node in the XML, and where deployment options do exist they again seem specific to DeploymentItem.
Does anyone have a solution to this, or does anyone know if it's a known bug? Schema validation happens behind the scenes -- individual test authors don't have to explicitly call it -- and it doesn't fail the test if it doesn't occur (we don't always have schemas available yet for every API call, so we don't want to fail the test if that's the case). As a result, it's often difficult to immediately ascertain whether or not validation happened. Because of that, we sometimes get false passes on tests where the schema actually broke, all because the .testsettings file didn't actually deploy our files like it's set to.
So I found out the problem: apparently this issue was fixed in Visual Studio 2015 Update 3. We were using Update 2. Once we got the new update, this problem went away.
I am working with Visual Studio load testing. I want to prepare an Excel report after a successful load test. I want to trigger an exe, or create a custom C# class inside the load test solution, for the report generation. But for both I need a test-ended event. Is there any way to find out when the test run has completed?
Thanks in advance,
Subbiah K
A load test plugin can run code at the end of the test. The documentation is not clear on whether that means after the tests have executed or after all the results have been saved. I am not aware of any way of triggering the built-in Excel report generation programmatically.
If you run the load test by using mstest.exe or a similar program, then you can follow that command with another one that you write to do the reporting you require.
The ".testsettings" file has an option for a command (batch or exe etc) to be run after a test completes. I have not tried using it with a load test but it may provide a place to call your reporting program.
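The plugin route from the first paragraph could look roughly like this. The `ILoadTestPlugin` interface and the `LoadTestFinished` event are part of Microsoft.VisualStudio.TestTools.LoadTesting; the reporting exe path is a placeholder for whatever tool you write, and whether the results have been fully saved when the event fires is exactly the uncertainty noted above.

```csharp
using System.Diagnostics;
using Microsoft.VisualStudio.TestTools.LoadTesting;

// Sketch: launch a (hypothetical) reporting exe when the load test finishes.
public class ReportTriggerPlugin : ILoadTestPlugin
{
    public void Initialize(LoadTest loadTest)
    {
        loadTest.LoadTestFinished += (sender, e) =>
        {
            // ExcelReportGenerator.exe stands in for your own reporting tool.
            Process.Start(@"C:\Tools\ExcelReportGenerator.exe");
        };
    }
}
```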
I have created a number of (separate) CodedUI projects within Visual Studio 2013, in order to test the basic functions of a website.
Test-cases are created as separate projects, as I expect some (or all?) of them to change over time, and thus to ensure 'modularity' of capture for ease of subsequent maintenance.
Now, I see I can easily create an Ordered Test within each project, which will allow the same test-case to be run and re-run as many times as I wish; but it's not obvious to me how I can create an Ordered Test to which I can add different test-cases created as different projects. Certainly not directly.
Is it possible?
Also, can I rename the Ordered Test list and save it to a separate folder where I can store differing Ordered Tests to test functionality, as I wish?
Ideally, I'd like to create an Ordered Test external to any specific project, so I can go into any project I wish and add whatever tests I wish, as the test-environment is always the same.
You should have created a single project for your application. To ensure 'modularity', Coded UI gives us the option of creating different UI Maps within the same project. Each UI Map will represent a module of your application. This approach will give you easy maintenance and it will also help you create ordered tests which contain test cases from different UI Maps.
For more details please see this link:
https://msdn.microsoft.com/en-us/library/ff398056.aspx
Thanks
Yes, I sort of see that. And I guess it's easy enough to move the code to become separate 'solutions' within a 'project'.
However, I want to work with TFS server too, so will look at the MTM route as well.
But it may be that I need my captured CodedUI to be 'solutions' within a single project too - though I really want my modules to be 'stand-alone' projects for safe-keeping.
Will investigate further.
I have a few tests that need to be fed with external data from Excel files. The files are included in the test project, and in Visual Studio, I have edited the test settings file (Local.testsettings) to deploy the data files. This makes it work fine in VS.
We are, however, also running continuous integration with TeamCity, and in TeamCity this doesn't work. My data files are unavailable to the test. It seems that the tests are run from a temporary folder named "C:\TeamCity\buildAgent\temp\buildTmp\ciuser_AS40VS6 2009-12-11 09_40_17\Out", and the data files are not copied there.
I have tried changing the build action for the data files to "Resource" and setting "Copy to Output Directory" to "Copy always", but that didn't help.
Does anyone know how to make this work?
I am running Visual Studio 2010 beta 2 and TeamCity 4.5.5, which is why I'm running MSTest in the first place, and not NUnit...
I get round this by adding my data files (in my case usually XML) as embedded resources and I extract them from the test assembly.
    // Requires: using System.Reflection; using System.Xml;
    private XmlDocument doc;

    [TestInitialize]
    public void InitializeTests()
    {
        // The manifest resource name is the assembly's default namespace
        // plus the file name, e.g. "TestAssembly.File.xml".
        var asm = Assembly.GetExecutingAssembly();
        this.doc = new XmlDocument();
        this.doc.Load(asm.GetManifestResourceStream("TestAssembly.File.xml"));
    }
This post answers this question: MSTest copy file to test run folder
The accepted answer is technically correct. However, from my experience, embedding files as resources requires the additional step of remembering to set the "Embedded Resource" property. This becomes a challenge when you have a large number of data files. Also, with an increasing number of data files, the size of the unit test assembly keeps growing. In my case, I had over 500 MB of test data files, and packing all of them into the assembly was not a good idea.
What is the alternative?
Let the data files remain as they are. Do not use DeploymentItemAttribute, and do not use embedded resources. Please refer to my proposed solution: How do I make a data file available to unit tests?
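The idea boils down to leaving the files as ordinary content (with "Copy to Output Directory" set) and resolving their path at run time relative to the test assembly, so no deployment step is needed. A hedged sketch; the "Schemas" folder name and helper class are examples, not part of any framework:

```csharp
using System.IO;
using System.Reflection;

public static class TestData
{
    // Resolve a data file relative to wherever the test assembly actually
    // runs from. "Schemas" is an example folder that is copied to the
    // output directory at build time via "Copy to Output Directory".
    public static string GetPath(string fileName)
    {
        string baseDir = Path.GetDirectoryName(
            Assembly.GetExecutingAssembly().Location);
        return Path.Combine(baseDir, "Schemas", fileName);
    }
}
```

Tests then call something like `TestData.GetPath("Customer.xsd")` and the same code works in Visual Studio, TeamCity, or any other runner that copies the build output.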