I've been trying to get JMeter load tests to run in VSTS, thus far to no avail. I've been back and forth (very slowly!) with the Microsoft support team about this, but while the issues are ironed out I would like to at least run a small set of load tests on our build machine using JMeter and then have the results uploaded somehow to VSTS so they are easier to track. I have part 1 of this working: from the VSTS release definition I run a batch file that runs the load tests locally and then generates an aggregate spreadsheet with results.
The question is - how can I get those results loaded into VSTS?
In our case we had to export the results to XML using JMeter's XML output configuration. Then we had a script to transform the XML into a proper xUnit result file, and we finally used a Publish Test Results task to gather this file and add the results to the release (this approach would work with build definitions too).
It's a little complicated and requires some scripting, and it would certainly be easier if a dedicated task was available.
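For what it's worth, here is a rough sketch of the kind of transform script we mean, assuming JMeter's default XML (.jtl) result format (httpSample/sample elements with lb, t and s attributes) and a JUnit-style output file, which the Publish Test Results task can consume when its result format is set to JUnit; names and paths are only illustrative:

using System;
using System.Linq;
using System.Xml.Linq;

class JMeterToJUnit
{
    static void Main(string[] args)
    {
        // args[0] = JMeter XML results file, args[1] = JUnit-style output file
        var samples = XDocument.Load(args[0]).Root
            .Elements() // <httpSample> / <sample> entries
            .Select(s => new
            {
                Name = (string)s.Attribute("lb"),           // sampler label
                Seconds = (double)s.Attribute("t") / 1000,  // elapsed time (ms -> s)
                Passed = (bool)s.Attribute("s")             // success flag
            })
            .ToList();

        var suite = new XElement("testsuite",
            new XAttribute("name", "JMeter"),
            new XAttribute("tests", samples.Count),
            new XAttribute("failures", samples.Count(x => !x.Passed)),
            samples.Select(x => new XElement("testcase",
                new XAttribute("name", x.Name),
                new XAttribute("time", x.Seconds),
                x.Passed ? null : new XElement("failure", "Sampler reported failure"))));

        new XDocument(suite).Save(args[1]);
    }
}

The Publish Test Results step then just needs to point at the generated file.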
We have a TFS build definition that kicks off NUnit tests tagged with the 'Regression' test category. This uses the NUnit console runner's annotation of
where cat = 'Regression'
However, we have multiple environments, and some tests that fail in one environment will pass in another. We have not made much use of the Playlist feature because I cannot find a way to target a playlist when running remotely on TFS. Does anyone know how this can be done? Thanks!
Unfortunately, there is no way of specifying a playlist in the TFS build definition for now. There is a related UserVoice suggestion:
Allow test playlists to be used by TFS build servers
https://visualstudio.uservoice.com/forums/330519-visual-studio-team-services/suggestions/3853614-allow-test-playlists-to-be-used-by-tfs-build-serve
As a workaround, you could use .orderedtest instead of .playlist.
Ordered tests can be created and edited in VS2013 and later. The format is otherwise similar to .playlist, but it contains links to test GUIDs, so it's more complicated to modify programmatically.
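For what it's worth, the .orderedtest file is just XML, so it can be inspected or tweaked with a small script; a sketch that lists the linked tests, assuming the layout Visual Studio typically generates (an OrderedTest root in the TeamTest/2010 namespace with TestLink elements carrying id/name/storage attributes):

using System;
using System.Xml.Linq;

class ListOrderedTestLinks
{
    static void Main(string[] args)
    {
        // args[0] = path to the .orderedtest file
        XNamespace ns = "http://microsoft.com/schemas/VisualStudio/TeamTest/2010";
        var doc = XDocument.Load(args[0]);

        foreach (var link in doc.Descendants(ns + "TestLink"))
        {
            // Each TestLink references a test by GUID, which is why hand-editing
            // the file is more awkward than editing a .playlist.
            Console.WriteLine("{0}  {1}  ({2})",
                (string)link.Attribute("id"),
                (string)link.Attribute("name"),
                (string)link.Attribute("storage"));
        }
    }
}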
TFS is able to run an ordered test in a build pipeline; for how to achieve this, you could refer to the links below:
TFS - order of automated tests to execute
How to use Vnext build: ordered tests, distribute test, collect results
I created a build definition that runs automated tests using MTM build environments and test suites. I recently created a Visual Studio Load Test, which can be added to a test suite just like any test method marked with the [TestMethod] attribute. However, when I run the build, I get no errors and it appears the aggregate tests don't run. Is there a way to make this work?
I found this article: https://blogs.msdn.microsoft.com/testingspot/2013/01/22/how-to-automatically-run-a-load-test-as-part-of-a-build/ which describes a way to do it, but I can't find a build template that matches what he describes, and it appears this only allows you to run a single load test.
Also, when you configure a test controller, there is an option to configure it for load testing, but to do this you must unregister it from the Team Project Collection. If this is done, it appears the controller can no longer be used in an environment to run a project's automated tests. This defeats the purpose of what I want to do and makes it seem that Load Tests and Team Projects are mutually exclusive. Is this the case? If so, this is a big oversight. Load tests are the kind of thing you would like to run automatically. Thanks for the help.
You are unfortunately right. A test controller used for load testing cannot be used for other automated test execution 'at the same time'. In your scenario I would recommend that you set up a different test controller and agent for load testing; you would then be able to queue it as part of your build to achieve what you are looking for.
There is no special build process template for this case.
We have a web application. We want to run the same test across multiple environments to ensure everything is still working correctly.
UAT -> STAGING -> PRODUCTION
We want to run these tests after each deploy to each environment. Each environment has a different URL. I have created three test plans in MTM. I have added test cases for only the UAT environment, and I have created an environment in Lab Center. By the way, I have recorded the test cases with Coded UI tests and associated them for automated testing (only for the UAT environment). How can I test the other environments? How can I achieve this without changing the recording or code every time? Thanks.
If you generated the tests using the default Test Builder, you can try writing something like this in your [CodedUITest] class:
[TestInitialize()]
public void MyTestInitialize()
{
    // The URL could be read from a config file instead of being hard-coded here.
    string url = "http://stackoverflow.com/";
    this.UIMap.RecordedMethodXXParams.XXChWindowUrl = url;
}
Here RecordedMethodXXParams and XXChWindowUrl are auto-generated; you can check the generated names in the UIMap class.
This is a late answer, but just in case it helps other readers:
You do not need to create multiple test plans or test suites in MTM for this. What you need is for the builds to be smart enough to choose the right config based on the target environment. As Ciaran suggested, you could use an XML config that has all the details of each environment and then write some filtering code to pick out the details for the target environment, but maintainability could become a bit of a pain. Ideally you would have one XML layout for app.config that loads different values for each build configuration based on the target environment, i.e. the XML in app.config is transformed for the target environment (a sketch of how the tests read the transformed value is at the end of this answer).
SlowCheetah does exactly that for you. A bit of reading and understanding is required to implement it.
After you have all the transforms in place, use the Configuration Manager in Visual Studio to describe all the target environments. You can find it in the dropdown next to the green start/run button in Visual Studio.
Create a separate CI build (i.e. trigger = check-in) of the test code (i.e. the Coded UI tests project) targeting each test environment, using the Process > Build > Configurations section of the build definition.
Create a lab test run build (i.e. one using LabDefaultTemplate) for each target environment that uses the same test suite from the test manager. Make sure that each of these builds maps to the corresponding CI build in the build section of the process workflow wizard.
Queue away all the builds and you'll have them running together in all the environments simultaneously, each of them smartly picking up the right configs.
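At run time the test code then just reads the value out of the already-transformed app.config; a minimal sketch, assuming an appSettings key called BaseUrl and a reference to System.Configuration (the class and method names here are made up):

using System;
using System.Configuration;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class SmokeTests
{
    [TestInitialize]
    public void LaunchTargetEnvironment()
    {
        // After the SlowCheetah transform, app.config already holds the value for
        // whichever build configuration (UAT, Staging, Production, ...) was built.
        string url = ConfigurationManager.AppSettings["BaseUrl"];
        BrowserWindow.Launch(new Uri(url));
    }
}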
You will probably need to edit the Coded UI tests to change the browser URL which gets launched when the tests run. When I performed automated Coded UI tests on different browsers, I made the tests read from an XML configuration file on each test environment at start-up to get the correct browser URL (and any other relevant configuration data). So in other words you will need at least a little bit of code to handle the different URLs or any other configuration data for each test environment.
For actually running the tests on remote environments, you should download the Microsoft Test Controller and Test Agents (Download link). And here's the documentation for installing and configuring the agents.
The idea is that your main machine (perhaps the main build/test machine) has the test controller installed, and the test controller remotely connects to the test agents which are installed on your test environment and launches the automated Coded UI tests.
Microsoft Test Manager also has command-line options so that you can schedule automated tests (e.g. you could run a script from the Windows task scheduler).
I can't remember the exact details of implementing these, but hopefully this will at least point you in the right direction so that you can research these things further.
There are plenty of nuances with automating tests using test agents, so I would prepare to invest a fair amount of time in this.
UPDATE:
It's been a long time since I've worked with test automation so I don't remember the details of my implementation, but as far as I remember, in my system I had an XML configuration file stored on the test environment (e.g. C:\MyTestConfig\config.xml) that had XML values for various configuration options, the important one being the URL that I wanted to launch, e.g.
<browserUrl>http://localhost:1659/whatever</browserUrl>
Then I had a class in the test project which, on instantiation, would load the configuration XML file (it was stored in the same place in each test environment) and read the values. It's been a long time since I've done this, so I can't remember my exact implementation, but there is plenty of documentation on the web for reading XML in C#/.NET.
My test classes then inherited from the class which reads the configuration values, and the test setup methods in those classes launched the browser with the browser URL from the XML file and started the tests. If you don't know how to create test setup methods, look at the documentation for the test framework you are using (most likely the Visual Studio unit testing framework, as this is used by default with Coded UI tests).
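Roughly, the pieces described above could look like the sketch below. This is only an outline, assuming the config file lives at C:\MyTestConfig\config.xml on every environment and that the <browserUrl> element shown earlier sits directly under the document root; the class and method names are made up:

using System;
using System.Xml.Linq;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UnitTesting;

public abstract class EnvironmentConfigTest
{
    // The path is the same on every test environment; only the file contents differ.
    protected string BrowserUrl { get; private set; }

    protected EnvironmentConfigTest()
    {
        var config = XDocument.Load(@"C:\MyTestConfig\config.xml");
        BrowserUrl = (string)config.Root.Element("browserUrl");
    }
}

[CodedUITest]
public class LoginTests : EnvironmentConfigTest
{
    [TestInitialize]
    public void LaunchBrowser()
    {
        // Each environment's config.xml points the same recorded tests at the right URL.
        BrowserWindow.Launch(new Uri(BrowserUrl));
    }

    // ... recorded Coded UI test methods go here ...
}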
I'm new to load testing in Visual Studio/MSTest, and I created a new Load Test recently to validate some high-traffic scenarios for a WCF service. I want to add this to the tests project for the service, but I don't want the test to be executed whenever I "Run All Tests in Solution" nor as part of our Continuous Integration build-verification process because a) it takes 5 minutes to run, and b) the service call that it is testing generates many thousands of email messages. Basically, I'd like to do the equivalent of adding the [Ignore] attribute to a unit test so that the load test is only executed when I explicitly choose to run it.
This MSDN article ("How to: Disable and Enable Tests") suggests that the only way to disable the test is to use Test Lists (.vsmdi files), but I don't have much experience with them, they seem like a hassle to manage, I don't want to have to modify our CI build definition, and this blog post says that Test Lists are deprecated in VS2012. Any other ideas?
Edit: I accepted Mauricio's answer, which was to put the load tests into a separate project and maintain separate solutions, one with the load tests and one without. This enables you to run the (faster-running) unit tests during development and also include the (slower-running) load tests during build verification without using test lists.
This should not be an issue for your CI Build Definition. Why?
To run unit tests as part of your build process you need to configure the build definition to point to a test container (usually a .dll file containing your test classes and methods). Load tests do not work this way; they are defined within .loadtest files (which are just XML files) that are consumed by the MSTest engine.
If you do not make any further changes to your CI Build definition the load test will be ignored.
If you want to run the test as part of a build, then you need to configure the build definition to use the .loadtest file.
Stay away from testlists. Like you said, they are being deprecated in VS11.
Edit: The simplest way to avoid running the load test as part of Visual Studio "Run All" tests is to create a different solution for your load tests.
Why don't you want to use Test Lists? I think that is the best way to do it. Create a different Test List for each test type (unit test, load test, ...) and then in your MSTest command run the Test List(s) you want:
MSTest /testmetadata:testlists.vsmdi /testlist:UnitTests (only UnitTests)
MSTest /testmetadata:testlists.vsmdi /testlist:LoadTests (only LoadTests)
MSTest /testmetadata:testlists.vsmdi /testlist:UnitTests /testlist:LoadTests (UnitTests & LoadTests)
I am tasked with improving quality and implementing TeamCity for continuous integration. My experience with TeamCity is very limited - I mostly use TFS myself and have some experience with CC.NET.
A lot should happen within the build process... actually the build is already split into three different configurations that run one after the other.
My main problem is that in each of those I actually would need to start multiple runners. For example, the first build step shall consist of:
The generation of new AssemblyInfo.cs files for consistent assembly numbering
The actual compilation
A partial unit test run (all tests that run fast and check core functionality)
An FxCop run
A StyleCop run
The current version of TeamCity only allows one runner to be configured per build configuration... which leaves me stuck on a lot of things.
How would you approach this? My current idea is to use the MSBuild runner for everything and basically start my own MSBuild-based script which then does all the things, pretty much the way TFS handles it (and the same way I did things back in the CC.NET days with my own NAnt build script).
A further problem is how to present statistical information, for example from unit tests running in different stages (build configurations). We have some further down the chain that take some time to run and we want those to run in a 2nd or 3rd step (the last, for example, testing database generation code which, including loading base data, takes about 15+ minutes to run). OTOH we would really like the test results to be somehow consolidated.
Anyone any ideas?
Thanks.
TeamCity 6.0 allows multiple build steps for a single build configuration. Isn't it what you're looking for?
You'll need to script this out, at least parts of it. TeamCity provides some nice UI based config for some of your needs, but not all. Here's my suggestion:
Create an msbuild script to handle your first two bullet points, AssemblyInfo generation and compilation (a sketch of a minimal AssemblyInfo generator follows these suggestions). Configure the msbuild runner to run your script and to run your tests. Collect your assemblies as artifacts.
Create a second build configuration for FxCop. Trigger it from the first build. Give it an 'artifact dependency' on the first build, which is how it gets a hold of your dlls.
For StyleCop, TC doesn't support it out of the box like it does FxCop. Add it to your msbuild script manually, and have it produce an html report (which TeamCity can then display).
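For the AssemblyInfo step in the first suggestion, one lightweight option is a tiny console tool invoked from the msbuild script before compilation; this is only a sketch, with the file path and the way the version number is passed in (e.g. from TeamCity's %build.number%) entirely up to you:

using System;
using System.IO;

class GenerateAssemblyInfo
{
    static void Main(string[] args)
    {
        // args[0] = version number (e.g. passed in from TeamCity's %build.number%)
        // args[1] = output path for the generated file
        string version = args.Length > 0 ? args[0] : "1.0.0.0";
        string path = args.Length > 1 ? args[1] : @"Properties\VersionInfo.cs";

        File.WriteAllText(path,
            "using System.Reflection;\r\n" +
            "[assembly: AssemblyVersion(\"" + version + "\")]\r\n" +
            "[assembly: AssemblyFileVersion(\"" + version + "\")]\r\n");
    }
}

The generated file is then compiled into the projects in place of hand-maintained version attributes.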
You need to take a look at the Dependencies functionality in TeamCity. This feature allows you to create a sequence of build configurations. In other words, you need to create a build configuration for each step and then link them all as dependencies.
For consolidating test results, please take a look at Artifact Dependencies. It might help.