Running automation tests on 2 TeamCity agents in parallel

We use TeamCity with MSTest to manage and run an automation test suite for front-end testing of a WPF application.
Currently the test suite runs on one dedicated test agent (where TeamCity is installed), and I'm now at a stage where I need to dramatically reduce the overall time it takes to run. I want to do this by adding another test agent so the tests run in parallel.
My test scenarios are large, so I have them separated into approximately four SpecFlow feature files that run in sequence. All these test scenarios are also categorised by their functional areas.
Firstly:
Is it possible to configure TeamCity so that one test agent manages distributing the tests across the agents and then collates all the results at the end?
Secondly:
Can the categorised tests that need to run in sequence be kept together?

I decided to use 2 separate build configurations in my TeamCity setup. Each configuration is pinned to a different test agent via an Agent Requirement, and I have simply divided the test categories (which I had already set up in my test scenarios) between the two configurations, half and half.
Pro:
Simple solution and easy to maintain
Con:
Results for each build are separated in TC
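As an illustration, the split can be expressed as an MSTest category filter in each build configuration's test step. The category names below are placeholders; with the MSTest provider, SpecFlow emits a scenario's @tags as [TestCategory] attributes, so the existing functional-area tags can be filtered on directly:
Build configuration A (agent 1):
MSTest /testcontainer:UiTests.dll /category:"AreaA|AreaB"
Build configuration B (agent 2):
MSTest /testcontainer:UiTests.dll /category:"AreaC|AreaD"
Because the scenarios in a feature file share a functional-area tag, each in-sequence group stays together on a single agent.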

Related

How to run Test cases sequentially in cypress?

Let's take one spec file in Cypress in which I have 5 test cases that have to run sequentially.
I tried the npx cypress run command and it runs the test cases one by one. Is that right or wrong? I am confused.
Yes, that is totally normal. Here is a link to the Cypress documentation, which explains what parallelization is and how to set it up.
You need to activate parallelization first in order to use it. However, it is not recommended on a single machine, as it needs a significant amount of resources.
Cypress Parallelization
If your project has a large number of tests, it can take a long time for tests to complete running serially on one machine. Running tests in parallel across many virtual machines can save your team time and money when running tests in Continuous Integration (CI).
Cypress can run recorded tests in parallel across multiple machines since version 3.1.0. While parallel tests can also technically run on a single machine, we do not recommend it since this machine would require significant resources to run your tests efficiently.
This guide assumes you already have your project running and recording within Continuous Integration. If you have not set up your project yet, check out our Continuous Integration guide. If you are running or planning to run tests across multiple browsers (Firefox, Chrome, or Edge), we also recommend checking out our Cross Browser Testing guide for helpful CI strategies when using parallelization.
Turning on parallelization
Refer to your CI provider's documentation on how to set up multiple machines to run in your CI environment.
Once multiple machines are available within your CI environment, you can pass the --parallel flag to cypress run to have your recorded tests parallelized.
cypress run --record --key=abc123 --parallel
Running tests in parallel requires the --record flag be passed. This ensures Cypress can properly collect the data needed to parallelize future runs. This also gives you the full benefit of seeing the results of your parallelized tests in Cypress Cloud. If you have not set up your project to record, check out our setup guide.
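For example (the record key is the placeholder from above, and $BUILD_ID stands in for whatever build identifier your CI exposes), each machine runs the same command, with a shared --ci-build-id so Cypress Cloud groups the machines into one run and balances the specs between them:
cypress run --record --key=abc123 --parallel --ci-build-id $BUILD_ID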
Source: Cypress Documentation

How to set up Jenkins for build, unit test and system tests

I want to set up Jenkins for a decent build chain for a JavaFX application that controls a robotic arm and other hardware:
We are using BitBucket with the Git Flow model.
We have 2 development modules and 1 System Test module in IntelliJ that we build with Maven.
We have Unit tests, Integration tests, and System tests. Unit and Integration tests use JUnit and can run on the Jenkins master; for the System tests we use TestNG, which can run the TestFX tests on a Jenkins agent.
(I think TestNG is better suited for System tests than JUnit.)
A development build project (build, unit + integration tests) was already in place. The test chain was recently set up by copying the development project, adding the system tests, and ignoring the unit/integration tests (so the application is built twice).
We have 2 types of System tests:
Tests that are fairly simple and run on the application itself
Tests that are more complex and run on the application that interacts with several simulators for the robotic arm
Now I need to set up the 2nd type of tests.
My question would be: what is the best way to set this up in Jenkins?
I'm reading about Jenkins Pipelines and the Blue Ocean plugin here, and about matrix configuration projects here. To me it is all a bit confusing which way is the ideal one to achieve my goals.
I have no clue how to scale from a testng.xml file in my SystemTest module to flexible tests.
Can I put a sort of capabilities tag on tests so that the correct preconditions are set? For example, for tests in category 1, only the main application needs to be started for TestFX. However, for tests in category 2, several simulators need to be started and configured. I think using a sort of capabilities tag will make this much more maintainable, something like the TestNG sketch below.
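(The group name and simulator command here are hypothetical, just to show the shape of what I mean using TestNG groups:)

    import org.testng.annotations.AfterGroups;
    import org.testng.annotations.BeforeGroups;
    import org.testng.annotations.Test;

    public class ArmSystemTest {

        // Hypothetical handle for one of our robotic-arm simulators.
        private Process simulator;

        // Runs once before the first test in the "needs-simulators" group.
        @BeforeGroups(groups = "needs-simulators")
        public void startSimulators() throws Exception {
            simulator = new ProcessBuilder("arm-simulator", "--port", "9000").start();
        }

        // Tears the simulator down after the last test in the group.
        @AfterGroups(groups = "needs-simulators")
        public void stopSimulators() {
            simulator.destroy();
        }

        // Category 1: only the main application is needed, so no group tag.
        @Test
        public void mainWindowOpens() {
            // TestFX assertions against the running application
        }

        // Category 2: tagged so the simulators above are started first.
        @Test(groups = "needs-simulators")
        public void armMovesToHomePosition() {
            // TestFX assertions that exercise the simulators
        }
    }

A per-agent testng.xml could then include or exclude the needs-simulators group.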
My goals:
Easy to maintain Jenkins flow
Efficient building, so a preference for copying artifacts instead of building a second time
Possibility to split the system tests over multiple agents, preferably without me having to be concerned about what runs where (similar to Selenium Grid)
Correct dependencies (simulators etc) are started depending if the test needs them
We are looking into running the tests on VMs with OpenGL 3D acceleration due to a canvas used in the application. If tests are able to allocate, start, stop VMs on demand, that would be cool (but would only save some electricity)
Easy reporting where all test results are gathered from all agents. Note that I prefer the JUnit report because it highlights which tests were @Ignored; the TestNG report format doesn't say anything about ignored tests.

What needs to be done between continuous integration and continuous delivery

According to my understanding, continuous integration means that whenever a developer checks code in to a branch, the code is automatically built, unit tested (or put through other basic tests) and then merged to the master branch. One tool to do that is Jenkins.
Continuous delivery means the code is always READY to be, or CAN be, deployed, though it may not actually be deployed.
So what else should be done to move from continuous integration to continuous delivery? Packaging the code after more detailed tests like integration/performance/stress tests, tests on different OSes, in different stages (test, production), etc.?
There is a long and a short answer. The short one is: automate all the steps of packaging and deploying to production, and create a safety net that automatically checks that the software is ready for release.
The first includes automating database migrations (taking zero-downtime deployment into consideration if needed), packaging the binaries, updating configuration files, and gradually deploying to different data centers.
The second includes creating test suites for functional and non-functional requirements, such as performance, load testing, security penetration, licensing, etc.

Execute Visual Studio Test Runner in Parallel on Team Foundation Server

I have a few test sets in a single TFS build definition. I'm looking for a way to run all my tests sets in parallel to cut down on the time that the build takes to run.
Below is a screen shot of my build definition and my automated test configurations.
Does anyone know how to accomplish this? I'm not seeing a setting in the build definition, and I've seen test settings files being used for this, but I'm not sure where to set the test settings file in the build definition.
The new vNext build system can run different build configurations in parallel (on different build agents), so you could, but I'm not sure how practical it would be.
Your test sets would have to be in 4 different projects. Then you'd need to have different configurations:
- anycpu / release-with-tests1
- anycpu / release-with-tests2
- anycpu / release-with-tests3
- anycpu / release-with-tests4
With that you can enable (build) only 1 test project per configuration and set your build to run all tests (*test.dll); each configuration would then run only one test project, because the others were not built.
You enable the parallel build and it runs on up to 4 agents, depending on how many you have. Each agent would have to download the sources and build them before running the tests, so whether this pays off depends on how long your tests actually take.
There is a filter for the binaries you want to copy to your drop folder, so you can copy them for only 1 configuration and not waste disk space.
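As a sketch (the field names are from the vNext build definition UI; the values are examples), enable Multi-configuration under Options with the configuration variable as the multiplier, so each leg builds and tests one flavour:
Multipliers:          BuildConfiguration
BuildConfiguration:   release-with-tests1, release-with-tests2, release-with-tests3, release-with-tests4
Visual Studio Build:  Platform = anycpu, Configuration = $(BuildConfiguration)
Visual Studio Test:   Test Assembly = **\*test*.dll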
You can use a test framework that supports multi-threaded execution to run tests in parallel on a multi-core machine, or distribute your tests to multiple machines. Refer to this blog from MSDN for details: http://blogs.msdn.com/b/visualstudioalm/archive/2015/07/30/speeding-up-test-execution-in-tfs.aspx
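For the single-machine, multi-core case, MSTest can run unit tests in parallel via the parallelTestCount attribute in a .testsettings file (0 means one test per available core); a minimal sketch:
<?xml version="1.0" encoding="UTF-8"?>
<TestSettings name="Parallel" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <Execution parallelTestCount="0">
    <TestTypeSpecific />
  </Execution>
</TestSettings>
Note that this applies to unit tests only, and the build definition has to be pointed at this .testsettings file for it to take effect.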

Is there a way to disable/ignore a Load Test in Visual Studio 2010 without using Test Lists?

I'm new to load testing in Visual Studio/MSTest, and I created a new Load Test recently to validate some high-traffic scenarios for a WCF service. I want to add this to the tests project for the service, but I don't want the test to be executed whenever I "Run All Tests in Solution" nor as part of our Continuous Integration build-verification process because a) it takes 5 minutes to run, and b) the service call that it is testing generates many thousands of email messages. Basically, I'd like to do the equivalent of adding the [Ignore] attribute to a unit test so that the load test is only executed when I explicitly choose to run it.
This MSDN article ("How to: Disable and Enable Tests") suggests that the only way to disable the test is to use Test Lists (.vsmdi files), but I don't have much experience with them, they seem like a hassle to manage, and I don't want to have to modify our CI Build Definition; moreover, this blog post says that Test Lists are deprecated in VS2012. Any other ideas?
Edit: I accepted Mauricio's answer, which was to put the load tests into a separate project and maintain separate solutions, one with the load tests and one without. This enables you to run the (faster-running) unit tests during development and also include the (slower-running) load tests during build verification without using test lists.
This should not be an issue for your CI Build Definition. Why?
To run unit tests as part of your build process you need to configure the build definition to point to a test container (usually a .dll file containing your test classes and methods). Load tests do not work this way: they are defined within .loadtest files (which are just XML files) that are consumed by the MSTest engine.
If you do not make any further changes to your CI Build definition the load test will be ignored.
If you want to run the test as part of a build, then you need to configure the build definition to use the .loadtest file.
Stay away from testlists. Like you said, they are being deprecated in VS11.
Edit: The simplest way to avoid running the load test as part of Visual Studio "Run All" tests is to create a different solution for your load tests.
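If you also have ordinary slow unit tests you want to keep out of the default CI run (as opposed to the .loadtest file, which a container-based run skips anyway), one more option is to tag them with [TestCategory("LoadScenario")] (the category name here is just an example) and exclude them on the MSTest command line:
MSTest /testcontainer:Tests.dll /category:"!LoadScenario"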
Why don't you want to use Test Lists? I think it is the best way to do that. Create different Test Lists for each test type (unit test, load test...) and then in your MSTest command run the Test List(s) you want:
MSTest /testmetadata:testlists.vsmdi /testlist:UnitTests (only UnitTests)
MSTest /testmetadata:testlists.vsmdi /testlist:LoadTests (only LoadTests)
MSTest /testmetadata:testlists.vsmdi /testlist:UnitTests /testlist:LoadTests (UnitTests & LoadTests)
