Is there any way to divide my cucumber features into different directories with different setups?
I'm creating a 'premium' version of my application with different features enabled behind feature toggles, and I need to figure out how to divide my tests into groups with different setups.
If I could re-write the feature toggles .js file before each 'premium' test runs, this would get the job done.
Can cucumber tests be grouped, and be given different setups by group?
This sounds like a great fit for tags. You could add a @premium tag to any scenario that is only enabled in the premium version (Cucumber tags start with @, not #). You can then run different sets of tests as follows:
cucumber . # Run all tests
cucumber . --tags ~@premium # Run non-premium tests
(On newer Cucumber versions the negation syntax is --tags "not @premium" instead of ~@premium.)
Related
There is currently a --parallel option that can run scenarios in parallel using cucumberjs 5.1.0. I want to find out if there is a way to run feature files in parallel instead of scenarios, with this version of cucumberjs.
Looking at the code, it appears there is only the capability to run by scenario: https://github.com/cucumber/cucumber-js/blob/master/src/runtime/parallel/master.js#L105 . What is your reason for wanting to run by feature instead of by scenario?
A scenario is meant to be a completely independent test that doesn't rely on anything set up by the scenarios that ran before it, which is why the makers of Cucumber have provided parallel functionality in the way that they have.
Hooks can be used to set up the scenarios individually (by tags or the scenario's name) or run scripts to register users and other test data needed for the tests.
In summary, the way they provide the parallel testing functionality means you are forced to stick to best practices.
I want to set up Jenkins for a decent build chain for a JavaFX application that controls a robotic arm and other hardware:
We are using BitBucket with the Git Flow model.
We have 2 development modules and 1 System Test module in IntelliJ that we build with Maven.
We have Unit tests, Integration test and System tests. Unit and Integration tests use JUnit and can run on the Jenkins master, for the System tests we use TestNG and they can run the TestFX tests on a Jenkins agent.
(I think TestNG is more suited for System tests than JUnit)
The development build project (build, unit + integration tests) was already in place. The test chain was recently set up by copying the development project, adding the system tests, and ignoring the unit/integration tests (so the application is built twice).
We have 2 types of System tests:
Tests that are fairly simple and run on the application itself
Tests that are more complex and run on the application that interacts with several simulators for the robotic arm
Now I need to set up the 2nd type of tests.
My question would be: what is the best way to set this up in Jenkins?
I'm reading about Jenkins Pipelines and the Blue Ocean plugin here, and about a matrix configuration project here. It is all a bit confusing to me which is the ideal way to achieve my goals.
I have no clue how to scale from a testng.xml file in my SystemTest module to flexible tests.
Can I put a sort of capabilities tag on tests so that the correct preconditions are set? For example, for tests in category 1, only the main application needs to be started for TestFX. However, for tests in category 2, several simulators need to be started and configured. I think using a sort of capabilities tag will make this much more maintainable.
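TestNG groups can serve as exactly this kind of capabilities tag: annotate tests with group names and include only the matching groups per suite file. A sketch of such a testng.xml, with hypothetical group and package names:

```xml
<!-- Hypothetical suite file: runs only tests tagged with the "simulator" group. -->
<suite name="SystemTests-Simulators">
  <test name="SimulatorTests">
    <groups>
      <run>
        <include name="simulator"/>
      </run>
    </groups>
    <packages>
      <package name="com.example.systemtest.*"/>
    </packages>
  </test>
</suite>
```

A method annotated with TestNG's @BeforeGroups("simulator") can then start and configure the simulators once before any test in that group runs, while category-1 tests stay in a "simple" group with a lighter setup.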
My goals:
Easy to maintain Jenkins flow
Efficient building, so preference to copying artifacts instead of building a second time
Possibility to split the system tests over multiple agents, preferably without me having to be concerned about what runs where (similar to Selenium Grid)
Correct dependencies (simulators etc) are started depending if the test needs them
We are looking into running the tests on VMs with OpenGL 3D acceleration due to a canvas used in the application. If tests are able to allocate, start, stop VMs on demand, that would be cool (but would only save some electricity)
Easy reporting where all test results are gathered from all agents. Note that I prefer the JUnit report format, which highlights which tests were @Ignored; the TestNG report format doesn't say anything about ignored tests.
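Several of these goals map to a declarative Jenkins Pipeline: build once, stash the artifacts, and fan the system tests out to labelled agents in parallel stages. The following is only a rough sketch; the stage names, agent labels, module name, and Maven flags are assumptions, not your actual setup:

```groovy
pipeline {
  agent none
  stages {
    stage('Build + unit/integration tests') {
      agent { label 'master' }
      steps {
        sh 'mvn -B verify'
        // Build once; reuse the artifacts on the test agents instead of rebuilding.
        stash name: 'app', includes: '**/target/*.jar'
      }
    }
    stage('System tests') {
      parallel {
        stage('Plain application tests') {
          agent { label 'testfx' }
          steps {
            unstash 'app'
            sh 'mvn -B test -pl system-test -Dgroups=simple'
            junit 'system-test/target/surefire-reports/*.xml'
          }
        }
        stage('Simulator tests') {
          agent { label 'testfx && simulators' }
          steps {
            unstash 'app'
            sh 'mvn -B test -pl system-test -Dgroups=simulator'
            junit 'system-test/target/surefire-reports/*.xml'
          }
        }
      }
    }
  }
}
```

The junit step in each stage collects the results from every agent into one report on the build, and the agent labels let Jenkins decide which machine runs which stage, similar to Selenium Grid's capability matching.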
I need to have (I have to, it's not my decision and I cannot alter it) test-sets in both jasmine and cucumber. So I've made two folders with specs and two conf.js files, one for each framework.
When I need to run both jasmine and cucumber tests, I need to run protractor with one config and then run it again with another config.
So the question is: is there any way to make protractor run jasmine and cucumber tests consecutively (e.g. first run all jasmine tests, then run all cucumber tests) in "one click"?
If you need any details about environment: I'm currently running tests in IDEA, and later there will be a Jenkins (or Hudson) job for it.
P.S.: I think the question won't matter as much once we move to Jenkins, because we can make two jobs run one after another. But even though we might not need it then, I'm still curious whether it's possible.
We use TeamCity with MsTest to manage and run an automation test suite for front-end testing of a WPF application.
Currently the test suite is running on one dedicated test agent (where TC is installed) and I'm now at a stage where I need to dramatically reduce the overall time it takes to run. I want to do this by adding another test agent to run the tests in parallel.
My test scenarios are large so I have them separated into approx 4 Specflow feature files that run in sequence. All these test scenarios are also categorised by their functional areas.
Firstly:
Is it possible to configure TeamCity so that one test agent manages sending the tests out to each test agent, and then collates all the results at the end?
Secondly:
Can the categorised tests that need to run in sequence be kept together?
I decided to use 2 separate project configurations in my TC setup. Each project is pointing at a different test agent using the Agent Requirements step. And I have simply divided up the test categories (that I have setup in my test scenarios anyhow) for each project (half and half).
Pro:
Simple solution and easy to maintain
Con:
Results for each build are separated in TC
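For reference, the half-and-half category split inside each build configuration can be expressed as a test filter on the runner's command line; the assembly and category names below are placeholders:

```shell
# Legacy mstest.exe runner: run only the categories assigned to this agent.
mstest /testcontainer:UITests.dll /category:"GroupA"

# vstest.console equivalent using a test case filter:
vstest.console UITests.dll /TestCaseFilter:"TestCategory=GroupA"
```

Since the scenarios within a feature file share a category, filtering this way also keeps the tests that must run in sequence together on one agent.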
I have all the tests for my web application (written with the Visual Studio test framework -- Microsoft.Quality DLLs) divided into several (currently two) ordered tests. Is there an easy way to find all the tests that are not in any list?
(The reason I need to use ordered tests is because the initial tests test that installation/setup/configuration of my application worked, and subsequent tests would fail without that.)
There's no easy way to do this. The best thing to do is switch to a framework that doesn't require every test to be on a list -- I recommend MbUnit. It has a great DependsOn attribute to easily configure dependencies between tests.