Is there a way to run feature files in parallel using cucumber-js v5.1.0? - cucumberjs

There is currently a --parallel option that can run scenarios in parallel using cucumberjs 5.1.0. I want to find out if there is a way to run feature files in parallel instead of scenarios, with this version of cucumberjs.

Looking at the code, it appears there is only the capability to run by scenario: https://github.com/cucumber/cucumber-js/blob/master/src/runtime/parallel/master.js#L105 . What is your reason for wanting to run by feature instead of by scenario?

A scenario is meant to be a completely independent test that doesn't rely on anything set up by the scenarios that ran before it, which is why the makers of cucumber have provided parallel functionality in the way that they have.
Hooks can be used to set up scenarios individually (by tags or by the scenario's name), or to run scripts that register users and other test data needed for the tests.
In summary, the way they provide the parallel testing functionality means you are forced to stick to best practices.

Related

Can we auto-tag scheduled builds?

We are setting up a scheduled nightly build that we'd like to be able to tag as such when it runs. Is this possible with TeamCity 10?
The reason for this is that if we don't tag it, it's tricky to find in the build list, since the only way to identify it is by looking at the run time.
For now we ended up cloning our configuration and only having the clone run every night. But I don't like this since it duplicates the configuration, which means we have to maintain two sets of build steps, agent requirements, etc.
Any ideas?
A couple of ideas:
Use a build template, which makes it easy to maintain two build configurations with slightly different requirements. I would recommend this as the most straightforward solution with very little overhead.
You can tag builds conditionally through the REST API. Write a script that checks the %teamcity.build.triggeredBy% parameter and, if it's a scheduled build, adds a tag using the REST API (see the sketch after this list).
I would think these are the simplest ways to achieve what you're asking.
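For the REST API route, here is a rough Python sketch of what that script might look like. The server URL, credentials, and tag name are placeholders, and it assumes the build step passes the values of %teamcity.build.id% and %teamcity.build.triggeredBy% in as arguments; check the exact wording of the triggeredBy value on your server before relying on the string match.

```python
# Sketch: tag the current build when it was triggered by the scheduler.
# Assumes the build step passes %teamcity.build.id% and %teamcity.build.triggeredBy%
# as arguments; server URL, credentials and tag name are placeholders.
import sys
import requests

TEAMCITY_URL = "https://teamcity.example.com"   # placeholder
USER, PASSWORD = "tagger", "secret"             # placeholder credentials

def main():
    build_id, triggered_by = sys.argv[1], sys.argv[2]
    if "schedule" not in triggered_by.lower():   # verify the exact triggeredBy text on your server
        return  # not the nightly build, nothing to do
    # Adding a tag is a plain-text POST to the build's tags collection.
    resp = requests.post(
        f"{TEAMCITY_URL}/httpAuth/app/rest/builds/id:{build_id}/tags/",
        data="nightly",
        headers={"Content-Type": "text/plain"},
        auth=(USER, PASSWORD),
    )
    resp.raise_for_status()

if __name__ == "__main__":
    main()
```

You'd call this from a command-line step at the end of the build configuration, so the nightly run tags itself.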

How to automatically test configuration managing scripts?

I am using tools like Puppet/Chef/Ansible to set up and configure development environments and production servers.
Whenever I update the configuration, I run the tool against my development environment and log in to check manually whether things work as expected.
But this is tedious to do, and I can't test everything every time, so is there any way I can automate the testing?
There are Infrastructure Testing Frameworks for this:
ServerSpec / InSpec - Ruby-based. Well known, big community, nice looking, and best in its class.
BATS - Bash Automated Testing System, which is a bit easier.
TestInfra - Python-based infra testing framework. Still fairly young, with a small community (see the sketch after this list).
Goss - Fast (written in Go), small tool for validating server/infra configuration. Test scenarios are written in YAML.
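To give a feel for what these tests look like, here is a small hypothetical TestInfra example; the package, service, and port are placeholders for whatever your configuration is supposed to manage.

```python
# test_webserver.py -- run with: py.test --hosts=ssh://test-node test_webserver.py
# The package/service names are examples; adjust to whatever your playbook manages.

def test_nginx_is_installed(host):
    assert host.package("nginx").is_installed

def test_nginx_is_running_and_enabled(host):
    svc = host.service("nginx")
    assert svc.is_running
    assert svc.is_enabled

def test_port_80_is_listening(host):
    assert host.socket("tcp://0.0.0.0:80").is_listening
```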
Automation:
There is the interesting Molecule project - automation for testing Ansible roles, designed by Cisco. I haven't tried it yet.
A step further would be using TestKitchen, which handles the automation to spin up Vagrant, Docker, or even AWS instances and test Puppet/Chef/Ansible with RSpec/BATS against the freshly spun-up machines.
So what you need to do is pick a framework, write tests, and run your playbooks/recipes and tests against mock VMs.
Ideally, keep your "infra as code" in version control and configure CI like TravisCI to run your tests for every PR that brings new changes into your repository.
You can even follow TDD here: write tests first, make them fail, then write the actual implementation in your favorite configuration management tool and see whether that change makes the tests pass.
MOAR Infrastructure Testing & Automation!
If you can let us know what you want to test, we can help better.
But did you check dry-run mode? I think Puppet and Ansible both support it; you can have a cron job or some automated script that runs all the Puppet/Ansible modules against a single (test) node.
More info:
1. http://docs.ansible.com/ansible/playbooks_checkmode.html
2. Check the noop mode in https://docs.puppet.com/puppet/latest/reference/man/agent.html
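As a rough illustration of automating that, here is a sketch of a wrapper you could run from cron against a single test node; the playbook, inventory, and host names are placeholders, and Puppet users would run puppet agent --test --noop on the node instead.

```python
#!/usr/bin/env python3
# Sketch: run Ansible's check mode against a single test node from cron.
# Playbook, inventory and node names are placeholders; Puppet's equivalent
# would be `puppet agent --test --noop` run on the node itself.
import subprocess
import sys

result = subprocess.run(
    ["ansible-playbook", "site.yml",
     "-i", "inventory/test",
     "--limit", "test-node",
     "--check", "--diff"],
    capture_output=True,
    text=True,
)

if result.returncode != 0:
    # Printing only on failure means cron's MAILTO will email you only
    # when the dry run reports a problem.
    print(result.stdout)
    print(result.stderr, file=sys.stderr)
    sys.exit(result.returncode)
```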

Is it possible to use Chutzpah with Jenkins?

I have no experience with Jenkins; I'm currently researching different options for PHP & JS automated unit testing with Jenkins.
I've come across Chutzpah (which uses PhantomJS's headless WebKit browser) but:
Is it possible to use Chutzpah with Jenkins?
There's very little documentation on Chutzpah, although it does state on the Chutzpah homepage that it can be integrated into the TeamCity continuous integration server.
What are the minimum requirements for something to be compatible with Jenkins?
It is possible to use Chutzpah with Jenkins, and with the 2.1 release of Chutzpah it is easier. Chutzpah's command line client can now take a /junit argument that lets you specify a file name to output a JUnit-XML-compatible file to. You can use Jenkins to pick this file up and report the test results.
I am not the downvoter, but I agree it is difficult to give a good answer to this question.
I believe the minimum requirement for something to be compatible with Jenkins is: It can be executed from a shell or cmd script. (If it's not, you need to find or write a plugin.)
Additionally, the thing should exit with code 0 for success and anything else for failure. (If it doesn't, you need to find or write a plugin.)
If you are interested in having Jenkins publish test results, the results must be in XML files using JUnit-compatible notation. (If they are not, you need to find or write a plugin.)
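To make those requirements concrete, here is a hypothetical wrapper (Python, but any language works) that runs a test command, writes a minimal JUnit-style XML report for Jenkins to publish, and propagates the exit code; the command and file names are placeholders.

```python
#!/usr/bin/env python3
# Sketch: make an arbitrary command "Jenkins-compatible" -- runnable from a
# shell step, exit code 0 on success, and a JUnit-style XML file that the
# "Publish JUnit test result report" step can pick up. Command and paths are placeholders.
import subprocess
import sys
from xml.sax.saxutils import escape

cmd = ["phantomjs", "run-tests.js"]          # placeholder test command
result = subprocess.run(cmd, capture_output=True, text=True)
failed = result.returncode != 0

with open("results.xml", "w") as f:
    f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
    f.write(f'<testsuite name="js-tests" tests="1" failures="{int(failed)}">\n')
    f.write('  <testcase classname="js" name="suite">\n')
    if failed:
        f.write(f'    <failure message="non-zero exit">{escape(result.stdout + result.stderr)}</failure>\n')
    f.write('  </testcase>\n</testsuite>\n')

sys.exit(result.returncode)   # non-zero tells Jenkins the step failed
```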
Additional requirements might be imposed by the tool you want to execute: It might need to draw windows or access the mouse or other parts of a graphical UI desktop/session. If that's the case, you need to run Jenkins in a context/session where it has access to those. (Windows, Mac and Linux all restrict background daemon/service access to the GUI desktop.)
Also, if your tool needs to access resources which are accessible only by a certain user, you need to run Jenkins as that user.
This is a very open-ended question. Please try it out and come back with more concrete questions.

How can I run Ruby tests automatically?

I have a system that's highly reliant on various web APIs. I would like to run my API-specific tests at least once per day to make sure all the APIs are still alive and playing nicely. I have a set of unit tests (just plain .rb files that test API calls for expected data) and would like to run these every 24 hours. If something breaks, I would like to take an action (e.g. email or SMS me).
What is the best way to set up automated Ruby tests and parse the results? Can I just set up a cron job to handle the .rb files? How would I detect programmatically that the tests are failing and take an action? Maybe there is some kind of continuous integration solution for Ruby that can handle this?
I've just gone through the process of setting up Hudson CI as my integration server, using this amazing tutorial from Dr. Nic. It installs through a gem, coming pretty much preconfigured, and was extremely simple to get working.
I'm using rspec and cucumber, and Hudson runs all tests when it sees a new commit on my git repository. If all tests pass, it merges the code into my master branch. If any test fails, it holds its horses and sends me an email.
EDIT:
I also want to give ten thumbs up to the ChuckNorris plugin for Hudson. Agile doesn't get better than pair programming with Walker, Texas Ranger.
Ruby has Test::Unit built-in, RSpec, ZenTest, shoulda, cucumber and probably many more tools to help test. Being built-in, Test::Unit is used a lot and is the target to be beaten by the other tools.
ZenTest and RSpec can do continuous testing: You make a change and save a file and they'll see it and run the test suite. I like that because then I know the state of things right away.
I haven't used cucumber, but I have used the rest. I heard cucumber's emphasis is on integration testing, but that might have been the commenter's impression rather than the developers' design intent. The list of tutorials for cucumber is interesting browsing. In particular there's webrat: Automated Acceptance Testing with RSpec or Cucumber.
Any of these could be wired up with cron to run periodically; just treat them as you would any other set of command-line apps.
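For example, a minimal cron wrapper might look like the following sketch; the test command, SMTP host, and addresses are all placeholders, and you'd point them at your own API test files and mail setup.

```python
#!/usr/bin/env python3
# Sketch: nightly cron wrapper around a Ruby test run. The test command,
# SMTP host and addresses are placeholders.
import subprocess
import smtplib
from email.message import EmailMessage

result = subprocess.run(
    ["ruby", "-Itest", "test/api_test.rb"],   # or e.g. ["rspec", "spec/api"]
    capture_output=True,
    text=True,
)

if result.returncode != 0:
    msg = EmailMessage()
    msg["Subject"] = "Nightly API tests FAILED"
    msg["From"] = "ci@example.com"
    msg["To"] = "you@example.com"
    msg.set_content(result.stdout + "\n" + result.stderr)
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)
```

A crontab entry such as 0 6 * * * /usr/local/bin/nightly_api_tests.py would then run it every morning.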
It should be easy to tie in web testing too, but you'll have to identify the gems/modules needed and write the glue code. I haven't had need for such a beast, as I'd go at it using Mechanize and/or one of the other HTTP gems plus Nokogiri to ransack the pages.

What is the best way to setup an integration testing server?

Setting up an integration server, I'm unsure about the best approach regarding the use of multiple tasks to complete the build. Is it best to put everything in just one big job or to make small dependent ones?
You definitely want to break up the tasks. Here is a nice example of CruiseControl.NET configuration that has different targets (tasks) for each step. It also uses a common.build file which can be shared among projects with little customization.
http://code.google.com/p/dot-net-reference-app/source/browse/#svn/trunk
I use TeamCity with an NAnt build script. TeamCity makes it easy to set up the CI server part, and the NAnt build script makes it easy to do a number of tasks as far as report generation is concerned.
Here is an article I wrote about using CI with CruiseControl.NET, it has a nant build script in the comments that can be re-used across projects:
Continuous Integration with CruiseControl
The approach I favour is the following setup (assuming you are in a .NET project):
CruiseControl.NET.
NANT tasks for each individual step. Nant.Contrib for alternative CC templates.
NUnit to run unit tests.
NCover to perform code coverage.
FXCop for static analysis reports.
Subversion for source control.
CCTray or similar on all dev boxes to get notification of builds and failures etc.
On many projects you find that there are different levels of tests and activities which take place when someone does a check-in. Sometimes these can grow in duration to the point where it takes a long time after a check-in before a dev can see whether they have broken the build.
What I do in these cases is create three builds (or maybe two):
A CI build is triggered by checkin and does a clean SVN Get, Build and runs lightweight tests. Ideally you can keep this down to minutes or less.
A more comprehensive build, which could run hourly (if there are changes), that does the same as the CI build but runs more comprehensive and time-consuming tests.
An overnight build which does everything and also runs code coverage and static analysis of the assemblies and runs any deployment steps to build daily MSI packages etc.
The key thing about any CI system is that it needs to be organic and constantly tweaked. There are some great extensions to CruiseControl.NET which log and chart build timings etc. for the steps and let you do historical analysis, allowing you to continuously tweak the builds to keep them snappy. It's something managers find hard to accept, but a build box will probably keep you busy for a fifth of your working time just to stop it grinding to a halt.
We use buildbot, with the build broken down into discrete steps. There is a balance to be found between having build steps be broken down with enough granularity and being a complete unit.
For example at my current position, we build the sub-pieces for each of our platforms (Mac, Linux, Windows) on their respective platforms. We then have a single step (with a few sub steps) that compiles them into the final version that will end up in the final distributions.
If something goes wrong in any of those steps it is pretty easy to diagnose.
My advice is to write the steps out on a whiteboard in as vague terms as you can and then base your steps on that. In my case that would be:
Build Plugin Pieces
Compile for Mac
Compile for PC
Compile for Linux
Make final Plugins
Run Plugin tests
Build intermediate IDE (We have to bootstrap building)
Build final IDE
Run IDE tests
I would definitely break down the jobs. Chances are you're likely to make changes in the builds, and it'll be easier to track down issues if you have smaller tasks instead of searching through one monolithic build.
You should be able to create one big job from the smaller pieces, anyways.
G'day,
As you're talking about integration testing, my big (obvious) tip would be to make the test server built and configured as close to the deployment environment as possible.
</thebloodyobvious> (-:
cheers,
Rob
Break your tasks up into discrete goal/operations, then use a higher-level script to tie them all together appropriately.
This makes your build process easier to understand for other people (you're documenting as you go so anyone on your team can pick it up, right?), as well as increasing the potential for re-use. It's likely you won't reuse the high-level scripts (although this could be possible if you have similar projects), but you can definitely reuse (even if it's copy/paste) the discrete operations rather easily.
Consider the example of getting the latest source from your repository. You'll want to group the tasks/operations for retrieving the code with some logging statements and reference the appropriate account information. This is the sort of thing that's very easy to reuse from one project to the next.
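As a rough, tool-agnostic sketch of that idea (shown in Python rather than NAnt, with placeholder repository URLs and commands), each discrete operation carries its own logging and the higher-level script just strings the operations together:

```python
# build.py -- discrete, reusable operations tied together by one top-level script.
import logging
import subprocess

log = logging.getLogger("build")

def get_latest_source(repo_url: str, workdir: str) -> None:
    """Discrete operation: fetch a clean copy of the source (reusable as-is across projects)."""
    log.info("Checking out %s into %s", repo_url, workdir)
    subprocess.run(["svn", "checkout", repo_url, workdir], check=True)

def run_unit_tests(workdir: str) -> None:
    """Discrete operation: run the test target and fail the build on a non-zero exit."""
    log.info("Running unit tests in %s", workdir)
    subprocess.run(["nant", "test"], cwd=workdir, check=True)

if __name__ == "__main__":
    # The higher-level script just calls the discrete operations in order.
    logging.basicConfig(level=logging.INFO)
    get_latest_source("https://svn.example.com/project/trunk", "work")  # placeholder URL
    run_unit_tests("work")
```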
For my team's environment, we use NAnt since it provides a common scripting environment between dev machines (where we write/debug the scripts) and the CI server (since we just execute the same scripts in a clean environment). We use Jenkins to manage our builds, but at their core each project is just calling into the same NAnt scripts, and then we manipulate the results (i.e., archive the build output, flag failing tests, etc.).
