How to report the cucumber tests after a rerun? - ruby

I have a suite of tests that are launched in continuous integration using parallel_tests. After a deploy the environment is a little unstable, so when the suite is finished, the failed tests are relaunched (by that moment the environment is stable again).
Basically, there is only one Jenkins build, and it will execute this:
task :jenkins1_with_rerun do
  selenium_successful = system "bundle exec parallel_cucumber features/web/ -o \"-p jenkins1\" -n 3"
  p 'start rerun'
  rerun_successful = run_rake_task("features:jenkins_rerun")
  unless selenium_successful || rerun_successful
    raise 'Cucumber tests failed'
  end
end
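(`run_rake_task` is not a Rake built-in, and the post doesn't show its definition. A plausible minimal version, an assumption rather than the author's actual helper, would invoke the task and turn an exception into a boolean:)

```ruby
require "rake"

# Hypothetical helper (the question doesn't show its definition):
# invoke a Rake task and return true/false instead of letting an
# exception abort the wrapper task.
def run_rake_task(name)
  Rake::Task[name].invoke
  true
rescue StandardError => e
  puts "Task #{name} failed: #{e.message}"
  false
end
```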
The first execution launches the suite with parallel tests. Here is the cucumber profile:
jenkins1:
  <%= std_opts %>
  HEADLESS=true
  --tags @part1
  --tags ~@wip
  --format junit
  --out log/junit
  --format ParallelTests::Cucumber::FailuresLogger
  --out parallel_cucumber_failures.log
After this is finished, the second execution starts, launching the failed tests recorded in a file. The cucumber profile for this execution is:
jenkins_rerun:
  ENVIRONMENT=master
  SCREENSHOT=true
  HEADLESS=true
  --format pretty
  --no-source
  --format junit
  --out log/junit_rerun
  @parallel_cucumber_failures.log
Now, everything works. The only problem is that I use the JUnit report in Jenkins to make graphs of failed and successful tests. I may have reports like this:
Execution         Tests   KO   OK
First execution      75   10   65
Rerun                10    0   10
This is 100% green, because all the problems were caused by instability after the deploy. So I want the JUnit report in Jenkins to say that 75 tests were launched, 75 OK, 0 KO, or something equivalent.
Right now, the JUnit report of the first execution says that of 75 tests, I have 10 KOs; and the second JUnit report says that of 10 tests, there are 0 KOs.
What would be a good solution to this? Is it possible to merge the results of both JUnit reports?
I would also accept displaying both JUnit reports in Jenkins, each with its own graph, but I think Jenkins only allows showing one JUnit report graph.

I finally resolved this problem.
The solution was to create a "parser" that searches (with regex) through the JUnit files generated by the rerun, then goes to the JUnit files of the original/first run and rewrites the results of those tests that passed during the rerun.
In the end, if everything is OK in the rerun, the JUnit graph is 100% green and there are no failure alerts.
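A minimal sketch of such a parser in Ruby (an illustration, not the author's actual code; it assumes standard JUnit XML and matches testcases by name only, whereas real cucumber output may also need classname matching):

```ruby
require "rexml/document"

# Collect the names of tests that passed in the rerun (no <failure> child),
# then rewrite the original JUnit XML so those tests no longer count as failed.
def passed_in_rerun(rerun_xml)
  REXML::Document.new(rerun_xml)
       .get_elements("//testcase")
       .select { |tc| tc.get_elements("failure").empty? }
       .map { |tc| tc.attributes["name"] }
end

def rewrite_original(original_xml, passed_names)
  doc = REXML::Document.new(original_xml)
  doc.get_elements("//testcase").each do |tc|
    # The rerun passed, so drop the recorded failure from the first run
    tc.delete_element("failure") if passed_names.include?(tc.attributes["name"])
  end
  # Keep the testsuite's failures="..." counter consistent with its testcases
  doc.get_elements("//testsuite").each do |suite|
    suite.attributes["failures"] = suite.get_elements("testcase/failure").size.to_s
  end
  doc.to_s
end
```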

Related

Run Surefire tests in Maven with HTML output & a nonzero exit code on failure

I'm working on a Gitlab CI/CD pipeline for various Dockerized Java apps, using Maven to build them. The first step of the pipeline builds the app and pushes it to our registry. The second step is supposed to run unit tests and ensure they all pass before allowing the pipeline to continue to the deployment stage.
The problem I'm having is finding a way to invoke Maven which does what I want. What I need the test step to do is:
1. Execute the tests once with Surefire
2. Print to STDOUT or STDERR the test results (so they are visible in the job log)
3. Output XML test reports which will be uploaded to Gitlab as job artifacts (which will be parsed by Gitlab to populate the test results in the web GUI)
4. Output an HTML test report which will be uploaded to Gitlab as a job artifact (to give a downloadable, human-readable, all-in-one summary of the test results)
5. Exit with a nonzero return code if and only if any tests failed (so that the pipeline job will be marked as failed in the Gitlab pipeline)
1-3 are handled automatically by Surefire. The problem is 4 and 5: I'm using the surefire-report plugin to generate the HTML report, but I can't find a way to invoke it which does what I want:
If I invoke mvn 'test' 'surefire-report:report', then, if any tests fail, Maven exits with a nonzero exit code but the HTML report is not generated.
I think this is because, in the case of a failed test, Maven considers the test goal to have failed and exits without running the surefire-report goal at all.
If there are no failed tests, then the surefire-report goal does run and the report is generated, but a report that's only generated when everything passes is of limited use.
If I invoke mvn 'surefire-report:report' or mvn 'surefire-report:report' 'test', then, if any tests fail, an HTML report is still generated, but the return code is 0.
I think this is because Maven considers the surefire-report goal to be successful as long as it successfully generates a report, and then considers the test goal redundant because it already ran the tests in order to generate the report.
My current workaround is mvn 'surefire-report:report' && mvn 'test', which works, but runs the tests twice (or at least prints their results to STDOUT twice), and requires the Docker container in which the tests run to use a shell script instead of a single Maven command.
Someone suggested mvn 'test' ; result=$? ; mvn 'surefire-report:report-only' && exit ${result}, which seems to work, but I'd much rather have a single command than a shell script.

Can we ignore scenarios in calabash Ruby feature file

I was trying to ignore scenarios in my Calabash feature file (Ruby) using @ignore, but I can still run the ignored scenarios. I have the latest Calabash version with Ruby 2.0. Please suggest other ways to ignore scenarios in a Calabash Ruby feature file.
The easiest way is to use tags.
For example, if you have 5 scenarios and you want to run only 3 of them, assign a tag to those three scenarios and run the script with that tag.
example:
@run
Scenario: scenario 1
  steps

Scenario: scenario 2
  steps

Scenario: scenario 3
  steps

@run
Scenario: scenario 4
  steps

@run
Scenario: scenario 5
  steps
command: calabash-android run application.apk --tags @run
This will run scenarios 1, 4 and 5.
Cucumber doesn't automatically ignore tests that are tagged with @ignore; it is the same as any other tag.
When you run the tests, add this (note the tilde ~):
--tags ~@ignore
and cucumber will ignore those tests, e.g.:
rake cucumber --tags ~@ignore
You can also use tags the same way to control tests you want to only run on certain environments etc.

Running multiple Cucumber features in Jenkins

I've been working with Cucumber and Watir for the past few months locally to run some regression tests on our application. I have several Cucumber feature files in one repository with one step file and one environment file. Recently I have moved from running this locally to running it on Jenkins along with the cucumber-jvm reports plugin.
I set the build steps up with the following:
cucumber --format json all/features/publish.feature > result_publish.json
cucumber --format json all/features/signin.feature > result_signin.json
cucumber --format json all/features/reports.feature > result_reports.json
When there are no failures with the tests all feature files run successfully one after the other. However, if there is a failure with the first test the build will fail and the subsequent tests will not run.
Is there any way I can force all feature files to run even if one of them fails?
Put the features in the same folder and run
cucumber --format json all/features/integration -o results.json
This will create one single report with all the tests and will run all the features regardless of whether they fail or not.
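If you want to keep one JSON report per feature, another option is to drive cucumber from a small Ruby script that runs every feature regardless of earlier failures and only fails at the end. A sketch (the file layout and report names are assumptions):

```ruby
# Sketch: run each feature file separately, collect the failures, and
# only raise after everything has run, so Jenkins still marks the build
# as failed when any feature failed. Paths are assumptions.
def run_all_features(features, runner)
  features.reject { |feature| runner.call(feature) }  # keep the failed ones
end

if $PROGRAM_NAME == __FILE__
  features = Dir.glob("all/features/*.feature").sort
  failed = run_all_features(features, lambda do |feature|
    name = File.basename(feature, ".feature")
    system("cucumber", "--format", "json", "--out", "result_#{name}.json", feature)
  end)
  raise "Failed features: #{failed.join(', ')}" unless failed.empty?
end
```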

Determining the Gradle test execution order

I was wondering whether it's possible to obtain information about test execution order somehow.
I have a project in Maven and all the tests are passing. After I migrated the project to Gradle, one of the tests started failing. The test itself is working: when I execute gradle test -Dtest.single=..., it passes. However, when I run the tests for the whole project, the test fails.
It's possible that some tests that run before the failing test do not release resources correctly and therefore the test fails. But I need to somehow find out which tests are causing this problem.
Tell Gradle to log more events about test processing. There is documentation on how to do that: http://www.gradle.org/docs/current/dsl/org.gradle.api.tasks.testing.logging.TestLoggingContainer.html
The beforeTest method can be used to output details about each test before it is run, based on a TestDescriptor for each test:
test {
  beforeTest { testDescriptor ->
    println "${testDescriptor.className} > ${testDescriptor.name} STARTED"
  }
}

Fitnesse in Cruisecontrol.net exec: do not fail the build when tests fail

I'm trying to integrate fitnesse with our cruisecontrol setup.
I'd like to have a set of acceptance tests that we develop before the features for the release are worked on. We develop our acceptance tests and run them against our nightly build (not on every check-in; we have a job for that, but I suspect our acceptance tests would slow it down too much).
So I want to be able to run the fitnesse test suite and not have it fail the build when some tests fail (the expectation is that some of the tests will fail, until we have finished the release).
I have the acceptance tests building on the integration server and the FitNesse suite running from the command line (as an exec task in the integration job).
At the moment it is failing the build (runner.exe has a non-zero exit code when any test fails).
So... does anyone have a list of exit codes for fitsharp runner.exe? Is there any way to tell a cruisecontrol exec task that I really don't care about the return value from the job? Is there another cc.net task I should use instead?
Edit:
Current best idea is to wrap the fitsharp runner in a batch file or PowerShell script and swallow the return code from FitNesse.
Edit2:
The return code from the fitsharp runner.exe is the number of tests that failed (which makes it difficult to set the success-exit-codes element for the cruisecontrol.net exec task).
I think the best way to do it is with NAnt, which integrates very nicely with CCNet. You can simply tell a NAnt exec task not to fail the build; see http://nant.sourceforge.net/release/latest/help/tasks/exec.html
When you're close to release, simply set the failonerror attribute to true.
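For reference, such a NAnt exec task might look like the fragment below (the program path and arguments are placeholders for your fitsharp runner invocation; failonerror="false" is the relevant part):

```xml
<!-- Sketch: program and commandline are placeholders -->
<exec program="runner.exe" commandline="..." failonerror="false" />
```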
