Turns out this question is a duplicate of "Command line to run all examples in RSpec, including ones that are filtered out?"
(Good catch, @anothermh. My Google skills failed today.)
How do I run all RSpec tests -- even those excluded by default?
Several of my tests are tagged with :slow, and I have these tests excluded by default:
config.filter_run_excluding slow: true
Running rspec executes all the tests except the "slow" ones (as desired).
Running rspec --tag slow executes the "slow" tests only.
What command do I use to run all the tests, including the slow ones (without having to use two separate commands)?
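One common pattern (an assumption on my part; the linked duplicate may suggest something different) is to guard the exclusion filter with an environment variable, so a single command can switch it off. The variable name RUN_SLOW is illustrative:
# spec_helper.rb -- exclude :slow examples unless RUN_SLOW is set
RSpec.configure do |config|
  config.filter_run_excluding slow: true unless ENV['RUN_SLOW']
end
With that in place, rspec runs everything except the slow tests, and RUN_SLOW=1 rspec runs all of them in one command.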
Related
I am trying to make a CircleCI pipeline that runs a bunch of tests in parallel, but there are a few functional tests in the same files that will crash the pipeline if they are run in parallel.
Unfortunately, moving these tests to another directory is not an option.
Instead, I am planning on avoiding this scenario by marking them with a pytest mark and then telling pytest to ignore marked tests, then running them later in another step.
TESTFILES=$(circleci tests glob tests/**/test*.py | circleci tests split --split-by=timings)
pytest -m "not functional" $TESTFILES
Is it OK to tell pytest to skip these tests when using a timing-based split?
I figure that since they don't take any execution time when they are deselected, CircleCI will learn to ignore them in its timing data. I am not sure if this will cause split-balancing issues.
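For what it's worth, the later step could be as simple as the following (this invocation is my assumption, not from the original config; it just inverts the mark expression and runs the marked tests serially):
# separate, non-parallel CircleCI step: run only the tests marked functional
pytest -m functional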
I have multiple tests with the same name but under different packages:
com.private.db.dao.pf.ent.DAOBaseFuncTest
com.private.db.dao.internal.pf.ent.DAOBaseFuncTest
com.private.db.dao.core.ent.DAOBaseFuncTest
I wanted to run com.private.db.dao.pf.ent.DAOBaseFuncTest. Usually I run a test with this command:
mvn -Prun-tests -pl test/func -Dit.test=DAOBaseFuncTest verify
But I was surprised to see 3 test suites running one after another. Is it possible to run a particular test suite instead of running all three?
Never mind. I tried giving the fully qualified name of the specific test suite and it worked:
mvn -Prun-tests -pl test/func -Dit.test=com.private.db.dao.pf.ent.DAOBaseFuncTest verify
I have a suite of tests that are launched in continuous integration using parallel_tests. After the deploys the environment is a little unstable, so when the suite is finished, the failed tests are relaunched (by that point the environment is stable again).
Basically, there is only one Jenkins build, and it will execute this:
task :jenkins1_with_rerun do
  selenium_successful = system "bundle exec parallel_cucumber features/web/ -o \"-p jenkins1\" -n 3"
  p 'start rerun'
  rerun_successful = run_rake_task("features:jenkins_rerun")
  unless selenium_successful || rerun_successful
    raise 'Cucumber tests failed'
  end
end
The first execution launches the suite with parallel tests. Here is the cucumber profile:
jenkins1:
<%= std_opts %>
HEADLESS=true
--tags @part1
--tags ~@wip
--format junit
--out log/junit
--format ParallelTests::Cucumber::FailuresLogger
--out parallel_cucumber_failures.log
After this is finished, the second execution starts, launching the failed tests recorded in a file. The cucumber profile for this execution is this:
jenkins_rerun:
ENVIRONMENT=master
SCREENSHOT=true
HEADLESS=true
--format pretty
--no-source
--format junit
--out log/junit_rerun
@parallel_cucumber_failures.log
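For context, the features:jenkins_rerun task itself is not shown in the question; presumably it just invokes cucumber with this profile, so the @parallel_cucumber_failures.log entry feeds the failed scenarios back in. A hypothetical sketch:
namespace :features do
  task :jenkins_rerun do
    # -p jenkins_rerun picks up @parallel_cucumber_failures.log via the profile above
    system "bundle exec cucumber -p jenkins_rerun"
  end
end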
Now, everything works. The only problem is that I use the JUnit report in Jenkins to make graphs of failed and successful tests. I may have reports like this:
                  Tests   KO   OK
First execution      75   10   65
Rerun                10    0   10
This is 100% green, because all the problems were caused by instability after the deploy. So I want the JUnit report in Jenkins to say that 75 tests have been launched: 75 OK, 0 KO, or something equivalent.
Right now, the JUnit report of the first execution says that of 75 tests, I have 10 KOs; and the second JUnit report says that of 10 tests, there are 0 KOs.
What would be a good solution to this? Is it possible to merge the results of both JUnit reports?
I would also accept displaying both JUnit reports in Jenkins, each with its own graph, but I think Jenkins only allows one JUnit report graph to be shown.
I finally resolved this problem.
The solution was to create a "parser" that searches (with a regex) through the JUnit files generated by the rerun, and then goes to the JUnit files of the original/first run and rewrites the results of those tests that passed during the rerun.
In the end, if everything is OK in the rerun, the JUnit graph is 100% green and there are no failure alerts.
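A minimal sketch of that idea, assuming the original reports live in log/junit and the rerun reports in log/junit_rerun (matching the profiles above), and matching test cases by their name attribute only; it uses Ruby's bundled REXML rather than whatever the real parser used:
require 'rexml/document'

# Collect the names of test cases that passed during the rerun.
passed_in_rerun = []
Dir.glob('log/junit_rerun/*.xml') do |path|
  doc = REXML::Document.new(File.read(path))
  doc.elements.each('//testcase') do |tc|
    # A testcase with no <failure> or <error> child passed.
    if tc.elements['failure'].nil? && tc.elements['error'].nil?
      passed_in_rerun << tc.attributes['name']
    end
  end
end

# Rewrite the original reports: drop failure/error nodes for tests that
# later passed, so Jenkins counts them as green.
Dir.glob('log/junit/*.xml') do |path|
  doc = REXML::Document.new(File.read(path))
  changed = false
  doc.elements.each('//testcase') do |tc|
    next unless passed_in_rerun.include?(tc.attributes['name'])
    %w[failure error].each do |tag|
      while (node = tc.elements[tag])
        tc.delete_element(node)
        changed = true
      end
    end
  end
  next unless changed
  # Recompute the failure/error counters on each testsuite element.
  doc.elements.each('//testsuite') do |suite|
    failures = errors = 0
    suite.elements.each('testcase') do |tc|
      failures += 1 if tc.elements['failure']
      errors   += 1 if tc.elements['error']
    end
    suite.attributes['failures'] = failures.to_s
    suite.attributes['errors']   = errors.to_s
  end
  File.open(path, 'w') { |f| doc.write(f) }
end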
I've been working with Cucumber and Watir for the past few months locally to run some regression tests on our application. I have several Cucumber feature files in one repository with one step file and one environment file. Recently I have moved from running this locally to running it on Jenkins along with the cucumber-jvm reports plugin.
I set the build steps up with the following:
cucumber --format json all/features/publish.feature > result_publish.json
cucumber --format json all/features/signin.feature > result_signin.json
cucumber --format json all/features/reports.feature > result_reports.json
When there are no failures, all feature files run successfully one after the other. However, if there is a failure in the first test, the build will fail and the subsequent tests will not run.
Is there any way I can force all feature files to run even if one of them fails?
Put the features in the same folder and run
cucumber --format json all/features/integration -o results.json
This will create one single report with all the tests, and it will run all the features regardless of whether they fail or not.
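If keeping the three separate report files matters, an alternative (my suggestion, not part of the answer above) is to stop each shell build step from failing on a non-zero exit code, for example:
cucumber --format json all/features/publish.feature > result_publish.json || true
cucumber --format json all/features/signin.feature > result_signin.json || true
cucumber --format json all/features/reports.feature > result_reports.json || true
The trade-off is that the build step then always "succeeds", so failures have to be surfaced by the report plugin rather than by the step's exit code.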
I'm trying to integrate FitNesse with our CruiseControl setup.
I'd like to have a set of acceptance tests that we develop before the features for the release are worked on: develop the acceptance tests, then run them against our nightly build (not on every check-in; we have a job for that, but I suspect our acceptance tests would slow it down too much).
So I want to be able to run the fitnesse test suite and not have it fail the build when some tests fail (the expectation is that some of the tests will fail, until we have finished the release).
I have the acceptance tests building on the integration server and the FitNesse suite running from the command line (as an exec task in the integration job).
At the moment it is failing the build (runner.exe has a non-zero exit code when any test fails).
So... does anyone have a list of exit codes for the fitSharp runner.exe? Is there any way to tell a CruiseControl exec task that I really don't care about the return value from the job? Is there another CC.NET task I should use instead?
Edit:
Current best idea is to wrap the fitSharp runner in a batch file or PowerShell script and swallow the return code from FitNesse (sketched below).
Edit2:
The return code from the fitSharp runner.exe is the number of tests that failed (which makes setting the success return codes element for the CruiseControl.NET exec task difficult).
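A minimal sketch of that wrapper idea (file name and invocation are illustrative):
rem run_fitnesse.bat -- run the fitSharp runner but always report success,
rem so the CruiseControl.NET exec task never fails the build on test failures
runner.exe %*
exit /b 0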
I think the best way to do it is with NAnt, which integrates very nicely with CCNet. You can simply tell a NAnt exec task not to fail the build; see http://nant.sourceforge.net/release/latest/help/tasks/exec.html
When you're close to release, simply set the failonerror attribute to true.
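A minimal sketch of that NAnt task (the program path is illustrative, and ${fitsharp.args} is a hypothetical property holding the runner's arguments):
<!-- run the acceptance suite without failing the build while some tests are still expected to fail -->
<exec program="runner.exe" commandline="${fitsharp.args}" failonerror="false" />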