Can I run multiple Test Cases from multiple scripts but have a single output that either says "100% Pass" or "X Failed" and lists out the failed tests?
For example I want to see something like:
>runtests.rb all #runs all the scripts in the directory
Finished in 4.523 Seconds
100% Pass
>runtests.rb category #runs all the scripts in a specified sub-directory
Finished in 2.1 Seconds
2 Failed:
test_my_test
test_my_test_2
1 Error:
test_my_test_3
I use the built-in MiniTest::Unit along with the autotest command that is part of ZenTest and get output like:
autotest
/Users/tinman/.rvm/rubies/ruby-1.9.2-p290/bin/ruby -I.:lib:test -rubygems -e "%w[test/unit tests/test_domains.rb tests/test_regex.rb tests/test_vlan.rb tests/test_nexus.rb tests/test_switch.rb tests/test_template.rb].each { |f| require f }"
Loaded suite -e
Started
........................................
Finished in 0.143375 seconds.
40 tests, 276 assertions, 0 failures, 0 errors, 0 skips
Test run options: --seed 62474
Is that similar to what you are talking about?
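If you want a single runtests.rb entry point instead of autotest, here is a minimal sketch of what that could look like (the tests/ directory name and the test_*.rb naming convention are assumptions on my part, not taken from your setup):
#!/usr/bin/env ruby
# runtests.rb -- require every test file under the given directory so that
# MiniTest prints one combined summary for the whole run.
#
#   ruby runtests.rb tests           # all scripts
#   ruby runtests.rb tests/category  # only one sub-directory
dir = ARGV.shift || 'tests'
require 'minitest/autorun'
Dir.glob(File.join(dir, '**', 'test_*.rb')).sort.each do |file|
  require File.expand_path(file)
end
The output is still MiniTest's standard "X tests, Y assertions, Z failures" summary rather than the literal "100% Pass" wording, but it is a single result for everything under the directory.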
I have a set of RSpec groups and examples, with a few of them marked pending using the pending: 'Some notes about why' approach.
My suite runs, and successfully reports:
Finished in 35.63 seconds (files took 6.51 seconds to load)
65 examples, 0 failures, 26 pending
Unfortunately, it still returns a non-zero exit status, causing CI to treat this as a failure:
petejohanson@xo-mb:~/Dev/xo-web/api $ echo $?
1
I don't want to ignore the non-zero code altogether, because then I will miss true test failures.
Has anyone successfully used pending tests without them causing problems with CI systems?
Consider this output when I use the bash time utility to benchmark a trivial minitest run:
$time ruby spec/trivial_spec.rb
Run options: --seed 4156
# Running:
.
Finished in 0.001077s, 928.9209 runs/s, 928.9209 assertions/s.
1 runs, 1 assertions, 0 failures, 0 errors, 0 skips
real 0m0.443s
user 0m0.341s
sys 0m0.083s
As you can see, the test itself runs almost instantly (0.001s), but actually performing that test takes nearly half a second, presumably because of the load time of minitest itself.
Is there any way to remove this delay? Either through minitest config options or perhaps using another tool that preloads it?
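One quick way to confirm that the delay really is start-up cost rather than the test itself is to time the require on its own; a rough sketch (nothing here is specific to your spec file):
# startup_check.rb -- rough sketch: measure how long requiring minitest takes,
# so interpreter and library load time can be separated from the test run.
started = Time.now
require 'minitest/autorun'
puts "require 'minitest/autorun' took #{Time.now - started} seconds"
# Whatever is left of the ~0.4s when running a real spec file is Ruby's own
# start-up time plus your application's requires, not the tests themselves.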
I'm using a shell script to run Protractor tests.
I want to make sure that if a test fails (exit code != 0), it will run again - three times at most.
I'm already using TeamCity, but TeamCity sends the 'FAIL' email and only then tries again. I want the test to run three times before any message is sent.
This is part of my script:
if [ "$#" -eq 0 ];
then
/usr/local/bin/protractor proactor-config.js --suite=sanity
Now I want to somehow check whether the exit code was 0 and, if not, run again.
Thanks.
I wrote a small module to do this called protractor-flake. It can be used via the CLI:
# defaults to 3 attempts
protractor-flake -- protractor.conf.js
Or programmatically.
One nice thing here is that it will only re-run failed spec files, instead of your entire test suite.
There is a long standing feature request for this in the protractor issue queue. It probably won't be baked into the core of the framework.
A function to check the exit status (note that naming it 'test' shadows the shell's built-in test command):
function test {
"$@"
local status=$?
if [ $status -ne 0 ]; then
echo "error with $1" >&2
fi
return $status
}
test command1
test command2
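If you would rather keep the retry bookkeeping out of the shell, here is a rough Ruby sketch of the same idea; the protractor command line and the limit of three attempts are only examples taken from the question, so adjust them to your setup:
# retry_run.rb -- rough sketch: run a command and retry it while it exits
# non-zero, giving up after MAX_ATTEMPTS runs.
MAX_ATTEMPTS = 3
cmd = ARGV.empty? ? %w[/usr/local/bin/protractor protractor-config.js --suite=sanity] : ARGV
MAX_ATTEMPTS.times do |attempt|
  puts "attempt #{attempt + 1} of #{MAX_ATTEMPTS}: #{cmd.join(' ')}"
  exit 0 if system(*cmd)   # system returns true only when the command exits 0
end
exit 1                     # still failing after MAX_ATTEMPTS runs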
If you use protractor with cucumber-js, you can choose to give each scenario (or all scenarios tagged as unstable) a number of retries:
./node_modules/cucumber/bin/cucumber-js --help
...
--retry <NUMBER_OF_RETRIES> specify the number of times to retry failing test cases (default: 0) (default: 0)
--retryTagFilter <EXPRESSION> only retries the features or scenarios with tags matching the expression (repeatable).
This option requires '--retry' to be specified. (default: "")
Unfortunately, even if every failed scenario has been successfully retried, Protractor will still return exit code 1:
https://github.com/protractor-cucumber-framework/protractor-cucumber-framework/issues/176
As a workaround, when starting Protractor I append the following to its command line:
const directory = 'build';
ensureDirSync(directory);
const cucumberSummary = join(directory, 'cucumberSummary.log');
protractorCommandLine += ` --cucumberOpts.format=summary:${cucumberSummary} \
|| grep -P "^(\\d*) scenarios? \\(.*?\\1 passed" ${cucumberSummary} \
&& rm ${cucumberSummary}`;
In an effort to use Cucumber for a command-line script, I've installed the aruba gem as per the instructions provided. It's in my Gemfile, I can verify that the correct version is installed, and I've included
require 'aruba/cucumber'
in 'features/env.rb'
In order to ensure it works, I wrote the following scenario:
@announce
Scenario: Testing cucumber/aruba
Given a blank slate
Then the output from "ls -la" should contain "drw"
on the assumption that it should fail.
It does fail, but it fails for the wrong reasons:
@announce
Scenario: Testing cucumber/aruba
Given a blank slate
Then the output from "ls -la" should contain "drw"
You have a nil object when you didn't expect it!
You might have expected an instance of Array.
The error occurred while evaluating nil.[] (NoMethodError)
features/dataloader.feature:9:in `Then the output from "ls -la" should contain "drw"'
Anyone have any ideas why this isn't working? This seems to be very basic aruba behavior.
You are missing a 'When' step - the aruba "output should contain" step requires the command to have already been run (it does not run the command itself; it only looks up its recorded output).
@announce
Scenario: Testing cucumber/aruba
Given a blank slate
When I run `ls -la`
Then the output from "ls -la" should contain "drw"
This produces, on my machine:
@announce
Scenario: Testing cucumber/aruba # features/test_aruba.feature:8
When I run `ls -la` # aruba-0.4.11/lib/aruba/cucumber.rb:56
$ cd /Users/d.chetlin/dev/mine/ladder/tmp/aruba
$ ls -la
total 0
drwx------ 2 d.chetlin staff 68 Feb 15 23:38 .
drwx------ 7 d.chetlin staff 238 Feb 15 23:38 ..
Then the output from "ls -la" should contain "drw" # aruba-0.4.11/lib/aruba/cucumber.rb:86
1 scenario (1 passed)
2 steps (2 passed)
0m0.465s
I have a pretty big spec suite (watirspec) that I am running against a Ruby gem (safariwatir), and there are a lot of failures:
1002 examples, 655 failures, 1 pending
When I make a change in the gem and run the suite again, sometimes a lot of previously failing specs pass (52 in this example):
1002 examples, 603 failures, 1 pending
I would like to know which previously failing specs are now passing, and of course whether any of the previously passing specs are now failing. What I do now to compare the results is to run the tests with the --format documentation option, output the results to a text file, and then diff the files:
rspec --format documentation --out output.txt
Is there a better way? Comparing text files is not the easiest way to see what changed.
Just save the results to a file, as you are doing now, and then diff those results with whatever diff tool you prefer.
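If a raw diff turns out to be too noisy, a small throwaway script can reduce two documentation-format output files to per-example statuses and print only the examples whose status changed; a rough sketch (previous.txt and current.txt are placeholder file names):
# compare_runs.rb -- rough sketch: compare two rspec --format documentation
# output files and list only the examples whose status changed between runs.
#
#   ruby compare_runs.rb previous.txt current.txt
def statuses(path)
  File.readlines(path).each_with_object({}) do |line, result|
    line = line.rstrip
    next if line.empty?
    status = case line
             when /\(FAILED - \d+\)\z/ then :failed
             when /\(PENDING: .*\)\z/  then :pending
             else :passed
             end
    # strip the annotation so the same example matches up across both files
    description = line.sub(/\s*\((FAILED - \d+|PENDING: .*)\)\z/, '')
    result[description] = status
  end
end
previous, current = statuses(ARGV[0]), statuses(ARGV[1])
(previous.keys & current.keys).each do |description|
  next if previous[description] == current[description]
  puts "#{previous[description]} -> #{current[description]}: #{description}"
end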
I don't know of anything out there that can do exactly that. That said, if you need it badly enough that you don't mind spending some time hacking your own formatter, take a look at Spec::Runner::Formatter::BaseFormatter. It is pretty well documented.
I've implemented @Serabe's solution for you. See the gist: https://gist.github.com/1142145.
Put the file my_formatter.rb into your spec folder and run rspec --format MyFormatter. The formatter will compare the current run's results with the previous run's results and will output the difference as a table.
NOTE: The formatter creates/overwrites file result.txt in the current folder.
Example usage:
D:\Projects\ZPersonal\equatable>rspec spec --format MyFormatter
..........
No changes since last run
Finished in 0.011 seconds
10 examples, 0 failures
The 'No changes since last run' line was added by the formatter.
And now I intentionally broke one spec and reran rspec:
D:\Projects\ZPersonal\equatable>rspec spec --format MyFormatter
..F.......
Affected tests (1).
PS CS Description
. F Equatable#== should be equal to the similar sock
PS - Previous Status
CS - Current Status
Failures:
1) Equatable#== should be equal to the similar sock
Failure/Error: subject.should == Sock.new(10, :black, 0)
expected: #<Sock:0x2fbb930 @size=10, @color=:black, @price=0>
got: #<Sock:0x2fbbae0 @size=10, @color=:black, @price=20> (using ==)
Diff:
@@ -1,2 +1,2 @@
-#<Sock:0x2fbb930 @color=:black, @price=0, @size=10>
+#<Sock:0x2fbbae0 @color=:black, @price=20, @size=10>
# ./spec/equatable_spec.rb:30:in `block (3 levels) in <top (required)>'
Finished in 0.008 seconds
10 examples, 1 failure
Failed examples:
rspec ./spec/equatable_spec.rb:29 # Equatable#== should be equal to the similar sock
The table with affected specs was added by the formatter:
Affected tests (1).
PS CS Description
. F Equatable#== should be equal to the similar sock
PS - Previous Status
CS - Current Status
If a spec's status differs between the current and the previous run, the formatter outputs the previous status, the current status, and the spec description. '.' stands for passed specs, 'F' for failed, and 'P' for pending.
The code is far from perfect, so feel free to criticize and change it as you want.
Hope this helps. Let me know if you have any questions.