How to compare results of two RSpec suite runs?

I have a pretty big spec suite (watirspec) that I am running against a Ruby gem (safariwatir), and there are a lot of failures:
1002 examples, 655 failures, 1 pending
When I make a change in the gem and run the suite again, sometimes a lot of previously failing specs pass (52 in this example):
1002 examples, 603 failures, 1 pending
I would like to know which previously failing specs are now passing and, of course, whether any previously passing specs are now failing. What I do now to compare the results is run the tests with the --format documentation option, write the results to a text file, and then diff the files:
rspec --format documentation --out output.txt
Is there a better way? Comparing text files is not the easiest way to see what changed.

Just save the results to a file like you're doing right now, and then diff those results with any diffing tool.

I don't know of anything out there that can do exactly that. That said, if you need it badly enough that you don't mind spending some time hacking your own formatter, take a look at Spec::Runner::Formatter::BaseFormatter. It is pretty well documented.
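For reference, on modern RSpec (3.x) the equivalent base class is RSpec::Core::Formatters::BaseFormatter, and a formatter subscribes to notifications explicitly. A minimal sketch against the RSpec 3 API (the class name and output format here are illustrative):

# minimal_formatter.rb -- a bare-bones custom formatter for RSpec 3.
require 'rspec/core'
require 'rspec/core/formatters/base_formatter'

class MinimalFormatter < RSpec::Core::Formatters::BaseFormatter
  # Subscribe to the notifications this formatter cares about.
  RSpec::Core::Formatters.register self, :example_passed, :example_failed, :dump_summary

  def example_passed(notification)
    output.puts "PASS: #{notification.example.full_description}"
  end

  def example_failed(notification)
    output.puts "FAIL: #{notification.example.full_description}"
  end

  def dump_summary(summary)
    output.puts "#{summary.example_count} examples, #{summary.failure_count} failures"
  end
end

Run it with rspec --require ./minimal_formatter.rb --format MinimalFormatter.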

I've implemented @Serabe's solution for you. See the gist: https://gist.github.com/1142145.
Put the file my_formatter.rb into your spec folder and run rspec --format MyFormatter. The formatter will compare the current run's results with the previous run's results and output the difference as a table.
NOTE: The formatter creates/overwrites file result.txt in the current folder.
Example usage:
D:\Projects\ZPersonal\equatable>rspec spec --format MyFormatter
..........
No changes since last run
Finished in 0.011 seconds
10 examples, 0 failures
The No changes since last run line was added by the formatter.
Now I intentionally broke one spec and reran rspec:
D:\Projects\ZPersonal\equatable>rspec spec --format MyFormatter
..F.......
Affected tests (1).
PS CS Description
. F Equatable#== should be equal to the similar sock
PS - Previous Status
CS - Current Status
Failures:
1) Equatable#== should be equal to the similar sock
Failure/Error: subject.should == Sock.new(10, :black, 0)
expected: #<Sock:0x2fbb930 @size=10, @color=:black, @price=0>
got: #<Sock:0x2fbbae0 @size=10, @color=:black, @price=20> (using ==)
Diff:
@@ -1,2 +1,2 @@
-#<Sock:0x2fbb930 @color=:black, @price=0, @size=10>
+#<Sock:0x2fbbae0 @color=:black, @price=20, @size=10>
# ./spec/equatable_spec.rb:30:in `block (3 levels) in <top (required)>'
Finished in 0.008 seconds
10 examples, 1 failure
Failed examples:
rspec ./spec/equatable_spec.rb:29 # Equatable#== should be equal to the similar sock
The Affected tests table at the top of the output was added by the formatter.
If a spec's status differs between the current and previous run, the formatter outputs the previous status, the current status, and the spec description. '.' stands for a passed spec, 'F' for failed, and 'P' for pending.
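In outline, the comparison boils down to persisting each example's status keyed by its full description and diffing it against the previous run. A rough sketch of that idea (this is not the gist code; the JSON persistence format and file name are illustrative):

require 'json'

RESULTS_FILE = 'result.txt'

# current: a hash like { "full example description" => "." }
def report_changes(current)
  previous = File.exist?(RESULTS_FILE) ? JSON.parse(File.read(RESULTS_FILE)) : {}
  changed = current.select { |desc, status| previous.key?(desc) && previous[desc] != status }
  File.write(RESULTS_FILE, JSON.generate(current)) # save for the next run
  if changed.empty?
    puts 'No changes since last run'
  else
    puts "Affected tests (#{changed.size})."
    changed.each { |desc, status| puts "#{previous[desc]} #{status} #{desc}" }
  end
end

(Newer RSpec versions can persist per-example statuses natively via config.example_status_persistence_file_path, which powers the --only-failures option.)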
The code is far from perfect, so feel free to criticize and change it as you want.
Hope this helps. Let me know if you have any questions.

Related

RSpec pending groups/tests results in non-zero exit status

I have a set of RSpec groups and examples, with a few groups and examples marked pending using the pending: 'Some notes about why' approach.
My suite runs, and successfully reports:
Finished in 35.63 seconds (files took 6.51 seconds to load)
65 examples, 0 failures, 26 pending
Unfortunately, it returns a non-zero exit status, which causes CI to treat the run as a failure:
petejohanson@xo-mb:~/Dev/xo-web/api $ echo $?
1
I don't want to ignore the non-zero code altogether, because then I will miss true test failures.
Has anyone successfully used pending tests without them causing problems for CI systems?
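For reference, the metadata style described in the question looks like the sketch below. Note that in RSpec 3, pending still runs the example (and fails the suite if it unexpectedly passes), whereas skip/xit does not run it at all; whether pending examples alone affect the exit status depends on your RSpec version and configuration:

RSpec.describe 'a feature under construction' do
  # Runs the body; stays pending only as long as the body keeps failing.
  it 'does the thing', pending: 'Some notes about why' do
    expect(1 + 1).to eq(3)
  end

  # Never runs the body at all.
  xit 'is skipped entirely' do
    expect(true).to eq(false)
  end
end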

Remove half second delay when minitest starts?

Consider this output when I use the bash time utility to benchmark a trivial minitest run:
$ time ruby spec/trivial_spec.rb
Run options: --seed 4156
# Running:
.
Finished in 0.001077s, 928.9209 runs/s, 928.9209 assertions/s.
1 runs, 1 assertions, 0 failures, 0 errors, 0 skips
real 0m0.443s
user 0m0.341s
sys 0m0.083s
As you can see, the test itself runs almost instantly (0.001s), but actually performing that test takes nearly half a second, presumably because of the load time of minitest itself.
Is there any way to remove this delay? Either through minitest config options or perhaps using another tool that preloads it?
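Most of that half second is interpreter and RubyGems start-up rather than minitest itself; a quick way to check the baseline on your machine (timings will vary) is to time an empty program:

$ time ruby -e ''

Whatever that reports is a floor that no minitest option can remove; preloading tools work around it by keeping a warm Ruby process alive and forking it for each run.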

How to print capistrano current thread hash?

An example output from capistrano:
INFO [94db8027] Running /usr/bin/env uptime on leehambley@example.com:22
DEBUG [94db8027] Command: /usr/bin/env uptime
DEBUG [94db8027] 17:11:17 up 50 days, 22:31, 1 user, load average: 0.02, 0.02, 0.05
INFO [94db8027] Finished in 0.435 seconds command successful.
As you can see, each line starts with "{type} {hash}". I assume the hash is some unique identifier for either the server or the running thread, as I've noticed that if I run capistrano over several servers, each one has its own distinct hash.
My question is, how do I get this value? I want to manually output some message during execution, and I want to be able to match my output, with the server that triggered it.
Something like: puts "DEBUG ["+????+"] Something happened!"
What do I put in the ???? there? Or is there another, built-in way to output messages like this?
For reference, I am using Capistrano Version: 3.2.1 (Rake Version: 10.3.2)
This hash is a command UUID. It is tied not to the server but to the specific command that is currently being run.
If all you want is to distinguish between servers, you can try the following:
task :some_task do
  on roles(:app) do |host|
    debug "[#{host.hostname}:#{host.port}] something happened"
  end
end

Checking output from "command" should contain: unexpected crash with NilClass

In an effort to use Cucumber for a command-line script, I've installed the aruba gem as per the instructions provided. It's in my Gemfile, I can verify that the correct version is installed and I've included
require 'aruba/cucumber'
in 'features/env.rb'
In order to ensure it works, I wrote the following scenario:
@announce
Scenario: Testing cucumber/aruba
  Given a blank slate
  Then the output from "ls -la" should contain "drw"
assuming the scenario would fail.
It does fail, but for the wrong reason:
@announce
Scenario: Testing cucumber/aruba
  Given a blank slate
  Then the output from "ls -la" should contain "drw"
You have a nil object when you didn't expect it!
You might have expected an instance of Array.
The error occurred while evaluating nil.[] (NoMethodError)
features/dataloader.feature:9:in `Then the output from "ls -la" should contain "drw"'
Anyone have any ideas why this isn't working? This seems to be very basic aruba behavior.
You are missing a 'When' step: the aruba "the output from ... should contain" step requires the command to have already been run (it does not run the command itself; it only looks up its recorded output).
@announce
Scenario: Testing cucumber/aruba
  Given a blank slate
  When I run `ls -la`
  Then the output from "ls -la" should contain "drw"
This produces, on my machine:
@announce
Scenario: Testing cucumber/aruba # features/test_aruba.feature:8
When I run `ls -la` # aruba-0.4.11/lib/aruba/cucumber.rb:56
$ cd /Users/d.chetlin/dev/mine/ladder/tmp/aruba
$ ls -la
total 0
drwx------ 2 d.chetlin staff 68 Feb 15 23:38 .
drwx------ 7 d.chetlin staff 238 Feb 15 23:38 ..
Then the output from "ls -la" should contain "drw" # aruba-0.4.11/lib/aruba/cucumber.rb:86
1 scenario (1 passed)
2 steps (2 passed)
0m0.465s

Ruby Test Unit: Multiple Scripts, One Output

Can I run multiple Test Cases from multiple scripts but have a single output that either says "100% Pass" or "X Failed" and lists out the failed tests?
For example I want to see something like:
>runtests.rb all #runs all the scripts in the directory
Finished in 4.523 Seconds
100% Pass
>runtests.rb category #runs all the scripts in a specified sub-directory
Finished in 2.1 Seconds
2 Failed:
test_my_test
test_my_test_2
1 Error:
test_my_test_3
I use the built-in MiniTest::Unit along with the autotest command that is part of ZenTest and get output like:
autotest
/Users/tinman/.rvm/rubies/ruby-1.9.2-p290/bin/ruby -I.:lib:test -rubygems -e "%w[test/unit tests/test_domains.rb tests/test_regex.rb tests/test_vlan.rb tests/test_nexus.rb tests/test_switch.rb tests/test_template.rb].each { |f| require f }"
Loaded suite -e
Started
........................................
Finished in 0.143375 seconds.
40 tests, 276 assertions, 0 failures, 0 errors, 0 skips
Test run options: --seed 62474
Is that similar to what you are talking about?
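If you want the exact runtests.rb interface from the question rather than autotest, requiring every test file in a single process gives a single combined summary, because minitest reports once per process. A minimal sketch (the tests/ layout, test_*.rb naming, and category argument are assumptions taken from the question):

# runtests.rb -- run all (or one category of) test scripts with one summary.
require 'minitest/autorun'

# Take the category argument, then remove it from ARGV so minitest's own
# option parser does not trip over it when it runs at exit.
category = ARGV.shift
dir = (category.nil? || category == 'all') ? 'tests' : File.join('tests', category)

# Requiring all the scripts in one process yields one combined report.
Dir.glob(File.join(dir, '**', 'test_*.rb')).sort.each do |file|
  require File.expand_path(file)
end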
