RSpec pending groups/tests result in non-zero exit status

I have a set of RSpec groups and examples, with a few of the groups and examples marked pending using the pending: 'Some notes about why' metadata approach.
My suite runs, and successfully reports:
Finished in 35.63 seconds (files took 6.51 seconds to load)
65 examples, 0 failures, 26 pending
Unfortunately, it still returns a non-zero exit status, causing CI to treat the run as a failure:
petejohanson@xo-mb:~/Dev/xo-web/api $ echo $?
1
I don't want to ignore the non-zero code altogether, because then I will miss true test failures.
Has anyone successfully used pending tests without them causing problems with CI systems?
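For reference, the marking style in question looks roughly like this (the group, example and helper names are made up for illustration):
describe 'Accounts API' do
  # a whole nested group marked pending via metadata
  describe 'refunds', pending: 'Some notes about why' do
    it 'refunds a charge' do
      expect(refund(charge)).to be_success
    end
  end

  # a single example marked pending via metadata
  it 'creates an account', pending: 'Some notes about why' do
    expect(create_account).to be_success
  end
end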

pmrep massupdate fails with "workflow object cannot be fetched"

While running the pmrep massupdate command, I see the error "workflow object workflow_name cannot be fetched". Any idea how to resolve it?

Command used:
pmrep massupdate -i modify_session -t 'session_config_property' -n 'Stop on errors' -v 1 -u output.log_modify
Snippet from output.log_modify
workflow object wf_DS_DW_SampleDim cannot be fetched.
Massupdate Summary:
Number of reusable sessions that are successfully updated: 0.
Number of non-reusable sessions that are successfully updated: 0.
Number of session instances that are successfully updated: 0.
Number of reusable sessions that fail to be updated: 0.
Number of non-reusable sessions that fail to be updated: 1.
Number of session instances that fail to be updated: 0.
Not sure why it fails for certain workflows and works for others. Any idea?

# Variable empty in Gherkin/Cucumber test

This is my test, but even though @timeout_exception is set while the code is running, it's empty during the test. How can I test whether this variable is set?
Then(/^the output should be '(.*)'$/) do |expectedException|
  expect(@timeout_exception).to eq(expectedException)
end
This is the output of the bundle exec cucumber run.
And the output should be 'Execution Timeout Error: This deployment has taken too long to run' # features/step_definitions/my_steps.rb:309
expected: "Execution Timeout Error: This deployment has taken too long to run"
got: nil
(compared using ==)
(RSpec::Expectations::ExpectationNotMetError)
./features/step_definitions/my_steps.rb:310:in `/^the output should be '(.*)'$/'
features/timeout_lengthy_deploys.feature:25:in `And the output should be 'Execution Timeout Error: This deployment has taken too long to run''
Failing Scenarios:
cucumber features/timeout_lengthy_deploys.feature:11 # Scenario: Normal deploy that times out because it takes too long
Selenium has its own waits, and if they are set lower than your expected wait, your expected wait will never be triggered.
The following sets the max wait for page load to 5 seconds
@browser.driver.manage.timeouts.page_load = 5
Script timeout is another (generally used with Ajax)
@browser.driver.manage.timeouts.script_timeout = 5
@browser.execute_script("return jQuery.active")
The implicit wait is the maximum time Selenium will wait for an operation on an element to complete. If it triggers first, your expect will fail.
@browser.driver.manage.timeouts.implicit_wait = 5
I would suggest setting implicit_wait higher than your timeout just prior to the test and then setting it back just afterwards. BTW, if your timeout raises an exception you will need a rescue block.
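Put together, that suggestion looks roughly like this (a sketch only: the 120-second value, the run_lengthy_deploy helper and the rescued exception class are placeholders for whatever your code actually uses):
When(/^I start a deploy that takes too long$/) do
  original_wait = 5
  # raise the implicit wait so Selenium does not give up before your own timeout fires
  @browser.driver.manage.timeouts.implicit_wait = 120
  begin
    run_lengthy_deploy
  rescue StandardError => e          # replace with your specific timeout exception
    @timeout_exception = e.message   # the Then step above checks this variable
  ensure
    # restore the implicit wait afterwards
    @browser.driver.manage.timeouts.implicit_wait = original_wait
  end
end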

Remove half second delay when minitest starts?

Consider this output when I use the bash time utility to benchmark a trivial minitest run:
$ time ruby spec/trivial_spec.rb
Run options: --seed 4156
# Running:
.
Finished in 0.001077s, 928.9209 runs/s, 928.9209 assertions/s.
1 runs, 1 assertions, 0 failures, 0 errors, 0 skips
real 0m0.443s
user 0m0.341s
sys 0m0.083s
As you can see, the test itself runs almost instantly (0.001s), but actually performing that test takes nearly half a second, presumably because of the load time of minitest itself.
Is there any way to remove this delay, either through minitest config options or perhaps by using another tool that preloads it?
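One way to narrow it down (a rough check; numbers are machine-dependent) is to time the interpreter start-up and a bare minitest require on their own:
time ruby -e ''
time ruby -e 'require "minitest/autorun"'
If those two account for most of the 0.4 seconds, the cost is interpreter and minitest load time rather than anything the test run itself does, which points toward the preloading approach mentioned above.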

Ruby Test Unit: Multiple Scripts, One Output

Can I run multiple Test Cases from multiple scripts but have a single output that either says "100% Pass" or "X Failed" and lists out the failed tests?
For example I want to see something like:
>runtests.rb all #runs all the scripts in the directory
Finished in 4.523 Seconds
100% Pass
>runtests.rb category #runs all the scripts in a specified sub-directory
Finished in 2.1 Seconds
2 Failed:
test_my_test
test_my_test_2
1 Error:
test_my_test_3
I use the built-in MiniTest::Unit along with the autotest command that is part of ZenTest and get output like:
autotest
/Users/tinman/.rvm/rubies/ruby-1.9.2-p290/bin/ruby -I.:lib:test -rubygems -e "%w[test/unit tests/test_domains.rb tests/test_regex.rb tests/test_vlan.rb tests/test_nexus.rb tests/test_switch.rb tests/test_template.rb].each { |f| require f }"
Loaded suite -e
Started
........................................
Finished in 0.143375 seconds.
40 tests, 276 assertions, 0 failures, 0 errors, 0 skips
Test run options: --seed 62474
Is that similar to what you are talking about?
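If you'd rather not depend on autotest, a small runner along these lines should also give a single combined summary, since every test file required under minitest/autorun is collected into one run (the tests/ directory layout and test_*.rb naming are assumptions):
# runtests.rb — minimal sketch: require every test file so minitest reports them together
require 'minitest/autorun'

dir = (ARGV[0].nil? || ARGV[0] == 'all') ? 'tests' : File.join('tests', ARGV[0])
Dir.glob(File.join(dir, '**', 'test_*.rb')).sort.each do |file|
  require File.expand_path(file)
end
It won't print the exact "100% Pass" wording without a custom reporter, but failed and errored tests are listed together at the end of the single summary.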

How to compare results of two RSpec suite runs?

I have a pretty big spec suite (watirspec) that I am running against a Ruby gem (safariwatir), and there are a lot of failures:
1002 examples, 655 failures, 1 pending
When I make a change in the gem and run the suite again, sometimes a lot of previously failing specs pass (52 in this example):
1002 examples, 603 failures, 1 pending
I would like to know which previously failing specs are now passing, and of course whether any of the previously passing specs are now failing. What I do now to compare the results is run the tests with the --format documentation option, output the results to a text file, and then diff the files:
rspec --format documentation --out output.txt
Is there a better way? Comparing text files is not the easiest way to see what changed.
Just save the results to a file like you're doing right now and then diff those results with any diffing tool.
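Concretely, that workflow is just (file names are arbitrary):
rspec --format documentation --out before.txt
# change the gem, then:
rspec --format documentation --out after.txt
diff before.txt after.txt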
I don't know of anything out there that can do exactly that. That said, if you need it badly enough that you don't mind spending some time hacking your own formatter, take a look at Spec::Runner::Formatter::BaseFormatter. It is pretty well documented.
I've implemented @Serabe's solution for you. See the gist: https://gist.github.com/1142145.
Put the file my_formatter.rb into your spec folder and run rspec --format MyFormatter. The formatter will compare the current run's results with the previous run's results and output the difference as a table.
NOTE: The formatter creates/overwrites file result.txt in the current folder.
Example usage:
D:\Projects\ZPersonal\equatable>rspec spec --format MyFormatter
..........
No changes since last run
Finished in 0.011 seconds
10 examples, 0 failures
The 'No changes since last run' line was added by the formatter.
And now I intentionally break one spec and rerun rspec:
D:\Projects\ZPersonal\equatable>rspec spec --format MyFormatter
..F.......
Affected tests (1).
PS CS Description
. F Equatable#== should be equal to the similar sock
PS - Previous Status
CS - Current Status
Failures:
1) Equatable#== should be equal to the similar sock
Failure/Error: subject.should == Sock.new(10, :black, 0)
expected: #<Sock:0x2fbb930 @size=10, @color=:black, @price=0>
got: #<Sock:0x2fbbae0 @size=10, @color=:black, @price=20> (using ==)
Diff:
@@ -1,2 +1,2 @@
-#<Sock:0x2fbb930 @color=:black, @price=0, @size=10>
+#<Sock:0x2fbbae0 @color=:black, @price=20, @size=10>
# ./spec/equatable_spec.rb:30:in `block (3 levels) in <top (required)>'
Finished in 0.008 seconds
10 examples, 1 failure
Failed examples:
rspec ./spec/equatable_spec.rb:29 # Equatable#== should be equal to the similar sock
The table with affected specs was added by the formatter:
Affected tests (1).
PS CS Description
. F Equatable#== should be equal to the similar sock
PS - Previous Status
CS - Current Status
If a spec's status differs between the current and previous runs, the formatter outputs the previous status, the current status and the spec description. '.' stands for passed specs, 'F' for failed and 'P' for pending.
The code is far from perfect, so feel free to criticize and change it as you want.
Hope this helps. Let me know if you have any questions.
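Formatter plumbing aside, the comparison itself boils down to something like the following standalone sketch (not the code from the gist; it assumes each run has been reduced to a hash mapping a spec's full description to '.', 'F' or 'P'):
def diff_runs(previous, current)
  changed = current.reject { |description, status| previous[description] == status }
  if changed.empty?
    puts 'No changes since last run'
  else
    puts "Affected tests (#{changed.size})."
    puts 'PS CS Description'
    changed.each do |description, status|
      puts format('%-2s %-2s %s', previous[description] || '-', status, description)
    end
  end
end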
