Print only the number of failed examples with rspec - ruby

Is there a way to make rspec just print number of failed examples without any other information?
I want the output to be like this:
Finished in 2.35 seconds (files took 7.33 seconds to load)
1207 examples, 40 failures, 15 pending
Thank You

Out of the box RSpec does not have a formatter that matches the output you want. However, it is possible to write your own formatter.
Here's what it would look like for RSpec 3+:
RSpec::Support.require_rspec_core "formatters/base_text_formatter"

class SimpleFormatter < RSpec::Core::Formatters::BaseTextFormatter
  RSpec::Core::Formatters.register self, :example_passed, :example_pending,
                                   :example_failed, :dump_pending,
                                   :dump_failures, :dump_summary

  def example_passed(message)
    # Do nothing
  end

  def example_pending(message)
    # Do nothing
  end

  def example_failed(message)
    # Do nothing
  end

  def dump_pending(message)
    # Do nothing
  end

  def dump_failures(message)
    # Do nothing
  end

  def dump_summary(message)
    colorizer = ::RSpec::Core::Formatters::ConsoleCodes
    output.puts "\nFinished in #{message.formatted_duration} " \
                "(files took #{message.formatted_load_time} to load)\n" \
                "#{message.colorized_totals_line(colorizer)}\n"
  end
end

# Based on BaseTextFormatter from RSpec Core:
# https://github.com/rspec/rspec-core/blob/bc1482605763cc16efa55c98b8da64a89c8ff5f9/lib/rspec/core/formatters/base_text_formatter.rb
Then from the command line run rspec spec/ -r ./path/to/formatter -f SimpleFormatter.
To avoid having to type out the -r and -f options every time, you can place them in your .rspec file:
--require ./path/to/formatter
--format SimpleFormatter
Now just running rspec spec/ will automatically use the formatter we just created.

Minitest - Tests Don't Run - No Rails

I'm just starting a small project to emulate a carnival's ticket sales booth, and one of the guidelines was to test that a user can enter the number of tickets. The program runs in the console, and I eventually (hopefully) figured out how to implement this test thanks to @Stefan's answer on this question.
The problem is that now, when I run the test file, minitest says:
0 runs, 0 assertions, 0 failures, 0 errors, 0 skips
I get the same result when I try to run the test by name using ruby path/to/test/file.rb --name method-name. I'm not sure if this is because my code is still faulty or because I've set up minitest incorrectly. I've tried to look up similar problems on SO, but most questions seem to involve using minitest with Rails, and I just have a plain Ruby project.
Here's my test file:
gem 'minitest', '>= 5.0.0'
require 'minitest/spec'
require 'minitest/autorun'
require_relative 'carnival'

class CarnivalTest < MiniTest::Test
  def sample
    assert_equal(1, 1)
  end

  def user_can_enter_number_of_tickets
    with_stdin do |user|
      user.puts "2"
      assert_equal(Carnival.new.get_value, "2")
    end
  end

  def with_stdin
    stdin = $stdin          # remember the real $stdin
    $stdin, write = IO.pipe # assign the read end of the pipe to $stdin
    yield write             # pass the write end to the block
  ensure
    write.close             # close the pipe
    $stdin = stdin          # restore $stdin
  end
end
In a file called carnival.rb, in the same folder as my test file, I have:
class Carnival
  def get_value
    gets.chomp
  end
end
If anyone can help figure out why the test is not running I'd be grateful!
Minitest only runs public instance methods whose names start with test_, so your class currently defines no actual test methods. Rename the methods that contain assertions so they follow that convention:
class CarnivalTest < Minitest::Test
  def test_sample
    assert_equal(1, 1)
  end

  def test_user_can_enter_number_of_tickets
    with_stdin do |user|
      user.puts "2"
      assert_equal(Carnival.new.get_value, "2")
    end
  end

  # snip...
end
Always start your test method names with test_ so Minitest knows you want that method run as a test:
class CarnivalTest < MiniTest::Test
  def test_sample
    assert_equal(1, 1)
  end

  def test_user_can_enter_number_of_tickets
    with_stdin do |user|
      user.puts "2"
      assert_equal(Carnival.new.get_value, "2")
    end
  end
end
and that should work for you
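As an aside, since the file already requires minitest/spec: spec-style tests need no naming convention at all, because it blocks are registered as test methods automatically. A minimal sketch of that alternative, assuming the same carnival.rb as above:

require 'minitest/autorun'
require_relative 'carnival'

describe Carnival do
  # `it` blocks need no test_ prefix; Minitest registers them itself
  it "returns what the user types" do
    stdin = $stdin
    begin
      $stdin, write = IO.pipe # same pipe trick as with_stdin above
      write.puts "2"
      assert_equal "2", Carnival.new.get_value
    ensure
      write.close
      $stdin = stdin
    end
  end
end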

In RSpec, how to determine the time each spec file takes to run?

Background: My project's continuous integration build runs RSpec in several parallel runs. Specs are partitioned across parallel runs by spec file. That means long spec files dominate test suite run time. So I want to know the time each spec file takes to run (not just the time each example takes to run).
How can I get RSpec to tell me the time each spec file takes to run? Several of RSpec's stock formatters tell me the time each example takes to run, but they don't sum the times per spec file.
I'm using RSpec 3.2.
I addressed this need by writing my own RSpec formatter. Put the following class in spec/support, make sure it's required, and run rspec like so:
rspec --format SpecTimeFormatter --out spec-times.txt
class SpecTimeFormatter < RSpec::Core::Formatters::BaseFormatter
  RSpec::Core::Formatters.register self, :example_started, :stop

  def initialize(output)
    @output = output
    @times = []
  end

  def example_started(notification)
    current_spec = notification.example.file_path
    if current_spec != @current_spec
      if @current_spec_start_time
        save_current_spec_time
      end
      @current_spec = current_spec
      @current_spec_start_time = Time.now
    end
  end

  def stop(_notification)
    save_current_spec_time
    @times.
      sort_by { |_spec, time| -time }.
      each { |spec, time| @output << "#{'%4d' % time} seconds #{spec}\n" }
  end

  private

  def save_current_spec_time
    @times << [@current_spec, (Time.now - @current_spec_start_time).to_i]
  end
end
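For the "make sure it's required" step: if nothing loads spec/support automatically, a common pattern (an assumption about your setup, not something this formatter needs) is a glob require in spec_helper.rb:

# spec/spec_helper.rb
# Load everything under spec/support, including the formatter above
Dir[File.join(__dir__, 'support', '**', '*.rb')].sort.each { |file| require file }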

Ruby - Print with color output

I have a ruby script (Guardfile) that executes a rake command.
guard :shell do
  watch(%r{^manifests\/.+\.pp$}) do |m|
    spec = `rake spec`
    retval = $?.to_i
    case retval
    when 0
      if spec.length > 0
        puts spec
        n "#{m[0]} Tests Failed!", 'Rake Spec', :pending
      else
        puts spec
        n "#{m[0]} Tests Passed!", 'Rake Spec', :pending
      end
    end
  end
end
When I run rake spec from the command line, the output is colorized.
How can I make the output of the ruby script colorized as well?
(Screenshots omitted: the command-line run shows color; the output captured by the script does not.)
Update
I was able to sort of work around the problem by using the script command (see "bash command preserve color when piping"):
spec = `script -q /dev/null rake spec`
This still has the downside of not scrolling the text in real time. While it does preserve the colors, it does not output anything until the very end.
Is there a more native way to do this that will allow for scrolling?
First, rake spec --color won't work, because you're passing --color to rake, and not rspec.
Jay Mitchell's suggestion for color should work - by putting this in your .rspec file:
--color
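If the Rakefile defines the task with RSpec::Core::RakeTask (an assumption about how rake spec is wired up here), you can also pass options for a single run through the SPEC_OPTS environment variable:
rake spec SPEC_OPTS="--color"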
As for having "live" output, guard-shell has an eager command for this:
https://github.com/guard/guard-shell/blob/master/lib/guard/shell.rb#L37-L51
Unfortunately, guard-shell has 2 important shortcomings:
it doesn't give you access to the exit code
it doesn't properly report failures in Guard (which causes other tasks to run)
So the eager method of Guard::Shell is useless for our needs here.
Instead, the following should work:
# a version of Guard::Shell's 'eager()' which returns the result
class InPty
  require 'pty'

  def self.run(command)
    PTY.spawn(command) do |r, w, pid|
      begin
        $stdout.puts
        r.each { |line| $stdout.print line }
      rescue Errno::EIO
        # the child closed its output; nothing left to read
      end
      Process.wait(pid)
    end
    $?.success?
  rescue PTY::ChildExited
  end
end

# A hack so that Guard::Shell properly throws :task_has_failed
class ProperGuardPluginFailure
  def to_s
    throw :task_has_failed
  end
end

guard :shell, any_return: true do
  watch(%r{^manifests\/.+\.pp$}) do |m|
    ok = InPty.run('rake spec')
    status, type = ok ? ['Passed', :success] : ['Failed', :failed]
    n "#{m[0]} Tests #{status}!", 'Rake Spec', type
    ok ? nil : ProperGuardPluginFailure.new
  end
end
The above looks ideal for a new guard plugin - good idea?
I am unfamiliar with Guardfiles. Can you use gems? The colorize gem is great.
https://github.com/fazibear/colorize
Install it:
$ sudo gem install colorize
Use it:
require 'colorize'
puts "Tests failed!".red
puts "Tests passed!".green

Override rake test:units runner

I recently decided to write a simple test runtime profiler for our Rails 3.0 app's test suite. It's a very simple (read: hacky) script that adds each test's time to a global, and then outputs the result at the end of the run:
require 'test/unit/ui/console/testrunner'

module ProfilingHelper
  def self.included(mod)
    $test_times ||= []
    mod.class_eval do
      setup :setup_profiling
      def setup_profiling
        @test_start_time = Time.now
      end

      teardown :teardown_profiling
      def teardown_profiling
        @test_took_time = Time.now - @test_start_time
        $test_times << [name, @test_took_time]
      end
    end
  end
end

class ProfilingRunner < Test::Unit::UI::Console::TestRunner
  def finished(elapsed_time)
    super
    tests = $test_times.sort { |x, y| y[1] <=> x[1] }.first(100)
    output("Top 100 slowest tests:")
    tests.each do |t|
      output("#{t[1].round(2)}s: \t #{t[0]}")
    end
  end
end

Test::Unit::AutoRunner::RUNNERS[:profiling] = proc do |r|
  ProfilingRunner
end
This allows me to run the suites like so: rake test:xxx TESTOPTS="--runner=profiling", and get a list of the top 100 slowest tests appended to the end of the default runner's output. It works great for test:functionals and test:integration, and even for test:units TEST='test/unit/an_example_test.rb'. But if I do not specify a test file for test:units, TESTOPTS appears to be ignored.
In classic SO style, I found the answer after articulating the problem clearly to myself, so here it is:
When run without TEST=test/unit/blah_test.rb, the test:units task needs a -- before the contents of TESTOPTS. So the solution in its entirety is simply:
rake test:units TESTOPTS='-- --runner=profiling'
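And, as noted in the question, when a single test file is given the plain form still works:
rake test:units TEST=test/unit/an_example_test.rb TESTOPTS="--runner=profiling"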

Are there any good ruby testing traceability solutions?

I'm writing some ruby (not Rails) and using test/unit with shoulda to write tests.
Are there any gems that'll allow me to implement traceability from my tests back to designs/requirements?
i.e.: I want to tag my tests with the name of the requirements they test, and then generate reports of requirements that aren't tested or have failing tests, etc.
Hopefully that's not too enterprisey for ruby.
Thanks!
Update: This solution is available as a gem: http://rubygems.org/gems/test4requirements
Are there any gems that'll allow me to implement traceability from my
tests back to designs/requirements?
I don't know of any gem, but your need inspired a little experiment in how it could be solved:
- You define your requirements with RequirementList.new(1, 2, 3, 4).
- These requirements are assigned to a TestCase with requirements.
- Each test can be assigned to a requirement with requirement.
- After the test results you get an overview of which requirements were tested (successfully).
And now the example:
gem 'test-unit'
require 'test/unit'

###########
# This should be a gem
###########
class Test::Unit::TestCase
  def self.requirements(req)
    @@requirements = req
  end

  def requirement(req)
    raise RuntimeError, "No requirements defined for #{self}" unless defined? @@requirements
    caller.first =~ /:\d+:in `(.*)'/
    @@requirements.add_test(req, "#{self.class}##{$1}")
  end

  alias :run_test_old :run_test
  def run_test
    run_test_old
    # this code is skipped if a problem occurred.
    # in other words: if we reach this place, then the test was successful
    if defined? @@requirements
      @@requirements.test_successfull("#{self.class}##{@method_name}")
    end
  end
end

class RequirementList
  def initialize(*reqs)
    @requirements = reqs
    @tests = {}
    @success = {}
    # Yes, we need two at_exit.
    # Tests also run at_exit; with a double at_exit we run after them.
    # Maybe better to be added later.
    at_exit {
      at_exit do
        self.overview
      end
    }
  end

  def add_test(key, loc)
    # FIXME: check duplicates
    @tests[key] = loc
  end

  def test_successfull(loc)
    # FIXME: check duplicates
    @success[loc] = true
  end

  def overview
    puts "Requirements overview:"
    @requirements.each { |req|
      if @tests[req] # a test was defined for this requirement
        if @success[@tests[req]]
          puts "Requirement #{req} was tested in #{@tests[req]}"
        else
          puts "Requirement #{req} was unsuccessfully tested in #{@tests[req]}"
        end
      else
        puts "Requirement #{req} was not tested"
      end
    }
  end
end # RequirementList
###############
## Here the gem ends. The tests follow.
###############
$req = RequirementList.new(1, 2, 3, 4)

class MyTest < Test::Unit::TestCase
  # The following requirements exist and must be tested successfully
  requirements $req

  def test_1
    requirement(1) # this test is testing requirement 1
    assert_equal(2, 1 + 1)
  end

  def test_2
    requirement(2)
    assert_equal(3, 1 + 1)
  end

  def test_3
    # no assignment to requirement 3
    pend 'pend'
  end
end

class MyTest_4 < Test::Unit::TestCase
  # The following requirements exist and must be tested successfully
  requirements $req

  def test_4
    requirement(4) # this test is testing requirement 4
    assert_equal(2, 1 + 1)
  end
end
the result:
Loaded suite testing_traceability_solutions
Started
.FP.
1) Failure:
test_2(MyTest)
[testing_traceability_solutions.rb:89:in `test_2'
testing_traceability_solutions.rb:24:in `run_test']:
<3> expected but was
<2>.
2) Pending: pend
test_3(MyTest)
testing_traceability_solutions.rb:92:in `test_3'
testing_traceability_solutions.rb:24:in `run_test'
Finished in 0.65625 seconds.
4 tests, 3 assertions, 1 failures, 0 errors, 1 pendings, 0 omissions, 0 notifications
50% passed
Requirements overview:
Requirement 1 was tested in MyTest#test_1
Requirement 2 was unsuccessfully tested in MyTest#test_2
Requirement 3 was not tested
Requirement 4 was tested in MyTest_4#test_4
If you think this could be a solution for you, please give me feedback; then I will try to build a gem out of it.
Code example for usage with shoulda:
#~ require 'test4requirements' ### does not exist yet / use the code above
require 'shoulda'

# use another interface ## not implemented ###
#~ $req = Requirement.new_from_file('requirements.txt')

class MyTest_shoulda < Test::Unit::TestCase
  # The following requirements exist and must be tested successfully
  #~ requirements $req

  context 'req. of customer X' do
    # add the requirement as a parameter of should
    # (does not work yet)
    should 'fulfill request 1', requirement: 1 do
      assert_equal(2, 1 + 1)
    end

    # add the requirement via the requirement command
    # (works already)
    should 'fulfill request 1' do
      requirement(1) # this test is testing requirement 1
      assert_equal(2, 1 + 1)
    end
  end # context
end # MyTest_shoulda
With Cucumber you can have your requirement be the test; it doesn't get any more traceable than that :)
A single requirement is a feature, and a feature has scenarios that you want to test.
# addition.feature
Feature: Addition
  In order to avoid silly mistakes
  As a math idiot
  I want to be told the sum of two numbers

  Scenario Outline: Add two numbers
    Given I have entered <input_1> into the calculator
    And I have entered <input_2> into the calculator
    When I press <button>
    Then the result should be <output> on the screen

    Examples:
      | input_1 | input_2 | button | output |
      | 20      | 30      | add    | 50     |
      | 2       | 5       | add    | 7      |
      | 0       | 40      | add    | 40     |
Then you have step definitions, written in Ruby, mapped to the scenarios:
# step_definitions/calculator_steps.rb
begin require 'rspec/expectations'; rescue LoadError; require 'spec/expectations'; end
require 'cucumber/formatter/unicode'
$:.unshift(File.dirname(__FILE__) + '/../../lib')
require 'calculator'

Before do
  @calc = Calculator.new
end

After do
end

Given /I have entered (\d+) into the calculator/ do |n|
  @calc.push n.to_i
end

When /I press (\w+)/ do |op|
  @result = @calc.send op
end

Then /the result should be (.*) on the screen/ do |result|
  @result.should == result.to_f
end
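Assuming the conventional features/ directory layout, you then run the feature with the cucumber command; each row of the Examples table executes as its own scenario:
cucumber features/addition.feature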
