How to find the time taken for each test case in RSpec - Ruby

I am using RSpec in my project, and I would like to print the time taken by each test case. Does RSpec provide any prebuilt function for this? I can get the start time of a test case from example.execution_result.started_at, but I don't know how to get the end time. If I could get the end time, I could subtract the start time from it to get the duration of each test case. Can anyone help me here? I have written this code:
around(:each) do |example|
  start_time = Time.now
  example.run
  end_time = Time.now
  duration = end_time - start_time
  puts "Time Taken -> #{duration / 60.0} minutes"
end
But I strongly believe RSpec must provide some predefined method that returns the duration of each test case. Does anyone know of one?

RSpec has an example_status_persistence_file_path configuration option that generates a file with the run time for each individual test.
For example, given the following RSpec configuration/examples:
require 'rspec/autorun'

# Enable the reporting
RSpec.configure do |c|
  c.example_status_persistence_file_path = 'some_file.txt'
end

# Run some tests
RSpec.describe 'some thing' do
  it 'does stuff' do
    sleep(3)
  end

  it 'does more stuff' do
    sleep(2)
  end
end
A report of each example's status and run time is generated:
example_id      | status | run_time     |
--------------- | ------ | ------------ |
my_spec.rb[1:1] | passed | 3.02 seconds |
my_spec.rb[1:2] | passed | 2.01 seconds |

If you want more detail and/or want to control the formatting, you can create a custom formatter.
For example, given the following specs:
RSpec.describe 'some thing' do
  it 'does stuff' do
    sleep(3)
    raise('some error')
  end

  it 'does more stuff' do
    sleep(2)
  end
end
Output - Text
We can add a custom formatter to output the full test description, status, run time and exception:
class ExampleFormatter < RSpec::Core::Formatters::JsonFormatter
  RSpec::Core::Formatters.register self

  def close(_notification)
    @output_hash[:examples].map do |ex|
      output.puts [ex[:full_description], ex[:status], ex[:run_time], ex[:exception]].join(' | ')
    end
  end
end

RSpec.configure do |c|
  c.formatter = ExampleFormatter
end
This gives us:
some thing does stuff | failed | 3.010178 | {:class=>"RuntimeError", :message=>"some error", :backtrace=>["my_spec.rb:21:in `block... (truncated for example)
some thing does more stuff | passed | 2.019578 |
The output can be modified to add headers, have nicer formatting, etc.
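For instance, a header row can be printed before the example rows. A minimal sketch of the same close method with a header added (untested):
def close(_notification)
  # Header row, matching the fields printed below
  output.puts %w[Example Status RunTime Exception].join(' | ')
  @output_hash[:examples].each do |ex|
    output.puts [ex[:full_description], ex[:status], ex[:run_time], ex[:exception]].join(' | ')
  end
end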
Output - CSV
The formatter can be modified to output to a CSV:
require 'csv'

class ExampleFormatter < RSpec::Core::Formatters::JsonFormatter
  RSpec::Core::Formatters.register self

  def close(_notification)
    with_headers = { write_headers: true, headers: ['Example', 'Status', 'Run Time', 'Exception'] }
    CSV.open(output.path, 'w', with_headers) do |csv|
      @output_hash[:examples].map do |ex|
        csv << [ex[:full_description], ex[:status], ex[:run_time], ex[:exception]]
      end
    end
  end
end

RSpec.configure do |c|
  c.add_formatter(ExampleFormatter, 'my_spec_log.csv')
end
Which gives:
Example,Status,Run Time,Exception
some thing does stuff,failed,3.020176,"{:class=>""RuntimeError"", :message=>""some error"", :backtrace=>[""my_spec.rb:25:in `block...(truncated example)"
some thing does more stuff,passed,2.020113,

Every example gets an ExecutionResult object, which has a run_time method, so example.execution_result.run_time should give you what you're asking for.
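If you want to print it per example as the suite runs, one option is a small custom formatter that subscribes to the example_finished notification, by which point run_time has been populated. A minimal sketch (the output format here is just an illustration):
class TimingFormatter
  RSpec::Core::Formatters.register self, :example_finished

  def initialize(output)
    @output = output
  end

  def example_finished(notification)
    example = notification.example
    # run_time is available once the example has finished
    @output.puts format('%.4fs  %s', example.execution_result.run_time, example.full_description)
  end
end

RSpec.configure do |c|
  c.add_formatter(TimingFormatter)
end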

Related

How to make sure each Minitest unit test is fast enough?

I have a large number of Minitest unit tests (methods), over 300. They all take some time, from a few milliseconds to a few seconds. Some of them hang sporadically, and I can't tell which one or when.
I want to apply Timeout to each of them, to make sure any test fails if it takes longer than, say, 5 seconds. Is that achievable?
For example:
class FooTest < Minitest::Test
  def test_calculates_something
    # Something potentially too slow
  end
end
You can use the Minitest plugin loader to load a plugin. This is by far the cleanest solution. The plugin system is not very well documented, though.
Luckily, Adam Sanderson wrote an article on the plugin system.
The best news is that this article explains the plugin system by writing a sample plugin that reports slow tests. Try out minitest-snail; it is probably almost what you want.
With a little modification we can use the Reporter to mark a test as failed if it is too slow, like so (untested):
File minitest/snail_reporter.rb:
module Minitest
  class SnailReporter < Reporter
    attr_reader :max_duration, :slow_tests

    def self.options
      @default_options ||= {
        :max_duration => 2
      }
    end

    def self.enable!(options = {})
      @enabled = true
      self.options.merge!(options)
    end

    def self.enabled?
      @enabled ||= false
    end

    def initialize(io = STDOUT, options = self.class.options)
      super
      @max_duration = options.fetch(:max_duration)
      @slow_tests = []
    end

    def record(result)
      @passed = result.time < max_duration
      slow_tests << result unless @passed
    end

    def passed?
      @passed
    end

    def report
      return if slow_tests.empty?

      slow_tests.sort_by! { |r| -r.time }
      io.puts
      io.puts "#{slow_tests.length} slow tests."
      slow_tests.each_with_index do |result, i|
        io.puts "%3d) %s: %.2f s" % [i + 1, result.location, result.time]
      end
    end
  end
end
File minitest/snail_plugin.rb:
require_relative './snail_reporter'

module Minitest
  def self.plugin_snail_options(opts, options)
    opts.on "--max-duration TIME", "Report tests that take longer than TIME seconds." do |max_duration|
      SnailReporter.enable! :max_duration => max_duration.to_f
    end
  end

  def self.plugin_snail_init(options)
    if SnailReporter.enabled?
      io = options[:io]
      Minitest.reporter.reporters << SnailReporter.new(io)
    end
  end
end
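Minitest discovers plugins by scanning the load path for minitest/*_plugin.rb files, so assuming both files live under lib/minitest/, something like this should enable the reporter (the test file path is just an example):
ruby -Ilib -Itest test/foo_test.rb --max-duration 5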

Run cleanup step if any it block failed

When one of my it blocks fails, I want to run a cleanup step. When all of the it blocks succeed I don't want to run the cleanup step.
RSpec.describe 'my describe' do
  it 'first it' do
    logic_that_might_fail
  end

  it 'second it' do
    logic_that_might_fail
  end

  after(:all) do
    cleanup_logic if ONE_OF_THE_ITS_FAILED
  end
end
How do I implement ONE_OF_THE_ITS_FAILED?
Not sure if RSpec provides something out of the box, but this would work:
RSpec.describe 'my describe' do
  before(:all) do
    @exceptions = []
  end

  after(:each) do |example|
    @exceptions << example.exception
  end

  after(:all) do
    cleanup_logic if @exceptions.any?
  end

  # ...
end
I dug a little into the RSpec code and found a way to monkey patch the RSpec Reporter class. Put this into your spec_helper.rb:
class RSpecHook
  class << self
    attr_accessor :hooked
  end

  def example_failed(example)
    # Code goes here
  end
end

module FailureDetection
  def register_listener(listener, *notifications)
    super
    return if ::RSpecHook.hooked

    @listeners[:example_failed] << ::RSpecHook.new
    ::RSpecHook.hooked = true
  end
end

RSpec::Core::Reporter.prepend FailureDetection
Of course it gets a little more complex if you wish to execute different callbacks depending on the spec you're running at the moment.
Anyway, this way you do not have to mess up your testing code with exceptions or counters to detect failures.

RSpec: the tested code is automatically started after the test

I have a problem with testing a Sensu plugin.
Every time I run RSpec to test the plugin, the tests run, but at the end of the run the original plugin is started automatically, so I see this in my console:
Finished in 0 seconds (files took 0.1513 seconds to load)
1 example, 0 failures
CheckDisk OK: # This comes from the plugin
A short explanation of how my system works:
The plugin calls the system 'wmic' command, processes its output, checks conditions on the disk parameters, and returns an exit status (ok, critical, etc.).
RSpec mocks the system response and feeds it to the plugin. At the end, RSpec checks the plugin's exit status for the mocked input.
My plugin looks like this:
require 'rubygems' if RUBY_VERSION < '1.9.0'
require 'sensu-plugin/check/cli'

class CheckDisk < Sensu::Plugin::Check::CLI
  def initialize
    super
    @crit_fs = []
  end

  def get_wmic
    `wmic volume where DriveType=3 list brief`
  end

  def read_wmic
    get_wmic
    # do something, fill the class variables with system response
  end

  def run
    severity = "ok"
    msg = ""
    read_wmic

    unless @crit_fs.empty?
      severity = "critical"
    end

    case severity
    when /ok/
      ok msg
    when /warning/
      warning msg
    when /critical/
      critical msg
    end
  end
end
Here is my test in RSpec:
require_relative '../check-disk.rb'
require 'rspec'

def loadFile
  # Load template of system output for 'wmic volume(...)'
end

def fillParametersInTemplate(template, parameters)
  # set mocked disk parameters in template
end

def initializeMocks(options)
  mockedSysOutput = fillParametersInTemplate @loadedTemplate, options
  po = String.new(mockedSysOutput)
  allow(checker).to receive(:get_wmic).and_return(po) # mock system call here
end

describe CheckDisk do
  let(:checker) { described_class.new }

  before(:each) do
    @loadedTemplate = loadFile
    def checker.critical(*_args)
      exit 2
    end
  end

  context "When % of free disk space = 10 >" do
    options = { :diskName => 'C:\\', :diskSize => 1000, :diskFreeSpace => 100 }

    it 'Returns ok exit status ' do
      begin
        initializeMocks options
        checker.run
      rescue SystemExit => e
        exit_code = e.status
      end
      expect(exit_code).to eq 0
    end
  end
end
I know that I could just put "exit 0" after the last example, but this is not a solution, because when I try to run many spec files it will exit after the first one. How can I run only the tests, without running the plugin? Can someone show me how to handle this problem?
Thank you.
You can stub the original plugin call and optionally return a dummy object:
allow(SomeObject).to receive(:method) # .and_return(double)
You can put it in a before block to make sure that all assertions share the code.
Another thing: you are using rescue blocks to catch the situation when your code aborts with an error. You should use the raise_error matcher instead:
expect { checker.run }.to raise_error(SystemExit)
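If you also need to assert on the exit status, the matcher accepts a block that receives the raised error. A minimal sketch:
expect { checker.run }.to raise_error(SystemExit) { |e| expect(e.status).to eq 0 }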

Override rake test:units runner

I recently decided to write a simple test runtime profiler for our Rails 3.0 app's test suite. It's a very simple (read: hacky) script that adds each test's time to a global, and then outputs the result at the end of the run:
require 'test/unit/ui/console/testrunner'

module ProfilingHelper
  def self.included(mod)
    $test_times ||= []
    mod.class_eval do
      setup :setup_profiling
      def setup_profiling
        @test_start_time = Time.now
      end

      teardown :teardown_profiling
      def teardown_profiling
        @test_took_time = Time.now - @test_start_time
        $test_times << [name, @test_took_time]
      end
    end
  end
end

class ProfilingRunner < Test::Unit::UI::Console::TestRunner
  def finished(elapsed_time)
    super
    tests = $test_times.sort { |x, y| y[1] <=> x[1] }.first(100)
    output("Top 100 slowest tests:")
    tests.each do |t|
      output("#{t[1].round(2)}s: \t #{t[0]}")
    end
  end
end

Test::Unit::AutoRunner::RUNNERS[:profiling] = proc do |r|
  ProfilingRunner
end
This allows me to run the suites like so: rake test:xxx TESTOPTS="--runner=profiling", and get a list of the top 100 tests appended to the end of the default runner's output. It works great for test:functionals and test:integration, and even for test:units TEST='test/unit/an_example_test.rb'. But if I do not specify a test for test:units, the TESTOPTS appears to be ignored.
In classic SO style, I found the answer after articulating the problem clearly to myself, so here it is:
When run without TEST=test/unit/blah_test.rb, test:units needs a -- before the contents of TESTOPTS. So the solution in its entirety is simply:
rake test:units TESTOPTS='-- --runner=profiling'

Are there any good ruby testing traceability solutions?

I'm writing some ruby (not Rails) and using test/unit with shoulda to write tests.
Are there any gems that'll allow me to implement traceability from my tests back to designs/requirements?
i.e.: I want to tag my tests with the name of the requirements they test, and then generate reports of requirements that aren't tested or have failing tests, etc.
Hopefully that's not too enterprisey for ruby.
Thanks!
Update: This solution is available as a gem: http://rubygems.org/gems/test4requirements
Are there any gems that'll allow me to implement traceability from my tests back to designs/requirements?

I don't know of any gem, but your need inspired a little experiment in how it could be solved:
You have to define your requirements with RequirementList.new(1, 2, 3, 4).
These requirements can be assigned in a TestCase with requirements.
Each test can be assigned to a requirement with requirement.
After the test run you get an overview of which requirements were tested (successfully).
And now the example:
gem 'test-unit'
require 'test/unit'

###########
# This should be a gem
###########
class Test::Unit::TestCase
  def self.requirements(req)
    @@requirements = req
  end

  def requirement(req)
    raise RuntimeError, "No requirements defined for #{self}" unless defined? @@requirements
    caller.first =~ /:\d+:in `(.*)'/
    @@requirements.add_test(req, "#{self.class}##{$1}")
  end

  alias :run_test_old :run_test
  def run_test
    run_test_old
    # This code is skipped if a problem occurred.
    # In other words: if we reach this place, the test was successful.
    if defined? @@requirements
      @@requirements.test_successfull("#{self.class}##{@method_name}")
    end
  end
end

class RequirementList
  def initialize(*reqs)
    @requirements = reqs
    @tests = {}
    @success = {}
    # Yes, we need two at_exit.
    # Tests are also run at_exit. With a double at_exit, we run after them.
    # Maybe better to be added later.
    at_exit {
      at_exit do
        self.overview
      end
    }
  end

  def add_test(key, loc)
    # fixme: check duplicates
    @tests[key] = loc
  end

  def test_successfull(loc)
    # fixme: check duplicates
    @success[loc] = true
  end

  def overview()
    puts "Requirements overview:"
    @requirements.each { |req|
      if @tests[req] # test defined
        if @success[@tests[req]]
          puts "Requirement #{req} was tested in #{@tests[req]}"
        else
          puts "Requirement #{req} was unsuccessfully tested in #{@tests[req]}"
        end
      else
        puts "Requirement #{req} was not tested"
      end
    }
  end
end # RequirementList

###############
## Here the gem ends. The tests follow.
###############
$req = RequirementList.new(1, 2, 3, 4)

class MyTest < Test::Unit::TestCase
  # The following requirements exist and must be tested successfully
  requirements $req

  def test_1()
    requirement(1) # this test is testing requirement 1
    assert_equal(2, 1 + 1)
  end

  def test_2()
    requirement(2)
    assert_equal(3, 1 + 1)
  end

  def test_3()
    # no assignment to requirement 3
    pend 'pend'
  end
end

class MyTest_4 < Test::Unit::TestCase
  # The following requirements exist and must be tested successfully
  requirements $req

  def test_4()
    requirement(4) # this test is testing requirement 4
    assert_equal(2, 1 + 1)
  end
end
the result:
Loaded suite testing_traceability_solutions
Started
.FP.
1) Failure:
test_2(MyTest)
[testing_traceability_solutions.rb:89:in `test_2'
testing_traceability_solutions.rb:24:in `run_test']:
<3> expected but was
<2>.
2) Pending: pend
test_3(MyTest)
testing_traceability_solutions.rb:92:in `test_3'
testing_traceability_solutions.rb:24:in `run_test'
Finished in 0.65625 seconds.
4 tests, 3 assertions, 1 failures, 0 errors, 1 pendings, 0 omissions, 0 notifications
50% passed
Requirements overview:
Requirement 1 was tested in MyTest#test_1
Requirement 2 was unsuccessfully tested in MyTest#test_2
Requirement 3 was not tested
Requirement 4 was tested in MyTest_4#test_4
If you think this could be a solution for you, please give me feedback; then I will try to build a gem out of it.
Code example for usage with shoulda
#~ require 'test4requirements' ### does not exist yet / use the code above
require 'shoulda'

# use another interface ## not implemented ###
#~ $req = Requirement.new_from_file('requirements.txt')

class MyTest_shoulda < Test::Unit::TestCase
  # The following requirements exist and must be tested successfully
  #~ requirements $req

  context 'req. of customer X' do
    # Add requirement as a parameter of should
    # does not work yet
    should 'fulfill request 1', requirement: 1 do
      assert_equal(2, 1 + 1)
    end

    # Add requirement via the requirement command
    # works already
    should 'fulfill request 1' do
      requirement(1) # this test is testing requirement 1
      assert_equal(2, 1 + 1)
    end
  end # context
end # MyTest_shoulda
With Cucumber you can have your requirement be the test; it doesn't get any more traceable than that :)
So a single requirement is a feature, and a feature has scenarios that you want to test.
# addition.feature
Feature: Addition
  In order to avoid silly mistakes
  As a math idiot
  I want to be told the sum of two numbers

  Scenario Outline: Add two numbers
    Given I have entered <input_1> into the calculator
    And I have entered <input_2> into the calculator
    When I press <button>
    Then the result should be <output> on the screen

    Examples:
      | input_1 | input_2 | button | output |
      | 20      | 30      | add    | 50     |
      | 2       | 5       | add    | 7      |
      | 0       | 40      | add    | 40     |
Then you have step definitions written in Ruby, mapped to the scenario:
# step_definitons/calculator_steps.rb
begin require 'rspec/expectations'; rescue LoadError; require 'spec/expectations'; end
require 'cucumber/formatter/unicode'
$:.unshift(File.dirname(__FILE__) + '/../../lib')
require 'calculator'

Before do
  @calc = Calculator.new
end

After do
end

Given /I have entered (\d+) into the calculator/ do |n|
  @calc.push n.to_i
end

When /I press (\w+)/ do |op|
  @result = @calc.send op
end

Then /the result should be (.*) on the screen/ do |result|
  @result.should == result.to_f
end
