Set timeout for a Ruby Test::Unit run

I am using a rake task to run tests written in Ruby.
The rake task:
desc "This Run Tests on my ruby app"
Rake::TestTask.new do |t|
t.libs << File.dirname(__FILE__)
t.test_files = FileList['test*.rb']
t.verbose = true
end
I would like to add a timeout so that if any test (or the entire suite) hangs, a timeout exception is raised and the test fails.
I tried to create a new task that runs the test task with a timeout:
require 'timeout'

desc "Run tests with timeout"
task :run_tests do
  Timeout.timeout(200) do
    Rake::Task['test'].invoke
  end
end
The result was that a Timeout::Error was raised, but the tests kept running.
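That happens because Rake::TestTask runs the tests in a separate child process, so the timeout fires in the parent while the child keeps going. A sketch of a workaround (untested): supervise the real test task from a child process and kill it when the timer expires.

require 'timeout'

desc "Run tests with a hard 200 s limit"
task :run_tests do
  pid = Process.spawn('rake test') # run the real test task in a child process
  begin
    Timeout.timeout(200) { Process.wait(pid) }
  rescue Timeout::Error
    Process.kill('KILL', pid) # stop the hung test run
    Process.wait(pid)         # reap the child
    abort 'Test suite timed out'
  end
end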

I've been looking for something similar, and ended up writing this:
require 'timeout'

# Provides an individual timeout for every test.
#
# In general tests should run in less than 1 s, so 5 s is quite a generous timeout.
#
# Timeouts can be overridden per class (in rare cases where tests need more time)
# by setting, for example, `self.test_timeout = 10 # s`.
module TestTimeoutHelper
  def self.included(base)
    class << base
      attr_accessor :test_timeout
    end
    base.test_timeout = 5 # s
  end

  # This overrides the default Minitest behaviour of measuring time, adding a timeout :)
  # This helps with: (a) keeping tests fast :) (b) detecting infinite loops
  #
  # In general, however, benchmark tests should be used instead.
  # Timeout is quite unreliable, by the way, but in general it works.
  def time_it
    t0 = Minitest.clock_time
    Timeout.timeout(self.class.test_timeout, Timeout::Error, 'Test took too long (infinite loop?)') do
      yield
    end
  ensure
    self.time = Minitest.clock_time - t0
  end
end
This module should be included either in specific test cases or in a shared base test case.
It works with Minitest 5.x.
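For example (a hypothetical test case; the class and method names are placeholders):

require 'minitest/autorun'

class SlowAlgorithmTest < Minitest::Test
  include TestTimeoutHelper
  self.test_timeout = 10 # s, overriding the 5 s default

  def test_terminates
    assert_equal 4, 2 + 2
  end
end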

This code adds a timeout for the entire suite:
def self.suite
  mysuite = super
  def mysuite.run(*args)
    Timeout.timeout(600) do
      super
    end
  end
  mysuite
end
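This relies on the classic Test::Unit-style suite API, so it belongs inside the test case class itself. A sketch of the placement, assuming a Test::Unit::TestCase subclass (the class name is illustrative):

require 'test/unit'
require 'timeout'

class MySuiteTest < Test::Unit::TestCase
  def self.suite
    mysuite = super
    def mysuite.run(*args)
      Timeout.timeout(600) { super }
    end
    mysuite
  end

  def test_something
    assert true
  end
end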


How to make sure each Minitest unit test is fast enough?

I have a large number of Minitest unit tests (methods), over 300. They all take some time, from a few milliseconds to a few seconds. Some of them hang sporadically, and I can't tell which one or when.
I want to apply Timeout to each of them, to make sure any single test fails if it takes longer than, say, 5 seconds. Is that achievable?
For example:
class FooTest < Minitest::Test
  def test_calculates_something
    # Something potentially too slow
  end
end
You can use the Minitest plugin loader to load a plugin. This is by far the cleanest solution, although the plugin system is not very well documented.
Luckily, Adam Sanderson wrote an article on the plugin system.
Even better, that article explains the plugin system by building a sample plugin that reports slow tests. Try out minitest-snail; it is probably almost what you want.
With a little modification we can use the Reporter to mark a test as failed if it is too slow, like so (untested):
File minitest/snail_reporter.rb:
module Minitest
  class SnailReporter < Reporter
    attr_reader :max_duration, :slow_tests

    def self.options
      @default_options ||= {
        :max_duration => 2
      }
    end

    def self.enable!(options = {})
      @enabled = true
      self.options.merge!(options)
    end

    def self.enabled?
      @enabled ||= false
    end

    def initialize(io = STDOUT, options = self.class.options)
      super
      @max_duration = options.fetch(:max_duration)
      @slow_tests = [] # filled in by #record
      @passed = true
    end

    def record(result)
      @passed = result.time < max_duration
      slow_tests << result if !@passed
    end

    def passed?
      @passed
    end

    def report
      return if slow_tests.empty?
      slow_tests.sort_by! { |r| -r.time }
      io.puts
      io.puts "#{slow_tests.length} slow tests."
      slow_tests.each_with_index do |result, i|
        io.puts "%3d) %s: %.2f s" % [i + 1, result.location, result.time]
      end
    end
  end
end
File minitest/snail_plugin.rb:
require_relative './snail_reporter'
module Minitest
  def self.plugin_snail_options(opts, options)
    opts.on "--max-duration TIME", "Report tests that take longer than TIME seconds." do |max_duration|
      SnailReporter.enable! :max_duration => max_duration.to_f
    end
  end

  def self.plugin_snail_init(options)
    if SnailReporter.enabled?
      io = options[:io]
      Minitest.reporter.reporters << SnailReporter.new(io)
    end
  end
end
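Minitest discovers plugins by looking for minitest/*_plugin.rb files on the load path, so keep both files in a minitest/ directory (for example lib/minitest/). A hypothetical invocation, assuming the test file requires 'minitest/autorun':
ruby -Ilib -Itest test/foo_test.rb --max-duration 5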

RSpec: the tested code is automatically started after the test

I have a problem with testing a Sensu plugin.
Every time I run RSpec to test the plugin, the test passes, but at the end of the run the original plugin starts automatically, so I see this in my console:
Finished in 0 seconds (files took 0.1513 seconds to load)
1 example, 0 failures
CheckDisk OK: # This comes from the plugin
A short explanation of how my system works:
The plugin calls the system 'wmic' command, processes it, checks conditions on the disk parameters, and returns an exit status (ok, critical, etc.).
RSpec mocks the system response and feeds it to the plugin's input. At the end, RSpec checks the plugin's exit status for the given mocked input.
My plugin looks like this:
require 'rubygems' if RUBY_VERSION < '1.9.0'
require 'sensu-plugin/check/cli'

class CheckDisk < Sensu::Plugin::Check::CLI
  def initialize
    super
    @crit_fs = []
  end

  def get_wmic
    `wmic volume where DriveType=3 list brief`
  end

  def read_wmic
    get_wmic
    # do something, fill the instance variables with the system response
  end

  def run
    severity = "ok"
    msg = ""
    read_wmic
    unless @crit_fs.empty?
      severity = "critical"
    end
    case severity
    when /ok/
      ok msg
    when /warning/
      warning msg
    when /critical/
      critical msg
    end
  end
end
Here is my test in RSpec:
require_relative '../check-disk.rb'
require 'rspec'

def loadFile
  # Load a template of the system output for 'wmic volume (...)'
end

def fillParametersInTemplate(template, parameters)
  # Set mocked disk parameters in the template
end

def initializeMocks(options)
  mockedSysOutput = fillParametersInTemplate @loadedTemplate, options
  po = String.new(mockedSysOutput)
  allow(checker).to receive(:get_wmic).and_return(po) # mock the system call here
end

describe CheckDisk do
  let(:checker) { described_class.new }

  before(:each) do
    @loadedTemplate = loadFile
    def checker.critical(*_args)
      exit 2
    end
  end

  context "When % of free disk space = 10 >" do
    options = { :diskName => 'C:\\', :diskSize => 1000, :diskFreeSpace => 100 }
    it 'Returns ok exit status' do
      begin
        initializeMocks options
        checker.run
      rescue SystemExit => e
        exit_code = e.status
      end
      expect(exit_code).to eq 0
    end
  end
end
I know that I can just put "exit 0" after the last example, but that is not a solution, because when I run many spec files it will exit after the first one. How can I run only the tests, without running the plugin itself? Maybe someone can show me how to handle this problem?
Thank you.
You can stub the original plugin call and optionally return a dummy object:
allow(SomeObject).to receive(:method) # .and_return(double)
You can put it in a before block to make sure that all examples share the stub.
Another thing: you are using rescue blocks to catch the situation where your code aborts with an error. You should use the raise_error matcher instead:
expect { checker.run }.to raise_error(SystemExit)
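Putting the two together, the example can assert on the exit status directly (a sketch; initializeMocks, options and checker come from the spec above):

it 'returns ok exit status' do
  initializeMocks options
  expect { checker.run }.to raise_error(SystemExit) { |e| expect(e.status).to eq 0 }
end

As for the plugin firing after the suite: sensu-plugin starts the check from an at_exit hook installed when your class inherits from Sensu::Plugin::Check::CLI, so even a clean spec run triggers it on process exit. Look for a way to disable that autorun behaviour in the version of the gem you use; the exact mechanism varies between sensu-plugin versions.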

In RSpec, how to determine the time each spec file takes to run?

Background: my project's continuous integration build runs RSpec in several parallel runs. Specs are partitioned across parallel runs by spec file, which means long spec files dominate the test suite's run time. So I want to know the time each spec file takes to run (not just the time each example takes to run).
How can I get RSpec to tell me the time each spec file takes to run? Several of RSpec's stock formatters report the time each example takes, but they don't sum the times per spec file.
I'm using RSpec 3.2.
I addressed this need by writing my own RSpec formatter. Put the following class in spec/support, make sure it's required, and run rspec like so:
rspec --format SpecTimeFormatter --out spec-times.txt
class SpecTimeFormatter < RSpec::Core::Formatters::BaseFormatter
  RSpec::Core::Formatters.register self, :example_started, :stop

  def initialize(output)
    @output = output
    @times = []
  end

  def example_started(notification)
    current_spec = notification.example.file_path
    if current_spec != @current_spec
      if @current_spec_start_time
        save_current_spec_time
      end
      @current_spec = current_spec
      @current_spec_start_time = Time.now
    end
  end

  def stop(_notification)
    save_current_spec_time
    @times.
      sort_by { |_spec, time| -time }.
      each { |spec, time| @output << "#{'%4d' % time} seconds #{spec}\n" }
  end

  private

  def save_current_spec_time
    @times << [@current_spec, (Time.now - @current_spec_start_time).to_i]
  end
end
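If your project does not already load spec/support automatically, a common spec_helper.rb line does it (an assumption; adapt the path to your layout):

# spec/spec_helper.rb
Dir[File.expand_path('support/**/*.rb', __dir__)].sort.each { |f| require f }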

Override rake test:units runner

I recently decided to write a simple test runtime profiler for our Rails 3.0 app's test suite. It's a very simple (read: hacky) script that adds each test's time to a global, and then outputs the result at the end of the run:
require 'test/unit/ui/console/testrunner'

module ProfilingHelper
  def self.included mod
    $test_times ||= []
    mod.class_eval do
      setup :setup_profiling
      def setup_profiling
        @test_start_time = Time.now
      end

      teardown :teardown_profiling
      def teardown_profiling
        @test_took_time = Time.now - @test_start_time
        $test_times << [name, @test_took_time]
      end
    end
  end
end

class ProfilingRunner < Test::Unit::UI::Console::TestRunner
  def finished(elapsed_time)
    super
    tests = $test_times.sort { |x, y| y[1] <=> x[1] }.first(100)
    output("Top 100 slowest tests:")
    tests.each do |t|
      output("#{t[1].round(2)}s: \t #{t[0]}")
    end
  end
end

Test::Unit::AutoRunner::RUNNERS[:profiling] = proc do |r|
  ProfilingRunner
end
This allows me to run the suites like so: rake test:xxx TESTOPTS="--runner=profiling", and get a list of the top 100 slowest tests appended to the end of the default runner's output. It works great for test:functionals and test:integration, and even for test:units TEST='test/unit/an_example_test.rb'. But if I do not specify a test for test:units, TESTOPTS appears to be ignored.
In classic SO style, I found the answer after articulating the question clearly to myself, so here it is:
When run without TEST=test/unit/blah_test.rb, the TESTOPTS passed to test:units needs a -- before its contents. So the solution in its entirety is simply:
rake test:units TESTOPTS='-- --runner=profiling'

Performance testing with RSpec

I'm trying to incorporate performance tests into a test suite for a non-Rails app and have a couple of problems:
I don't need to run the perf tests every time. How can I exclude them? Commenting and uncommenting config.filter_run_excluding :perf => true seems like a bad idea.
How do I report benchmark results? I think RSpec has some mechanism for that.
I created the rspec-benchmark Ruby gem for writing performance tests in RSpec. It has many expectations for testing speed, resource usage, and scalability.
For example, to test how fast your code is:
expect { ... }.to perform_under(60).ms
Or to compare with another implementation:
expect { ... }.to perform_faster_than { ... }.at_least(5).times
Or to test computational complexity:
expect { ... }.to perform_logarithmic.in_range(8, 100_000)
Or to see how many objects get allocated:
expect {
  _a = [Object.new]
  _b = { Object.new => 'foo' }
}.to perform_allocation({ Array => 1, Object => 2 }).objects
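For context, a minimal spec using the gem might look like this (the include line follows the gem's README; the example itself is illustrative):

require 'rspec-benchmark'

RSpec.configure do |config|
  config.include RSpec::Benchmark::Matchers
end

RSpec.describe 'sorting performance' do
  it 'sorts 10k elements quickly' do
    expect { Array.new(10_000) { rand }.sort }.to perform_under(60).ms
  end
end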
To filter your tests, you can separate the specs into a performance directory and add a rake task:
require 'rspec/core/rake_task'

desc 'Run performance specs'
RSpec::Core::RakeTask.new(:perf) do |task|
  task.pattern = 'spec/performance{,/*/**}/*_spec.rb'
end
Then run them whenever you need them:
rake perf
The first problem is partially solved, and the second completely, by this piece of code in spec/spec_helper.rb:
class MessageHelper
  class << self
    def messages
      @messages ||= []
    end

    def add(msg)
      messages << msg
    end
  end
end

def message(msg)
  MessageHelper.add msg
end

RSpec.configure do |c|
  c.filter_run_excluding :perf => !ENV["PERF"]
  c.after(:suite) do
    puts "\nMessages:"
    MessageHelper.messages.each { |m| puts m }
  end
end
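For completeness, here is a hypothetical perf spec that uses both helpers; the names and numbers are illustrative. Run it with PERF=1 rspec so the :perf filter lets it through:

require 'benchmark'

describe 'heavy computation', :perf do
  it 'finishes within 5 seconds' do
    elapsed = Benchmark.realtime { 1_000_000.times { |i| i * i } }
    message format('squares took %.1f ms', elapsed * 1000)
    expect(elapsed).to be < 5
  end
end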
