I'm trying to incorporate performance tests into a test suite for a non-Rails app and have a couple of problems.
I don't need to run perf tests every time; how can I exclude them? Commenting and uncommenting config.filter_run_excluding :perf => true seems like a bad idea.
How do I report benchmark results? I think RSpec has some mechanism for that.
I created the rspec-benchmark Ruby gem for writing performance tests in RSpec. It has many expectations for testing speed, resource usage, and scalability.
For example, to test how fast your code is:
expect { ... }.to perform_under(60).ms
Or to compare with another implementation:
expect { ... }.to perform_faster_than { ... }.at_least(5).times
Or to test computational complexity:
expect { ... }.to perform_logarithmic.in_range(8, 100_000)
Or to see how many objects get allocated:
expect {
_a = [Object.new]
_b = {Object.new => 'foo'}
}.to perform_allocation({Array => 1, Object => 2}).objects
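These matchers can be combined in an ordinary example group. A minimal sketch, assuming rspec-benchmark is installed and using made-up MyParser and LegacyParser classes as the code under test:

require 'rspec-benchmark'

RSpec.describe 'MyParser performance' do
  include RSpec::Benchmark::Matchers

  it 'parses a small document quickly' do
    expect { MyParser.parse('a=1') }.to perform_under(10).ms
  end

  it 'is faster than the legacy parser' do
    expect { MyParser.parse('a=1') }.to perform_faster_than { LegacyParser.parse('a=1') }.at_least(2).times
  end
end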
To filter your tests, you can separate the specs into a performance directory and add a rake task:
require 'rspec/core/rake_task'
desc 'Run performance specs'
RSpec::Core::RakeTask.new(:perf) do |task|
task.pattern = 'spec/performance{,/*/**}/*_spec.rb'
end
Then run them whenever you need them:
rake perf
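If your default spec task would otherwise pick up those specs too, you can exclude the directory there as well; a sketch, assuming a recent rspec-core whose rake task supports exclude_pattern:

RSpec::Core::RakeTask.new(:spec) do |task|
  # keep everyday runs fast by skipping the performance directory
  task.exclude_pattern = 'spec/performance/**/*_spec.rb'
end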
The first problem is partially solved and the second solved completely by this piece of code in spec/spec_helper.rb:
class MessageHelper
class << self
def messages
@messages ||= []
end
def add(msg)
messages << msg
end
end
end
def message(msg)
MessageHelper.add msg
end
RSpec.configure do |c|
c.filter_run_excluding :perf => !ENV["PERF"]
c.after(:suite) do
puts "\nMessages:"
MessageHelper.messages.each {|m| puts m}
end
end
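With that in place, a performance spec only needs the :perf tag and can report results through the message helper; a hypothetical example (SlowThing is a made-up class):

# spec/performance/slow_thing_spec.rb
require 'benchmark'

RSpec.describe SlowThing, :perf => true do
  it 'runs in under a second' do
    elapsed = Benchmark.realtime { SlowThing.new.run }
    message format('SlowThing#run took %.2f ms', elapsed * 1000)
    expect(elapsed).to be < 1
  end
end

Run rspec as usual to skip these examples, or PERF=1 rspec to include them; the collected messages are printed once the suite finishes.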
Related
I have a large number of Minitest unit tests (methods), over 300. They all take some time, from a few milliseconds to a few seconds. Some of them hang sporadically, and I can't tell which ones or when.
I want to apply a Timeout to each of them, to make sure any test fails if it takes longer than, say, 5 seconds. Is that achievable?
For example:
class FooTest < Minitest::Test
def test_calculates_something
# Something potentially too slow
end
end
You can use the Minitest plugin loader to load a plugin. This is, by far, the cleanest solution. The plugin system is not very well documented, though.
Luckily, Adam Sanderson wrote an article on the plugin system.
The best news is that this article explains the plugin system by writing a sample plugin that reports slow tests. Try out minitest-snail; it is probably almost what you want.
With a little modification we can use the Reporter to mark a test as failed if it is too slow, like so (untested):
File minitest/snail_reporter.rb:
module Minitest
class SnailReporter < Reporter
attr_reader :max_duration, :slow_tests
def self.options
@default_options ||= {
:max_duration => 2
}
end
def self.enable!(options = {})
@enabled = true
self.options.merge!(options)
end
def self.enabled?
@enabled ||= false
end
def initialize(io = STDOUT, options = self.class.options)
super
@max_duration = options.fetch(:max_duration)
@slow_tests = []
end
def record result
@passed = result.time < max_duration
slow_tests << result if !@passed
end
def passed?
@passed
end
def report
return if slow_tests.empty?
slow_tests.sort_by!{|r| -r.time}
io.puts
io.puts "#{slow_tests.length} slow tests."
slow_tests.each_with_index do |result, i|
io.puts "%3d) %s: %.2f s" % [i+1, result.location, result.time]
end
end
end
end
File minitest/snail_plugin.rb:
require_relative './snail_reporter'
module Minitest
def self.plugin_snail_options(opts, options)
opts.on "--max-duration TIME", "Report tests that take longer than TIME seconds." do |max_duration|
SnailReporter.enable! :max_duration => max_duration.to_f
end
end
def self.plugin_snail_init(options)
if SnailReporter.enabled?
io = options[:io]
Minitest.reporter.reporters << SnailReporter.new(io)
end
end
end
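Assuming both files sit on the load path (for example under lib/minitest/), Minitest's plugin loader should pick up snail_plugin.rb automatically, and the reporter can then be enabled from the command line, for example:

ruby -Ilib -Itest test/foo_test.rb --max-duration 5

or through the rake test task:

rake test TESTOPTS="--max-duration 5"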
I am using a rake task to run tests written in Ruby.
The rake task:
desc "This Run Tests on my ruby app"
Rake::TestTask.new do |t|
t.libs << File.dirname(__FILE__)
t.test_files = FileList['test*.rb']
t.verbose = true
end
I would like to create a timeout so that if any test (or the entire suite) hangs a timeout exception will be thrown and the test will fail.
I tried to create a new task that would run the test task with a timeout:
desc "Run Tests with timeout"
task :run_tests do
Timeout::timeout(200) do
Rake::Task['test'].invoke
end
end
The result was that a timeout was thrown, but the test continued to run.
I've been looking for something similar, and ended up writing this:
require 'timeout'
# Provides an individual timeout for every test.
#
# In general tests should run for less than 1 s, so 5 s is quite a generous timeout.
#
# Timeouts can be overridden per-class (in rare cases where tests should take more time)
# by setting for example `self.test_timeout = 10 #s`
module TestTimeoutHelper
def self.included(base)
class << base
attr_accessor :test_timeout
end
base.test_timeout = 5 # s
end
# This overrides the default minitest behaviour of measuring time, with extra timeout :)
# This helps out with: (a) keeping tests fast :) (b) detecting infinite loops
#
# In general, however, benchmark tests should be used instead.
# Timeout is quite unreliable, by the way, but in general it works.
def time_it
t0 = Minitest.clock_time
Timeout.timeout(self.class.test_timeout, Timeout::Error, 'Test took too long (infinite loop?)') do
yield
end
ensure
self.time = Minitest.clock_time - t0
end
end
This module should be included into either specific test cases or a general base test case.
It works with MiniTest 5.x.
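A sketch of how it might be wired in, using a made-up test class and heavy_computation method:

class SlowStuffTest < Minitest::Test
  include TestTimeoutHelper
  # this class legitimately needs more than the 5 s default
  self.test_timeout = 10 # s

  def test_heavy_computation
    # raises Timeout::Error (and so fails) if it runs longer than 10 s
    heavy_computation
  end
end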
This code adds a timeout for the entire suite:
def self.suite
mysuite = super
def mysuite.run(*args)
Timeout::timeout(600) do
super
end
end
mysuite
end
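For context, this override is meant to live inside a test case class; a sketch assuming the older Test::Unit-style API (MiniTest 4 and earlier), where test classes expose a suite class method:

require 'timeout'
require 'test/unit'

class FooTest < Test::Unit::TestCase
  def self.suite
    mysuite = super
    # wrap the whole suite run in a 10-minute timeout
    def mysuite.run(*args)
      Timeout::timeout(600) do
        super
      end
    end
    mysuite
  end

  def test_something_potentially_slow
    # ...
  end
end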
When one of my it blocks fails, I want to run a cleanup step. When all of the it blocks succeed I don't want to run the cleanup step.
RSpec.describe 'my describe' do
it 'first it' do
logic_that_might_fail
end
it 'second it' do
logic_that_might_fail
end
after(:all) do
cleanup_logic if ONE_OF_THE_ITS_FAILED
end
end
How do I implement ONE_OF_THE_ITS_FAILED?
Not sure if RSpec provides something out of the box, but this would work:
RSpec.describe 'my describe' do
before(:all) do
@exceptions = []
end
after(:each) do |example|
@exceptions << example.exception
end
after(:all) do |a|
cleanup_logic if @exceptions.any?
end
# ...
end
I dug a little into the RSpec code and found a way to monkey patch the RSpec Reporter class. Put this into your spec_helper.rb:
class RSpecHook
class << self
attr_accessor :hooked
end
def example_failed(example)
# Code goes here
end
end
module FailureDetection
def register_listener(listener, *notifications)
super
return if ::RSpecHook.hooked
@listeners[:example_failed] << ::RSpecHook.new
::RSpecHook.hooked = true
end
end
RSpec::Core::Reporter.prepend FailureDetection
Of course it gets a little more complex if you wish to execute different callbacks depending on the spec you're running at the moment.
Anyway, this way you do not have to clutter your testing code with exceptions or counters to detect failures.
I recently decided to write a simple test runtime profiler for our Rails 3.0 app's test suite. It's a very simple (read: hacky) script that adds each test's time to a global, and then outputs the result at the end of the run:
require 'test/unit/ui/console/testrunner'
module ProfilingHelper
def self.included mod
$test_times ||= []
mod.class_eval do
setup :setup_profiling
def setup_profiling
@test_start_time = Time.now
end
teardown :teardown_profiling
def teardown_profiling
@test_took_time = Time.now - @test_start_time
$test_times << [name, @test_took_time]
end
end
end
end
class ProfilingRunner < Test::Unit::UI::Console::TestRunner
def finished(elapsed_time)
super
tests = $test_times.sort{|x,y| y[1] <=> x[1]}.first(100)
output("Top 100 slowest tests:")
tests.each do |t|
output("#{t[1].round(2)}s: \t #{t[0]}")
end
end
end
Test::Unit::AutoRunner::RUNNERS[:profiling] = proc do |r|
ProfilingRunner
end
This allows me to run the suites like so: rake test:xxx TESTOPTS="--runner=profiling", and get a list of the top 100 slowest tests appended to the end of the default runner's output. It works great for test:functionals and test:integration, and even for test:units TEST='test/unit/an_example_test.rb'. But if I do not specify a test for test:units, the TESTOPTS appears to be ignored.
In classic SO style, I found the answer after articulating the question clearly to myself, so here it is:
When run without TEST=test/unit/blah_test.rb, the test:units task's TESTOPTS needs a -- before its contents. So the solution in its entirety is simply:
rake test:units TESTOPTS='-- --runner=profiling'
I am running rspec tests on a catalog object from within a Ruby app, using RSpec::Core::Runner::run:
File.open('/tmp/catalog', 'w') do |out|
YAML.dump(catalog, out)
end
...
unless RSpec::Core::Runner::run(spec_dirs, $stderr, out) == 0
raise Puppet::Error, "Unit tests failed:\n#{out.string}"
end
(The full code can be found at https://github.com/camptocamp/puppet-spec/blob/master/lib/puppet/indirector/catalog/rest_spec.rb)
In order to pass the object I want to test, I dump it as YAML to a file (currently /tmp/catalog) and load it as subject in my tests:
describe 'notrun' do
subject { YAML.load_file('/tmp/catalog') }
it { should contain_package('ppet') }
end
Is there a way I could pass the catalog object as subject to my tests without dumping it to a file?
I am not very clear on what exactly you are trying to achieve, but from my understanding, using a before(:each) hook might be of use to you. You can define variables in this block that are available to all the examples in that scope.
Here is an example:
require "rspec/expectations"
class Thing
def widgets
@widgets ||= []
end
end
describe Thing do
before(:each) do
@thing = Thing.new
end
describe "initialized in before(:each)" do
it "has 0 widgets" do
# @thing is available here
@thing.should have(0).widgets
end
it "can get accept new widgets" do
@thing.widgets << Object.new
end
it "does not share state across examples" do
@thing.should have(0).widgets
end
end
end
You can find more details at:
https://www.relishapp.com/rspec/rspec-core/v/2-2/docs/hooks/before-and-after-hooks#define-before(:each)-block