Are there any good Ruby testing traceability solutions?

I'm writing some ruby (not Rails) and using test/unit with shoulda to write tests.
Are there any gems that'll allow me to implement traceability from my tests back to designs/requirements?
i.e.: I want to tag my tests with the name of the requirements they test, and then generate reports of requirements that aren't tested or have failing tests, etc.
Hopefully that's not too enterprisey for ruby.
Thanks!

Update: This solution is available as a gem: http://rubygems.org/gems/test4requirements
Are there any gems that'll allow me to implement traceability from my tests back to designs/requirements?
I don't know of any gem, but your need inspired a little experiment in how it could be solved.
You define your requirements with RequirementList.new(1, 2, 3, 4).
These requirements can be assigned to a TestCase with requirements.
Each test can be assigned to a requirement with requirement.
After the test run you get an overview of which requirements were tested (successfully).
And now the example:
gem 'test-unit'
require 'test/unit'

###########
# This should be a gem
###########
class Test::Unit::TestCase
  def self.requirements(req)
    @@requirements = req
  end

  def requirement(req)
    raise RuntimeError, "No requirements defined for #{self}" unless defined? @@requirements
    caller.first =~ /:\d+:in `(.*)'/
    @@requirements.add_test(req, "#{self.class}##{$1}")
  end

  alias :run_test_old :run_test
  def run_test
    run_test_old
    # This code is skipped if a problem occurred.
    # In other words: if we reach this place, the test was successful.
    if defined? @@requirements
      @@requirements.test_successful("#{self.class}##{@method_name}")
    end
  end
end

class RequirementList
  def initialize(*reqs)
    @requirements = reqs
    @tests = {}
    @success = {}
    # Yes, we need two at_exit calls: the tests themselves run at_exit,
    # and with the nested at_exit we run after them.
    # Maybe better to be added later.
    at_exit {
      at_exit do
        self.overview
      end
    }
  end

  def add_test(key, loc)
    # FIXME: check duplicates
    @tests[key] = loc
  end

  def test_successful(loc)
    # FIXME: check duplicates
    @success[loc] = true
  end

  def overview
    puts "Requirements overview:"
    @requirements.each { |req|
      if @tests[req] # test defined
        if @success[@tests[req]]
          puts "Requirement #{req} was tested in #{@tests[req]}"
        else
          puts "Requirement #{req} was unsuccessfully tested in #{@tests[req]}"
        end
      else
        puts "Requirement #{req} was not tested"
      end
    }
  end
end # RequirementList
###############
## Here the gem ends. The tests follow.
###############
$req = RequirementList.new(1, 2, 3, 4)

class MyTest < Test::Unit::TestCase
  # The following requirements exist and must be tested successfully
  requirements $req

  def test_1
    requirement(1) # this test is testing requirement 1
    assert_equal(2, 1 + 1)
  end

  def test_2
    requirement(2)
    assert_equal(3, 1 + 1)
  end

  def test_3
    # no assignment to requirement 3
    pend 'pend'
  end
end

class MyTest_4 < Test::Unit::TestCase
  # The following requirements exist and must be tested successfully
  requirements $req

  def test_4
    requirement(4) # this test is testing requirement 4
    assert_equal(2, 1 + 1)
  end
end
The result:
Loaded suite testing_traceability_solutions
Started
.FP.
1) Failure:
test_2(MyTest)
[testing_traceability_solutions.rb:89:in `test_2'
testing_traceability_solutions.rb:24:in `run_test']:
<3> expected but was
<2>.
2) Pending: pend
test_3(MyTest)
testing_traceability_solutions.rb:92:in `test_3'
testing_traceability_solutions.rb:24:in `run_test'
Finished in 0.65625 seconds.
4 tests, 3 assertions, 1 failures, 0 errors, 1 pendings, 0 omissions, 0 notifications
50% passed
Requirements overview:
Requirement 1 was tested in MyTest#test_1
Requirement 2 was unsuccessfully tested in MyTest#test_2
Requirement 3 was not tested
Requirement 4 was tested in MyTest_4#test_4
If you think this could be a solution for you, please give me feedback. Then I will try to build a gem out of it.
Code example for usage with shoulda
#~ require 'test4requirements' ### does not exist yet / use the code above
require 'shoulda'

# use another interface ## not implemented ###
#~ $req = Requirement.new_from_file('requirements.txt')

class MyTest_shoulda < Test::Unit::TestCase
  # The following requirements exist and must be tested successfully
  #~ requirements $req

  context 'req. of customer X' do
    # add requirement as a parameter of should
    # does not work yet
    should 'fulfill request 1', requirement: 1 do
      assert_equal(2, 1 + 1)
    end

    # add requirement via the requirement command
    # works already
    should 'fulfill request 1' do
      requirement(1) # this test is testing requirement 1
      assert_equal(2, 1 + 1)
    end
  end # context
end # MyTest_shoulda

With Cucumber you can have your requirement be the test; it doesn't get any more traceable than that :)
So a single requirement is a feature, and a feature has scenarios that you want to test.
# addition.feature
Feature: Addition
  In order to avoid silly mistakes
  As a math idiot
  I want to be told the sum of two numbers

  Scenario Outline: Add two numbers
    Given I have entered <input_1> into the calculator
    And I have entered <input_2> into the calculator
    When I press <button>
    Then the result should be <output> on the screen

    Examples:
      | input_1 | input_2 | button | output |
      | 20      | 30      | add    | 50     |
      | 2       | 5       | add    | 7      |
      | 0       | 40      | add    | 40     |
Then you have step definitions, written in Ruby, mapped to the scenario steps:
# step_definitions/calculator_steps.rb
begin require 'rspec/expectations'; rescue LoadError; require 'spec/expectations'; end
require 'cucumber/formatter/unicode'
$:.unshift(File.dirname(__FILE__) + '/../../lib')
require 'calculator'

Before do
  @calc = Calculator.new
end

After do
end

Given /I have entered (\d+) into the calculator/ do |n|
  @calc.push n.to_i
end

When /I press (\w+)/ do |op|
  @result = @calc.send op
end

Then /the result should be (.*) on the screen/ do |result|
  @result.should == result.to_f
end

Related

How to find the time taken for each test case in RSpec

I am using RSpec in my project, and I would like to print the time taken by each test case. Does RSpec provide any prebuilt function for this? I can get the start time of a test case via example.execution_result.started_at, but I don't know how to get the end time; if I could, I would subtract the start time from it to get each test case's duration. Can anyone help me here? I have written this code:
around(:each) do |example|
  start_time = Time.now
  var = example.run
  puts var
  end_time = Time.now
  duration = end_time - start_time
  puts "Time Taken->#{duration.to_f / 60.0}"
end
But I strongly believe RSpec must provide some predefined method that returns the duration of each test case. Does anyone know of one?
RSpec has an example_status_persistence_file_path configuration option that generates a file with the run time of each individual test.
For example, given the following RSpec configuration/examples:
require 'rspec/autorun'

# Enable the reporting
RSpec.configure do |c|
  c.example_status_persistence_file_path = 'some_file.txt'
end

# Run some tests
RSpec.describe 'some thing' do
  it 'does stuff' do
    sleep(3)
  end

  it 'does more stuff' do
    sleep(2)
  end
end
A report of each example's status and run time is generated:
example_id | status | run_time |
--------------- | ------ | ------------ |
my_spec.rb[1:1] | passed | 3.02 seconds |
my_spec.rb[1:2] | passed | 2.01 seconds |
If you want more detail and/or want to control the formatting, you can create a custom formatter.
For example, given the following specs:
RSpec.describe 'some thing' do
  it 'does stuff' do
    sleep(3)
    raise('some error')
  end

  it 'does more stuff' do
    sleep(2)
  end
end
Output - Text
We can add a custom formatter to output the full test description, status, run time and exception:
class ExampleFormatter < RSpec::Core::Formatters::JsonFormatter
  RSpec::Core::Formatters.register self

  def close(_notification)
    @output_hash[:examples].map do |ex|
      output.puts [ex[:full_description], ex[:status], ex[:run_time], ex[:exception]].join(' | ')
    end
  end
end

RSpec.configure do |c|
  c.formatter = ExampleFormatter
end
This gives us:
some thing does stuff | failed | 3.010178 | {:class=>"RuntimeError", :message=>"some error", :backtrace=>["my_spec.rb:21:in `block... (truncated for example)
some thing does more stuff | passed | 2.019578 |
The output can be modified to add headers, have nicer formatting, etc.
Output - CSV
The formatter can be modified to output to a CSV:
require 'csv'

class ExampleFormatter < RSpec::Core::Formatters::JsonFormatter
  RSpec::Core::Formatters.register self

  def close(_notification)
    with_headers = { write_headers: true, headers: ['Example', 'Status', 'Run Time', 'Exception'] }
    CSV.open(output.path, 'w', with_headers) do |csv|
      @output_hash[:examples].map do |ex|
        csv << [ex[:full_description], ex[:status], ex[:run_time], ex[:exception]]
      end
    end
  end
end

RSpec.configure do |c|
  c.add_formatter(ExampleFormatter, 'my_spec_log.csv')
end
Which gives:
Example,Status,Run Time,Exception
some thing does stuff,failed,3.020176,"{:class=>""RuntimeError"", :message=>""some error"", :backtrace=>[""my_spec.rb:25:in `block...(truncated example)"
some thing does more stuff,passed,2.020113,
Every example gets an ExecutionResult object which has a run_time method, so example.execution_result.run_time should give you what you're asking for.
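For instance, here is a minimal sketch of a formatter built on that method (RunTimeFormatter is a made-up name; assumes RSpec 3+, where run_time is already populated by the time the :example_finished notification fires):
class RunTimeFormatter
  RSpec::Core::Formatters.register self, :example_finished

  def initialize(output)
    @output = output
  end

  # Print each example's description and run time as it finishes.
  def example_finished(notification)
    ex = notification.example
    @output.puts "#{ex.full_description}: #{ex.execution_result.run_time.round(4)}s"
  end
end

RSpec.configure { |c| c.add_formatter(RunTimeFormatter) }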

Writing a test for a case statement in Ruby

I'm trying to write a test for a case statement using minitest. Would I need to write separate tests for each "when"? I included my code below. Right now it just puts statements, but eventually it's going to redirect users to different methods. Thanks!
require 'pry'
require_relative 'messages'

class Game
  attr_reader :user_answer

  def initialize(user_answer = gets.chomp.downcase)
    @user_answer = user_answer
  end

  def input
    case user_answer
    when "i"
      puts "information"
    when "q"
      puts "quitter"
    when "p"
      puts "player play"
    end
  end
end
This answer will help you. Nonetheless, I'll post one way of applying it to your situation. As suggested by @phortx, when initializing a game, override the default user input with the relevant string. Then, using assert_output, we can do something like:
# test_game.rb
require './game.rb'          # name and path of your game script
require 'minitest/autorun'   # needed to run tests

class GameTest < MiniTest::Test
  def setup
    @game_i = Game.new("i")  # overrides default user input
    @game_q = Game.new("q")
    @game_p = Game.new("p")
  end

  def test_case_i
    assert_output(/information\n/) { @game_i.input }
  end

  def test_case_q
    assert_output(/quitter\n/) { @game_q.input }
  end

  def test_case_p
    assert_output(/player play\n/) { @game_p.input }
  end
end
Running the tests...
$ ruby test_game.rb
#Run options: --seed 55321
## Running:
#...
#Finished in 0.002367s, 1267.6099 runs/s, 2535.2197 assertions/s.
#3 runs, 6 assertions, 0 failures, 0 errors, 0 skips
You have to test each case branch. With RSpec it would work this way:
describe Game do
  describe '#input' do
    it 'prints the message for each branch' do
      expect_any_instance_of(Game).to receive(:puts).with('information')
      Game.new('i').input

      expect_any_instance_of(Game).to receive(:puts).with('quitter')
      Game.new('q').input

      expect_any_instance_of(Game).to receive(:puts).with('player play')
      Game.new('p').input
    end
  end
end
However, since puts is ugly to test, you should refactor your code to something like this:
require 'pry'
require_relative 'messages'

class Game
  attr_reader :user_answer

  def initialize(user_answer = gets.chomp.downcase)
    @user_answer = user_answer
  end

  def input
    case user_answer
    when "i"
      "information"
    when "q"
      "quitter"
    when "p"
      "player play"
    end
  end

  def print_input
    puts input
  end
end
Then you can test with RSpec via:
describe Game do
  describe '#print_input' do
    it 'prints the message for the given answer' do
      expect_any_instance_of(Game).to receive(:puts).with('quitter')
      Game.new('q').print_input
    end
  end

  describe '#input' do
    it 'returns the message for each branch' do
      expect(Game.new('i').input).to eq('information')
      expect(Game.new('q').input).to eq('quitter')
      expect(Game.new('p').input).to eq('player play')
      expect(Game.new('x').input).to eq(nil)
    end
  end
end
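Since the question uses minitest rather than RSpec, the same branch checks can also be written as a minitest sketch (require_relative 'game' is a hypothetical path to the refactored Game class above):
require 'minitest/autorun'
require_relative 'game' # hypothetical path to the refactored Game class

class GameInputTest < Minitest::Test
  def test_each_branch
    assert_equal 'information', Game.new('i').input
    assert_equal 'quitter', Game.new('q').input
    assert_equal 'player play', Game.new('p').input
    assert_nil Game.new('x').input
  end
end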

Minitest - Tests Don't Run - No Rails

I'm just starting a small project to emulate a carnival's ticket sales booth, and one of the guidelines was to test that a user can enter the number of tickets. The program runs in the console, and I eventually (hopefully) figured out how to implement this test thanks to @Stefan's answer on this question.
The problem is that now, when I run the test file, minitest says:
0 runs, 0 assertions, 0 failures, 0 errors, 0 skips
I get the same result when I try to run the test by name using ruby path/to/test/file.rb --name method-name. I'm not sure if this is because my code is still faulty or because I've set up minitest incorrectly. I've tried to look up similar problems on SO, but most questions seem to involve using minitest with Rails, and I just have a plain Ruby project.
Here's my test file:
gem 'minitest', '>= 5.0.0'
require 'minitest/spec'
require 'minitest/autorun'
require_relative 'carnival'

class CarnivalTest < MiniTest::Test
  def sample
    assert_equal(1, 1)
  end

  def user_can_enter_number_of_tickets
    with_stdin do |user|
      user.puts "2"
      assert_equal(Carnival.new.get_value, "2")
    end
  end

  def with_stdin
    stdin = $stdin          # global var to remember $stdin
    $stdin, write = IO.pipe # assign 'read end' of pipe to $stdin
    yield write             # pass 'write end' to block
  ensure
    write.close             # close pipe
    $stdin = stdin          # restore $stdin
  end
end
In a file called carnival.rb in the same folder as my test file I have
class Carnival
  def get_value
    gets.chomp
  end
end
If anyone can help figure out why the tests are not running, I'd be grateful!
By convention, tests in Minitest are public instance methods whose names start with test_, so the original class has no actual test methods. You need to update your test class so that the methods containing assertions follow that convention:
class CarnivalTest < Minitest::Test
  def test_sample
    assert_equal(1, 1)
  end

  def test_user_can_enter_number_of_tickets
    with_stdin do |user|
      user.puts "2"
      assert_equal(Carnival.new.get_value, "2")
    end
  end

  # snip...
end
Yeah, always start your tests with test_ so the runner knows you want to run that method:
class CarnivalTest < MiniTest::Test
  def test_sample
    assert_equal(1, 1)
  end

  def test_user_can_enter_number_of_tickets
    with_stdin do |user|
      user.puts "2"
      assert_equal(Carnival.new.get_value, "2")
    end
  end
end
and that should work for you.

Override rake test:units runner

I recently decided to write a simple test runtime profiler for our Rails 3.0 app's test suite. It's a very simple (read: hacky) script that adds each test's time to a global, and then outputs the result at the end of the run:
require 'test/unit/ui/console/testrunner'

module ProfilingHelper
  def self.included mod
    $test_times ||= []
    mod.class_eval do
      setup :setup_profiling
      def setup_profiling
        @test_start_time = Time.now
      end

      teardown :teardown_profiling
      def teardown_profiling
        @test_took_time = Time.now - @test_start_time
        $test_times << [name, @test_took_time]
      end
    end
  end
end

class ProfilingRunner < Test::Unit::UI::Console::TestRunner
  def finished(elapsed_time)
    super
    tests = $test_times.sort { |x, y| y[1] <=> x[1] }.first(100)
    output("Top 100 slowest tests:")
    tests.each do |t|
      output("#{t[1].round(2)}s: \t #{t[0]}")
    end
  end
end

Test::Unit::AutoRunner::RUNNERS[:profiling] = proc do |r|
  ProfilingRunner
end
This allows me to run the suites like so: rake test:xxx TESTOPTS="--runner=profiling", and get a list of the top 100 slowest tests appended to the end of the default runner's output. It works great for test:functionals and test:integration, and even for test:units TEST='test/unit/an_example_test.rb'. But if I do not specify a test for test:units, the TESTOPTS appears to be ignored.
In classic SO style, I found the answer after articulating clearly to myself, so here it is:
When run without TEST=/test/unit/blah_test.rb, the TESTOPTS for test:units needs a -- before its contents. So the solution in its entirety is simply:
rake test:units TESTOPTS='-- --runner=profiling'

With Test::Unit, how can I run a bit of code before all tests (but not each test)?

In my test app, which uses test::unit, I need to start by pulling a bunch of data from various sources. I'd like to only do this once - the data is only read, not written, and doesn't change between tests, and the loading (and error checking for the loading), takes some time.
There are values that I DO want reset every time, and those are easy enough, but what if I want persistent, accessible values? What's the best way to do this?
I'm especially interested in solutions that would let me push those assignments into some module that can be included in all my tests, since they all need access to this data.
Why do you need it inside the test? You could define it globally:
gem 'test-unit' #, '>= 2.1.1'
require 'test/unit'

GLOBAL_DATA = 11

class My_Tests < Test::Unit::TestCase
  def test_1
    puts "Testing startup 1"
    assert_equal(11, GLOBAL_DATA)
  end
end
GLOBAL_DATA could also be a (singleton) class, or an instance of one.
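For example, here is a minimal sketch of that singleton variant (TestData and the fixture path are made-up names; the point is that the expensive load runs only once, on first access):
require 'singleton'

class TestData
  include Singleton

  attr_reader :records

  def initialize
    # Expensive, read-only load; runs only the first time
    # TestData.instance is called anywhere in the suite.
    @records = File.readlines('fixtures/data.txt', chomp: true)
  end
end

# In any test:
#   assert_equal(expected_count, TestData.instance.records.size)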
If you have only one test class, you may use TestCase.startup:
gem 'test-unit' #, '>= 2.1.1' # startup
require 'test/unit'

class My_Tests < Test::Unit::TestCase
  def self.startup
    puts "Define global_data"
    @@global_data = 11
  end

  def test_1
    puts "Testing 1"
    assert_equal(11, @@global_data)
  end

  def test_2
    puts "Testing 2"
    assert_equal(11, @@global_data)
  end
end
You can also just put the assignments at the top of the class body; they get executed when the class is loaded, and then your tests get executed.
You could do this in the setup method:
def setup
  if !defined?(@@initial_data)
    # Whatever you need to do to get your initial data
    @@initial_data = foo
  end
  @other_data = bar
end
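To cover the "module that can be included in all my tests" part of the question, the same memoization can live in a shared module; a minimal sketch (SharedFixtures and load_fixture_data are made-up names, and the loading is stubbed with a literal hash):
module SharedFixtures
  def fixture_data
    # Class variable on the module, so it is shared by every test class
    # that includes SharedFixtures; the load runs only once.
    @@fixture_data ||= load_fixture_data
  end

  def load_fixture_data
    # Whatever expensive, read-only loading you need to do
    { answer: 11 }
  end
end

class My_Tests < Test::Unit::TestCase
  include SharedFixtures

  def test_uses_shared_data
    assert_equal(11, fixture_data[:answer])
  end
end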
