While running SimpleCov on macOS, the resulting coverage makes little sense.
If the following test is run:
rails test test/models/channel_test.rb
> 4 runs, 4 assertions, 0 failures, 0 errors, 0 skips
> Coverage report generated for Minitest to /Volumes/[...]/coverage. 0 / 0 LOC (100.0%) covered.
Yet when running rails test test/models, the graphical output indicates the following for test/models/channel_test.rb:
require "test_helper"

class ChannelTest < ActiveSupport::TestCase
  test "invalid if name not defined" do
    channel = Channel.new(priority: 1, unit_cost: 1, daily_limit: 9999)
    assert_not channel.valid?
    assert_not channel.save, "Saved the channel without a name"
  end
end
Update: I presumed the chosen syntax of the test might be the flaw, so I added a supplemental test, but the result for the model is still reported as 3 relevant lines, 0 lines covered and 3 lines missed:
class Channel < ApplicationRecord  # red
  validates :name, presence: true  # red
end                                # red
Thus the test passes, but the coverage result is confounding:
a) run standalone, the coverage count is 0/0, even though the tests pass
b) what constitutes a miss, or conversely coverage, by a (the?) test?
test_helper.rb
require 'simplecov'
SimpleCov.start

ENV['RAILS_ENV'] ||= 'test'
require_relative "../config/environment"
require "rails/test_help"
require 'webmock/minitest'

class ActiveSupport::TestCase
  parallelize(workers: :number_of_processors)
  fixtures :all

  def log_in_as(user, shop)
    post user_session_url, params: { user_id: user.id, active_shop_id: @site.shop_id }
  end
end
Update 2: As per @BroiSatse's suggestion, commenting out parallelize(workers: :number_of_processors) allows coverage to be measured.
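Rather than giving up parallel runs entirely, SimpleCov can be told to record one result per worker and merge them. A sketch for test_helper.rb, assuming Rails 6+ (which provides the parallelize_setup/parallelize_teardown hooks) and a recent SimpleCov; verify the hook names against your versions:

```ruby
require 'simplecov'
SimpleCov.start 'rails'

class ActiveSupport::TestCase
  parallelize(workers: :number_of_processors)

  # Give each worker its own command name so results don't clobber each other
  parallelize_setup do |worker|
    SimpleCov.command_name "#{SimpleCov.command_name}-#{worker}"
  end

  # Write this worker's result on shutdown so SimpleCov can merge them
  parallelize_teardown do |worker|
    SimpleCov.result
  end
end
```

With this in place the per-worker results are merged into a single coverage report instead of the 0/0 LOC figure.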
Related
I am using a rake task to run tests written in Ruby.
The rake task:
require 'rake/testtask'

desc "Run tests on my ruby app"
Rake::TestTask.new do |t|
  t.libs << File.dirname(__FILE__)
  t.test_files = FileList['test*.rb']
  t.verbose = true
end
I would like to create a timeout so that if any test (or the entire suite) hangs a timeout exception will be thrown and the test will fail.
I tried to create a new task that would run the test task with a timeout:
desc "Run Tests with timeout"
task :run_tests do
  Timeout::timeout(200) do
    Rake::Task['test'].invoke
  end
end
The result was that a timeout was thrown, but the test continued to run.
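A likely cause: Rake::TestTask shells out to a separate ruby process, so the Timeout raises in the rake process while the child process keeps running. One way around this is to own the child process yourself and kill it when the budget is exceeded; a sketch (run_with_timeout is my own helper name, not a Rake API):

```ruby
require 'timeout'

# Spawn a command as a child process, wait for it with a timeout,
# and kill it if it runs too long. Returns true on clean success.
def run_with_timeout(command, seconds)
  pid = Process.spawn(*command)
  Timeout.timeout(seconds) { Process.wait(pid) }
  $?.success?
rescue Timeout::Error
  Process.kill('KILL', pid)
  Process.wait(pid) # reap the killed child
  false
end

# e.g. inside a rake task:
#   abort "tests timed out" unless run_with_timeout(%w[ruby -Itest my_test.rb], 200)
```

Unlike wrapping Rake::Task['test'].invoke in Timeout, this actually stops the tests, because the timeout and the kill both act on the process that is running them.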
I've been looking for something similar, and ended up writing this:
require 'timeout'

# Provides an individual timeout for every test.
#
# In general tests should run for less than 1s, so 5s is quite a generous timeout.
#
# Timeouts can be overridden per class (in rare cases where tests should take more time)
# by setting, for example, `self.test_timeout = 10 # s`
module TestTimeoutHelper
  def self.included(base)
    class << base
      attr_accessor :test_timeout
    end
    base.test_timeout = 5 # s
  end

  # This overrides the default Minitest behaviour of measuring time, adding a timeout :)
  # This helps with: (a) keeping tests fast :) (b) detecting infinite loops
  #
  # In general, however, benchmark tests should be used instead.
  # Timeout is quite unreliable, by the way, but it generally works.
  def time_it
    t0 = Minitest.clock_time
    Timeout.timeout(self.class.test_timeout, Timeout::Error, 'Test took too long (infinite loop?)') do
      yield
    end
  ensure
    self.time = Minitest.clock_time - t0
  end
end
This module should be included into either specific test cases or a shared base test case. It works with Minitest 5.x.
This code adds a timeout for the entire suite:
def self.suite
  mysuite = super
  def mysuite.run(*args)
    Timeout::timeout(600) do
      super
    end
  end
  mysuite
end
In Rails I can use the test keyword for my tests, which I find very attractive and a better choice than RSpec's verbosity.
Example:
class TestMyClass < ActionController::TestCase
  test 'one equals one' do
    assert 1 == 1
  end
end
At the moment I am creating a gem and I want to follow the same approach for my tests - using the test method. I tried inheriting from Minitest::Test and Test::Unit::TestCase, and the latter seems to work. However, I was under the impression that Rails uses Minitest. So does Minitest actually provide a test directive?
This works:
class TestMyClass < Test::Unit::TestCase
  test 'one equals one' do
    assert 1 == 1
  end
end
This gives me "wrong number of arguments for test":
class TestMyClass < Minitest::Test
  test 'one equals one' do
    assert 1 == 1
  end
end
No. Minitest runs ordinary methods with names starting with test_.
The test method in ActionController::TestCase is provided by Rails (via ActiveSupport::Testing::Declarative) and works as a simple wrapper around test_* methods. It converts this
test 'truish' do
  assert true
end
to this
def test_truish
  assert true
end
It also checks whether a body was given for the test; if it wasn't, it will show an error message.
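For a gem, the same macro can be re-created without pulling in Rails. A minimal sketch modeled on what ActiveSupport does (DeclarativeTests is my own module name, not a Minitest API):

```ruby
require 'minitest/autorun'

# Minimal re-creation of the Rails-style `test` macro for plain Minitest.
module DeclarativeTests
  def test(name, &block)
    method_name = "test_#{name.gsub(/\s+/, '_')}"
    raise "#{method_name} is already defined in #{self}" if method_defined?(method_name)

    if block
      define_method(method_name, &block)
    else
      # No body given: fail loudly instead of passing silently
      define_method(method_name) { flunk "No implementation provided for #{name}" }
    end
  end
end

class TestMyClass < Minitest::Test
  extend DeclarativeTests

  test 'one equals one' do
    assert 1 == 1
  end
end
```

Note the `extend` rather than `include`: `test` must become a class method so it can be called in the class body, exactly as in Rails.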
How can someone use a vanilla assert in RSpec?
require 'rspec'

describe MyTest do
  it 'tests that number 1 equals 1' do
    assert 1 == 1
  end
end
The error I get:
undefined method `assert' for
#<RSpec::ExampleGroups::Metadata::LoadFile:0x00000002b232a0>
Notice that I don't want to use assert_equal, eq, should, or other mumbo jumbo.
You can do this pretty easily:
require 'rspec/core'
require 'test/unit'

describe 'MyTest' do
  include Test::Unit::Assertions

  it 'tests that number 1 equals 1' do
    assert 1 == 1
  end
end
(if you want to be able to run the tests by doing ruby foo.rb then you'll need to require rspec/autorun too). This pulls in all of those assertions. If you really don't want any extra assertions, just define your own assert method that raises an exception when the test should fail.
Conversely you can easily use rspec's expectation syntax outside of rspec by requiring rspec/expectations - rspec3 is designed to be modular.
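If you go the define-your-own route, here is a dependency-free sketch (BareAssert and its AssertionError are my own names, not an RSpec or Test::Unit API):

```ruby
# A bare-bones assert with no Test::Unit dependency.
module BareAssert
  class AssertionError < StandardError; end

  def assert(condition, message = 'assertion failed')
    raise AssertionError, message unless condition
  end
end

# In spec_helper.rb, make it available in every example group:
#   RSpec.configure { |config| config.include BareAssert }
```

Because the failure is raised as an ordinary exception, RSpec reports it as a failed example with the message you supplied.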
Configure RSpec to use MiniTest
RSpec.configure do |rspec|
  rspec.expect_with :stdlib
end
Then you can use all the asserts offered by MiniTest from the standard library. (On RSpec 3 the equivalent setting is expect_with :minitest.)
...or Wrong
Alternatively, you can use Wrong if you like asserts with a block:
require 'wrong'

RSpec.configure do |rspec|
  rspec.expect_with Wrong
end

describe Set do
  specify "adding using the << operator" do
    set = Set.new
    set << 3 << 4
    assert { set.include?(3) }
  end
end
Inspired by this blog article on RSpec.info.
I am new to Watir and I am using test/unit for assertions.
My scripts look like this:
Script1 -- has a test method which calls Script2
Script2 -- does all the work and validation; this has all the assertions
When I run my test case I run Script1; it runs successfully, but the result shows 1 tests, 0 assertions, 0 failures, 0 errors, 0 skips.
Here is my code:
This is in my first file
require_relative 'RubyDriver'
require 'test/unit'

class RubyTest < Test::Unit::TestCase
  def test_method
    driver = RubyDriver.new("/home/pratik/study/UIAutomation/WatirScript.xlsx")
    driver.call_driver
  end
end
And this is part of another file:
require_relative 'ExcelReader'
require_relative 'RubyUtility'
require "watir-webdriver"
require 'test/unit'

class RubyDriver < Test::Unit::TestCase
  def take_action
    value_property = @rubyutil.get_property("#{value}")
    if value_property
      value_prop = value_property.first.strip
      value_value = value_property.last.strip
    end
    case "#{@rubyutil.get_string_upcase("#{keyword}")}"
    when "VERIFY"
      puts "verifying"
      puts value_prop
      puts value_value
      object.wait_until_present
      object.flash
      if value_prop == "text"
        assert_equal(object.text, value_value, "Text does not match")
        # puts object.text.should == value_value
      elsif value_prop == "exist"
        value_boolean = value_value == "true" ? true : false
        assert_equal(object.exists?, value_boolean, "Object does not exist")
        # puts object.exists?.should == value_value
      end
Everything is working fine except the report, which shows:
1 tests, 0 assertions, 0 failures, 0 errors, 0 skips.
Where is my number of assertions? Any help, please.
The problem is that you are calling your assertions within an instance of another class. If I recall correctly, assert increments the assertion count within its class instance. Therefore your assertion count is being incremented in your RubyDriver instance rather than the RubyTest instance. As a result, you get no assertions reported.
You need to do the assertions within the actual test case (ie test_method of RubyTest) that is being run by test/unit.
As an example, you could make RubyDriver include your driving logic and logic for retrieving values to test. RubyTest would then call RubyDriver to setup/get values and include your test logic.
class RubyTest < Test::Unit::TestCase
  def test_method
    # Do some setup using a RubyDriver instance
    driver = RubyDriver.new("/home/pratik/study/UIAutomation/WatirScript.xlsx")
    driver.call_driver

    # Use RubyDriver to get some value you want to test
    some_value_to_test = driver.get_value_to_test

    # Do the assertion of the value within RubyTest
    assert_equal(true, some_value_to_test, "Object does not exist")
  end
end
An alternative solution might be to pass the test case (RubyTest) to RubyDriver. Then have RubyDriver call the assertion methods using the RubyTest instance.
Here is a simplified working example where you can see that your assertion count is correctly updated. Note that the RubyTest instance is passed to RubyDriver and stored in the @testcase variable. All assertions are then run in the context of @testcase - e.g. @testcase.assert(false) - which ensures that the original test case's assertion count is updated.
require 'test/unit'

class RubyDriver < Test::Unit::TestCase
  def initialize(file, testcase)
    @testcase = testcase
    super(file)
  end

  def action
    @testcase.assert(false)
  end
end

class RubyTest < Test::Unit::TestCase
  def test_method
    driver = RubyDriver.new("/home/pratik/study/UIAutomation/WatirScript.xlsx", self)
    driver.action
  end
end
I left the RubyDriver as a sub-class of Test::Unit::TestCase, though it seems a bit odd unless you also have actual tests in the RubyDriver.
I'm writing some ruby (not Rails) and using test/unit with shoulda to write tests.
Are there any gems that'll allow me to implement traceability from my tests back to designs/requirements?
i.e.: I want to tag my tests with the name of the requirements they test, and then generate reports of requirements that aren't tested or have failing tests, etc.
Hopefully that's not too enterprisey for ruby.
Thanks!
Update: This solution is available as a gem: http://rubygems.org/gems/test4requirements
Are there any gems that'll allow me to implement traceability from my
tests back to designs/requirements?
I don't know of any gem, but your need inspired a little experiment in how it could be solved.
You define your requirements with RequirementList.new(1, 2, 3, 4)
These requirements can be attached to a TestCase with requirements
Each test can be assigned to a requirement with requirement
After the test results you get an overview of which requirements were tested (successfully)
And now the example:
gem 'test-unit'
require 'test/unit'

###########
# This should be a gem
###########
class Test::Unit::TestCase
  def self.requirements(req)
    @@requirements = req
  end

  def requirement(req)
    raise RuntimeError, "No requirements defined for #{self}" unless defined? @@requirements
    caller.first =~ /:\d+:in `(.*)'/
    @@requirements.add_test(req, "#{self.class}##{$1}")
  end

  alias :run_test_old :run_test
  def run_test
    run_test_old
    # this code is not reached if a problem occurred.
    # in other words: if we reach this place, then the test was successful
    if defined? @@requirements
      @@requirements.test_successful("#{self.class}##{@method_name}")
    end
  end
end

class RequirementList
  def initialize(*reqs)
    @requirements = reqs
    @tests = {}
    @success = {}
    # Yes, we need two at_exit.
    # Tests are also run at_exit. With a double at_exit, we run after them.
    # Maybe better to be added later.
    at_exit {
      at_exit do
        self.overview
      end
    }
  end

  def add_test(key, loc)
    # fixme: check duplicates
    @tests[key] = loc
  end

  def test_successful(loc)
    # fixme: check duplicates
    @success[loc] = true
  end

  def overview
    puts "Requirements overview"
    @requirements.each { |req|
      if @tests[req] # test defined
        if @success[@tests[req]]
          puts "Requirement #{req} was tested in #{@tests[req]}"
        else
          puts "Requirement #{req} was unsuccessfully tested in #{@tests[req]}"
        end
      else
        puts "Requirement #{req} was not tested"
      end
    }
  end
end # RequirementList
###############
## Here the gem ends. The test will come.
###############
$req = RequirementList.new(1, 2, 3, 4)

class MyTest < Test::Unit::TestCase
  # The following requirements exist, and must be tested successfully
  requirements $req

  def test_1
    requirement(1) # this test is testing requirement 1
    assert_equal(2, 1 + 1)
  end

  def test_2
    requirement(2)
    assert_equal(3, 1 + 1)
  end

  def test_3
    # no assignment to requirement 3
    pend 'pend'
  end
end

class MyTest_4 < Test::Unit::TestCase
  # The following requirements exist, and must be tested successfully
  requirements $req

  def test_4
    requirement(4) # this test is testing requirement 4
    assert_equal(2, 1 + 1)
  end
end
the result:
Loaded suite testing_traceability_solutions
Started
.FP.
1) Failure:
test_2(MyTest)
[testing_traceability_solutions.rb:89:in `test_2'
testing_traceability_solutions.rb:24:in `run_test']:
<3> expected but was
<2>.
2) Pending: pend
test_3(MyTest)
testing_traceability_solutions.rb:92:in `test_3'
testing_traceability_solutions.rb:24:in `run_test'
Finished in 0.65625 seconds.
4 tests, 3 assertions, 1 failures, 0 errors, 1 pendings, 0 omissions, 0 notifications
50% passed
Requirements overview:
Requirement 1 was tested in MyTest#test_1
Requirement 2 was unsuccessfully tested in MyTest#test_2
Requirement 3 was not tested
Requirement 4 was tested in MyTest_4#test_4
If you think this could be a solution for you, please give me feedback. Then I will try to build a gem out of it.
Code example for usage with shoulda:
#~ require 'test4requirements' ### does not exist / use code above
require 'shoulda'

# use another interface ## not implemented ###
#~ $req = Requirement.new_from_file('requirements.txt')

class MyTest_shoulda < Test::Unit::TestCase
  # The following requirements exist, and must be tested successfully
  #~ requirements $req

  context 'req. of customer X' do
    # Add requirement as a parameter of should
    # does not work yet
    should 'fulfill requirement 1', requirement: 1 do
      assert_equal(2, 1 + 1)
    end

    # add requirement via the requirement command
    # works already
    should 'fulfill requirement 1' do
      requirement(1) # this test is testing requirement 1
      assert_equal(2, 1 + 1)
    end
  end # context
end # MyTest_shoulda
With Cucumber you can have your requirement be the test; it doesn't get any more traceable than that :)
So a single requirement is a feature, and a feature has scenarios that you want to test.
# addition.feature
Feature: Addition
  In order to avoid silly mistakes
  As a math idiot
  I want to be told the sum of two numbers

  Scenario Outline: Add two numbers
    Given I have entered <input_1> into the calculator
    And I have entered <input_2> into the calculator
    When I press <button>
    Then the result should be <output> on the screen

    Examples:
      | input_1 | input_2 | button | output |
      | 20      | 30      | add    | 50     |
      | 2       | 5       | add    | 7      |
      | 0       | 40      | add    | 40     |
Then you have step definitions, written in Ruby, mapped to the scenarios:
# step_definitions/calculator_steps.rb
begin require 'rspec/expectations'; rescue LoadError; require 'spec/expectations'; end
require 'cucumber/formatter/unicode'
$:.unshift(File.dirname(__FILE__) + '/../../lib')
require 'calculator'

Before do
  @calc = Calculator.new
end

After do
end

Given /I have entered (\d+) into the calculator/ do |n|
  @calc.push n.to_i
end

When /I press (\w+)/ do |op|
  @result = @calc.send op
end

Then /the result should be (.*) on the screen/ do |result|
  @result.should == result.to_f
end